\section{Introduction}
We study inference on scaling parameters of a conditionally Gaussian process under discrete noisy observations over a fixed time interval. Many questions remain open in the field of covariance estimation for Gaussian processes under high-frequency asymptotics. Existing results reveal surprising phenomena, such as unusual convergence rates and the unexpected appearance of parameters in the asymptotic covariance of estimators, which call for a better understanding of how the underlying signal process drives the asymptotic quantities of interest. In particular, the multidimensional interplay of estimation targets encumbers the understanding of central objects such as the asymptotic information. Moreover, for covariance operators that depend on high-dimensional or possibly even infinite-dimensional parameters, the mathematical analysis is far from trivial.
Gaussian processes constitute a versatile class with a wide range of applications. Finance is a major field of interest in practice, where models driven by Brownian motion are typically considered. Fractional processes offer a more controversial modelling approach, cf. \citet{Rogers[1997]}, but are highly relevant in, for example, geophysics and biomechanics, cf. \citet{Mandelbrot[1970]} and \citet{Bardet[2007]}. Integrated Gaussian processes are used in physics and biology, e.g. for modelling particles, cf. \citet{Tory[2000]}, or in the meteorological literature, cf. \citet{Boughton[1987]}. The increasing use of sophisticated Gaussian processes, such as multifractional Brownian motions, cf. \citet{Bianchi[2013]}, calls for a general understanding of lower and upper bounds, at least in benchmark cases.
As mentioned above, conditionally Gaussian models play a major role in finance, where inference is commonly performed conditionally on the underlying volatility process, cf. \citet{Mykland[2012]} for a general framework. A fundamental estimation problem is the extraction of the quadratic covariation (or integrated covolatility) of a continuous martingale driven by a Brownian motion under microstructure noise. Some works even consider application-driven generalisations, such as asynchronous and irregular (non-equidistant) observation schemes with varying sample sizes. Several prominent approaches exist, e.g. \citet{Zhang[2005]}, \citet{Jacod[2009]}, \citet{Barndorff[2011]}, \citet{Bibinger[2014]}, \citet{Hayashi[2005]} and \citet{Christensen[2013]}, with limiting behaviours that vary with the employed estimation techniques. These variations make a comparison of the existing approaches difficult. Moreover, and importantly, the asymptotic lower bounds are not yet completely understood, even under regular observation schemes. The reason is that the underlying statistical properties of these models are mathematically highly involved, as the results on efficiency in the literature show.
Notable works exist in the one-dimensional setting, for a parametric set-up by \citet{Gloter[2001]} and in a semi-parametric case by \citet{Reiss[2011]}, whose results are based on the verification of local asymptotic normality (LAN) and use sophisticated arguments such as asymptotic equivalences of experiments. An interesting finding in both cases, parametric and semi-parametric, is that due to the noise the optimal rate is of the unusual order $n^{-1/4}$. A multidimensional extension of these results is the semi-parametric Cram\'er-Rao lower bound derived by \citet{Bibinger[2014]}. As the latter is provided under rather strong assumptions for synchronous and regular finite samples, in which non-parametric estimators are biased, an asymptotic characterisation of efficiency under asynchronicity is required. Moreover, \citet{Ogihara[2018]} derives asymptotic lower bounds for $d=2$.
Little is known about efficient estimation if the assumption that the signal is driven by a Brownian motion is dropped. The one-dimensional Cram\'er-Rao bound derived by \citet{Sabel[2014]} is noteworthy, where the signal is given by a fractional Brownian motion. However, an asymptotic and particularly multidimensional lower bound and its dependence on the Hurst parameter remain an open question.
\newpage
Estimation of scaling parameters of Gaussian processes under noise also attracts interest in other fields. Related models appear in nonparametric Bayesian problems, where Gaussian process priors subject to an unknown parameter (hyperparameter) are used, cf. \citet{Szabo[2013]}. The difference in their setting lies in the asymptotic behaviour of the scaling parameter itself, whose estimation is carried out pathwise. Latent variance estimation can also be found in genetics, e.g. \citet{Verzelen[2018]}, where the task of estimating heritability bears structural similarities to the problems in this work.
The aim of this paper is to provide a general asymptotic theory for Gaussian covariance estimation models. In the following Section~\ref{SecMainResults} the fundamental parametric model is introduced, in which the superposition of a scaled multivariate Gaussian process with additive errors is observed in equation \eqref{DiscPar}. A main contribution of this paper is the universal Convolution Theorem~\ref{ThmConvPara}, which gives a precise asymptotic characterisation of efficient estimation and includes the set-ups of \citet{Gloter[2001]} and \citet{Sabel[2014]} as special cases, but also applies to further models of practical relevance, given as examples below. Even though an idealised parametric model might not be directly utilisable for practical purposes, its asymptotic lower bounds provide a basic-case benchmark for comparing estimation procedures in more general models. Moreover, the insight gained in the fundamental model can be transferred to far more complex models. This resembles the approach by which the second main result, Theorem~\ref{ThmConvNonpara}, a semi-parametric convolution theorem for estimating the integrated covolatility matrix, is derived. This result not only extends the set-up of \citet{Reiss[2011]} by multidimensionality and asynchronicity, but also weakens the smoothness assumptions to Sobolev regularity $\beta>1/2$.
The following section gives an overview of the main results along with their proof techniques, imposed assumptions and examples. Section~\ref{SecParametric} contains the parametric analysis, particularly the verification of Theorem~\ref{ThmConvPara}. The construction of efficient estimators is followed by additional asymptotic equivalences that provide further insight into the estimation problem. Section~\ref{SecSemiPara} concludes this work with the stepwise deduction of Theorem~\ref{ThmConvNonpara}. Most of the proofs and reviews of several mathematical concepts can be found in the \hyperref[Appendix]{Appendix}.
\newpage
\section{Methodology and main results}\label{SecMainResults}
\subsection{\textbf{Notation}}
We introduce spaces of matrix-valued functions as they appear as canonical parameter sets. For $A,B\in\mathbb R^{v\times w}$ and $C\in\mathbb R^{vw\times vw}$, let
\[\langle A,B\rangle_C:=\text{vec}(A)^{\top}C\text{vec}(B),\]
and set $\langle\cdot,\cdot\rangle:=\langle\cdot,\cdot\rangle_{I_{vw}}$, where $\text{vec}(A)\in\mathbb R^{vw}$ is the vectorisation of $A$ and $I_k$ denotes the identity matrix in $\mathbb R^{k\times k}$. Denote the corresponding induced norms by $\|\cdot\|_C$ and $\|\cdot\|$, given that $C>0$, i.e., if $C$ is positive-definite. Note that $\|\cdot\|$ is just the Hilbert-Schmidt norm.
Further, for $u\in\mathbb N$, $\Omega:=[0,1]$ and $f,g:\Omega^u\to\mathbb R^{v\times w}$, let the inner product
\[\langle f,g\rangle_{L^2}:=\int_{\Omega^u}\langle f(t),g(t)\rangle dt\]
induce the norm $\|\cdot\|_{L^2}$ and the space $L^2=L^2(\Omega^u,\mathbb R^{v\times w})$. For $\beta\in(0,2)$ the $L^2$-subspace $H^{\beta}=H^{\beta}(\Omega^u,\mathbb R^{v\times w})$ consists of all $f:\Omega^u\to\mathbb R^{v\times w}$ such that
\[\|f\|_{H^{\beta}}:=\sum_{k:|k|<\beta}\|f^{(k)}\|_{L^2}+|f|_{H^{\beta}}<\infty.\]
Here $|\cdot|_{H^{\beta}}$ denotes the Sobolev-Slobodeckij semi-norm given for $\beta\neq1$ by
\[|f|^2_{H^{\beta}}:=\sup_{k:|k|=\lfloor\beta\rfloor}\int_{\Omega^u}\int_{\Omega^u}\frac{\|f^{(k)}(x)-f^{(k)}(y)\|^2}{|x-y|^{2(\beta-\lfloor\beta\rfloor)+u}}dxdy,\]
where $\lfloor \beta\rfloor$ denotes the integer part of $\beta$, and by $\sum_{k:|k|=1}\|f^{(k)}\|^2_{L^2}$ otherwise, where $k\in\{0,1\}^u$ denotes a multiindex with $|k|=\sum^u_{i=1}k_i$. For $u=1$ we often write $f':=f^{(1)}$. Within $H^{\beta}$ the ball of radius $L>0$ is defined via
\[H^{\beta}_L:=\{f\in H^{\beta}:\|f\|_{H^{\beta}}\leq L\}.\]
For $\gamma\in(0,1]$ and $N>0$, H\"older balls are given by
\[C^{\gamma}_N:=\{f:\Omega\to\mathbb R:\sup\nolimits_{s,t\in\Omega,\,s\neq t}|f(s)-f(t)|/|s-t|^{\gamma}\leq N\}.\]
Symmetric co-domains $\mathbb R^{d\times d}_{\text{sym}}:=\{A\in\mathbb R^{d\times d}:A=A^{\top}\}$ are highlighted by the notation $L^2_{\text{sym}}:=L^2(\Omega^u,\mathbb R^{d\times d}_{\text{sym}})$ and $H^{\beta}_{\text{sym}}:=H^{\beta}(\Omega^u,\mathbb R^{d\times d}_{\text{sym}})$. It is a basic fact that if $\beta>u/2$, then any $f\in H^{\beta}(\Omega^u,\mathbb R^{v\times w})$ admits a continuous version after possibly modifying $f$ on a null subset of $\Omega^u$. An overview of Sobolev spaces and their embedding properties with respect to H\"older spaces can be found in \citet{Triebel[2010]}.
For $Z\sim\mathcal N(0,I_d)$ the matrix $\mathcal Z=\text{Cov}(\text{vec}(ZZ^{\top}))$ is twice the so-called symmetriser matrix, i.e., it has the property $\mathcal Z\text{vec}(A)=\text{vec}(A+A^{\top})$, $A\in\mathbb R^{d\times d}$, see e.g. \citet{Abadir[2005]}. Any $d^2\times d^2$-matrix $A\otimes A$ commutes with $\mathcal Z$. Moreover, $\mathcal Z$ is positive semi-definite and therefore not invertible.
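To fix ideas (a standard identity, recorded here for orientation): in terms of the commutation matrix $K_d\in\mathbb R^{d^2\times d^2}$, defined by $K_d\text{vec}(A)=\text{vec}(A^{\top})$, one has
\[\mathcal Z=I_{d^2}+K_d,\]
so that $\mathcal Z$ acts as multiplication by $2$ on vectorised symmetric matrices and annihilates vectorised antisymmetric ones; in particular $\mathcal Z\text{vec}(H)=2\text{vec}(H)$ for $H\in\mathbb R^{d\times d}_{\text{sym}}$, a property used repeatedly below.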
For sequences $(A_n)_{n\geq1}$ and $(B_n)_{n\geq1}$ in $\mathbb R^{d\times d}$ the expression $A_n\lesssim B_n$ means $\|A_n\|=\mathcal O(\|B_n\|)$, and $A_n\sim B_n$ means $A_n\lesssim B_n$ as well as $B_n\lesssim A_n$.
Finally, for a set of parameters $\Theta$ the Le Cam distance between two statistical experiments $\mathcal E=\{(X,\mathcal X,P_{\theta}):\theta\in\Theta\}$ and $\mathcal F=\{(Y,\mathcal Y, Q_{\theta}):\theta\in\Theta\}$ on Polish spaces is given by $\Delta(\mathcal E,\mathcal F):=\max\{\delta(\mathcal E,\mathcal F),\delta(\mathcal F,\mathcal E)\}$. Here $\delta$ denotes the one-sided deficiency
\[\delta(\mathcal E,\mathcal F):=\inf_K\sup_{\theta\in\Theta}\|K\cdot P_{\theta}-Q_{\theta}\|_{\text{TV}},\]
where the infimum is taken over all Markov kernels from $(X,\mathcal X)$ to $(Y,\mathcal Y)$ and $\|\cdot\|_{\text{TV}}$ denotes the total variation norm. Sequences $(\mathcal E_n)_{n\geq1}$ and $(\mathcal F_n)_{n\geq1}$ of experiments are called asymptotically equivalent if $\Delta(\mathcal E_n,\mathcal F_n)=o(1)$. The latter implies that asymptotic properties transfer from one model to the other, and vice versa. Properties of $\Delta$ can be found in Appendix~\ref{SsecLeCamDistance} and \ref{SsecWeakLanReg}, see also \cite{LeCam[2000]} for a thorough introduction.
\subsection{\textbf{Fundamental parametric model}}
Consider the $d$-dimensional discrete observation model generated by the observations
\begin{equation}\label{DiscPar}
\tilde Y_i=\Sigma^{1/2}G_{i/n}+\varepsilon_i,\quad i=1,\ldots,n,
\end{equation}
where $G=(G_t)_{t\in[0,1]}$ is such that $G\sim\mathcal N^{\otimes d}_{0,\Gamma}$, for a centred Gaussian measure $\mathcal N_{0,\Gamma}$ on $L^2([0,1],\mathbb R)$ with covariance operator $\Gamma$. Assume that $G$ is independent of the i.i.d. errors $\varepsilon_1,\ldots,\varepsilon_n\sim\mathcal N(0,\eta^2I_d)$. The noise level $\eta>0$ is a nuisance parameter, whereas $\Sigma$ is the parameter of interest subject to
\begin{equation}\label{Parspace}
\Theta_0:=\{\Sigma\in\mathbb R^{d\times d}_+:0<\Sigma<SI_d\},
\end{equation}
where $S>0$. Here $\mathbb R^{d\times d}_+$ denotes all positive-definite $\mathbb R^{d\times d}$-matrices and the ordering $\Sigma<SI_d$ is meant with respect to positive definiteness.
Important tools paving the way to asymptotic lower bounds in the present work are several asymptotic equivalences in Le Cam's sense. In order to obtain a mathematically more convenient working basis, consider the spectral analogue of \eqref{DiscPar} given by
\begin{equation}\label{SeqPar}
Y_p\sim\mathcal N(0,C_p),\quad C_p:=\Sigma\lambda_p+\frac{\eta^2}{n}I_d,\quad p\geq1.
\end{equation}
The sequence $\lambda=(\lambda_p)_{p\geq1}$ denotes the eigenvalues of the covariance operator $\Gamma$. The approximation error between the models \eqref{DiscPar} and \eqref{SeqPar} is quantifiable by the Le Cam $\Delta$-distance, which is negligible under the following regularity assumption, cf. Proposition~\ref{PropFDiscSeq} below.
\begin{assumption}\label{AssSoboCov}\textbf{-}$\pmb{G(\beta).}$ The function $(s,t)\mapsto\text{Cov}(G_s,G_t),\ s,t\in[0,1]$, lies in $H^{\beta}$ for some $\beta\in(1,2)$.
\end{assumption}
As an important consequence of asymptotic equivalence, LAN-expansions and convolution theorems in \eqref{DiscPar} and \eqref{SeqPar} coincide. However, as there are infinitely many non-identically distributed vectors $Y_p$ in \eqref{SeqPar}, it is not clear at all whether a LAN-expansion holds, since the sum of infinitely many remainder terms needs to be controlled. For the latter it will be crucial that the behaviour of certain subsequences $(\lambda_{p_n})_{n\geq1}$ carries over to the entire sequence $(\lambda_p)_{p\geq1}$, which can be ensured under the following assumption.
\begin{assumption}\label{regvarass}\textbf{-}$\pmb{\lambda(\delta).}$ The eigenvalues $\lambda=(\lambda_p)_{p\geq1}$ of $\Gamma$ are strictly positive and regularly varying at infinity with index $-\delta,\ \delta>1$, i.e.,
\begin{equation}\label{regvar}
\lim_{p\to\infty}\frac{\lambda_{\lfloor ap\rfloor}}{\lambda_p}=a^{-\delta},\ \forall a>0.
\end{equation}
\end{assumption}
If $P^n_{\Sigma}$ denotes the measure induced by \eqref{SeqPar} then Assumption~\ref{regvarass}-$\lambda(\delta)$ ensures that a certain LAN-expansion holds, i.e., for $H\in\R^{d\times d}_{\text{sym}}$ one has
\[\log\frac{dP^n_{\Sigma+r_nH}}{dP^n_{\Sigma}}\overset{P^n_{\Sigma}}{\to}\Delta_H-\frac{1}{2}\|H\|^2_{\mathcal I(\Sigma)\mathcal Z},\]
where $\Delta_H\sim\mathcal N(0,\|H\|^2_{\mathcal I(\Sigma)\mathcal Z})$ and $\mathcal I(\Sigma)\mathcal Z\in\mathbb R^{d^2\times d^2}$ is the asymptotic Fisher information matrix, cf. Proposition~\ref{LAN_gen}. The rate $r_n\to0$ is obtained by
\[\lim_{n\to\infty}n\lambda_{\lfloor r^{-2}_n\rfloor}=c,\]
where $c>0$ is chosen such that $r_n$ is normalised with respect to multiplicative scalars, e.g. $r_n=n^{-1/4}$ but not $r_n=2n^{-1/4}$. Thus a slow decay of $\lambda$ implies a fast decay of $r_n$, and vice versa. Since the Fisher information $\mathcal I(\Sigma)\mathcal Z$ is singular it is not obvious how classical implications from LAN-theory, e.g. a convolution theorem, can be obtained. This problem is overcome by symmetrising properties of $\mathcal Z$ which allow for certain isometries, cf. Remark~\ref{RmkZZZ} below. In a non-noisy set-up \citet{Brouste[2018]} recently derived asymptotic lower bounds despite singularity by usage of certain rate matrices. For a further discussion of $r_n$ and $\mathcal I(\Sigma)$ see Section~\ref{efficiency}.
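As an illustration (a direct computation under the additional assumption $\lambda_p=c_0p^{-\delta}(1+o(1))$ for some $c_0>0$, which covers all examples below): inserting the leading term into the defining relation gives
\[n\lambda_{\lfloor r^{-2}_n\rfloor}\approx nc_0\,r^{2\delta}_n\to c\quad\Longrightarrow\quad r_n\sim(c/c_0)^{1/(2\delta)}\,n^{-1/(2\delta)},\]
so that the normalised rate is $r_n=n^{-1/(2\delta)}$; for Brownian motion ($\delta=2$) this recovers the familiar rate $r_n=n^{-1/4}$.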
\subsection{\textbf{Parametric main result}} Let $\psi(\Sigma)\in\mathbb R^k$ be a differentiable target of estimation in the sense that there is some $\nabla\psi_{\Sigma}\in\mathbb R^{k\times d^2}$ such that
\begin{equation}\label{reg_esti}
r^{-1}_n(\psi(\Sigma+r_nH)-\psi(\Sigma))\to\nabla\psi_{\Sigma}\text{vec}(H),\quad H\in\R^{d\times d}_{\text{sym}},
\end{equation}
as $n\to\infty$. In the following, sequences of so-called regular estimators $\hat\vartheta_n$ of $\psi(\Sigma)$ are considered, cf. Appendix~\ref{SsecWeakLanReg} for a definition.
\begin{theorem}\label{ThmConvPara}
Let $\hat\vartheta_n$ be a sequence of regular estimators of $\psi(\Sigma)\in\mathbb R^k$ with \eqref{reg_esti} and suppose that Assumptions~\ref{AssSoboCov}-$G(\beta)$ and \ref{regvarass}-$\lambda(\delta)$ are met. Then under $P^n_{\Sigma+r_nH},\ H\in\R^{d\times d}_{\text{sym}}$, and as $n\to\infty$ it holds that
\[r^{-1}_n(\hat\vartheta_n-\psi(\Sigma+r_nH))\overset{d}{\to}\mathcal N\left(0,\tfrac{1}{4}\nabla\psi^{\top}_{\Sigma}\mathcal I(\Sigma)^{-1}\mathcal Z\nabla\psi_{\Sigma}\right)\ast R,\]
for some distribution $R$.
\end{theorem}
The deduction of the above result offers a comprehensive understanding of how efficient estimation, particularly the optimal estimation rate $r_n$ and the geometry of the Fisher information matrix, depends on the spectral properties of the signal. Moreover, Theorem~\ref{ThmConvPara} extends the knowledge of asymptotic lower bounds in a few one-dimensional models to a general class of underlying multidimensional Gaussian processes. It is noted that only the leading term of $(\lambda_p)_{p\geq1}$ has to be known for the derivation of lower bounds.
As mentioned before, several estimators have been designed for particular Gaussian models. In this work a universal estimation approach is given by
\[\hat\vartheta^{\text{ad}}_n:=\sum_{p\in\pi_n}W_p\lambda^{-1}_p\text{vec}(Y_pY^{\top}_p-\eta^2/n I_d),\]
where $\pi_n\subsetneq\mathbb N$ and $W_p\in\mathbb R^{d^2\times d^2}$ are adaptive weights. A spectral approach has already been used, e.g. by \citet{Bibinger[2014]}, for a covariation estimator, where martingale properties inherited from the Brownian motion are a key argument. By contrast, constructing $W_p$ independently of $(Y_p)_{p\in\pi_n}$ is the crucial idea of this work, which yields generality and gives
\[r^{-1}_n(\hat\vartheta^{\text{ad}}_n-\psi(\Sigma+r_nH))\overset{d}{\to}\mathcal N\left(0,\tfrac{1}{4}\nabla\psi^{\top}_{\Sigma}\mathcal I(\Sigma)^{-1}\mathcal Z\nabla\psi_{\Sigma}\right),\]
under $P^n_{\Sigma+r_nH}$, for any $H\in\R^{d\times d}_{\text{sym}}$, cf. Theorem~\ref{ThmOrAd}. The matching upper bounds imply that the derived lower bounds from Theorem~\ref{ThmConvPara} are sharp.
\begin{remark}\label{RmkDepNoise}
If the model is generalised to non-diagonal noise $\varepsilon_1,\ldots,\varepsilon_n\sim\mathcal N(0,H)$ with $H\in\mathbb R^{d\times d}_+$ known, then lower and upper bounds can be derived in the same way if the transformed observations $\tilde Y'_i:=H^{-1/2}\tilde Y_i$, $i=1,\ldots,n$, are used. In particular, $\Sigma$ in $\mathcal I(\Sigma)$ has to be replaced by $H^{-1/2}\Sigma H^{-1/2}$ and $\eta^2$ is set to $1$.
\end{remark}
\begin{remark}\label{RmkWeakDep}
Another possible extension is given by weakly dependent noise. Consider stationary $m$-dependent noise, i.e., $\mathbb E[\varepsilon_i\varepsilon_{i+j}]=\eta_j$ with $\eta_j=0,\ j>m$, as used in high-frequency statistics, e.g. by \citet{Hautsch[2013]}. With $\eta'_n:=\text{Var}(n^{-1/2}\sum^n_{i=1}\varepsilon_i)=\eta_0+2\sum^m_{j=1}\frac{n-j}{n}\eta_j$, a `big-block-small-block' argument gives rise to the desired connection between the discrete and the sequence space model, in the sense that $\eta^2$ in \eqref{SeqPar} should be replaced with $\lim_{n\to\infty}\eta'_n$, and the theory provided by this work can be applied. However, this requires further assumptions on $\beta$, $\gamma$ and $m$ and is therefore omitted.
\end{remark}
\begin{remark}\label{RmkSigmaRandom}
The techniques of this work can also be carried out if $\Sigma$ is random but $G$ given $\Sigma$ is still Gaussian. A conditional convolution theorem is then obtained if Assumption H0 (which replaces the usage of Le Cam's third Lemma) of the general result by \citet{Clement[2013]} is met. Again, precise derivations are omitted.
\end{remark}
\begin{example}\label{ExBM}
If $G$ denotes a $d$-dimensional Brownian motion, then $\lambda^{\text{BM}}_p=(\pi(p-1/2))^{-2}$, i.e., Assumption~\ref{regvarass} holds with $\delta=2$. Then efficient regular estimators $\hat\vartheta_n$ of $\vartheta=\text{vec}(\Sigma)$ satisfy (cf. Theorem~\ref{ThmRateFisher} below)
\begin{equation}\label{clt_bm}
n^{1/4}(\hat\vartheta_n-\vartheta)\overset{d}{\to}\mathcal N(0,2\eta(\Sigma\otimes\Sigma^{1/2}+\Sigma^{1/2}\otimes\Sigma)\mathcal Z).
\end{equation}
For $d=1$ this result coincides with \citet{Gloter[2001]} and for $d\geq 1$, \eqref{clt_bm} extends asymptotically the Cram\'er-Rao bound of \citet{Bibinger[2014]}.
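To make the comparison explicit (a direct evaluation of \eqref{clt_bm}, using that $\mathcal Z=2$ for $d=1$): with $\Sigma=\sigma^2$ the asymptotic variance becomes
\[2\eta(\Sigma\otimes\Sigma^{1/2}+\Sigma^{1/2}\otimes\Sigma)\mathcal Z=2\eta(\sigma^2\sigma+\sigma\sigma^2)\cdot2=8\eta\sigma^3,\]
the optimal variance for estimating $\sigma^2$ at rate $n^{-1/4}$.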
\end{example}
\begin{example}\label{ExfBM}
If $G$ is a fractional Brownian motion with Hurst exponent $H\in(0,1)$, then, by \citet{Chigansky[2018]}, the corresponding eigenvalues satisfy \eqref{regvar} with $\delta=2H+1$:
\[\lambda^{\text{fBM}}_p=\frac{\sin(H\pi)\Gamma(2H+1)}{(\pi p)^{2H+1}}+o(p^{-(2H+1)}),\quad p\geq1.\]
Precise asymptotic lower bounds have only been known for $d=1$ in a non-noisy setting, cf. \citet{Brouste[2018]}. In the multivariate noisy set-up Theorem~\ref{ThmConvPara} implies for $H>1/4$ that the rate of efficient estimators is $r_n=n^{-1/(4H+2)}$, where the restriction $H>1/4$ ensures Assumption~\ref{AssSoboCov}-$G(\beta)$. The optimal asymptotic covariance can easily be calculated by Theorem~\ref{ThmRateFisher} below. Note that the Cram\'er-Rao bound in \citet{Sabel[2014]} holds for any $H\in(0,1)$. Whether the models \eqref{DiscPar} and \eqref{SeqPar} can be separated for $H\leq 1/4$ lies beyond the scope of this paper.
\end{example}
\begin{example}
The eigenvalues $\lambda^{\text{BB}}_p=(\pi p)^{-2}$ corresponding to a Brownian bridge have the same leading term as $\lambda^{\text{BM}}_p$ in Example~\ref{ExBM}, hence \eqref{clt_bm} holds as well. Similarly, consider the (stationary) Ornstein-Uhlenbeck process
\[\Sigma^{1/2}G_t=\Sigma^{1/2}G_0e^{-\beta t}+\Sigma^{1/2}\int^t_0e^{-\beta(t-s)}dB_s,\quad t\in[0,1],\]
where $G_0\sim\mathcal N(0,(2\beta)^{-1}I_d),\ \beta>0$, and $B$ is a standard Brownian motion. Under the normalisation $\beta=1/2$ the eigenvalues $\lambda^{\text{OU}}_p=\frac{2\beta}{p^2\pi^2}+o(p^{-2})$ imply \eqref{clt_bm} as well. This means that neither mean-reversion nor the behaviour of bridges has an impact on the estimation of $\Sigma$. In fact, the three models corresponding to $\lambda^{\text{BM}}_p$, $\lambda^{\text{BB}}_p$ and $\lambda^{\text{OU}}_p$ are even asymptotically equivalent, cf. Proposition~\ref{PropEigLeCamEqui}.
Similarly a fractional Brownian bridge and a fractional Ornstein-Uhlenbeck process seem to offer the same asymptotics as $\lambda^{\text{fBM}}_p$, cf. the (yet unpublished) drafts by \citet{Chigansky[2017]} and \citet{Chigansky[2018b]}.
\end{example}
\begin{example}
For the $m$-fold integrated Brownian motion the eigenvalues satisfy $\lambda^{m\text{BM}}_p=(\pi p)^{-(2m+2)}+o(p^{-(2m+2)})$, cf. \citet{Wang[2008]}. This implies $r_n=n^{-1/(4m+4)}$, which reveals the interesting phenomenon that very smooth signal paths lead to rather poor estimation rates; cf. also Example~\ref{ExfBM}, where regularity increases in $H$ whereas the rate $r_n$ deteriorates.
\end{example}
\subsection{\textbf{Semi-parametric asynchronous model}}
On the basis of the parametric results, asymptotic lower bounds in the more sophisticated asynchronous observation model
\begin{equation}\label{DiscSemi}
Y_{i,j}=(X_{t_{i,j}})_j+\varepsilon_{i,j},\quad 1\leq i\leq n_j,\ 1\leq j\leq d,
\end{equation}
are derived, where $X_t=X_0+\int^t_0\Sigma^{1/2}(s)dB_s$ denotes a continuous martingale in terms of a $d$-dimensional standard Brownian motion $B=(B_t)_{t\in[0,1]}$. The noise variables $\varepsilon_{i,j}\sim\mathcal N(0,\eta^2_j),\ 1\leq i\leq n_j$, with $\eta_j>0$ known, $1\leq j\leq d$, are mutually independent and independent of the signal $X=(X_t)_{t\in[0,1]}$. Moreover, for the asymptotics suppose that $n_{\min}:=\min_{1\leq j\leq d}n_j\to\infty$ and that $n_{\min}/n_j\to\nu_j$ for some $\nu_j\in(0,1],\ j=1,\ldots,d$.
\begin{assumption}\label{AssSigma}\textbf{-}$\pmb{\Sigma(\beta,M,S).}$ For some $\beta>1/2$, $M>0$, and $S>1$ we assume that $\Sigma$ belongs to the parameter set
\[\Theta_1:=\left\{A:[0,1]\to\R^{d\times d}_{\text{sym}}\ \Big|\ A\in H^{\beta}_M,\ S^{-1}I_d<A(t)<SI_d\ \forall t\in[0,1]\right\}.\]
\end{assumption}
\begin{assumption}\label{AssRegF}\textbf{-}$\pmb{F(\gamma,N,\beta).}$ The observation times obey $t_{i,j}=F^{-1}_j(i/n_j)$ for a distribution function $F_j:[0,1]\to[0,1]$ with derivative $F'_j$ and
\begin{itemize}
\item[(i)] $F_j(0)=0$ and $F_j(1)=1$,
\item[(ii)] $F'_j\in C^{\gamma}_N$ and $F'_j>0$,
\end{itemize}
for $j=1,\ldots,d$, and some $\gamma\in(\beta,1],\ N>0$.
\end{assumption}
As in the parametric set-up, \eqref{DiscSemi} is approximated by a spectral representation, for which the conditions $\gamma>\beta>1/2$ and $\Sigma>S^{-1}I_d$ are needed. The latter is slightly restrictive but not uncommon, cf. \citet{Reiss[2011]}. The spectral representation is given by the mutually independent random vectors
\begin{equation}\label{SeqSemi}
Y_{pk}\sim\mathcal N(0,C_{pk}),\quad k=0,\ldots,m-1,\ p\geq 1,
\end{equation}
where $C_{pk}:=\Sigma(k/m)\lambda_{mp}+n^{-1}_{\min}\Xi^2(k/m),\ \lambda_{mp}:=(\pi pm)^{-2}$ and
\[\Xi^2(t):=\text{diag}(\eta^2_j\nu_j/(F'_j(t)))_{1\leq j\leq d}.\]
However, the approximation of \eqref{DiscSemi} by \eqref{SeqSemi} holds only for localisations $\Sigma+n^{-1/4}_{\min}H,\ H\in H^{\beta}_{\text{sym}}$, which nevertheless is the right ingredient to ensure that LAN-expansions in the sequence space carry over to \eqref{DiscSemi}, cf. Proposition~\ref{PropLANequi}.
\subsection{\textbf{Semi-parametric main result}} For each $k$ the sequence $(Y_{pk})_{p\geq1}$ in \eqref{SeqSemi} is of the same type as the fundamental sequence space model in \eqref{SeqPar}. Indeed the parametric results can be applied simultaneously (over $k$) to the setting \eqref{SeqSemi}, for which we consider targets of estimation given by
\begin{equation}\label{targetinf}
\psi(\Sigma):=\int^1_0(W(\Sigma))(t)dt
\end{equation}
with a differentiable weight $W:\Theta_1\to L^2([0,1],\mathbb R^{d^2})$ in the sense that
\begin{equation}\label{targetreg}
n^{1/4}_{\min}(W(\Sigma+n^{-1/4}_{\min}H)-W(\Sigma))\to\nabla W_{\Sigma}\cdot\text{vec}(H),\quad H\in H^{\beta}_{\text{sym}},
\end{equation}
as $n_{\min}\to\infty$, for some $\nabla W_{\cdot}\in L^2([0,1],\mathbb R^{d^2\times d^2})$. An example is given by the choice $W(\Sigma)=\text{vec}(\Sigma)$ with $\nabla W_{\cdot}=I_{d^2}$.
\begin{theorem}\label{ThmConvNonpara}
Let $\hat\vartheta_n$ be a sequence of regular estimators of $\psi(\Sigma)$ as in \eqref{targetinf} with \eqref{targetreg} and suppose that Assumptions~\ref{AssSigma}-$\Sigma(\beta,M,S)$ and \ref{AssRegF}-$F(\gamma,N,\beta)$ are met. Then under $Q^n_{\Sigma+n^{-1/4}_{\min}H},\ H\in H^{\beta}_{\text{sym}}$, it holds that
\[n^{1/4}_{\min}(\hat\vartheta_n-\psi(\Sigma+n^{-1/4}_{\min}H))\overset{d}{\to}\mathcal N\Big(0,\frac{1}{4}\int^1_0(\nabla W_{\Sigma}\mathcal I^{-1}_{\Sigma}\mathcal Z\nabla W_{\Sigma}^{\top})(t)dt\Big)\ast R,\]
as $n_{\min}\to\infty$, for some distribution $R$, where $Q^n_{\Sigma}$ is the measure induced by \eqref{SeqSemi} and
\[\mathcal I^{-1}_{\Sigma}(t)=8(\Sigma^{1/2}_{\Xi}(t)\otimes\Sigma(t)+\Sigma(t)\otimes \Sigma^{1/2}_{\Xi}(t)),\ t\in[0,1],\]
with $\Sigma^{1/2}_{\Xi}:=\Xi(\Xi^{-1}\Sigma\Xi^{-1})^{1/2}\Xi$.
\end{theorem}
The above statement extends the one-dimensional asymptotic efficiency results of \citet{Reiss[2011]} in various ways. Firstly, the needed H\"older-regularity $(1+\sqrt{5})/4\approx0.81$ in \citet{Reiss[2011]} can be relaxed to Sobolev regularity $\beta>1/2$. This relaxation is achieved by focussing on asymptotically equivalent experiments that share the same semi-parametric lower bounds for targets as in \eqref{targetinf}, whereas Rei\ss\ even considers experiments with common asymptotic non-parametric lower bounds. Moreover, Theorem~\ref{ThmConvNonpara} allows for multidimensionality of $\Sigma$ as well as for asynchronicity and therefore extends asymptotically the basic case Cram\'er-Rao bound for continuously differentiable $\Sigma$ by \citet{Bibinger[2014]}. Since the local method of moments estimator provided by \citet{Bibinger[2014]} attains the Gaussian part of the limit distribution of Theorem~\ref{ThmConvNonpara}, the derived bounds are sharp.
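For orientation, the scalar case makes the connection explicit (our computation, writing $\Sigma(t)=\sigma^2(t)$ and using $\mathcal Z=2$ for $d=1$): then $\Sigma^{1/2}_{\Xi}=\Xi(\Xi^{-1}\Sigma\Xi^{-1})^{1/2}\Xi=\sigma\Xi$, hence $\mathcal I^{-1}_{\Sigma}(t)=16\sigma^3(t)\Xi(t)$ and, for $W(\Sigma)=\Sigma$,
\[\frac{1}{4}\int^1_0\mathcal I^{-1}_{\Sigma}(t)\mathcal Z\,dt=8\int^1_0\sigma^3(t)\Xi(t)\,dt,\]
which for equidistant observations ($F'_1\equiv1$, so $\Xi\equiv\eta_1$) reduces to the one-dimensional efficient variance $8\eta_1\int^1_0\sigma^3(t)\,dt$ of \citet{Reiss[2011]}.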
\begin{remark}
The steps that are taken to establish Theorem~\ref{ThmConvNonpara} can be developed analogously if $\Sigma=(\Sigma_t)_{t\in[0,1]}$ is assumed to be random with realisations in $\Theta_1$ and if $X$ conditioned on $\Sigma$ is still Gaussian. Again the result by \citet{Clement[2013]} gives a conditional convolution theorem, cf. Remark~\ref{RmkSigmaRandom}. The estimator provided by \citet{Altmeyer[2015]} attains the corresponding asymptotic stochastic lower bounds. Similarly, extensions for the noise can be obtained as illustrated in Remark~\ref{RmkDepNoise} and \ref{RmkWeakDep}.
\end{remark}
\section{Analysis of the fundamental parametric model}\label{SecParametric}
Throughout this section we assume that $\Sigma\in\Theta_0$ for some $S>0$, cf. \eqref{Parspace}, and that Assumption~\ref{AssSoboCov}-$G(\beta)$ and Assumption~\ref{regvarass}-$\lambda(\delta)$ are satisfied.
\subsection{\textbf{Connection between discrete and sequence space model}}\label{secCtsSeq}
Consider the discrete observation model \eqref{DiscPar} and its continuous analogue
\begin{equation}\label{ContPar}
dY_t=\Sigma^{1/2}G_tdt+\frac{\eta}{\sqrt{n}}dW_t,\quad t\in[0,1],
\end{equation}
where $W$ is a Wiener process independent of $G$. The model \eqref{ContPar} is consistent with observing the stochastic bilinear forms
\begin{equation}\label{cylmeas}
Y_f:=(f,dY):=\sum^d_{j=1}\int^1_0(f(t))_jd(Y_t)_j,\quad f\in L^2([0,1],\mathbb R^d).
\end{equation}
$Y_f$ is Gaussian with $\mathbb E[Y_f]=0$ and $\text{Cov}(Y_f,Y_g)=\langle K_{\Sigma,n}f,g\rangle_{L^2}$. The underlying covariance operator $K_{\Sigma,n}$ is given by
\[K_{\Sigma,n}:=T_{\Sigma^{1/2}}\text{diag}(\Gamma)_{1\leq j\leq d}T_{\Sigma^{1/2}}+\frac{\eta^2}{n}\text{Id},\]
with $T_{\Sigma^{1/2}}:f\mapsto\Sigma^{1/2} f,\ \text{Id}:f\mapsto f$ and $\text{diag}(\Gamma)_{1\leq j\leq d}:f\mapsto(\Gamma f_j)_{1\leq j\leq d}$ being the covariance operator of $G$. For the orthonormal eigenbasis $(\varphi_p)_{p\geq1}$ of $\Gamma$ and $e_{pi}:=(\mathbbm{1}_{\{i=j\}}\varphi_p)_{1\leq j\leq d}$ the vectors $(Y_{e_{p1}},\ldots,Y_{e_{pd}})^{\top},\ p\geq1,$ follow the same distribution as the sequence $(Y_p)_{p\geq1}$ in \eqref{SeqPar}.
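For the reader's convenience we record the covariance computation behind this identification (a direct calculation using $\Gamma\varphi_p=\lambda_p\varphi_p$ and the orthonormality of $(\varphi_p)_{p\geq1}$):
\[\text{Cov}(Y_{e_{pi}},Y_{e_{qj}})=\langle K_{\Sigma,n}e_{pi},e_{qj}\rangle_{L^2}=\Sigma_{ij}\lambda_p\mathbbm{1}_{\{p=q\}}+\frac{\eta^2}{n}\mathbbm{1}_{\{p=q,\,i=j\}}.\]
For $p=q$ the right-hand side is precisely $(C_p)_{ij}$ from \eqref{SeqPar}, while vectors of different frequencies are uncorrelated and hence, by Gaussianity, independent.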
\begin{definition}
Denote by $\mathcal F_n$ and $\mathcal F^s_n$ the statistical experiments that are generated by the observations \eqref{DiscPar} and \eqref{SeqPar}, respectively.
\end{definition}
Since $(\varphi_p)_{p\geq 1}$ is a basis, observing the sequence $(Y_p)_{p\geq1}$ in \eqref{SeqPar} is equivalent to observing \eqref{ContPar}. Moreover, the following is just a consequence of the more general Theorem~\ref{ThmLeCamDiscCont} given in the Appendix.
\begin{proposition}\label{PropFDiscSeq}
Under Assumption~\ref{AssSoboCov}-$G(\beta)$ the experiments $\mathcal F_n$ and $\mathcal F^s_n$ are asymptotically equivalent. More precisely, the Le Cam distance obeys
\[\Delta(\mathcal F_n,\mathcal F^s_n)=\mathcal O(Sn^{1-\beta}).\]
\end{proposition}
\subsection{\textbf{Local asymptotic normality}}\label{efficiency}
Denote the score in $\mathcal F^s_n$ by $\nabla\ell_n(\Sigma):=\sum_{p\geq1}\nabla\ell_{np}(\Sigma)$ and set $\mathcal I_n(\Sigma)\mathcal Z:=\text{Cov}(\nabla\ell_n(\Sigma))$, where
\begin{equation}\label{SeqScore}
\nabla\ell_{np}(\Sigma):=\frac{1}{2}\lambda_p\text{vec}(C^{-1}_pY_{p}Y^{\top}_{p}C^{-1}_p-C^{-1}_p).
\end{equation}
The Fisher information $\mathcal I_n(\Sigma)\mathcal Z=\sum_{p\geq1}\mathcal I_{np}(\Sigma)\mathcal Z\in\mathbb R^{d^2\times d^2}$ is driven by
\[\mathcal I_{np}(\Sigma):=\frac{1}{4}\lambda^2_p(C^{-1}_p\otimes C^{-1}_p),\quad p\geq1.\]
In the derivation of $\nabla\ell_n$ and $\mathcal I_n$ the following well-known identity was used:
\[\text{vec}(ABC)=(C^{\top}\otimes A)\text{vec}(B),\quad A,B,C\in\mathbb R^{d\times d}.\]
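For completeness, we sketch this derivation (standard Gaussian likelihood calculus): up to additive constants, $\ell_{np}(\Sigma)=-\frac{1}{2}\log\det C_p-\frac{1}{2}Y^{\top}_pC^{-1}_pY_p$ and $\partial C_p/\partial\Sigma_{ab}=\lambda_pE_{ab}$ with the standard basis matrices $E_{ab}$, whence
\[\frac{\partial\ell_{np}}{\partial\Sigma_{ab}}=\frac{\lambda_p}{2}\big(C^{-1}_pY_pY^{\top}_pC^{-1}_p-C^{-1}_p\big)_{ab},\]
which is \eqref{SeqScore} in vectorised form. Writing $Y_p=C^{1/2}_pZ$ with $Z\sim\mathcal N(0,I_d)$ and using that $C^{-1/2}_p\otimes C^{-1/2}_p$ commutes with $\mathcal Z$ yields $\text{Cov}(\nabla\ell_{np}(\Sigma))=\mathcal I_{np}(\Sigma)\mathcal Z$.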
As a consequence of Assumption~\ref{regvarass}, $\mathcal I_n(\Sigma)\mathcal Z$ is well-defined. A crucial quantity is the rate $r_n\to0$ such that the asymptotic Fisher information
\[\mathcal I(\Sigma)\mathcal Z:=\lim_{n\to\infty}r^2_n\mathcal I_n(\Sigma)\mathcal Z\]
is well-defined, where $r_n$ is assumed to be normalised with respect to scalars, e.g. $r_n=n^{-1/4}$. The key to finding this rate $r_n$ lies in the interplay between the operators $\text{diag}(\Gamma)_{1\leq j\leq d}$ and $\tfrac{1}{n}\text{Id}$ along with the regular variation of $\lambda$. More precisely, in the covariance matrices $C_p=\Sigma\lambda_p+\tfrac{\eta^2}{n}I_d$ the impact of signal and noise is (nearly) balanced at the index $p_n$ with $\lambda(p_n)=n^{-1}$, where we identify the sequence $\lambda$ with some continuously interpolated non-increasing analogue $\lambda:\mathbb R_+\to\mathbb R_+$. It is well known that the representation
\begin{equation}\label{regvarrepr}
\lambda(p)=p^{-\delta}L(p)
\end{equation}
is valid, for some slowly varying $L:\mathbb R_+\to\mathbb R_+$, cf. \citet{Bingham[1989]}.
\begin{theorem}\label{ThmRateFisher}
Grant Assumption~\ref{regvarass}-$\lambda(\delta)$ on $\Gamma$. Then the Fisher information satisfies for any $\Sigma\in\R^{d\times d}_{\text{sym}}$ with $\Sigma>0$
\begin{equation}\label{FisherRate}
p^{-1}_n\mathcal I_n(\Sigma)\mathcal Z\to\mathcal I(\Sigma)\mathcal Z,\quad\text{as }n\to\infty,
\end{equation}
where $p_n$ is given by $\lambda(p_n)=n^{-1}$. If $Q$ is an orthogonal matrix such that $\Sigma=Q^{\top}\text{diag}(s_1,\ldots,s_d)Q$ then
\[\mathcal I(\Sigma)=(Q\otimes Q)^{\top}\text{diag}(v_{11},\ldots,v_{1d},v_{21},\ldots,v_{2d},v_{31},\ldots,v_{dd})(Q\otimes Q)\]
with eigenvalues
\[v_{i,j}=\frac{\zeta}{4\eta^{2/\delta}}\int^1_0(s_i+x^{\delta})^{-1}(s_j+x^{\delta})^{-1}dx,\quad i,j=1,\ldots,d,\]
where $\zeta=\lim_{n\to\infty}r^2_np_n$ for $r_n\sim p^{-1/2}_n$ standardised. Moreover, the convergence in \eqref{FisherRate} already holds for $\mathcal I_{\pi_n}(\Sigma):=\sum_{p\in\pi_n}\mathcal I_{np}(\Sigma)$, whenever $\pi_n=[\underline{\pi_n},\overline{\pi_n}]\cap\mathbb N,$ with $\underline{\pi_n}/p_n\to0$ and $(\underline\pi_n\wedge\overline{\pi_n}/p_n)\to\infty$.
\end{theorem}
By the above statement the rate $r_n$ satisfies the relation
\[r_n L(r^{-2}_n)^{1/(2\delta)}\sim n^{-1/(2\delta)},\]
with $L$ as in \eqref{regvarrepr}. Thus the rate $r_n$ is completely determined by the decay of $\lambda$. The slower $\lambda$ decreases, the more observations $Y_p$ carry significant information about $\Sigma$ and the faster $\Sigma$ can be estimated. Moreover, solely the limiting behaviour of $L$ determines the constant $\zeta$. For instance, in the Brownian motion case $\lambda^{\text{BM}}_p=(p-1/2)^{-2}\pi^{-2}$ one has $\delta=2$, $p_n=\sqrt{n}/\pi+1/2$ and $L(p)=(\pi(1-1/(2p)))^{-2}$, which gives $r_n=n^{-1/4}$ and $\zeta=1/\pi$.
A simple calculation, cf. Remark~\ref{RmkFisherExp}, shows that the eigenvalues obey
\[v_{i,j}=\frac{\zeta\pi}{4\delta\sin(\pi/\delta)\eta^{2/\delta}}\cdot\frac{s^{1/\delta-1}_j-s^{1/\delta-1}_i}{s_i-s_j}\]
and that they are driven by the slope of $x\mapsto-x^{1/\delta-1}$ between all pairs $(s_i,s_j)$. Whenever $s_i=s_j$ the slope equals the derivative at $s_i$. In particular, for the case $\Sigma=\sigma^2\in\mathbb R_+$ the Fisher information becomes
\[\mathcal I(\sigma^2)=\frac{\zeta\pi(1-1/\delta)}{4\delta\sin(\pi/\delta)\eta^{2/\delta}}\sigma^{2/\delta-4}.\]
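As a plausibility check (our computation for the Brownian motion case, where $\delta=2$ and $\zeta=1/\pi$ as derived above): $\delta\sin(\pi/\delta)=2$ and $1-1/\delta=1/2$, so that
\[\mathcal I(\sigma^2)=\frac{\sigma^{-3}}{16\eta}\quad\text{and}\quad\frac{1}{4}\mathcal I(\sigma^2)^{-1}\mathcal Z=\frac{1}{4}\cdot16\eta\sigma^3\cdot2=8\eta\sigma^3,\]
in accordance with the scalar evaluation of \eqref{clt_bm} in Example~\ref{ExBM}.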
Sufficient information for asymptotically efficient estimation of $\Sigma$ is already provided by those observations $Y_p$ in $\mathcal F^s_n$ with $p$ ranging over an interval $\pi_n$ as in Theorem~\ref{ThmRateFisher}. This means that maximal information about $\Sigma$ is asymptotically contained in (arbitrarily slowly) increasing neighbourhoods of $p_n$ within the spectrum of $Y=(Y_t)_{t\in[0,1]}$ in $\mathcal F_n$. This gives canonical choices of truncation indices for spectral estimators of $\Sigma$, cf. Section~\ref{estimation}.
For $\Sigma\in\Theta_0$ consider local alternatives of the form $\Sigma+r_nH,\ H\in\R^{d\times d}_{\text{sym}}$, where $r_n$ is chosen according to Theorem~\ref{ThmRateFisher}. Note that $\Sigma+r_nH\in\Theta_0$ for $n$ sufficiently large, hence $P^n_{\Sigma+r_nH}$ might be defined arbitrarily, whenever $\Sigma+r_nH\notin\Theta_0$. Denote by $\Delta_H$ the centred Gaussian process with
\[\text{Cov}(\Delta_{H_1},\Delta_{H_2})=\langle H_1,H_2\rangle_{\mathcal I(\Sigma)\mathcal Z},\quad H_1,H_2\in\R^{d\times d}_{\text{sym}},\]
where it is noted that $\mathcal I(\Sigma)\mathcal Z$ is positive definite on $\{\text{vec}(H):H\in\R^{d\times d}_{\text{sym}}\}$.
\begin{proposition}\label{LAN_gen}
Under Assumption~\ref{regvarass}-$\lambda(\delta)$, for any $\Sigma\in\Theta_0$, the following asymptotic expansion is satisfied in $\mathcal F^s_n$ as $n\to\infty$:
\begin{equation}\label{LAN}
\log\frac{dP^n_{\Sigma+r_nH}}{dP^n_{\Sigma}}=\Delta_{n,H}-\frac{r^2_n}{2}\|H\|^2_{\mathcal I_n(\Sigma)\mathcal Z}+\rho_n,\quad H\in\R^{d\times d}_{\text{sym}},
\end{equation}
where $\Delta_{n,H}\overset{d}{\to}\Delta_H$, under $P^n_{\Sigma},\ r^2_n\|H\|^2_{\mathcal I_n(\Sigma)\mathcal Z}\to\| H\|^2_{\mathcal I(\Sigma)\mathcal Z}$ and $\rho_n=o_{P^n_{\Sigma}}(1)$.
\end{proposition}
Note that $\Delta_{n,H}=r_n\text{vec}(H)^{\top}\nabla\ell_n(\Sigma)$, where $\nabla\ell_n$ denotes the score in $\mathcal F^s_n$. Moreover, the remainder obeys $\rho_n=\rho^{(1)}_n+\rho^{(2)}_n$ with $\mathbb E[\rho^{(1)}_n]=0$ and
\begin{align}
\label{LANrem1}\text{Var}(\rho^{(1)}_n)\leq&r^2_n\|H\|^2\|\Sigma^{-1}\|^2r^2_n\|H\|^2_{\mathcal I_n(\Sigma)\mathcal Z}=\mathcal O(r^2_n),\\
\label{LANrem2}|\rho^{(2)}_n|\leq&2r_n\|H\|\|\Sigma^{-1}\|r^2_n\|H\|^2_{\mathcal I_n(\Sigma)\mathcal Z}=\mathcal O(r_n),
\end{align}
hence \eqref{LAN} holds uniformly in $H$ over balls within $\R^{d\times d}_{\text{sym}}$.\newpage
An implication of the LAN-property~\eqref{LAN} is weak convergence of the localisations $\{P^n_{\Sigma+r_nH}:H\in\R^{d\times d}_{\text{sym}}\}$ to the Gaussian shift experiment $\mathcal G:=\{\mathcal N(\mathcal I(\Sigma)\mathcal Z\text{vec}(H),\mathcal I(\Sigma)\mathcal Z):H\in\R^{d\times d}_{\text{sym}}\}$. Given an observation $Y$ in $\mathcal G$ the property $\mathcal Z\text{vec}(H)=2\text{vec}(H)$ implies that the best unbiased estimator of $\text{vec}(H)$ is given by $\frac{1}{2}\mathcal I(\Sigma)^{-1}Y\sim\mathcal N(\text{vec}(H),\frac{1}{4}\mathcal I(\Sigma)^{-1}\mathcal Z)$. This determines the asymptotic distribution of regular estimators, which is made precise in the following.
\subsection{\textbf{Verification of Theorem~\ref{ThmConvPara}}}
\begin{proof}
If one closely follows the proof of the general (convolution) Theorem 3.11.2 in \citet{VanDVaart[2013]}, the only peculiarity to be taken into account is the matrix $\mathcal Z$. More precisely, for an orthonormal basis $h_1,\ldots,h_{d^*},\ d^*:=d(d+1)/2$, of $\text{vec}(\R^{d\times d}_{\text{sym}}):=\{\text{vec}(A):A\in\R^{d\times d}_{\text{sym}}\}$ with respect to the inner product $\langle \cdot,\cdot\rangle_{\mathcal I(\Sigma)\mathcal Z}$, Proposition~\ref{LAN_gen} and Le Cam's Third Lemma yield
\begin{equation}\label{LimitDecomp}
r^{-1}_n(\hat\vartheta_n-\psi(\Sigma+r_nH))\overset{d}{\to}\mathcal N\Big(0,\sum^{d^*}_{k=1}\nabla\psi_{\Sigma}h_kh^{\top}_k\nabla\psi^{\top}_{\Sigma}\Big)\ast R,
\end{equation}
under $P^n_{\Sigma+r_nH}$, for some distribution $R$. That the limit does not depend on the choice of the basis $(h_k)_{k\leq d^*}$ now follows from
\begin{align*}
&\Big(\sum^{d^*}_{k=1}\nabla\psi_{\Sigma}h_kh^{\top}_k\nabla\psi^{\top}_{\Sigma}\Big)_{i,j}=\sum^{d^*}_{k=1}\langle\nabla\psi_{\Sigma}^{(i)},h_k\rangle\langle\nabla\psi_{\Sigma}^{(j)},h_k\rangle\\
=&\frac{1}{4}\sum^{d^*}_{k=1}\langle\mathcal I^{-1}_{\Sigma}\nabla\psi_{\Sigma}^{(i)},h_k\rangle_{\mathcal I(\Sigma)\mathcal Z}\langle\mathcal I^{-1}_{\Sigma}\nabla\psi_{\Sigma}^{(j)},h_k\rangle_{\mathcal I(\Sigma)\mathcal Z}=\frac{1}{4}\langle\nabla\psi^{(i)}_{\Sigma},\mathcal I^{-1}_{\Sigma}\nabla\psi^{(j)}_{\Sigma}\rangle_{\mathcal Z},
\end{align*}
where $\nabla\psi^{(i)}_{\Sigma}$ denotes the $i$-th column of $\nabla\psi^{\top}_{\Sigma}$ and $1\leq i,j\leq k$.
\end{proof}
\begin{remark}\label{RmkZZZ}
Note that the singularity of $\mathcal I(\Sigma)\mathcal Z$ has no critical impact as $\langle\cdot,h_k\rangle_{\mathcal I(\Sigma)\mathcal Z}=2\langle\cdot,h_k\rangle_{\mathcal I(\Sigma)}$ is the essential isometry-type ingredient used.
\end{remark}
\subsection{\textbf{Estimation}}\label{estimation}
For each observation $Y_p$ in \eqref{SeqPar} an unbiased estimator of $\psi(\Sigma)=\text{vec}(\Sigma)$ can be obtained via
\[\hat\vartheta_p:=\lambda_p^{-1}\text{vec}\Big(Y_pY^{\top}_p-\frac{\eta^2}{n}I_d\Big).\]
Since $\hat\vartheta_p,\ p\geq1,$ are independent it is reasonable to consider a weighted average to reduce variability. Let $\pi_n=[\underline \pi_n,\overline\pi_n]\cap\mathbb N$ be as in Theorem~\ref{ThmRateFisher} and set $\mathcal I_J(\Sigma):=\sum_{p\in J}\mathcal I_{np}(\Sigma)$, for $J\subseteq\mathbb N$. Then, by a Lagrange approach, the choice of weights
\[W_p(\Sigma):=\mathcal I_{\pi_n}(\Sigma)^{-1}\mathcal I_{np}(\Sigma)\]
ensures unbiasedness and minimal covariance of the oracle estimator
\[\hat\vartheta^{\text{or}}_n:=\sum_{p\in\pi_n}W_p(\Sigma)\hat\vartheta_p.\]
Let $\pi'_n\subsetneq\mathbb N$ satisfy $\pi_n\cap\pi'_n=\emptyset$, $n\geq1$, and $|\pi'_n|\to\infty$, as $n\to\infty$. Set $\hat\vartheta^{\text{pre}}_n:=\sum_{p\in\pi'_n}W^{\pi'_n}_p(SI_d)\hat\vartheta_p$, where $W^{\pi'_n}_p(\Sigma):=\mathcal I_{\pi'_n}(\Sigma)^{-1}\mathcal I_{np}(\Sigma)$, and set $\hat\Sigma^{\text{pre}}_n:=\text{mat}(\hat\vartheta^{\text{pre}}_n)$, where $\text{mat}:\mathbb R^{d^2}\to\mathbb R^{d\times d}$ is the inverse of $\text{vec}$. Then an adaptive version of $\hat\vartheta^{\text{or}}_n$ is obtained by
\begin{equation}\label{adaptest}
\hat\vartheta^{\text{ad}}_n:=\sum_{p\in\pi_n}W_p(\hat\Sigma^{\text{pre}}_n)\hat\vartheta_p.
\end{equation}
Note that it is crucial that the weights $(W_p(\hat\Sigma^{\text{pre}}_n))_{p\in\pi_n}$ are independent of $(Y_p)_{p\in\pi_n}$.
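The following minimal numerical sketch illustrates the two-block construction above (our illustration only: $d=1$, Brownian-motion eigenvalues, and all concrete choices of $n$, $\eta$, $\sigma^2$ and the frequency blocks are ad-hoc assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, eta, sigma2 = 10_000, 0.1, 2.0            # sample size, noise level, true Sigma
lam = lambda p: (np.pi * (p - 0.5)) ** -2.0  # lambda_p for Brownian motion
p_n = int(np.sqrt(n) / np.pi)                # signal-noise balancing index

pi_main = np.arange(p_n // 4, 4 * p_n)       # growing neighbourhood of p_n
pi_pre = np.arange(4 * p_n, 5 * p_n)         # disjoint block for pre-estimation

def simulate(ps):
    # draw Y_p ~ N(0, C_p) with C_p = sigma2 * lambda_p + eta^2 / n
    return rng.normal(scale=np.sqrt(sigma2 * lam(ps) + eta**2 / n))

def info(ps, s2):
    # I_np(Sigma) for d = 1: lambda_p^2 / (4 C_p^2), evaluated at a plug-in s2
    C = s2 * lam(ps) + eta**2 / n
    return 0.25 * lam(ps) ** 2 / C**2

def estimate(ps, s2_for_weights):
    Y = simulate(ps)
    theta_p = (Y**2 - eta**2 / n) / lam(ps)  # unbiased per-frequency estimator
    I = info(ps, s2_for_weights)
    return np.sum((I / I.sum()) * theta_p)   # Lagrange-optimal weights

s2_pre = estimate(pi_pre, 5.0)     # pre-estimator with conservative plug-in S
s2_ad = estimate(pi_main, s2_pre)  # weights independent of Y_p on pi_main
print(s2_pre, s2_ad, sigma2)
\end{verbatim}
The final weights depend on the data only through the pre-estimation block $\pi'_n$, mirroring the independence requirement above.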
\begin{theorem}\label{ThmOrAd}
The estimators $\hat\vartheta^{\text{or}}_n$ and $\hat\vartheta^{\text{ad}}_n$ of $\psi(\Sigma)=\text{vec}(\Sigma)$ are regular and efficient in the sense of Theorem~\ref{ThmConvPara}. In particular, it holds that
\[r^{-1}_n(\hat\vartheta^{\text{ad}}_n-\psi(\Sigma+r_nH))\overset{d}{\to}\mathcal N(0,\tfrac{1}{4}\mathcal I(\Sigma)^{-1}\mathcal Z),\quad\text{as }n\to\infty,\]
under $P^n_{\Sigma+r_nH}$, for any $H\in\mathbb R^{d\times d}_{\text{sym}}$.
\end{theorem}
\begin{remark}
The estimator $\hat\vartheta^{\text{ad}}_n=\hat\vartheta^{\text{ad}}_n((Y_p)_{p\geq1})$ in $\mathcal F^s_n$ can be obtained in the initial model $\mathcal F_n$ by the explicit construction via interpolations given in the proof of Theorem~\ref{ThmLeCamDiscCont}. In particular, for an interpolated version $(\bar Y_t)_{t\in[0,1]}$ of \eqref{DiscGen}, cf. \eqref{contmodel_zwischen}, the estimator $\hat\vartheta^{\text{ad}}_n=\hat\vartheta^{\text{ad}}_n((\bar Y_p)_{p\geq1})$ in $\mathcal F_n$ can be built as in \eqref{adaptest} from
\[\bar Y_p:=((e_{pj},\bar Y))_{1\leq j\leq d},\quad p\geq1,\]
where $e_{pi}=(\mathbbm{1}_{\{i=j\}}\varphi_p)_{1\leq j\leq d}$ and $\varphi_p$ is the eigenfunction corresponding to $\lambda_p$, cf. Section~\ref{secCtsSeq}. For the limit distribution of $\hat\vartheta^{\text{ad}}_n((\bar Y_p)_{p\geq1})$ note that for $\bar P^n_{\Sigma}:=\mathcal L((\bar Y_p)_{p\geq1})$ and $f$ continuous and bounded it can easily be seen that
\[\mathbb E_{\bar P^n_{\Sigma}}[f(\hat\vartheta^{\text{ad}}_n)]=\mathbb E_{P^n_{\Sigma}}[f(\hat\vartheta^{\text{ad}}_n)]+\mathcal O(\|f\|_{\infty}\|P^n_{\Sigma}-\bar P^n_{\Sigma}\|_{\text{TV}})\]
where the total variation norm satisfies $\|P^n_{\Sigma}-\bar P^n_{\Sigma}\|_{\text{TV}}\to0$, by the proof of Theorem~\ref{ThmLeCamDiscCont}. In particular, the estimator $\hat\vartheta^{\text{ad}}_n((\bar Y_p)_{p\geq1})$ has the same asymptotic properties as its counterpart constructed in $\mathcal F^s_n$ and it satisfies the statement of Theorem~\ref{ThmOrAd}.
\end{remark}
\subsection{\textbf{Further asymptotic equivalences}}\label{morecam}
The adaptive estimator $\hat\vartheta^{\text{ad}}_n$ in \eqref{adaptest} allows for further asymptotic equivalence statements that complete the asymptotic analysis of the fundamental parametric model $\mathcal F_n$. By Theorem~\ref{ThmRateFisher} the asymptotically significant information for estimating $\Sigma$ efficiently in $\mathcal F^s_n$ is already contained in the subexperiment $\mathcal F^s_{n,\pi_n}$ that is generated by the observations $(Y_p)_{p\in\pi_n}$, where $\pi_n$ is as in Theorem~\ref{ThmRateFisher}, i.e.,
\[\pi_n=[a_np_n,b_np_n]\cap\mathbb N,\quad a_n\downarrow0,\quad b_n\to\infty.\]
Clearly, $\mathcal F^s_n$ is at least as informative as $\mathcal F^s_{n,\pi_n}$, but even the reverse can be shown, at least asymptotically, given that the parameter set $\Theta_0$ is replaced by the more restrictive set (with $S>1$)
\begin{equation}\label{sparprime}
\Theta_0'=\{\Sigma\in\R^{d\times d}_{\text{sym}}:S^{-1}I_d<\Sigma<SI_d\}.
\end{equation}
\begin{proposition}\label{PropLeCamProj}
For parameter set $\Theta_0'$ in \eqref{sparprime} the experiments $\mathcal F^s_n$ and $\mathcal F^s_{n,\pi_n}$ are asymptotically equivalent in Le Cam's sense. More precisely,
\[\Delta(\mathcal F^s_n,\mathcal F^s_{n,\pi_n})=\mathcal O\big(S\log(n)\,\big(a^{\delta-1/2}_n\vee b^{1/2-\delta}_n\big)\big).\]
\end{proposition}
Proposition~\ref{PropLeCamProj} gives further intuition about smoothing choices for several known estimation methods, such as pre-averaging, where frequencies of order $\sqrt{n}$ play a central role for models driven by a Brownian motion, cf. \citet{Jacod[2009]}.
Next the impact of deviations in the underlying eigenvalue sequence $\lambda$ is investigated. As we have seen in Theorem~\ref{ThmRateFisher}, the leading term of $(\lambda_p)_{p\geq1}$ completely determines the asymptotic lower bounds. As an example consider the cases in which $G$ in \eqref{ContPar} is a Brownian bridge or a Brownian motion. The respective underlying eigenvalue sequences read as
\[\lambda^{\text{BB}}_p=(\pi p)^{-2}\quad\text{and}\quad\lambda^{\text{BM}}_p=\pi^{-2}(p-1/2)^{-2},\]
and thus the bounds obtained by Theorem~\ref{ThmConvPara} coincide. In fact, even a general characterisation of asymptotic equivalence on the basis of the underlying eigenvalue sequence can be given.
\begin{proposition}\label{PropEigLeCamEqui}
For sequences $\lambda$ and $\lambda'$ satisfying Assumption~\ref{regvarass}-$\lambda(\delta)$ (with possibly different $\delta$) let $\mathcal F^s_n$ and $\mathcal F^{s'}_n$, respectively, be sequence space models of type \eqref{SeqPar} on $\Theta_0'$ as in \eqref{sparprime}. Then the following are equivalent:
\begin{enumerate}
\item $\lambda_p/\lambda'_p\to1$, as $p\to\infty$.
\item $r_n/r'_n\to1$, as $n\to\infty$, and $\mathcal I(\Sigma)=\mathcal I'(\Sigma)$, for all $\Sigma\in\Theta_0$.
\item $\Delta(\mathcal F^s_n,\mathcal F^{s'}_n)\to0$, as $n\to\infty$,
\end{enumerate}
where $r'_n$ and $\mathcal I'(\Sigma)\mathcal Z$ are the rate and asymptotic Fisher information in $\mathcal F^{s'}_n$.
\end{proposition}
\newpage
The impact of the leading term of $\lambda$ yields an interesting finding in the particular scenario in which the signal process is a mixture
\[G_t=Z_{1,t}+Z_{2,t},\]
of two independent Gaussian processes $Z_i=(Z_{i,t})_{t\in[0,1]},\ i=1,2$. If the covariance operators of $Z_1$ and $Z_2$ are diagonalisable by the same basis, then the process with more slowly decaying eigenvalues completely determines the asymptotic properties of the estimation problem. Therefore, one might conjecture that for $G$ a so-called mixed fractional Brownian motion of Hurst index $H>1/2$, cf. \citet{Cheridito[2001]}, solely the Brownian motion part contributes to the underlying asymptotics.
\section{Semiparametric efficiency under asynchronicity}\label{SecSemiPara}
In the following we suppose that Assumptions~\ref{AssSigma}-$\Sigma(\beta,M,S)$ and \ref{AssRegF}-$F(\gamma,N,\beta)$ hold.
\subsection{\textbf{Locally parametric approximation}}\label{SsecPwConAppr}
As in the parametric set-up, the observations \eqref{DiscSemi} are approximated by a continuous analogue. However, in order to use the parametric results, locally constant approximations of $\Sigma$ and $F=(F_j)_{1\leq j\leq d}$ are considered. More precisely, for $m$ disjoint blocks $\text{I}_{mk}:=[k/m,(k+1)/m),\ k=0,\ldots,m-1$, and $\Sigma_{m,k}:=\Sigma(k/m)$, introduce
\[\Sigma_m:=\sum^{m-1}_{k=0}\Sigma_{m,k}\mathbbm{1}_{\text{I}_{mk}}(\cdot),\quad F'_{j,m}:=\sum^{m-1}_{k=0}F'_j\Big(\frac{k}{m}\Big)\mathbbm{1}_{\text{I}_{mk}}(\cdot),\quad j=1,\ldots,d,\]
and the corresponding continuous observation model
\begin{equation}\label{ContSemi}
dY^m_t=\Big(\int^t_0\Sigma^{1/2}_m(s)dB_s\Big)dt+\Xi_m(t)dW_t,\quad t\in[0,1],
\end{equation}
where
\[\Xi^2_m:=\text{diag}(\eta^2_j/(n_jF'_{j,m}))_{1\leq j\leq d}.\]
\begin{definition}\label{DefExpMCm}
For $n:=(n_1,\ldots,n_d)$ let $\mathcal M_n$ and $\mathcal M^c_n$ be the statistical experiments that are generated by the discrete and continuous observations \eqref{DiscSemi} and \eqref{ContSemi}, respectively.
\end{definition}
Let $\lambda_{mp}:=(\pi pm)^{-2}$, $p\geq1$, and set $n_{\max}:=\max_{1\leq j\leq d}n_j$. The Le Cam distance between $\mathcal M_n$ and $\mathcal M^c_n$ is bounded by the approximation errors of $\Sigma_m$ and $F'_{j,m}$. As $m$ will have to be chosen later in this section such that $m=o(\sqrt{n_{\min}})$, the restriction $\beta>1/2$ is evident in view of the following.
\begin{proposition}\label{PropPwc}
For any $\kappa\in(0,1/2)$ and $n_{\min}\to\infty$ it holds that
\[\Delta(\mathcal M_n,\mathcal M^c_n)=\mathcal O(MSn_{\max}n^{-3/2+\kappa}_{\min})+\mathcal O(MSn^{1/4}_{\max}m^{-\beta}).\]
In particular, asymptotic equivalence holds, given that $m=o(\sqrt{n_{\min}})$.
\end{proposition}
\subsection{\textbf{LAN for correlated and uncorrelated sequence space models}}
As described in Section~\ref{secCtsSeq} a continuous experiment can be represented in the sequence space. To this end, consider the (normalised) $L^2([0,1],\mathbb R)$-basis
\begin{align*}
\varphi_{0,0}(t)&:=\sqrt{m}\mathbbm{1}_{\text{I}_{m,0}}(t),\\
\varphi_{0,k+1}(t)&:=\frac{\sqrt{m}}{\sqrt{2}}(\mathbbm{1}_{\text{I}_{mk}}(t)-\mathbbm{1}_{\text{I}_{m,k+1}}(t)),\quad k=0,\ldots,m-2,\\
\varphi_{pk}(t)&:=\sqrt{2m}\cos(p\pi(tm-k))\mathbbm{1}_{\text{I}_{mk}}(t),\quad p\geq1,\quad k=0,\ldots,m-1.
\end{align*}
Via $e_{pki}=(\mathbbm{1}_{\{i=j\}}\varphi_{pk})_{1\leq j\leq d}$ Gaussian random vectors
\begin{equation}\label{SeqSemiTwo}
S_{pk}:=((e_{pki},dY^m))_{1\leq i\leq d},\quad p\geq0,\ k=0,\ldots,m-1
\end{equation}
are obtained, cf. \eqref{cylmeas}. Clearly, observing the correlated vectors $(S_{pk})_{p\geq 0,k=0,\ldots,m-1}$ is equivalent to observing \eqref{ContSemi} and more informative than observing $(S_{pk})_{p\geq 1,k=0,\ldots,m-1}$. However, the latter sequence consists of independent vectors and is close to \eqref{SeqSemi}, hence it is similar to the experiment $\mathcal F^s_n$, which has been studied intensively in Section~\ref{SecParametric}.
\begin{proposition}\label{PropLANequi}
Let $m=o(\sqrt{n_{\min}})$ be satisfied. Then any LAN-expansion with respect to $\Sigma+n^{-1/4}_{\min}H,\ H\in H^{\beta}_{\text{sym}},$ for the model \eqref{SeqSemi} is also valid in $\mathcal M^c_n$ and $\mathcal M_n$.
\end{proposition}
\subsection{\textbf{Verification of Theorem~\ref{ThmConvNonpara}}}
\begin{proof}
The score induced by \eqref{SeqSemi} equals $\nabla\ell_{n}(\Sigma):=\text{vec}(\nabla\ell^{(0)}_n(\Sigma),\ldots,\nabla\ell^{(m-1)}_n(\Sigma))$, where $\nabla\ell^{(k)}_n(\Sigma)$ is of the exact same shape as the parametric score in \eqref{SeqScore} with $\lambda_{mp},\ C_{pk}$ and $Y_{pk}$ replacing $\lambda_p,\ C_p$ and $Y_p$, respectively. Therefore the (not $\mathcal Z$-normalised) Fisher information in the sequence space experiment $\mathcal M^s_n$ generated by \eqref{SeqSemi} is given by the block diagonal matrix
\[\mathcal I_{n,m}(\Sigma):=\begin{pmatrix}\mathcal I_n^{(0)}(\Sigma)&0&\cdots&0\\0&\mathcal I_n^{(1)}(\Sigma)&\cdots&0\\\vdots&&\ddots&\vdots\\0&\cdots&\cdots&\mathcal I_n^{(m-1)}(\Sigma)\end{pmatrix},\]
with blocks
\[\mathcal I_n^{(k)}(\Sigma):=\frac{1}{4}\sum^{\infty}_{p=1}\lambda^2_{mp}(C^{-1}_{pk}\otimes C^{-1}_{pk}),\ k=0,\ldots,m-1.\]
As in Theorem~\ref{ThmRateFisher}, regular variation of the eigenvalues $\lambda$ yields that on each block $\text{I}_{mk}$ the Fisher information grows with rate $\sqrt{n_{\min}}/m$ such that
\begin{equation}\label{FisherRiemann}
n^{-1/2}_{\min}\sum^{m-1}_{k=0}\mathcal I_n^{(k)}(\Sigma)=\frac{1}{m}\sum^{m-1}_{k=0}\mathcal I_{\Sigma}(k/m)+o(1)\to\int^1_0\mathcal I_{\Sigma}(t)dt,
\end{equation}
i.e., the rate is $n^{-1/4}_{\min}$, where (cf. proof of Theorem~\ref{ThmRateFisher} and Remark~\ref{RmkFisherExp})
\[\mathcal I_{\Sigma}(t)=\frac{1}{8}(\Sigma^{1/2}_{\Xi}(t)\otimes\Sigma(t)+\Sigma(t)\otimes \Sigma^{1/2}_{\Xi}(t))^{-1},\ t\in[0,1].\]
For $H\in H^{\beta}_{\text{sym}}$, understood (as before) in the sense that $\Sigma+n^{-1/4}_{\min}H\in\Theta_1$ for $n$ sufficiently large, note that \eqref{LANrem1} and \eqref{LANrem2} hold uniformly in $H_{m,k}:=H(k/m),\ k=0,\ldots,m-1$. Thus applying Proposition~\ref{LAN_gen} simultaneously over the blocks leads to (denoting by $Q^n_{\Sigma}$ the measure induced by \eqref{SeqSemi})
\begin{align*}
\log\frac{dQ^n_{\Sigma+n^{-1/4}_{\min}H}}{dQ^n_{\Sigma}}=\sum^{m-1}_{k=0}\Big(&\text{vec}(H_{m,k})^{\top}\nabla\ell^{(k)}_n(\Sigma)-\frac{1}{2\sqrt{n_{\min}}}\|H_{m,k}\|^2_{\mathcal I_n^{(k)}(\Sigma)\mathcal Z,L^2}\\
&+\rho^{(1,k)}_n+\rho^{(2,k)}_n\Big),
\end{align*}
where \eqref{FisherRiemann} implies $n^{-1/2}_{\min}\sum^{m-1}_{k=0}\|H_{m,k}\|^2_{\mathcal I_n^{(k)}(\Sigma)\mathcal Z,L^2}\to\|H\|^2_{\mathcal I_{\Sigma}\mathcal Z,L^2}$ with $\langle H,H\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}:=\int^1_0\langle H(t),H(t)\rangle_{\mathcal I_{\Sigma}(t)\mathcal Z}dt$ (similarly for $\|\cdot\|_{\mathcal I_n^{(k)}(\Sigma)\mathcal Z,L^2}$). Moreover, for $k=0,\ldots,m-1$, \eqref{LANrem1} and \eqref{LANrem2} imply $\mathbb E[\rho^{(1,k)}_n]=0$ as well as
\[\text{Var}\Big(\sum^{m-1}_{k=0}\rho^{(1,k)}_n\Big)=\mathcal O(n^{-1/2}_{\min}),\quad\sum^{m-1}_{k=0}|\rho^{(2,k)}_n|=\mathcal O(n^{-1/4}_{\min}).\]
Since a central limit theorem applies to $n^{-1/4}_{\min}\sum^{m-1}_{k=0}\text{vec}(H_{m,k})^{\top}\nabla\ell^{(k)}_n(\Sigma)$, analogously to Proposition~\ref{LAN_gen}, the sequence of experiments $\mathcal M^s_n$ satisfies
\begin{equation}\label{LANNonpara}
\log\frac{dQ^n_{\Sigma+n^{-1/4}_{\min}H}}{dQ^n_{\Sigma}}=\Delta_{n,\Sigma,H}-\frac{1}{2}\|H\|^2_{\mathcal I_{\Sigma}\mathcal Z,L^2}+o_{Q^n_{\Sigma}}(1),\quad H\in H^{\beta}_{\text{sym}},
\end{equation}
where $\Delta_{n,\Sigma,H}\overset{d}{\to}\Delta_{\Sigma,H}$, under $Q^n_{\Sigma}$, with $\Delta_{\Sigma,H}$ being the centred Gaussian process with $\text{Cov}(\Delta_{\Sigma,H_1},\Delta_{\Sigma,H_2})=\langle H_1,H_2\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}$.
In order to establish a convolution theorem, the proof of Theorem 3.11.2 in \citet{VanDVaart[2013]} is once more closely followed. First, denote the asymptotic perturbation error by
\[\dot\kappa(H):=\int^1_0(\nabla W_{\Sigma}\text{vec}(H))(t)dt=\lim_{n_{\min}\to\infty}n^{1/4}_{\min}(\psi(\Sigma+n^{-1/4}_{\min}H)-\psi(\Sigma)).\]
For $U\geq1$ let $L_U$ be a $U$-dimensional subspace of $H^{\beta}_{\text{sym}}$ and let $H_1,\ldots,H_U$ be an orthonormal basis of $L_U$ with respect to $\langle\cdot,\cdot\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}$. Denote by $\dot W^{(i)}_{\Sigma}$ the $i$-th column of $(\nabla W_{\Sigma})^{\top}$ and let $h_u:=\text{vec}(H_u),\ u=1,\ldots,U$. Then \eqref{LANNonpara} and Le Cam's Third Lemma yield that the limit distribution of regular estimators under $Q^n_{\Sigma+n^{-1/4}_{\min}H}$, $H\in L_U$, is a convolution of some distribution $R$ with $\mathcal N(0,\sum^U_{u=1}\dot\kappa(H_u)\dot\kappa(H_u)^{\top})$, cf. \eqref{LimitDecomp}. Thus the $(i,j)$-entry of the optimal asymptotic covariance for estimating $\psi(\Sigma+n^{-1/4}_{\min}H)$ is obtained by a limiting argument and (once more) by the properties of $\mathcal Z$ via
\begin{align*}
&\lim_{U\to\infty}\sum^U_{u=1}(\dot\kappa(H_u)\dot\kappa(H_u)^{\top})_{i,j}=\lim_{U\to\infty}\sum^U_{u=1}\langle\dot W_{\Sigma}^{(i)},h_u\rangle_{L^2}\langle\dot W_{\Sigma}^{(j)},h_u\rangle_{L^2}\\
=&\lim_{U\to\infty}\frac{1}{4}\sum^U_{u=1}\langle\mathcal I^{-1}_{\Sigma}\dot W_{\Sigma}^{(i)},h_u\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}\langle\mathcal I^{-1}_{\Sigma}\dot W_{\Sigma}^{(j)},h_u\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}\\
=&\frac{1}{4}\langle\mathcal I^{-1}_{\Sigma}\dot W_{\Sigma}^{(i)},\mathcal I^{-1}_{\Sigma}\dot W_{\Sigma}^{(j)}\rangle_{\mathcal I_{\Sigma}\mathcal Z,L^2}.
\end{align*}
\end{proof}
\section{Introduction}
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, since the meaning of natural language is a function of those parts. How to construct a sentence representation from distributed word embeddings is one instance of this larger issue.
Even though sequential neural models such as recurrent neural networks (RNN) \cite{elman1990finding} and their variants, including Long Short-Term Memory (LSTM) \cite{hochreiter1997long} and the Gated Recurrent Unit (GRU) \cite{cho2014learning}, have become the de facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) \cite{kim2014convolutional} or self-attention based models \cite{shen2018reinforced}.
From a linguistic point of view, the underlying tree structure---as expressed by its constituency and dependency trees---of a sentence is an integral part of its meaning.
Inspired by this fact, some recursive neural network (RvNN\footnote{To avoid confusion, we call recursive neural networks (or tree-structured NNs) RvNNs to distinguish them from recurrent neural networks RNNs, following the convention of some previous works.}) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis \cite{socher2012semantic,socher2013recursive}, machine translation \cite{yang2017towards}, natural language inference \cite{bowman2016fast}, and discourse relation classification \cite{wang2017tag}.
However, some recent works \cite{yogatama2017learning,choi2018learning} have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models.
Given the surprising performance of the latent tree models on some sentence-level tasks, a natural question arises: \textit{Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?}
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag\footnote{In this work, we refer to both part-of-speech (POS) tags (e.g. DT-determiner, JJ-adjective) for words and phrase-level tags (e.g. NP-noun phrase, VP-verb phrase) simply as `tags'.} information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM, which extracts helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and the linguistic tags of constituency trees simultaneously, and use it to control the composition function of the existing word-level tree-LSTM.
In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models.
Moreover, we propose a clustered tag set to replace the existing tags, on the assumption that the original syntactic tags are too fine-grained to be useful in neural models.
In short, our contributions in this work are as follows:
\begin{itemize}
\item We propose a new linguistically-motivated neural model which generates high-quality sentence representations by considering all the information extracted from constituency parse trees.
\item In addition, we demonstrate the superiority of the proposed models, achieving new state-of-the-art performance within the same model class on 4 out of 5 sentence classification benchmarks, as well as showing competitive results compared to other types of neural models.
\item We empirically show that another key point to the success of tree-structured models is to contextualize input word embeddings so that the corresponding input for each word in a sentence can better reflect the meaning of the whole sentence.
\end{itemize}
\section{Related Work}
Recursive neural networks (RvNN) are a kind of neural architecture which model sentences by exploiting syntactic structure.
While earlier RvNN models proposed utilizing diverse composition functions, including feed-forward neural networks \cite{socher2011parsing}, matrix-vector multiplication \cite{socher2012semantic}, and tensor computation \cite{socher2013recursive}, tree-LSTMs \cite{tai2015improved} remain the standard for several sentence-level tasks.
Even though classic RvNNs have demonstrated superior performance on a variety of tasks, their inflexibility, i.e. their inability to handle \textit{dynamic compositionality} for different syntactic configurations, is a considerable weakness.
For instance, it would be desirable if our model could distinguish adjective-noun composition from verb-noun or preposition-noun composition, as models failing to make such a distinction ignore real-world syntactic considerations such as the `-arity' of function words (i.e. types), and the adjunct/argument distinction.
To enable dynamic compositionality in recursive neural networks, many previous works \cite{hashimoto2013simple,dong2014adaptive,qian2015learning,wang2017tag,liu2017dynamic,huang2017encoding,teng2017head} have proposed various methods.
One main direction of research leverages tag information, which is produced as a by-product of parsing.
In detail, \citeauthor{qian2015learning} (\citeyear{qian2015learning}) suggested TG-RNN, a model employing different composition functions according to POS tags, and TE-RNN/TE-RNTN, models which leverage tag embeddings as additional inputs for the existing tree-structured models.
Despite the novelty of utilizing tag information, the explosion of the number of parameters (in case of the TG-RNN) and the limited performance of the original models (in case of the TE-RNN/TE-RNTN) have prevented these models from being widely adopted.
Meanwhile, \citeauthor{wang2017tag} (\citeyear{wang2017tag}) and \citeauthor{huang2017encoding} (\citeyear{huang2017encoding}) proposed models based on a tree-LSTM which also uses the tag vectors to control the gate functions of the tree-LSTM.
In spite of their impressive results, a limitation is that the trained tag embeddings are too simple to reflect the rich information which tags provide in different syntactic structures. To alleviate this problem, we introduce structure-aware tag representations in the next section.
Another way of building dynamic compositionality into RvNNs is to take advantage of a meta-network (or hyper-network).
Inspired by recent works on dynamic parameter prediction, DC-TreeLSTMs \cite{liu2017dynamic} dynamically create the parameters for compositional functions in a tree-LSTM. Specifically, the model has two separate tree-LSTM networks whose architectures are similar, but the smaller of the two is utilized to calculate the weights of the bigger one. A possible problem for this model is that it can easily be trained in a way that leaves the role of each tree-LSTM ambiguous, as they share the same input, i.e. word information. Therefore, we design two disentangled tree-LSTMs in our model so that one focuses on extracting useful features from only syntactic information while the other composes semantic units with the aid of those features. Furthermore, our model reduces the computational complexity by utilizing typical tree-LSTM frameworks instead of computing the weights for each example.
Finally, some recent works \cite{yogatama2017learning,choi2018learning} have proposed latent tree-structured models that learn how to formulate tree structures from only sequences of tokens, without the aid of syntactic trees or linguistic information.
The latent tree models have the advantage of being able to find the optimized task-specific order of composition rather than a sequential or syntactic one.
In experiments, we compare our model with not only syntactic tree-based models but also latent tree models, demonstrating that modeling with explicit linguistic knowledge can be an attractive option.
\section{Model}
In this section, we introduce a novel RvNN architecture, called \textbf{SATA Tree-LSTM}\footnote{The implementation of our model and supplemental materials are available at https://github.com/galsang/SATA-Tree-LSTM.} (\textbf{S}tructure-\textbf{A}ware \textbf{T}ag \textbf{A}ugmented \textbf{Tree-LSTM}).
This model is similar to typical Tree-LSTMs, but provides dynamic compositionality by augmenting a separate tag-level tree-LSTM which produces structure-aware tag representations for each node in a tree.
In other words, our model has two independent tree-structured modules based on the same constituency tree, one of which (word-level tree-LSTM) is responsible for constructing sentence representations given a sequence of words as usual, while the other (tag-level tree-LSTM) provides supplementary syntactic information to the former.
In section 3.1, we first review tree-LSTM architectures.
Then in section 3.2, we introduce a tag-level tree-LSTM and structure-aware tag representations.
In section 3.3, we discuss an additional technique to boost the performance of tree-structured models, and in section 3.4, we describe the entire architecture of our model in detail.
\subsection{Tree-LSTM}
The LSTM \cite{hochreiter1997long} architecture was first introduced as an extension of the RNN architecture to mitigate the vanishing and exploding gradient problems. In addition, several works have found that applying the LSTM cell to tree structures can be an effective means of modeling sentence representations.
To be formal, the composition function of the cell in a tree-LSTM can be formulated as follows:
\begin{equation} \label{eq:1}
\begin{bmatrix}
\mathbf{i} \\
\mathbf{f}_l \\
\mathbf{f}_r \\
\mathbf{o} \\
\mathbf{g}
\end{bmatrix}
=
\begin{bmatrix}
\sigma \\
\sigma \\
\sigma \\
\sigma \\
\tanh
\end{bmatrix}
\Bigg( \mathbf{W}
\begin{bmatrix}
\mathbf{h}_l \\
\mathbf{h}_r \\
\end{bmatrix}
+ \mathbf{b} \Bigg)
\end{equation}
\begin{equation} \label{eq:2}
\mathbf{c} = \mathbf{f}_l \odot \mathbf{c}_l + \mathbf{f}_r \odot \mathbf{c}_r + \mathbf{i} \odot \mathbf{g}\\
\end{equation}
\begin{equation} \label{eq:3}
\mathbf{h} = \mathbf{o} \odot \tanh{\left(\mathbf{c}\right)}
\end{equation}
\noindent where $\mathbf{h}, \mathbf{c} \in\mathbb{R}^{d}$ indicate the hidden state and cell state of the LSTM cell, and $\mathbf{h}_l, \mathbf{h}_r, \mathbf{c}_l, \mathbf{c}_r \in\mathbb{R}^{d}$ the hidden states and cell states of a left and right child.
$\mathbf{g} \in\mathbb{R}^{d}$ is the newly composed input for the cell and $\mathbf{i}, \mathbf{f}_{l}, \mathbf{f}_{r}, \mathbf{o} \in\mathbb{R}^{d}$ represent an input gate, two forget gates (left, right), and an output gate respectively.
$\mathbf{W} \in\mathbb{R}^{5d\times2d}$ and $\mathbf{b} \in\mathbb{R}^{5d}$ are trainable parameters.
$\sigma$ corresponds to the sigmoid function, $\tanh$ to the hyperbolic tangent, and $\odot$ to element-wise multiplication.
Note the equations assume that there are only two children for each node, i.e. binary or binarized trees, following the standard in the literature. While RvNN models can be constructed on any tree structure, in this work we only consider constituency trees as inputs.
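For concreteness, the composition above can be condensed into a few lines of PyTorch-style code. The following is an illustrative sketch, not the authors' released implementation; the class and variable names are ours.
\begin{verbatim}
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    # Sketch of Eqs. (1)-(3): one affine map produces all five
    # gate pre-activations (W in R^{5d x 2d}, b in R^{5d}).
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(2 * d, 5 * d)

    def forward(self, h_l, c_l, h_r, c_r):
        gates = self.proj(torch.cat([h_l, h_r], dim=-1))
        i, f_l, f_r, o, g = gates.chunk(5, dim=-1)
        i, f_l, f_r, o = (torch.sigmoid(t) for t in (i, f_l, f_r, o))
        g = torch.tanh(g)
        c = f_l * c_l + f_r * c_r + i * g   # Eq. (2)
        h = o * torch.tanh(c)               # Eq. (3)
        return h, c
\end{verbatim}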
In spite of the obvious upside of their flexibility, recursive models are known to be difficult to fully utilize with batch computations, compared to other neural architectures, because of the diversity of structure found across sentences.
To alleviate this problem, \citeauthor{bowman2016fast} (\citeyear{bowman2016fast}) proposed the SPINN model, which brings a shift-reduce algorithm to the tree-LSTM.
As SPINN simplifies the process of constructing a tree into only two operations, i.e. shift and reduce, it can support more effective parallel computations while enjoying the advantages of tree structures.
For efficiency, our model also starts from our own SPINN re-implementation, whose function is exactly the same as that of the tree-LSTM.
\subsection{Structure-aware Tag Representation}
In most previous works using linguistic tag information \cite{qian2015learning,wang2017tag,huang2017encoding}, tags are usually represented as simple low-dimensional dense vectors, similar to word embeddings. This approach seems reasonable for POS tags, which are attached to the corresponding words, but it is less suitable for phrase-level constituent tags (e.g. NP, VP, ADJP): the same phrase tag can label constituents of very different size and internal structure depending on the syntactic context, as the case of the NP tags in Figure \ref{fig:figure1} shows. Here, the NP consisting of DT[the]-NN[stories] has a different internal structure than the NP consisting of NP[the film 's]-NNS[shortcomings].
One way of deriving \textit{structure-aware} tag representations from the original tag embeddings is to introduce a separate tag-level tree-LSTM which accepts the typical tag embeddings at each node of a tree and outputs the computed structure-aware tag representations for the nodes. Note that the module concentrates on extracting useful syntactic features by considering only the tags and structures of the trees, excluding word information.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{figure1.pdf}
\caption{A constituency tree example from Stanford Sentiment Treebank.}
\label{fig:figure1}
\end{figure}
Formally, we denote a tag embedding for the tag attached to each node in a tree as $\textbf{e} \in\mathbb{R}^{d_\text{T}}$.
Then, the function of each cell in the tag tree-LSTM is defined in the following way.
Leaf nodes are defined by the following:
\begin{equation} \label{eq:4}
\begin{bmatrix}
\hat{\mathbf{c}} \\
\hat{\mathbf{h}} \\
\end{bmatrix}
= \tanh{\left(\mathbf{U}_\text{T} \mathbf{e} + \mathbf{a}_\text{T}\right)}
\end{equation}
\noindent while non-leaf nodes are defined by the following:
\begin{equation} \label{eq:5}
\begin{bmatrix}
\hat{\mathbf{i}} \\
\hat{\mathbf{f}}_l \\
\hat{\mathbf{f}}_r \\
\hat{\mathbf{o}} \\
\hat{\mathbf{g}}
\end{bmatrix}
=
\begin{bmatrix}
\sigma \\
\sigma \\
\sigma \\
\sigma \\
\tanh
\end{bmatrix}
\Bigg( \mathbf{W_\text{T}}
\begin{bmatrix}
\hat{\mathbf{h}}_l \\
\hat{\mathbf{h}}_r \\
\mathbf{e} \\
\end{bmatrix}
+ \mathbf{b}_\text{T} \Bigg)
\end{equation}
\begin{equation} \label{eq:6}
\hat{\mathbf{c}} = \hat{\mathbf{f}}_l \odot \hat{\mathbf{c}}_l + \hat{\mathbf{f}}_r \odot \hat{\mathbf{c}}_r + \hat{\mathbf{i}} \odot \hat{\mathbf{g}}\\
\end{equation}
\begin{equation} \label{eq:7}
\hat{\mathbf{h}} = \hat{\mathbf{o}} \odot \tanh{\left(\hat{\mathbf{c}}\right)}
\end{equation}
\noindent where $\hat{\mathbf{h}}, \hat{\mathbf{c}} \in\mathbb{R}^{d_\text{T}}$ represent the hidden state and cell state of each node in the tag tree-LSTM.
We regard the hidden state ($\hat{\mathbf{h}}$) as a structure-aware tag representation for the node.
$ \mathbf{U}_\text{T} \in\mathbb{R}^{2d_\text{T} \times d_\text{T}}, \textbf{a}_\text{T} \in\mathbb{R}^{2d_\text{T}}, \mathbf{W}_\text{T} \in\mathbb{R}^{5d_\text{T} \times 3d_\text{T}}$, and $\mathbf{b}_\text{T} \in\mathbb{R}^{5d_\text{T}}$ are trainable parameters.
The rest of the notation follows equations \ref{eq:1}, \ref{eq:2}, and \ref{eq:3}.
In the case of leaf nodes, the states are computed by a simple non-linear transformation.
Meanwhile, the composition function in a non-leaf node absorbs the tag embedding ($\mathbf{e}$) as an additional input as well as the hidden states of the two children nodes. The benefit of revising tag representations according to the internal structure is that the derived embedding is a function of the corresponding makeup of the node, rather than a monolithic, categorical tag.
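A minimal sketch of this tag-level cell, in the same hypothetical PyTorch style as above (again our own illustration, continuing with the imports from the previous listing):
\begin{verbatim}
class TagTreeLSTMCell(nn.Module):
    # Sketch of Eqs. (4)-(7): leaf states come from a plain non-linear
    # transform of the tag embedding e; internal nodes additionally
    # consume e next to the children's hidden states.
    def __init__(self, d_t):
        super().__init__()
        self.leaf = nn.Linear(d_t, 2 * d_t)      # U_T, a_T
        self.proj = nn.Linear(3 * d_t, 5 * d_t)  # W_T, b_T

    def leaf_node(self, e):
        c, h = torch.tanh(self.leaf(e)).chunk(2, dim=-1)  # Eq. (4)
        return h, c

    def internal_node(self, h_l, c_l, h_r, c_r, e):
        gates = self.proj(torch.cat([h_l, h_r, e], dim=-1))
        i, f_l, f_r, o, g = gates.chunk(5, dim=-1)
        i, f_l, f_r, o = (torch.sigmoid(t) for t in (i, f_l, f_r, o))
        g = torch.tanh(g)
        c = f_l * c_l + f_r * c_r + i * g   # Eq. (6)
        h = o * torch.tanh(c)               # Eq. (7)
        return h, c
\end{verbatim}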
With regard to the tags themselves, we conjecture that the taxonomy of the tags currently in use in many NLP systems is too complex to be utilized effectively in deep neural models, considering the specificity of many tag sets and the limited amount of data with which to train. Thus, we cluster POS (word-level) tags into 12 groups following the universal POS tagset \cite{petrov2012universal} and phrase-level tags into 11 groups according to criteria analogous to the case of words, resulting in 23 tag categories in total. In this work, we use the revised coarse-grained tags instead of the original ones. For more details, we refer readers to the supplemental materials.
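To illustrate the flavour of the clustering (the exact grouping is listed in the supplemental materials), a hypothetical excerpt of the word-level mapping in the spirit of the universal POS tagset might look as follows:
\begin{verbatim}
# Hypothetical excerpt; the full 12-group word-level clustering
# follows the universal POS tagset of Petrov et al. (2012).
UNIVERSAL_POS = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN", "NNPS": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBG": "VERB", "VBN": "VERB",
    "JJ": "ADJ", "JJR": "ADJ", "JJS": "ADJ",
    "RB": "ADV", "DT": "DET",
}

def coarse_tag(ptb_tag):
    return UNIVERSAL_POS.get(ptb_tag, "X")  # "X": catch-all group
\end{verbatim}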
\subsection{Leaf-LSTM}
An inherent shortcoming of RvNNs relative to sequential models is that each intermediate representation in a tree is unaware of its external context until all the information is gathered together at the root node.
In other words, each composition process is prone to be locally optimized rather than globally optimized.
To mitigate this problem, we propose using a leaf-LSTM following the convention of some previous works \cite{eriguchi2016tree,yang2017towards,choi2018learning}, which is a typical LSTM that accepts a sequence of words in order.
Instead of leveraging word embeddings directly, we can use each hidden state and cell state of the leaf-LSTM as input tokens for leaf nodes in a tree-LSTM, anticipating the proper contextualization of the input sequence.
Formally, we denote a sequence of words in an input sentence as $w_{1:n}$ ($n$: the length of the sentence), and the corresponding word embeddings as $\mathbf{x}_{1:n}$. Then, the operation of the leaf-LSTM at time $t$ can be formulated as,
\begin{equation} \label{eq:8}
\begin{bmatrix}
\tilde{\mathbf{i}} \\
\tilde{\mathbf{f}} \\
\tilde{\mathbf{o}} \\
\tilde{\mathbf{g}}
\end{bmatrix}
=
\begin{bmatrix}
\sigma \\
\sigma \\
\sigma \\
\tanh
\end{bmatrix}
\Bigg( \mathbf{W}_\text{L}
\begin{bmatrix}
\tilde{\mathbf{h}}_{t-1} \\
\mathbf{x}_t \\
\end{bmatrix}
+ \mathbf{b}_\text{L} \Bigg)
\end{equation}
\begin{equation} \label{eq:9}
\tilde{\mathbf{c}}_t = \tilde{\mathbf{f}} \odot \tilde{\mathbf{c}}_{t-1} + \tilde{\mathbf{i}} \odot \tilde{\mathbf{g}}\\
\end{equation}
\begin{equation} \label{eq:10}
\tilde{\mathbf{h}}_t = \tilde{\mathbf{o}} \odot \tanh{\left(\tilde{\mathbf{c}}_t\right)}
\end{equation}
\noindent where $\mathbf{x}_t \in\mathbb{R}^{d_w}$ indicates an input word vector and $\tilde{\mathbf{h}}_t$, $\tilde{\mathbf{c}}_t \in\mathbb{R}^{d_h}$ represent the hidden and cell state of the LSTM at time $t$ ($\tilde{\mathbf{h}}_{t-1}$ corresponds to the hidden state at time $t$-1). $\mathbf{W}_\text{L}$ and $\mathbf{b}_\text{L} $ are learnable parameters. The remaining notation follows that of the tree-LSTM above.
In experiments, we demonstrate that introducing a leaf-LSTM fares better at processing the input words of a tree-LSTM than using a feed-forward neural network. We also explore a bidirectional setting in the ablation study.
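Because the word-level tree-LSTM consumes both the hidden and the cell states of each leaf (as noted above), a sketch has to return the per-step cell states as well, which rules out convenience wrappers that only expose hidden states. A hypothetical implementation, ours for illustration:
\begin{verbatim}
class LeafLSTM(nn.Module):
    # Sketch of Eqs. (8)-(10): contextualize word vectors and return
    # BOTH per-step hidden and cell states for the leaf nodes.
    def __init__(self, d_w, d_h):
        super().__init__()
        self.cell = nn.LSTMCell(d_w, d_h)

    def forward(self, x):                   # x: (batch, n, d_w)
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        c = torch.zeros_like(h)
        hs, cs = [], []
        for t in range(x.size(1)):
            h, c = self.cell(x[:, t], (h, c))
            hs.append(h)
            cs.append(c)
        return torch.stack(hs, dim=1), torch.stack(cs, dim=1)
\end{verbatim}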
\subsection{SATA Tree-LSTM}
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{figure2.pdf}
\caption{A diagram of SATA Tree-LSTM. The model has two separate tree-LSTM modules, the right of which (tag tree-LSTM) extracts a structure-aware tag representation to control the composition function of the remaining tree-LSTM (word tree-LSTM). Fully-connected: one-layered non-linear transformation.}
\label{fig:figure2}
\end{figure}
In this section, we define \textbf{SATA Tree-LSTM} (\textbf{S}tructure-\textbf{A}ware \textbf{T}ag \textbf{A}ugmented \textbf{Tree-LSTM}, see Figure \ref{fig:figure2}) which joins a tag-level tree-LSTM (section 3.2), a leaf-LSTM (section 3.3), and the original word tree-LSTM together.
As above we denote a sequence of words in an input sentence as $w_{1:n}$ and the corresponding word embeddings as $\mathbf{x}_{1:n}$. In addition, a tag embedding for the tag attached to each node in a tree is denoted by $\textbf{e} \in\mathbb{R}^{d_\text{T}}$. Then, we derive the final sentence representation for the input sentence with our model in two steps.
First, we compute structure-aware tag representations ($\hat{\mathbf{h}}$) for each node of a tree using the tag tree-LSTM (the right side of Figure \ref{fig:figure2}) as follows:
\begin{equation} \label{eq:11}
\begin{bmatrix}
\hat{\mathbf{c}} \\
\hat{\mathbf{h}} \\
\end{bmatrix}
=
\begin{cases}
\text{Tag-Tree-LSTM}(\mathbf{e}) & \text{if a leaf node} \\
\text{Tag-Tree-LSTM}(\hat{\mathbf{h}}_l, \hat{\mathbf{h}}_r, \mathbf{e}) & \text{otherwise}
\end{cases}
\end{equation}
\noindent where Tag-Tree-LSTM indicates the module we described in section 3.2.
Second, we combine semantic units recursively on the word tree-LSTM in a bottom-up fashion. For leaf nodes, we leverage the Leaf-LSTM (the bottom-left of Figure \ref{fig:figure2}, explained in section 3.3) to compute $\tilde{\mathbf{c}}_{t}$ and $\tilde{\mathbf{h}}_{t}$ in sequential order, with the corresponding input $\mathbf{x}_t$.
\begin{equation} \label{eq:12}
\begin{bmatrix}
\tilde{\mathbf{c}}_{t} \\
\tilde{\mathbf{h}}_{t} \\
\end{bmatrix}
= \text{Leaf-LSTM}(\tilde{\textbf{h}}_{t-1}, \textbf{x}_t)
\end{equation}
\noindent Then, the $\tilde{\mathbf{c}}_{t}$ and $\tilde{\mathbf{h}}_{t}$ can be utilized as input tokens to the word tree-LSTM, with the left (right) child of the target node corresponding to the $t$th word in the input sentence.
\begin{equation} \label{eq:13}
\begin{bmatrix}
\check{\textbf{c}}_{\{l, r\}} \\
\check{\textbf{h}}_{\{l, r\}}
\end{bmatrix}
=
\begin{bmatrix}
\tilde{\textbf{c}}_{t} \\
\tilde{\textbf{h}}_{t}
\end{bmatrix}
\end{equation}
In the non-leaf node case, we calculate phrase representations for each node
in the word tree-LSTM (the upper-left of Figure \ref{fig:figure2}) recursively as follows:
\begin{equation} \label{eq:14}
\check{\mathbf{g}} = \tanh{\left( \mathbf{U}_\text{w}
\begin{bmatrix}
\check{\mathbf{h}}_l \\
\check{\mathbf{h}}_r \\
\end{bmatrix}
+ \mathbf{a}_\text{w} \right)}
\end{equation}
\begin{equation} \label{eq:15}
\begin{bmatrix}
\check{\mathbf{i}} \\
\check{\mathbf{f}}_l \\
\check{\mathbf{f}}_r \\
\check{\mathbf{o}}
\end{bmatrix}
=
\begin{bmatrix}
\sigma \\
\sigma \\
\sigma \\
\sigma
\end{bmatrix}
\Bigg( \mathbf{W_\text{w}}
\begin{bmatrix}
\check{\mathbf{h}}_l \\
\check{\mathbf{h}}_r \\
\hat{\mathbf{h}} \\
\end{bmatrix}
+ \mathbf{b}_\text{w} \Bigg)
\end{equation}
\begin{equation} \label{eq:16}
\check{\mathbf{c}} = \check{\mathbf{f}}_l \odot \check{\mathbf{c}}_l + \check{\mathbf{f}}_r \odot \check{\mathbf{c}}_r + \check{\mathbf{i}} \odot \check{\mathbf{g}}
\end{equation}
\begin{equation} \label{eq:17}
\check{\mathbf{h}} = \check{\mathbf{o}} \odot \tanh{\left(\check{\mathbf{c}}\right)}
\end{equation}
\noindent where $\check{\mathbf{h}}$, $\check{\mathbf{c}} \in \mathbb{R}^{d_h}$ represent the hidden and cell state of each node in the word tree-LSTM. $\mathbf{U}_\text{w} \in \mathbb{R}^{d_h \times 2d_h}$, $\mathbf{W}_\text{w} \in \mathbb{R}^{4d_h \times \left(2d_h+d_\text{T}\right)}$, $\mathbf{a}_\text{w} \in \mathbb{R}^{d_h}$, $\mathbf{b}_\text{w} \in \mathbb{R}^{4d_h}$ are learned parameters. The remaining notation follows those of the previous sections.
Note that the structure-aware tag representations ($\hat{\mathbf{h}}$) are only utilized to control the gate functions of the word tree-LSTM in the form of additional inputs, and are not involved in the semantic composition ($\check{\mathbf{g}}$) directly.
Finally, the hidden state of the root node ($\check{\mathbf{h}}_\text{root}$) in the word-level tree-LSTM becomes the final sentence representation of the input sentence.
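Putting the pieces together, the word-level composition of equations \ref{eq:14}-\ref{eq:17} can be sketched as below; note how the tag feature enters only the gates, not the composed input. Again a hypothetical sketch in the style of the earlier listings, not the authors' code:
\begin{verbatim}
class WordTreeLSTMCell(nn.Module):
    # Sketch of Eqs. (14)-(17): the structure-aware tag feature h_tag
    # modulates the gates but is excluded from the semantic input g.
    def __init__(self, d_h, d_t):
        super().__init__()
        self.compose = nn.Linear(2 * d_h, d_h)          # U_w, a_w
        self.gates = nn.Linear(2 * d_h + d_t, 4 * d_h)  # W_w, b_w

    def forward(self, h_l, c_l, h_r, c_r, h_tag):
        g = torch.tanh(self.compose(torch.cat([h_l, h_r], dim=-1)))
        pre = self.gates(torch.cat([h_l, h_r, h_tag], dim=-1))
        i, f_l, f_r, o = (torch.sigmoid(t) for t in pre.chunk(4, dim=-1))
        c = f_l * c_l + f_r * c_r + i * g   # Eq. (16)
        h = o * torch.tanh(c)               # Eq. (17)
        return h, c
\end{verbatim}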
\section{Experiment and Discussion}
\subsection{Quantitative Analysis}
\subsubsection{Sentence classification tasks}
One of the most basic approaches to evaluate a sentence encoder is to measure the classification performance with the sentence representations made by the encoder. Thus, we conduct experiments on the following five datasets. (Summary statistics for the datasets are reported in the supplemental materials.)
\begin{itemize}
\item \textbf{MR}: A group of movie reviews with binary (positive / negative) classes. \cite{pang2005seeing}
\item \textbf{SST-2}: Stanford Sentiment Treebank \cite{socher2013recursive}.
Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values.
For SST-2, we only consider binary (positive / negative) classes.
\item \textbf{SST-5}: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes.
\item \textbf{SUBJ}: Sentences grouped as being either subjective or objective (binary classes). \cite{pang2004sentimental}
\item \textbf{TREC}: A dataset which groups questions into six different question types (classes). \cite{li2002learning}
\end{itemize}
As a preprocessing step, we construct parse trees for the sentences in the datasets using the Stanford PCFG parser \cite{klein2003accurate}.
Because syntactic tags are by-products of constituency parsing, we do not need further preprocessing.
To classify the sentence given our sentence representation ($\check{\mathbf{h}}_\text{root}$), we use one fully-connected layer with a ReLU activation, followed by a softmax classifier. The final predicted probability distribution of the class $y$ given the sentence $w_{1:n}$ is defined as follows,
\begin{equation}
\mathbf{s} = \text{ReLU}(\mathbf{W}_\text{s} \check{\mathbf{h}}_\text{root}+ \mathbf{b}_\text{s})
\end{equation}
\begin{equation}
p(y|w_{1:n}) = \text{softmax}(\mathbf{W}_\text{c}\mathbf{s} + \mathbf{b}_\text{c})
\end{equation}
\noindent where $\textbf{s} \in \mathbb{R}^{d_\text{s}}$ is the computed task-specific sentence representation for the classifier, and $\textbf{W}_\text{s} \in \mathbb{R}^{d_\text{s} \times d_h}$, $\textbf{W}_\text{c} \in \mathbb{R}^{d_\text{c} \times d_s}$, $\textbf{b}_\text{s} \in \mathbb{R}^{d_s}$, $\textbf{b}_\text{c} \in \mathbb{R}^{d_c}$ are trainable parameters. As an objective function, we use the cross entropy of the predicted and true class distributions.
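The classification head is thus a single hidden layer; as a sketch (with the softmax folded into the cross-entropy loss, as usual in PyTorch-style code):
\begin{verbatim}
class Classifier(nn.Module):
    # One ReLU layer (W_s, b_s) followed by a linear output layer
    # (W_c, b_c); the logits feed nn.CrossEntropyLoss.
    def __init__(self, d_h, d_s, n_classes):
        super().__init__()
        self.ff = nn.Linear(d_h, d_s)
        self.out = nn.Linear(d_s, n_classes)

    def forward(self, h_root):
        s = torch.relu(self.ff(h_root))
        return self.out(s)
\end{verbatim}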
The results of the experiments on the five datasets are shown in table \ref{table1}.
In this table, we report the test accuracy of our model and various other models on each dataset in terms of percentage.
To account for the effects of random initialization, we report the best numbers obtained from several runs with fixed hyper-parameters.
Compared with the previous syntactic tree-based models as well as other neural models, our SATA Tree-LSTM shows superior or competitive performance on all tasks.
Specifically, our model achieves new state-of-the-art results within the tree-structured model class on 4 out of 5 sentence classification tasks---SST-2, SST-5, MR, and TREC.
The model shows its strength, in particular, when the datasets provide phrase-level supervision to facilitate tree structure learning (i.e. SST-2, SST-5).
Moreover, the numbers we report for SST-5 and TREC are competitive to the existing state-of-the-art results including ones from structurally pre-trained models such as ELMo \cite{peters2018deep}, proving our model's superiority.
Note that the SATA Tree-LSTM also outperforms the recent latent tree-based model, indicating that modeling a neural model with explicit linguistic knowledge can be an attractive option.
On the other hand, a remaining concern is that our SATA Tree-LSTM is not robust to random seeds when the dataset is relatively small, since the tag embeddings are randomly initialized rather than pre-trained, in contrast with the word embeddings.
This observation suggests pre-trained tag embeddings as a direction for future research.
\begin{table*}[t!]
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Models}} & \textbf{SST-2} & \textbf{SST-5} & \textbf{MR} & \textbf{SUBJ} & \textbf{TREC} \\
\hline \hline
\multicolumn{6}{|l|}{\textbf{Tree-structured models}} \\
\hline
RNTN \cite{socher2013recursive} & 85.4 & 45.7 & - & - & - \\
AdaMC-RNTN \cite{dong2014adaptive} & 88.5 & 46.7 & - & - & - \\
TE-RNTN \cite{qian2015learning} & 87.7 & 49.8 & - & - & - \\
TBCNN \cite{mou2015discriminative} & 87.9 & 51.4 & - & - & 96.0 \\
Tree-LSTM \cite{tai2015improved} & 88.0 & 51.0 & - & - & - \\
AdaHT-LSTM-CM \cite{liu2017adaptive} & 87.8 & 50.2 & 81.9 & 94.1 & - \\
DC-TreeLSTM \cite{liu2017dynamic} & 87.8 & - & 81.7 & 93.7 & 93.8 \\
TE-LSTM \cite{huang2017encoding} & 89.6 & 52.6 & 82.2 & - & - \\
BiConTree \cite{teng2017head} & 90.3 & 53.5 & - & - & 94.8 \\
Gumbel Tree-LSTM$^\star$ \cite{choi2018learning} & 90.7 & 53.7 & - & - & - \\
TreeNet \cite{cheng2018treenet} & - & - & 83.6 & \underline{95.9} & 96.1 \\
\textbf{SATA Tree-LSTM (Ours)} & \textbf{91.3} & \underline{\textbf{54.4}} & \textbf{83.8} & \textbf{95.4} & \underline{\textbf{96.2}} \\
\hline \hline
\multicolumn{6}{|l|}{\textbf{Other neural models}} \\
\hline
CNN \cite{kim2014convolutional} & 88.1 & 48.0 & 81.5 & 93.4 & 93.6 \\
AdaSent \cite{zhao2015self} & - & - & 83.1 & 95.5 & 92.4 \\
LSTM-CNN \cite{zhou2016text} & 89.5 & 52.4 & 82.3 & 94.0 & 96.1 \\
byte-mLSTM$^\dagger$ \cite{radford2017learning} & \underline{91.8} & 52.9 & \underline{86.9} & 94.6 & - \\
BCN + Char + CoVe$^\dagger$ \cite{mccann2017learned} & 90.3 & 53.7 & - & - & 95.8 \\
BCN + Char + ELMo$^\dagger$ \cite{peters2018deep} & - & \underline{54.7$\pm$0.5} & - & - & - \\
\hline
\end{tabular}
}
\caption{The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous tree-structured models as well as other sophisticated models.
$\star$: Latent tree-structured models. $\dagger$: Models which are pre-trained with large external corpora.}
\label{table1}
\end{table*}
\subsubsection{Natural language inference}
To estimate the performance of our model beyond the tasks requiring only one sentence at a time, we conduct an experiment on the Stanford Natural Language Inference \cite{snli} dataset, each example of which consists of two sentences, the premise and the hypothesis. Our objective given the data is to predict the correct relationship between the two sentences among three options--- contradiction, neutral, or entailment.
We use the siamese architecture to encode both the premise ($p_{1:m}$) and hypothesis ($h_{1:n}$) following the standard of sentence-encoding models in the literature. (Specifically, $p_{1:m}$ is encoded as $\check{\mathbf{h}}_\text{root}^p \in \mathbb{R}^{d_h}$ and $h_{1:n}$ is encoded as $\check{\mathbf{h}}_\text{root}^h \in \mathbb{R}^{d_h}$ with the same encoder.) Then, we leverage some heuristics \cite{mou2016natural}, followed by one fully-connected layer with a ReLU activation and a softmax classifier. Specifically,
\begin{equation}
\mathbf{z} = \left[ \check{\mathbf{h}}_\text{root}^p; \check{\mathbf{h}}_\text{root}^h; | \check{\mathbf{h}}_\text{root}^p - \check{\mathbf{h}}_\text{root}^h |; \check{\mathbf{h}}_\text{root}^p \odot \check{\mathbf{h}}_\text{root}^h \right]
\end{equation}
\begin{equation}
\mathbf{s} = \text{ReLU}(\mathbf{W}_\text{s} \mathbf{z} + \mathbf{b}_\text{s})
\end{equation}
\begin{equation}
p(y|p_{1:m}, h_{1:n}) = \text{softmax}(\mathbf{W}_\text{c}\textbf{s} + \mathbf{b}_\text{c})
\end{equation}
\noindent where $\textbf{z} \in \mathbb{R}^{4d_h}$, $\textbf{s} \in \mathbb{R}^{d_s}$ are intermediate features for the classifier and
$\textbf{W}_\text{s} \in \mathbb{R}^{d_\text{s} \times 4d_h}$, $\textbf{W}_\text{c} \in \mathbb{R}^{d_\text{c} \times d_s}$, $\textbf{b}_\text{s} \in \mathbb{R}^{d_s}$, $\textbf{b}_\text{c} \in \mathbb{R}^{d_c}$ are again trainable parameters.
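The heuristic matching features amount to a single concatenation; a sketch:
\begin{verbatim}
def matching_features(hp, hh):
    # Pair features of Mou et al. (2016):
    # z = [p; h; |p - h|; p * h]  in  R^{4 d_h}
    return torch.cat([hp, hh, (hp - hh).abs(), hp * hh], dim=-1)
\end{verbatim}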
Our experimental results on the SNLI dataset are shown in table \ref{table2}. In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA Tree-LSTM again demonstrates competitive performance against the neural models built on both syntactic trees and latent trees, as well as against the non-tree models.
(Latent Syntax Tree-LSTM: \citeauthor{yogatama2017learning} (\citeyear{yogatama2017learning}),
Tree-based CNN: \citeauthor{mou2016natural} (\citeyear{mou2016natural}),
Gumbel Tree-LSTM: \citeauthor{choi2018learning} (\citeyear{choi2018learning}),
NSE: \citeauthor{munkhdalai2017neural} (\citeyear{munkhdalai2017neural}),
Reinforced Self-Attention Network: \citeauthor{shen2018reinforced} (\citeyear{shen2018reinforced}),
Residual stacked encoders: \citeauthor{nie2017shortcut} (\citeyear{nie2017shortcut}),
BiLSTM with generalized pooling: \citeauthor{chen2018enhancing} (\citeyear{chen2018enhancing}).)
Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model.
Even though our model has proven its mettle, the effect of tag information seems relatively weak in the case of SNLI, which contains a large amount of data compared to the others.
One possible explanation is that neural models may learn some syntactic rules from large amounts of text when the text size is large enough, reducing the necessity of external linguistic knowledge.
We leave the exploration of the effectiveness of tags relative to data size for future work.
\subsubsection{Experimental details}
Here we go over the settings common across our models during experimentation. For more task-specific details, refer to the supplemental materials.
For our input embeddings, we used 300-dimensional 840B GloVe \cite{pennington2014glove} vectors as pre-trained word embeddings, and tag representations were randomly sampled from the uniform distribution on $[-0.005, 0.005]$. Tag vectors are updated during training, while whether the word embeddings are fine-tuned depends on the task.
Our models were trained using the Adam \cite{kingma2014adam} or Adadelta \cite{zeiler2012adadelta} optimizer, depending on task.
For regularization, weight decay is added to the loss function except for SNLI following \citeauthor{loshchilov2017fixing} (\citeyear{loshchilov2017fixing}) and Dropout \cite{srivastava2014dropout} is also applied for the word embeddings and task-specific classifiers.
Moreover, batch normalization \cite{ioffe2015batch} is adopted for the classifiers.
As a default, all the weights in the model are initialized following \citeauthor{he2015delving} (\citeyear{he2015delving}) and the biases are set to 0.
The total norm of the gradients of the parameters is clipped not to be over 5 during training.
Our best models for each dataset were chosen by validation accuracy in cases where a validation set was provided as a part of the dataset. Otherwise, we perform a grid search on probable hyper-parameter settings, or run 10-fold cross-validation in cases where even a test set does not exist.
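As a rough illustration of a training step under these settings (optimizer choice and the gradient-norm clipping at 5), a hypothetical sketch with placeholder model, inputs, and labels:
\begin{verbatim}
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def train_step(model, optimizer, inputs, labels):
    # One update: forward pass, cross-entropy loss, backprop,
    # clip the total gradient norm at 5, then an optimizer step.
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5)
    optimizer.step()
    return loss.item()
\end{verbatim}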
\begin{table}[t!]
\centering
\small
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Models}} & \textbf{Acc.} & \textbf{\# Params} \\
\hline \hline
\multicolumn{3}{|l|}{\textbf{Tree-structured models}} \\
\hline
100D Latent Syntax Tree-LSTM$^\star$ & 80.5 & 500K \\
300D Tree-based CNN & 82.1 & 3.5M \\
300D SPINN-PI & 83.2 & 3.7M \\
300D Gumbel Tree-LSTM$^\star$ & 85.6 & 2.9M \\
\textbf{300D SATA Tree-LSTM (Ours)} & \textbf{85.9} & \textbf{3.3M} \\
\hline \hline
\multicolumn{3}{|l|}{\textbf{Other neural models}} \\
\hline
300D NSE & 84.6 & 3.0M \\
300D Reinforced Self-Attention Network & 86.3 & 3.1M \\
600D Residual stacked encoders & 86.0 & 29M \\
600D BiLSTM with generalized pooling & \underline{86.6} & 65M \\
\hline
\end{tabular}}
\caption{The accuracy of diverse models on Stanford Natural Language Inference. For fair comparison, we only consider sentence-encoding based models. Our model achieves a comparable result with a moderate number of parameters.
$\star$: Latent tree models.}
\label{table2}
\end{table}
\subsection{Ablation Study}
In this section, we design an ablation study on the core modules of our model to explore their effectiveness.
The dataset used in this experiment is SST-2.
To conduct the experiment, we only replace the target module with other candidates while maintaining the other settings.
To be specific, we focus on two modules, the leaf-LSTM and structure-aware tag embeddings (tag-level tree-LSTM).
In the first case, the leaf-LSTM is replaced with a fully-connected layer with a $\tanh$ activation or Bi-LSTM.
In the second case, we replace the structure-aware tag embeddings with naive tag embeddings or do not employ them at all.
The experimental results are depicted in Figure \ref{fig:figure3}.
As the chart shows, our model outperforms all the other options we have considered.
In detail, the left part of the chart shows that the leaf-LSTM is the most effective option compared to its competitors.
Note that the sequential leaf-LSTM is superior or comparable to the bidirectional leaf-LSTM when both have a comparable number of parameters.
We conjecture this may be because a backward LSTM does not add useful knowledge when the structure of a sentence is already known.
In conclusion, we use the uni-directional LSTM as a leaf module because of its simplicity and remarkable performance.
Meanwhile, the right part of the figure demonstrates that our newly introduced structure-aware embeddings have a real impact on improving the model performance.
Interestingly, employing the naive tag embeddings made no difference in terms of the test accuracy, even though the absolute validation accuracy increased (not reported in the figure).
This result supports our assumption that tag information should be considered in the structure.
\subsection{Qualitative Analysis}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{figure3.pdf}
\caption{An ablation study on the core modules of our model. The test accuracy of each model on SST-2 is reported. The results demonstrate that the modules play an important role for achieving the superior performance of our model. FC: A fully connected-layer with a $\tanh$ function. w/o tags: Tag embeddings are not used. w/ tags: The naive tag embeddings are directly inserted into each node of a tree.}
\label{fig:figure3}
\end{figure}
In previous sections, we have numerically demonstrated that our model is effective in encouraging useful composition of semantic units. Here, we directly investigate the computed representations for each node of a tree, showing that the remarkable performance of our model is mainly due to the gradual and recursive composition of the intermediate representations on the syntactic structure.
To observe the phrase-level embeddings at a glance, we draw a scatter plot in which a point represents the corresponding intermediate representation.
We utilize PCA (Principal Component Analysis) to project the representations into a two-dimensional vector space.
As a target parse tree, we reuse the one seen in Figure \ref{fig:figure1}.
The result is shown in Figure \ref{fig:figure4}.
From this figure, we confirm that the intermediate representations have a hierarchy in the semantic space, which is very similar to that of the parse tree.
In other words, as many tree-structured models intend, we can see the tendency to construct the representations from the low level (the bottom of the figure) to the high level (the top-left and top-right of the figure), integrating the meaning of the constituents recursively.
An interesting thing to note is that the final sentence representation is near that of the phrase \textit{`, the stories are quietly moving.'} rather than that of \textit{`Despite the film's shortcomings'}, catching the main meaning of the sentence.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\columnwidth]{figure4.pdf}
\caption{A scatter plot whose points represent the intermediate representations for each node of the tree in Figure 1. From this figure, we can see the tendency of constructing the representations recursively from the low to the high level.}
\label{fig:figure4}
\end{figure}
\section{Conclusion}
We have proposed a novel RvNN architecture to fully utilize linguistic priors.
A newly introduced tag-level tree-LSTM demonstrates that it can effectively control the composition function of the corresponding word-level tree-LSTM.
In addition, the proper contextualization of the input word vectors results in significant performance improvements on several sentence-level tasks.
For future work, we plan to explore a new way of exploiting dependency trees effectively, similar to the case of constituency trees.
\section*{Acknowledgments}
We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant
funded by the Korea government (MSIT) (NRF2016M3C4A7952587).
\section{Introduction}
Let $R$ be a Noetherian standard graded algebra over a field $k=R_0$, $\mathfrak{m}=\oplus_{n\in \mathbb Z_{> 0}}R_n$, and $I$ a homogeneous $R$-ideal. In this paper we study the asymptotic behavior of the lowest degree of the local cohomology modules $\{\HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$, provided that they are finite. As we make clear below, such behavior can be viewed as an ``asymptotic Kodaira vanishing for thickenings'' phenomenon, and has recently appeared in various works such as \cite{BBLSZ, DM, Claudiu}.
To describe our motivations and questions precisely, let us recall some notations. For a graded $R$-module $M=\oplus_{i\in \mathbb Z}M_i$ one defines $$\indeg{M}=\min\{i\mid M_i\neq0\},\qquad \topdeg{M}=\max\{i\mid M_i\neq 0\}.$$
If $M=0$, we set $\indeg M=\infty$ and $\topdeg M=-\infty$. We also set
$$\beta(M)=\max\{i\mid (M/\mathfrak{m} M)_i\neq0\},$$
i.e., $\beta(M)$ is the maximal degree of an element in a minimal set of homogeneous generators of $M$.
The {\it Castelnuovo-Mumford regularity} of $M$ is defined as $$\reg(M)=\max\{\topdeg{\HH{i}{\mathfrak{m}}{M}}+i\}.$$
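For instance (a standard sanity check, ours rather than taken from the references): if $R=k[x_1,\ldots,x_d]$ is a polynomial ring and $M=R(-a)$, then $\HH{i}{\mathfrak{m}}{M}=0$ for $i<d$ while $\topdeg \HH{d}{\mathfrak{m}}{M}=a-d$, so that $\reg(M)=a=\beta(M)$.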
It is known that $\reg(R/I^n)$ agrees with a linear function for $n\gg 0$; this fact was proved independently in \cite{CHT} and \cite{Kod} when $R$ is a polynomial ring over a field, and extended in \cite{TW} to arbitrary standard graded rings.
If $\HH{i}{\mathfrak{m}}{M}\neq 0$, the $\topdeg$ of $\HH{i}{\mathfrak{m}}{M}$ is always finite; however, this is not the case for $\indeg$. In fact, since $\HH{i}{\mathfrak{m}}{M}$ is an Artinian module, we have that $\indeg\HH{i}{\mathfrak{m}}{M}>-\infty$ if and only if $\HH{i}{\mathfrak{m}}{M}$ is Noetherian. Our work is guided by the following questions raised in \cite{DM}.
\begin{question}\label{motivQ}
Assume $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for $n\gg 0$.
\begin{enumerate}
\item Does there exist $\alpha \in \mathbb Z$ such that $\indeg \HH{i}{\mathfrak{m}}{R/I^n} >\alpha n$ for every $n\gg 0$? In other words, is $\displaystyle\liminf_{n\rightarrow \infty} \frac{ \indeg\HH{i}{\mathfrak{m}}{R/I^n}}{n}$ finite?
\item If so, does the limit $\displaystyle\lim_{n\rightarrow \infty} \frac{ \indeg\HH{i}{\mathfrak{m}}{R/I^n}}{n}$ exist?
\end{enumerate}
\end{question}
It follows from \cite[1.4]{BBLSZ} that when $R$ is a polynomial ring over a field of characteristic $0$, $I$ is a prime ideal, $X:=\Proj(R/I)$ is locally a complete intersection (lci), and $i$ is at most the codimension of the singular locus of $X$, then $ \indeg\HH{i}{\mathfrak{m}}{R/I^n} \geqslant 0$ for all $n>0$. As explained there, this can be viewed as a Kodaira Vanishing Theorem for thickenings of $X$. When $I$ is a determinantal ideal, more precise behavior of vanishing results, and other homological invariants of thickenings of $I$ are available, see for instance \cite{Claudiu} and \cite{Raicu2}.
Our initial interest in the question came from \cite{DM}, where we need an affirmative answer to part (1) of Question \ref{motivQ} to obtain efficient bounds on the lengths of local cohomology modules of powers. Robert Lazarsfeld pointed out to us that this is indeed the case when $X=\Proj(R/I)$ is an lci variety, $k$ is of characteristic zero, and $i$ is at most the dimension of $X$. Thus Kodaira vanishing may not hold, but the lowest degrees of $\HH{i}{\mathfrak{m}}{R/I^n}$ are still bounded below by a linear function.
In this work we provide further answers to Question \ref{motivQ} above, in the case when $R$ is not necessarily a polynomial ring and $I$ may not be prime, or even reduced. The first main general result of this article (Theorem \ref{mainVB}), provides a linear lower bound for the initial degrees of local cohomology of the symmetric powers $\{S^n(E)\}_{n\in\mathbb N}$, where $E$ is a graded module that is locally free on $\Spec R \setminus \{\mathfrak{m}\}$. Our proof relies on a duality statement and the result on regularity by Trung and Wang in \cite{TW}. Here $(-)^*$ denotes the $R$-dual $\Hom_R(-,R)$.
\begin{thm} \label{MainT1}
Let $(R,\mathfrak{m})$ be a standard Cohen-Macaulay graded algebra over a field $k$ of characteristic zero. Set $d=\dim R\geqslant 2$. Let $E$ be a graded $R$-module which is free locally on $\Spec R\setminus \{\mathfrak{m}\}$.
Then there exists an integer $\varepsilon$ such that $$\indeg \HH{i}{\mathfrak{m}}{S^n(E)} \geqslant -\beta(E^*) n+\varepsilon$$ for every $n\geqslant 1$ and $ 1\leqslant i< d$.
\end{thm}
We apply this theorem to answer Question \ref{motivQ} (1) affirmatively and effectively when $R$ is any standard graded algebra over a field of characteristic $0$, $R/I$ is Cohen-Macaulay, and $I$ is a complete intersection locally on $\Spec R\setminus \{\mathfrak{m}\}$. In this case, $\liminf_{n\rightarrow \infty} \frac{ \indeg\HH{i}{\mathfrak{m}}{R/I^n}}{n}$ is bounded below by $-\max\{\beta(E^*),0\}$, where $E$ is the conormal module $I/I^2$ (see Corollary \ref{conormal}). This result can be seen as an algebraic version of \cite[1.4]{BBLSZ} and \cite[5.6]{DM}; our proof via local cohomology of symmetric powers of conormal modules is inspired by the proofs of these results.
Theorem \ref{MainT1} also allows us to show a result on stabilization of maps of local cohomology of powers of ideals. We show that, if $I$ is as in the previous paragraph, and if $\beta(E^*)<0$, then the maps between local cohomology and Ext modules of consecutive powers of $I$ eventually stabilize on each graded degree. This result is closely related to \cite[1.1]{BBLSZ} and provide a partial answer to a question of Eisenbud, Musta\c{t}\u{a}, and Stillman (\cite[6.1]{EMS}), see Corollary \ref{stabExt} and its preceding paragraph for more details.
In general, if one only assumes that $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for $n\gg 0$, it is complicated to find bounds on its lowest degrees. However, we are able to prove that there is a polynomial lower bound, regardless of the characteristic of $k$. The proof rests on a result by Chardin, Ha, and Hoa (\cite{CHH}), and is provided in Section \ref{polySec}.
In Section \ref{monoSec} we focus on the case where $I$ is a monomial ideal in a polynomial ring $R$. As expected, the extra combinatorial structure allows for better results. Assuming that $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for $n\gg 0$, one can show that either $\indeg\HH{i}{\mathfrak{m}}{R/I^n}=0$ for $n\gg 0$ or $\displaystyle\liminf_{n\rightarrow \infty} \frac{ \indeg\HH{i}{\mathfrak{m}}{R/I^n}}{n}\geqslant 1$, and the latter holds precisely when $\tilde H_{i-1}(\Delta(I))= 0$, where $\Delta(I)$ is the simplicial complex whose Stanley-Reisner ideal is $\sqrt{I}$.
\section{Symmetric Powers of Locally Free Modules and Linear Lower Bound}\label{linearSec}
Let $E$ be a Noetherian graded module and set $u=\mu(E):=\dim_k E/\mathfrak{m} E$. Let $$F_1\xrightarrow{\phi} F_0\to E\to 0$$
be a minimal presentation of $E$, where $F_0$ and $F_1$ are graded free $R$-modules, and $\phi$ is a $u\times s$ matrix with entries in $\mathfrak{m}$. Let $T_1,\ldots, T_u$ be a set of variables and $\ell_1,\ldots, \ell_s$ the linear forms determined by
$$\begin{bmatrix}\ell_1,&\cdots,&\ell_s
\end{bmatrix}=
\begin{bmatrix}T_1,&\cdots,&T_u
\end{bmatrix}\phi.
$$
The ring $\Sym(E):=R[T_1,\ldots, T_u]/(\ell_1,\cdots, \ell_s)$ is the {\it symmetric algebra} of $E$. Let $d_1,\ldots, d_u$ be the degrees of a homogeneous minimal generating set of $E$. We can assign to $\Sym(E)$ a bi-graded structure where $T_i$ has bi-degree $(d_i,1)$ for every $i=1,\ldots, u$. The $n$th-graded component of $\Sym(E)$, $S^n(E)=\oplus_{a\in \mathbb Z}\Sym(E)_{(a,n)}$, is the {\it $n$th-symmetric power} of $E$.
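For instance (an illustration of the construction, not taken from the source): if $R=k[x,y]$ and $E=(x,y)\subset R$, a minimal presentation has $\phi=\begin{bmatrix} y\\ -x\end{bmatrix}$, the Koszul relation, so $\Sym(E)=R[T_1,T_2]/(yT_1-xT_2)$ and $S^n(E)$ is spanned by the images of the degree-$n$ monomials in $T_1,T_2$.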
Let $M$ be any Noetherian graded $R$-module and $U\subseteq E$ a graded submodule. We say $U$ is an {\it $M$-reduction} of $E$ if $S^n(E)\otimes_R M=S^1(U)S^{n-1}(E)\otimes_RM$ for $n\gg 0$, where $S^1(U)$ is seen as a submodule of $S^1(E)$. Following \cite{TW}, we define $$\rho_M(E):=\min\{\beta(U)\mid U \text{ is an $M$-reduction of } E \}.$$ We note that $\rho_M(E)\leqslant \beta(E)$ for every $R$-module $M$. The following theorem is the module version of \cite[3.2]{TW}; the proofs of these results are identical, but we include some relevant details for the reader's convenience. We remark that even though the algebras in \cite{TW} are positively graded, the proof of this result does not use this assumption.
\begin{thm} \label{regModules}
Let $R$ be a standard graded algebra over a Noetherian ring $A$. Let $E$ and $M$ be finitely generated graded $R$-modules. Then $$\reg(S^n(E)\otimes_R M)=\rho_M(E)n+e$$
for some integer $e\geqslant \indeg M$ and every $n\gg 0$.
\end{thm}
\begin{proof}
Let $U$ be an $M$-reduction of $E$ such that $\beta(U)=\rho_M(E)$. Let $\mathcal M=\Sym(E)\otimes_R M = \oplus_{n\in \mathbb N} S^n(E)\otimes_R M$ and notice that $\mathcal M$ is a finitely generated graded $\Sym(U)$-module. Let $s=\mu_A(R_1)$, $v=\mu(U)$, and $u_1,\ldots, u_v$ the degrees of a homogeneous minimal generating set of $U$; then $\Sym(U)$ is a quotient ring of the bi-graded polynomial ring $A[x_1,\ldots, x_s, y_1,\ldots, y_v]$, where $x_i$ has degree $(1,0)$ for each $i$ and $y_j$ has degree $(u_j,1)$ for each $j$.
Therefore, \cite[2.2]{TW} implies $\reg(S^n(E)\otimes_R M)$ is a linear function $\rho n+e$ for some $e$ and $\rho\leqslant \rho_M(E)$. Finally, proceeding as in \cite[3.1]{TW} we obtain $\rho \geqslant \rho_M(E)$ and $e\geqslant \indeg M$, finishing the proof.
\end{proof}
For the next result, we assume $R$ is a local ring or standard graded over a field. Let $\mathfrak{m}$ be the (irrelevant) maximal ideal of $R$, $k=R/\mathfrak{m}$, and $E_R(k)$ the {\it (graded) injective hull} of $k$. For a (graded) $R$-module $M$ we set $$M^\vee:=\Hom_R(M,\, E_R(k)).$$
The following is a generalization of a duality result of Horrocks (\cite{Hor}).
\begin{prop}\label{duality}
Let $(R,\mathfrak{m},k)$ be a Cohen-Macaulay local ring $($or positively graded $k$-algebra$)$. Set $d=\dim R\geqslant 2$ and let $\omega$ be a $($graded$)$ canonical module of $R$. Fix $1\leqslant i\leqslant d-1$. Then for a $($graded$)$ $R$-module $M$ of dimension $d$ that is $S_{i+1}$ locally on $\Spec R\setminus \{\mathfrak{m}\}$ we have $($graded$)$ isomorphisms
$$\HH{i}{\mathfrak{m}}{M}^\vee\cong \HH{d-i+1}{\mathfrak{m}}{\Hom_R(M,\omega)}\quad\text{if } i\geqslant 2,$$
and,
$$\HH{1}{\mathfrak{m}}{M}^\vee\cong \ker\big(\HH{d}{\mathfrak{m}}{\Hom_R(M,\omega)}\rightarrow \HH{d}{\mathfrak{m}}{\Hom_R(F_0,\omega)}\big),$$
where $F_0\twoheadrightarrow M$ is a free module.
\end{prop}
\begin{proof} We begin the proof with the following claim.
\
\noindent {\it {\bf Claim:} Let $N$ be a $($graded$)$ $R$-module that is Maximal Cohen-Macaulay $($MCM$)$ locally on $\Spec R\setminus \{\mathfrak{m}\}$ and
$\cdots\to F_{ 0} \rightarrow N\to 0$ a $($graded$)$ free resolution of $N$. Then, $\Ext^1_R(N, \omega)\cong \ker\big(\HH{2}{\mathfrak{m}}{\Hom_R(N,\omega)}\rightarrow \HH{2}{\mathfrak{m}}{\Hom_R(F_0,\omega)}\big)$ if $d=2$, and $\Ext^1_R(N, \omega)\cong \HH{2}{\mathfrak{m}}{\Hom_R(N, \omega)}$ if $d\geqslant 3$.}
In order to prove this claim, we consider the $R$-modules $K$ and $C$ that fit in the following two exact sequences
\begin{equation}\label{qq1}
0\rightarrow\Hom_R(N,\omega)\rightarrow \Hom_R(F_0,\omega)\rightarrow C\rightarrow 0 \quad \quad\text{and}
\end{equation}
\begin{equation}\label{qq2}
0\rightarrow K\rightarrow \Hom_R(F_1,\omega)\rightarrow\Hom_R(F_2,\omega).
\end{equation}
By applying the depth lemma to \eqref{qq2} we obtain $\depth K\geqslant 2$, therefore $$\Ext^1_R(N,\omega)=\HH{0}{\mathfrak{m}}{\Ext^1_R(N,\omega)}=\HH{0}{\mathfrak{m}}{K/C}\cong \HH{1}{\mathfrak{m}}{C}$$ where the first equality follows by the assumption on $N$. Hence, the conclusion of the claim follows from \eqref{qq1}.
\
Now, back to the original statement, we note that the result follows from the claim and local duality \cite[3.6.19]{BH} if $d=2$; hence we may assume $d\geqslant 3$. Let $\Omega^nM$ be the $n$th-syzygy module of $M$. Again by local duality and the claim we have
\begin{equation}\label{rr1}
\HH{i}{\mathfrak{m}}{M}^\vee\cong \Ext_R^{d-i}(M,\omega)\cong \Ext^1_R(\Omega^{d-i-1}M,\omega)\cong \HH{2}{\mathfrak{m}}{\Hom_R(\Omega^{d-i-1}M,\omega)}.
\end{equation}
Let $0\leqslant t\leqslant d-i-2$, by assumption $\Omega^tM$ is MCM in codimension $i+t+1$, which implies that $\dim \Ext^1_R(\Omega^{t}M,\omega)<d-i-t-1$. Let $\cdots\to F_{ 0} \rightarrow M\to 0$ be a (graded) free resolution of $M$. From the exact sequence
$$0\rightarrow\Hom_R(\Omega^{t}M,\omega)\rightarrow\Hom_R(F_t,\omega)\rightarrow\Hom_R(\Omega^{t+1}M,\omega)\rightarrow \Ext^1_R(\Omega^{t}M,\omega)\rightarrow 0$$
we obtain
\begin{equation}\label{rr2}
\HH{d-i-t}{\mathfrak{m}}{\Hom_R(\Omega^{t+1}M,\omega)}\cong \HH{d-i-t+1}{\mathfrak{m}}{\Hom_R(\Omega^{t}M,\omega)},\,\,\,\,\,\text{ if } i+t\geqslant 2,
\end{equation}
and
\begin{equation}\label{rr3}
\HH{d-1}{\mathfrak{m}}{\Hom_R(\Omega^{1}M,\omega)}\cong \ker\big(\HH{d}{\mathfrak{m}}{\Hom_R(M,\omega)}\rightarrow \HH{d}{\mathfrak{m}}{\Hom_R(F_0,\omega)}\big).
\end{equation}
The statement now follows from \eqref{rr1}, \eqref{rr2}, and \eqref{rr3}.
\end{proof}
Given an $R$-module $M$, we denote by $\Gamma(M)$ the {\it divided powers algebra} of $M$ \cite[Appendix 2]{E}. We set $M^*:=\Hom_R(M,R)$ and for a graded $R$-algebra $S=\oplus_{n\in \mathbb N}S_n$, we denote by $S^*:=\oplus_{n\in \mathbb N}S_n^*$ the {\it graded dual} of $S$.
We need the following technical lemma for the proof of our main result.
\begin{lemma}\label{symm}
Let $R$ be a commutative $($graded$)$ ring and $M$ a $($graded$)$ $R$-module. Then there exist natural $($graded$)$ maps
$$\Sym(M^*)\xrightarrow{\alpha} \Gamma(M^*)\xrightarrow{\beta} \Sym(M)^*.$$
Moreover, $\alpha$ is an isomorphism if $R$ contains the field of rational numbers and $\beta$ is an isomorphism if $M$ is free.
\end{lemma}
\begin{proof}
For the construction and results on $\alpha$, see \cite[Proposition III.3., page 256]{Roby}. See \cite[A2.6 and A2.7(c)]{E} for the corresponding information for $\beta$.
\end{proof}
The following is the main theorem of this section.
\begin{thm}\label{mainVB}
Let $(R,\mathfrak{m})$ be a Cohen-Macaulay standard
graded algebra over a field $k$ of characteristic zero. Set $d=\dim R\geqslant 2$. Let $E$ be a graded $R$-module which is free locally on $\Spec R\setminus \{\mathfrak{m}\}$.
Then there exists an integer $\varepsilon$ such that $$\indeg \HH{i}{\mathfrak{m}}{S^n(E)} \geqslant -\beta(E^*) n+\varepsilon$$ for every $n\geqslant 1$ and $ 1\leqslant i< d$.
\end{thm}
\begin{proof}
First, assume $i\geqslant 2$. Let $\omega$ be the canonical module of $R$.
By Hom-Tensor adjointness and the isomorphism $R\cong \Hom_R(\omega,\, \omega)$, we have $S^n(E^*)^*\cong \Hom_R(S^n(E^*)\otimes_R \omega, \omega).$
By the assumption we have $S^n(E^*)\otimes_R \omega$ is MCM locally on $\Spec R\setminus \{\mathfrak{m}\}$, therefore the natural map $$S^n(E^*)\otimes_R \omega\rightarrow \Hom_R(\Hom_R( S^n(E^*)\otimes_R \omega,\, \omega),\, \omega)$$
is an isomorphism locally on $\Spec R\setminus \{\mathfrak{m}\}$. Hence, by Proposition \ref{duality}
$$
\HH{i}{\mathfrak{m}}{S^n( E^*)^*}\cong \HH{i}{\mathfrak{m}}{\Hom_R( S^n(E^*)\otimes_R \omega,\, \omega)}\cong \HH{d-i+1}{\mathfrak{m}}{S^n(E^*)\otimes_R \omega }^\vee.
$$
By Theorem \ref{regModules}, we have $$\topdeg \HH{d-i+1}{\mathfrak{m}}{S^n(E^*) \otimes_R \omega }\leqslant \beta(E^*) n-\varepsilon$$ for some $\varepsilon \in \mathbb Z$ and every $n\geqslant 1$. Therefore $ \indeg \HH{i}{\mathfrak{m}}{S^n( E^*)^*}\geqslant -\beta(E^*) n+\varepsilon$ for every $n\geqslant 1$ and $i\geqslant 2$.
The map $\Sym(E^*)\xrightarrow{\beta\circ\alpha} \Sym(E)^*$ in Lemma \ref{symm} (with $M=E$) is an isomorphism locally on $\Spec R\setminus \{\mathfrak{m}\}$, hence
\begin{equation}\label{isomSym}
S^n( E^*)^*\cong S^n( E)^{**}.
\end{equation}
The result now follows for $i\geqslant 2$ by observing that $\HH{i}{\mathfrak{m}}{S^n( E)}\cong \HH{i}{\mathfrak{m}}{S^n( E)^{**}}
$ as $S^n(E)$ is free, hence reflexive, locally on $\Spec R\setminus \{\mathfrak{m}\}$.
Now, we show the statement for $i=1$. Fix $n\gg 0$ and consider the short exact sequence $$0\to \Ker(\varphi) \to S^n(E)\xrightarrow{\varphi} S^n(E)^{**}\to \mathcal C\to 0.$$
Since $\varphi$ is an isomorphism locally on $\Spec R\setminus \{\mathfrak{m}\}$, we have $\dim \Ker (\varphi) = \dim \mathcal C = 0$. Then $\HH{1}{\mathfrak{m}}{S^n(E)}\cong\mathcal C$, as $\depth S^n( E)^{**}\geqslant 2$. Therefore, $$\indeg \HH{1}{\mathfrak{m}}{S^n(E)} = \indeg \mathcal C \geqslant \indeg S^n(E)^{**}=\indeg S^n( E^*)^*,$$
where the last equality follows from \eqref{isomSym}. Let $\oplus_{i=1}^u R(-a_i)\rightarrow S^n( E^*)\rightarrow 0$ be the first map of a minimal homogeneous resolution of $S^n( E^*)$, where $u=\mu(S^n( E^*))$. Then, $S^n( E^*)^*\hookrightarrow \oplus_{i=1}^u R(a_i)$. We conclude $$\indeg S^n( E^*)^*\geqslant -\max_i\{a_i\}\geqslant -\reg (S^n( E^*))\geqslant -\beta(E^*)n+\varepsilon,$$
for some $\varepsilon\in \mathbb Z$ and $n\geqslant 1$ by Theorem \ref{regModules}.
\end{proof}
Assume $E$ is a graded submodule of a free graded $R$-module $F=\oplus_{i=1}^\gamma R(-d_i)$. We have the natural map of symmetric algebras
$$\Sym(E)\rightarrow \Sym(F)=R[T_1,\ldots,T_\gamma],$$ where each $T_i$ has bidegree $(d_i, 1)$. The image of this map is the bi-graded algebra$$\mathcal R[E]:=\oplus_{n\in \mathbb N}E^n\subset R[T_1,\ldots,T_\gamma].$$ The ring $\mathcal R[E]$ is called the {\it Rees algebra} of $E$ with respect to the embedding $E\subset F$.
It is known that if $E$ has a {\it rank}, i.e., $E_P$ is free of constant rank for every $P\in \Ass(R)$, then $\mathcal R[E]$ is isomorphic to $\Sym(E)/(\text{$R$-torsion})$ and hence it is independent of the graded embedding of $E$ into a free module (\cite{EHU}).
\begin{cor}
Let $(R,\mathfrak{m},k)$ and $E$ be as in Theorem \ref{mainVB}. Assume that $E$ has a rank. Then $$\indeg \HH{i}{\mathfrak{m}}{E^n} \geqslant -\beta(E^*)n+\varepsilon$$
for some $\varepsilon\in \mathbb Z$ and every $n\geqslant 1$, $1\leqslant i< d$.
\end{cor}
\begin{proof}
Since $E$ has a rank, and is free locally on $\Spec R \setminus \{\mathfrak{m}\}$, for every $n\geqslant 1$ we have $E^n\cong S^n(E)/(R\text{-torsion})$ and this $R$-torsion submodule is supported on $\{\mathfrak{m}\}$.
Therefore,
$\HH{i}{\mathfrak{m}}{E^n}=\HH{i}{\mathfrak{m}}{S^n(E)}$ for every $i\geqslant 1$ and the statement follows from Theorem \ref{mainVB}.
\end{proof}
\begin{cor}\label{conormal}
Let $(R,\mathfrak{m})$ be a standard graded algebra over a field $k$ of characteristic zero. Let $I$ be a homogeneous $R$-ideal such that $S=R/I$ is Cohen-Macaulay. Assume $I_\mathfrak{p}$ is generated by a regular sequence in $R_\mathfrak{p}$ for every $\mathfrak{p}\in \Spec(R)\setminus \{\mathfrak{m}\}$ and that $\dim S\geqslant 2$. Let $E=I/I^2$ be the {\it conormal module} of $I$ and $E^*=\Hom_{S}(E, S)$, then
\begin{enumerate}
\item if $\beta(E^*)\leqslant 0$, there exists $C\in \mathbb Z$ such that $$\indeg \HH{i}{\mathfrak{m}}{R/I^n} \geqslant C$$
for every $n\geqslant 1$ and $1\leqslant i< \dim R/I$;
\item if $\beta(E^*)>0$, then there exists $\varepsilon\in \mathbb Z$ such that $$\indeg \HH{i}{\mathfrak{m}}{R/I^n} \geqslant -\beta(E^*)n+\varepsilon$$
for every $n\geqslant 1$ and $1\leqslant i< \dim R/I$.
\end{enumerate}
\end{cor}
\begin{proof}
By assumption, $E$ is an $R/I$-module that is free locally on $\Spec R\setminus \{\mathfrak{m}\}$. Since the natural epimorphism $S^n(E)\twoheadrightarrow I^n/I^{n+1}$ is an isomorphism locally on $\Spec R\setminus \{\mathfrak{m}\}$, we have $\HH{i}{\mathfrak{m}}{S^n(E)}\cong \HH{i}{\mathfrak{m}}{I^n/I^{n+1}}$ for $i\geqslant 1$. The conclusion now follows from Theorem \ref{mainVB} and by induction on $n$ via the inequality
$$\indeg \HH{i}{\mathfrak{m}}{R/I^{n+1}} \geqslant \min\{\indeg \HH{i}{\mathfrak{m}}{I^n/I^{n+1}},\,\indeg \HH{i}{\mathfrak{m}}{R/I^{n}}\}$$ for $n\geqslant 1$, which follows from the long exact sequence of local cohomology induced by $0\rightarrow I^n/I^{n+1}\rightarrow R/I^{n+1}\rightarrow R/I^n\rightarrow 0$.
\end{proof}
\begin{example}\label{limKnown}
Let $R=k[x,y,u,v]/(xu^t-yv^t)$ for some $t\geqslant 1$ and $\chara k = 0$. Let $I=(x,y)$ and notice that $I$ and the ring $S=R/I$ satisfy the assumptions of Corollary \ref{conormal}. The graded free resolution of $I/I^2$ is
$$0\rightarrow S(-1-t)
\xrightarrow{
\begin{bmatrix} u^t\\-v^t\end{bmatrix}
}
S^2(-1)
\rightarrow
I/I^2\rightarrow 0.$$
Therefore, $(I/I^2)^*$ is isomorphic to the kernel of the map
$S^2(1)
\xrightarrow{
\begin{bmatrix} u^t&-v^t\end{bmatrix}
}
S(1+t)
$, which is generated by $\begin{bmatrix} v^t\\u^t\end{bmatrix}$. Hence $\beta((I/I^2)^*)=t-1$.
If $t\geqslant 2$, by Corollary \ref{conormal} we have $\indeg \HH{1}{\mathfrak{m}}{R/I^n} \geqslant -(t-1)n+\varepsilon$ for some $\varepsilon\in\mathbb Z$ and every $n\geqslant 1$. On the other hand, computing $ \HH{1}{\mathfrak{m}}{R/I^n}$ via the \u{C}ech complex of the system of parameters $\{u,v\}$ of the ring $R/I^n$, we obtain that the class of $[ \frac{x^{n-1}}{v^{tn-t}} , \, \frac{y^{n-1}}{u^{tn-t}} ]$ is nonzero and has degree $-(t-1)n+(t-1)$. We conclude that $$-(t-1)n+(t-1)\geqslant \indeg \HH{1}{\mathfrak{m}}{R/I^n} \geqslant -(t-1)n+\varepsilon$$ for every $n\geqslant 1$. Therefore, $$\lim_{n\rightarrow \infty}\frac{ \indeg \HH{1}{\mathfrak{m}}{R/I^n}}{n}=-(t-1).$$
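To verify that this class is a \u{C}ech $1$-cocycle, note that the defining relation $xu^t=yv^t$ of $R$ gives $x^{n-1}u^{t(n-1)}=y^{n-1}v^{t(n-1)}$, whence
$$\frac{x^{n-1}}{v^{tn-t}}=\frac{y^{n-1}}{u^{tn-t}}\quad\text{ in } (R/I^n)_{uv},$$
so the two entries agree in $(R/I^n)_{uv}$; moreover, its degree is $(n-1)-(tn-t)=-(t-1)n+(t-1)$, as claimed.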
Now, if $t=1$ we have $\beta((I/I^2)^*)=0$ and hence $\{\indeg \HH{1}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ is bounded below by a constant.
In fact, computations with Macaulay2 \cite{GS} suggest that the sequence $\{\indeg \HH{1}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ in Example \ref{limKnown} agrees with the linear function $-(t-1)(n-1)$ for each $t\geqslant 1$ and $n\geqslant 2$. We record some of these values below.
\begin{table}[ht]
\centering
\begin{tabular}{c| c c c c c c c c c c c c c c c c}
\backslashbox{$t$}{$n$} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\
\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & -1 & -2 & -3 & -4 & -5 & -6 & -7 & -8 & -9 & -10 & -11 & -12 & -13 & -14 & -15 \\
3 & -2 & -4 & -6 & -8 & -10 & -12 & -14 & -16 & -18 & -20 & -22 & -24 & -26 & -28 & -30 \\
4 & -3 & -6 & -9 & -12 & -15 & -18 & -21 & -24 & -27 & -30 & -33 & -36 & -39 & -42 & -45 \\
5 & -4 & -8 & -12 & -16 & -20 & -24 & -28 & -32 & -36 & -40 & -44 & -48 & -52 & -56 & -60 \\
\end{tabular}
\label{sequences}
\end{table}
\end{example}
\begin{example}\label{maxMinors}
Let $X$ be a $2\times 3$ generic matrix and $ R=k[X]$ with $\chara k = 0$. Let $I=I_2(X)$ be the ideal generated by the $2\times 2$ minors of $X$; then $R/I$ is Cohen-Macaulay of dimension 4, and $\Proj R/I$ is lci.
Using Macaulay2 \cite{GS} we obtain $\beta((I/I^2)^*) =-1$, therefore by Corollary \ref{conormal} the sequence $\{\indeg \HH{3}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ is bounded below by a constant. Indeed, $\indeg \HH{3}{\mathfrak{m}}{R/I^n} = 0 $ for every $n\geqslant 2$ \cite[5.1]{BBLSZ}.
\end{example}
In the following example we demonstrate that the lower bound $C$ in Corollary \ref{conormal} (1) may be negative.
\begin{example}
Let $R=k[x,y,z,u,v,w]/(x^2u^2+y^2v^2+z^2w^2)$ with $\chara k = 0$ and let $I=(x,y,z)$. Computations with Macaulay2 \cite{GS} show that $\beta((I/I^2)^*) =-1$ and suggest that $\indeg \HH{i}{\mathfrak{m}}{R/I^n}=-2$ for every $n \geqslant 3$.
\end{example}
In the following example we observe that, even when the ring $R$ is regular, the sequence $\{\indeg \HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ may have linear behavior with negative slope.
\begin{example}
Let $R=k[x,y,u,v]$ with $\chara k = 0$ and $I = (x^2u-y^2v,u^2,uv,v^2)$. Computations with Macaulay2 \cite{GS} show that $\beta((I/I^2)^*) =2$ and suggest that $\indeg \HH{i}{\mathfrak{m}}{R/I^n}=-2n+1$ for every $n \geqslant 1$.
\end{example}
In \cite[6.1]{EMS}, Eisenbud, Musta\c{t}\u{a}, and Stillman asked for which ideals $I$ in a polynomial ring $R$ over a field $k$ there exists a decreasing chain of ideals $\{I_n\}_{n\in \mathbb N}$, cofinal with the regular powers $\{I^n\}_{n\in \mathbb N}$, such that the natural map
$$\Ext_R^i(R/I_n,R)\rightarrow
\lim_{\longrightarrow}\Ext_R^i(R/I_n,R)=\HH{i}{I}{R}$$
is injective for every $i$ and $n$. In \cite[1.2]{BBLSZ} the authors provide a partial answer to this question, showing that if $I$ is a homogeneous prime ideal such that $\Proj R/I$ is smooth, and $\chara k = 0$, then for each $i\in \mathbb N$ and $j\in \mathbb Z$ the map $$\Ext_R^i(R/I^n,R)_j\rightarrow
\HH{i}{I}{R}_j$$
is injective for $n\gg 0$. Part (3) of the following corollary provides another partial answer to the question above for a class of ideals in Gorenstein rings. Part (1) and (2) are closely related to \cite[1.1]{BBLSZ}.
\begin{cor}\label{stabExt}
Let $R$, $I$, and $E$ be as in Corollary \ref{conormal}. Assume $\dim R/I\geqslant 3$ and $\beta(E^*)<0$. Then for any $D\in \mathbb Z$, the following hold:
\begin{enumerate}
\item For each $1\leqslant i\leqslant \dim R/I-2$, the natural map $$\HH{i}{\mathfrak{m}}{R/I^{n+1}}_{\leqslant D}\rightarrow\HH{i}{\mathfrak{m}}{R/I^{n}}_{\leqslant D}$$ is an isomorphism for $n\gg 0$; this map is injective for $i=\dim R/I -1$ and $n\gg 0$.
\item If $R$ is Cohen-Macaulay and $\omega$ is the canonical module of $R$, then for each $\height I +2\leqslant i<\dim R$ the natural map
$$\Ext_R^i(R/I^n,\omega)_{\geqslant D}\rightarrow \Ext_R^i(R/I^{n+1},\omega)_{\geqslant D}$$
is an isomorphism for $n\gg 0$. Furthermore, this map is injective for $i=\height I$ and $n\geqslant 1$.
\item If $R$ is Cohen-Macaulay and $\omega$ is the canonical module of $R$, then for every $i<\dim R$ such that $i\neq \height I +1 $ the natural map
$$\Ext_R^i(R/I^n,\omega)_{\geqslant D}\rightarrow \HH{i}{I}{\omega}_{\geqslant D}$$
is injective for $n\gg 0$. In fact the map is injective for $i=\height I$ and $n\geqslant 1$, and it is an isomorphism for $i\geqslant \height I+2$ and $n\gg 0$.
\end{enumerate}
\end{cor}
\begin{proof}
Part (1) follows by Theorem \ref{mainVB}, as the assumptions imply that $\indeg \HH{i}{\mathfrak{m}}{I^n/I^{n+1}} = \indeg \HH{i}{\mathfrak{m}}{S^n(E)}>D$ for $1\leqslant i\leqslant \dim R/I-1$ and $n\gg 0$. Part (2) follows from (1) and local duality for the case $i\geqslant\height I+2$. The injectivity for $i=\height I$ follows by local duality and the epimorphism $$\HH{\dim R/I}{\mathfrak{m}}{R/I^{n+1}}\twoheadrightarrow \HH{\dim R/I}{\mathfrak{m}}{R/I^{n}}$$ for $n\geqslant 1$.
Now, Part (3) follows from (2) as $\HH{i}{I}{\omega}=\displaystyle\lim_{\longrightarrow}\Ext_R^i(R/I^n,\omega)$ for every $i$.
\end{proof}
A local ring $(S,\mathfrak{n})$ is said to be {\it cohomologically full} if for every surjection $T\twoheadrightarrow S$ from a local ring $(T,\mathfrak{q})$, such that $T_{\text{red}} = S_{\text{red}}$ and $T$ and $S$ have the same characteristic, the natural map
$\HH{i}{\mathfrak{q}}{T}\rightarrow \HH{i}{\mathfrak{n}}{S}$
is surjective for every $i$. If $R$ is a standard graded algebra over a field $k$ with irrelevant maximal ideal $\mathfrak{m}$, then we say $R$ is cohomologically full if the local ring $R_\mathfrak{m}$ is. For more information and examples of cohomologically full rings see \cite{DDM}.
\vspace{1mm}
The following result answers Question \ref{motivQ}, (2), in a particular case.
\begin{cor}\label{cohFull}
Let $R$, $I$, and $E$ be as in Corollary \ref{stabExt}, and fix an integer $1\leqslant i\leqslant \dim R/I-2$. Assume $R/J$ is cohomologically full for some $R$-ideal $J$ such that $\sqrt{J}=\sqrt{I}$ and $\HH{i}{\mathfrak{m}}{R/J}\neq 0$. Then there exists an integer $C\leqslant 0$ such that $\indeg \HH{i}{\mathfrak{m}}{R/I^n}=C$ for every $n\gg 0$.
\end{cor}
\begin{proof}
By assumption we have that the map $\HH{i}{\mathfrak{m}}{R/I^n}\rightarrow \HH{i}{\mathfrak{m}}{R/J}$ is surjective for $n\gg 0$. Therefore, $ \HH{i}{\mathfrak{m}}{R/J}$ has finite length and
$$\indeg \HH{i}{\mathfrak{m}}{R/I^n}\leqslant \indeg \HH{i}{\mathfrak{m}}{R/J} =0,$$
where the last equality follows from \cite[4.9]{DDM}. Now, by Corollary \ref{stabExt}, (1) we have that $\HH{i}{\mathfrak{m}}{R/I^{n+1}}_{\leqslant 0}\rightarrow\HH{i}{\mathfrak{m}}{R/I^{n}}_{\leqslant 0}$ is an isomorphism for $n\gg 0$. The conclusion follows.
\end{proof}
\begin{remark}
In the setting of Corollary \ref{cohFull}, let $X = \Proj R/I$. If $i\leqslant \codim \Sing X$ it was proved in \cite[3.1]{BBLSZ} that $\HH{i}{\mathfrak{m}}{R/I^n}_{<0}=0$ for every $n\geqslant 1$.
Hence, if $I^n$ is cohomologically full for every $n\gg 0$, \cite[4.9]{DDM} shows $\indeg \HH{i}{\mathfrak{m}}{R/I^n}=0$ for $n\gg 0$.
\end{remark}
The following example shows that the assumption on the characteristic is necessary in Corollary \ref{conormal} and hence in Theorem \ref{mainVB}.
\begin{example}
Let $R$ and $I$ be as in Example \ref{maxMinors}, but assume instead that $k$ has characteristic $p>0$. Suppose, for contradiction, that the conclusion of Theorem \ref{mainVB} holds in positive characteristic. Computations with Macaulay2 \cite{GS} show $\beta((I/I^2)^*) =-1$, and then by Corollary \ref{stabExt}, (1), the map $$\HH{3}{\mathfrak{m}}{R/I^{n+1}}_0\rightarrow \HH{3}{\mathfrak{m}}{R/I^{n}}_0$$
is injective for $n\gg 0$. Furthermore, by \cite[5.5]{BBLSZ}, $\HH{3}{\mathfrak{m}}{R/I^{n}}_0\neq 0$ for every $n\geqslant 2$. However, if $n'\gg n\gg0$, there exists $e\in \mathbb N$ such that $I^{n'}\subseteq I^{[p^e]}\subseteq I^n$ and hence $\HH{3}{\mathfrak{m}}{R/I^{n'}}\rightarrow \HH{3}{\mathfrak{m}}{R/I^{n}}$ is the zero map as $R/I^{[p^e]}$ is Cohen-Macaulay, which is a contradiction.
\end{example}
\section{Polynomial Bound for Homogeneous Ideals}\label{polySec}
Let $I$ be a homogeneous ideal in a standard graded ring over a field. The purpose of this section is to prove that whenever the modules $ \HH{i}{\mathfrak{m}}{R/I^n}$ are Noetherian for $n\gg 0$, the rate of growth of the sequence $\{\indeg \HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ is at most polynomial. The results of this section apply in wide generality and without assumptions on the characteristic of the base field.
Let $M$ be a Noetherian $R$-module of dimension $d$. We denote by $e_0(M), \ldots, e_d(M)$ the {\it Hilbert coefficients} of $M$, i.e.,
$$\lambda(M/\mathfrak{m}^nM)=\sum_{i=0}^d (-1)^ie_i(M){n+d-i\choose d-i},\qquad \text{ for }n\gg 0,$$
where $\lambda(N)$ denotes the {\it length} of the $R$-module $N$.
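For instance, if $\dim M=0$, then $\lambda(M/\mathfrak{m}^nM)=\lambda(M)$ for $n\gg 0$, so $e_0(M)=\lambda(M)$; and for $M=R=k[x]$ one has $\lambda(R/\mathfrak{m}^n)=n=(n+1)-1$, so with the normalization above $e_0(R)=e_1(R)=1$.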
\vspace{1mm}
We now present the main theorem of this section.
\begin{thm}\label{polBound}
Let $R$ be a standard graded algebra over a field $k$ with irrelevant maximal ideal $\mathfrak{m}$. Let $I$ be a homogeneous $R$-ideal and set $d=\dim R/I\geqslant 2$. Assume $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for some $1\leqslant i< d$ and $n\gg 0$. Then there exists $s\in \mathbb N$ such that for every $n\gg 0$ we have
$$|\indeg \HH{i}{\mathfrak{m}}{R/I^n}|<n^s.$$
\end{thm}
\begin{proof}
Let $e=\dim_k R_1$. Consider an epimorphism $S := k[x_1,\ldots, x_e]\xtwoheadrightarrow{\varphi} R$ from a polynomial ring $S$ and let $\mathfrak{n} = (x_1,\ldots,x_e)\subseteq S$.
By graded local duality \cite[3.6.19]{BH} we have a graded isomorphism
$$
\HH{i}{\mathfrak{m}}{R/I^n}\cong \HH{i}{\mathfrak{n}}{R/I^n} \cong \Ext^{e-i}_S(R/I^n, S)^{\vee}(e)
$$
for every $n\in \mathbb N$. Therefore, by the assumption the module $\Ext^{e-i}_S(R/I^n, S)$ has finite length for $n\gg 0$, and then
\begin{equation}\label{upsDo}
\indeg \HH{i}{\mathfrak{m}}{R/I^n} = -\topdeg \Ext^{e-i}_S(R/I^n, S)-e= -\reg( \Ext^{e-i}_S(R/I^n, S))-e.
\end{equation}
For an $R$-module $M$ of dimension $r$, we set $Q_M(n)= \sum_{i=0}^r |e_i(M)|{n+r-i\choose r-i}. $
For an $R$-ideal $J$, set $\tilde{J}= (J:_{R}\mathfrak{m}^\infty)$.
\vspace{1mm}
Let $r_n=\reg(R/I^n) $ and notice that from the short exact sequence
$$0\to \tilde{I^n}/I^n\to R/I^n \to R/\tilde{I^n}\to 0$$ we obtain $\reg(R/\tilde{I^n})\leqslant r_n $. Therefore, by \cite[3.5]{CHH}, there exists $C\in \mathbb N$ such that
$$\reg( \Ext^{e-i}_S(R/I^n, S))<C\big(Q_{R/I^n}(r_n)\big)^{2^d-2}.$$
By \cite[1.1]{HPV}, the functions $e_i(R/I^n)$ agree with a polynomial of degree $\leqslant e-d-i$ for every $i$ and $n\gg 0$, therefore there exists a polynomial in two variables, $q(n,t)\in \mathbb Z[n,t]$, of degree at most $e$ in $n$ and of degree $d$ in $t$, such that $Q_{R/I^n}(t)\leqslant q(n,t)$ for $t, n\gg 0$.
Since $r_n$ eventually agrees with a linear function by \cite[3.2]{TW}, it follows that $$\big(Q_{R/I^n}(r_n)\big)^{2^d-2} < D n^{(e+d)(2^d-2)}$$ for some $D\in \mathbb N$ and $n\gg 0$. The conclusion now follows from \eqref{upsDo} and the fact that the sequence $\{\indeg \HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ is bounded above by the linear function $\reg(R/I^n)$ for $n\gg 0$.
\end{proof}
The previous result can be used to show a polynomial bound for the lengths of local cohomology modules of powers of homogeneous ideals. The next result can be seen as a partial answer to Question \cite[7.1]{DM}.
\begin{cor}\label{polynomialLength}
Let $R=k[x_1,\ldots, x_d]$, $\mathfrak{m}=(x_1,\ldots, x_d)$, and $I$ a homogeneous $R$-ideal. Assume $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for $n\gg 0$, then there exists $t\in \mathbb N$ such that $$\limsup_{n\rightarrow \infty} \frac{\lambda(\HH{i}{\mathfrak{m}}{R/I^n})}{n^t}<\infty.$$
\end{cor}
\begin{proof}
In this proof we follow Notation \ref{takNot}. By extending the field $k$ we can assume it is infinite. As in the proof of \cite[5.3]{DM}, using Gr\"obner deformation and \cite[2.4]{Sba} we can construct a sequence of monomial ideals $J_n$ such that $\reg(J_n)=\reg(I^n)$ and $\dim_k \HH{i}{\mathfrak{m}}{R/I^n}_{j}\leqslant \dim_k \HH{i}{\mathfrak{m}}{R/J_n}_{j}$ for every $j\in\mathbb Z$. Let $\beta\in \mathbb N$ be such that $\reg(I^n)\leqslant\beta n$ for every $n\geqslant 1$. By Theorem \ref{polBound} there exists $s\in \mathbb Z_{>0}$ such that $\HH{i}{\mathfrak{m}}{R/I^n}_{<-n^s} = 0$, then
\begin{equation}\label{beqn1}
\lambda(\HH{i}{\mathfrak{m}}{R/I^n})=\lambda(\HH{i}{\mathfrak{m}}{R/I^n}_{\geqslant -n^s})\leqslant\lambda(\HH{i}{\mathfrak{m}}{R/J_n}_{\geqslant -n^s})
\leqslant \sum_{-n^s \leqslant |\mathbf{a}| \leqslant\beta n} \dim_k \HH{i}{\mathfrak{m}}{R/J_n}_{\mathbf{a}}.
\end{equation}
By \cite[5.2]{DM} if $|\mathbf{a}^+|>\beta n$, then $\HH{i}{\mathfrak{m}}{R/J_n}_{\mathbf{a}}=0$. Moreover, if $\mathbf{a}=(a_1,\ldots, a_d)\in \mathbb Z^d$ satisfies that $-n^s \leqslant |\mathbf{a} | \leqslant\beta n$ and $|\mathbf{a}^+|\leqslant \beta n$, then for every $j\in G_\mathbf{a}$ we must have $a_j\geqslant -n^s-\beta n$. Set $S=G_\mathbf{a}$, then by \cite[3.3, 5.2, 5.1]{DM} there exists $C\in \mathbb N$ such that
\begin{equation}\label{beqn2}
\sum_{-n^s \leqslant |\mathbf{a}| \leqslant\beta n,\, G_\mathbf{a} =S} \dim_k \HH{i}{\mathfrak{m}}{R/J_n}_{\mathbf{a}}\leqslant (n^s+\beta n)^{|S|}Cn^{d-|S|}\leqslant C(n^{s+1}+\beta n^2)^{d}.
\end{equation}
The conclusion now follows from \eqref{beqn1} and Inequality \eqref{beqn2}, by adding the latter over all possible $S$ and setting $t=d(s+1)$.
\end{proof}
We finish the section with the following remark.
\begin{remark}
Assume $I$ is generated in a single degree $\gamma$. In \cite[3.4]{BCH} the authors showed that for every $i\geqslant 0$, there exists a set $\Lambda_i\subseteq \mathbb Z$ and a function $\eta_i:\Lambda_i\to \mathbb N$, such that $\HH{i}{\mathfrak{m}}{R/I^n}_{l+n\gamma }\neq 0$ for every $l\in \Lambda_i$ and $n\geqslant \eta_i(l)$. We note that if one is able to show that for certain $i$ the image of the function $\eta_i$ is bounded, then the Noetherian assumption on $ \HH{i}{\mathfrak{m}}{R/I^n}$ for $n\gg 0$ would imply $\Lambda_i$ is finite, and then by \cite[3.4]{BCH} the sequence $ \{\indeg \HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ would agree with a linear function for $n\gg 0$.
\end{remark}
\section{Monomial ideals}\label{monoSec}
The purpose of this section is to analyze the asymptotic behavior of $\{\indeg \HH{i}{\mathfrak{m}}{R/I^n}\}_{n\in \mathbb N}$ for monomial ideals.
From now on we assume $R=k[x_1,\ldots, x_d]$, $\mathfrak{m}=(x_1,\ldots, x_d)$, and $I$ is a monomial ideal.
\begin{Notation}\label{takNot}
Let $F$ be a subset of $[d]= \{1,\ldots, d\}$. We consider the map $\pi_F: R\longrightarrow R,$ defined by $\pi_F(x_i)=1$ if $i\in F$, and $\pi_F(x_i)=x_i$ otherwise. We set $I_F:=\pi_F(I)$.
For $\mathbf{a}=(a_1,\ldots, a_d)\in \mathbb N^d$, we use the notation $\mathbf{x}^\mathbf{a} := x_1^{a_1}\cdots x_d^{a_d}$. We also consider $G_\mathbf{a}=\{i\mid a_i<0\}$ and define $\mathbf{a}^+=(a_1^+,\ldots, a_d^+)$, where $a_i^+=a_i$ if $i\not\in G_\mathbf{a} $ and $a_i^+=0$ otherwise. We set $\Delta_{\mathbf{a}}(I)$ to be the simplicial complex of all subsets $F$ of $[d]\setminus G_\mathbf{a}$ such that $\mathbf{x}^{\mathbf{a}^+}\not\in I_{F\cup G_\mathbf{a}}$. We note that $\Delta_{\mathbf{a}}(I)$ is a subcomplex of $\Delta(I)$, the simplicial complex whose Stanley-Reisner ideal is $\sqrt{I}$ (\cite[1.3]{MT}).
\end{Notation}
We now state Takayama's formula which expresses the graded components of local cohomology of monomial ideals in terms of reduced homology of some associated simplicial complexes.
\begin{thm}[ {\cite[Theorem 1]{Tak}}]\label{Takayama}
For every $\mathbf{a}\in \mathbb Z^d$ and $i\geqslant 0$ we have
$$\dim_k \HH{i}{\mathfrak{m}}{R/I}_{\mathbf{a}} = \dim_k \tilde{H}_{i-|G_\mathbf{a} |-1}(\Delta_{\mathbf{a}}(I), k).$$
\end{thm}
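For a minimal illustration, let $R=k[x,y]$ and $I=(xy)$, so that $\Delta(I)$ consists of the two vertices $\{1\}$ and $\{2\}$ and no edge. For $\mathbf{a}=(0,0)$ we have $G_\mathbf{a}=\emptyset$ and $\Delta_{\mathbf{a}}(I)=\Delta(I)$, since $1\notin I_F$ for every $F\neq [2]$ while $I_{[2]}=(1)$; hence Takayama's formula gives
$$\dim_k \HH{1}{\mathfrak{m}}{R/I}_{(0,0)} = \dim_k \tilde{H}_{0}(\Delta(I), k)=1,$$
reflecting the disconnectedness of $\Delta(I)$.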
The following is the main theorem of this section.
\begin{thm}\label{monomials}
Let $I$ be a monomial ideal and assume $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for $n\gg 0$. Then the following hold:
\begin{enumerate}
\item If $\tilde{H}_{i-1}(\Delta(I),k)\neq 0$, then $\indeg \HH{i}{\mathfrak{m}}{R/I^n} =0$ for $n\gg 0$.
\item If $\tilde{H}_{i-1}(\Delta(I),k) = 0$ then $\liminf_{n\rightarrow \infty}\frac{\indeg \HH{i}{\mathfrak{m}}{R/I^n} }{n}\geqslant 1$.
\end{enumerate}
\end{thm}
\begin{proof}
By \cite[Proposition 1]{Tak} and the assumption we have $\HH{i}{\mathfrak{m}}{R/I^n}_{\mathbf{a}}=0$ for $n\gg 0$, and every $\mathbf{a}\in \mathbb Z^d$ such that $G_\mathbf{a}\neq \emptyset$. In particular, $\indeg \HH{i}{\mathfrak{m}}{R/I^n} \geqslant 0$ for every $n\gg 0$. Set $\mathbf{0}=(0,\ldots,0)\in \mathbb N^d$. We note that $\Delta_{\mathbf{0}}(I^n) = \Delta(I)$ for every $n$ (\cite[1.4]{MT}), therefore if $\tilde{H}_{i-1}(\Delta(I),k)\neq 0$, Theorem \ref{Takayama} implies $\indeg \HH{i}{\mathfrak{m}}{R/I^n} =0$ for every $n\gg 0$.
Now, assume $\tilde{H}_{i-1}(\Delta(I),k) = 0$. Fix $n\in \mathbb N$ and $\mathbf{a}\in \mathbb N^d$ such that $|\mathbf{a} |<n$.
For every facet $F$ of $\Delta(I)$ we have $I_F\neq R$; since $I_F$ is generated in positive degrees, $I_F^n$ is generated in degrees $\geqslant n$, and hence $\mathbf{x}^\mathbf{a}\not\in I_F^n$ because $\deg \mathbf{x}^\mathbf{a}=|\mathbf{a}|<n$.
It follows $\Delta_\mathbf{a}(I^n)=\Delta(I)$ and then $\HH{i}{\mathfrak{m}}{R/I^n}_{\mathbf{a}}=0$ by Theorem \ref{Takayama}. We conclude $\indeg \HH{i}{\mathfrak{m}}{R/I^n}\geqslant n$, finishing the proof.
\end{proof}
\begin{remark}
The condition $\tilde{H}_{i-1}(\Delta(I),k)\neq 0$ in Theorem \ref{monomials} (1) is automatically satisfied if $\HH{i}{\mathfrak{m}}{R/I^n}$ is Noetherian for some $n\in \mathbb N $ and $\HH{i}{\mathfrak{m}}{R/\sqrt{I}}\neq 0$ (\cite[4.9]{DDM}).
\end{remark}
The following example answers Question \ref{motivQ}, (2) in the particular case that $\Delta(I)$ is a cycle graph $\mathcal C_d$ for $d\geqslant 5$.
\begin{example}
Let $d\geqslant 5$ and $\mathcal C_d$ be the cycle graph of length $d$, i.e., the edges of $\mathcal C_d$ are indexed by $\{i,i+1\}$ for $1\leqslant i\leqslant d$ where $\{d,d+1\}:=\{d,1\}$. Let $I$ be the Stanley-Reisner ideal of $R$ associated to the complex $\Delta(I)=\mathcal C_d$. Then
\begin{equation}\label{limitExMon}
\lim_{n\rightarrow \infty}\frac{\indeg\HH{1}{\mathfrak{m}}{R/I^n}}{n}=1.
\end{equation}
To show this, notice that since $\Delta(I)$ is connected, we have $\tilde{H}_0(\Delta(I))=0$. Then by Theorem \ref{monomials} it suffices to show $\displaystyle\limsup_{n\rightarrow \infty}\frac{\indeg\HH{1}{\mathfrak{m}}{R/I^n}}{n}\leqslant 1$. For each $1\leqslant i\leqslant d$, set $\mathfrak{p}_i = (\{x_j\mid j\neq i,i+1\})$. Then $\Ass(I)=\{\mathfrak{p}_1,\ldots, \mathfrak{p}_d\}$.
Note that
$$I_{\{1\}}=(x_3,\ldots,x_d)\cap(x_2,\ldots, x_{d-1})=(x_2x_d, x_3,x_4,\ldots, x_{d-1})$$
which is a complete intersection. Likewise, $I_{\{i\}}$ is a complete intersection for every $1\leqslant i\leqslant d$, therefore $\Proj R/I$ is lci. Hence, $\HH{1}{\mathfrak{m}}{R/I^n}$ is Noetherian for every $n\in \mathbb N$ and we have $$\tilde{I^n}=(I^n:_R \mathfrak{m}^\infty ) = \cap_{i=1}^d \mathfrak{p}_i^n .$$
Fix $n\gg 0$ and let $\mathbf{a}_n = (n-d+4, 0, 1, \ldots, 1, 0)\in \mathbb N^d$. Then one readily verifies that $\Delta_{\mathbf{a}_n}(\tilde{I^n})$ is the subcomplex of $\Delta(I)$ whose facets are $\{i,i+1\}$ for $i\neq 2, d-1$. Since $\Delta_{\mathbf{a}_n}(\tilde{I^n})$ is disconnected, we have $\dim_k \HH{1}{\mathfrak{m}}{R/\tilde{I^n}}_{\mathbf{a}_n} = \dim_k \tilde{H}_0(\Delta_{\mathbf{a}_n}(\tilde{I^n}), k)\neq 0$. We conclude $$\indeg \HH{1}{\mathfrak{m}}{R/\tilde{I^n}}\leqslant |\mathbf{a}_n|=n+1.$$ Finally, the limit \eqref{limitExMon} follows by noticing $\HH{1}{\mathfrak{m}}{R/I^n} \cong \HH{1}{\mathfrak{m}}{R/\tilde{I^n}}$.
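For instance, when $d=5$ we have $\mathbf{a}_n=(n-1,0,1,1,0)$, and the facets of $\Delta_{\mathbf{a}_n}(\tilde{I^n})$ are $\{1,2\}$, $\{3,4\}$, and $\{5,1\}$; this complex has the two connected components $\{1,2,5\}$ and $\{3,4\}$, so indeed $\tilde{H}_0(\Delta_{\mathbf{a}_n}(\tilde{I^n}),k)\neq 0$.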
\end{example}
\section*{Acknowledgments}
The authors are grateful to David Eisenbud, Jack Jeffries, Robert Lazarsfeld, Luis N\'u\~nez-Betan\-court, and Claudiu Raicu for very helpful discussions. They also thank Tai H\`a for bringing the reference \cite{BCH} to their attention. Part of the research included in this article was developed in the Mathematisches Forschungsinstitut Oberwolfach (MFO) while the authors were in residence at the institute under the program {\it Oberwolfach Leibniz Fellows}. The authors thank MFO for their hospitality and excellent conditions for conducting research. The authors would also like to thank the referee for her or his helpful comments and suggestions that improved this paper.
\section{Introduction}\label{intro}
Hamiltonian systems of differential equations are of widespread importance in physics and mathematics.
There is particular interest in systems that are superintegrable in the sense of Liouville
by possessing the maximal number of globally well-defined first integrals
which are functionally independent.
Noether's theorem provides an obvious connection between first integrals and local symmetries of a given Hamiltonian system, whether or not the system is (super)integrable.
Specifically, using the Lagrangian formulation of the system,
each first integral corresponds to a variational local symmetry,
which can be either a point symmetry or a dynamical symmetry \cite{BA-book,Olver-book}.
Point symmetries are distinguished by involving only the canonical variables and the time variable;
most importantly,
all point symmetries can be obtained systematically for any Hamiltonian system
through the use of Lie's method \cite{BA-book,Olver-book}.
A natural first question is:
\emph{To what extent can (super)integrability of a Hamiltonian system be detected just by looking at its point symmetries?}
The answer is, in general, that variational point symmetries do not always provide
a sufficient number of first integrals.
A widely studied example is central force motion in the Newtonian case of an inverse-square force law (see e.g.~\cite{GolPooSaf}).
The variational point symmetries of this Hamiltonian system consist of rotations and time-translation,
which yield the components of the angular momentum vector and the energy
as first integrals.
There are additional first integrals given by the components of the well-known Laplace-Runge-Lenz (LRL) vector.
Recall that this vector lies in the plane orthogonal to the angular momentum vector
and is oriented in the direction of the apsis line from the center of mass to the apsis point on any non-circular orbit.
The first integrals corresponding to the angle determined by the LRL vector
arise from ``hidden'' dynamical symmetries \cite{BacRueSou,Fra,AncMeaPas}
rather than point symmetries of the \eom/.
A second question then is:
\emph{Can (super)integrability of a Hamiltonian system be detected by knowing its dynamical symmetries?}
The answer, surprisingly, is neither simple nor universal.
Because variational dynamical symmetries correspond to first integrals by Noether's theorem,
these local symmetries do contain some information about the first integrals.
In some situations, the global form of the symmetry group transformations
acting on solutions may indicate if the first integrals are globally single-valued and non-singular.
But, in general, this global question about first integrals is distinct from the properties of the variational symmetry group,
because the existence and nature of local symmetries depends solely on the local structure of the \eom/.
The example of central force motion with a general radial potential is a nice illustration of the subtleties.
It has been known for several decades that an analog of the LRL vector exists
for any radial potential \cite{BacRueSou,Fra,AncMeaPas},
but the resulting first integrals given by the components of this generalized vector
are globally single-valued and non-singular only for
the Kepler-Coulomb potential and the isotropic oscillator potential \cite{BacRueSou}.
Namely, those are the only two central force systems that are superintegrable
in Euclidean space.
(For the situation in curved spaces with radial symmetry, see~\cite{commun}.)
When any other central force system is considered in Euclidean space,
the generalized LRL vector instead is multi-valued and jumps each time the apsis point on a non-circular orbit is reached \cite{SerSha,BucDen}.
Moreover, the ``hidden'' dynamical symmetries that correspond to these first integrals
have the same symmetry algebra \cite{BacRueSou,Fra} for all radial potentials,
whether the central force system is superintegrable or not.
As a consequence, superintegrability is not related to either the size or the algebra structure of the variational dynamical symmetries.
Of course, if all variational dynamical symmetries of a given Hamiltonian system are known,
then the first integrals can be obtained in an explicit form through Noether's theorem,
so that their global properties then can be studied.
But a priori it is not possible to find all variational dynamical symmetries
without in essence integrating the \eom/ of the Hamiltonian system,
and this task involves the same level of difficulty as directly finding all first integrals
\cite{BA-book}.
Nevertheless, there is an extended symmetry method developed in Ref.~\cite{AncMeaPas} that can be used
to obtain all first integrals systematically for many Hamiltonian systems.
The purpose of the present paper is to get a deeper insight into the connection between first integrals and local symmetries
by studying a superintegrable Hamiltonian system introduced in Ref.~\cite{DarbouxPhysD}.
This system describes a radially symmetric nonlinear oscillator which is physically interesting both from dynamical and geometrical viewpoints,
since it can be identified both
with the motion of a particle on a space having non-constant curvature,
and also with an oscillator whose mass is position dependent.
The main results in the paper will be to show how to go systematically
from the Hamiltonian system to local symmetries and then to first integrals,
and reciprocally, from the Hamiltonian system directly to first integrals and then to local symmetries.
This will be accomplished without the need for ansatzes or guess-work
by adapting the extended symmetry method from \cite{AncMeaPas}.
As will be seen,
superintegrability of the system is not related in any straightforward or universal way to
either its point symmetries or its variational symmetry algebra.
These results will reinforce the preceding discussion.
In particular,
on one hand, point symmetries are generally insufficient to characterize when a Hamiltonian system is superintegrable,
and on the other hand,
the size and structure of the variational symmetry algebra of a given Hamiltonian system
is not enough to detect if the system is superintegrable.
\section{Lie point symmetries and (super)integrability}\label{toymodels}
In this section,
the basic Hamiltonian systems for two uncoupled oscillators
and for central force planar motion
are studied as benchmark models to discuss and clarify the questions raised in section~\ref{intro}.
Recall that the first system is superintegrable provided that the two oscillator frequencies are commensurate,
and that the second system is superintegrable only for the Kepler-Coulomb potential and the isotropic oscillator potential.
This situation is ideal for analyzing the difficulties in detecting superintegrability through point symmetries,
since the superintegrable systems are specific cases belonging to the respective family of oscillator and central force potentials.
\subsection{Uncoupled oscillators}\label{uncoupledoscil}
The Hamiltonian system describing two uncoupled oscillators is given by
the \eom/
\begin{equation*}
\ddot q_1 +\omega_1{}^2 q_1=0,
\qquad
\ddot q_2 +\omega_2{}^2 q_2=0,
\end{equation*}
for $(q_1(t),q_2(t))$,
where $\omega_1$ and $\omega_2$ are the frequencies.
The Lagrangian is ${\mathcal L} = \tfrac{1}{2}(\dot q_1^2 +\dot q_2^2 - \omega_1{}^2 q_1^2 - \omega_2{}^2 q_2^2)$. \\
\emph{First integrals}:
All constants of motion (\com/) $I(q_1,q_2,\dot q_1,\dot q_2)$
arise from the determining equation
\begin{equation*}
0=\dot I =I_{q_1} \dot q_1 +I_{q_2} \dot q_2 -\omega_1{}^2 q_1 I_{\dot q_1} -\omega_2{}^2 q_2 I_{\dot q_2},
\end{equation*}
which can be solved easily by the method of characteristics.
A maximal set of three functionally-independent \com/ is given by
the energies of the two oscillators
\begin{equation}\label{uncoupledoscil-com-E}
E_1= \tfrac{1}{2}( \dot q_1^2 +\omega_1{}^2 q_1^2 ),
\quad
E_2= \tfrac{1}{2}( \dot q_2^2 +\omega_2{}^2 q_2^2 ) ,
\end{equation}
and a phase quantity
\begin{equation}\label{uncoupledoscil-com-Phi}
\Phi= (1+\tfrac{\omega_2}{\omega_1})\arctan(\omega_1 q_1/\dot q_1) -(1+\tfrac{\omega_1}{\omega_2})\arctan(\omega_2 q_2/\dot q_2)
\ \mod \pi.
\end{equation}
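As a quick check that $\Phi$ is indeed a \com/, note that on solutions of the \eom/
\begin{equation*}
\frac{d}{dt}\arctan(\omega_j q_j/\dot q_j)
=\frac{\omega_j(\dot q_j^2-q_j\ddot q_j)/\dot q_j^2}{1+\omega_j^2 q_j^2/\dot q_j^2}
=\frac{\omega_j(\dot q_j^2+\omega_j^2 q_j^2)}{\dot q_j^2+\omega_j^2 q_j^2}
=\omega_j,
\qquad j=1,2,
\end{equation*}
whence $\dot\Phi=(1+\tfrac{\omega_2}{\omega_1})\omega_1-(1+\tfrac{\omega_1}{\omega_2})\omega_2=0$.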
This quantity can be shown to describe
the difference in the relative phase shifts $\Delta\phi_1$ and $\Delta\phi_2$
between the two oscillators measured at the times $t_1$ and $t_2$
when each one of the oscillators passes through zero,
namely $\Phi=\Delta\phi_2-\Delta\phi_1$.
It undergoes a jump each time one of the oscillators changes direction.
Thus, in general the \com/~$\Phi$ is multi-valued.
In contrast, the \com/~$E_1$, $E_2$, and $E=E_1+E_2$ (total energy of the oscillators) are single-valued.
The oscillator system is superintegrable iff the frequencies of the two oscillators are commensurate:
$\omega_1/\omega_2\in\mathbb Q$.
In the superintegrable case,
the set of values of the \com/ $\Phi$ is finite,
whereas for incommensurate frequencies,
this set of values is infinite.
In both cases, $\Phi$ is always non-singular.
In addition to the three functionally-independent \com/ $E_1$, $E_2$, $\Phi$,
there is a first integral that depends explicitly on $t$:
\begin{equation}
T= t- \Big(\tfrac{1}{2\omega_1}\arctan(\omega_1 q_1/\dot q_1) +\tfrac{1}{2\omega_2}\arctan(\omega_2 q_2/\dot q_2)\Big).
\end{equation}
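By the same identity $\frac{d}{dt}\arctan(\omega_j q_j/\dot q_j)=\omega_j$ on solutions, one checks directly that $\dot T=1-\tfrac{1}{2}-\tfrac{1}{2}=0$.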
This first integral yields the average of the times at which the respective oscillators pass through zero. \\
\emph{Variational symmetries}:
According to Noether's theorem,
each of the first integrals $E_1$, $E_2$, $\Phi$, $T$ corresponds to a variational symmetry.
The evolutionary form of these symmetries, acting on $(q_1(t),q_2(t))$, is given by the generators
\begin{gather*}
\hat\mathbf{X}_{E_1}= \dot q_1\partial/\partial q_1,
\quad
\hat\mathbf{X}_{E_2}= \dot q_2\partial/\partial q_2,
\\
\hat\mathbf{X}_{\Phi}=\tfrac{\omega_1+\omega_2}{2}( \tfrac{1}{E_2} q_2\partial/\partial q_2 -\tfrac{1}{E_1} q_1\partial/\partial q_1 ),
\quad
\hat\mathbf{X}_{T}=\tfrac{1}{2E_1} q_1\partial/\partial q_1 +\tfrac{1}{2E_2} q_2\partial/\partial q_2 .
\end{gather*}
By comparison, an infinitesimal point symmetry
$t\to t + \epsilon \tau(t,q_1,q_2) +O(\epsilon^2)$,
$q_1\to q_1+\epsilon \eta_1(t,q_1,q_2) +O(\epsilon^2)$,
$q_2\to q_2+\epsilon \eta_2(t,q_1,q_2) +O(\epsilon^2)$
in evolutionary form has the generator
$\hat\mathbf{X}=(\eta_1-\tau\dot q_1)\partial/\partial q_1+ (\eta_2-\tau\dot q_2)\partial/\partial q_2$.
It is straightforward to see that none of the four symmetries $\hat\mathbf{X}_{E_1}$, $\hat\mathbf{X}_{E_2}$, $\hat\mathbf{X}_{\Phi}$, $\hat\mathbf{X}_{T}$
represent point symmetries, due to the form of their dependence on $\dot q_1$ and $\dot q_2$ through the expressions \eqref{uncoupledoscil-com-E}--\eqref{uncoupledoscil-com-Phi} for $E_1,E_2,\Phi$.
Therefore, they are dynamical symmetries.
On solutions of the \eom/,
the symmetry generators are mutually commuting,
namely the symmetry algebra is abelian.
Moreover, none of the symmetries contain information about superintegrability of the \eom/.
In particular, the components of each symmetry generator are
single-valued and non-singular for arbitrary frequencies $\omega_1\neq0$, $\omega_2\neq0$.\\
\emph{Point symmetries}:
The point symmetries of the uncoupled oscillator system
for general frequencies $\omega_1$ and $\omega_2$
are generated by
a time translation $\mathbf{X}_{\rm trans}= \partial/\partial t$
and two scalings $\mathbf{X}_{{\rm scal}_1}= q_1\partial/\partial q_1$ and $\mathbf{X}_{{\rm scal}_2}= q_2\partial/\partial q_2$,
along with
elementary symmetries $\mathbf{X} = f_1(t)\partial/\partial q_1 + f_2(t)\partial/\partial q_2$,
where $f_j=a_j\cos(\omega_jt +\phi_j)$, $j=1,2$, are arbitrary solutions of the \eom/.
Note that time translation is a variational symmetry whose evolutionary form
is given by $\hat\mathbf{X}_{\rm trans} =\hat\mathbf{X}_{E_1} +\hat\mathbf{X}_{E_2} = \hat\mathbf{X}_{E}$.
Additional point symmetries arise only when the two frequencies are equal,
$\omega_1=\omega_2=\omega$.
In this special case,
there are four additional point symmetries,
which are generated by
\begin{align*}
& \mathbf{X}_{\rm rot}= q_2\partial/\partial q_1 - q_1\partial/\partial q_2,
\\
& \mathbf{X}_1 = e^{\pm i\omega t}( q_1\partial/\partial q_1 + q_2\partial/\partial q_2 \mp\tfrac{i}{\omega} \partial/\partial t ),
\\
& \mathbf{X}_2 = e^{\pm i\omega t}q_1(q_1\partial/\partial q_1 + q_2\partial/\partial q_2 \mp\tfrac{i}{\omega} \partial/\partial t ),
\\
& \mathbf{X}_3 = e^{\pm i\omega t}q_2(q_1\partial/\partial q_1 + q_2\partial/\partial q_2 \mp\tfrac{i}{\omega} \partial/\partial t ).
\end{align*}
Hence, because $\omega_1=\omega_2=\omega$ belongs to the case of commensurate frequencies,
this special superintegrable case has a larger point symmetry group.
However, in all other superintegrable cases,
for which $\omega_1/\omega_2\in \mathbb Q$ with $\omega_1\neq\omega_2$,
the point symmetry group of the system has the same size as in the general non-superintegrable case.
\subsection{Central force motion}\label{centralforce}
For any central force,
motion in the plane orthogonal to the conserved angular momentum vector
is given by the Hamiltonian system
\begin{equation*}
\ddot r=\dot \theta^2 r -U'(r),
\quad
\ddot \theta=-2\dot \theta \dot r/r
\end{equation*}
in polar coordinates $(r(t),\theta(t))$,
where $U(r)$ is the potential and ${\mathcal L} = \tfrac{1}{2}( \dot r^2 + \dot\theta^2 r^2 ) -U(r)$ is the Lagrangian.\\
\emph{First integrals}:
All \com/ $I(r,\theta,\dot r,\dot \theta)$ arise from the determining equation
\begin{equation*}
0=\dot I =I_{r} \dot r +I_{\theta} \dot \theta +(\dot\theta^2 r -U'(r)) I_{\dot r} -2\dot\theta r^{-1}\dot r I_{\dot \theta} .
\end{equation*}
This is a first-order linear PDE for $I$, which can be explicitly solved by the method of characteristics
and yields a maximal set of three functionally-independent \com/:
\begin{gather}
L=\dot\theta r^2,
\quad
E= \tfrac{1}{2}( \dot r^2 +L^2/r^2) +U(r)-U(r_{\rm equil}),
\label{centralforce-LE}
\\
\Theta= \theta-L\int_{r_0}^r \frac{{\rm sgn}(\dot r)}{r\sqrt{2(E+U(r_{\rm equil}) - U(r))r^2 -L^2}}dr
\ \mod 2\pi,
\label{centralforce-Theta}
\end{gather}
where $r_{\rm equil}$ is any equilibrium point, $U'(r_{\rm equil})=0$.
For any solution $(r(t),\theta(t))$ of the \eom/,
$L$ is the planar angular momentum;
$E$ is the energy (the Hamiltonian, measured relative to the equilibrium value $U(r_{\rm equil})$);
and $\Theta$ is the angle reached at some point $r=r_0$.
As shown in \cite{AncMeaPas},
a natural intrinsic choice of $r_0$ is any turning point $r^*$ or any inertial point $r_*$,
which are given by $U_{\rm eff}(r^*)=E$ or $U_{\rm eff}'(r_*)=0$
in terms of the effective potential $U_{\rm eff}(r)=U(r) +\tfrac{1}{2}L^2/r^2 - U(r_{\rm equil})$.
In addition to the three functionally-independent \com/~ $L$, $E$, $\Theta$,
there is a first integral that depends explicitly on $t$:
\begin{equation}\label{centralforce-T}
T = t- \int^r_{r_0} \frac{{\rm sgn}(\dot r)}{\sqrt{2(E+U(r_{\rm equil})-U(r)) -L^2/r^2}}\,dr .
\end{equation}
It is well known that the central force system is superintegrable iff
$U(r)=-k/r$ is the Coulomb potential or $U(r)=k r^2$ is the isotropic oscillator potential.
Thus, $\Theta$ is single-valued and non-singular in these two cases.
In particular, if $r_0=r^*$ is a turning point at which $r(t)$ reaches a local maximum,
then as shown in \cite{AncMeaPas},
$\Theta$ is the angle of the LRL vector,
which is a \com/ for the Coulomb potential and the isotropic oscillator potential.
In these two cases, $T$ is the time at which $\theta(t)$ coincides with the LRL angle,
modulo the period of the solution $(r(t),\theta(t))$.
As a consequence, bounded orbits for both of these potentials do not precess.
For any other central force system, we can infer that $\Theta$ is multi-valued and possibly singular.
In particular, the angle $\Theta$ defining the generalized LRL vector undergoes a jump each time
$t=T$ when $r$ reaches a turning point.
An example is the perturbed Coulomb potential $U(r)=-k/r-K/r^2$,
where bounded non-circular orbits exhibit precession \cite{GolPooSaf},
and thus this central force system is not superintegrable. \\
\emph{Variational symmetries}:
By Noether's theorem,
each of the \com/~$L$, $E$, $\Theta$ corresponds to a variational symmetry.
In evolutionary form, acting on $(r(t),\theta(t))$,
these symmetries are given by the generators \cite{AncMeaPas}
\begin{gather*}
\hat\mathbf{X}_{L}= -\partial/\partial \theta,
\quad
\hat\mathbf{X}_{E}= -\dot r\partial/\partial r -\dot\theta \partial/\partial \theta,
\\
\hat\mathbf{X}_{\Theta}=-(\dot r \partial_E\Theta) \partial/\partial r -(\partial_L\Theta +\dot\theta\partial_E \Theta)\partial/\partial \theta,
\quad
\hat\mathbf{X}_{T}=-(\dot r \partial_E T) \partial/\partial r -(\partial_L T +\dot\theta\partial_E T)\partial/\partial \theta .
\end{gather*}
Both $\hat\mathbf{X}_{L}$ and $\hat\mathbf{X}_{E}$ represent infinitesimal point symmetries,
as can be easily seen by comparison with
$\hat\mathbf{X}=(\eta^r -\tau\dot r)\partial/\partial{r}+ (\eta^\theta-\tau\dot \theta)\partial/\partial{\theta}$
which is the general evolutionary form of an infinitesimal point symmetry
$t\to t + \epsilon \tau(t,r,\theta) +O(\epsilon^2)$,
$r\to r+\epsilon \eta^r(t,r,\theta) +O(\epsilon^2)$,
$\theta\to \theta+\epsilon \eta^\theta(t,r,\theta) +O(\epsilon^2)$.
In contrast, $\hat\mathbf{X}_{\Theta}$ and $\hat\mathbf{X}_{T}$ do not represent point symmetries
but instead are dynamical symmetries,
because of their nonlinear dependence on $\dot r$ and $\dot \theta$ through the expressions \eqref{centralforce-LE} for $L,E$.
A straightforward computation shows that, on solutions of the \eom/,
the symmetry algebra is abelian,
namely, the four generators $\hat\mathbf{X}_{L}$, $\hat\mathbf{X}_{E}$, $\hat\mathbf{X}_{\Theta}$, $\hat\mathbf{X}_{T}$
are mutually commuting.
Clearly, the point symmetries $\mathbf{X}_{L}$ and $\mathbf{X}_{E}$ contain no information about superintegrability of the \eom/,
since they are admitted for an arbitrary central force potential $U(r)$.
An interesting question is whether the dynamical symmetry $\hat\mathbf{X}_{\Theta}$ contains
any information about superintegrability.
To answer this, we need to examine the components
\begin{equation}\label{centralforce_Theta_components}
\begin{aligned}
\partial_E \Theta & =
L\int_{r_0}^r \frac{{\rm sgn}(\dot r)r}{\sqrt{2(E+U(r_{\rm equil}) - U(r))r^2 -L^2}^3}dr,
\\
\partial_L\Theta & =
\int_{r_0}^r \frac{{\rm sgn}(\dot r)2(U(r)-U(r_{\rm equil})-E)r}{\sqrt{2(E+U(r_{\rm equil}) - U(r))r^2 -L^2}^3}dr.
\end{aligned}
\end{equation}
In the cases of the Coulomb potential $U(r)=-k/r$
and the isotropic oscillator potential $U(r)=k r^2$,
we find that both components \eqref{centralforce_Theta_components}
are single-valued but become singular at turning points.
For the case of the perturbed Coulomb potential $U(r)=-k/r-K/r^2$,
we find that the component $\partial_L\Theta$ is not single-valued.
Consequently, in these three examples,
the form of the dynamical symmetry $\hat\mathbf{X}_{\Theta}$ detects if the central force system is superintegrable. \\
\emph{Point symmetries}:
The point symmetries of the central force \eom/ are generated by
$\mathbf{X}_{L}= -\partial/\partial \theta$ and $\mathbf{X}_{E}= \partial/\partial t$
for a general potential $U(r)$.
Additional point symmetries are admitted only for two special potentials \cite{AncMeaPas}:
$U(r)=kr^p$, which admits $\mathbf{X}_1 = t\partial/\partial t -\tfrac{2}{p}r\partial/\partial r$;
$U(r)=kr+K/r^3$, which admits $\mathbf{X}_2 = e^{2\sqrt{k}t}(\partial/\partial t +\sqrt{k}r\partial/\partial r)$.
Notice that the point symmetry group is \emph{not larger} in the superintegrable cases.
\section{Connections among first integrals, symmetries, and superintegrability}
We will study the $N=2$ version of the dynamical system given by the Hamiltonian
\begin{equation}\label{ND-Ham}
H(\mathbf{q},\mathbf{p})
=\tfrac{1}{2}(1+\lambda \mathbf{q}^2)^{-1} ( \mathbf{p}^2 + \omega^2 \mathbf{q}^2)
\end{equation}
where $\lambda>0$ and $\omega> 0$ are real parameters,
and $(\mathbf{q},\mathbf{p})$ are $2N$ canonical coordinates.
This system was proven in~Ref.~\cite{DarbouxPhysD} to be maximally superintegrable,
namely it possesses the maximum number $(2N-1)$ of \com/,
which are functionally independent and globally well-defined for a general solution $(\mathbf{q}(t),\mathbf{p}(t))$.
These \com/ are explicitly given by
\begin{equation}\label{ND-com}
\begin{gathered}
C^{(m)}=\!\! \sum_{1\leq i<j\leq m} \!\!\!\! (q_ip_j-q_jp_i)^2 ,
\quad
C_{(m)}=\!\!\! \sum_{N-m<i<j\leq N}\!\!\!\!\!\! (q_ip_j-q_jp_i)^2 , \quad m=2,\dots,N,
\\
E_i=p_i^2-\bigl(2\lambda H(\mathbf{q},\mathbf{p})-\omega^2\bigr) q_i^2 ,\quad i=1,\dots,N.
\end{gathered}
\end{equation}
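As a consistency check, summing the $E_i$ and using $\mathbf{p}^2+\omega^2\mathbf{q}^2=2(1+\lambda \mathbf{q}^2)H(\mathbf{q},\mathbf{p})$ from \eqref{ND-Ham} gives
\begin{equation*}
\sum_{i=1}^N E_i=\mathbf{p}^2-\bigl(2\lambda H(\mathbf{q},\mathbf{p})-\omega^2\bigr)\mathbf{q}^2
=2(1+\lambda \mathbf{q}^2)H(\mathbf{q},\mathbf{p})-2\lambda \mathbf{q}^2 H(\mathbf{q},\mathbf{p})
=2H(\mathbf{q},\mathbf{p}),
\end{equation*}
so the Hamiltonian itself is a combination of the $E_i$, consistent with the count of $2N-1$ functionally independent \com/.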
Some non-local symmetries for the $N=1$ case were found in~\cite{GandariasPLA}.
When $\lambda\to 0$,
this system~\eqref{ND-Ham} reduces to the $N$-dimensional Euclidean isotropic oscillator with frequency $\omega$,
which is indeed a maximally superintegrable system.
Thus, $\lambda$ can be viewed as a deformation parameter,
and the system~\eqref{ND-Ham} can be thought of as a maximally superintegrable deformation of the Euclidean isotropic oscillator.
Geometrically,
the term $\tfrac{1}{2}(1+\lambda \mathbf{q}^2)^{-1}\mathbf{p}^2$ in the Hamiltonian $H(\mathbf{q},\mathbf{p})$
can be interpreted as the kinetic energy defined by the geodesic motion of a particle with unit mass on a conformally flat space whose metric is given by
${\rm d} s^2= (1+\lambda \mathbf{q}^2){\rm d} \mathbf{q}^2$ (see also~\cite{KKMW02}--\cite{Gonera}).
The scalar curvature of this space
is negative and asymptotically vanishes for large $|\mathbf{q}|$.
Further discussion on the geometrical interpretation of the system and its quantization can be found in~\cite{commun} and~\cite{NDAnnals}--\cite{BGSNpla}.
From a physical viewpoint,
the system describes a particle with position-dependent mass of the form $m(\mathbf{q})=1+\lambda \mathbf{q}^2$.
We recall that the quantum version of such systems (see for instance~\cite{Roos}--\cite{MR} and references therein) is relevant for the description of
semiconductor heterostructures and nanostructures
and, in particular, models constructed in terms of quadratic mass functions have been considered in~\cite{Koc, Schd}.
Our main result will be to show systematically how to derive the local symmetry group underlying the \com/~\eqref{ND-com} of this system in the case $N=2$.
We do this derivation in two different ways.
First, we directly integrate the determining equation for first integrals,
and then we apply Noether's theorem (in reverse) to obtain the corresponding variational symmetries.
This process can be summarized as
\emph{Hamiltonian system $\Rightarrow$ first integrals $\Rightarrow$ local symmetries}.
Next, and most importantly,
we show how to use the symmetry method outlined in Ref.~\cite{AncMeaPas}
to do the reverse process:
\emph{Hamiltonian system $\Rightarrow$ local symmetries $\Rightarrow$ first integrals}.
This method is systematic and explicit, and no ansatzes are needed.
\subsection{The Hamiltonian system for $N=2$}
The Hamiltonian \eqref{ND-Ham} in the planar case $N=2$ is given by
\begin{equation}\label{Ham}
H = \frac{p_1^2 +p_2^2 +\omega^2(q_1^2+q_2^2)}{2(1+\lambda(q_1^2+q_2^2))}.
\end{equation}
Note, when the deformation parameter $\lambda$ is taken to be $\lambda=0$,
this Hamiltonian reduces to the one for the planar isotropic oscillator,
which is the oscillator system discussed in Section~\ref{uncoupledoscil}
in the case $\omega_1=\omega_2=\omega$.
Hereafter it will be useful to change from planar coordinates to polar coordinates
\begin{equation}
q_1=r\cos\theta,
\quad
q_2=r\sin\theta,
\quad
p_1 = p^r\cos\theta -p^\theta r^{-1}\sin\theta,
\quad
p_2 = p^r\sin\theta+p^\theta r^{-1}\cos\theta.
\end{equation}
The Hamiltonian \eqref{Ham} becomes
\begin{equation}
H= \frac{(p^r)^2 + (p^\theta/r)^2 +\omega^2 r^2}{2(1+\lambda r^2)} ,
\end{equation}
which yields the second-order ODE system
\begin{equation}\label{eom}
\ddot r = f^r(r,\theta,\dot r,\dot\theta)
= \frac{((2\lambda r^2 +1)\dot \theta^2 -\lambda \dot r^2)r}{\lambda r^2+1} -\frac{\omega^2 r}{(\lambda r^2+1)^3},
\quad
\ddot\theta = f^\theta(r,\theta,\dot r,\dot\theta)
=-\frac{2\dot \theta \dot r (2\lambda r^2 +1)}{(\lambda r^2 +1)r}.
\end{equation}
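The expression for $f^\theta$ can be double-checked from the fact that $\theta$ is a cyclic coordinate, so that $p^\theta=r^2(1+\lambda r^2)\dot\theta$ is conserved on solutions: differentiating,
\begin{equation*}
0=\frac{d}{dt}\big(r^2(1+\lambda r^2)\dot\theta\big)
=r^2(1+\lambda r^2)\ddot\theta+2r(1+2\lambda r^2)\dot r\dot\theta,
\end{equation*}
which is equivalent to $\ddot\theta=f^\theta$.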
This system is superintegrable.
Hereafter,
the set of solutions $(r(t),\theta(t))$ of the \eom/ will be denoted ${\mathcal E}$.
\subsection{From first integrals to symmetries}
A first integral is a function $I$ of $t,r,\theta,\dot r,\dot \theta$ that is time-independent,
$\dot I=0$,
when it is evaluated on the solution space ${\mathcal E}$ of the \eom/.
If $I$ does not depend explicitly on $t$, then it is a \com/.
All first integrals can be found by solving the determining equation
\begin{equation}\label{I-deteqn}
0=\dot I(t,r,\theta,\dot r,\dot \theta)\big|_{\mathcal E}
=I_{t} + I_{r} \dot r +I_{\theta} \dot\theta +f^r I_{\dot r} +f^\theta I_{\dot \theta},
\end{equation}
which is a linear first-order PDE for $I(t,r,\theta,\dot r,\dot \theta)$.
Solving this PDE amounts to integrating the \eom/ \eqref{eom}.
This can be done by applying the method of characteristics,
yielding the ODE system
$dt/1=dr/\dot r=d\theta/\dot\theta=d\dot r/f^r=d\dot\theta/f^\theta$.
Integration of the system gives four functionally-independent first integrals:
\begin{align}
& L= r^2(1+\lambda r^2) \dot\theta =p^\theta,
\label{L}
\\
&
E= \tfrac{1}{2}\big( (1+\lambda r^2) \dot r^2 + (\omega^2 r^2 +L^2/r^2)(1+\lambda r^2) ^{-1} \big) =H,
\label{E}
\\
& \begin{aligned}
\Theta & = \theta - \tfrac{1}{2}\arctan\Big(\frac{{\rm sgn}(\dot r)(Er^2-L^2)}{L\sqrt{2Er^2(1+\lambda r^2)-L^2-\omega^2r^4}}\Big)\Big|^{r}_{r_0}
\mod 2\pi,
\end{aligned}
\label{Theta}
\\
&
\begin{aligned}
T & = t -\tfrac{1}{2}\frac{\omega^2-\lambda E}{\sqrt{\omega^2-2\lambda E}^3}\bigg(\arctan\Big(\frac{{\rm sgn}(\dot r)\big( (2\lambda r^2+1)E-\omega^2 r^2\big)}{\sqrt{\omega^2-2\lambda E}\sqrt{2Er^2(1+\lambda r^2)-L^2-\omega^2r^4}}\Big) \Big|^{r}_{r_0} \bigg)
\\&\qquad
+\tfrac{1}{2}\frac{\lambda}{\omega^2-2\lambda E}\Big(
{\rm sgn}(\dot r)\sqrt{2Er^2(1+\lambda r^2)-L^2-\omega^2r^4}\Big)\Big|^{r}_{r_0} .
\end{aligned}
\label{T}
\end{align}
Here $L$ is the planar angular momentum
and $E$ is the energy,
which respectively arise from solving $dr/\dot r=d\dot\theta/f^\theta$
and $dr/\dot r=d\dot r/f^r$;
$\Theta$ is an angular quantity
given by solving $dr/\dot r=d\theta/\dot\theta$
and involves an arbitrary constant of integration $r_0=r(t_0)$.
These quantities are \com/,
while $T$ is a temporal first integral arising from $dt/1=dr/\dot r$.
The physical meaning of $\Theta$ and $T$ is similar to the analogous quantities that appear in the superintegrable cases of central force motion discussed in Section~\ref{centralforce}.
Notice that $L$, $E$, $\Theta$, $T$ are functionally independent because
they each have different physical units.
A natural physical choice of $r_0$ is any turning point $r^*$ or any inertial point $r_*$,
which are respectively given by $U_{\rm eff}(r^*)=E$ or $U_{\rm eff}'(r_*)=0$
in terms of the effective potential
\begin{equation}
U_{\rm eff}(r)=\frac{\omega^2 r^2 +L^2/r^2}{2(1+\lambda r^2)}.
\end{equation}
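In terms of $U_{\rm eff}$, the energy first integral \eqref{E} takes the mechanical form
\begin{equation*}
E=\tfrac{1}{2}(1+\lambda r^2)\dot r^2+U_{\rm eff}(r),
\end{equation*}
so that $\dot r$ vanishes on an orbit precisely where $U_{\rm eff}=E$.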
On the orbit of a solution $(r(t),\theta(t))$,
a turning point is thus a point $r=r^*$ at which the radial velocity $\dot r=0$,
and an inertial point is a point $r=r_*$ at which the radial acceleration $\ddot r=0$.
These points are determined intrinsically by the dynamics of each solution $(r(t),\theta(t))$.
With such a choice of $r_0$,
the angular quantity $\Theta$ physically represents the angle $\theta$
on the orbit of a solution $(r(t),\theta(t))$
at the point $r=r_0$
given by either a turning point $r_0=r^*$ or an inertial point $r_0=r_*$.
Likewise,
the temporal quantity $T$ physically represents the time $t$
at which this point is reached on the orbit.
Superintegrability of the system \eqref{eom} is distinguished by the feature that
both $\Theta$ and $T$ are single-valued and non-singular for a general solution $(r(t),\theta(t))$.
Consequently,
$\Theta$ provides an analog of the LRL angle.
All first integrals $I(t,r,\theta,\dot r,\dot\theta)$ are associated to multiplier pairs
$(Q^r(r,\theta,\dot r,\dot\theta),Q^\theta(r,\theta,\dot r,\dot\theta))$
given by expressing the conservation property $\dot I|_{\mathcal E}=0$ as an identity
\begin{equation*}
\dot I = (\ddot r - f^r)Q^r + (\ddot \theta - f^\theta)Q^\theta,
\qquad
Q^r=\partial_{\dot r}I,
\quad
Q^\theta=\partial_{\dot\theta}I
\end{equation*}
holding off of the solution space ${\mathcal E}$.
Multiplier pairs are directly related to variational symmetries through Noether's theorem
using the Lagrangian formulation of the system \eqref{eom} as follows.
The Lagrangian is given by
${\mathcal L} = \tfrac{1}{2}(1+\lambda r^2)( \dot r^2 + \dot\theta^2 r^2 ) -\tfrac{1}{2}\omega^2r^2/(1+\lambda r^2)$,
which yields
\begin{equation*}
\ddot r -f^r =-(1+\lambda r^2)^{-1}\frac{\delta{\mathcal L}}{\delta r},
\qquad
\ddot \theta -f^\theta =-(r^2 (1+\lambda r^2))^{-1}\frac{\delta{\mathcal L}}{\delta \theta} .
\end{equation*}
Now consider any vector field
$\hat\mathbf{X} = P^r(t,r,\theta,\dot r,\dot\theta)\partial_r + P^\theta(t,r,\theta,\dot r,\dot\theta)\partial_\theta$
in evolutionary form which acts only on the coordinates $(r,\theta)$.
This vector field induces a variation of the Lagrangian, yielding Noether's identity
\begin{equation*}
{\rm pr}^{(1)}\hat\mathbf{X} ({\mathcal L}) =
\frac{\delta{\mathcal L}}{\delta r} P^r + \frac{\delta{\mathcal L}}{\delta\theta} P^\theta + \frac{d}{dt}( P^r\partial_{\dot r}{\mathcal L} + P^\theta \partial_{\dot\theta}{\mathcal L}),
\end{equation*}
where
${\rm pr}^{(1)}\hat\mathbf{X} = P^r\partial_{r} + P^\theta\partial_{\theta} + \dot P^r\partial_{\dot r} + \dot P^\theta\partial_{\dot\theta}$
is the prolongation of $\hat\mathbf{X}$ to the coordinate space
$(r,\theta,\dot r,\dot\theta)$.
The condition for the vector field to be a variational symmetry is that
the induced variation of the Lagrangian is a total time derivative,
${\rm pr}^{(1)}\hat\mathbf{X}({\mathcal L}) = \dot R$,
for some function $R(t,r,\theta,\dot r,\dot\theta)$.
This implies
\begin{equation*}
(\ddot r -f^r)\big( (1+\lambda r^2)P^r \big)
+ (\ddot \theta -f^\theta)\big( r^2(1+\lambda r^2)P^\theta \big)
= \frac{d}{dt}\big( P^r\partial_{\dot r}{\mathcal L} +P^\theta \partial_{\dot\theta}{\mathcal L} -R \big) .
\end{equation*}
When this equation is evaluated on solutions $(r(t),\theta(t))$ of the \eom/,
it yields a first integral
\begin{equation}\label{I}
I=R - P^r\partial_{\dot r}{\mathcal L} -P^\theta \partial_{\dot\theta}{\mathcal L}
\end{equation}
which has the multiplier pair
\begin{equation}\label{QfromP}
Q^r = -(1+\lambda r^2)P^r ,
\qquad
Q^\theta = -r^2(1+\lambda r^2)P^\theta .
\end{equation}
This relation establishes a one-to-one correspondence between
multiplier pairs $(Q^r,Q^\theta)$ and the components $P^r$, $P^\theta$ of variational symmetries.
Specifically,
any variational symmetry yields a first integral whose corresponding multiplier
is determined by equation \eqref{QfromP} in terms of the components of the symmetry generator;
conversely, any first integral yields a variational symmetry whose generator has
components determined in terms of the multiplier through inverting equation \eqref{QfromP}
to get
\begin{equation}\label{PfromQ}
P^r = \frac{-Q^r}{1+\lambda r^2},
\qquad
P^\theta = \frac{-Q^\theta}{r^2(1+\lambda r^2)} .
\end{equation}
These correspondences \eqref{QfromP}--\eqref{PfromQ},
along with the first integral expression \eqref{I},
constitute the statement of Noether's theorem.
The multiplier pairs for the four first integrals \eqref{L}--\eqref{T} are given by
\begin{align}
& (Q^r,Q^\theta)_L = \big(0,r^2(\lambda r^2+1)\big),
\qquad
(Q^r,Q^\theta)_E = \big((\lambda r^2+1)\dot r,r^2(\lambda r^2+1)\dot \theta\big),
\label{Q-L-E}
\\
& (Q^r,Q^\theta)_\Theta = \big((\lambda r^2+1)\dot r\partial_E\Theta,r^2(\lambda r^2+1)(\dot \theta \partial_E\Theta+ \partial_L\Theta)\big),
\label{Q-Theta}
\\
& (Q^r,Q^\theta)_T = \big((\lambda r^2+1)\dot r\partial_E T,r^2(\lambda r^2+1)(\dot \theta \partial_E T+ \partial_L T)\big),
\label{Q-T}
\end{align}
where
\begin{align}
&\begin{aligned}
\partial_L\Theta & =
\frac{1}{E^2+(2E\lambda-\omega^2)L^2} \bigg(
{\rm sgn}(\dot r)\frac{(2(\lambda r^2 +1)E -\omega^2 r^2)E+(2\lambda E-\omega^2)L^2 }{2\sqrt{2Er^2(\lambda r^2 +1) - L^2 - \omega^2r^4}} \bigg)\bigg|^{r}_{r_0},
\end{aligned}
\\
&\begin{aligned}
\partial_E\Theta = - \partial_L T &=
\frac{L}{E^2+(2E\lambda-\omega^2)L^2} \bigg(
{\rm sgn}(\dot r)\frac{\omega^2r^2 -(\lambda r^2 +1)E -\lambda L^2}{2\sqrt{2Er^2(\lambda r^2 +1) - L^2 - \omega^2r^4}} \bigg)\bigg|^{r}_{r_0},
\end{aligned}
\\
&\begin{aligned}
\partial_ET & =\frac{\lambda(2\omega^2-\lambda E)}{2(\omega^2-2\lambda E)^{5/2}}
\arctan\Big(\frac{{\rm sgn}(\dot r)\big((2\lambda r^2+1)E -\omega^2 r^2\big)}{\sqrt{2Er^2(\lambda r^2 +1)-L^2-\omega^2r^4}\sqrt{\omega^2-2\lambda E}}\Big)\Big|^{r}_{r_0}
\\&\qquad
+\frac{1}{2(\omega^2-2\lambda E)}
\Big( (\lambda r^2+1)\Big( \lambda r^2 +\frac{(\omega^2-\lambda E)(Er^2-L^2)}{E^2+(2E\lambda-\omega^2)L^2}\Big)\Big)\Big|^{r}_{r_0}
\\&\qquad
+\frac{\lambda}{2(\omega^2-2\lambda E)^2}
\Big(2\lambda +\frac{E(\omega^2-\lambda E)}{E^2+(2E\lambda-\omega^2)L^2}\Big)\Big({\rm sgn}(\dot r)\sqrt{2Er^2(\lambda r^2 +1)-L^2-\omega^2r^4}\Big)\Big|^{r}_{r_0}.
\end{aligned}
\end{align}
Applying the Noether correspondence \eqref{PfromQ} to each multiplier pair,
we obtain the corresponding variational symmetries
\begin{gather}
\hat\mathbf{X}_{L}= -\partial/\partial \theta,
\quad
\hat\mathbf{X}_{E}= -\dot r\partial/\partial r -\dot\theta \partial/\partial \theta,
\label{hatX-L-E}
\\
\hat\mathbf{X}_{\Theta} =-(\dot r \partial_E\Theta) \partial/\partial r -(\partial_L\Theta +\dot\theta\partial_E \Theta)\partial/\partial \theta,
\label{hatX-Theta}
\\
\hat\mathbf{X}_{T} =-(\dot r \partial_E T) \partial/\partial r -(\partial_L T +\dot\theta\partial_E T)\partial/\partial \theta.
\label{hatX-T}
\end{gather}
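As a quick consistency check of this correspondence, applying \eqref{PfromQ} to the pair $(Q^r,Q^\theta)_L$ in \eqref{Q-L-E} gives $P^r = 0$ and $P^\theta = -r^2(\lambda r^2+1)/\big(r^2(1+\lambda r^2)\big) = -1$, which reproduces $\hat\mathbf{X}_{L}$ above; the remaining three generators follow from their multiplier pairs in the same way.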
These symmetries can be understood as acting on solutions $(r(t),\theta(t))$ of the \eom/.
Equivalently, the symmetries can be formulated as acting on the variables $(t,r,\theta)$
by means of a standard transformation \cite{BA-book,Olver-book}
which has the general form
$\eta^r=P^r +\tau \dot r$, $\eta^\theta=P^\theta +\tau \dot\theta$,
yielding
\begin{equation*}
\mathbf{X} = \tau\,\partial/\partial t + \eta^r\,\partial/\partial r + \eta^\theta\,\partial/\partial\theta
\end{equation*}
where $\tau$ can be chosen freely as a function of $t,r,\theta,\dot r,\dot\theta$.
We take $\tau=0$ for $\hat\mathbf{X}_{L}$, $\tau=1$ for $\hat\mathbf{X}_{E}$,
giving
\begin{equation}\label{X-L-E}
\mathbf{X}_{L}= -\partial/\partial \theta,
\qquad
\mathbf{X}_{E}= \partial/\partial t .
\end{equation}
In this form, these generators represent point symmetries,
consisting of rotations and time-translations.
For $\hat\mathbf{X}_{\Theta}$ we take $\tau = \partial_E\Theta$,
which yields
\begin{equation}\label{X-Theta}
\begin{aligned}
\mathbf{X}_{\Theta} & =
\partial_E\Theta\,\partial/\partial t -\partial_L\Theta\,\partial/\partial \theta .
\end{aligned}
\end{equation}
Similarly, for $\hat\mathbf{X}_{T}$ we take $\tau = \partial_E T$,
giving
\begin{equation}\label{X-T}
\begin{aligned}
\mathbf{X}_{T} & =
\partial_E T\,\partial/\partial t -\partial_L T\,\partial/\partial \theta .
\end{aligned}
\end{equation}
Both of these generators $\mathbf{X}_{\Theta}$ and $\mathbf{X}_{T}$ represent dynamical symmetries.
Moreover, there is no choice of $\tau$ that can transform them into point symmetries,
because the components of the generators $\hat\mathbf{X}_{\Theta}$ and $\hat\mathbf{X}_{T}$
have a nonlinear dependence on $\dot r$ and $\dot\theta$ through the expressions for $L,E$.
The commutators of the variational symmetries \eqref{X-L-E}, \eqref{X-Theta}, \eqref{X-T}
turn out to vanish, as shown later,
whereby the variational symmetries comprise a four-dimensional abelian algebra.
In summary,
we have shown how to go
\emph{Hamiltonian system $\Rightarrow$ first integrals $\Rightarrow$ local symmetries}
in a systematic way by using the determining equation \eqref{I-deteqn} for first integrals
and the reverse version \eqref{PfromQ} of Noether's theorem.
\subsection{From symmetries to first integrals}
We will now show how to go
\emph{Hamiltonian system $\Rightarrow$ local symmetries $\Rightarrow$ first integrals}
in a systematic way without having to use any ansatzes or guess-work,
by following the extended symmetry method outlined in Ref.~\cite{AncMeaPas}.
\emph{Step 1}:
Compute all variational point symmetries of the Hamiltonian system \eqref{eom}.
Point symmetries are given by generators (vector fields) of the form
$\mathbf{X} = \tau(t,r,\theta)\partial/\partial{t} + \eta^r(t,r,\theta)\partial/\partial{r} + \eta^\theta(t,r,\theta)\partial/\partial{\theta}$
under which the system \eqref{eom} is infinitesimally invariant,
\begin{equation}\label{symmcond}
{\rm pr}^{(2)}\mathbf{X} (\ddot r - f^r)|_{\mathcal E} = 0,
\qquad
{\rm pr}^{(2)}\mathbf{X} (\ddot\theta - f^\theta)|_{\mathcal E} = 0 .
\end{equation}
Here ${\rm pr}^{(2)}\mathbf{X}$ is the second prolongation of $\mathbf{X}$,
acting on the coordinate space $(t,r,\theta,\dot r,\dot\theta,\ddot r,\ddot\theta)$.
The prolongation formula is somewhat complicated.
It can be avoided by working with the generator $\mathbf{X}$ in evolutionary form
\begin{equation}\label{pointsymm}
\hat\mathbf{X} = P^r \partial/\partial{r} + P^\theta \partial/\partial{\theta},
\qquad
P^r = \eta^r -\tau \dot r,
\quad
P^\theta = \eta^\theta-\tau \dot\theta,
\end{equation}
acting on solutions $(r(t),\theta(t))$.
Then the invariance condition \eqref{symmcond} becomes simply
\begin{equation}
\begin{aligned}
0 & ={\rm pr}^{(1)}\hat\mathbf{X}(\ddot r - f^r)|_{\mathcal E} = \big( \ddot P^r - P^r \partial_r f^r - P^\theta \partial_\theta f^r - \dot P^r \partial_{\dot r} f^r - \dot P^\theta \partial_{\dot\theta} f^r \big)\big|_{\mathcal E},
\\
0 & ={\rm pr}^{(1)}\hat\mathbf{X}(\ddot\theta -f^\theta)|_{\mathcal E} = \big( \ddot P^\theta - P^r \partial_r f^\theta - P^\theta \partial_\theta f^\theta - \dot P^r \partial_{\dot r} f^\theta - \dot P^\theta \partial_{\dot\theta} f^\theta \big)\big|_{\mathcal E}.
\end{aligned}
\end{equation}
This pair of determining equations splits with respect to $\dot r$, $\dot \theta$,
and thereby yields an overdetermined linear system of equations for $\tau$, $\eta^r$, $\eta^\theta$.
After simplification, the linear system reduces to
$\partial_t\tau=\partial_r\tau=\partial_\theta\tau=0$, $\eta^r=0$,
$\partial_t\eta^\theta=\partial_r\eta^\theta=\partial_\theta\eta^\theta=0$,
whose solution is given by $\tau=C_1$, $\eta^r=0$, $\eta^\theta=C_2$,
where $C_1,C_2$ are constants.
Hence, we obtain the two point symmetries \eqref{X-L-E},
consisting of rotations ($C_1=0$, $C_2=-1$) and time-translations ($C_1=1$, $C_2=0$).
Both of these point symmetries are variational.
In particular, in evolutionary form \eqref{hatX-L-E},
their action on the Lagrangian is a total time derivative given by
\begin{equation}\label{XactionLagr}
{\rm pr}^{(1)}\hat\mathbf{X}_{L}({\mathcal L}) = \dot R =0,
\qquad
{\rm pr}^{(1)}\hat\mathbf{X}_{E}({\mathcal L}) = \dot R = -\dot{\mathcal L} .
\end{equation}
\emph{Step 2}:
Use Noether's theorem to obtain first integrals from the variational point symmetries.
The action \eqref{XactionLagr} of the variational point symmetries \eqref{hatX-L-E}
on ${\mathcal L}$ gives $R=0$ and $R=-{\mathcal L}$ respectively.
Then, through the Noether correspondence \eqref{I},
this yields the first integrals $I=L$ and $I=E$,
which are given by the angular momentum \eqref{L} and the energy \eqref{E}.
\emph{Step 3}:
Re-write the Hamiltonian system \eqref{eom} in first-order form using the previous first integrals.
First, expressions \eqref{L} and \eqref{E} for the first integrals directly yield
\begin{equation}\label{1stordsys1}
\dot r = \frac{{\rm sgn}(\dot r)\sqrt{2E(\lambda r^2 +1) - L^2/r^2 - \omega^2r^2}}{\lambda r^2 +1} = F^r,
\qquad
\dot \theta = \frac{L}{r^2(\lambda r^2 +1)} = F^\theta.
\end{equation}
Next, since the first integrals are time-independent, they satisfy
\begin{equation}\label{1stordsys2}
\dot L =0,
\qquad
\dot E =0.
\end{equation}
These four equations constitute a first-order form for the Hamiltonian system \eqref{eom}.
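As a quick numerical illustration of this step, the following minimal C++ sketch (our own cross-check, with sample values of $\lambda$, $\omega$ and arbitrary initial data) integrates the system using the accelerations $f^r$, $f^\theta$ re-derived here from the Lagrangian ${\mathcal L}$ above, and confirms that $L$ and $E$ remain constant along the numerical solution, as required by \eqref{1stordsys2}.
\begin{verbatim}
#include <cstdio>

const double lam = 0.1, om = 1.0;  // sample values of lambda and omega

// y = (r, theta, rdot, thetadot); dy = (rdot, thetadot, f^r, f^theta),
// with f^r, f^theta obtained from the Euler-Lagrange equations of L.
void deriv(const double y[4], double dy[4]) {
    double r = y[0], vr = y[2], vt = y[3], g = 1.0 + lam*r*r;
    dy[0] = vr;
    dy[1] = vt;
    dy[2] = (lam*r*(r*r*vt*vt - vr*vr) + g*r*vt*vt - om*om*r/(g*g)) / g;
    dy[3] = -2.0*vr*vt*(1.0 + 2.0*lam*r*r) / (r*g);
}

// L and E expressed through (r, rdot, thetadot), cf. (1stordsys1)
double angmom(const double y[4]) {
    double g = 1.0 + lam*y[0]*y[0];
    return g*y[0]*y[0]*y[3];
}
double energy(const double y[4]) {
    double g = 1.0 + lam*y[0]*y[0];
    return 0.5*g*(y[2]*y[2] + y[0]*y[0]*y[3]*y[3])
         + 0.5*om*om*y[0]*y[0]/g;
}

int main() {
    double y[4] = {1.0, 0.0, 0.3, 0.8}, dt = 1e-3; // arbitrary initial data
    double L0 = angmom(y), E0 = energy(y);
    for (int n = 0; n < 20000; ++n) {              // classical RK4 step
        double k1[4], k2[4], k3[4], k4[4], tmp[4];
        deriv(y, k1);
        for (int i = 0; i < 4; ++i) tmp[i] = y[i] + 0.5*dt*k1[i];
        deriv(tmp, k2);
        for (int i = 0; i < 4; ++i) tmp[i] = y[i] + 0.5*dt*k2[i];
        deriv(tmp, k3);
        for (int i = 0; i < 4; ++i) tmp[i] = y[i] + dt*k3[i];
        deriv(tmp, k4);
        for (int i = 0; i < 4; ++i)
            y[i] += dt/6.0*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]);
    }
    // drift should be at the level of the integrator error
    std::printf("dL = %.3e  dE = %.3e\n", angmom(y)-L0, energy(y)-E0);
}
\end{verbatim}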
\emph{Step 4}:
Find all point symmetries of the first-order system such that
every joint invariant of the variational point symmetries is preserved.
An invariant of a point symmetry $\mathbf{X}$ is a function $\chi$ of $t,r,\theta$ that is annihilated by the symmetry generator, $\mathbf{X}(\chi)=0$.
The joint invariants of the two variational point symmetries \eqref{X-L-E}
clearly consist of only $\chi=r$, since $\mathbf{X}_L(\chi)=-\partial_\theta\chi=0$ and $\mathbf{X}_E(\chi)=\partial_t\chi=0$ force $\chi$ to depend on $r$ alone.
To preserve this invariant,
we search for point symmetries on $(t,r,\theta,E,L)$ of the infinitesimal form
\begin{equation}\label{Y}
\mathbf{Y} = \tau(r,L,E)\partial/\partial t + \eta^\theta(r,L,E)\partial/\partial\theta + \eta^L(r,L,E)\partial/\partial L + \eta^E(r,L,E)\partial/\partial E.
\end{equation}
The condition for the first-order system \eqref{1stordsys1}--\eqref{1stordsys2} to be infinitesimally invariant is given by
\begin{gather*}
0 ={\rm pr}^{(1)}\hat\mathbf{Y}(\dot L)|_{\mathcal E}
= F^r\partial_r\eta^L,
\qquad
0 ={\rm pr}^{(1)}\hat\mathbf{Y}(\dot E)|_{\mathcal E}
= F^r\partial_r\eta^E,
\\
0 ={\rm pr}^{(1)}\hat\mathbf{Y}(\dot \theta - F^\theta)|_{\mathcal E}
= \big( F^r(\partial_r\eta^\theta -F^\theta\partial_r\tau) -\eta^E\partial_E F^\theta - \eta^L\partial_L F^\theta \big),
\\
0 ={\rm pr}^{(1)}\hat\mathbf{Y}(\dot r - F^r)|_{\mathcal E}
= -\big( (F^r)^2\partial_r\tau + \eta^E\partial_E F^r + \eta^L\partial_L F^r \big),
\end{gather*}
using the symmetry generator in evolutionary form
$\hat\mathbf{Y}= (\eta^\theta -\tau F^\theta)\partial/\partial{\theta} -\tau F^r\partial/\partial{r} + \eta^L\partial/\partial{L} + \eta^E\partial/\partial{E}$.
These determining equations yield the linear system
\begin{gather}
\partial_r\eta^L=0,
\quad
\partial_r\eta^E=0,
\\
\partial_r\tau = -(\eta^E\partial_E F^r + \eta^L\partial_L F^r)/(F^r)^2,
\quad
\partial_r\eta^\theta = \eta^E\partial_E(F^\theta/F^r) +\eta^L\partial_L(F^\theta/F^r),
\end{gather}
which can be straightforwardly solved.
Up to arbitrary functions of $L$ and $E$,
the general solution is given by
$\tau = C_1\partial_E\Theta + C_2\partial_E T +C_3$,
$\eta^\theta = -C_1\partial_L\Theta - C_2\partial_LT -C_4$,
$\eta^L = C_1$, $\eta^E=C_2$,
where $C_1,C_2,C_3,C_4$ are constants.
Hence we obtain four point symmetries
\begin{gather}
\mathbf{Y}_L = -\partial/\partial{\theta} = \mathbf{X}_L,
\qquad
\mathbf{Y}_E = \partial/\partial{t} = \mathbf{X}_E,
\label{Y-L-E}
\\
\mathbf{Y}_\Theta =\partial_E\Theta\partial/\partial{t} -\partial_L\Theta \partial/\partial{\theta} + \partial/\partial{L},
\qquad
\mathbf{Y}_T =-\partial_E T\partial/\partial{t} +\partial_L T \partial/\partial{\theta} - \partial/\partial{E}.
\label{Y-Theta-T}
\end{gather}
\emph{Step 5}:
Convert the additional point symmetries into dynamical symmetries of the Hamiltonian system \eqref{eom}.
The two symmetries $\mathbf{Y}_L$ and $\mathbf{Y}_E$ are clearly inherited from the
rotation and time-translation symmetries \eqref{X-L-E} of the Hamiltonian system \eqref{eom}.
Since these symmetries are the only point symmetries admitted by this system,
the two additional symmetries $\mathbf{Y}_\Theta$ and $\mathbf{Y}_T$ must therefore yield
dynamical symmetries when they are transformed to act on solutions $(r(t),\theta(t))$.
Their action is obtained simply by first expressing the symmetries in evolutionary form
and then projecting the generators onto the coordinate space $(r,\theta)$.
This yields
$\hat\mathbf{Y}_\Theta = -(\partial_L \Theta +\dot\theta\partial_E \Theta)\partial/\partial{\theta} -\dot r\partial_E\Theta\partial/\partial{r} =\hat\mathbf{X}_\Theta$
and
$\hat\mathbf{Y}_T = (\partial_L T+\dot\theta\partial_E T )\partial/\partial{\theta} +\dot r\partial_E T\partial/\partial{r} =-\hat\mathbf{X}_T$,
which are the two dynamical symmetries \eqref{hatX-Theta} and \eqref{hatX-T}
obtained previously.
\emph{Step 6}:
Apply Noether's theorem to obtain first integrals by using the dynamical symmetries that are variational.
There are two methods to verify a priori that the two dynamical symmetries $\hat\mathbf{Y}_\Theta$ and $\hat\mathbf{Y}_T$ are variational, and to derive the corresponding first integrals.
A direct method \cite{BA-book,Olver-book}
consists of showing that the action of the symmetries on the Lagrangian is a total time derivative, ${\rm pr}^{(1)}\hat\mathbf{Y}({\mathcal L})=\dot R$.
This also yields $R$ so that the first integral \eqref{I} can be obtained.
However, it is somewhat complicated to find $R$ explicitly.
An alternative method which by-passes this complication
uses only the dynamical symmetries themselves \cite{BA-book,AncBlu98}.
First, a dynamical symmetry
$\hat\mathbf{Y} = P^r\partial/\partial{r} + P^\theta\partial/\partial{\theta}$ is variational iff
the pair $(Q^r,Q^\theta)$ given by the Noether correspondence \eqref{QfromP} is a multiplier,
satisfying
\begin{equation}\label{Q-deteqns}
\frac{\delta}{\delta r}\Big((\ddot r -f^r)Q^r+(\ddot \theta -f^\theta)Q^\theta\Big)=0,
\quad
\frac{\delta}{\delta \theta}\Big((\ddot r -f^r)Q^r+(\ddot \theta -f^\theta)Q^\theta\Big)=0.
\end{equation}
Then, the resulting first integral \eqref{I} can be obtained by a line integral formula
\begin{equation}\label{I-lineintegral}
I = \int_{\mathcal C} \Big( (1+\lambda r^2)^{-1}\dot P^r\,dr + (r^2(1+\lambda r^2))^{-1}\dot P^\theta\,d\theta -(1+\lambda r^2)P^r\,d\dot r -r^2(1+\lambda r^2)P^\theta\,d\dot \theta \Big)\big|_{\mathcal E}
\end{equation}
where $\mathcal C$ is any curve from $(t_0,r_0,\theta_0,\dot r_0,\dot\theta_0)$
to $(t,r,\theta,\dot r,\dot\theta)$.
It is straightforward to verify that the multiplier equations \eqref{Q-deteqns} hold
for both of the dynamical symmetries $\hat\mathbf{Y}_\Theta$ and $\hat\mathbf{Y}_T$,
using the pairs \eqref{Q-Theta} and \eqref{Q-T}.
The corresponding first integrals from formula \eqref{I-lineintegral}
(up to an additive constant) are $I=\Theta$ and $I=T$,
given by expressions \eqref{Theta} and \eqref{T}, respectively.
Altogether, the two first integrals arising from the dynamical symmetries,
plus the two first integrals arising from the point symmetries,
comprise the complete set of four functionally-independent first integrals
\eqref{L}--\eqref{T} for the Hamiltonian system \eqref{eom}.
\subsection{Variational symmetry algebra}
The commutator structure of the four variational symmetries \eqref{hatX-L-E}--\eqref{hatX-T}
can be derived in several different ways.
A direct method is to express the commutators in terms of the action of the prolonged symmetries on the components of the symmetry generators.
However, the prolongations can be avoided by instead using the representation of the symmetries
in the form \eqref{Y-L-E}--\eqref{Y-Theta-T}
given by point symmetries acting on the coordinate space $(t,r,\theta,E,L)$.
The commutators are then simple to compute,
and we find that every commutator vanishes.
Hence the four point symmetries \eqref{Y-L-E}--\eqref{Y-Theta-T}
comprise an abelian algebra.
The same algebra structure then holds for the four variational symmetries \eqref{hatX-L-E}--\eqref{hatX-T}.
We remark that this result provides an alternative way to obtain the corresponding first integrals,
by utilizing the canonical coordinates of the four symmetries \eqref{Y-L-E}--\eqref{Y-Theta-T}
as follows.
The canonical form of a symmetry \eqref{Y} is given by
$\mathbf{Y} = \partial/\partial{\zeta}$
where $\zeta(t,r,\theta,L,E)$ is a canonical coordinate satisfying
$\mathbf{Y}(\zeta)=\tau\partial_t\zeta + \eta^\theta\partial_\theta\zeta + \eta^L\partial_L\zeta + \eta^E\partial_E\zeta=1$.
Since the four symmetries \eqref{Y-L-E}--\eqref{Y-Theta-T} are mutually commuting,
there exists a point transformation from $(t,r,\theta,E,L)$ to the coordinate space
$(r,\zeta^L,\zeta^E,\zeta^\Theta,\zeta^T)$
consisting of the four canonical coordinates and the joint invariant of all four symmetries.
It is straightforward to see that $\zeta^L = -\Theta$, $\zeta^E = T$, $\zeta^\Theta=L$, $\zeta^T=-E$.
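Indeed, in the coordinates $(t,r,\theta,L,E)$ the expressions \eqref{Theta} and \eqref{T} differ from $\theta$ and $t$, respectively, only by functions of $(r,L,E)$, so $\mathbf{Y}(\zeta)=1$ is readily checked in each case; for instance,
\begin{equation*}
\mathbf{Y}_L(-\Theta) = \partial_\theta\Theta = 1 \, ,
\qquad
\mathbf{Y}_\Theta(L) = \partial_L L = 1 \, .
\end{equation*}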
\section{Concluding remarks}
Finally, we stress that the approach presented in this paper is widely applicable to many other Hamiltonian systems
and will be helpful for unveiling the connections between local and global aspects of their integrability and symmetry properties.
In particular, we plan to study first integrals and local symmetries of
more general central force systems with Hamiltonians of the form $H=m(r)^{-1}((p^r)^2 + (p^\theta/r)^2)+ U(r)$
having a position-dependent mass,
as well as generic (for instance, H\'enon-Heiles type) nonlinearly coupled systems of oscillators.
Indeed, from a more general perspective,
an important open problem is how to detect global regularity of first integrals.
\section*{Acknowledgements}
S.C.A. is supported by an NSERC research grant.
A.B. has been partially supported by Ministerio de Ciencia, Innovaci\'on y Universidades (Spain) under grant MTM2016-79639-P (AEI/FEDER, UE), and by Junta de Castilla y Le\'on (Spain) under grant BU229P18. M.L.G. gratefully acknowledges support by Junta de Andaluc\'{i}a (Spain) for the research group grant FQM-201.
\section{Introduction}\label{s1}
The problem of constructing a consistent theory of interacting fields with higher spin is a long-standing one. For decades it was mainly an interesting theoretical curiosity, but nowadays, thanks to advances in string field theory and attempts to construct a quantum theory of gravity, we are seeing a renaissance of interest in higher spins, which may illuminate some fundamental aspects of those theories.
Before embarking on a journey into the yet uncharted territory of higher spin theory, in this section we review what we already know about spin.
After a basic introduction, in \textbf{Section \ref{s2}} we discuss some of the main motivating factors that keep driving new research in this area.
As history frequently teaches us important lessons, in \textbf{Section \ref{s3}} we go through a ``folk'' history of higher spin theory, with the obvious advantage of hindsight. One should keep in mind that this is far from a complete history; it is a story consciously biased towards the topics we will be dealing with here. This section also serves as an appropriate place to expose some of the most important ideas and developments in higher spin theory.\newline We also review several theorems whose implications seem to render the investigation of higher spins pointless.
In \textbf{Section \ref{s5}}, as a warm-up exercise, we review the well-known theories of fields with spin $0, 1$ and $2$, carefully examining the role of spin and trying to catch a glimpse of the underlying pattern common to all spins.
Finally, in \textbf{Section \ref{s6}}, we present the higher spin theory of massless bosons in flat spacetime using an elegant mathematical formalism that works for all spins in all dimensions. We discuss both free and interacting theory (with a generic external current) in their constrained and unconstrained forms. We also examine the issue of non-locality and we investigate the geometric formulation of higher spin theories.
In \textbf{Appendix \ref{a1}}, we provide some explicit calculational results in higher spin theory. These results were obtained and checked using a simple C\texttt{++} code written specifically for this purpose. The core snippets of the code are provided in \textbf{Appendix \ref{a2}}.
\newpage
\subsection{What is spin?}
Spin is an intrinsic property of relativistic fields, which (after quantization) give rise to particles. It can be viewed as an additional degree of freedom unrelated to spatial degrees of freedom specified by position and momentum.
Its name comes from the fact that, mathematically, spin behaves like quantized angular momentum. Unlike orbital angular momentum, however, spin quantum numbers may take half-integer values, and fundamental particles cannot be made to stop ``spinning'' or to spin faster or slower; spin can only change its orientation.
Particles (fields) with integral spin are called \textbf{bosons}, and those with half-integral spin are known as \textbf{fermions}.
\textit{Spin-statistics theorem}, one of the rigorous results of axiomatic quantum field theory, tells us that bosons and fermions behave in a drastically different manner. While the former respect the \textbf{Bose-Einstein} statistics, the latter respect the \textbf{Fermi-Dirac} statistics, and as a consequence are subject to the \textit{Pauli exclusion principle}. Fermions form particles of matter, while bosons mediate the interactions between them.
\subsection{Where does spin come from?}
The existence of spin is a direct consequence of the most fundamental mathematical properties of our universe, \textbf{spacetime symmetries}. This is why the true meaning of spin has to be discussed in the context of a fully Lorentz-invariant theory.
Quantum field theory, which is the underlying formalism describing the Standard Model of particle physics, is such a theory. We introduce a field for each fundamental particle species, which transforms \textit{nicely} under Lorentz transformations. Once we pick a particular representation of the Lorentz transformations, it specifies the spin. After quantizing the field, one finds that the field operator creates or annihilates particles of definite spin, which was of course, the spin associated with the classical field to begin with.
\subsubsection{Spin from irreducible representations in four-dimensional spacetime}\label{IRREPs}
Instead of simply looking at Lorentz transformations, we have to look at the full class of spacetime isometries (i.e. isometries of \textit{Minkowski space} $\mathcal{M}^4$). They are locally described by the \textit{Poincaré group}
\begin{equation}
\mathcal{P} = \mathbb{R}^{(1,3)} \rtimes \mathrm{SO} (1,3) \, ,
\end{equation}
a ten-dimensional noncompact Lie group, corresponding to ten independent symmetries (3 spatial translations, time translation, 3 spatial rotations and 3 Lorentz boosts).\newline
A closer look at continuous local symmetries leads us to the stabilizer of the origin in the identity component of $\mathcal{P}$, the \textit{proper orthochronous Lorentz group},
\begin{equation}
\mathrm{SO}^+ (1,3) = \left\{ \Lambda \in \mathrm{GL} (4, \mathbb{R}) \, \big| \, \Lambda^T \eta \Lambda = \eta, \, \eta = \mathbf{diag} (-1,1,1,1) \right\} \, .
\end{equation}
The fundamental physical fields must carry the irreducible representations of its Lie algebra,
\begin{equation}
\mathfrak{so}(1,3) \cong \mathfrak{sl} (2, \mathbb{C}) \, ,
\qquad
\mathfrak{so}(1, 3)_\mathbb{C} \cong \mathfrak{su}(2)_\mathbb{C} \oplus \mathfrak{su}(2)_\mathbb{C} \, ,
\end{equation}
which generates the covering Lie group of $\mathcal{P}$,
\begin{equation}
\mathbb{R}^{(1,3)} \rtimes \mathrm{SL} (2,\mathbb{C}) \twoheadrightarrow \mathcal{P} \, .
\end{equation}
Elements of this group are generated by terms of the form
\begin{equation}
\exp(i a_\mu P^\mu) \exp\left(\frac{i}{2} \omega_{\mu\nu} M^{\mu\nu }\right) \, ,
\end{equation}
where $a_\mu$ parametrizes translations generated by $P^\mu$, and $\omega_{\mu\nu}$ parametrizes Lorentz transformations (rotations and boosts) generated by $M^{\mu\nu}$.
This Lie algebra is defined by
\begin{align}
\left[ P_\mu, P_\nu \right] &= 0 \, , \\
\left[ M_{\mu\nu}, P_\rho \right] &= i \, \eta_{\rho [ \mu} P_{\nu ]} \, , \\
\left[ M_{\mu\nu}, M_{\rho\sigma} \right] &= i \, \eta_{[ \mu \nu} M_{\rho \sigma]} \, ,
\end{align}
where $[\dots]$ stands for unweighted anti-symmetrization of indices with the minimal number of terms and $M_{\mu\nu}$ is defined in terms of the rotation generator $J_i$ and boost generator $K_i$ as
\begin{align}
J_i &= \frac{1}{2} \varepsilon_{ijk} M^{jk} \, , \\
K_i &= M_{0i}.
\end{align}
The \textit{Casimir invariants} of the Poincaré group are
\begin{equation}
P_\mu P^\mu := m^2 \, ,
\end{equation}
where $m$ stands for mass and\footnotemark
\footnotetext{This is true if $m \neq 0$, but $m=0$ does not imply $W^2=0$.}
\begin{equation}
W_\mu W^\mu = m^2 s (s+1),
\end{equation}
where $s$ stands for spin\footnotemark.\footnotetext{At this level of analysis, which is purely mathematical, \textit{mass} and \textit{spin} are simply names we give to these quantities. Their physical properties only become apparent after introducing the actual physics, i.e. equations of motion.}
$W_\mu$ is the \textit{Pauli-Lubanski} pseudovector, defined as the Hodge dual of $\mathbf{M} \wedge \mathbf{P}$, i.e.
\begin{equation}
W_\mu := \frac{1}{2} \varepsilon_{\mu\nu\rho\sigma} M^{\nu\rho} P^\sigma.
\end{equation}
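To see where this eigenvalue comes from, one can evaluate $W_\mu$ in the rest frame of a massive state, where $P^\sigma = (m,0,0,0)$. Then $W_0 = 0$ and, up to a sign fixed by the convention for $\varepsilon_{\mu\nu\rho\sigma}$,
\begin{equation*}
W_i = \frac{m}{2}\, \varepsilon_{ijk} M^{jk} = m J_i \, ,
\qquad
\text{so that}
\qquad
W_\mu W^\mu = m^2 \mathbf{J}^2 \, ,
\end{equation*}
whose eigenvalues on a spin-$s$ representation are precisely $m^2 s(s+1)$.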
\newpage
Next, we look at \textit{Wigner's little groups}, stabilizer subgroups of various mass states.
\begin{itemize}
\item $m>0$, stabilizer of $P=(m,0,0,0)$
\newline$\implies$ massive states with mass $m$ and spin $s \in \mathbb{N}_0/2$
\item $m=0$ and $P_0 > 0$, stabilizer of $P=(k,0,0,k)$
\newline$\implies$ $s \in \mathbb{N}_0/2$ IRREPs and \textit{continuous spin} representation
\item $m^2 < 0$, stabilizer of $P=(0,0,0,m)$
\newline$\implies$ \textit{tachyons}\footnotemark \footnotetext{Fields with an imaginary mass, which propagate faster-than-light excitations and lead to theories with instabilities and violation of causality.}
\item $m=0$ and $P^\mu=0$
\newline$\implies$ trivial representation, the \textit{vacuum} state
\end{itemize}
We can now classify the physically relevant\footnotemark\, unitary irreducible representations of the double cover of the Poincaré group by two numbers, $m \in \mathbb{R}^+_0$ and $s \in \mathbb{N}_0/2$.
\footnotetext{We follow the standard prescription of ignoring tachyons and \textit{continuous spin} representations.\newline The latter seem to give rise to fields whose excitations cannot be compactly localized, but are instead localized on semi-infinite spacelike strings (see \cite{infspin}) that we do not seem to find in nature.}
These irreducible representations are further classified by two numbers, $j_1$ and $j_2$ such that $j_1 + j_2 = s$ and are labeled as $(j_1, j_2)$ representations, summarized in the table below.\\
\begin{tabular}{ r | c | c | c | c}
\hline
\textbf{spin} & \textbf{representation} & \textbf{field} & \textbf{eq. of motion} & \textbf{example} \\
\hline
$0$ & $(0,0)$ & scalar & \textit{Klein-Gordon} & Higgs \\
$1/2$ & $(\frac{1}{2},0) \oplus (0, \frac{1}{2})$ & spinor & \textit{Dirac} & electron \\
$1$ & $(\frac{1}{2}, \frac{1}{2})$ & vector & \textit{Proca} & photon \\
$3/2$ & $(\frac{1}{2},1) \oplus (1, \frac{1}{2})$ & spinor-vector & \textit{Rarita-Schwinger} & gravitino\footnotemark \\
$2$ & $(1,1)$ & 2-tensor\footnotemark & \textit{linearized Einstein} & graviton\footnotemark \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
\hline
\end{tabular}\newline
\footnotetext[5]{Graviton's fermionic superpartner in theories with Bose-Fermi symmetry (supersymmetry), specifically in supergravity.}
\footnotetext[6]{Symmetric tensor of order two.}
\footnotetext[7]{Hypothetical quantized excitation of the gravitational field. }
The dots at the end of the table represent the fact that, from a mathematical point of view, nothing prevents the existence of higher spin fields at this level of analysis, nor does anything directly imply that we should expect them to behave differently from their lower spin counterparts.
For a detailed group-theoretical analysis of spin, see, for example, \cite{weinbergQFT}, \cite{sternberg}, \cite{BBIRREPs} or \cite{schwichtenberg}.
\newpage
\subsubsection{Spin from an action principle}
It was long thought that spin could not be formulated via an action principle within the framework of Lagrangian particle mechanics.
This issue was particularly relevant with the introduction of Feynman's sum-over-histories approach to quantization.\newline Even Feynman himself wrote in his 1965 book \cite{feynmanpath}:\newline
\q{With regards to quantum mechanics, path integrals suffer most grievously from a serious defect. They do not permit a discussion of spin operators or other such operators in a simple and lucid way. ... It is a serious limitation that the half-integral spin of the electron does not find a simple and ready representation.}\newline
However, this is actually possible and relatively straightforward, but the appropriate phase space formulation was not fully obvious until several years later \cite{pathspin}.
One begins by examining how spin behaves classically. It can be thought of as a spinning top, or a little arrow with fixed length, sticking out from the particle and pointing in a particular direction in three-dimensional physical space. Therefore, it is reasonable to assume that its phase space is a $2$-sphere $\mathcal{S}^2$, and its dynamical variables can be taken to be the polar and azimuthal angles $\theta$ and $\phi$.
This establishes the point particle as an entity described not only by its position and momentum, but also by the orientation of its spin "arrow". The rest is simply a matter of mathematical construction.
It is precisely this that caused so much confusion, because in order to write the action, one has to find the proper local geometric invariant on $\mathcal{S}^2$.
The volume form on a $2$-sphere is
\begin{equation}
\mathbf{\omega} = \sin{\theta} \, d \phi \, \wedge \, d\theta \, ,
\end{equation}
and it is invariant under rotations. Since $\mathbf{\omega}$ is a closed form, we can write it locally as an exact form, i.e.
\begin{equation}
\mathbf{\omega} = d \mathbf{\chi} = d (\cos{\theta} \, d \phi) \, ,
\end{equation}
so the action for spin $J$ can be written as
\begin{equation}
S = J \int \chi = J \int d\phi \, \cos{\theta} = J \int dt \, \dot{\phi} \cos{\theta} \, . \label{int}
\end{equation}
A distinctive property of this system is that its phase space is compact (i.e. closed and bounded), which implies a finite-dimensional Hilbert space. Furthermore, invariance of $\mathcal{S}^2$ (as a manifold embedded in $\mathbb{R}^3$) under $\mathrm{SO}(3)$ guarantees the existence of operators with the correct commutation relations\footnotemark.
\footnotetext{We will not derive this here. The interested reader is referred to \cite{condmatter}.}
Let us demonstrate how quantized spin arises when we plug \eqref{int} into the Feynman integral. The spin term, with $J=j\hbar$, reads \footnotemark \footnotetext{Here and only here, we write $\hbar$ explicitly instead of working with natural units, where $\hbar=c=1$.}
\begin{equation}
e^{i S / \hbar} = \exp\left(ij\int dt \, \dot{\phi} \cos{\theta}\right) \, .
\end{equation}\newpage
We proceed by using Stokes' theorem\footnotemark\, on the integral, \footnotetext{$\int_{\partial \Omega} \omega = \int_{\Omega} d\omega$}
\begin{equation}
\int dt \, \dot{\phi} \cos{\theta} = \oint_C d \phi \cos{\theta} = \int_M d \phi d \theta \sin{\theta}
\end{equation}
where $C$ denotes a closed path on $\mathcal{S}^2$, bounding a $2$-surface $M$, i.e. $\partial M = C$. Since $\mathcal{S}^2$ is compact, the choice of $M$ is not unique, but the difference between two possible choices is simply the integral over the entire $\mathcal{S}^2$. In other words, the difference between two possible choices for the action is
\begin{equation}
\Delta S = j \hbar \int_{\mathcal{S}^2} d \phi d \theta \sin{\theta} = 4 \pi \hbar j \, .
\end{equation}
The path integral cannot be multivalued, which in turn means that $e^{iS/\hbar}$ has to be single-valued. Therefore,
\begin{equation}
e^{i \Delta S / \hbar} = 1 \quad \implies \quad 4 \pi j = 2 \pi N \quad (N \in \mathbb{Z}) \, ,
\end{equation}
i.e. spin $j$ can take only integer and half-integer values, the same conclusion we arrived at using group representation theory in \textbf{Section \ref{IRREPs}}.
Once again, nothing at the mathematical level of analysis prevents the existence of arbitrarily high spins nor does it indicate any sort of inconsistency.
\newpage
\subsection{What do we mean by higher spin?}
Historically, the term \textit{higher spin}\footnotemark\, was used to refer to several different domains of theoretical constructs.
\footnotetext{Often abbreviated as HS.}One of the reasons for including only spin $0$, $1/2$ and $1$ fields in the domain of \textit{lower spin} was the fact that only those result in renormalizable quantum field theories.
Today, by \textit{higher spin}, we mean spin greater than \textbf{two}, i.e. spin-$5/2$ and higher for fermions, spin-$3$ and higher for bosons. This seems more appropriate since we \textit{do} have consistent \textit{classical} field theories for $s\leq2$, but all higher spins yield problematic constructions even before quantization.
Constructing a consistent interacting theory of HS fields (sometimes referred to as \textit{higher-spin gravity} in the case of massless interacting fields) has been a long-standing problem in theoretical physics. So far, we only have a fully consistent interacting HS theory in $\mathrm{(A)dS}$ spacetimes, which has become known as \textit{Vasiliev's theory}. Similar attempts at constructing such theories in flat space have not been successful. Unfortunately, taking the flat-space limit of Vasiliev's theory in $\mathrm{(A)dS}$ in hope of recovering a theory in flat spacetime is by no means trivial and possibly not even well-defined.
Interestingly, $\mathrm{(A)dS}$ spacetimes are highly symmetrical, and their symmetry group $\mathrm{SO}(1,4) \cong \mathbf{Sp}(2,2)$ reduces\footnotemark\, to the Poincaré group in the limiting case of infinite (anti-)de Sitter radius, which may point to the $\mathrm{(A)dS}$ group as being more fundamental \cite{missed}.
\footnotetext{This can be accomplished rigorously using the İnönü-Wigner group contraction.}
\newpage
\section{Why study HS theory?}\label{s2}
\subsection{String theory}\label{ST}
String theory is a promising candidate for a consistent theory of quantum gravity. Its perturbative spectrum consists of states with arbitrarily high spins and masses.
One could say that higher spin gravity lies between supergravity and string theory, which makes it particularly interesting.
In string theory, one finds an \textbf{infinite tower of massive string excitations} with increasing spin. The existence of this infinite tower of higher-spin fields is crucial for the absence of ultraviolet divergences, an extremely important feature of string theory.
The only free parameter in string theory is the \textit{string constant}, denoted by $\alpha'$, which determines the characteristic length and mass scale of strings. In the $\alpha' \to 0$ limit, the theory reduces to supergravity, i.e. a theory with massless modes. On the other hand, in the $\alpha' \to \infty$ limit, all excitations become massless, and the theory resembles higher spin gravity.
Furthermore, we know that it is possible to have a theory with only massless fields in its formal construction, which nevertheless produces no massless excitations after quantization. In other words, it is possible to begin with a Lagrangian with massless fields, which describes a quantum field theory without massless propagating degrees of freedom. There are at least two mechanisms, familiar from the Standard Model, that exhibit such behaviour. One is \textit{spontaneous symmetry breaking}, i.e. the \textit{Higgs mechanism} that gives mass to massive fundamental\footnotemark\footnotetext{Spontaneous symmetry breaking also occurs in non-fundamental descriptions, for example in the theory of superconductivity and superfluidity.} particles. The other one is \textit{color confinement}, the mechanism responsible for clumping of gluons and quarks into colorless hadrons. It is possible that a similar mechanism underlies the generation of massive states in string theory, in which case we would have to know how to construct a massless higher spin theory. There are strong indications that symmetries of string theory form a very large group, much larger than what can be seen using the perturbative approach, which spontaneously breaks down to a smaller group, giving mass to higher-spin excitations.
Therefore, it is plausible that a firm understanding of HS theory could shed some light on the underlying mathematical structure of string theory. In particular, we would like to know what symmetries the theory possesses and what is the notion of spacetime geometry in string theory and HS gravity.
\newpage
\subsection{AdS/CFT correspondence}
The $\mathrm{AdS/CFT}$ (\textit{anti-de Sitter/conformal field theory}) correspondence\footnotemark \footnotetext{Also known as \textit{Maldacena duality} or \textit{gauge/gravity duality}.}, in its strictest formulation, posits an equivalence between the theory of quantum gravity in anti-de Sitter spacetimes, as formulated in string theory, and conformal field theory on its boundary. However, although it seems to be valid generally, it is still technically a conjecture and its rigorous construction has not yet been completed.
There are reasons to believe that studying HS theory could help us not only to understand string theory but also to elucidate this conjectured correspondence, particularly in the prominent example of type IIB string theory (a theory on $AdS_5 \times \mathcal{S}^5$ with five spacetime and five compact dimensions) and $\mathcal{N}=4$ supersymmetric Yang-Mills theory on its four-dimensional boundary.
\subsection{Why not?}
Of course, from the viewpoint of pure mathematics, one needs no justification for studying anything. But in the case of HS theory, there is more to it than just curiosity.
We know that a rich mathematical structure emerges from the fully consistent (necessarily non-linear!) theory of spin-$2$ fields, i.e. \textbf{pseudo-Riemannian geometry}. Some say that Einstein's general theory of relativity was, in a sense, discovered prematurely, and it was only Einstein's deep geometric intuition that allowed him to make the leap to a fully geometrized description of gravity. Had we persisted in building the theory in a bottom-up way from a linear spin-$2$ theory, we might not have ended up with such an elegant theory years before the dawn of quantum field theory.
It is not known at the time of writing whether fully consistent spin-$3$ and higher spin theories give rise to some exciting new connections between physics and mathematics, perhaps even hitherto unknown mathematical structures, but it doesn't seem so unlikely that they might.
\newpage
\section{Folk history of HS theory}\label{s3}
One cannot fully appreciate the struggle to understand higher spins without its history. For that purpose, we review here the most important steps forward\footnotemark\footnotetext{Like in every area of research, some steps that did not quite lead forward have been made, which was not understood at the time. Today, we understand more, so we can only pick those results that lead somewhere, hence the title \textit{folk} history.} in understanding higher spins, and we use this opportunity to expose the very basics of the theory. We restrict our attention to fields with integral spin, since those are the focus of this thesis.
\subsection{Fierz-Pauli equations (1939)} \label{fpe}
Fierz and Pauli constructed a consistent set of equations describing free massive fields of arbitrary spin \cite{fierzpauli}.
They start with the Klein-Gordon equation,
\begin{equation}
(\Box - M^2) \phi = 0 \, ,
\end{equation}
which describes spin-$0$ fields, and generalize it directly to higher spins, imposing additional consistency constraints.
The Fierz-Pauli equations describing a spin-$s$ field are
\begin{align}
\phi_{\mu_1 \cdots \mu_s} &= \phi_{(\mu_1 \cdots \mu_s)}\, , \label{FP1} \\
(\Box - M^2) \phi_{\mu_1 \cdots \mu_s} &= 0 \, , \label{FP2} \\
\partial^{\mu_1} \phi_{\mu_1 \cdots \mu_s} &= 0 \, , \label{FP3} \\
\eta^{\mu_1 \mu_2} \phi_{\mu_1 \cdots \mu_s} &= 0 \, , \label{FP4}
\end{align}
where $(\dots)$ denotes the normalized symmetrization of indices. Equation \eqref{FP1} establishes the field as a fully symmetric tensor of order $s$. This condition ensures that the field transforms in accordance with the desired spin representation.
To ensure that it is an irreducible representation, it must be traceless, which is guaranteed by \eqref{FP4}.
Finally, \eqref{FP3} imposes the transversality condition, needed for the field to propagate the correct number of degrees of freedom, which we calculate in the following segment.
From group-theoretical considerations, we expect all\footnotemark\, massless bosons in four-dimensional Minkowski spacetime to propagate exactly \textbf{two} independent degrees of freedom.\footnotetext{Except scalar bosons, which always have a single degree of freedom. More precisely, \textit{all} massless bosons have a single degree of freedom, but it gets doubled due to parity transformations, except in the case of scalars, whose irreducible representations are one-dimensional.} In general, a spin-$s$ field in $D$-dimensional spacetime should propagate\footnotemark
\begin{equation}
\#(D-2,s) - \#(D-2, s-2) = {{D+s-3} \choose {s}} - {{D+s-5} \choose {s-2}}
\end{equation}
independent degrees of freedom, as can be seen for example, using Wigner's classification. $\#(D,s)$ denotes the number of independent components of a fully symmetric tensor of order $s$ in $D$-dimensional spacetime. It is a simple exercise in combinatorics to check that $\#(D,s)={{D+s-1} \choose {s}}$.
A spin-$s$ field is described by a totally symmetric doubly traceless tensor of order $s$, which contains
\begin{equation}
\#(D, s) - \#(D, s-4) = \underbrace{{{D+s-1} \choose {s}}}_{\parbox{2cm}{\tiny{components of a\\ symmetric tensor}}} - \underbrace{{{D+s-5} \choose {s-4}}}_{\parbox{1.8cm}{\tiny{components of \\ its second trace}}}
\end{equation}
independent components.
Gauge invariance eliminates the propagation of spurious degrees of freedom ($2 s^2$ components in $D=4$), leaving the correct number of remaining degrees of freedom.
For a detailed calculation, see, for example, \cite{weinbergQFT} or \cite{BBIRREPs}.
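These counting formulas are easy to tabulate mechanically. The following stand-alone C++ sketch (in the same spirit as, though independent of, the code in \textbf{Appendix \ref{a2}}) evaluates $\#(D,s)$ and checks that in $D=4$ every field with $s\geq 2$ propagates exactly two degrees of freedom, with gauge invariance removing $2s^2$ of the doubly traceless components.
\begin{verbatim}
#include <cstdio>

// #(D,s) = C(D+s-1, s): independent components of a fully symmetric
// tensor of order s in D dimensions; returns 0 for s < 0 by convention.
long long sym(int D, int s) {
    if (s < 0) return 0;
    long long c = 1;
    for (int k = 1; k <= s; ++k) c = c * (D - 1 + k) / k;  // exact at each step
    return c;
}

int main() {
    const int D = 4;
    for (int s = 2; s <= 10; ++s) {
        long long dof  = sym(D-2, s) - sym(D-2, s-2);  // physical DOF
        long long comp = sym(D, s) - sym(D, s-4);      // doubly traceless components
        // gauge invariance should remove comp - dof = 2 s^2 components in D = 4
        std::printf("s=%2d  dof=%lld  components=%lld  removed=%lld  2s^2=%d\n",
                    s, dof, comp, comp - dof, 2*s*s);
    }
}
\end{verbatim}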
\subsection{Singh-Hagen Lagrangian (1974) }
It was not until 35 years later that the proper Lagrangian formulation of Fierz-Pauli equations was constructed, by Singh and Hagen \cite{singhhagen}.
The fundamental obstacle lay in the need for auxiliary non-dynamical fields of spins $s-2, s-3, \dots$, along with the spin-$s$ field.
Let us motivate their construction by starting with the trivial example of $s=1$, and then proceeding to the first non-trivial case of $s=2$.
\subsubsection{Spin-1: no auxiliary fields}
A spin-$1$ field is described by $\phi_\mu(x)$, a Lorentz-tensor of order one (i.e. a 4-vector). Since it only has a single index, we do not have to worry about equation \eqref{FP4}, nor do we have to worry about the symmetry condition \eqref{FP1}. The Lagrangian for $s=1$ is the Proca Lagrangian
\begin{equation}
\mathcal{L}_{(1)} = -\frac{1}{2} (\partial_\mu \phi_\nu)^2 + \frac{1}{2} (\partial \cdot \phi )^2 - \frac{M^2}{2} \phi^2 \label{SH1}
\end{equation}
which produces the Proca equation of motion,
\begin{equation}
\Box \phi_\mu - \partial_\mu (\partial \cdot \phi) - M^2 \phi_\mu = 0 \, . \label{proca}
\end{equation}
At first glance, this is not equal to \eqref{FP2} for $s=1$, but taking a single divergence of \eqref{proca} makes the $\Box$ terms cancel, leaving $M^2 \partial^\mu \phi_\mu = 0$; since $M \neq 0$, this gives
\begin{equation}
\partial^\mu \phi_\mu = 0 \, ,
\end{equation}
which gives the transversality condition \eqref{FP3}. Putting this back into \eqref{proca}, we are indeed left with the spin-$1$ version of equation \eqref{FP2},
\begin{equation}
(\Box - M^2) \phi_\mu = 0 \, .
\end{equation}
\newpage
\subsubsection{Spin-2: scalar auxiliary field}
Following \eqref{FP1} and \eqref{FP4}, a spin-$2$ field is described by $\phi_{\mu \nu}(x)$, a symmetric traceless Lorentz-tensor of order two. Instead of directly generalizing \eqref{SH1} to the spin-$2$ case by using a tensor of order two, we write the Lagrangian with an undetermined real parameter $\alpha$ in place of $1$,
\begin{equation}
\mathcal{L}_{(2)} = -\frac{1}{2} (\partial_\mu \phi_{\nu\rho})^2 + \frac{\alpha}{2} (\partial \cdot \phi_\mu )^2 - \frac{M^2}{2} \phi^2 \, . \label{SH2}
\end{equation}
The reason for doing so will become apparent soon.
The corresponding equation of motion\footnotemark\, is found to be
\footnotetext{$\mathcal{L}_{(2)}$ is varied taking into consideration the symmetry and the tracelessness of $\phi_{\mu \nu}$}
\begin{equation}
\Box \phi_{\mu \nu} - \frac{\alpha}{2} \left( \partial_\mu \partial \cdot \phi_\nu + \partial_\nu \partial \cdot \phi_\mu - \frac{2}{D} \eta_{\mu \nu} \partial^2 \cdot \phi \right) - M^2 \phi_{\mu \nu} = 0 \, , \label{sheom}
\end{equation}
where $D$ is the dimension of spacetime. A single divergence of \eqref{sheom} gives
\begin{equation}
\left( 1 - \frac{\alpha}{2} \right) \Box \partial \cdot \phi_\mu + \alpha \left( \frac{1}{D} - \frac{1}{2} \right) \partial_\mu \partial^2 \cdot \phi - M^2 \partial \cdot \phi_\mu = 0 \, . \label{shtrouble}
\end{equation}
We seem to be in trouble, because (assuming $D>2$) we can only partially restore the transversality condition $\eqref{FP3}$ by setting $\alpha=2$, which eliminates the first term in \eqref{shtrouble}. Had we generalized \eqref{SH1} directly, instead of leaving $\alpha$ undetermined, we would not have been able to eliminate it.
To eliminate the second term in \eqref{shtrouble}, we introduce the auxiliary scalar field $\pi(x)$ by adding to $\mathcal{L}_{(2)}$ (with $\alpha=2$) additional terms with two undetermined real parameters, $c_1$ and $c_2$,
\begin{equation}
\mathcal{L}_\pi = \pi \partial^2 \cdot \phi + c_1 (\partial_\mu \pi)^2 + c_2 \pi^2 \, .
\end{equation}
The corresponding equations of motion for $\mathcal{L} = \mathcal{L}_{(2)}\big\rvert_{\alpha=2} + \mathcal{L}_\pi$ are found to be
\begin{align}
\phi :& \quad \Box \phi_{\mu \nu} - \partial_\mu \partial \cdot \phi_\nu - \partial_\nu \partial \cdot \phi_\mu + \frac{2}{D} \eta_{\mu \nu} \partial^2 \cdot \phi - M^2 \phi_{\mu\nu} + \partial_\mu \partial_\nu \pi - \frac{1}{D} \eta_{\mu \nu} \Box \pi = 0 \, , \label{phieq} \\
\pi :& \quad \partial^2 \cdot \phi + 2(c_2 - c_1 \Box) \pi = 0 \, . \label{sheom2pi}
\end{align}
Taking twice the divergence of \eqref{phieq}, i.e. contracting it by $\partial_\mu \partial_\nu$, and multiplying it by $D$, yields
\begin{equation}
\left( (2-D) \Box - D M^2 \right) \partial^2 \cdot \phi + (D-1) \Box^2 \pi = 0 \, .\label{sheom2phi}
\end{equation}
The two equations, \eqref{sheom2pi} and \eqref{sheom2phi}, can be seen as a linear homogeneous system in $\partial^2 \cdot \phi$ and $\pi$. The system is solved by requiring that its determinant be non-vanishing and purely algebraic (without $\Box$ operators). Fortunately, this is possible if we choose
\begin{align}
c_1 &= \frac{D-1}{2(D-2)} \, , \\
c_2 &= \frac{D(D-1)M^2}{2(D-2)^2} \, .
\end{align}
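Explicitly, reading \eqref{sheom2pi} and \eqref{sheom2phi} as a linear system with an operator-valued coefficient matrix acting on $(\partial^2 \cdot \phi, \pi)$, the determinant is
\begin{equation*}
\det
\begin{pmatrix}
1 & 2(c_2 - c_1 \Box) \\
(2-D)\Box - DM^2 & (D-1)\Box^2
\end{pmatrix}
= \big( (D-1) + 2 c_1 (2-D) \big) \Box^2
- 2 \big( c_2 (2-D) + c_1 D M^2 \big) \Box
+ 2 c_2 D M^2 \, ,
\end{equation*}
and the values of $c_1$ and $c_2$ above are precisely those that cancel the $\Box^2$ and $\Box$ coefficients, leaving the non-vanishing constant $2 c_2 D M^2$.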
This way, the only solution of the linear system \eqref{sheom2pi}-\eqref{sheom2phi} is $\pi = 0$ and $\partial^2 \cdot \phi = 0$, which we plug into \eqref{shtrouble} with $\alpha=2$ to obtain the Fierz-Pauli transversality condition \eqref{FP3} for $s=2$,
\begin{equation}
\partial^\mu \phi_{\mu\nu} = 0 \, .
\end{equation}
Finally, plugging the transversality condition and the solution $\partial^2 \cdot \phi = 0$ into \eqref{sheom}, we get the Fierz-Pauli equation of motion \eqref{FP2} for $s=2$,
\begin{equation}
(\Box - M^2) \phi_{\mu\nu} = 0 \, .
\end{equation}
A similar procedure with $s-1$ auxiliary fields was shown to yield the correct Lagrangian for spin-$s$ fields, equivalent to the Fierz-Pauli equations \cite{singhhagen}.
\subsection{Fronsdal equation (1978)}\label{fronsdaleq}
Soon after Singh and Hagen, Fronsdal investigated the massless case, taking the $M \to 0$ limit of their Lagrangian formulation \cite{fronsdal}.
In this limit, only the spin-$s$ and the first auxiliary spin-$s-2$ field survive, while all the lower spin auxiliary fields decouple. Furthermore, the remaining two fields can be neatly packed into a single field, with additional consistency constraints. Let us demonstrate here what happens in the spin-$2$ case.
\subsubsection{Spin-$2$ Fronsdal equation}
We start from the $M \to 0$ limit of the Singh-Hagen Lagrangian for $s=2$,
\begin{equation}
\mathcal{L} = -\frac{1}{2} (\partial_\mu \phi_{\nu\rho})^2 + (\partial \cdot \phi_\mu)^2 + \pi \partial^2 \cdot \phi + \frac{D-1}{2(D-2)} (\partial_\mu \pi)^2 \, .
\end{equation}
Next, we redefine $\pi$ and $\phi_{\mu \nu}$ into a new field $\varphi_{\mu\nu}$,
\begin{equation}
\varphi_{\mu\nu} := \phi_{\mu \nu} + \frac{1}{D-2} \eta_{\mu \nu} \pi \, ,
\end{equation}
which is no longer traceless; indeed, since $\phi_{\mu\nu}$ is traceless, the new field has trace $\varphi = \eta^{\mu\nu}\varphi_{\mu\nu} = \tfrac{D}{D-2}\,\pi$.
The resulting Lagrangian is
\begin{equation}
\mathcal{L} = -\frac{1}{2} (\partial_\mu \varphi_{\nu\rho})^2 + (\partial \cdot \varphi_\mu)^2 + \frac{1}{2} (\partial_\mu \varphi)^2 + \varphi \partial^2 \cdot \varphi \, .
\end{equation}
Note that this is exactly the linearized Einstein-Hilbert Lagrangian, an important fact to which we will return later.
The equation of motion that follows from this Lagrangian is
\begin{equation}
\Box \varphi_{\mu\nu} - (\partial_\mu \partial \cdot \varphi_\nu + \partial_\nu \partial \cdot \varphi_\mu) + \partial_\mu \partial_\nu \varphi + \eta_{\mu \nu} \left( \partial^2 \cdot \varphi - \Box \varphi \right) = 0 \, , \label{f2eom}
\end{equation}
which is precisely the free linearized Einstein equation,
\begin{equation}
G_{\mu\nu}^{(lin)} = R_{\mu\nu}^{(lin)} - \frac{1}{2} \eta_{\mu \nu} R^{(lin)} = 0 \, ,
\end{equation}
but more on this in \textbf{Section \ref{spin2}}.
We define $\mathcal{F}_{\mu\nu}$ as the \textit{Fronsdal tensor} of order two,
\begin{equation}
\mathcal{F}_{\mu\nu} = \Box \varphi_{\mu\nu} - (\partial_\mu \partial \cdot \varphi_\nu + \partial_\nu \partial \cdot \varphi_\mu) + \partial_\mu \partial_\nu \varphi \, ,
\end{equation}
which is equal to $R_{\mu\nu}^{(lin)}$. Note that we can now write \eqref{f2eom} as
\begin{equation}
\mathcal{F}_{\mu\nu} - \frac{1}{2} \eta_{\mu \nu} \mathcal{F} = 0 \, , \label{fgeom1}
\end{equation}
which simply reduces to
\begin{equation}
\mathcal{F}_{\mu \nu} = \Box \varphi_{\mu\nu} - (\partial_\mu \partial \cdot \varphi_\nu + \partial_\nu \partial \cdot \varphi_\mu) + \partial_\mu \partial_\nu \varphi = 0 \, . \label{fgeom2}
\end{equation}
This is the \textbf{Fronsdal equation} for spin-$2$ fields. Note that \eqref{fgeom1} can reduce to \eqref{fgeom2} only because the theory is \textit{free}, analogous to the reduction of Einstein field equations to the vanishing of $R_{\mu\nu}$ in vacuum.
The Fronsdal equation is invariant under the gauge transformation
\begin{equation}
\delta \varphi_{\mu \nu} = \partial_\mu \Lambda_\nu + \partial_\nu \Lambda_\mu \, .
\end{equation}
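The invariance is immediate to verify: the gauge transformation induces $\delta(\partial \cdot \varphi_\nu) = \Box \Lambda_\nu + \partial_\nu \partial \cdot \Lambda$ and $\delta\varphi = 2\,\partial \cdot \Lambda$, so
\begin{equation*}
\delta\mathcal{F}_{\mu\nu}
= \Box(\partial_\mu \Lambda_\nu + \partial_\nu \Lambda_\mu)
- \partial_\mu(\Box \Lambda_\nu + \partial_\nu \partial \cdot \Lambda)
- \partial_\nu(\Box \Lambda_\mu + \partial_\mu \partial \cdot \Lambda)
+ 2\,\partial_\mu \partial_\nu \, \partial \cdot \Lambda
= 0 \, .
\end{equation*}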
This fact will be particularly important when we begin investigating the theory in detail.
\subsubsection{Spin-$3$ Fronsdal equation}
We can try to generalize the spin-$2$ case directly on a totally symmetric (but not traceless!) tensor of order three\footnotemark, \footnotetext{Here, we begin to use a prime to denote a trace, e.g. $\varphi'_\nu := \varphi^\mu{}_{\mu\nu}$.}
\begin{align}
\mathcal{F}_{\mu\nu\sigma} &= \Box \varphi_{\mu\nu\sigma} - (\partial_\mu \partial \cdot \varphi_{\nu \sigma} + \partial_\nu \partial \cdot \varphi_{\sigma \mu} + \partial_\sigma \partial \cdot \varphi_{\mu \nu}) \\ \nonumber &+ \partial_\mu \partial_\nu \varphi'_\sigma + \partial_\nu \partial_\sigma \varphi'_\mu + \partial_\sigma \partial_\mu \varphi'_\nu = 0\, .
\end{align}
The corresponding generalized gauge transformations reads
\begin{equation}
\delta \varphi_{\mu\nu\sigma} = \partial_\mu \Lambda_{\nu\sigma} + \partial_\nu \Lambda_{\sigma\mu} + \partial_\sigma \Lambda_{\mu\nu} \, ,
\end{equation}
but unlike in the case of $s=2$, now we do not have a fully gauge-invariant Fronsdal tensor. Instead,
\begin{equation}
\delta \mathcal{F}_{\mu\nu\sigma} = 3 \partial_\mu \partial_\nu \partial_\sigma \Lambda' \, .
\end{equation}
In his original formulation \cite{fronsdal}, Fronsdal circumvents this problem by simply restricting the space of gauge parameters to traceless ones, i.e. by imposing the unusual constraint
\begin{equation}
\Lambda' = 0 \, .
\end{equation}
This amounts to restricting ourselves to a subclass of gauge transformations, instead of having fully unrestricted gauge invariance.
\subsubsection{Spin-$s$ Fronsdal equation}
The traceless $\Lambda$ constraint leaves us with a fully consistent gauge-invariant theory of free higher spin fields obeying the spin-$s$ \textit{Fronsdal equation}\footnotemark,
\begin{equation}
\mathcal{F}_{\mu_1 \cdots \mu_s} = \Box \varphi_{\mu_1 \cdots \mu_s} - (\partial_{\underline{\mu_1}} \partial \cdot \varphi_{\underline{\mu_2 \cdots \mu_s}}) + \partial_{\underline{\mu_1}} \partial_{\underline{\mu_2}} \varphi'_{\underline{\mu_3 \cdots \mu_{s} }} = 0 \, .
\end{equation}\footnotetext{Underlined indices stand for unweighted symmetrization with the minimal number of terms.}
\subsubsection{Fronsdal Lagrangian}
Fronsdal started with the Singh-Hagen Lagrangian formulation and naturally, he wanted to describe his theory using an action principle.
The Lagrangian that makes this possible is
\begin{equation}
\mathcal{L}_{\mathcal{F}} = \frac{1}{2} \varphi^{\mu_1 \cdots \mu_s} \left( \mathcal{F}_{\mu_1 \cdots \mu_s} - \frac{1}{2} \eta_{\underline{\mu_1 \mu_2}} \mathcal{F}'_{\underline{\mu_3 \cdots \mu_s}} \right) \, . \label{flang}
\end{equation}
As we show through explicit calculation in \textbf{Section \ref{freefronsdal}}, where we switch to a simpler notation, \eqref{flang} indeed yields the Fronsdal equation for $s<4$. For $s \geq 4$, we have to impose another unusual constraint,
\begin{equation}
\varphi'' = 0
\end{equation}
if we are to arrive at the Fronsdal equation of motion $\mathcal{F}=0$.
\subsection{Vasiliev's equations (1990) }
M.~A.~Vasiliev successfully constructed a fully consistent non-linear theory of interacting higher spin fields in (anti-)de Sitter spacetimes \cite{vasiliev90}. The equations are notoriously complicated and, since we will be dealing with massless bosonic fields in flat spacetime, we will not reproduce them here.
It suffices to quote \cite{vasiliev}:\newline
\q{The shortest route to Vasiliev equations covers 40 pages.}\newline
\q{It is a sort of conventional wisdom that Vasiliev equations cannot be derived...}
Similarly to string theory, Vasiliev's theory in spacetime dimensions four and higher can be consistent only if it contains an infinite tower of higher-spin fields. Only in dimensions three and lower can it be consistent with an upper limit on spin.
\subsection{No-go theorems}\label{s4}
Throughout the history of HS theory, several important results have been obtained that severely constrain the properties of would-be interacting theories of higher spin fields. Vasiliev's theory \cite{vasiliev90,vasiliev} shows that the class of such theories is not empty, but we have yet to arrive at other theories of this kind.
We list here some of the most important \textit{no-go} theorems. For a more detailed discussion, see \cite{nogo},\cite{nogo2} and references therein.
\paragraph{No long-range HS interactions\newline}
Using the \textit{S-matrix} approach, Weinberg proved in 1964 that there are no consistent \textbf{long-range} interactions mediated by massless bosons with spin greater than two\cite{weinberg}.
\paragraph{No local Lagrangians in HS theories\newline}
Using the local \textit{Lagrangian formalism} and working in the \textit{soft limit}, Aragone and Deser proved in 1979 \cite{AragoneDeserNoGo1} (see also \cite{AragoneDeserNoGo2}) that HS fields cannot consistently interact with gravity. Since the gravitational interaction is universal, this implies that there can be no consistent interacting HS fields.
\paragraph{No massless HS interactions in flat spacetime\newline}
The Weinberg-Witten theorem\cite{WeinbergWitten} from 1980 states that no massless HS field can consistently interact with gravity in \textbf{flat spacetime}.\newline
It is important to keep in mind that all no-go theorems start with some underlying assumptions that are not obviously satisfied in all physically possible cases. Therefore, the effort to construct consistent interacting HS theories might not be a fool's errand after all.
\newpage
\section{Review of lower spin theories}\label{s5}
Instead of jumping head-first into some deeper problems of higher spin theory, let us review the familiar territory of lower spin bosonic theories.
\subsection{Spin-0 theory}
Fields of spin $0$ are described by Lorentz scalars.
The general Lagrangian for these fields is
\begin{equation}
\mathcal{L}_{0} [\phi]= \frac{1}{2} ( \partial_\mu \phi )^2 - \frac{m^2}{2} \phi^2 \, ,
\end{equation}
and it produces the equation of motion for scalar fields, the \textbf{Klein-Gordon equation},
\begin{equation}
(\Box + m^2) \phi = 0 \, .
\end{equation}
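As a quick check, the equation of motion follows from $\mathcal{L}_0$ by a one-line application of the Euler--Lagrange equation:
\begin{equation}
\partial_\mu \frac{\partial \mathcal{L}_0}{\partial (\partial_\mu \phi)} - \frac{\partial \mathcal{L}_0}{\partial \phi} = \partial_\mu \partial^\mu \phi + m^2 \phi = (\Box + m^2) \phi = 0 \, .
\end{equation}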
\subsubsection{Example: Higgs boson}
The \textbf{Higgs field} is a well-known example of a scalar field, and it is the only fundamental scalar field in the Standard Model.
It is a \textit{complex} scalar field, described by the Lagrangian
\begin{equation}
\mathcal{L}_H = \left| \partial_\mu \phi \right|^2 - V(\phi) \, .
\end{equation}
\subsection{Spin-1 theory}
Fields of spin $1$ are described by Lorentz vectors.
The general Lagrangian for these fields is
\begin{equation}
\mathcal{L}_1 [A^\mu]= -\frac{1}{2} F_{\mu \nu} F^{\mu \nu} + \frac{m^2}{2} A_\mu A^\mu \, ,
\end{equation}
with $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, and it produces the \textbf{Proca equation},
\begin{equation}
\Box A^\nu - \partial^\nu ( \partial_\mu A^\mu ) + m^2 A^\nu = 0 \, .
\end{equation}
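Note that taking the divergence of the Proca equation exposes a hidden constraint:
\begin{equation}
\partial_\nu \left[ \Box A^\nu - \partial^\nu ( \partial_\mu A^\mu ) + m^2 A^\nu \right] = m^2 \, \partial_\nu A^\nu = 0 \, ,
\end{equation}
so for $m \neq 0$ the field is automatically transverse, $\partial_\mu A^\mu = 0$, and the Proca equation reduces to $(\Box + m^2) A^\nu = 0$; a massive spin-$1$ field thus propagates $D-1$ components ($3$ in four dimensions).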
\subsubsection{Example: Maxwell's electrodynamics}
In the Standard Model, four vector bosons take part in the \textbf{electroweak} interaction, the \textbf{photon} and three intermediate bosons, $W^\pm$ and $Z^0$.
The free massive intermediate boson fields satisfy the Proca equation, while the massless photon field satisfies \textbf{Maxwell's equations},
\begin{equation}
\partial_\mu F^{\mu \nu} = j^\nu.
\end{equation}
The strong force is also mediated by vector bosons, described by the massless \textbf{gluon} field.
Before addressing the spin-2 theory, let us briefly discuss the issue of gauge invariance.
\paragraph{Spin-1 gauge invariance:}
A massless spin-$1$ field $A^\nu (x)$ has 4 components, and it satisfies the equation of motion
\begin{equation}
\Box A^\nu - \partial^\nu (\partial_\lambda A^\lambda ) = 0.
\end{equation}
This theory is invariant under the Abelian gauge transformation
\begin{equation}
\delta A_\mu (x) = \partial_\mu \Lambda (x) \, .
\end{equation}
The equation of motion can be cast into a simple wave equation form,
\begin{equation}
\Box A_\mu (x) = 0 \, ,
\end{equation}
by choosing the Lorentz-invariant \textit{Lorenz gauge},
\begin{equation}
\partial_\mu A^\mu (x) = 0.
\end{equation}
This choice is a scalar constraint, which eliminates one of two spurious degrees of freedom, but there is a degree of gauge freedom left, i.e.
\begin{equation}
\delta (\partial_\mu A^\mu) = \Box \Lambda = 0.
\end{equation}
This is also a scalar constraint, so we are indeed left with two propagating degrees of freedom.
\subsection{Spin-2 theory}\label{spin2}
Fields of spin $2$ are described by symmetric Lorentz tensors of order two. The general Lagrangian for these fields is
\begin{equation}
\mathcal{L}_2 [h^{\mu\nu}] = -\frac{1}{2} (\partial_\sigma h_{\mu\nu})^2 + \partial_\sigma h_{\mu\nu} \partial^\mu h^{\nu\sigma} - \partial \cdot h_\nu \partial^\nu h + \frac{1}{2} (\partial_\mu h)^2 \, , \label{Ls2}
\end{equation}
and it produces the equation of motion
\begin{equation}
\Box h_{\mu\nu} - \partial_\mu \partial \cdot h_\nu - \partial_\nu \partial \cdot h_\mu + \partial_\mu \partial_\nu h + \eta_{\mu\nu} \partial^2 \cdot h - \eta_{\mu\nu} \Box h = 0 \, . \label{Es2}
\end{equation}
\subsubsection{Example: General Relativity}
\textbf{Einstein's General Relativity} is the archetypal example of a spin-2 theory. It describes gravitation, and it is the only spin-$2$ theory found in nature.
In its full form, general relativity is highly nonlinear, and it is described by \textbf{Einstein field equations},
\begin{equation}
G_{\mu \nu} \equiv R_{\mu \nu} - \frac{1}{2} g_{\mu \nu} R = 8 \pi T_{\mu \nu} ,
\end{equation}
where $G_{\mu \nu}$ is the \textbf{Einstein tensor}, $R_{\mu \nu}$ is the \textbf{Ricci tensor}, $R$ is the \textbf{Ricci scalar} and $T_{\mu \nu}$ is the matter \textbf{energy-momentum} tensor.
The \textbf{Riemann curvature tensor} can be defined as
\begin{equation}
R^\rho{}_{\sigma\mu\nu} = \partial_\mu \Gamma^\rho{}_{\nu \sigma} - \partial_\nu \Gamma^\rho{}_{\mu \sigma} + \Gamma^\rho{}_{\mu \lambda} \Gamma^\lambda{}_{\nu \sigma} - \Gamma^\rho{}_{\nu \lambda} \Gamma^\lambda{}_{\mu \sigma} \, ,
\end{equation}
using the \textit{torsionless connection} $\Gamma^\rho{}_{\mu\nu} = \Gamma^\rho{}_{\nu\mu}$,
\begin{equation}
\Gamma^\rho{}_{\mu\nu} = \frac{1}{2} g^{\rho\lambda} (\partial_\mu g_{\nu \lambda} + \partial_\nu g_{\mu \lambda} - \partial_\lambda g_{\mu \nu}) \, .
\end{equation}
The Ricci tensor and the Ricci scalar are given by
\begin{align}
R_{\mu \nu} &= R^\rho{}_{\mu \rho \nu} \, ,\\
R &= R^\lambda{}_\lambda \, .
\end{align}
\subsubsection{Example: Linearized Gravity}
By considering small metric perturbations around flat Minkowski spacetime, we can construct a linear theory of a dynamical spin-2 field in a static flat background. Explicitly, we decompose the metric as
\begin{equation}
g_{\mu \nu} (x) = \eta_{\mu \nu} + h_{\mu \nu} (x) \, ,
\end{equation}
and truncate all equations to first order in $h_{\mu\nu}$, assuming $\| h(x) \| \ll 1$.
The resulting theory is what we call \textbf{linearized gravity}, and it is described by \textit{linearized} Einstein field equations,
\begin{equation}
G^{(lin)}_{\mu \nu} = R^{(lin)}_{\mu \nu} - \frac{1}{2} \eta_{\mu \nu} R^{(lin)} = T^{(lin)}_{\mu\nu}.
\end{equation}
Or, using the metric perturbation field explicitly,
\begin{align}
R^{(lin)}_{\mu \nu} &= \Box h_{\mu \nu} - \partial_\mu (\partial^\lambda h_{\lambda \nu}) - \partial_\nu (\partial^\lambda h_{\lambda \mu}) + \partial_\mu \partial_\nu h^\lambda{}_\lambda \, , \\
R^{(lin)} &= 2 \Box h^\lambda{}_\lambda -2 \partial^\lambda \partial^\sigma h_{\lambda \sigma} \, , \\
G^{(lin)}_{\mu \nu} &= \Box h_{\mu \nu} - \partial_\mu (\partial^\lambda h_{\lambda \nu}) - \partial_\nu (\partial^\lambda h_{\lambda \mu}) + \partial_\mu \partial_\nu h^\lambda{}_\lambda + \eta_{\mu \nu} \partial^\lambda \partial^\sigma h_{\lambda \sigma} - \eta_{\mu\nu}\Box h^\lambda{}_\lambda \, .
\end{align}
Note that $G^{(lin)}_{\mu \nu} = 0$ corresponds to \eqref{Es2}. This is no coincidence, since \eqref{Ls2} precisely describes the Lagrangian for linearized gravity in the absence of sources, i.e. with \newline$T^{(lin)}_{\mu\nu}=0$.
\paragraph{Spin-2 gauge invariance:}
The spin-$2$ field is described by a symmetric tensor $h_{\mu\nu} (x)$ of rank two and therefore has $\mathbf{10}$ independent components (in $D = 4$).
In free theory, it satisfies the equation of motion
\begin{equation}
R^{(lin)}_{\mu\nu} = \Box h_{\mu\nu} - \partial_\mu (\partial^\lambda h_{\lambda\nu}) - \partial_\nu (\partial^\lambda h_{\mu\lambda}) + \partial_\mu \partial_\nu h^\lambda{}_\lambda = 0 \, .
\end{equation}
This theory is invariant under the Abelian gauge transformation
\begin{equation}
\delta h_{\mu \nu} (x) = \partial_\mu \xi_\nu (x) + \partial_\nu \xi_\mu (x) \, ,
\end{equation}
which allows us to cast the above equation into a simple wave equation form,
\begin{equation}
\Box h_{\mu \nu} (x) = 0 \, ,
\end{equation}
by choosing the Lorentz-invariant \textit{de Donder Gauge}\footnotemark,
\begin{equation}
\mathcal{D}_\mu (x) \equiv \partial^\lambda h_{\lambda \mu} - \frac{1}{2} \partial_\mu h^\lambda{}_\lambda = 0.
\end{equation}
The \textit{de Donder tensor} $\mathcal{D}_\mu$ is a $4$-vector, so fixing it leaves us with $10 - 4 = \mathbf{6}$ degrees of freedom.
Fixing the gauge in this way does not eliminate the gauge freedom completely. This can be seen from the \textit{de Donder} gauge condition, since
\footnotetext{Also known as the \textit{harmonic gauge}, \textit{Lorentz gauge}, \textit{Einstein gauge}, \textit{Hilbert gauge} or \textit{Fock gauge}.}
\begin{equation}
\delta \mathcal{D}_\mu (x) = \Box \xi_\mu (x) = 0.
\end{equation}
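The first equality is a one-line check:
\begin{equation}
\delta \mathcal{D}_\mu = \partial^\lambda \left( \partial_\lambda \xi_\mu + \partial_\mu \xi_\lambda \right) - \frac{1}{2} \partial_\mu \left( 2 \, \partial^\lambda \xi_\lambda \right) = \Box \xi_\mu \, .
\end{equation}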
This too is a $4$-vector constraint, which eliminates the remaining $4$ spurious degrees of freedom, leaving us with $6 - 4 = \mathbf{2}$ propagating degrees of freedom, as expected.
\newpage
\section{Higher spin theory of massless bosons}\label{s6}
\subsection{Francia-Sagnotti formalism}
There exists an elegant formalism\footnotemark\, developed by D.~Francia and A.~Sagnotti\cite{fs1,fs2,st,introfree,fs3,fms,dariomass,dariopropm,dariocrete} that makes it easy to express and manipulate most of the mathematical objects of HS theory in the linear approximation.\footnotetext{To be fair, it would be more precise to call it \textit{notation}, but as Feynman said\cite{feynmannotation}: \q{We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations.}} This formalism is suitable for higher spin theory since the tensorial indices and spin are left implicit, but are easily recovered.
A spin-$s$ field is simply written as
\begin{equation}
\phi_{\mu_1 \cdots \mu_s} \equiv \phi \, .
\end{equation}
The $n$-th gradient of $\phi$ is written as $\partial^n \phi$, the $n$-th divergence\footnotemark\, as $\partial^n \cdot \phi$ and the $n$-th trace as $\phi^{[n]}$. Lower traces are simply written with a prime, e.g. $\phi''$ for the second trace.
\footnotetext{Where Francia and Sagnotti would use (for example) $\partial \cdot \partial \cdot \partial \cdot \varphi$, here we use $\partial^3 \cdot \varphi $ instead. This simplification seems to produce no ambiguities, as the reader is welcome to check.}
All indices are implicitly symmetrized, without weight factors, using the minimal number of terms.
For example, if $s=2$,
\begin{align}
\partial^2 \phi &\equiv \partial_\mu \partial_\nu \phi_{\sigma \rho} + \partial_\mu \partial_\sigma \phi_{\nu \rho} + \partial_\mu \partial_\rho \phi_{\sigma \nu} + \partial_\nu \partial_\sigma \phi_{\mu \rho} + \partial_\nu \partial_\rho \phi_{\sigma \mu} + \partial_\sigma \partial_\rho \phi_{\mu \nu} \, , \label{ex1} \\
\partial (\partial \cdot \phi) &\equiv \partial_\nu (\partial^\lambda \phi_{\mu \lambda}) + \partial_\mu (\partial^\lambda \phi_{\lambda \nu}) \, , \label{ex2} \\
\eta \partial^2 \cdot \phi &\equiv \eta_{\mu\nu} \partial^\rho \partial^\sigma \phi_{\rho \sigma} \, . \label{ex3}
\end{align}
The formalism implies the following set of rules:
\begin{align}
( \partial^p \phi )' &= \Box \partial^{p-2} \phi + 2 \partial^{p-1} \left( \partial \cdot \phi \right) + \partial^p \phi' \label{fs1} \\
\partial \cdot (\partial^p \phi) &= \Box \partial^{p-1} \phi + \partial^p \left( \partial \cdot \phi \right) \label{fs2} \\
\left( \eta^k T_{(s)} \right)' &= [D + 2(s+k-1)] \eta^{k-1} T_{(s)} + \eta^k T_{(s)}' \label{traces} \\
\partial^p \partial^q &= {{p+q}\choose{q}} \partial^{p+q} \\
\eta^p \eta^q &= {{p+q}\choose{q}} \eta^{p+q} \\
\partial \cdot \eta^{p} &= \eta^{p-1} \partial
\end{align}
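As an elementary illustration of these rules, take \eqref{fs2} with $p = 1$ and a spin-$2$ field:
\begin{align}
\partial \cdot (\partial \phi) &= \partial^\sigma \left( \partial_\mu \phi_{\nu \sigma} + \partial_\nu \phi_{\sigma \mu} + \partial_\sigma \phi_{\mu \nu} \right) \nonumber \\
&= \Box \phi_{\mu\nu} + \partial_\mu ( \partial^\sigma \phi_{\sigma \nu} ) + \partial_\nu ( \partial^\sigma \phi_{\sigma \mu} ) \equiv \Box \phi + \partial ( \partial \cdot \phi ) \, .
\end{align}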
\eqref{fs1} and \eqref{fs2} can further be generalized to:
\begin{align}
( \partial^n \phi )^{[p]} &= \sum_{k=0}^{p} \sum_{l=0}^{k} {p \choose k} {k \choose l} 2^l \Box^{p-k} \partial^{n - 2p + 2k -l} \left( \partial^l \cdot \phi^{[k-l]} \right) \\
\partial^n \cdot ( \partial^p \phi ) &= \sum_{k=0}^{n} {n \choose k} \Box^{n-k} \partial^{p-n+k} \left( \partial^k \cdot \phi \right) \label{gen2}
\end{align}
The relations \eqref{fs1}-\eqref{gen2} will prove to be useful in simplifying our calculations.
Note that this formalism also leaves the spacetime dimension implicit, as long as the relevant expressions do not include traces of terms containing the metric tensor, as implied by \eqref{traces}.
We introduce "$ \scalerel*{\cdot}{\bigodot} $" to denote maximal contraction between two tensors\footnotemark.\footnotetext{Francia and Sagnotti do not use this notation. Instead, such contractions are left implicit, which may look confusing to the untrained eye.} For tensors $\varphi$ of order $s$ and $\chi$ of order $r$, with $s>r$, the contraction is defined as
\begin{equation}
\varphi \scalerel*{\cdot}{\bigodot} \chi \equiv \varphi_{\mu_1 \cdots \mu_r \mu_{r+1} \cdots \mu_s} \chi^{\mu_1 \cdots \mu_r} \, ,
\end{equation}
where both tensors are assumed to be symmetrized with the minimal number of unweighted terms, before contraction.
For example, if $\varphi$ is a tensor of order three and $\chi$ is a tensor of order two,
\begin{align}
\varphi \scalerel*{\cdot}{\bigodot} \chi &\equiv \varphi_{\mu\nu\sigma} \chi^{\mu \nu} \, , \\
\varphi \scalerel*{\cdot}{\bigodot} \partial \chi &\equiv \varphi_{\mu\nu\sigma} \left( \partial^\mu \chi^{\nu\sigma} + \partial^\nu \chi^{\sigma\mu} + \partial^\sigma \chi^{\mu\nu} \right) = 3 \varphi_{\mu\nu\sigma} \partial^\mu \chi^{\nu\sigma} \, , \\
\partial \varphi \scalerel*{\cdot}{\bigodot} \eta \chi &\equiv \left( \partial_\mu \varphi_{\nu\sigma\rho} + \partial_\nu \varphi_{\sigma\rho\mu} + \partial_\sigma \varphi_{\rho\mu\nu} + \partial_\rho \varphi_{\mu\nu\sigma} \right) \\ \nonumber &\left( \eta^{\mu\nu} \chi^{\sigma\rho} + \eta^{\mu\sigma} \chi^{\nu\rho} + \eta^{\mu\rho} \chi^{\sigma\nu} + \eta^{\nu\sigma} \chi^{\mu\rho} + \eta^{\nu\rho} \chi^{\mu\sigma} + \eta^{\sigma\rho} \chi^{\mu\nu} \right) \\
\nonumber &= 12 \partial^\sigma \varphi_{\sigma \mu \nu} \chi^{\mu\nu} + 12 \chi^{\mu\nu} \eta^{\sigma\rho} \partial_\mu \varphi_{\nu\sigma\rho} \, .
\end{align}
In other words, \textit{first} we symmetrize the tensors as in examples \eqref{ex1}-\eqref{ex3}, and \textit{then} we contract them. This notation will prove to be particularly useful in the analysis of actions and their variations. In the following segments, when we vary a Lagrangian, we will always vary it under the integral sign, as a variation of the action, i.e.
\begin{equation}
\delta \mathcal{S} [\varphi(x)] = \delta \int d^D x \, \mathcal{L}[\varphi(x)] = \int d^D x \, \delta \mathcal{L}[\varphi(x)] \, .
\end{equation}
When calculating such variations, we will often encounter terms of the form
\begin{equation}
\int d^D x \, A(x) \delta (\partial_\mu B(x)) \, ,
\end{equation}
where we perform partial integration to obtain
\begin{equation}
- \int d^D x \, \partial_\mu A(x) \delta B(x) + \text{(boundary terms)} \, .
\end{equation}
The boundary terms vanish due to the standard assumption that all fields vanish at infinity and that there are no non-trivial topological features of spacetime. This allows us to use the following relation:
\begin{equation}
\int d^D x \, A(x) \delta (\partial_\mu B(x)) = - \int d^D x \, \partial_\mu A(x) \delta B(x) \, .
\end{equation}
In the Francia-Sagnotti formalism, one should be careful when performing partial integration, since this operation might produce additional symmetry factors. For example, if $\varphi$ is a symmetric tensor of order $s$ and $\Lambda$ is a symmetric tensor of order $s-1$,
\begin{align}
\int d^D x \, \partial \Lambda \scalerel*{\cdot}{\bigodot} \varphi &\equiv \int d^D x \, \left( \underbrace{\partial_{\mu_1} \Lambda_{\mu_2 \cdots \mu_{s}} + \dots + \partial_{\mu_s} \Lambda_{\mu_1 \cdots \mu_{s-1}}}_{s \, \text{terms}} \right) \varphi^{\mu_1 \cdots \mu_s} \\
&= s \int d^D x \, \partial_{\mu_1} \Lambda_{\mu_2 \cdots \mu_{s}} \varphi^{\mu_1 \cdots \mu_s} \\
&= -s \int d^D x \, \Lambda_{\mu_1 \cdots \mu_{s-1}} \partial_{\mu_s} \varphi^{\mu_1 \cdots \mu_s} \\
&\equiv -s \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \partial \cdot \varphi \, .
\end{align}
Similarly, one should be careful when writing terms of the form $\varphi \scalerel*{\cdot}{\bigodot} \eta \varphi'$ as terms of the form $\varphi' \scalerel*{\cdot}{\bigodot} \varphi'$, because
\begin{align}
\varphi \scalerel*{\cdot}{\bigodot} \eta \varphi' &\equiv \varphi_{\mu_1 \cdots \mu_s} \eta_{\nu \sigma} \left( \underbrace{ \eta^{\mu_1 \mu_2} \varphi^{\mu_3 \cdots \mu_s \nu \sigma} + \dots + \eta^{\mu_{s-1} \mu_s} \varphi^{\mu_1 \cdots \mu_{s-2} \nu \sigma}}_{{s \choose 2} \, \text{terms}} \right) \\
&= {s \choose 2} \eta^{\mu_1 \mu_2} \varphi_{\mu_3 \cdots \mu_s} \eta_{\mu_1 \mu_2} \varphi^{\mu_3 \cdots \mu_s} \\
&\equiv {s \choose 2} \varphi' \scalerel*{\cdot}{\bigodot} \varphi' \, .
\end{align}
To drive the point home, we provide two additional examples that we will encounter in our calculations:
\begin{align}
\int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \partial^3 \varphi'' &= - {{s-1} \choose 3} \int d^D x \, \partial^3 \cdot \Lambda \scalerel*{\cdot}{\bigodot} \varphi'' \\
\Lambda \scalerel*{\cdot}{\bigodot} \eta \partial \cdot \varphi' &= {s-1 \choose 2} \Lambda' \scalerel*{\cdot}{\bigodot} \partial \cdot \varphi'
\end{align}
\subsection{Fronsdal's constrained theory}\label{fronsdalconstrained}
\subsubsection{Free theory}\label{freefronsdal}
As we have already seen in \textbf{Section \ref{fronsdaleq}}, Fronsdal's HS theory, in the absence of sources, consists of the Fronsdal equation, its Lagrangian and gauge transformation, along with the two unusual constraints, i.e.
\begin{align}
\mathcal{F} &= \Box \varphi - \partial (\partial \cdot \varphi) + \partial^2 \varphi' = 0 \, , \label{freq} \\
\mathcal{L}_{\mathcal{F}} &= \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) \, , \label{lagrangianF} \\
\delta \varphi &= \partial \Lambda \, , \label{gauge} \\
\Lambda' &= 0 \, \label{constr1},\\
\varphi'' &= 0 \, \label{constr2}.
\end{align}
Let us show how \eqref{lagrangianF} produces \eqref{freq} as the equation of motion in the absence of sources. Expanding $\mathcal{F} - \frac{1}{2} \eta \mathcal{F}'$ with the rules of the formalism, and using the constraint $\varphi'' = 0$ to drop a term $-\frac{1}{2} \eta \partial^2 \varphi''$, varying the action gives
\begin{align}
\delta \mathcal{S}_\mathcal{F} = \int d^D x \, \delta \mathcal{L}_\mathcal{F} &= \frac{1}{2} \int d^D x \, \left[ \delta \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) + \varphi \scalerel*{\cdot}{\bigodot} \left( \delta (\mathcal{F}) - \frac{1}{2} \eta (\delta \mathcal{F}') \right) \right] \\
&= \frac{1}{2} \int d^D x \, \Big\{ \delta \varphi \scalerel*{\cdot}{\bigodot} \left( \Box \varphi - \partial (\partial \cdot \varphi) + \partial^2 \varphi' + \eta \partial^2 \cdot \varphi - \eta \Box \varphi' - \frac{1}{2} \eta \partial (\partial \cdot \varphi') \right) \\
&+ \varphi \scalerel*{\cdot}{\bigodot} \left[ \delta (\Box \varphi) - \delta (\partial (\partial \cdot \varphi)) + \delta (\partial^2 \varphi') + \delta (\eta \partial^2 \cdot \varphi) - \delta (\eta \Box \varphi') - \frac{1}{2} \delta \left( \eta \partial (\partial \cdot \varphi') \right) \right] \Big\} \nonumber \\
&= \int d^D x \, \left( \Box \varphi - \partial (\partial \cdot \varphi) + \partial^2 \varphi' + \eta \partial^2 \cdot \varphi - \eta \Box \varphi' - \frac{1}{2} \eta \partial (\partial \cdot \varphi') \right) \scalerel*{\cdot}{\bigodot} \delta \varphi \\
&= \int d^D x \, \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) \scalerel*{\cdot}{\bigodot} \delta \varphi \, ,
\end{align}
so the equation of motion reads
\begin{equation}
\mathcal{F} - \frac{1}{2} \eta \mathcal{F}' = 0 \, , \label{ffeq}
\end{equation}
which indeed reduces to
\begin{equation}
\mathcal{F} = 0 \, ,
\end{equation}
since there are no sources on the right-hand side of \eqref{ffeq}; the reduction follows by taking the trace of \eqref{ffeq}, as sketched below.
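Explicitly, tracing \eqref{ffeq} with the help of \eqref{traces} (applied with $k=1$ to $\mathcal{F}'$, which is of order $s-2$) and using $\mathcal{F}'' = 0$, a consequence of $\varphi'' = 0$, gives
\begin{equation}
\mathcal{F}' - \frac{1}{2} \left\{ [D + 2(s-2)] \, \mathcal{F}' + \eta \, \mathcal{F}'' \right\} = -\frac{D + 2(s-3)}{2} \, \mathcal{F}' = 0 \, ,
\end{equation}
so $\mathcal{F}' = 0$, and \eqref{ffeq} collapses to $\mathcal{F} = 0$.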
Armed with this powerful formalism, let us now take a closer look at the two constraints \eqref{constr1} and \eqref{constr2}. We would like to find where exactly they come from, so that we can construct an equivalent \textit{unconstrained} theory.
\paragraph{Why traceless $\Lambda$?\newline}
The Fronsdal equation \eqref{freq} transforms under the gauge variation \eqref{gauge} as
\begin{equation}
\delta \mathcal{F} = 3 \partial^3 \Lambda' \label{FVar} \, ,
\end{equation}
which is why we demand that the gauge parameter be traceless.
If we could find an appropriate linear combination of fully gauge-invariant terms, we could formulate a theory without imposing this constraint.
One way of getting around this would be through a differential constraint,
\begin{equation}
\partial^3 \Lambda' \, (x) = 0 \, ,
\end{equation}
without directly constraining $\Lambda'$. If the gauge parameter $\Lambda(x)$ vanishes at infinity, the only solution would indeed be $\Lambda'=0$.
Another way to dispense with this constraint is to introduce a non-dynamical spin-$(s-3)$ \textit{compensator} field $\alpha(x)$, which transforms under the gauge variation as
\begin{equation}
\delta \alpha = \Lambda' \, ,
\end{equation}
and modify the equation of motion to
\begin{equation}
\mathcal{F} - 3 \partial^3 \alpha = 0 \, .
\end{equation}
If we introduce a second non-dynamical spin-$(s-4)$ field, this theory can be described by a Lagrangian, as we will explain in \textbf{Section \ref{localunconstrained}}.
The third way to avoid the traceless $\Lambda$ is to work within a manifestly gauge-invariant geometric framework. Unfortunately, as we will see in \textbf{Section \ref{nonlocal}}, this forces us to abandon locality and instead work with non-local or higher-order (in derivatives) terms.
\paragraph{Why doubly-traceless $\varphi$?\newline}
The relation that lies at the heart of this constraint is the so-called \textit{anomalous\footnotemark Bianchi identity},\footnotetext{It is called \textit{anomalous} because it does not vanish. If the right-hand side vanishes, it is simply the \textit{Bianchi identity}.}
\begin{equation}
\partial \cdot \mathcal{F} - \frac{1}{2} \partial \mathcal{F}' = - \frac{3}{2} \partial^3 \varphi'' \, . \label{bianchi}
\end{equation}
We would like the Fronsdal action to be gauge invariant, so let us see what its gauge variation\footnotemark\, produces. Using \eqref{bianchi}, one obtains
\footnotetext{We use $\delta_{\Lambda}$ to avoid confusing this variation with the usual functional variation $\delta$.}
\begin{equation}
\delta_{\Lambda} \mathcal{S}_\mathcal{F} = \int d^D x \, \delta_{\Lambda} \mathcal{L}_{\mathcal{F}} = \frac{1}{2} \int d^D x \, \left[ \partial \Lambda \scalerel*{\cdot}{\bigodot} \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) + \varphi \scalerel*{\cdot}{\bigodot} \delta_{\Lambda} \left( - \frac{3}{2} \partial^3 \varphi'' \right) \right] \, .
\end{equation}
The second term under the integral vanishes if we impose $\Lambda' = 0$, so we have
\begin{align}
\delta_{\Lambda} \mathcal{S}_\mathcal{F} = \int d^D x \, \delta_{\Lambda} \mathcal{L}_{\mathcal{F}} &= \frac{1}{2} \int d^D x \, \partial \Lambda \scalerel*{\cdot}{\bigodot} \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) \\
&= -\frac{s}{2} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \partial \cdot \left( \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \right) \\
&= -\frac{s}{2} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \left( \underbrace{\partial \cdot \mathcal{F} - \frac{1}{2} \partial \mathcal{F}'}_{\eqref{bianchi}} - \frac{1}{2} \eta \partial \cdot \mathcal{F}' \right) \\
&= -\frac{s}{2} \int d^D x \, \left[ \Lambda \scalerel*{\cdot}{\bigodot} \left( -\frac{3}{2} \partial^3 \varphi'' \right) - \frac{1}{2} \Lambda \scalerel*{\cdot}{\bigodot} \eta \partial \cdot \mathcal{F}' \right] \\
&= -3 \int d^D x \, \left[ {s \choose 4} \partial^3 \cdot \Lambda \scalerel*{\cdot}{\bigodot} \varphi'' - \frac{1}{4} {s \choose 3} \Lambda' \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{F}' \right] \, .
\end{align}
Once again, the second term under the integral vanishes if we impose $\Lambda' = 0$. It follows that it is necessary to impose the additional constraint $\varphi'' = 0$ for the action to be gauge-invariant.
An alternative way to get around the double-tracelessness constraint is to work within a geometric framework, where we generalize the Fronsdal tensor into an equivalent object satisfying generalized Bianchi identities. As previously mentioned, the price to pay for the elegant geometric theory is higher-order terms or non-locality.
\paragraph{Counting degrees of freedom\newline}
Let us show that the constrained Fronsdal equation propagates the correct number of degrees of freedom. In the case of a massless spin-$s$ bosonic field, arguments from representation theory (as discussed in \textbf{Section \ref{fpe}} and specifically in relation to the Fronsdal equation in \cite{verybasics}) show that the correct number is
\begin{equation}
\#(D-2,s) - \#(D-2, s-2) = {{D+s-3} \choose {s}} - {{D+s-5} \choose {s-2}} \, . \label{dof}
\end{equation}
We begin by counting the number of independent components of $\varphi$. It is a fully symmetric doubly-traceless $D-$dimensional tensor of order $s$, so that number is
\begin{equation}
\#(D,s) - \#(D,s-4) = {{D+s-1} \choose {s}} - {{D+s-5} \choose {s-4}} \, .
\end{equation}
We proceed by partially fixing the gauge, imposing the de Donder gauge condition,
\begin{equation}
\mathcal{D} = \partial \cdot \varphi - \frac{1}{2} \partial \varphi' = 0 \, ,
\end{equation}
which reduces the Fronsdal equation to a wave equation,
\begin{equation}
\Box \varphi = 0 \, .
\end{equation}
Since $\mathcal{D}$ is traceless and of order $s-1$, fixing the de Donder tensor corresponds to eliminating
\begin{equation}
\#(D,s-1) - \#(D,s-3) = {{D+s-2} \choose {s-1}} - {{D+s-4} \choose {s-3}}
\end{equation}
independent components.
However, fixing $\mathcal{D}$ does not fully fix the gauge, since
\begin{equation}
\delta \mathcal{D} = \Box \Lambda \, .
\end{equation}
Fixing this residual gauge freedom also corresponds to eliminating $\#(D,s-1) - \#(D,s-3)$ independent components.
In total, this leaves us with
\begin{align}
&\#(D,s) - \#(D,s-4) - 2\left\{\#(D,s-1) - \#(D,s-3)\right\} = \nonumber \\ &\#(D-2,s) - \#(D-2, s-2) \, ,
\end{align}
which is the same as \eqref{dof}.
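As a quick numerical sanity check, take $D = 4$ and $s = 3$, where the $\#(D,s-4)$ term is absent:
\begin{equation}
{6 \choose 3} - 2 \left[ {5 \choose 2} - {3 \choose 0} \right] = 20 - 2\,(10 - 1) = 2 = {4 \choose 3} - {2 \choose 1} \, ,
\end{equation}
the two helicity states of a massless spin-$3$ field in four dimensions.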
Note that we used the double tracelessness of $\varphi$ to count the propagating degrees of freedom. However, this is merely a \textit{sufficient} condition for the correct number, not a \textit{necessary} one.
\subsubsection{Interacting theory with an external current}\label{interacting constrained}
Let us begin the analysis of the HS gauge field coupled to an external current within the framework of Fronsdal's constrained theory by defining the \textit{Fronsdal-Einstein} tensor $\mathcal{G}$,
\begin{equation}
\mathcal{G} := \mathcal{F} - \frac{1}{2} \eta \mathcal{F}' \, .
\end{equation}
We showed in \textbf{Section \ref{freefronsdal}} that this is precisely the left-hand side of the equation of motion, as obtained from \eqref{lagrangianF}.
An interaction term in the Lagrangian, for some \textit{generic} totally symmetric external current $J$, can be written as
\begin{equation}
\mathcal{L}_{int} = -\frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} J \, , \label{current}
\end{equation}
so the total action reads
\begin{equation}
\mathcal{S} [\varphi, J] = \int d^D x \, \mathcal{L} = \int d^D x \, \left( \mathcal{L}_{\mathcal{F}} + \mathcal{L}_{int} \right) = \mathcal{S}_\mathcal{F} - \frac{1}{2} \int d^D x \, \varphi \scalerel*{\cdot}{\bigodot} J \, . \label{constrainedS}
\end{equation}
As demonstrated in \textbf{Section \ref{freefronsdal}},
\begin{equation}
\delta_{\Lambda} \mathcal{S}_{\mathcal{F}} = 0 \, ,
\end{equation}
so we need to investigate the effect of the interaction term $\mathcal{L}_{int}$, since it need not be gauge-invariant.
The equation of motion obtained by varying \eqref{constrainedS} reads
\begin{equation}
\mathcal{G} = J \, . \label{el}
\end{equation}
Taking the trace of \eqref{el} yields
\begin{equation}
\mathcal{F}' = \frac{-2}{D+2(s-3)} J' \, , \label{fj}
\end{equation}
which in turn implies
\begin{equation}
J'' = 0 \, , \label{jpp0}
\end{equation}
since $\mathcal{F}''=0$ when $\varphi''=0$.
We can now rewrite \eqref{el} as
\begin{equation}
\mathcal{F} = J - \frac{1}{D+2(s-3)} \eta J' \, . \label{fronsINT}
\end{equation}
Taking the divergence of \eqref{el} and using \eqref{bianchi} together with the constraint $\varphi'' = 0$, we get
\begin{equation}
\partial \cdot J = -\frac{1}{2} \eta \, \partial \cdot \mathcal{F}' \, . \label{Whencefore It Cometh?}
\end{equation}
Substituting \eqref{fj} into \eqref{Whencefore It Cometh?} yields
\begin{equation}
\partial \cdot J - \frac{1}{D+2(s-3)} \eta \, \partial \cdot J' = 0 \, . \label{divtracelessJ}
\end{equation}
The left-hand side of \eqref{divtracelessJ} is actually the traceless part of $\partial \cdot J$. In general, the traceless part of a fully symmetric tensor $\chi$ of order $s$ in $D$-dimensional spacetime is\footnotemark\footnotetext{To compactify the notation, here we begin to use the \textbf{falling factorial} function, defined as \newline $n^{\underline{k}} = \frac{n!}{(n-k)!}$, and we define the \textbf{falling double factorial} function $n^{\uuline{k}} = \frac{n!!}{(n-k)!!}$.}
\begin{equation}
\mathcal{T}_D[\chi] = \sum_{k=0}^{[s/2]} \frac{(-1)^k}{[D+2(s-2)]^{\uuline{k}}} \eta^k \chi^{[k]} := \sum_{k=0}^{[s/2]} \rho_k (D,s) \eta^k \chi^{[k]} \, , \label{traceless}
\end{equation}
which is easily checked by direct computation. In \eqref{traceless}, we define coefficients $\rho_k (D,s)$ for later convenience. Using \eqref{traceless} and \eqref{jpp0}, we see that indeed
\begin{equation}
\mathcal{T}_D[\partial \cdot J] = \partial \cdot J - \frac{1}{D+2(s-3)} \eta \, \partial \cdot J' \, . \label{jtraceless}
\end{equation}
Also, since $J'' = 0$ and $\rho_1(D-2,s) = -\frac{1}{D+2(s-3)}$, comparison with \eqref{fronsINT} shows that
\begin{equation}
\mathcal{F} = \mathcal{T}_{D-2} [J] \, .
\end{equation}
Therefore, in general, only the traceless part of the divergence of $J$ vanishes.
\bigskip\par\centerline{*\,*\,*}\medskip\par
To understand the physical meaning of \eqref{jtraceless}, we need to introduce the concept of \textit{current exchange}. Let us motivate the idea on a familiar case of spin-$1$ fields, i.e. Maxwell's theory of electrodynamics.
In the manifestly Lorentz-covariant formalism, Maxwell's equations coupled to an external current $J^\mu$ read
\begin{equation}
\Box A^\mu - \partial^\mu (\partial \cdot A) = J^\mu \, ,
\end{equation}
where consistency demands that the current be conserved, i.e.
\begin{equation}
\partial_\mu J^\mu = 0 \, .
\end{equation}
In the momentum space, this translates to
\begin{align}
(p^2 \eta_{\mu\nu} - p_\mu p_\nu) A^\nu &= J_\mu \, , \\
p^\mu J_\mu &= 0 \, .
\end{align}
It follows that, for a current-current interaction,
\begin{equation}
p^2 A_\mu J^\mu = J_\mu J^\mu \, .
\end{equation}
By current exchange, we mean the exchange between the degrees of freedom that take part in this interaction. As we know from electrodynamics, interactions mediated by photons only respond to the transverse part of the current, since the photon has no longitudinal degrees of freedom. Therefore, instead of considering the full Lorentzian product $J_\mu J^\mu$, we can project an on-shell current (i.e. a current satisfying the equation of motion) $J_\mu (p)$ to its transverse part using the projection operator $\Pi$
\begin{equation}
\Pi_{\mu\nu} = \eta_{\mu\nu} - p_\mu \bar{p}_\nu - p_\nu \bar{p}_\mu \, , \label{projektor}
\end{equation}
where $p$ is the exchanged on-shell momentum, satisfying $p^2 = 0$, and $\bar{p}$ is a vector that satisfies $\bar{p}^2 = 0$ and $p_\mu \bar{p}^\mu = 1$.
One can check by direct computation that, indeed,
\begin{equation}
p^\mu \Pi_{\mu\nu} J^\nu = 0 \, ,
\end{equation}
and
\begin{equation}
J_\mu J^\mu = J^\mu \Pi_{\mu\nu} J^\nu \, . \label{jpj}
\end{equation}
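Both checks are immediate, using $p^2 = 0$, $p \cdot \bar{p} = 1$ and current conservation $p \cdot J = 0$:
\begin{align}
p^\mu \Pi_{\mu\nu} &= p_\nu - (p \cdot \bar{p}) \, p_\nu - p^2 \, \bar{p}_\nu = 0 \, , \\
J^\mu \Pi_{\mu\nu} J^\nu &= J \cdot J - 2 \, (p \cdot J) (\bar{p} \cdot J) = J \cdot J \, .
\end{align}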
Now, since
\begin{equation}
\eta_{\mu \nu} \Pi^{\mu\nu} = D - 2 \, ,
\end{equation}
it follows\footnotemark\footnotetext{Equation \eqref{jpj} is basically an eigenvalue problem. The trace of a linear operator equals the sum of its eigenvalues. Since the eigenvalues of a projection operator equal $0$ or $1$, its trace equals the dimension of the subspace to which it projects.} that the number of degrees of freedom taking part in the interaction is $D-2$.
Instead of working in the manifestly Lorentz-covariant formulation, we can repeat the procedure in the light-cone formulation, where we have two null coordinates (i.e. coordinates on the light cone),
\begin{align}
x^+ &= \frac{t+x}{\sqrt{2}} \, , \\
x^- &= \frac{t-x}{\sqrt{2}} \, ,
\end{align}
and the remaining $D-2$ coordinates are spatial. This allows us to work in the light-cone gauge,
\begin{equation}
A^+ = 0 \, ,
\end{equation}
which eliminates all unphysical degrees of freedom.
In this formulation, Maxwell's equations in the momentum space take a simple form that involves only the spatial coordinates,
\begin{equation}
p^2 A_i = j_i \, ,
\end{equation}
where Latin indices denote the components of ($D-2$)-dimensional Euclidean vectors. The current-current interaction becomes
\begin{equation}
p^2 j_i A^i = j_i j^i \, .
\end{equation}
Since all components are physical\footnotemark\footnotetext{Because the light-cone formulation corresponds to working in the \q{reference frame} of a massless particle, where all degrees of freedom are the particle's proper degrees of freedom. This is why the light-cone frame is sometimes referred to as the \textit{infinite momentum frame.}}, we simply count the number of components of $j_i$, which is $D-2$, in agreement with our previous conclusion.
The general idea is to check whether
\begin{equation}
J_{\mu_1 \cdots \mu_s} \mathcal{P}^{\mu_1 \cdots \mu_s \nu_1 \cdots \nu_s} J_{\nu_1 \cdots \nu_s} = j_{a_1 \cdots a_s} j^{a_1 \cdots a_s} \label{currx}
\end{equation}
holds for spin-$s$ current exchanges, where $\mathcal{P}$ denotes the proper analogue of the projection operator \eqref{projektor}. The right-hand side of \eqref{currx} implies that the proper number of degrees of freedom in the current exchange is equal to the number of independent components of the current in the light-cone gauge. As we saw in \textbf{Section \ref{fpe}}, this number is equal to the number of independent components of a traceless fully symmetric tensor of order $s$ in $D-2$ dimensions.
This means that $\mathcal{P}$ should be an operator that projects the current to its transverse part and then extracts its traceless part. Using \eqref{traceless} and \eqref{projektor}, we see that $\mathcal{P}$ has to be
\begin{equation}
\mathcal{P}^{(\mu)(\nu)} J_{(\nu)} = \mathcal{T}_{D-2} [\Pi \cdot J] \, , \label{conserved}
\end{equation}
where we write $(\mu)$ and $(\nu)$ to indicate a totally symmetric set of $s$ indices.
\bigskip\par\centerline{*\,*\,*}\medskip\par
Coming back to the interacting theory with an external current in the constrained formulation, we see that \eqref{jtraceless} determines the current exchange. It implies that
\begin{align}
J_{(\mu)} \mathcal{P}^{(\mu)(\nu)} J_{(\nu)} &= J \scalerel*{\cdot}{\bigodot} \left( J - \frac{1}{D+2(s-3)} \eta J'\right) \\
&= J \scalerel*{\cdot}{\bigodot} J - \frac{1}{D+2(s-3)} J \scalerel*{\cdot}{\bigodot} \eta J' \\
&= J \scalerel*{\cdot}{\bigodot} J - \frac{s(s-1)}{2[D+2(s-3)]} J' \scalerel*{\cdot}{\bigodot} J' \\
&= J \scalerel*{\cdot}{\bigodot} J + \rho_1 (D-2,s) {s \choose 2} J' \scalerel*{\cdot}{\bigodot} J' \, . \label{constrained exchange}
\end{align}
We will return to this result to compare it with the analogous result in the unconstrained formulation.
\subsection{Local unconstrained theory}\label{localunconstrained}
Let us demonstrate how we can rewrite Fronsdal's theory without the usual
\begin{equation}
\Lambda' = 0 \quad \& \quad \varphi'' = 0
\end{equation}
constraints. This is accomplished here by introducing two compensator fields.
\subsubsection{Free theory}
We begin by considering the Fronsdal tensor $\mathcal{F}$ and its gauge transformation \eqref{FVar}. From $\mathcal{F}$, one can build a fully gauge-invariant tensor,
\begin{equation}
\mathcal{A} := \mathcal{F} - 3 \partial^3 \alpha \, , \label{Acomp}
\end{equation}
where we introduce the field $\alpha(x)$ as a spin-$(s-3)$ \textit{compensator}, which transforms as
\begin{equation}
\delta_\Lambda \alpha = \Lambda'
\end{equation}
under the gauge transformation \eqref{gauge}.
The Bianchi identity for $\mathcal{A}$ reads
\begin{equation}
\partial \cdot \mathcal{A} - \frac{1}{2} \partial \mathcal{A}' = - \frac{3}{2} \partial^3 \left( \varphi'' - 4 \partial \cdot \alpha - \partial \alpha' \right) =: -\frac{3}{2} \partial^3 \mathcal{C} \, , \label{ABianchi}
\end{equation}
where we have identified a gauge-invariant tensor, which we denote by $\mathcal{C}$, i.e.
\begin{equation}
\mathcal{C} = \varphi'' - 4 \partial \cdot \alpha - \partial \alpha' \, .
\end{equation}
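A short check, using \eqref{fs1} twice to obtain $\delta_\Lambda \varphi'' = 4 \, \partial \cdot \Lambda' + \partial \Lambda''$, confirms that $\mathcal{C}$ is indeed gauge-invariant:
\begin{equation}
\delta_\Lambda \mathcal{C} = \left( 4 \, \partial \cdot \Lambda' + \partial \Lambda'' \right) - 4 \, \partial \cdot \Lambda' - \partial \Lambda'' = 0 \, .
\end{equation}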
In analogy with \eqref{lagrangianF}, we write the Lagrangian
\begin{equation}
\mathcal{L}_0 = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{A} - \frac{1}{2} \eta \mathcal{A}' \right) \, .
\end{equation}
Varying the action, we get
\begin{align}
\delta_{\Lambda} \mathcal{S}_0 = \int d^D x \, \delta_{\Lambda} \mathcal{L}_0 &= \frac{1}{2} \int d^D x \, \partial \Lambda \scalerel*{\cdot}{\bigodot} \left( \mathcal{A} - \frac{1}{2} \eta \mathcal{A}' \right) \\
&= - \frac{s}{2} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \left( \underbrace{\partial \cdot \mathcal{A} - \frac{1}{2} \partial \mathcal{A}'}_{\eqref{ABianchi}} - \frac{1}{2} \eta \partial \cdot \mathcal{A}' \right) \\
&= -3 \int d^D x \, \left[ {s \choose 4} \partial^3 \cdot \Lambda \scalerel*{\cdot}{\bigodot} \mathcal{C} - \frac{1}{4} {s \choose 3} \Lambda' \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{A}' \right] \, .
\end{align}
We can make all the terms under the integral vanish by adding to $\mathcal{L}_0$
\begin{equation}
\mathcal{L}_1 = -\frac{3}{4} {s \choose 3} \alpha \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{A}' + 3 {s \choose 4} \beta \scalerel*{\cdot}{\bigodot} \mathcal{C} \, ,
\end{equation}
where we introduce the second compensator\footnotemark \footnotetext{Technically, it is just a Lagrange multiplier.}, a spin-$(s-4)$ field denoted by $\beta$ that transforms as
\begin{equation}
\delta_\Lambda \beta = \partial^3 \cdot \Lambda
\end{equation}
under the gauge transformation \eqref{gauge}.
Finally, we can write the fully gauge-invariant Lagrangian for the unconstrained local theory as
\begin{equation}
\mathcal{L} = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{A} - \frac{1}{2} \eta \mathcal{A}' \right) -\frac{3}{4} {s \choose 3} \alpha \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{A}' + 3 {s \choose 4} \beta \scalerel*{\cdot}{\bigodot} \mathcal{C} \, . \label{abplagrangian}
\end{equation}
We can introduce the third gauge-invariant tensor $\mathcal{B}$,
\begin{equation}
\mathcal{B} := \beta + \Box \partial \cdot \alpha + \frac{1}{2} \partial (\partial^2 \cdot \alpha) - \frac{1}{2} \partial^2 \cdot \varphi' \, \label{Bcomp}
\end{equation}
and note that \eqref{abplagrangian} may be generalized to
\begin{equation}
\mathcal{L}_k = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{A} - \frac{1}{2} \eta \mathcal{A}' \right) -\frac{3}{4} {s \choose 3} \alpha \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{A}' + 3 {s \choose 4} \left( \beta - k \mathcal{B} \right) \scalerel*{\cdot}{\bigodot} \mathcal{C} \, , \label{abplagrangianK}
\end{equation}
without affecting the equations of motion, so that \eqref{abplagrangian} corresponds to $k=0$.
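As a consistency check (a sketch using \eqref{fs1} and \eqref{gen2}), one can verify that $\mathcal{B}$ is gauge-invariant as well: with $\delta_\Lambda (\partial^2 \cdot \varphi') = 2 \, \partial^3 \cdot \Lambda + 2 \, \Box \, \partial \cdot \Lambda' + \partial \, (\partial^2 \cdot \Lambda')$, one finds
\begin{equation}
\delta_\Lambda \mathcal{B} = \partial^3 \cdot \Lambda + \Box \, \partial \cdot \Lambda' + \frac{1}{2} \, \partial \, (\partial^2 \cdot \Lambda') - \frac{1}{2} \left[ 2 \, \partial^3 \cdot \Lambda + 2 \, \Box \, \partial \cdot \Lambda' + \partial \, (\partial^2 \cdot \Lambda') \right] = 0 \, .
\end{equation}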
As shown in \cite{fms}, a more general analysis reveals that adding quadratic terms in $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ to \eqref{abplagrangianK} does not produce any terms that would lead to different equations of motion.
Therefore, the free unconstrained local theory is parametrized by a real parameter $k$ and the gauge-invariant field equations read
\begin{align}
E_\varphi (k) &:= \mathcal{A} - \frac{1}{2} \eta \mathcal{A}' + \frac{1+k}{4} \eta \partial^2 \mathcal{C} + (1-k) \eta^2 \mathcal{B} = 0 \, , \label{Evarphi} \\
E_\alpha (k) &:= -\frac{3}{2} {s \choose 3} \left[ \partial \cdot \mathcal{A}' -\frac{1+k}{2} \left( \partial \Box + \partial^2 \partial \cdot \right) \mathcal{C} + (k-1) \left( 2 \partial + \eta \partial \cdot \right) \mathcal{B} \right] = 0 \, , \label{Ealpha} \\
E_\beta (k) &:= 3 {s \choose 4} (1-k) \mathcal{C} = 0 \, . \label{Ebeta}
\end{align}
We can use these three tensors to write the final Lagrangian in a particularly elegant form,
\begin{equation}
\mathcal{L}_k = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} E_\varphi (k) + \frac{1}{2} \alpha \scalerel*{\cdot}{\bigodot} E_\alpha (k) + \frac{1}{2} \beta \scalerel*{\cdot}{\bigodot} E_\beta (k) \, .
\end{equation}
From equations \eqref{Evarphi}-\eqref{Ebeta}, if $k\neq1$, it follows that
\begin{align}
\mathcal{A} &\equiv \mathcal{F} - 3 \partial^3 \alpha = 0 \, , \\
\mathcal{C} &\equiv \varphi'' - 4 \partial \cdot \alpha - \partial \alpha' = 0 \, .
\end{align}
After using the gauge freedom $\delta_\Lambda \alpha = \Lambda'$ to set $\alpha = 0$, which restricts the residual gauge parameter to $\Lambda' = 0$, we are left with
\begin{align}
\mathcal{F} &= 0 \, , \\
\varphi'' &= 0 \, ,
\end{align}
which is exactly equivalent to Fronsdal's constrained formulation.
\subsubsection{Interacting theory with an external current}
In the unconstrained formulation described in the previous subsection, setting $k=0$, coupling to an external source \eqref{current} is described by
\begin{equation}
\mathcal{A} - \frac{1}{2} \eta \mathcal{A}' + \eta^2 \mathcal{B} = J \label{abj}
\end{equation}
We can define a quantity $\mathcal{K}$,
\begin{equation}
\mathcal{K} := J - \eta^2 \mathcal{B} \, , \label{kdef}
\end{equation}
and write the equation of motion as
\begin{equation}
\mathcal{A} - \frac{1}{2} \eta \mathcal{A}' = \mathcal{K} \, , \label{unconseom}
\end{equation}
so that, formally, $\mathcal{A}$ and $\mathcal{K}$ play the same role as $\mathcal{F}$ and $J$ play in the constrained formalism of \textbf{Section \ref{interacting constrained}}.
Note that
\begin{equation}
\mathcal{A}'' = 3 \Box \mathcal{C} + 3 \partial (\partial \cdot \mathcal{C}) + \partial^2 \mathcal{C}' = 0 \, , \label{app}
\end{equation}
since $\mathcal{C}$ vanishes as a result of \eqref{Ebeta}. Since $\mathcal{A}''$ vanishes when the equations of motion are satisfied, we can write
\begin{equation}
\mathcal{B} = \sum_{k=2}^{n+1} \sigma_k \eta^k J^{[k]} \, ,
\end{equation}
where $n=\left[ \frac{s-1}{2} \right]$ and we can determine the coefficients $\sigma_k$ from the condition $\mathcal{K}'' = 0$. A direct computation yields
\begin{equation}
\mathcal{B} = \sum_{k=2}^{n+1} (1-n) \, \rho_n(D-2,s) \eta^k J^{[k]} \, , \label{bcons}
\end{equation}
which allows us to rewrite \eqref{abj} as
\begin{equation}
\mathcal{A} - \frac{1}{2} \eta \mathcal{A}' = J - \sum_{k=2}^{n+1} (1-n) \, \rho_n(D-2,s) \eta^k J^{[k]}
\end{equation}
We can use the formal correspondence between $\mathcal{A}$ and $\mathcal{K}$, and $\mathcal{F}$ and $J$, to skip the explicit calculation and immediately write
\begin{equation}
\mathcal{A} = \mathcal{K} - \frac{1}{D + 2(s-3)} \eta \mathcal{K}' \, , \label{acons}
\end{equation}
in analogy with \eqref{fronsINT}. Using \eqref{bcons} and \eqref{kdef}, we arrive at
\begin{equation}
\mathcal{A} = \sum_{k=0}^{n+1} \rho_k (D-2,s) \eta^k J^{[k]}
\end{equation}
The current exchange is thus
\begin{align}
J_{(\mu)} \mathcal{P}^{(\mu)(\nu)} J_{(\nu)} &= \sum_{k=0}^{n+1} \rho_k (D-2,s) J \scalerel*{\cdot}{\bigodot} \eta^k J^{[k]} \\
&= \sum_{k=0}^{n+1} \rho_k (D-2,s) \, \frac{s!}{2^k \, k! \, (s-2k)!} \, J^{[k]} \scalerel*{\cdot}{\bigodot} J^{[k]} \, , \label{unconstrained exchange}
\end{align}
which agrees with \eqref{constrained exchange}, as we can see by expanding the first two terms,
\begin{align}
J_{(\mu)} \mathcal{P}^{(\mu)(\nu)} J_{(\nu)} & = J \scalerel*{\cdot}{\bigodot} J + \rho_1 (D-2,s) {s \choose 2} J' \scalerel*{\cdot}{\bigodot} J' \\ \nonumber &\quad + \sum_{k=2}^{n+1} \rho_k (D-2,s) J \scalerel*{\cdot}{\bigodot} \eta^k J^{[k]} \, .
\end{align}
\newpage
\subsection{Non-local unconstrained theory}\label{nonlocal}
Instead of introducing compensator fields $\alpha$ and $\beta$ and formulating the theory in terms of $\mathcal{A}$, $\mathcal{C}$ and $\mathcal{B}$ tensors, we can construct it using only the gauge field $\varphi$ if we allow non-local operators, i.e. powers of $\frac{1}{\Box}$ \footnotemark.\footnotetext{Alternatively, we could multiply the equations with the appropriate power of $\Box$ and have a higher-order derivative theory instead. However, it is not clear if the higher-order formulation of the theory is equivalent to the non-local formulation.}
Let us show here how to construct the theory in this manner.
\subsubsection{Free theory}
One begins by building a non-local tensor $\mathcal{H}$ that satisfies
\begin{equation}
\delta_{\Lambda} \mathcal{H} = 3 \Lambda' \, ,
\end{equation}
so that $\mathcal{F} - \partial^3 \mathcal{H}$ becomes gauge-invariant without any additional constraints or compensator fields.
As shown in \cite{fs1}, inspired by HS generalizations of metric connections from general relativity (developed in \cite{dwf} and later explained in more detail in \textbf{Section \ref{geometric}}), we can try to construct a generalized Fronsdal tensor $\mathcal{F}_n$ that transforms as
\begin{equation}
\delta_\Lambda \mathcal{F}_{n} = (2n+1) \frac{\partial^{2n+1}}{\Box^{n-1}} \Lambda^{[n]} \, \label{gaugenonlocal}
\end{equation}
under the gauge variation \eqref{gauge}. This way, for high enough $n$, $\mathcal{F}_n$ becomes gauge-invariant. Since we also want the action to be gauge invariant, we require that $\mathcal{F}_n$ satisfies a generalization of the Bianchi identity,
\begin{equation}
\partial \cdot \mathcal{F}_{n} - \frac{1}{2n} \partial \mathcal{F}_{n}{}' = - \left( 1 + \frac{1}{2n} \right) \frac{\partial^{2n+1}}{\Box^{n-1}} \varphi^{[n+1]} \, , \label{bianchinonlocal}
\end{equation}
whose right-hand side also vanishes for high enough $n$. The generalized Fronsdal tensor $\mathcal{F}_n$ that satisfies all these requirements reads
\begin{equation}
\mathcal{F}_{n+1} = \mathcal{F}_{n} - \frac{1}{n+1} \frac{\partial}{\Box} \left( \partial \cdot \mathcal{F}_{n} \right) + \frac{1}{(n+1)(2n+1)} \frac{\partial^2}{\Box} \mathcal{F}_{n}{}' \, ,
\end{equation}
where $\mathcal{F}_{1} = \mathcal{F}$ (or equivalently, $\mathcal{F}_0 = \Box \varphi$), as one can easily check through direct computation and using simple inductive arguments.
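As a quick illustration for $n = 1$, a sketch using \eqref{fs1}, \eqref{fs2} and $\partial^p \partial^q = {{p+q} \choose q} \partial^{p+q}$ gives, for the gauge variation of $\mathcal{F}_2 = \mathcal{F} - \frac{1}{2} \frac{\partial}{\Box} (\partial \cdot \mathcal{F}) + \frac{1}{6} \frac{\partial^2}{\Box} \mathcal{F}'$,
\begin{equation}
\delta_\Lambda \mathcal{F}_2 = \left( 3 - \frac{9}{2} + \frac{3}{2} \right) \partial^3 \Lambda' + \left( -6 + 6 \right) \frac{\partial^4}{\Box} \left( \partial \cdot \Lambda' \right) + 5 \, \frac{\partial^5}{\Box} \Lambda'' = 5 \, \frac{\partial^5}{\Box} \Lambda'' \, ,
\end{equation}
in agreement with \eqref{gaugenonlocal} for $n = 2$; in particular, $\mathcal{F}_2$ is already fully gauge-invariant for $s = 3$, where $\Lambda''$ does not exist.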
To construct a spin-$s$ theory, we use $\mathcal{F}_{n+1}$ with $n=\left[ \frac{s-1}{2} \right]$, the minimal value for which the right-hand sides of the gauge variation \eqref{gaugenonlocal} and of the Bianchi identity \eqref{bianchinonlocal} both vanish.
The corresponding generalized Einstein-like tensor reads
\begin{equation}
\mathcal{G}_{n} = \sum_{k=0}^{n+1} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \eta^k \mathcal{F}_{n+1}^{[k]} \, .
\end{equation}
One can check that $\mathcal{G}_{n}$ is indeed divergenceless, as required by the gauge-invariance of the action, using the traces of \eqref{bianchinonlocal}, which satisfy
\begin{equation}
\partial \cdot \mathcal{F}_{n+1}^{[k]} - \frac{1}{2(n-k+1)} \partial \mathcal{F}_{n+1}^{[k+1]} = 0 \, , \quad (k \leq n) \label{pbianchinonlocal}
\end{equation}
and applying them successively to the terms in $\partial \cdot \mathcal{G}_{n}$.
For clarity, let us show how all the pieces fit together to make the action gauge-invariant.
\begin{align}
\delta_\Lambda \mathcal{S}_n &= \int d^D x \, \delta_\Lambda \mathcal{L}_n \\ &= \frac{1}{2} \int d^D x \, \left( \partial \Lambda \scalerel*{\cdot}{\bigodot} \mathcal{G}_n + \varphi \scalerel*{\cdot}{\bigodot} \delta_\Lambda \mathcal{G}_n \right) \\
&= \frac{1}{2} \sum_{k=0}^{n+1} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \int d^D x \, \left( \partial \Lambda \scalerel*{\cdot}{\bigodot} \eta^k \mathcal{F}_{n+1}^{[k]} + \varphi \scalerel*{\cdot}{\bigodot} \eta^k \underbrace{\delta_\Lambda \mathcal{F}_{n+1}^{[k]}}_{=0, \eqref{gaugenonlocal}} \right) \\
&= -\frac{s}{2} \sum_{k=0}^{n+1} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \int d^D x \, \left[ \Lambda \scalerel*{\cdot}{\bigodot} \partial \cdot \left( \eta^k \mathcal{F}_{n+1}^{[k]} \right) \right] \\
&= -\frac{s}{2} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \partial \cdot \mathcal{F}_{n+1} -\frac{s}{2} \sum_{k=1}^{n+1} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \eta^k \partial \cdot \mathcal{F}_{n+1}^{[k]} \\ \nonumber &\quad+s \sum_{k=0}^{n} \frac{(-1)^k}{2^k (n+1)^{\underline{k+1}}} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \eta^{k} \partial \mathcal{F}_{n+1}^{[k+1]} \\
&= -\frac{s}{2} \sum_{k=0}^{n} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \int d^D x \, \Lambda \scalerel*{\cdot}{\bigodot} \eta^k \left( \underbrace{ \partial \cdot \mathcal{F}_{n+1}^{[k]} - \frac{1}{2(n-k+1)} \partial \mathcal{F}_{n+1}^{[k+1]} }_{=0, \eqref{pbianchinonlocal}} \right) \\
&= 0 \, .
\end{align}
\subsubsection{Interacting theory with an external current}
If the system is coupled to a generic totally symmetric external current $\mathcal{J}$, a natural starting point would be to write the Lagrangian as
\begin{equation}
\mathcal{L} = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \left( \mathcal{G}_{n} - \mathcal{J} \right)
\end{equation}
and the field equations read
\begin{equation}
\mathcal{G}_{n} \equiv \sum_{k=0}^{n+1} \frac{(-1)^k}{2^k (n+1)^{\underline{k}}} \eta^k \mathcal{F}_{n+1}^{[k]} = \mathcal{J} \, . \label{nonlocal current}
\end{equation}
We proceed as in all previous cases, inverting \eqref{nonlocal current} to extract the current exchange. Taking successive traces of \eqref{nonlocal current} and multiplying both sides with metric tensors to obtain a tensor of order $s$, one finds a general relation,
\begin{equation}
\rho_k (D-2n,s-1) \eta^k \mathcal{J}^{[k]} = (-1)^k \sum_{p=k}^{n+1} \frac{(-1)^{p}}{2^{p} (n+1)^{\underline{p}}} {p \choose k} \eta^p \mathcal{F}_{n+1}^{[p]} \, . \label{JFrelation}
\end{equation}
Summing both sides of \eqref{JFrelation} over $k$ ($0 \leq k \leq n+1$), one finds that the factor $(-1)^k {p \choose k}$ cancels all the terms over $p$ on the right-hand side, except $\mathcal{F}_{n+1}$, i.e.
\begin{equation}
\mathcal{F}_{n+1} = \sum_{k=0}^{n+1} \rho_k (D-2n,s-1) \eta^k \mathcal{J}^{[k]} \, .
\end{equation}
Therefore, the current exchange is described by
\begin{align}
\mathcal{J}_{(\mu)} \mathcal{P}^{(\mu)(\nu)} \mathcal{J}_{(\nu)} &= \sum_{k=0}^{n+1} \rho_k (D-2n,s-1) \mathcal{J} \scalerel*{\cdot}{\bigodot} \eta^k \mathcal{J}^{[k]} \\
&= \sum_{k=0}^{n+1} \rho_k (D-2n,s-1) \, \frac{s!}{2^k \, k! \, (s-2k)!} \, \mathcal{J}^{[k]} \scalerel*{\cdot}{\bigodot} \mathcal{J}^{[k]} \, . \label{thexchangesum}
\end{align}
Expanding the first two terms of the current exchange, we see that
\begin{align}
\mathcal{J}_{(\mu)} \mathcal{P}^{(\mu)(\nu)} \mathcal{J}_{(\nu)} &= \mathcal{J} \scalerel*{\cdot}{\bigodot} \mathcal{J} + \rho_1 (D-2n,s-1) {s \choose 2} \mathcal{J}' \scalerel*{\cdot}{\bigodot} \mathcal{J}'\\ \nonumber &\quad + \sum_{k=2}^{n+1} \rho_k (D-2n,s-1) \, \frac{s!}{2^k \, k! \, (s-2k)!} \, \mathcal{J}^{[k]} \scalerel*{\cdot}{\bigodot} \mathcal{J}^{[k]} \, ,
\end{align}
which, due to the presence of an additional $-2n$ in the denominator, clearly \textit{disagrees} with the constrained case \eqref{constrained exchange} and the unconstrained case \eqref{unconstrained exchange}, except in the case of lower spins, i.e. $s\leq2$. We explore the implications of this disagreement in the following segment.
\subsubsection{HS theory with proper current exchange}
So far, we have seen three different ways to formulate a higher spin theory of massless bosons in flat spacetime. One is \textit{Fronsdal's constrained formulation}, explored in \textbf{Section \ref{freefronsdal}}, in which the gauge-invariance of the action is enforced by restricting the gauge parameter $\Lambda$ to a traceless tensor and restricting the gauge field $\varphi$ to a doubly-traceless tensor. The second one is the \textit{local unconstrained formulation}, explored in \textbf{Section \ref{localunconstrained}}, which requires additional non-dynamical fields $\alpha$ and $\beta$ to ensure a fully gauge-invariant action. The third one is the \textit{non-local unconstrained formulation}, explored in this section, which allows for a fully gauge-invariant theory without any additional fields, at the cost of having to use non-local operators $\frac{1}{\Box}$. As we concluded in the previous segment, current exchanges in the non-local unconstrained theory seem to disagree with the other two formulations. Since the non-local formulation is based on simple geometric arguments, without imposing \textit{ad-hoc} constraints or adding additional fields to the theory, it is natural to take it as a starting point, try to understand the disagreement, and try to reformulate it in a way that naturally reduces to the other two formulations. As we shall see, this leads us to a \textit{unique} form of the theory for each spin.
Equation \eqref{unconstrained exchange} suggests that, in the constrained formulation, the operator $\mathcal{P}_c$, as defined in \eqref{currx}, is
\begin{equation}
\mathcal{P}_c \scalerel*{\cdot}{\bigodot} J= \sum_{k=0}^{n+1} \rho_k (D-2,s) \eta^k J^{[k]} \, .
\end{equation}
A direct computation shows that
\begin{equation}
(\mathcal{P}_c \scalerel*{\cdot}{\bigodot} J)' = 2 \sum_{k=0}^{n+1} \rho_{k+1} (D-2,s) \eta^k J^{[k]} \, ,
\end{equation}
and
\begin{equation}
(\mathcal{P}_c \scalerel*{\cdot}{\bigodot} J)'' = 0 \, . \label{p''}
\end{equation}
Note that $\mathcal{P}_c$ precisely corresponds to \eqref{currx} if the current is conserved, since $\Pi$ effectively gets replaced by $\eta$.
Thus, if we want to build an unconstrained theory with proper current exchanges, in analogy with \eqref{abj}, we postulate the non-local Einstein tensor $\mathcal{E}$ of the form\footnotemark\footnotetext{We put $\varphi$ in the subscript to stress the fact that these quantities are to be built using only the gauge field $\varphi$.}
\begin{equation}
\mathcal{E} = \mathcal{A}_\varphi - \frac{1}{2} \eta \mathcal{A}_\varphi ' + \eta^2 \mathcal{B}_\varphi \, ,
\end{equation}
requiring that $\mathcal{A}_\varphi'' = 0$, reflecting \eqref{p''}, and $\partial \cdot \mathcal{E} = 0$, reflecting the fact that $\mathcal{P}_c$ corresponds to the generalized projection operator for a \textit{conserved} external current.
We construct $\mathcal{A}_\varphi$ using gauge-invariant building blocks, which can all be expressed in terms of $\mathcal{F}_{n+1}$\footnotemark\footnotetext{Alternatively, we could have used $\mathcal{G}_n$ as the main building block.}, where $n=\left[ \frac{s-1}{2} \right]$. Since
\begin{equation}
\partial \cdot \mathcal{F}^{[k]}_{n+1} = \frac{1}{2(n-k+1)} \partial \mathcal{F}^{[k+1]}_{n+1} \, ,
\end{equation}
all divergences can be expressed in terms of traces, which means that our gauge-invariant building blocks are
\begin{equation}
\mathcal{F}_{n+1}, \, \mathcal{F}_{n+1}' , \, \dots \, , \mathcal{F}^{[N]}_{n+1}
\end{equation}
where $N=\left[ \frac{s}{2} \right]$. The general linear combination thus reads
\begin{equation}
\mathcal{A}_\varphi = \sum_{k=0}^{N} a_k \frac{\partial^{2k}}{\Box^k} \mathcal{F}^{[k]}_{n+1} \, .
\end{equation}
Demanding that $\mathcal{A}_\varphi$ satisfies the Bianchi identity (because we want to write the Lagrangian as $\mathcal{L}_\varphi = \frac{1}{2} \varphi \scalerel*{\cdot}{\bigodot} \mathcal{E}$),
\begin{equation}
\partial \cdot \mathcal{A}_\varphi - \frac{1}{2} \partial \mathcal{A}_\varphi ' = 0 \, \label{bianchnl}
\end{equation}
implies
\begin{equation}
a_k = (-1)^{k+1} (2k-1) \prod_{j=0}^{k-1} \frac{n+j}{n-j+1} \, .
\end{equation}
We can write $\mathcal{B}_\varphi$ as
\begin{equation}
\mathcal{B}_\varphi = \sum_{k=0}^{N-2} \eta^k \mathcal{B}_k \, ,
\end{equation}
where $\mathcal{B}_k$ terms contain no metric tensors.
We solve for $\mathcal{B}_k$ by demanding that $\mathcal{E}$ be divergenceless, i.e.
\begin{align}
\partial \cdot & \left\{ \mathcal{A}_\varphi - \frac{1}{2} \eta \mathcal{A}_\varphi ' + \eta^2 \mathcal{B}_\varphi \right\} = 0 \, \\
\implies & \partial \cdot \mathcal{A}_\varphi ' = 2 \partial \mathcal{B}_\varphi + \eta \partial \cdot \mathcal{B}_\varphi \, .
\end{align}
This in turn implies that $\partial \cdot \mathcal{A}_\varphi'$ is a pure gradient,
\begin{equation}
\partial \cdot \mathcal{A}_\varphi' = 2 \partial \mathcal{B}_0 \, , \label{AB}
\end{equation}
and $\mathcal{B}_k$ tensors can therefore be expressed as traces of $\mathcal{B}_0$, i.e.
\begin{equation}
\mathcal{B}_\varphi = \sum_{k=0}^{N-2} \frac{1}{2^{k-1} (k+2)!} \eta^k \mathcal{B}^{[k]}_0 \, .
\end{equation}
Solving \eqref{AB} for $\mathcal{B}_0$ gives
\begin{equation}
\mathcal{B}_0 = \sum_{k=0}^{N} b_k \frac{\partial^{2k}}{\Box^k} \mathcal{F}^{[k+2]}_{n+1} \, ,
\end{equation}
where\footnotemark \footnotetext{This calculation was worked out with an error in \cite{fms} and later corrected in \cite{dariomass}, which unfortunately also seems to contain an error.}
\begin{equation}
b_k = \frac{a_k}{4(n-k)(n-k+1)} \frac{1-4n^2}{1-4k^2} \, .
\end{equation}
Note that the denominator of $b_k$ can be equal to zero, but that is not a problem, since $\mathcal{F}^{[k+2]}_{n+1}$ vanishes for those values.
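As a quick consistency check, the expressions for $a_k$ and $b_k$ above can be evaluated with exact rational arithmetic. The following minimal Python sketch is our own illustration (the function names are hypothetical and not part of any existing code); it computes the coefficients for a given spin $s$ and returns \texttt{None} precisely where the denominator of $b_k$ vanishes:
\begin{verbatim}
from fractions import Fraction

def a_coeff(k, n):
    # a_k = (-1)^(k+1) (2k-1) prod_{j=0}^{k-1} (n+j)/(n-j+1)
    prod = Fraction(1)
    for j in range(k):
        prod *= Fraction(n + j, n - j + 1)
    return (-1) ** (k + 1) * (2 * k - 1) * prod

def b_coeff(k, n):
    # b_k = a_k (1-4n^2) / [4 (n-k)(n-k+1) (1-4k^2)];
    # None exactly where F^{[k+2]}_{n+1} vanishes identically.
    denom = 4 * (n - k) * (n - k + 1) * (1 - 4 * k ** 2)
    if denom == 0:
        return None
    return a_coeff(k, n) * Fraction(1 - 4 * n ** 2, denom)

s = 5
n, N = (s - 1) // 2, s // 2
print([a_coeff(k, n) for k in range(N + 1)])  # 1, 2/3, -3
print([b_coeff(k, n) for k in range(N + 1)])  # -5/8, 5/12, None
\end{verbatim}
For $s=5$ (so $n=N=2$) this gives $a=(1,\tfrac{2}{3},-3)$ and $b_0=-\tfrac{5}{8}$, $b_1=\tfrac{5}{12}$, with $b_2$ undefined, in line with the remark above.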
\subsection{Geometric theory}\label{geometric}
It is instructive to formulate the theory purely in terms of generalized geometric objects. We start by defining \textit{higher-spin curvatures}, which leads us to the construction of generalized Riemann and Einstein tensors.
\subsubsection{Higher-spin curvature}
As explained in \cite{dwf}, HS curvatures are essentially a hierarchy of generalized (linearized!) \textit{Christoffel connections} $\Gamma$ built from derivatives of the gauge field $\varphi$.
The $m$-th connection reads
\begin{equation}
\Gamma^{(m)}_{\mu_1 \cdots \mu_m ; \nu_1 \cdots \nu_s} \equiv \Gamma^{(m)} = \sum_{k=0}^{m} \frac{(-1)^k}{{m \choose k}} \partial_{(\nu)}^{m-k} \partial_{(\mu)}^k \varphi \, , \label{gamma}
\end{equation}
where we write $(\mu)$ and $(\nu)$ in the subscript of $\partial$ to denote that $m-k$ derivatives carry one set of symmetric indices $(\nu_1 \cdots \nu_s)$, whereas $k$ derivatives carry the other set of symmetric indices $(\mu_1 \cdots \mu_m)$.
For example, if $s=2$ the first connection is
\begin{align}
\Gamma^{(1)} &= \partial_{(\nu)} \varphi - \partial_{(\mu)} \varphi \\
& \equiv \partial_{\nu_1} \varphi_{\nu_2 \mu} + \partial_{\nu_2} \varphi_{\nu_1 \mu} - \partial_\mu \varphi_{\nu_1 \nu_2}
\end{align}
which is exactly the linearized first connection as we know it from linearized general relativity, up to a multiplicative constant.
The gauge variation of $\Gamma^{(m)}$ is
\begin{align}
\delta \Gamma^{(m)} &= \sum_{k=0}^{m} \frac{(-1)^k}{{m \choose k}} \partial_{(\nu)}^{m-k} \partial_{(\mu)}^k (\partial_{(\nu)} \Lambda + \partial_{(\mu)} \Lambda) \\
&= \sum_{k=0}^{m} \frac{(-1)^k}{{m \choose k}} \left[ (m-k+1) \partial_{(\nu)}^{m-k+1} \partial_{(\mu)}^k \Lambda + (k+1) \partial_{(\nu)}^{m-k} \partial_{(\mu)}^{k+1} \Lambda \right] \\
& = (m+1) \partial_{(\nu)}^{m+1} \Lambda \,.
\end{align}
Since $\partial$ should carry $m+1$ $\nu$-indices, $\Lambda$ should carry $s-1$ $\mu$-indices and there are $m+s$ indices in total, the gauge variation vanishes for $m \geq s$. For this reason, we define the generalized Riemann tensor as
\begin{align}
\mathcal{R} := \Gamma^{(s)} \, .
\end{align}
\subsubsection{Relationship between $\mathcal{R}$ and $\mathcal{F}$}
Following \cite{fms}, we can relate the Fronsdal tensor $\mathcal{F}$ to the generalized Riemann tensor $\mathcal{R}$ using the generalized Fronsdal tensor $\mathcal{F}_n$,
\begin{equation}
\mathcal{F}_{n+1} = \frac{1}{\Box^n} \partial^{s - 2N} \cdot \mathcal{R}^{[N]} \, , \label{geometry}
\end{equation}
where, as before, $n=\left[ \frac{s-1}{2} \right]$, $N=\left[ \frac{s}{2} \right]$ and the contraction is performed on $\mu$-indices, i.e. the first set of indices in the $s$-th connection.
The correspondence \eqref{geometry} thus allows us to reformulate the theory using only (linearized!) geometric objects, which was one of the motivating factors that led to the construction of $\mathcal{F}_{n+1}$, as mentioned in \textbf{Section \ref{nonlocal}}.
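As a simple check of \eqref{geometry} (up to the normalization conventions adopted here), for $s=2$ one has $n=0$, $N=1$ and $s-2N=0$, so that \eqref{geometry} reduces to
\begin{equation*}
\mathcal{F}_1 = \mathcal{R}' \, ,
\end{equation*}
i.e. the Fronsdal tensor is the single trace of the generalized Riemann tensor, recovering the linearized Ricci tensor of general relativity.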
\newpage
\section{Discussion}\label{s7}
Having gone through the analysis of massless bosonic fields, we are now in a position to concisely define the proper theory and show how it reduces to some interesting special cases.
The full set of equations reads
\begin{empheq}[box=\widefbox]{align}
\mathcal{L} &= \frac{1}{2} \varphi \left( \mathcal{E} - \mathcal{J} \right) \\
\mathcal{E} &= \mathcal{A}_\varphi - \frac{1}{2} \eta \mathcal{A}_\varphi ' + \eta^2 \mathcal{B}_\varphi\\
\mathcal{A}_\varphi &= \sum_{k=0}^{N} a_k \frac{\partial^{2k}}{\Box^k} \mathcal{F}^{[k]}_{n+1} \\
\mathcal{B}_\varphi &= \sum_{k=0}^{N-2} \frac{1}{2^{k-1} (k+2)!} \eta^k \mathcal{B}_0^{[k]} \label{B} \\
\mathcal{B}_0 &= \sum_{k=0}^{N} b_k \frac{\partial^{2k}}{\Box^k} \mathcal{F}^{[k+2]}_{n+1} \label{D} \\
a_k &= (-1)^{k+1} (2k-1) \prod_{j=0}^{k-1} \frac{n+j}{n-j+1} \label{ak} \\
b_k &= \frac{a_k}{4(n-k)(n-k+1)} \frac{1-4n^2}{1-4k^2} \label{bk} \\
\mathcal{F}_{n+1} &= \mathcal{F}_n - \frac{1}{n+1} \frac{\partial}{\Box} \partial \cdot \mathcal{F}_n + \frac{1}{(n+1)(2n+1)} \frac{\partial^2}{\Box} \mathcal{F}_n' \\
\mathcal{F}_0 &= \Box \varphi \\
n &= \left[ \frac{s-1}{2} \right], \quad N = \left[ \frac{s}{2} \right]
\end{empheq}
If the theory is free, $\mathcal{E} = 0$ reduces to $\mathcal{A}_\varphi = 0$ which in turn reduces to $\mathcal{F}_{n+1} = 0$.
If we want to cast the theory in a local unconstrained form, we express $\mathcal{A}$ in terms of $\varphi$ and $\alpha$ as described in \eqref{Acomp}, and we express $\mathcal{B}$ in terms of $\varphi$ and $\beta$ as described in \eqref{Bcomp}.
Finally, to go full circle and arrive at Fronsdal's constrained theory, we simply dispense with compensator fields $\alpha$ and $\beta$ and we impose $\Lambda'=0$ and $\varphi''=0$.
\newpage
\subsection{A single equation?}\label{disc}
The Einstein-like tensor $\mathcal{E}$ reveals an interesting peculiarity when expressed explicitly in terms of $\varphi$. As can be seen in \textbf{Appendix \ref{E}}, its general form for the spin-$s$ tensor $\mathcal{E}_s$ appears to be
\begin{equation}
\mathcal{E}_{s} [\varphi_s]= \mathcal{E}_{s-1}[\varphi_s] + \Delta_{s}[\varphi_s] \, ,
\end{equation}
where $\Delta_s$ contains only those terms that become non-vanishing for spin $s$, i.e.
\begin{equation}
\Delta_s [\varphi_{s'}] = 0 \quad \text{for} \quad s' < s \, .
\end{equation}
In other words, for spin $s$, all tensors $\mathcal{E}_k$ where $k\geq s$ are equally valid, since they trivially reduce to $\mathcal{E}_s$.
What this seems to imply is not only that there is a unique Einstein-like tensor that leads to a valid theory \textbf{for each spin}, but that there is a \textbf{single tensor} $\mathcal{E}_\infty$ that leads to a valid theory \textbf{for all spins}.
Note that we have only evaluated $\mathcal{E}_s$ up to $s=15$ using computer-assisted methods described in \textbf{Appendix \ref{program}}, but it certainly seems natural that this pattern holds for general spin $s$.
This conjecture remains to be proved, and the possibility of an explicit construction of $\mathcal{E}_\infty$ also remains an open question.
\section{Conclusion}\label{s8}
We have shown the proper form of equations for a theory of massless higher-spin bosons interacting with a generic external current. As it turns out, to construct a consistent unconstrained theory, we have to introduce either non-local operators or higher derivatives.
This construction leads to a \textit{unique} theory for each spin, perhaps even a unique theory for \textit{all spins}, as discussed in \textbf{Section \ref{disc}}.
Putting the $\mathrm{(A)dS}$ and the fermionic theory aside, an interesting step forward would perhaps be to find the proper HS gauge corresponding to the \textit{de Donder} gauge for the spin-$2$ theory. Some interesting results regarding generalized \textit{de Donder} gauges can be found in \cite{fs2}, but they certainly deserve further investigation.
Note that in spacetimes with more than four dimensions, fully symmetric tensors do not exhaust all the available possibilities and one should also consider mixed-symmetry tensors. This was considered, for example, in \cite{mix1}, \cite{mix2} or \cite{mix3}.
\section{Introduction}
In recent years, there has been considerable interest in investigating the physical properties related to quantum systems with long-range interactions \cite{Lahaye2009, Peter2012, Gong2016, Zeiher2017}.
This is primarily in response to the significant developments made in experimental atomic, molecular and optical (AMO) physics \cite{Bloch2008,Saffman2010,Douglas2015}, where such interactions can be implemented in a well controlled setting \cite{Schauss2012,Britton2012,Yan2013,Islam2013}.
These studies have led to a flurry of exciting new physical phenomena
\cite{Hauke2013,Sciolla2011,Zunkovic2018, Richerme2014, Jurcevic2014, Schachenmayer2013, Buyskikh2016, Bruno2001, Laflorencie2005, Lobos2013, Maghrebi2017, Smith2016, Jaschke2017, Neyenhuis2017, Eldredge2017}, for instance, propagation of correlations faster than the Lieb-Robinson bound \cite{Hauke2013,Richerme2014,Jurcevic2014},
emergence of exotic long-range order \cite{Laflorencie2005,Lobos2013,Bruno2001,Maghrebi2017}, and
dynamical phase transitions \cite{Sciolla2011,Zunkovic2018}.
{Most of these phenomena arise in the presence of long-range interactions due to the breakdown of ``quasi-locality"~\cite{Eisert2013,Gong2017}
(cf.~\cite{Luitz2019}). In this context, quasi-locality affirms the existence of a non-relativistic spatial light-cone within which most of the causal information travels with the finite Lieb-Robinson velocity~\cite{Lieb1972}. Any correlations or response to local fluctuations appear to be strongly suppressed at small distances away from this light-cone boundary \cite{Hastings2006,Nachtergaele2006}, which may not be the case when long-range interactions are present.}
Importantly, the loss of quasi-locality can lead to nontrivial distribution of quantum entanglement \cite{Horodecki2009},
which over the years has been established as an important resource in implementation of various quantum information and computation protocols \cite{Ekert1991,Bennett1993,Bennett1992} (also see \cite{Nielsen2000}). While recent studies have { focused} on the growth of entanglement between two parties in a variable-range interacting system \cite{Schachenmayer2013, Buyskikh2016}, the effect of emergent quasi-nonlocality due to long-range interactions on the global entanglement of these systems remains elusive \cite{Pappalardi2018}.
Here, we address this void by investigating the multipartite entanglement in the ground and quenched states of quantum many-body systems with long-range interactions.
It is known that entanglement jointly distributed among many parties has richer features \cite{Vidal2000,Verstraete2002}, which has allowed for the design of sophisticated protocols such as cryptographic conference \cite{Horne1992, Bose1998} and multiparty quantum communication \cite{Karlsson1998,Bandyopadhyay2000,Rigolin2005,Yeo2006,Agrawal2006,Ghose2016}.
Multiparty entangled states
are also intrinsic resources in implementation of novel quantum computation models such as measurement-based quantum computation \cite{Raussendorf2001}. In the past decade, notable progress in experimental physics has allowed for the efficient creation and manipulation of multiparty entanglement \cite{Eibl2004,Prevedel2009,Gao2010,Pan2012,Yao2012}. This opens up the exciting potential for harnessing systems with tunable range of interactions for physical realization of these quantum protocols. Moreover, multiparty entanglement is also an important characteristic quantity in the study of critical phenomena in many-body systems \cite{Wei2005,Cui2008,Orus2008,Giampaolo2013,Stasinska2014,Hofmann2014,Roy2017,Pezze2017}.
In this work, we consider a spin-1/2 Heisenberg chain with spin interactions that follow a power-law decay (${1}/{r^{\alpha}}$). Efficient implementation of such variable interactions has been possible with recent developments in AMO physics, in particular with cold atoms \cite{Douglas2015} where the parameter $\alpha$ can be tuned. Other systems include Rydberg atoms \cite{Schauss2012}, trapped ions \cite{Britton2012} and polar molecules \cite{Yan2013}.
Incidentally, it is known that such power-law decay in the Heisenberg chain can lead to a breakdown of quasi-locality \cite{Maghrebi2017,Luitz2019}. An important ramification of this is that the area law no longer bounds the entanglement entropy \cite{Gong2017} (cf.~\cite{Hastings2007,Eisert2010}), especially for $\alpha \leq 1$. In the same vein, intuitively, one would expect that the spatial nonlocal effects induced by the long-range interactions will result in quantum phases with enhanced global entanglement.
To explore this further,
we characterize the multiparty entanglement in both the ground and quenched states of the considered Hamiltonian.
We observe a clear dichotomy between two different regimes, depending on whether the interactions in the $x$--$y$ spin plane are antiferromagnetic (AFM) or ferromagnetic (FM).
{We note that while the ground states} in the FM regime have enhanced multiparty entanglement for increased range of interactions in the system, counterintuitively, for the AFM regime the global entanglement weakly diminishes.
{Interestingly, we note that this is no longer the case when quantum states are quenched with such long-range interactions. Here, we start from a completely separable or product spin state and switch on the interactions. The subsequent growth of multipartite entanglement in the time-evolved system is then numerically analyzed.}
{Here, we observe that} long-range AFM interactions are more favorable towards the growth of multiparty entanglement, in contrast to the FM interactions, where the growth appears almost independent of the range of interactions. Thus, our findings clearly demonstrate that long-range interactions selectively enhance quantum resources, such as global entanglement in the system, and this is important for experimental efforts to generate entanglement and implement {quantum information and computation} protocols using systems with variable-range interactions.
The paper is arranged as follows. We introduce the spin-1/2 Heisenberg chain with long-range interactions in Sec.~\ref{model}. Our measure of genuine multipartite entanglement and its computation is discussed in Sec.~\ref{sec:GGM_def}. In Sec.~\ref{sec:results}, we analyze the genuine multiparty entanglement in the ground states of the long-range model. The growth of entanglement under quantum quench is then investigated in Sec.~\ref{sec:quench}, before we end with a final discussion on the results in Sec.~\ref{sec:discussions}.
\section{\label{model} Model}
We start by introducing the physical system of our interest, the one dimensional (1D) quantum spin lattice, consisting of spin-1/2 particles, coupled via long-range interactions with power-law decay. The Heisenberg Hamiltonian governing such a system can be written as
\begin{eqnarray}
\mathcal{H}=\sum_{i<j}\frac{1}{|i-j|^{\alpha}}(J_x\sigma^x_i \sigma^x_{j}+J_y\sigma^y_i\sigma^y_{j}+\Delta\sigma^z_i \sigma^z_{j}),
\label{Long_Ham}
\end{eqnarray}
where $\alpha\geq0$ is the continuous exponent that controls the long-range interaction. $J_x$ and $J_y$ are the coupling constants along the $x$ and $y$ spin axes, respectively,
and $\Delta$ is the anisotropy along the $z$-direction. Here, $\sigma^m$'s are the Pauli spin matrices ($m\in \{x,y,z\}$). For $J_x, J_y < 0$, the interaction in the $x$--$y$ plane is ferromagnetic, while for $J_x, J_y > 0$, we obtain the antiferromagnetic coupling. The above Hamiltonian, in the presence of long-range interactions, has a rich phase diagram. For $J_x = J_y = -1$, an exotic continuous symmetry breaking (CSB) phase emerges \cite{Maghrebi2017} for low values of $\alpha$, apart from the known \emph{XY} and AFM phases observed in the short-range Hamiltonian. This is a true hallmark of the quasi-nonlocal effect and {the change in effective dimensionality of the system induced by long-range interactions. This is due to the fact that spontaneous breaking of continuous symmetry typically appears only in higher dimensional spin lattices and is otherwise forbidden in low-dimensional systems by the Mermin-Wagner theorem~\cite{Mermin1966} (also see Ref.~\cite{Maghrebi2017}).}
For $\alpha=\infty$, the model reduces to the short-range Heisenberg model with nearest-neighbor (NN) interactions. A schematic of a 1D long-range quantum spin system of arbitrary size and with power-law decay of interactions is provided in Fig.~\ref{fig1}. In our study,
we consider periodic boundary conditions for the spin chain. { In order to do that, we always choose the interaction corresponding to the shortest distance, $|i-j|$, from one site to another in the periodic spin chain.} We note that at $\Delta=0$, for $J_x = J_y$, the above Hamiltonian gives us the long-range \emph{XX} model, which we analyze in our study. Moreover, for $\Delta = J_x = J_y$ and $\alpha=2$, the model reduces to the exactly solvable Haldane-Shastry model \cite{Gaudiano2008,Shastry1988,Haldane1988}.
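To make the setup concrete, the following minimal Python sketch (our own illustration with hypothetical function names; a dense-matrix construction that is only practical for small $N$) builds the Hamiltonian of Eq.~(\ref{Long_Ham}) with the shortest-distance convention on the periodic chain:
\begin{verbatim}
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(op, i, j, N):
    # Same single-site operator at sites i and j, identity elsewhere.
    facs = [np.eye(2, dtype=complex)] * N
    facs[i] = facs[j] = op
    out = facs[0]
    for f in facs[1:]:
        out = np.kron(out, f)
    return out

def hamiltonian(N, alpha, Jx, Jy, Delta):
    # Periodic boundary conditions: couple each pair (i, j) with the
    # shortest ring distance r = min(|i-j|, N-|i-j|).
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N):
        for j in range(i + 1, N):
            r = min(j - i, N - (j - i))
            H += (Jx * two_site(SX, i, j, N)
                  + Jy * two_site(SY, i, j, N)
                  + Delta * two_site(SZ, i, j, N)) / r ** alpha
    return H
\end{verbatim}
For small chains the ground state then follows from full diagonalization, e.g. \texttt{np.linalg.eigh(hamiltonian(8, 1, -1, -1, 0.5))}; for the system sizes used in this work one would instead employ sparse matrices and Lanczos-type methods, as noted in Sec.~\ref{sec:GGM_def}.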
\begin{figure}[t]
\includegraphics[width=3.4in,angle=00]{longrange2.pdf}
\caption{Schematic of a quantum spin chain with interactions that follow a power-law decay, ${1}/{r^{\alpha}}$. The black (bold) and red (dotted) lines show the short- and long-range couplings between the $j$-th spin and the other spins in the quantum system.
}
\label{fig1}
\end{figure}
\section{\label{sec:GGM_def} Measure of genuine multiparty entanglement}
Before going into the detailed analysis of the entanglement properties of the ground and quenched states of the long-ranged Heisenberg model, we begin by defining the genuine multiparty entanglement of a quantum state. We note that there exists several equivalent definitions and measures of multiparty entanglement in the literature \cite{Horodecki2009}. In our work, we are mainly { focused} on the genuine multiparty entanglement of a quantum system \cite{Chiara2018}, which is defined as follows: {\it An $N$-party pure quantum state, $|\psi\rangle_N$, is said to be genuinely multiparty entangled if it cannot be written as a product in any bipartition}. In other words, a genuine multiparty entangled state is entangled across all bipartitions of the system \cite{Goldbart2003, Blasone2008, Sen2010}. In order to estimate this quantity in $|\psi\rangle_N$, we consider the generalized geometric measure (GGM) \cite{Sen2010}, which is a computable measure of genuine multiparty entanglement of a state. It is defined as an optimized distance of the given quantum state, $|\psi\rangle_N$, from the set of all states that are not genuinely multiparty entangled. This can be mathematically expressed as
$
\mathcal{G}(|\psi\rangle_N)=1-\Lambda_{\max}^2(|\psi\rangle_N),
$
where $\Lambda_{\max} (|\psi\rangle_N ) = \max | \langle \chi|\psi\rangle_N |$, with the maximization being over all such pure quantum state $|\chi\rangle$ that are not genuinely multiparty entangled. Following some simplifications, one can derive an equivalent expression for the above equation, given by \cite{Sen2010}
\begin{equation}
\mathcal{G} (|\psi \rangle_N ) = 1 - \max \{\lambda^2_{ A : B} | A \cup B = \{s_1,\ldots, s_N\}, A \cap B = \emptyset\},
\label{GGM}
\end{equation}
where \(\lambda_{A:B}\) is the maximal Schmidt coefficient of $|\psi\rangle_N$, in the bipartition
$A:B$. The measure is then optimized over all possible bipartitions of the state, $|\psi\rangle_N$, and for spin-1/2 or qubit systems, takes values in the range: $ 0 \leq \mathcal{G} (|\psi\rangle_N ) \leq 1/2$. In recent years, GGM has been used to characterize genuine multiparty entanglement in strongly-correlated systems, including quantum spin liquids \cite{Dhar2013,Roy2016}, doped spin lattices \cite{Roy2017b,Das2018}, and other many-body systems \cite{Prabhu2011, Jindal2014, Mishra2016, Biswas2014, Sadhukhan2017}.
We note that the computation of GGM in many-body quantum systems requires access to the complete state of the system and all its reduced density matrices. In general, for a quantum system with $N$ number of sites, the number of such reduced density matrices is given by $\sum_{i=1}^{N/2} {N \choose i} $, which increases exponentially with the size of the system. Moreover, in the presence of long-range interactions, there are no known analytical or approximate methods, such as tensor-networks or matrix product states, which can be used to compute GGM, as is the case in several short-range models (see Refs.~\cite{Dhar2013,Roy2016,Roy2017b,Roy2018}).
Therefore, in our case, we are restricted to exact numerical solutions for small, finite spin chains. In our work, we have considered systems with up to $N$ = 20 spins, and use diagonalization and propagation methods based on the Krylov subspace and Lanczos algorithm. {To mitigate unstable finite-size effects in the presence of long-range interactions, we have also checked the qualitative consistency of our main results against smaller system sizes.}
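As a concrete (brute-force) illustration of Eq.~(\ref{GGM}), the following Python sketch, again with our own hypothetical function names, sweeps all bipartitions of a pure $N$-qubit state and extracts the largest Schmidt coefficient of each bipartition via a singular value decomposition:
\begin{verbatim}
from itertools import combinations
import numpy as np

def ggm(psi, N):
    # G = 1 - max over bipartitions A:B of the squared largest
    # Schmidt coefficient lambda_{A:B} of |psi>.
    psi = np.asarray(psi).reshape([2] * N)
    lam2_max = 0.0
    for r in range(1, N // 2 + 1):
        for A in combinations(range(N), r):
            perm = list(A) + [q for q in range(N) if q not in A]
            M = np.transpose(psi, perm).reshape(2 ** r, -1)
            lam2_max = max(lam2_max,
                           np.linalg.svd(M, compute_uv=False)[0] ** 2)
    return 1.0 - lam2_max
\end{verbatim}
The loop over bipartitions grows exponentially with $N$, which is what restricts the exact computation to small chains, as discussed above.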
\section{\label{sec:results} Multiparty entanglement in the ground state}
We now study the variation of genuine multiparty entanglement ($\mathcal{G}$) in the {ground states} of the spin-1/2 Heisenberg chain, with long-range interactions, given by Eq.~(\ref{Long_Ham}). Towards that aim, we consider two distinct regimes emanating from the Hamiltonian: i) the ferromagnetic regime, with interactions in the $x$--$y$ plane given by $J_x = J_y = -1$, and ii) the antiferromagnetic regime, with $J_x=J_y=1$. Here, we consider only antiferromagnetic interactions along the $z$-axis, i.e., $0 \leq \Delta/J \leq 2$ ($J$ = $|J_x| = |J_y|$), where there always exists a distinct gap between the lowest energy values. The long-range interaction in the system is controlled through the exponent $\alpha$, which is varied in the integer range $1 \leq \alpha \leq 10$. We exclude the extreme points corresponding to systems with infinite-range interactions ($\alpha = 0$) or strictly NN interactions ($\alpha = \infty$).
We start with the FM regime, and consider the case where the interaction is defined by $\alpha = 10$, with variable anisotropy between the $x$--$y$ and $z$ directions.
We note that the system is already short-range for $\alpha = 10$.
In Fig.~\ref{fig2}, we note that the ground state is genuinely multiparty entangled for all values of the anisotropy parameter, $\Delta/J$ ($0\leq\Delta/J\leq 2$), with the minimum $\mathcal{G}$ at the point, $\Delta/J = 1$. As the range of interaction is increased, by decreasing $\alpha$, the genuine multipartite entanglement increases monotonically, for $\Delta/J < 1$.
Subsequently, at higher values of anisotropy ($\Delta/J > 1$) the local minimum shifts to larger values of $\Delta/J$ for decreasing values of $\alpha$. Moreover, there is a cross-over
between $\mathcal{G}$ values of different $\alpha$, which gives rise to an interesting regime where the shortest-range ($\alpha = 10$) and the longest-range of interactions ($\alpha = 1$) generate ground states with higher global entanglement than the intermediate range of interactions.
More importantly, $\mathcal{G}$ remains the highest at $\alpha = 1$, and gradually decreases with increasing $\Delta$. Therefore, as long-range interactions in the system {increase} there is an expected hike in the genuine multipartite entanglement {in the ground state} of the ferromagnetic spin-1/2 Heisenberg Hamiltonian.
\begin{figure}[t]
\includegraphics[width=3in,angle=00]{FM_new.pdf}
\caption{(Color online.) Variation of genuine multipartite entanglement. Here, we consider a Heisenberg chain with $N = 20$ spins, and FM interactions ($J_x=J_y=-1$) in the
$x$--$y$ plane.
The plot shows the variation in $\mathcal{G}$ with the parameter $\Delta/J$, where $J$ = $|J_x| = |J_y|$, for ten different integer values of the exponent $\alpha$, ranging from $\alpha = 1$ (red-squares) to $\alpha = 10$ (green-circles). Here, the dashed lines are fits to the plotted data points. The regime $\Delta=0$ corresponds to the \emph{XX} model which also mimics the result obtained for Heisenberg chain at low $\Delta$.
We note that the plots for higher $\alpha$ values are very close together, which shows that the short-range character is reached fairly quickly.
Moreover, the solid-red vertical arrows highlight specific parameter regimes where $\mathcal{G}$ increases with decreasing $\alpha$ (or increasing long-range interactions), whereas the dashed-green vertical arrows show regions where $\mathcal{G}$ increases but now for increasing $\alpha$ (or decreasing long-range interactions).
Both the axes are dimensionless.
}
\label{fig2}
\end{figure}
In the AFM regime, the situation is drastically different. For the short-range interaction ($\alpha = 10$), the genuine multiparty entanglement of the ground state is minimum at $\Delta/J = 1$, with a distinct symmetry around the point, as shown in Fig.~\ref{fig3}.
In contrast to the FM regime, $\Delta/J = 1$ is the local minimum of $\mathcal{G}$ for all values of $\alpha$, {although in the vicinity of this point the multipartite entanglement increases for more long-range interactions, which is similar to the FM regime.}
However, away from this point, $\mathcal{G}$ decreases as the long-range interaction in the system is increased. This intriguing behavior of genuine multipartite entanglement is in direct contrast to the behavior of the system in the FM regime, and implies a negative interdependence between global entanglement and long-range induced nonlocal effects in the system.
This is significant from the perspective of physical implementation of quantum protocols where multiparty entanglement is an important resource. In the AFM regime, long-range interactions appear to be detrimental to generating large entangled states, as compared to the FM regime.
\begin{figure}[t]
\includegraphics[width=3in,angle=00]{AFM_new.pdf}
\caption{(Color online.) Variation of genuine multipartite entanglement. Here, we consider a Heisenberg chain with $N = 20$ spins, and AFM interactions ($J_x=J_y=1$) in the $x$--$y$ plane.
The plot shows the variation in $\mathcal{G}$ with the parameter $\Delta/J$ (where, $J$ = $|J_x| = |J_y|$), for ten different integer values of the exponent $\alpha$, ranging from $\alpha = 1$ (red-squares) to $\alpha = 10$ (green-circles).
Once more, the dashed lines are just fits to the plotted data points and both the axes here are dimensionless.
The red and green vertical arrows here imply the same behavior as outlined in Fig.~\ref{fig2}.
}
\label{fig3}
\end{figure}
{The difference in the behavior of the genuine multipartite entanglement between the FM and AFM regimes can be partly explained using a heuristic description of the ground state in these regimes based on our numerical simulations. Here, the competition between different ground state configurations in interacting many-body systems gives rise to the phenomenon of entanglement-frustration \cite{Dawson2019} (also see Refs.~\cite{Roy2017,Jindal2014}), which can potentially define the complex behavior of entanglement in our model. In the long-range interaction model that we consider, the ground state can be written as a superposition between two stable, but competing configurations, such that $|\psi_g\rangle = a~|\psi_\mathrm{N}\rangle + b~|\phi_\mathrm{\bar{N}}\rangle$.
Here, $|\psi_\mathrm{N}\rangle$ is the state arising due to the N{\'e}el order at $\Delta > 1$ (for $J$ = 1). It is expected that at large $\Delta$, the ground state will be closer to the N{\'e}el state for both the AFM and FM model. For large $\alpha$, this is simply given by $|\psi\rangle$ = $|\uparrow\downarrow\uparrow\cdots\downarrow\rangle$, where $\{\uparrow, \downarrow\}$ are the eigenstates of $\sigma_z$. However, the complementary configuration $|\psi'\rangle$ = $|\downarrow\uparrow\downarrow\cdots\uparrow\rangle$ is also a likely ground state at large $\Delta$. Hence, at dominant $\Delta$ values, frustration ensures that, $|\psi_\mathrm{N}\rangle = \beta_1|\psi\rangle + \beta_2|\psi'\rangle$. The parity symmetry of $\mathcal{H}$ results in $\beta_1 = \pm \beta_2 = 1/\sqrt{2}$, which ensures $|\psi_\mathrm{N}\rangle$ is maximally multiparty entangled.
On the other hand, $|\phi_\mathrm{\bar{N}}\rangle$ refers primarily to the non-N{\'e}el configurations in the ground state, which are orthogonal to both $|\psi\rangle$ and $|\psi'\rangle$.
These states are significant in regimes where $\Delta$ is not large and seem to arise from the \emph{XY} terms in the Hamiltonian.
However, unlike $|\psi_\mathrm{N}\rangle$, the entanglement properties of $|\phi_\mathrm{\bar{N}}\rangle$ are a priori not known.
While the above description is intuitively appealing for regimes that correspond to either large or small values of the anisotropy parameter, numerical analysis suggests that it can also provide a broad picture of the ground state for intermediate values of $\Delta$.
By investigating the quantum fidelity of the ground state to $|\psi\rangle$ and $|\psi'\rangle$, one can deduce that the overall entanglement of the ground state is dependent on the trade-off between the states $|\psi_\mathrm{N}\rangle$ and $|\phi_\mathrm{\bar{N}}\rangle$, i.e., the ratio $a/b$.
In the AFM case, two distinct regimes emerge for all $\alpha$, symmetric around the point $\Delta = 1$, viz. the region with $a >b$ (for $\Delta > 1$) and the one with $b > a$ (for $\Delta < 1$). We call these the N{\'e}el and non-N{\'e}el regimes. Later, we discuss how these regimes closely correspond to the AFM and \textit{XY} phases respectively. In the N{\'e}el regime, we observe that the ratio $a/b$ not only increases with $\Delta$ but also with $\alpha$, resulting in higher entanglement for shorter-range interactions.
Interestingly, in the non-N{\'e}el regime the opposite behavior is observed. Here, it can be numerically shown that the ratio $b/a$ increases for decreasing $\Delta$ values,
resulting in more entanglement close to $\Delta = 0$. However, $b/a$ also increases with increasing $\alpha$, which again leads to higher entanglement in short-range systems.
This allows a distinct symmetry in multiparty entanglement to emerge around the vicinity of $\Delta = 1$ (in the region $0\leq\Delta\leq2$), for all values of $\alpha$ in the AFM model, but with short-range interactions leading to more entanglement, as shown in Fig.~\ref{fig3}. Things look more interesting in the FM case, where the effects of long-range interactions become more prominent. Firstly, for $\alpha > 1$, the transition from the N{\'e}el to the non-N{\'e}el regime is no longer centered at $\Delta = 1$, apart from the short-range cases ($\alpha > 6$). The transition point in $\Delta$ increases for decreasing $\alpha$. This allows for a cross-over between the multiparty entanglement corresponding to different values of $\alpha$. Secondly, and more importantly, there is no N{\'e}el to non-N{\'e}el transition for $\alpha = 1$ in the FM case, at least within the considered parameter regime. Therefore, the ground state always corresponds to high values of $b/a$ (in the non-N{\'e}el regime) and has high multiparty entanglement compared to other values of $\alpha$ (see Fig.~\ref{fig2}).}
{Incidentally, we note that in the short-range limit (i.e. $\alpha =10$), due to the $SU(2)$ invariance of the Hamiltonian, the AFM and FM models turn out to be the same at $\Delta=\pm 1$. Moreover, in the short-range limit, the FM and AFM models here are also connected via local unitary operations (spin-flip operations at alternate sites in the spin chain) that keep the global entanglement unchanged. This is reflected in Figs.~\ref{fig2} and \ref{fig3}, where the plots for multiparty entanglement ($\mathcal{G}$) at $\alpha = 10$ are almost the same for both the FM and AFM cases.}
\begin{figure}[t]
\includegraphics[width=3.5in,angle=00]{3dphase.pdf}
\caption{(Color online.) Variation of genuine multiparty entanglement ($\mathcal{G}$) in the ground states of the long-range Heisenberg chain consisting of $N = 20$ spins in the $\Delta/J-\alpha$ plane for (a) AFM and (b) FM interactions in the $x$--$y$ plane. Here, $J$ = $|J_x| = |J_y|$.
We note here that the dashed yellow lines represent the extremal (minima) points of $\mathcal{G}$. However, they only serve as a visual aid to deconstruct the known quantum phases of the model (see Ref.~\cite{Maghrebi2017}). Both the axes and the color bar in the figures are dimensionless.
}
\label{fig4}
\end{figure}
{The dichotomy} in the behavior of genuine multipartite entanglement in the ground state of the FM and AFM regimes of the Heisenberg Hamiltonian is closely related to their respective phase structures. In Fig.~\ref{fig4}, we show that the genuine multipartite entanglement is able to deconstruct the different phases in these regimes, as established in earlier work \cite{Maghrebi2017}. For large $\alpha$, the phases are similar to their counterparts in the NN spin-1/2 Heisenberg chain, with two distinct phases: the \emph{XY} spin liquid phase and the AFM Ising-like phase. Figures~\ref{fig4}(a)-\ref{fig4}(b) show how $\mathcal{G}$ distinctly highlights these phases in the AFM and FM regimes, respectively.
We note that the ferromagnetic phase corresponding to $\Delta < -1$ is not shown in the diagram, as $\mathcal{G}$ cannot be uniquely computed for degenerate ground states. The anomalous behavior arises as $\alpha$ is decreased and one enters the quasi-nonlocal regime. For the AFM case, a relatively weakly entangled phase appears, with lower values of $\mathcal{G}$.
In contrast, in the FM regime, the continuous symmetry breaking phase emerges with decreasing $\alpha$ ($\alpha \lesssim 2$) \cite{Maghrebi2017}, which is marked by a region of high genuine multiparty entanglement. Therefore, in terms of the phase diagram, the increase in genuine multipartite entanglement with increasing long-range interactions is related to the \emph{XY}--CSB phase transition in the FM regime. In addition to this, in the truly long-range interaction limit ($\alpha\sim1$), the ground state mostly remains in the CSB phase, whose entanglement apparently does not decrease quickly even when the anisotropy and N{\'e}el order increase with $\Delta$.
\section{\label{sec:quench}Generating entanglement through quantum quench}
We now look at how genuine multipartite entanglement can be generated through a quantum quench mediated {by the variable-range interactions in either the FM or AFM regimes of the spin Hamiltonian}. In particular, we start with a product or completely separable initial state of the system, given by $|\psi\rangle_{in} = \prod_{i}^N|\phi\rangle_i$, where $|\phi\rangle_i = \frac{1}{\sqrt{2}}(|0\rangle_i+|1\rangle_i)$. Here, $|0\rangle_i$ and $|1\rangle_i$ are the eigenstates of $\sigma_i^z$.
The initial states here can be thought to be ground states of some local Hamiltonian acting identically on all the spins. For the quantum quench, the long-range interactions are instantaneously switched on in the spin system. Subsequently, the initially separable quantum state rapidly evolves in time leading to potential growth of multiparty entanglement in the system.
We note that the quench performed in our study is motivated from the perspective of various quantum information and computation protocols, where entanglement is necessary for successful implementation of the protocol. To this end, in our quench process we begin with a completely separable product state, which is a resource-less state, and wish to generate useful resource (entanglement) in the system.
Our main aim here is to see whether the presence of long-range interactions in the Heisenberg Hamiltonian can generate higher entanglement or quantum resource in these quenched states, as compared to processes that invoke only short-range interactions during the quench.
\begin{figure}[h]
\includegraphics[width=3.5in,angle=00]{Evolution.pdf}
\caption{(Color online.) Genuine multiparty entanglement after a quantum quench. The growth of $\mathcal{G}$ in a system consisting of $N = 12$ spins after a quantum quench of the initial product state, $|\psi\rangle_{in}$, for (a)-(b) AFM and (c)-(d) FM interactions in the $x$--$y$ plane. Here, $J$ = $|J_x| = |J_y|$ and the plots correspond to $\alpha =$ 1 (red-squares), 2 (green-circles), 5 (yellow-triangles), 10 (orange-circles). The red vertical arrow here implies the same behavior as outlined in Fig.~\ref{fig2}.
The axes in the above figures are all dimensionless.
}
\label{fig5}
\end{figure}
The initial state is subjected to a quantum quench and coherently evolves to $|\psi(t)\rangle = \exp(-i\mathcal{H}t)|\psi\rangle_{in}$. Subsequently, we measure how much GGM is generated in the quenched state, i.e., we calculate $\mathcal{G}(|\psi(t)\rangle)$.
We are interested in the parameter regimes away from $\Delta = 1$, where the dichotomy between {the ground states in the FM and AFM cases} appears to be the most distinct. Figure~\ref{fig5} shows the evolution of the state after the quench. Surprisingly, for the quenched dynamics, long-range interactions ($\alpha = 1$) appear to play a strong role in the growth of multipartite entanglement when $|\psi\rangle_{in}$ is quenched in the AFM regime. In contrast, the generation of multipartite entanglement in the FM regime is almost independent of the range of interactions in the system. This implies that highly entangled quantum states can be generated through quenching in the FM regime even in the absence of any significant long-range interactions.
Therefore, in quenched dynamics long-range interactions seem to affect the multiparty entanglement favorably in the AFM regime, while remaining ambivalent in the FM regime. This is converse to the outcome that was observed in the ground state phases of the system.
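A minimal sketch of this protocol (ours, reusing the hypothetical \texttt{hamiltonian} and \texttt{ggm} helpers from the sketches above, on a deliberately small chain) prepares the product state $|\psi\rangle_{in}$ and monitors $\mathcal{G}(|\psi(t)\rangle)$ along the evolution $|\psi(t)\rangle = \exp(-i\mathcal{H}t)|\psi\rangle_{in}$:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import expm_multiply

N = 8                                       # small chain for illustration
H = hamiltonian(N, alpha=1, Jx=1, Jy=1, Delta=0.5)  # AFM x-y couplings

plus = np.array([1.0, 1.0]) / np.sqrt(2)    # (|0> + |1>)/sqrt(2)
psi0 = plus
for _ in range(N - 1):
    psi0 = np.kron(psi0, plus)              # fully separable |psi>_in

# |psi(t)> = exp(-iHt)|psi0> on a grid of times in [0, 5]
states = expm_multiply(-1j * H, psi0, start=0.0, stop=5.0,
                       num=21, endpoint=True)
for t, psi_t in zip(np.linspace(0.0, 5.0, 21), states):
    print(f"t = {t:4.2f}   G = {ggm(psi_t, N):.4f}")
\end{verbatim}
Repeating such runs for FM couplings ($J_x=J_y=-1$) and for larger $\alpha$ should reproduce the qualitative dichotomy described above.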
\section{\label{sec:discussions}Discussion}
In this work, we have demonstrated how the quasi-nonlocal effect induced by long-range interactions in many-body systems selectively affects the multipartite entanglement of the system. By investigating different ground-state phases of the spin-1/2 Heisenberg Hamiltonian, we observed that multiparty entanglement can be enhanced or, counterintuitively, reduced as the range of interactions is increased. In particular, these opposing effects were observed for two distinct ground-state phases depending on whether the interaction in the $x$--$y$ plane was ferromagnetic or antiferromagnetic. While the global entanglement is expectedly boosted with more quasi-nonlocal effects in the FM regime, in contrast, long-range interactions {appear to} act detrimentally in the AFM case. {A possible reason for the unexpected behavior in the AFM regime, as we observed in our ground state analysis, is the entanglement-frustration arising from N{\'e}el and non-N{\'e}el terms, which appears to favor more global entanglement in the short-range limit. In the FM case, no transition occurs between the N{\'e}el and non-N{\'e}el regimes for long-range interactions, leading to higher entanglement.}
Interestingly, the observed dichotomy in these regimes was intriguingly different while considering the generation of multiparty entanglement through quenched dynamics of initially separable states. Here, long-range interactions allow for robust growth of global entanglement in the AFM regime, in contrast to the FM regime, where there is no perceptible advantage in using longer interactions in the quenched dynamics. Overall, our results clearly demonstrate that the system in the ferromagnetic interaction regime is more susceptible to allow significant global entanglement for both short- and long-range interactions.
Our findings provide significant insights for physical implementation of quantum protocols where multiparty entanglement is the necessary resource, such as measurement based computation or secure multiparty communication.
With recent technological breakthroughs in experimental atomic, molecular and optical physics, where the systems often contain tunable long-range interactions, it is essential to determine the optimal range of interactions that will allow for maximal global entanglement in these systems, which can then be harnessed in quantum protocols.
\acknowledgments
The authors thank R. Fazio for useful discussions at ICTP, Trieste. HSD acknowledges funding by the Austrian Science Fund (FWF), project no. M 2022-N27, under the Lise Meitner programme of the FWF.
\section{Introduction}
We investigate the $\tay$-time-periodic Stokes problem
\begin{align}\label{SH}
\begin{pdeq}
\partial_t u - \Delta u + \nabla p &= f && \text{in }\mathbb{R}\times\R^n_+, \\
\Div u &= g && \text{in }\mathbb{R}\times\R^n_+, \\
u &= h && \text{on }\mathbb{R}\times\partial\R^n_+, \\
u(\tay+t, \cdot) &= u(t, \cdot),
\end{pdeq}
\end{align}
in a half-space $\R^n_+$ of dimension $n\geq 2$. Here, $u\colon\mathbb{R}\times\R^n_+\to{\R^n}$ denotes the velocity field and $p\colon\mathbb{R}\times\R^n_+\to\mathbb{R}$ the pressure term. As customary in the formulation of time-periodic problems, the time-axis is taken to be the whole of $\mathbb{R}$. Variables in the time-space domain $\mathbb{R}\times\R^n_+$ are denoted by $(t,x)$.
The time-period $\tay>0$ shall remain fixed. Data
$f\colon\mathbb{R}\times\R^n_+\to{\R^n}$,
$g\colon\mathbb{R}\times\R^n_+\to\mathbb{R}$ and
$h\colon\mathbb{R}\times\partial\R^n_+\to{\R^n}$
that are also $\tay$-time-periodic are considered.
The time-periodic Stokes equations play a fundamental role in a wide range of problems in fluid mechanics. Although
comprehensive $\LR{p}$ estimates of maximal regularity type are available in the whole-space case \cite{KyedMaxReg14}, similar estimates in the more complicated half-space case were only established recently by \textsc{Maekawa} and \textsc{Sauer} \cite{MaekawaSauer17}. The analysis in \cite{MaekawaSauer17}, however, does not include inhomogeneous boundary data $h\neq 0$.
In the following, we shall establish maximal regularity $\LR{p}$ estimates that include the case $h\neq 0$.
Such estimates are crucial in a number of applications. For example, the classical approach to free boundary problems in fluid-structure interaction relies heavily on maximal regularity frameworks that include inhomogeneous boundary data. When time-periodic driving forces are studied in such settings,
time-periodic Stokes equations appear in the linearization.
The nature of the Stokes problem does not allow the treatment of inhomogeneous boundary data by a simple ``lifting'' argument.
Consequently, an extension of the results in \cite{MaekawaSauer17} to include the case $h\neq 0$ is by no means trivial.
In the following, we employ a different approach than the reflection type argument used in \cite{MaekawaSauer17}. Instead, we use the Fourier transform $\mathscr{F}_{{\mathbb T}\times\mathbb{R}^{n-1}}$ to reduce \eqref{SH} to an ordinary differential equation in the variable $x_n$. Here, ${\mathbb T}$ denotes the torus $\mathbb{R}/\tay\mathbb{Z}$. The $\LR{p}$ estimates are then established with arguments based on Fourier-multipliers and interpolation techniques. Although the main idea behind this approach is not new (it has been applied successfully by various authors to investigate the initial-value Stokes problem), a number of non-trivial modifications are needed to adapt the arguments to the time-periodic case. Notably, the system \eqref{SH} has to be decomposed into a steady-state part and a so-called purely oscillatory part. Without this decomposition, it seems impossible to establish optimal $\LR{p}$ estimates. Whereas the estimates for the resulting steady-state problem are well-known and can be found in contemporary literature, the estimates established in the following for the purely oscillatory part are new.
It is convenient to formulate $\tay$-time-periodic problems in a setting of function spaces where the torus ${\mathbb T}\coloneqq\mathbb{R}/\tay\mathbb{Z}$ is used as a time-axis.
Indeed, via lifting with the quotient map $\pi\colon\mathbb{R}\to{\mathbb T}$,
$\tay$-time-periodic functions are canonically identified as functions defined on ${\mathbb T}$ and vice versa.
For such functions, we introduce the simple decomposition
\begin{align}\label{intro_projections}
\calp u(x) \coloneqq \int_{\mathbb T} u(t,x)\,{\mathrm d}t = \frac{1}{\tay}\int_0^\tay u(t,x)\,{\mathrm d}t,\qquad
\calp_\bot u(t,x) \coloneqq u(t,x)-\calp u(x)
\end{align}
into a time-independent part $\calp u$, and a part $\calp_\bot u$ with vanishing time-average over the period. We shall refer to $\calp u$ as the \emph{steady-state} part, and to $\calp_\bot u$ as the \emph{purely oscillatory} part of $u$.
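As a simple example of the decomposition, if
\begin{align*}
u(t,x) = v(x) + w(x)\cos\left(\frac{2\pi}{\tay}t\right),
\end{align*}
then $\calp u = v$ and $\calp_\bot u = w\cos\left(\frac{2\pi}{\tay}t\right)$; the steady-state part is the time average over one period, while the purely oscillatory part carries all non-zero temporal Fourier modes.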
Equipped with the quotient topology, the time-space domain ${\mathbb T}\times\mathbb{R}^n$ is a locally compact
abelian group and therefore has a Fourier transform $\mathscr{F}_{{\mathbb T}\times\mathbb{R}^n}$ associated to it. Moreover, we may introduce the Schwartz-Bruhat space
$\mathscr{S}({\mathbb T}\times\mathbb{R}^n)$ and its dual space $\mathscr{S^\prime}({\mathbb T}\times\mathbb{R}^n)$ of tempered distributions.
Consequently, Bessel potential spaces with underlying time-space domain ${\mathbb T}\times\mathbb{R}^n$ can be defined as subspaces of $\mathscr{S^\prime}({\mathbb T}\times\mathbb{R}^n)$ in a completely standard manner. Sobolev spaces are introduced as Bessel potential spaces with integer exponents, and Sobolev-Slobodecki\u{\i} spaces, \textit{i.e.}, Sobolev spaces with non-integer exponents, via real interpolation.
The same scale of function spaces with respect to the half-space ${\halfspace}$ is obtained by restriction.
In a setting of these function spaces (see Section \ref{pre} for the precise definitions and Remark \ref{PerVsTorusFunctionSpacesRemark} below), the main theorem of this article can be formulated as follows:
\begin{thm}[Main Theorem]\label{MainThm}
Let $q\in(1,\infty)$ and $n\geq 2$. For all
\begin{align}
\begin{aligned}\label{MainThm_Data}
&f\in\LR{q}\bp{{\mathbb T}; \LR{q}(\R^n_+)}^n,\\
&g\in\LR{q}\bp{{\mathbb T}; \WSR{1}{q}(\R^n_+)}\cap\WSR{1}{q}\bp{{\mathbb T}; \WSRD{-1}{q}(\R^n_+)},\\
&h\in\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}^n\cap\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}^n
\end{aligned}
\end{align}
with
\begin{align}\label{MainThm_DataCompCond}
\begin{aligned}
&h_{n}\in\WSR{1}{q}({\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace))
\end{aligned}
\end{align}
there is a solution $(u,p)$ to \eqref{SH} with
\begin{align}
\begin{aligned}\label{MainThm_SolReg}
&\calp u\in\WSRD{2}{q}({\halfspace})^n,\\
&\calp_\bot u\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T};\LR{q}({\halfspace})}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2}{q}({\halfspace})}^n,\\
&p\in\LR{q}\bp{{\mathbb T};\WSRD{1}{q}({\halfspace})},
\end{aligned}
\end{align}
which satisfies
\begin{multline}\label{MainThm_ProjEst}
\norm{\nabla^2\calp u}_{\LR{q}({\halfspace})} + \norm{\nabla\calp p}_{\LR{q}({\halfspace})}\\
\leq \Ccn{C}
\bp{\norm{\calp f}_{\LR{q}({\halfspace})} + \norm{\calp g}_{\WSR{1}{q}({\halfspace})}+\norm{\calp h}_{\WSR{2-\frac{1}{q}}{q}(\wholespace)}}
\end{multline}
and
\begin{align}\label{MainThm_ProjComplEst}
\begin{aligned}
&\norm{\calp_\bot u}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\halfspace})}}
+ \norm{\nabla\calp_\bot p}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}} \\
&\qquad \leq \Ccn{C}\,
\bp{\norm{\calp_\bot f}_{\LR{q}\np{{\mathbb T};\LR{q}(\R^n_+)}}+\norm{\calp_\bot g}_{\LR{q}\np{{\mathbb T}; \WSR{1}{q}(\R^n_+)}\cap\WSR{1}{q}\np{{\mathbb T}; \WSRD{-1}{q}(\R^n_+)}}\\
&\qquad\qquad +\norm{\calp_\bot h}_{\WSR{1-\frac{1}{2q}}{q}\np{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\np{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}}
+ \norm{\calp_\bot h_n}_{\WSR{1}{q}\np{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}}\,
}
\end{aligned}
\end{align}
with $\Ccn{C}=\Ccn{C}\np{n,q,\tay}$.
If $(\tilde{u},\tilde{p})$ is another solution to \eqref{SH} in the class \eqref{MainThm_SolReg}, then
$\calp_\bot u=\calp_\bot\tilde{u}$, $\calp u=\calp\tilde{u}+(a_1x_n,\ldots,a_{n-1}x_{n},0)$ for some vector $a\in\mathbb{R}^{n-1}$, and $p=\tilde{p} + d(t)$
for some function $d$ that depends only on time.
\end{thm}
The two separated estimates \eqref{MainThm_ProjEst} and \eqref{MainThm_ProjComplEst} of different regularity type for the steady-state part $\calp u$ and
the {purely oscillatory} part $\calp_\bot u$ of the solution, respectively,
demonstrate the necessity of the decomposition. Observe that the purely oscillatory
part $\calp_\bot u$ of the solution is unique, whereas the steady-state part $\calp u$ is not. The projections $\calp$ and $\calp_\bot$
decompose the time-periodic Stokes problem \eqref{SH} into a classical steady-state Stokes problem with respect to data $(\calp f,\calp g,\calp h)$
and a time-periodic Stokes problem with respect to purely oscillatory data $(\calp_\bot f,\calp_\bot g,\calp_\bot h)$, respectively. The first estimate
\eqref{MainThm_ProjEst} is well-known for the former problem, whence the main objective in the following will be to establish existence of a unique solution to the latter that satisfies \eqref{MainThm_ProjComplEst}.
\begin{rem}\label{PerVsTorusFunctionSpacesRemark}
It is possible to avoid the analysis on the torus group ${\mathbb T}\coloneqq\mathbb{R}/\tay\mathbb{Z}$ and instead define the function spaces appearing in Theorem \ref{MainThm} as spaces of $\tay$-periodic functions on $\mathbb{R}$.
For a Banach space $E$, let
\begin{align*}
\CR{\infty}_{\mathrm{per}}\left(\mathbb{R}; E\right)\coloneqq \setc{f\in\CR \infty\left(\mathbb{R}; E\right)}{f(t+\tay, x) = f(t, x) }
\end{align*}
denote the space of smooth $\tay$-time-periodic $E$-valued functions. The Bochner-Lebesgue and Bochner-Sobolev spaces of time-periodic functions can then be introduced as
\begin{align*}
&\LRper{q}\left(\mathbb{R}; E\right)\coloneqq \closure{\CR{\infty}_{\mathrm{per}}\left(\mathbb{R}; E\right)}{\norm{\cdot}_{\LR{q}\left((0,\tay); E\right)}},\\
&\WSRper{k}{q}\left(\mathbb{R}; E\right) \coloneqq \closure{\CR{\infty}_{\mathrm{per}}\left(\mathbb{R}; E\right)}{\norm{\cdot}_{\WSR{k}{q}\left((0,\tay ); E\right)}}.
\end{align*}
Observe that the closures above are taken with respect to a time interval of period $\tay$.
Time-periodic Sobolev-Slobodecki\u{\i} spaces can then be defined as real interpolation spaces in the usual way.
Via the canonical quotient map $\pi\colon\mathbb{R}\to{\mathbb T}$, the spaces
$\CR{\infty}_{\mathrm{per}}\bp{\mathbb{R}; E}$ and $\CR \infty\bp{{\mathbb T}; E}$ are isometrically isomorphic with respect to
the norms $\norm{\cdot}_{\WSR{k}{q}\left((0,\tay ); E\right)}$ and $\norm{\cdot}_{\WSR{k}{q}\left({\mathbb T}; E\right)}$, respectively,
provided the Haar measure on ${\mathbb T}$ is normalized appropriately. It follows that
$\WSRper{s}{q}\left(\mathbb{R}; E\right)$ and $\WSR{s}{q}\left({\mathbb T}; E\right)$ are also isometrically isomorphic for all $s$.
In this manner, all the function spaces appearing in Theorem \ref{MainThm} have interpretations as $\tay$-time-periodic Bochner spaces.
\end{rem}
\section{Preliminaries}\label{pre}
The objective of this section is to formalize the reformulation of \eqref{SH} in a setting where the time axis is replaced with the torus group
${\mathbb T}\coloneqq\mathbb{R}/\tay\mathbb{Z}$. This includes definitions of the function spaces appearing in Theorem \ref{MainThm}.
\subsection{Topology, differentiable structure and Fourier transform}
We utilize ${{\mathbb T}\times{\R^n}}$ as a time-space domain. Equipped with the quotient topology induced by the quotient mapping
\begin{align*}
\pi :\mathbb{R}\times\mathbb{R}^n\to{{\mathbb T}\times{\R^n}}, \ \pi\left(t, x\right) \coloneqq \left(\left[t\right], x\right),
\end{align*}
${{\mathbb T}\times{\R^n}}$ becomes a locally compact abelian group. We can identify ${{\mathbb T}\times{\R^n}}$ with the domain $\left[0,\tay\right)\times\mathbb{R}^n$ via the restriction $\pi\big|_{\left[0, \tay\right)\times{\R^n}}$. The Haar measure ${\mathrm d}g$ on ${{\mathbb T}\times{\R^n}}$ is the product of the Lebesgue measure on $\mathbb{R}^n$ and the Lebesgue measure on $\left[0,\tay\right)$. We normalize ${\mathrm d}g$ so that
\begin{align*}
\int_{{\mathbb T}\times{\R^n}} u(g)\,{\mathrm d}g = \frac{1}{\tay}\int_0^\tay\int_{{\R^n}} u(t, x)\,{\mathrm d}x{\mathrm d}t.
\end{align*}
There is a bijective correspondence between points $(k, \xi)\in{\frac{2\pi}{\tay}\Z\times{\R^n}}$ and characters
$\chi: {{\mathbb T}\times{\R^n}}\to\mathbb{C}$, $\chi\left(t, x\right)\coloneqq e^{ix\cdot\xi + ikt}$ on ${{\mathbb T}\times{\R^n}}$. Consequently, we can identify the dual group of
${{\mathbb T}\times{\R^n}}$ with ${\frac{2\pi}{\tay}\Z\times{\R^n}}$. The
compact-open topology on ${\frac{2\pi}{\tay}\Z\times{\R^n}}$ reduces to the product of the Euclidean topology on ${\R^n}$ and the discrete topology on ${\frac{2\pi}{\tay}\Z}$. The Haar measure on ${\frac{2\pi}{\tay}\Z\times{\R^n}}$ is therefore the product of the counting measure on ${\frac{2\pi}{\tay}\Z}$ and the Lebesgue measure on ${\R^n}$.
The spaces of smooth functions on ${{\mathbb T}\times{\R^n}}$ and ${\frac{2\pi}{\tay}\Z\times{\R^n}}$ are defined as
\begin{align}\label{DefOfSmoothFunctionsOnGrp}
\CR \infty({{\mathbb T}\times{\R^n}}) \coloneqq \setc{u:{{\mathbb T}\times{\R^n}}\rightarrow\mathbb{R}}{\exists U\in\CR \infty\bp{\mathbb{R}\times{\R^n}}:\ U=u\circ\pi}
\end{align}
and
\begin{align*}
\CR \infty\Bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}} \coloneqq \setcL{u\in\CR{}\Bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}}}{\forall k\in{\frac{2\pi}{\tay}\Z}: u(k, \cdot)\in\CR \infty({\R^n})},
\end{align*}
respectively.
Derivatives of a function $u\in\CR \infty({{\mathbb T}\times{\R^n}})$ are defined by
\begin{align*}
\partial_t^\beta\partial_x^\alpha u \coloneqq \left[\partial_t^\beta\partial_x^\alpha\left(u\circ\pi\right)\right]\circ\Pi^{-1},
\end{align*}
with $\Pi \coloneqq \pi\big|_{\left[0,\tay\right)\times{\R^n}}$.
The notion of Schwartz spaces can be extended to locally compact abelian groups (see \cite{Bruhat} and \cite{kyedeiter_PIFBook}).
The so-called Schwartz-Bruhat space on ${{\mathbb T}\times{\R^n}}$ is given by
\begin{align*}
\mathscr{S}\np{{{\mathbb T}\times{\R^n}}} \coloneqq \setc{u\in\CR \infty({{\mathbb T}\times{\R^n}})}{\forall (\alpha, \beta, \gamma)\in\mathbb{N}_0^n\times\mathbb{N}_0\times\mathbb{N}_0^n: \ \rho_{\alpha, \beta, \gamma}(u)<\infty},
\end{align*}
where
\begin{align*}
\rho_{\alpha, \beta, \gamma}(u) \coloneqq \sup_{(t,x)\in{{\mathbb T}\times{\R^n}}}{\left|x^\gamma\partial_t^\beta\partial_x^\alpha u(t,x)\right|}.
\end{align*}
Equipped with the semi-norm topology of the family ${\{\rho_{\alpha, \beta, \gamma} | \left(\alpha, \beta, \gamma\right)\in\mathbb{N}_0^n\times\mathbb{N}_0\times\mathbb{N}_0^n \}}$, $\mathscr{S}({{\mathbb T}\times{\R^n}})$ becomes a topological vector space. The corresponding topological dual space $\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})$ equipped with the weak* topology is referred to as the space of tempered distributions on ${{\mathbb T}\times{\R^n}}$. Distributional derivatives for a tempered distribution $u$ are defined by duality as in the classical case.
The Schwartz-Bruhat space on ${\frac{2\pi}{\tay}\Z\times{\R^n}}$ is
\begin{multline*}
\mathscr{S}\Bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}} \\\coloneqq \setcL{u\in\CR \infty\Bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}}}{\forall (\alpha, \beta, \gamma)\in\mathbb{N}_0^n\times\mathbb{N}_0^n\times\mathbb{N}_0: \ \hat{\rho}_{\alpha, \beta, \gamma}(u)<\infty},
\end{multline*}
with the generic semi-norms
\begin{align*}
\hat{\rho}_{\alpha, \beta, \gamma}(u) \coloneqq \sup_{(k,\xi)\in{\frac{2\pi}{\tay}\Z\times{\R^n}}}{\left|\xi^\alpha\partial_\xi^\beta k^\gamma u(k,\xi)\right|}
\end{align*}
inducing the topology.
By $\mathscr{F}_{{\mathbb T}\times{\R^n}}$ we denote the Fourier transform associated to the locally compact abelian group ${{\mathbb T}\times{\R^n}}$ equipped with the Haar measure introduced above:
\begin{align*}
&\mathscr{F}_{{\mathbb T}\times{\R^n}}:\mathscr{S}\np{{{\mathbb T}\times{\R^n}}}\rightarrow\mathscr{S}\bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}},\\
&\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{u}(k,\xi)\coloneqq \ft{u}(k,\xi) \coloneqq \frac{1}{\tay}\int_0^\tay\int_{{\R^n}} u(t,x)\,\e^{-ix\cdot\xi-ik t}\,{\mathrm d}x{\mathrm d}t.
\end{align*}
Recall that $\mathscr{F}_{{\mathbb T}\times{\R^n}}:\mathscr{S}({{\mathbb T}\times{\R^n}})\rightarrow\mathscr{S}({\frac{2\pi}{\tay}\Z\times{\R^n}})$ is a homeomorphism with inverse given by
\begin{align*}
&\mathscr{F}^{-1}_{{\mathbb T}\times{\R^n}}:\mathscr{S}\bp{{\frac{2\pi}{\tay}\Z\times{\R^n}}}\rightarrow\mathscr{S}\np{{{\mathbb T}\times{\R^n}}},\\
&\mathscr{F}^{-1}_{{\mathbb T}\times{\R^n}}\nb{w}(t,x)\coloneqq \sum_{k\in{\frac{2\pi}{\tay}\Z}}\,\int_{{\R^n}} w(k,\xi)\,\e^{ix\cdot\xi+ik t}\,{\mathrm d}\xi,
\end{align*}
provided the Lebesgue measure ${\mathrm d}\xi$ is normalized appropriately.
By duality, $\mathscr{F}_{{\mathbb T}\times{\R^n}}$ extends to a homeomorphism $\mathscr{F}_{{\mathbb T}\times{\R^n}}:\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})\rightarrow\mathscr{S^\prime}({\frac{2\pi}{\tay}\Z\times{\R^n}})$.
The Fourier symbol, with respect to $\mathscr{F}_{{\mathbb T}\times{\R^n}}$, of the projection $\calp$ introduced in \eqref{intro_projections} is the delta distribution $\delta_{\frac{2\pi}{\tay}\Z}$, \textit{i.e.}, the
function $\delta_{\frac{2\pi}{\tay}\Z}:{\frac{2\pi}{\tay}\Z}\rightarrow\mathbb{C}$ with $\delta_{\frac{2\pi}{\tay}\Z}(0):=1$ and $\delta_{\frac{2\pi}{\tay}\Z}(k):=0$ for $k\neq 0$. Via the symbol, the projections $\calp$ and $\calp_\bot$ extend to projections on $\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}}$:
\begin{align}\label{SymbolsOfProjections}
\begin{aligned}
&\calp:\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}}\rightarrow\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}},\quad \calp u := \mathscr{F}^{-1}_{{\mathbb T}\times{\R^n}}\bb{\delta_{\frac{2\pi}{\tay}\Z}\,\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{u}},\\
&\calp_\bot:\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}}\rightarrow\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}},\quad \calp_\bot u := \mathscr{F}^{-1}_{{\mathbb T}\times{\R^n}}\bb{\np{1-\delta_{\frac{2\pi}{\tay}\Z}}\,\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{u}}.
\end{aligned}
\end{align}
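Observe that this symbol calculus reproduces the decomposition of a time-periodic function into its steady and purely oscillatory parts: since $\delta_{\frac{2\pi}{\tay}\Z}$ selects precisely the zeroth temporal Fourier mode, one has, for, say, continuous $u$,
\begin{align*}
\calp u(t,x) = \frac{1}{\tay}\int_0^\tay u(s,x)\,{\mathrm d}s,\qquad
\calp_\bot u(t,x) = u(t,x) - \frac{1}{\tay}\int_0^\tay u(s,x)\,{\mathrm d}s,
\end{align*}
so that $\calp u$ is independent of time and $\calp_\bot u$ has vanishing time average.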
At this point, we have introduced ample formalism to reformulate \eqref{SH} equivalently as a system of partial differential equations in the time-space domain ${{\mathbb T}\times{\R^n}}$. Moreover, the Fourier transform $\mathscr{F}_{{\mathbb T}\times{\R^n}}$ enables us to investigate these systems in terms of Fourier-multipliers.
Due to the lack of a comprehensive $\LR{q}$-multiplier theory in the general group setting, we shall utilize a so-called
Transference Principle for this purpose.
The Transference Principle goes back to \textsc{De Leeuw} \cite{Leeuw1965}. The version below is due to \textsc{Edwards} and \textsc{Gaudry} \cite{EdwardsGaudry}.
\begin{thm}[\textsc{De Leeuw}, \textsc{Edwards} and \textsc{Gaudry}]\label{transference}
Let $G$ and $H$ be locally compact abelian groups. Moreover, let
$\Phi:\widehat{G}\rightarrow\widehat{H}$ be a continuous homomorphism and $q\in [1,\infty]$. Assume that $m\in\LR{\infty}(\widehat{H};\mathbb{C})$ is a continuous $\LR{q}$-multiplier, i.e., there is a constant $C$ such that
\begin{align*}
\forall f\in\LR{2}\left(H\right)\cap\LR{q}\left(H\right):\quad \norm{\mathscr{F}^{-1}_H\nb{m\, \mathscr{F}_H\nb{f}}}_q\leq C\norm{f}_q.
\end{align*}
Then $m\circ\Phi\in\LR{\infty}(\widehat{G}; \mathbb{C})$ is also an $\LR{q}$-multiplier with
\begin{align*}
\forall f\in\LR{2}\left(G\right)\cap\LR{q}\left(G\right):\quad \norm{\mathscr{F}^{-1}_G\nb{m\circ\Phi\, \mathscr{F}_G\nb{f}}}_q\leq C\norm{f}_q.
\end{align*}
\end{thm}
\begin{proof}
See \cite[Theorem B.2.1]{EdwardsGaudry}.
\end{proof}
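In this article, the Transference Principle will be applied in one situation only: $G={\mathbb T}\times\mathbb{R}^d$ and $H=\mathbb{R}\times\mathbb{R}^d$ (with $d=n$ or $d=n-1$), in which case $\widehat{G}={\frac{2\pi}{\tay}\Z}\times\mathbb{R}^d$ and $\widehat{H}=\mathbb{R}\times\mathbb{R}^d$, and
\begin{align*}
\Phi:{\frac{2\pi}{\tay}\Z}\times\mathbb{R}^d\rightarrow\mathbb{R}\times\mathbb{R}^d,\quad \Phi(k,\xi)\coloneqq(k,\xi)
\end{align*}
is the canonical inclusion, so that $m\circ\Phi$ is simply the restriction of $m$ to ${\frac{2\pi}{\tay}\Z}\times\mathbb{R}^d$. Consequently, every continuous $\LR{q}\np{\mathbb{R};\LR{q}\np{\mathbb{R}^d}}$-multiplier identified by means of Marcinkiewicz's multiplier theorem restricts to an $\LR{q}\np{{\mathbb T};\LR{q}\np{\mathbb{R}^d}}$-multiplier.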
\subsection{Function spaces}\label{FunktionSpacesSection}
Since our proof of the main theorem is based on Fourier-multipliers and interpolation theory, we find it convenient to
introduce all the relevant Sobolev spaces via Bessel-Potential spaces.
Classical (inhomogeneous) Bessel-Potential spaces
are defined for $s\in\mathbb{R}$ and $q\in[1,\infty)$ by
\begin{align*}
&\HSR{s}{q}\np{\mathbb{R}^n} \coloneqq \setcl{u\in\mathscr{S^\prime}\np{{\R^n}}}{\mathscr{F}^{-1}_{{\R^n}}\bb{(1+\snorm{\xi}^2)^\frac{s}{2}\mathscr{F}_{{\R^n}}\nb{u}}\in\LR{q}\np{{\R^n}}},\\
&\norm{u}_{\HSR{s}{q}\np{\mathbb{R}^n}} \coloneqq \norm{u}_{s,q}\coloneqq\norm{\mathscr{F}^{-1}_{{\R^n}}\bb{(1+\snorm{\xi}^2)^\frac{s}{2}\mathscr{F}_{{\R^n}}\nb{u}}}_q.
\end{align*}
Classical Sobolev spaces on ${\R^n}$ are defined as Bessel-Potential spaces of integer order $k\in\mathbb{Z}$, and Sobolev spaces on
the half-space $\R^n_+$ via restriction:
\begin{align*}
\WSR{k}{q}\np{\mathbb{R}^n}\coloneqq\HSR{k}{q}\np{{\R^n}},\qquad\WSR{k}{q}\np{\R^n_+}\coloneqq\setcl{u_{|\R^n_+}}{u\in\WSR{k}{q}\np{{\R^n}}}.
\end{align*}
Observe that for negative-order spaces, \textit{i.e.}, when $k<0$, the Sobolev space $\WSR{k}{q}\np{\R^n_+}$ coincides with the dual space $\bp{\WSR{-k}{q'}\np{\R^n_+}}'$ and \emph{not} with the dual space $\np{\WSRN{-k}{q'}\np{\R^n_+}}'$.
In the following, it is essential that the former meaning of $\WSR{k}{q}\np{\R^n_+}$ is used.
Homogeneous Bessel-Potential spaces are defined in accordance with \cite{TriebelTheoryFunctionSpaces}
by introducing the subspace
\begin{align*}
Z\np{{\R^n}}\coloneqq\setcl{\phi\in\mathscr{S}\np{{\R^n}}}{\forall\alpha\in\mathbb{N}_0^n:\ \partial_x^\alpha\ft{\phi}\np{0}=0}
\end{align*}
of $\mathscr{S}\np{{\R^n}}$,
and for $s\in\mathbb{R}$ and $q\in[1,\infty)$ letting
\begin{align*}
&\HSRD{s}{q}\np{{\R^n}}\coloneqq\setcl{u\in\SRh^\prime\np{{\R^n}}}{\mathscr{F}^{-1}_{{\R^n}}\bb{\snorm{\xi}^s \mathscr{F}_{{\R^n}}\nb{u}}\in\LR{q}\np{{\R^n}}},\\
&\norm{u}_{\HSRD{s}{q}\np{{\R^n}}}\coloneqq\norm{\mathscr{F}^{-1}_{{\R^n}}\bb{\snorm{\xi}^s \mathscr{F}_{{\R^n}}\nb{u}}}_q.
\end{align*}
Due to the lack of regularity of $\snorm{\xi}^s$ at the origin, the above definition of $\HSRD{s}{q}\np{{\R^n}}$
is not meaningful as a subspace of $\mathscr{S^\prime}\np{{\R^n}}$. Instead, $\HSRD{s}{q}\np{{\R^n}}$ is defined as a subspace of $\SRh^\prime\np{{\R^n}}$.
As such, $\HSRD{s}{q}\np{{\R^n}}$ is clearly a Banach space.
As above, we define homogeneous Sobolev spaces on ${\R^n}$ as homogeneous Bessel-Potential spaces of integer order $k\in\mathbb{Z}$, and introduce homogeneous Sobolev spaces on
the half-space $\R^n_+$ via restriction:
\begin{align*}
\WSRD{k}{q}\np{\mathbb{R}^n}\coloneqq\HSRD{k}{q}\np{{\R^n}},\qquad\WSRD{k}{q}\np{\R^n_+}\coloneqq\setcl{u_{|\R^n_+}}{u\in\WSRD{k}{q}\np{{\R^n}}}.
\end{align*}
By the Hahn-Banach Theorem, any functional in $\SRh^\prime\np{{\R^n}}$ can be extended to a tempered distribution in $\mathscr{S^\prime}\np{{\R^n}}$.
If $s\leq 0$, the extension of an element in $\HSRD{s}{q}\np{{\R^n}}$ to $\mathscr{S^\prime}\np{{\R^n}}$ is unique.
In the case $s>0$, one may verify that two extensions of an element in $\HSRD{s}{q}\np{{\R^n}}$ differ at most by addition of a polynomial of order strictly less than $s$. With this ambiguity in mind, one may consider $\HSRD{s}{q}\np{{\R^n}}$ as a normed ($s\leq 0$) and semi-normed ($s> 0$) subspace of $\mathscr{S^\prime}\np{{\R^n}}$.
Sobolev-Slobodecki\u{\i} spaces of both homogeneous and inhomogeneous type are defined via real interpolation in the usual way. For example, the spaces appearing in Theorem \ref{MainThm} are defined by
\begin{align*}
&\WSR{2-\frac{1}{q}}{q}(\wholespace)\coloneqq\bp{\LR{q}\np{\wholespace},\WSR{2}{q}\np\wholespace}_{1-\frac{1}{2q},q},\\
&\WSRD{-\frac{1}{q}}{q}(\wholespace)\coloneqq\bp{\WSRD{-1}{q}\np{\wholespace},\LR{q}\np\wholespace}_{1-\frac{1}{q},q}
\end{align*}
and equipped with the associated interpolation norms.
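Here the interpolation exponents are dictated by the desired orders of differentiability; indeed, with $\theta=1-\frac{1}{2q}$ and $\theta=1-\frac{1}{q}$, respectively, the orders combine as
\begin{align*}
2\Bp{1-\frac{1}{2q}} = 2-\frac{1}{q},\qquad -1+\Bp{1-\frac{1}{q}} = -\frac{1}{q}.
\end{align*}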
Analogously, Bessel-Potential spaces with underlying time-space domain ${{\mathbb T}\times{\R^n}}$ are introduced via the Fourier transform $\mathscr{F}_{{{\mathbb T}\times{\R^n}}}$ as
\begin{align*}
&\HSR{r}{q}\bp{{\mathbb T};\HSR{s}{q}\np{\R^n}}\coloneqq\\
&\qquad\setcl{u\in\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}}}{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\np{1+\snorm{k}^2}^{\frac{r}{2}}(1+\snorm{\xi}^2)^\frac{s}{2}\mathscr{F}_{{{\mathbb T}\times{\R^n}}}\nb{u}}\in\LR{q}\bp{{\mathbb T}; \LR{q}\np{\R^n}}}
\end{align*}
equipped with the canonical norm. Again, we refer to Bessel-Potential spaces of integer order $k,l\in\mathbb{N}$ as Sobolev spaces:
\begin{align*}
\WSR{k}{q}\bp{{\mathbb T};\WSR{l}{q}\np{\R^n}}\coloneqq\HSR{k}{q}\bp{{\mathbb T};\HSR{l}{q}\np{\R^n}}.
\end{align*}
Sobolev spaces on the time-space domain ${\mathbb T}\times{\halfspace}$ are defined via restriction of the elements in the spaces above.
In order to introduce homogeneous spaces, we let
\begin{align*}
Z\np{{{\mathbb T}\times{\R^n}}}\coloneqq\setcl{\phi\in\mathscr{S}\np{{{\mathbb T}\times{\R^n}}}}{\forall\alpha\in\mathbb{N}_0^n:\ \partial_x^\alpha\mathscr{F}_{\mathbb{R}^n}\nb{\phi}\np{t,0}=0}
\end{align*}
and put
\begin{align*}
&\HSR{r}{q}\bp{{\mathbb T};\HSRD{s}{q}\np{\R^n}}\coloneqq\\
&\qquad\setcl{u\in\SRh^\prime\np{{{\mathbb T}\times{\R^n}}}}{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\np{1+\snorm{k}^2}^{\frac{r}{2}}\snorm{\xi}^s\mathscr{F}_{{{\mathbb T}\times{\R^n}}}\nb{u}}\in\LR{q}\bp{{\mathbb T}; \LR{q}\np{\R^n}}}.
\end{align*}
As above, we may consider $\HSR{r}{q}\bp{{\mathbb T};\HSRD{s}{q}\np{\R^n}}$ as a subspace of $\mathscr{S^\prime}\np{{{\mathbb T}\times{\R^n}}}$ by extension.
Finally, Sobolev-Slobodecki\u{\i} spaces on the domain ${{\mathbb T}\times{\R^n}}$ are defined via real interpolation. For example,
\begin{align*}
\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}
\coloneqq \bp{\LR{q}\bp{{\mathbb T};\LR{q}\np\wholespace},\WSR{1}{q}\bp{{\mathbb T};\LR{q}\np\wholespace}}_{1-\frac{1}{2q},q}.
\end{align*}
In this way, all the function spaces appearing in Theorem \ref{MainThm} attain rigorous definitions.
It is easy to verify that these definitions coincide with a classical interpretation as Bochner spaces of vector-valued functions defined on the torus
${\mathbb T}$.
\subsection{Interpolation}
Although the function spaces appearing in Theorem \ref{MainThm} can all be defined in terms of classical interpolation, our proof of the theorem
relies on a somewhat more refined scale of interpolation spaces. More specifically, it is based on anisotropic Besov spaces with underlying time-space domain ${{\mathbb T}\times{\R^n}}$, which we shall show coincide with the function spaces obtained by real interpolation of the Bessel-Potential spaces introduced above.
Although this task is mainly technical and does not require any significantly new ideas (indeed, we shall mimic the proofs of similar results for classical isotropic Besov spaces), these spaces and their interpolation properties are not part of the contemporary literature, and we shall therefore carry out the
identification here (even in slightly more generality than actually needed for the proof of the main theorem).
To this end, we fix an $m\in\mathbb{N}$ and introduce the parabolic length scale
\begin{align}\label{BS_LengthScale}
\parnorm{\eta}{\xi}\coloneqq(|\eta|^2+|\xi|^{4m})^{\frac{1}{4m}}\quad {\text{for } (\eta,\xi)\in\mathbb{R}\times\mathbb{R}^n}.
\end{align}
The anisotropic Besov spaces defined below pertain to time-periodic parabolic problems of order $2m$. In our analysis of the Stokes problem, we thus put
$m=1$. For simplicity, we omit $m$ in the notation for the function spaces below.
The anisotropic Besov spaces shall be based on the following anisotropic partition of unity:
\begin{lem}\label{BS_PartitionOfUnity}
Let $m\in\mathbb{N}$ and $\parnorm{\eta}{\xi}$ be given by \eqref{BS_LengthScale}. There is a $\phi\in\CR \infty_0\np{\mathbb{R}\times\mathbb{R}^n}$ satisfying
\begin{align}
&\supp\phi = \setc{(\eta,\xi)}{2^{-1}\leq \parnorm{\eta}{\xi}\leq 2} \label{BS_PartitionOfUnity_Support},\\
&\phi(\eta,\xi)>0 \quad\text{for}\quad 2^{-1}<\parnorm{\eta}{\xi}<2,\label{BS_PartitionOfUnity_Positivity}\\
&\sum_{l=-\infty}^\infty \phi(2^{-2ml}\eta,2^{-l}\xi)=1\quad \text{for}\quad \parnorm{\eta}{\xi}\neq 0.\label{BS_PartitionOfUnity_PartUnity}
\end{align}
\end{lem}
\begin{proof}
Let $h\in\CR \infty(\mathbb{R})$ with $\supp h=\setc{y\in\mathbb{R}}{2^{-1}\leq\snorm{y}\leq2}$
and $h(y)>0$ for $2^{-1}<\snorm{y}<2$. Then $f:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{R},\ f(\eta,\xi)\coloneqq h\bp{\parnorm{\eta}{\xi}}$ satisfies
\eqref{BS_PartitionOfUnity_Support} and \eqref{BS_PartitionOfUnity_Positivity}. Moreover, since $\parnorm{2^{-2ml}\eta}{2^{-l}\xi}=2^{-l}\parnorm{\eta}{\xi}$, we have $f(2^{-2ml}\eta,2^{-l}\xi)\neq 0$ if and only if
$2^{l-1}<\parnorm{\eta}{\xi}<2^{l+1}$. Thus, for fixed $(\eta,\xi)$ with $\parnorm{\eta}{\xi}\neq 0$, we have $f(2^{-2ml}\eta,2^{-l}\xi)\neq 0$ for at least one and at most two $l\in\mathbb{Z}$. Consequently,
\begin{align*}
\phi:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{R},\quad \phi(\eta,\xi)\coloneqq
\begin{pdeq}
&\frac{f(\eta,\xi)}{\sum_{l=-\infty}^\infty f(2^{-2ml}\eta,2^{-l}\xi)} &&\text{if }\parnorm{\eta}{\xi}\neq 0,\\
&0 && \text{if }\parnorm{\eta}{\xi}=0
\end{pdeq}
\end{align*}
is well-defined. It is easy to verify that $\phi$ satisfies \eqref{BS_PartitionOfUnity_Support}--\eqref{BS_PartitionOfUnity_PartUnity}.
\end{proof}
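A function $h$ with the properties required in the proof above can be written down explicitly; for instance, one may take
\begin{align*}
h(y)\coloneqq
\begin{cases}
\exp\Bp{-\bp{(y^2-\tfrac{1}{4})(4-y^2)}^{-1}} & \text{if } 2^{-1}<\snorm{y}<2,\\
0 & \text{otherwise},
\end{cases}
\end{align*}
which is smooth since all derivatives of $t\mapsto\e^{-\frac{1}{t}}$ vanish as $t\rightarrow0^+$.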
\begin{defn}[Anisotropic Besov and Bessel-Potential Spaces]\label{BS_DefOfBesovSpace}
Let $\phi\in\CR \infty_0\np{\mathbb{R}\times\mathbb{R}^n}$ be as in Lemma \ref{BS_PartitionOfUnity}, $s\in\mathbb{R}$ and $p,q\in[1,\infty)$. We define anisotropic Besov spaces
\begin{align}\label{BS_DefOfBesovSpace_Defn}
\begin{aligned}
&\BSRcompl{s}{pq}\np{{{\mathbb T}\times{\R^n}}}\coloneqq\setc{f\in\calp_\bot\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})}{\norm{f}_{\BSRcompl{s}{pq}}<\infty},\\
&\norm{f}_{\BSRcompl{s}{pq}} \coloneqq \Bp{\sum_{l=0}^\infty \bp{2^{sl} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_p}^q}^{\frac{1}{q}},
\end{aligned}
\end{align}
and anisotropic Bessel-Potential spaces
\begin{align}\label{BS_DefOfBesovSpace_DefnBessel}
\begin{aligned}
&\ABPSRcompl{s}{p}\np{{{\mathbb T}\times{\R^n}}}\coloneqq\setc{f\in\calp_\bot\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})}{\norm{f}_{\ABPSRcompl{s}{p}}<\infty},\\
&\norm{f}_{\ABPSRcompl{s}{p}} \coloneqq
\norm {\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\parnorm{k}{\xi}^s\mathscr{F}_{{{\mathbb T}\times{\R^n}}}\nb{f}}}_p.
\end{aligned}
\end{align}
\end{defn}
Observe that $\BSRcompl{s}{pq}\np{{{\mathbb T}\times{\R^n}}}$ and $\ABPSRcompl{s}{p}\np{{{\mathbb T}\times{\R^n}}}$ are defined as subspaces of the purely oscillatory distributions $\calp_\bot\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})$ rather than
$\mathscr{S^\prime}({{\mathbb T}\times{\R^n}})$. Recalling \eqref{SymbolsOfProjections}, it is easy to verify that $\norm{\cdot}_{\BSRcompl{s}{pq}}$
and $\norm{\cdot}_{\ABPSRcompl{s}{p}}$ are therefore norms (rather than mere semi-norms), and that $\BSRcompl{s}{pq}\np{{{\mathbb T}\times{\R^n}}}$ and $\ABPSRcompl{s}{p}\np{{{\mathbb T}\times{\R^n}}}$
are Banach spaces.
As in the case of classical (isotropic) spaces, real interpolation of anisotropic Bessel-potential spaces yields anisotropic Besov spaces:
\begin{lem}\label{BS_InterpolationLem}
Let $p,q\in(1,\infty)$, $\theta\in(0,1)$, $s_0,s_1\in\mathbb{R}$ and $s\coloneqq(1-\theta)s_0+\theta s_1$.
If $s_0\neq s_1$, then
$\bp{\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}},\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}}_{\theta,q}=\BSRcompl{s}{pq}\np{{\mathbb T}\times{\R^n}}$ with equivalent norms.
\end{lem}
\begin{proof}
For $l\in\mathbb{N}_0$ and $r\in\mathbb{R}$ let
\begin{align*}
\mathfrak{m}^r_l:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{C},\quad\mathfrak{m}^r_l(\eta,\xi)\coloneqq \phi\bp{2^{-2ml}\eta,2^{-l}\xi}\parnorm{\eta}{\xi}^{-r}.
\end{align*}
We claim that $\mathfrak{m}^r_l$ is an $\LR{p}\np{\mathbb{R}; \LR{p}\np{\R^n}}$-multiplier, which we verify by showing that $\mathfrak{m}^r_l$ meets the condition
of Marcinkiewicz's multiplier theorem (see for example \cite[Corollary 6.2.5]{Grafakos}).
For this purpose, we utilize only
\begin{align}
&\supp\phi\bp{2^{-2ml}\cdot,2^{-l}\cdot}=\setc{(\eta,\xi)\in\mathbb{R}\times\mathbb{R}^n}{2^{l-1}\leq\parnorm{\eta}{\xi}\leq2^{l+1}},\label{BS_InterpolationThm_SuppProperty}
\end{align}
and that $g(\eta,\xi)\coloneqq\parnorm{\eta}{\xi}^{-r}$ is parabolically $\np{-r}$-homogeneous, that is,
\begin{align}
&\forall\lambda>0:\quad g(\eta,\xi)=\lambda^{-r} g(\lambda^{-2m}\eta,\lambda^{-1}\xi).\label{BS_InterpolationThm_HomoProperty}
\end{align}
From \eqref{BS_InterpolationThm_SuppProperty} we immediately obtain
$\norm{\mathfrak{m}^r_l}_\infty \leq \Ccn{C} \norm{\phi}_\infty 2^{-lr}$,
with $\Ccnlast{C}$ independent of $l$. By \eqref{BS_InterpolationThm_HomoProperty}, we further observe that
\begin{align*}
\eta\,\partial_\eta\mathfrak{m}^r_l(\eta,\xi) &= 2^{-2ml}\eta\, \partial_\eta\phi\bp{2^{-2ml}\eta,2^{-l}\xi}\, g(\eta,\xi) \\
&\quad + \phi\bp{2^{-2ml}\eta,2^{-l}\xi}\,\lambda^{-r}\, \partial_\eta g(\lambda^{-2m}\eta,\lambda^{-1}\xi)\,\lambda^{-2m}\eta.
\end{align*}
Choosing $\lambda\coloneqq\parnorm{\eta}{\xi}$ and recalling \eqref{BS_InterpolationThm_SuppProperty}, we thus deduce
$\norm{\eta\,\partial_\eta\mathfrak{m}^r_l}_\infty \leq \Ccn{C} \norm{\phi}_\infty 2^{-lr}$,
with $\Ccnlast{C}$ independent of $l$. Similarly, we obtain
\begin{align*}
\sum_{\alpha\in\set{0,1}^{n+1}}
\norm{\xi_1^{\alpha_1}\cdots\xi_n^{\alpha_n}\eta^{\alpha_{n+1}}
\partial_{\xi_1}^{\alpha_1}\cdots\partial_{\xi_n}^{\alpha_n}\partial_{\eta}^{\alpha_{n+1}} \mathfrak{m}^r_l}_\infty\leq \Ccn{C} \norm{\phi}_\infty 2^{-lr}
\end{align*}
with $\Ccnlast{C}$ independent of $l$.
It follows from Marcinkiewicz's multiplier theorem that $\mathfrak{m}^r_l$ is an $\LR{p}(\mathbb{R}; \LR{p}(\mathbb{R}^n))$-multiplier.
Consequently, the Transference Principle (Theorem \ref{transference}) implies that $\multrestriction{{\mathfrak{m}^r_l}}{{\frac{2\pi}{\tay}\Z}\times\mathbb{R}^n}$
is an $\LR{p}({\mathbb T}; \LR{p}({\R^n}))$-multiplier with
\begin{align*}
\normL{\phi\mapsto\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\mathfrak{m}^r_l(k,\xi)\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{\phi}}}_{\mathscr{L}\np{\LR{p}\np{{\mathbb T}; \LR{p}\np{\R^n}},\LR{p}\np{{\mathbb T}; \LR{p}\np{\R^n}}}}<\Ccn{C} \norm{\phi}_\infty 2^{-lr}.
\end{align*}
Let $f\in\bp{\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}},\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}}_{\theta,q}$. Consider a decomposition $f=f_0+f_1$ with
$f_0\in\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}}$ and $f_1\in\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}$. We deduce
\begin{align*}
&\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_p \\
&\quad\leq \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\mathfrak{m}^{s_0}_l\mathscr{F}_{{\mathbb T}\times{\R^n}}\bb{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\parnorm{k}{\xi}^{s_0}\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f_0}}}}}_p\\
&\quad\quad + \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\mathfrak{m}^{s_1}_l\mathscr{F}_{{\mathbb T}\times{\R^n}}\bb{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\parnorm{k}{\xi}^{s_1}\mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f_1}}}}}_p\\
&\quad\leq \Ccn{C}\bp{2^{-ls_0}\norm{f_0}_{\ABPSRcompl{s_0}{p}} + 2^{-ls_1}\norm{f_1}_{\ABPSRcompl{s_1}{p}} }\\
&\quad\leq \Ccn{C}2^{-ls_0}\bp{\norm{f_0}_{\ABPSRcompl{s_0}{p}} + 2^{l(s_0-s_1)}\norm{f_1}_{\ABPSRcompl{s_1}{p}} }.
\end{align*}
We now employ the $K$-method (see for example \cite[Chapter 3.1]{BL76})
to characterize the interpolation space $\bp{\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}},\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}}_{\theta,q}$. Taking the infimum over all decompositions $f=f_0+f_1$ in the inequality above, we find that
\begin{align*}
\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_p \leq \Ccn{C}\, 2^{-ls_0}\,K\bp{2^{l(s_0-s_1)},f,\ABPSRcompl{s_0}{p},\ABPSRcompl{s_1}{p}},
\end{align*}
which implies
\begin{align*}
\norm{f}_{\BSRcompl{s}{pq}} \leq \Ccn{C} \Bp{\sum_{l=0}^\infty\bp{2^{\theta l(s_1-s_0)}\,K\np{2^{l(s_0-s_1)},f,\ABPSRcompl{s_0}{p},\ABPSRcompl{s_1}{p}}}^q}^{\frac{1}{q}}
\leq \Ccn{C}\,\norm{f}_{\bp{\ABPSRcompl{s_0}{p},\ABPSRcompl{s_1}{p}}_{\theta,q}},
\end{align*}
where the last inequality above is valid since $s_0\neq s_1$.
Now consider $f\in\BSRcompl{s}{pq}\np{{\mathbb T}\times{\R^n}}$. Let $l\in\mathbb{N}_0$. Choose $\psi\in\CR \infty_0(\mathbb{R}\times\mathbb{R}^n)$ with $\psi(\eta,\xi)=1$ for $2^{-1}\leq\parnorm{\eta}{\xi}\leq2$ and
$\supp\psi=\setc{(\eta,\xi)\in\mathbb{R}\times\mathbb{R}^n}{4^{-1}\leq\parnorm{\eta}{\xi}\leq 4}$.
Using the same technique as above, this time utilizing the multiplier
\begin{align*}
\mathfrak{\widetilde{m}}^r_l:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{C},\quad\mathfrak{\widetilde{m}}^r_l(\eta,\xi)\coloneqq \psi\bp{2^{-2ml}\eta,2^{-l}\xi}\parnorm{\eta}{\xi}^{-r},
\end{align*}
we can estimate
\begin{align*}
&\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_1}{p}}\\
&\qquad=\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\psi(2^{-2ml}k,2^{-l}\xi)\, \phi(2^{-2ml}k,2^{-l}\xi)\, \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_1}{p}}\\
&\qquad \leq \Ccn{C}\,2^{ls_1}\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{p},
\end{align*}
and similarly
\begin{align*}
\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_0}{p}} \leq \Ccn{C}\,2^{ls_0}\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{p}.
\end{align*}
We thus obtain
\begin{align*}
&2^{-l\theta(s_0-s_1)}{2^{l(s_0-s_1)} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_1}{p}}} \\
&\qquad=2^{ls} 2^{-ls_1} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_1}{p}} \\
&\qquad\leq \Ccn{C} 2^{ls} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{p}
\end{align*}
and
\begin{align*}
&2^{-l\theta(s_0-s_1)}{ \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_0}{p}}} \\
&\qquad=2^{ls} 2^{-ls_0} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{\ABPSRcompl{s_0}{p}} \\
&\qquad\leq \Ccn{C} 2^{ls} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}_{p}.
\end{align*}
We now employ the $J$-method (see for example \cite[Chapter 3.2]{BL76}) to characterize
the interpolation space $\bp{\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}},\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}}_{\theta,q}$. By the last two estimates above, we see that
\begin{align*}
&2^{-l\theta(s_0-s_1)}\,J\bp{2^{l(s_0-s_1)},\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f} }}\\
&\qquad\leq \Ccn{C}\,2^{ls} \norm{\mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f}}}_p.
\end{align*}
Since $\calp_\bot f=f$, we find that $f=\sum_{l=0}^\infty \mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\bb{\phi(2^{-2ml}k,2^{-l}\xi) \mathscr{F}_{{\mathbb T}\times{\R^n}}\nb{f}}$ with
convergence in the space $\ABPSRcompl{s_0}{p}\np{{\mathbb T}\times{\R^n}}+\ABPSRcompl{s_1}{p}\np{{\mathbb T}\times{\R^n}}$. Recalling that $s_0\neq s_1$, we thus conclude that
\begin{align*}
\norm{f}_{\bp{\ABPSRcompl{s_0}{p},\ABPSRcompl{s_1}{p}}_{\theta,q}} \leq \Ccn{C}\,\norm{f}_{\BSRcompl{s}{pq}},
\end{align*}
and thereby the lemma.
\end{proof}
\section{Proof of Main Theorem}
Utilizing the formalism introduced in Section \ref{pre}, we can equivalently reformulate \eqref{SH} in a setting where the time axis $\mathbb{R}$ is replaced with the torus ${\mathbb T}\coloneqq\mathbb{R}/\tay\mathbb{Z}$. In this setting, the periodicity condition is no longer needed and we obtain the equivalent problem
\begin{align}\label{SHTorus}
\begin{pdeq}
\partial_tu - \Delta u + \nabla p &= f && \text{in }{\mathbb T}\times\R^n_+, \\
\Div u &= g && \text{in }{\mathbb T}\times\R^n_+, \\
u &= h && \text{on }{\mathbb T}\times\partial\R^n_+.
\end{pdeq}
\end{align}
In order to investigate \eqref{SHTorus}, we employ the projections $\calp$ and $\calp_\bot$ to decompose the problem into a steady-state and
a so-called \emph{purely oscillatory} problem. More specifically, we observe that $\np{u,p}$ is a solution to \eqref{SHTorus} if and only if $(v,\Pi)\coloneqq(\calp u,\calp p)$ is a solution to the steady-state problem
\begin{align}\label{SHGS}
\begin{pdeq}
- \Delta v + \nabla\Pi &= \calp f && \text{in }\R^n_+, \\
\Div v &= \calp g && \text{in }\R^n_+, \\
v &= \calp h && \text{on }\partial\R^n_+
\end{pdeq}
\end{align}
and $\np{w,\pi}\coloneqq\np{\calp_\bot u,\calp_\bot p}$ is a solution to
\begin{align}\label{SHGP}
\begin{pdeq}
\partial_tw - \Delta w + \nabla\pi &= \calp_\bot f && \text{in }{\mathbb T}\times\R^n_+, \\
\Div w &= \calp_\bot g && \text{in }{\mathbb T}\times\R^n_+, \\
w &= \calp_\bot h && \text{on }{\mathbb T}\times\partial\R^n_+.
\end{pdeq}
\end{align}
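Here we have used that $\calp$ and $\calp_\bot$ commute with spatial derivatives and that the time derivative has no steady part; indeed, by time periodicity,
\begin{align*}
\calp\nb{\partial_tu}(x) = \frac{1}{\tay}\int_0^\tay \partial_tu(t,x)\,{\mathrm d}t = \frac{u(\tay,x)-u(0,x)}{\tay} = 0,
\end{align*}
which is why no time derivative appears in the steady-state problem \eqref{SHGS}.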
The steady-state problem \eqref{SHGS} is a classical Stokes problem, for which a comprehensive theory is available. We therefore focus on the
purely oscillatory problem \eqref{SHGP}, which only differs from \eqref{SH} by having purely oscillatory data.
We start with the case of non-homogeneous boundary values.
\begin{prop}\label{PurelyOscProblem_HomoBrdData}
Let $q\in (1, \infty)$ and $n\geq 2$. For any vector field $H$ with
\begin{align}\label{PurelyOscProblem_HomoBrdData_DataReg}
\begin{aligned}
&H\in\calp_\bot\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}^n,\\
&H_{n}\in\calp_\bot\WSR{1}{q}({\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace))
\end{aligned}
\end{align}
there is a solution
\begin{align}\label{PurelyOscProblem_HomoBrdData_SolReg}
\begin{aligned}
&u\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T};\LR{q}({\halfspace})}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2}{q}({\halfspace})}^n,\\
&p\in\calp_\bot\LR{q}\bp{{\mathbb T};\WSRD{1}{q}({\halfspace})}
\end{aligned}
\end{align}
to
\begin{align}\label{PurelyOscProblem_HomoBrdData_Eq}
\begin{pdeq}
\partial_tu - \Delta u + \nabla p &= 0 && \text{in }{\mathbb T}\times\R^n_+, \\
\Div u &= 0 && \text{in }{\mathbb T}\times\R^n_+, \\
u &= H && \text{on }{\mathbb T}\times\partial\R^n_+
\end{pdeq}
\end{align}
that satisfies
\begin{align}
\begin{aligned}\label{PurelyOscProblem_HomoBrdData_Est}
&\norm{u}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\halfspace})}}
+ \norm{\nabla p}_{\LR{q}\np{{\mathbb T};\LR{q}({\halfspace})}}\\
&\qquad \leq \Ccn{C}\,
\bp{\norm{H}_{\WSR{1-\frac{1}{2q}}{q}\np{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\np{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}}
+\norm{H_n}_{\WSR{1}{q}\np{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}}
}
\end{aligned}
\end{align}
with $\Ccn{C}=\Ccn{C}(n,q,\tay)$.
\end{prop}
\begin{proof}
We shall employ the Fourier transform $\mathscr{F}_{{\mathbb T}\times\wholespace}$ to transform \eqref{PurelyOscProblem_HomoBrdData_Eq} into a system of ODEs.
For this purpose, we denote by $(k,\xi)\in {\frac{2\pi}{\tay}\Z}\times\wholespace$ the coordinates in the dual group of ${\mathbb T}\times\wholespace$.
Letting
$v\coloneqq u^\prime \coloneqq (u_1,\ldots,u_{n-1})$, $w\coloneqq u_n$,
$\ft{v}\coloneqq\mathscr{F}_{{\mathbb T}\times\wholespace}\bb{v}$, and $\ft{w}\coloneqq\mathscr{F}_{{\mathbb T}\times\wholespace}\bb{u_n}$ in \eqref{PurelyOscProblem_HomoBrdData_Eq}, we obtain an equivalent formulation of the system as a family of ODEs. More precisely, \eqref{PurelyOscProblem_HomoBrdData_Eq} is equivalent
to the following ODE being satisfied for each (fixed) $(k,\xi)\in{\frac{2\pi}{\tay}\Z}\times\wholespace$:
\begin{align}\label{ODE}
\begin{pdeq}
ik\ft{v}(x_n) + \snorm{\xi}^2\ft{v}(x_n) - \partial_{x_n}^2\ft{v}(x_n) + i\xi\ft{p}(x_n) &= 0 && \text{in }\mathbb{R}_+, \\
ik\ft{w}(x_n) + \snorm{\xi}^2\ft{w}(x_n) - \partial_{x_n}^2\ft{w}(x_n) + \partial_{x_n}\ft{p}(x_n) &= 0 && \text{in }\mathbb{R}_+, \\
i\xi\cdot\ft{v}(x_n) + \partial_{x_n}\ft{w}(x_n) &= 0 && \text{in }\mathbb{R}_+, \\
(\ft{v}(0), \ft{w}(0)) &= (\ft{\rhsh^\prime}, \ft{\rhsh_n}).
\end{pdeq}
\end{align}
To solve the ODE, we first consider the case $k\neq 0$.
Taking the divergence on both sides of $\eqrefsub{PurelyOscProblem_HomoBrdData_Eq}{1}$ and utilizing that $\Div u=0$, we find that
$\Delta p = 0$ and thus $-\snorm{\xi}^2\ft{p}(x_n) + \partial_{x_n}^2\ft{p}(x_n) = 0$ in $\mathbb{R}_+$. Consequently,
\begin{align}\label{PurelyOscProblem_tfupresFormula}
\ft{p}(x_n) = q_0(k,\xi) e^{-\snorm{\xi}x_n}\quad\text{in }\mathbb{R}_+
\end{align}
for some function $q_0:{\frac{2\pi}{\tay}\Z}\times\wholespace\rightarrow\mathbb{C}$.
Inserting \eqref{PurelyOscProblem_tfupresFormula} into \eqref{ODE}, we find that
\begin{align*}
\partial_{x_n}^2\ft{v} &= (ik + \snorm{\xi}^2)\ft{v} + i\xi q_0 e^{-\snorm{\xi}x_n}, \\
\partial_{x_n}^2\ft{w} &= (ik + \snorm{\xi}^2)\ft{w} - \snorm{\xi} q_0 e^{-\snorm{\xi}x_n}.
\end{align*}
Since $k\neq 0$, solving these equations and retaining only the exponentially decaying solutions (the square roots being taken with positive real part) yields
\begin{align}
\ft{v}(x_n) &= -\frac{\xi q_0(k, \xi)}{k} e^{-\snorm{\xi}x_n} + \alpha(k, \xi) e^{-\sqrt{\snorm{\xi}^2 + ik} \, x_n}, \\
\ft{w}(x_n) &= \frac{\snorm{\xi}q_0(k, \xi)}{ik} e^{-\snorm{\xi}x_n} + \beta(k, \xi) e^{-\sqrt{\snorm{\xi}^2 + ik} \, x_n},
\end{align}
for some functions $\alpha,\beta:{\frac{2\pi}{\tay}\Z}\times\wholespace\rightarrow\mathbb{C}$.
Utilizing
$\eqrefsub{ODE}{3}$ and the boundary conditions $\eqrefsub{ODE}{4}$, we deduce
\begin{align*}
&\alpha = \ft{\rhsh^\prime} + \frac{\xi q_0}{k}, \qquad \beta = \ft{\rhsh_n} - \frac{\snorm{\xi}q_0}{ik}
\end{align*}
and
\begin{align}\label{PurelyOscProblem_HomoBrdData_SolFormulaq0}
&q_0 = -i\left(\snorm{\xi} + \sqrt{\snorm{\xi}^2 + ik}\right)\frac{\xi}{\snorm{\xi}}\cdot\ft{\rhsh^\prime} + \sqrt{\snorm{\xi}^2+ik} \, \ft{\rhsh_n} +
\snorm{\xi}\ft{\rhsh_n} + \frac{ik}{\snorm{\xi}}\ft{\rhsh_n}.
\end{align}
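For the reader's convenience, we sketch how \eqref{PurelyOscProblem_HomoBrdData_SolFormulaq0} arises: inserting the expressions for $\alpha$ and $\beta$ into $\eqrefsub{ODE}{3}$, the coefficients of $\e^{-\snorm{\xi}x_n}$ cancel identically, and a comparison of the coefficients of $\e^{-\sqrt{\snorm{\xi}^2+ik}\,x_n}$ yields
\begin{align*}
q_0\,\frac{i\snorm{\xi}}{k}\Bp{\snorm{\xi}-\sqrt{\snorm{\xi}^2+ik}} = \sqrt{\snorm{\xi}^2+ik}\,\ft{\rhsh_n} - i\xi\cdot\ft{\rhsh^\prime}.
\end{align*}
Multiplying both sides with $\snorm{\xi}+\sqrt{\snorm{\xi}^2+ik}$ and utilizing $\bp{\snorm{\xi}-\sqrt{\snorm{\xi}^2+ik}}\bp{\snorm{\xi}+\sqrt{\snorm{\xi}^2+ik}}=-ik$, one arrives at \eqref{PurelyOscProblem_HomoBrdData_SolFormulaq0}.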
By \eqref{PurelyOscProblem_tfupresFormula}--\eqref{PurelyOscProblem_HomoBrdData_SolFormulaq0} a solution to \eqref{ODE} is identified in the case
$k\neq 0$. Since $H$ is purely oscillatory, we have $\ft{\rhsh^\prime}(0,\xi)=\ft{\rhsh_n}(0,\xi)=0$, whence $\bp{\ft{v},\ft{w},\ft{p}}\coloneqq(0,0,0)$ solves \eqref{ODE} in the case $k=0$.
We thus obtain a formula for the solution to \eqref{PurelyOscProblem_HomoBrdData_Eq}:
\begin{align}\label{PurelyOscProblem_HomoBrdData_SolFormula}
\begin{aligned}
v &= \mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\Bb{-\frac{\xi q_0}{k} \e^{-\snorm{\xi}x_n} + \bp{\ft{\rhsh^\prime} + \frac{\xi q_0}{k}} \e^{-\sqrt{\snorm{\xi}^2 + ik} \, x_n}}, \\
w &= \mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\Bb{\frac{\snorm{\xi}q_0}{ik} \e^{-\snorm{\xi}x_n} + \bp{\ft{\rhsh_n} - \frac{\snorm{\xi}q_0}{ik}}
\e^{-\sqrt{\snorm{\xi}^2 + ik} \, x_n}}, \\
p &= \mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{q_0 \e^{-\snorm{\xi}x_n}}.
\end{aligned}
\end{align}
Formally at least, $(v,w,p)$ as defined above is a solution to \eqref{PurelyOscProblem_HomoBrdData_Eq}. It remains to show that
this solution is well-defined in the class \eqref{PurelyOscProblem_HomoBrdData_SolReg} for data in the class
\eqref{PurelyOscProblem_HomoBrdData_DataReg}. We start by considering data $H\in\calp_\bot Z\np{{\mathbb T}\times\wholespace}^n$. The space
$Z\np{{\mathbb T}\times\wholespace}$ is dense in
\begin{align*}
\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)} \cap \WSR{1}{q}\bp{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)},
\end{align*}
which is not a trivial assertion since it entails the construction of an approximating sequence that converges simultaneously in Sobolev spaces of positive order and in homogeneous Sobolev spaces of negative order. Nevertheless, it can be shown by a standard ``cut-off'' and mollifier technique;
see \cite[proof of Theorem 2.3.3 and Theorem 5.1.5]{TriebelTheoryFunctionSpaces}.
Consequently, $\calp_\bot Z\np{{\mathbb T}\times\wholespace}^n$ is dense in the class \eqref{PurelyOscProblem_HomoBrdData_DataReg}.
Clearly, for purely oscillatory data $H$ the solution
given by \eqref{PurelyOscProblem_HomoBrdData_SolFormula} is also purely oscillatory.
Therefore, if we can show \eqref{PurelyOscProblem_HomoBrdData_Est} for arbitrary $H\in\calp_\bot Z\np{{\mathbb T}\times\wholespace}^n$, the claim of the proposition will follow by a density argument.
We first examine the pressure term $p$ (more specifically $\nabla p$). The terms in \eqref{PurelyOscProblem_HomoBrdData_SolFormulaq0} have different
orders of regularity, so we decompose $q_0=q_1+q_2$ by
\begin{align}\label{PurelyOscProblem_HomoBrdData_DefOfqdfs}
\begin{aligned}
&q_1(k,\xi) \coloneqq -i\left(\snorm{\xi} + \sqrt{\snorm{\xi}^2 + ik}\right)\frac{\xi}{\snorm{\xi}}\cdot\ft{\rhsh^\prime} + \sqrt{\snorm{\xi}^2+ik} \, \ft{\rhsh_n} + \snorm{\xi}\ft{\rhsh_n},\\
&q_2(k,\xi) \coloneqq \frac{ik}{\snorm{\xi}}\ft{\rhsh_n},
\end{aligned}
\end{align}
and introduce the operators
\begin{align}\label{PurelyOscProblem_HomoBrdData_DefOfGoodOpr}
\begin{aligned}
&\mathscr G: Z\np{{\mathbb T}\times\wholespace}^n \rightarrow \mathscr{S}\np{{\mathbb T}\times{\halfspace}}^n, \\
&\mathscr G(H)\coloneqq \mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\Bb{\xi q_1(k,\xi)\e^{-\snorm{\xi}x_n}}
\end{aligned}
\end{align}
and
\begin{align}\label{PurelyOscProblem_HomoBrdData_DefOfBadOpr}
\begin{aligned}
&\mathscr B: Z\np{{\mathbb T}\times\wholespace} \rightarrow \mathscr{S}\np{{\mathbb T}\times{\halfspace}}^n, \\
&\mathscr B(\rhsh_n)\coloneqq \mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\Bb{\xi q_2(k,\xi)\e^{-\snorm{\xi}x_n}}.
\end{aligned}
\end{align}
For $m\in\mathbb{N}_0$, we observe for any $x_n>0$ that
the symbol $\xi\mapsto\np{\snorm{\xi}x_n}^m\e^{-\snorm{\xi}x_n}$ is an $\LR{q}\np{\wholespace}$-multiplier.
Specifically, one verifies that
\begin{align*}
\sup_{x_n>0}\sup_{\varepsilon\in\{0, 1\}^{n-1}}\sup_{\xi\in\wholespace}\left|\xi_1^{\varepsilon_1}\cdots\xi_{n-1}^{\varepsilon_{n-1}}\partial_{\xi_1}^{\varepsilon_1}\cdots\partial_{\xi_{n-1}}^{\varepsilon_{n-1}}\bb{\np{\snorm{\xi}x_n}^m\e^{-\snorm{\xi}x_n}} \right|<\infty,
\end{align*}
whence it follows from the Marcinkiewicz Multiplier Theorem (see for example \cite[Corollary 6.2.5]{Grafakos}) that the Fourier-multiplier operator with symbol
$\xi\mapsto\np{\snorm{\xi}x_n}^m\e^{-\snorm{\xi}x_n}$ is a bounded operator on $\LR{q}\np\wholespace$ with operator norm independent of $x_n$, that is,
\begin{align}\label{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep}
\sup_{x_n>0}\,\normL{\phi\mapsto\mathscr{F}^{-1}_{\wholespace}\Bb{\np{\snorm{\xi}x_n}^m\e^{-\snorm{\xi}x_n}\mathscr{F}_{\wholespace}\nb{\phi}}}_{\mathscr{L}\np{\LR{q}\np{\wholespace},\LR{q}\np{\wholespace}}}<\infty.
\end{align}
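The Marcinkiewicz condition itself is verified by an elementary computation: writing $t\coloneqq\snorm{\xi}x_n$, a single differentiation gives, for instance,
\begin{align*}
\xi_j\partial_{\xi_j}\bb{\np{\snorm{\xi}x_n}^m\e^{-\snorm{\xi}x_n}} = \frac{\xi_j^2}{\snorm{\xi}^2}\bp{m\,t^{m} - t^{m+1}}\e^{-t},
\end{align*}
which is bounded uniformly in $x_n>0$ since $\sup_{t\geq0}\,t^j\e^{-t}<\infty$ for every $j\in\mathbb{N}_0$; the higher mixed derivatives are estimated in the same manner.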
We can thus estimate
\begin{align*}
&\norm{\mathscr G(H)}_{\LR{\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
\leq \Ccn{C}\, \norm{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{\xi q_1(k,\xi)}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\\
&\qquad\leq \Ccn{C}\, \Bp{\norm{H}_{\LR{q}\np{{\mathbb T};\HSR{2}{q}\np\wholespace}} +
\normL{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{M(k,\xi)\,{\np{\snorm{\xi}^2+ik}\, \frac{\xi\otimes\xi}{\snorm{\xi}^2}\ft{\rhsh^\prime}}}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\\
&\qquad\qquad\
+\normL{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{M(k,\xi)\,{\np{\snorm{\xi}^2+ik}\, \frac{\xi}{\snorm{\xi}}\ft{\rhsh_n}}}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}
},
\end{align*}
where
\begin{align*}
M:\mathbb{R}\times\wholespace\rightarrow\mathbb{C},\quad
M(\eta,\xi)\coloneqq \frac{\snorm{\xi}}{\sqrt{\snorm{\xi}^2+i\eta}}.
\end{align*}
Employing again the Marcinkiewicz Multiplier Theorem, we find that the symbol $M$ is an $\LR{q}\np{\mathbb{R};\LR{q}(\wholespace)}$-multiplier. An application of the Transference Principle (Theorem \ref{transference}) therefore implies that the restriction $M_{|\Zgrp\times\ws}$ is an $\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}$-multiplier.
We thus conclude
\begin{align}\label{PurelyOscProblem_HomoBrdData_GoodOprInterpolationEst1}
\begin{aligned}
&\norm{\mathscr G(H)}_{\LR{\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
\leq \Ccn{C}\, {\norm{H}_{\LR{q}\np{{\mathbb T};\HSR{2}{q}\np\wholespace}\cap\HSR{1}{q}\np{{\mathbb T};\LR{q}\np{\wholespace}}}}.
\end{aligned}
\end{align}
This estimate shall serve as an interpolation endpoint. To obtain the opposite endpoint, we again employ \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep} to estimate
\begin{align*}
\sup_{x_n>0}\,\norm{x_n \mathscr G\np{H}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\leq \Ccn{C}\,\norm{q_1}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}},
\end{align*}
which implies
\begin{align*}
\norm{\mathscr G\np{H}}_{\LR{1,\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
&= \normL{\frac{1}{x_n} \norm{x_n \mathscr G\np{H}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}_{\LR{1,\infty}\np{\mathbb{R}_+}} \\
&\leq \Ccn{C}\,\norm{q_1}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}.
\end{align*}
Recalling \eqref{PurelyOscProblem_HomoBrdData_DefOfqdfs}, we estimate
\begin{align*}
&\norm{q_1}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\\
&\qquad \leq \Ccn{C}\,\Bp{\norm{H}_{\LR{q}\np{{\mathbb T};\HSR{1}{q}\np\wholespace}}
+\normL{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{\MmultiplierNr{1}(k,\xi)\cdot{\np{\snorm{\xi}+\snorm{k}^\frac{1}{2}}\ft{\rhsh^\prime}}}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\\
&\qquad\qquad\quad +\normL{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{\MmultiplierNr{2}(k,\xi)\,{\np{\snorm{\xi}+\snorm{k}^\frac{1}{2}}\ft{\rhsh_n}}}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
\end{align*}
with
\begin{align*}
&\MmultiplierNr{1}:\mathbb{R}\times\wholespace\rightarrow\mathbb{C}^{n-1},\quad
\MmultiplierNr{1}(\eta,\xi)\coloneqq \frac{\sqrt{\snorm{\xi}^2+i\eta}}{\snorm{\xi}+\snorm{\eta}^{\frac{1}{2}}}\frac{\xi}{\snorm{\xi}},\\
&\MmultiplierNr{2}:\mathbb{R}\times\wholespace\rightarrow\mathbb{C},\quad
\MmultiplierNr{2}(\eta,\xi)\coloneqq \frac{\sqrt{\snorm{\xi}^2+i\eta}}{{\snorm{\xi}+\snorm{\eta}^{\frac{1}{2}}}}.
\end{align*}
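The uniform boundedness of these symbols rests on the elementary equivalence
\begin{align*}
\frac{1}{2}\bp{\snorm{\xi}+\snorm{\eta}^{\frac{1}{2}}} \leq \Bp{\snorm{\xi}^4+\eta^2}^{\frac{1}{4}} = \bigl|\sqrt{\snorm{\xi}^2+i\eta}\bigr| \leq 2^{\frac{1}{4}}\bp{\snorm{\xi}+\snorm{\eta}^{\frac{1}{2}}},
\end{align*}
the additional factor $\frac{\xi}{\snorm{\xi}}$ appearing in $\MmultiplierNr{1}$ being bounded as well.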
Again, one can utilize the Marcinkiewicz Multiplier Theorem to show that both $\MmultiplierNr{1}$ and $\MmultiplierNr{2}$ are
$\LR{q}\np{\mathbb{R};\LR{q}(\wholespace)}$-multipliers, and subsequently obtain from
the Transference Principle (Theorem \ref{transference}) that their restrictions to ${\frac{2\pi}{\tay}\Z}\times\wholespace$
are $\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}$-multipliers. Consequently, we find that
\begin{align}\label{PurelyOscProblem_HomoBrdData_GoodOprInterpolationEst2}
&\norm{\mathscr G\np{H}}_{\LR{1,\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
\leq \Ccn{C}\, {\norm{H}_{\LR{q}\np{{\mathbb T};\HSR{1}{q}\np\wholespace}\cap\HSR{\frac{1}{2}}{q}\np{{\mathbb T};\LR{q}\np{\wholespace}}}}.
\end{align}
By \eqref{PurelyOscProblem_HomoBrdData_GoodOprInterpolationEst1} and \eqref{PurelyOscProblem_HomoBrdData_GoodOprInterpolationEst2}, the
operator $\mathscr G$ extends uniquely to a bounded operator
\begin{align}\label{PurelyOscProblem_HomoBrdData_GoodOprInterpolationPoles1}
\begin{aligned}
&\mathscr G:\LR{q}\bp{{\mathbb T};\HSR{2}{q}\np\wholespace}^n\cap\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{\wholespace}}^n\rightarrow\LR{\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}^n,\\
&\mathscr G:\LR{q}\bp{{\mathbb T};\HSR{1}{q}\np\wholespace}^n\cap\HSR{\frac{1}{2}}{q}\bp{{\mathbb T};\LR{q}\np{\wholespace}}^n\rightarrow\LR{1,\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}^n.
\end{aligned}
\end{align}
These extensions rely on the fact that $Z\np{{\mathbb T}\times\wholespace}$ is dense in the function spaces on the left-hand side above.
We once more refer to
\cite[proof of Theorem 2.3.3 and Theorem 5.1.5]{TriebelTheoryFunctionSpaces} for a verification of this fact.
Using the projection $\calp_\bot$ on the left-hand side in \eqref{PurelyOscProblem_HomoBrdData_GoodOprInterpolationPoles1}, we obtain scales of the anisotropic Bessel-Potential spaces introduced in \eqref{BS_DefOfBesovSpace_DefnBessel}. Consequently, $\mathscr G$ is a bounded operator:
\begin{align}\label{PurelyOscProblem_HomoBrdData_GoodOprInterpolationPoles2}
\begin{aligned}
&\mathscr G:\ABPSRcompl{2}{q}\np{{\mathbb T}\times\wholespace}^n \rightarrow\LR{\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}^n,\\
&\mathscr G:\ABPSRcompl{1}{q}\np{{\mathbb T}\times\wholespace}^n \rightarrow\LR{1,\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}^n.
\end{aligned}
\end{align}
Utilizing Lemma \ref{BS_InterpolationLem}, we find that
\begin{align*}
&\Bp{\ABPSRcompl{1}{q}\np{{\mathbb T}\times\wholespace},\ABPSRcompl{2}{q}\np{{\mathbb T}\times\wholespace}}_{1-\frac{1}{q},q}=\BSRcompl{2-\frac{1}{q}}{qq}\np{{\mathbb T}\times\wholespace}\\
&\quad=\Bp{\calp_\bot\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)},\ABPSRcompl{2}{q}\np{{\mathbb T}\times\wholespace}}_{1-\frac{1}{2q},q}\\
&\quad=\Bp{\calp_\bot\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}, \calp_\bot\LR{q}\bp{{\mathbb T};\HSR{2}{q}\np\wholespace}\cap\calp_\bot\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{\wholespace}}}_{1-\frac{1}{2q},q}\\
&\quad=\calp_\bot\Bp{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}, \LR{q}\bp{{\mathbb T};\HSR{2}{q}\np\wholespace}}_{1-\frac{1}{2q},q}\\
&\qquad\qquad \cap \calp_\bot\Bp{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)},\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{\wholespace}}}_{1-\frac{1}{2q},q}\\
&\quad=\calp_\bot\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}.
\end{align*}
One can employ \cite[Theorem 1.12.1]{TriebelInterpolation} to verify the interpolation of the intersection space in the fourth equality above.
Moreover, real interpolation yields
\begin{align*}
\Bp{ \LR{1, \infty}\bp{\mathbb{R}_+; \LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}, &\, \LR{\infty}\bp{\mathbb{R}_+; \LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}_{1-\frac{1}{q}, q} \\
&\qquad\qquad\qquad\qquad\qquad = \LR{q}\bp{\mathbb{R}_+; \LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}.
\end{align*}
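This identity is an instance of the real interpolation of Lorentz spaces, $\bp{\LR{p_0,q_0},\LR{p_1,q_1}}_{\theta,q}=\LR{p,q}$ with $\frac{1}{p}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}$ (see for example \cite{BL76}); in the present situation
\begin{align*}
\frac{1}{p} = \frac{1-\theta}{1} + \frac{\theta}{\infty} = 1-\bp{1-\frac{1}{q}} = \frac{1}{q},
\end{align*}
so that the resulting space is $\LR{q,q}=\LR{q}$.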
Recalling \eqref{PurelyOscProblem_HomoBrdData_GoodOprInterpolationPoles2}, we conclude that $\mathscr G$ extends uniquely to a bounded operator
\begin{align}\label{PurelyOscProblem_HomoBrdData_GoodOprFinalMappingProperty}
\mathscr G: \calp_\bot\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}^n \rightarrow
\LR{q}\bp{{\mathbb T};\LR{q}({\halfspace})}^n.
\end{align}
We now recall \eqref{PurelyOscProblem_HomoBrdData_DefOfBadOpr} and examine the operator $\mathscr B$. Utilizing \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep} with $m=0$, we obtain
\begin{align}\label{PurelyOscProblem_HomoBrdData_BadOprInterpolationPoles1}
\begin{aligned}
\norm{\mathscr B(\rhsh_n)}_{\LR{\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}}
&\leq \Ccn{C}\, \norm{\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{\xi q_2\np{k,\xi}}}_{\LR{q}\np{{\mathbb T};\LR{q}(\wholespace)}}\\
&\leq \Ccn{C}\, \norm{\rhsh_n}_{\HSR{1}{q}\np{{\mathbb T};\LR{q}\np{\wholespace}}}.
\end{aligned}
\end{align}
We again employ \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep} to estimate
\begin{align*}
\sup_{x_n>0}\,\norm{x_n \mathscr B\np{\rhsh_n}}_{\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}\leq \Ccn{C}\,\norm{\rhsh_n}_{\HSR{1}{q}\np{{\mathbb T};\HSRD{-1}{q}\np{\wholespace}}},
\end{align*}
which implies
\begin{align*}
\norm{\mathscr B\np{\rhsh_n}}_{\LR{1,\infty}\np{\mathbb{R}_+;\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}}
&= \normL{\frac{1}{x_n} \norm{x_n \mathscr B\np{\rhsh_n}}_{\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}}_{\LR{1,\infty}\np{\mathbb{R}_+}}\\
&\leq \Ccn{C}\,\norm{\rhsh_n}_{\HSR{1}{q}\np{{\mathbb T};\HSRD{-1}{q}\np{\wholespace}}}.
\end{align*}
It follows that $\mathscr B$ extends to a bounded operator
\begin{align*}
&\mathscr B: \calp_\bot\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{\wholespace}} \rightarrow\LR{\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}^n,\\
&\mathscr B: \calp_\bot\HSR{1}{q}\bp{{\mathbb T};\HSRD{-1}{q}\np{\wholespace}}\rightarrow\LR{1,\infty}\bp{\mathbb{R}_+;\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}^n.
\end{align*}
Real interpolation thus implies that
$\mathscr B$ extends to a bounded operator
\begin{align}\label{PurelyOscProblem_HomoBrdData_BadOprFinalMappingProperty}
\mathscr B: \calp_\bot\WSR{1}{q}\bp{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}\rightarrow \LR{q}\bp{{\mathbb T}; \LR{q}({\halfspace})}^n.
\end{align}
We now return to the solution formulas \eqref{PurelyOscProblem_HomoBrdData_SolFormula} and consider $H\in\calp_\bot Z\np{{\mathbb T}\times\wholespace}^n$.
In this case, an application of \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep} ensures that $p$ is well-defined as an element
in the function space $\LR{q}\bp{{\mathbb T};\HSRD{1}{q}\np{\halfspace}}$.
By \eqref{PurelyOscProblem_HomoBrdData_GoodOprFinalMappingProperty} and \eqref{PurelyOscProblem_HomoBrdData_BadOprFinalMappingProperty},
we obtain
\begin{align}\label{PurelyOscProblem_HomoBrdData_EstimateForGradPresure}
\begin{aligned}
&\norm{\nabla p}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}} = \norm{\mathscr G\np{H}+\mathscr B\np{\rhsh_n}}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}}\\
&\quad \leq \Ccn{C}
\bp{\norm{H}_{\WSR{1-\frac{1}{2q}}{q}\np{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\np{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}}
+\norm{H_n}_{\WSR{1}{q}\np{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}}}.
\end{aligned}
\end{align}
In a similar manner, it can be shown that $(v,w)$ is well-defined as an element in the space $\LR{q}\bp{{\mathbb T};\HSR{2}{q}\np{{\halfspace}}}\cap\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{{\halfspace}}}$. To this end, one may consider the symbol
\begin{align*}
\mathfrak{m}:\mathbb{R}\times\wholespace\rightarrow\mathbb{C},\quad \mathfrak{m}\np{\eta,\xi}\coloneqq \bp{\sqrt{\snorm{\xi}^2+i\eta}\,x_n}^m \e^{-\sqrt{\snorm{\xi}^2 + i\eta} \, x_n}
\end{align*}
and verify that
\begin{align*}
\sup_{x_n>0}\sup_{\varepsilon\in\{0, 1\}^n}\sup_{(\eta,\xi)\in\mathbb{R}\times\wholespace}\left|\eta^{\varepsilon_0}\xi_1^{\varepsilon_1}\cdots\xi_{n-1}^{\varepsilon_{n-1}}\partial_{\eta}^{\varepsilon_0}\partial_{\xi_1}^{\varepsilon_1}\cdots\partial_{\xi_{n-1}}^{\varepsilon_{n-1}}
\mathfrak{m}(\eta,\xi) \right|<\infty.
\end{align*}
It follows that the Fourier-multiplier operator corresponding to the symbol $\mathfrak{m}$ is a bounded operator on $\LR{q}\np{\mathbb{R}; \LR{q}(\wholespace)}$ with operator norm independent of $x_n$. An application of the Transference Principle (Theorem \ref{transference}) therefore implies that the operator corresponding to the symbol $\mathfrak{m}_{|{\frac{2\pi}{\tay}\Z}\times\wholespace}$ is a bounded operator on
$\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}$ with operator norm independent of $x_n$, that is,
\begin{align}\label{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep2}
\sup_{x_n>0}\,\normL{\phi\mapsto\mathscr{F}^{-1}_{{\mathbb T}\times\wholespace}\bb{\mathfrak{m}(k,\xi)\mathscr{F}_{{\mathbb T}\times\wholespace}\nb{\phi}}}_{\mathscr{L}\np{\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)},\LR{q}\np{{\mathbb T}; \LR{q}(\wholespace)}}}<\infty.
\end{align}
With both \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep} and \eqref{PurelyOscProblem_HomoBrdData_MultiplierOprNormIndep2} at our disposal, it is now straightforward to verify that
$u\coloneqq(v,w)$ is well-defined as an element in the space $\LR{q}\bp{{\mathbb T};\HSR{2}{q}\np{{\halfspace}}}\cap\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{{\halfspace}}}$.
By construction, this choice of $(u,p)$ is a solution to \eqref{PurelyOscProblem_HomoBrdData_Eq}.
Moreover, since $\calp_\bot H=H$, also $\calp_\bot u=u$.
This means that $u$ is a purely oscillatory solution in the aforementioned function space to the time-periodic heat equation in the half-space
\begin{align*}
\begin{pdeq}
\partial_tu - \Delta u &= -\nabla p && \text{in }{\mathbb T}\times\R^n_+, \\
u &= H && \text{on }{\mathbb T}\times\partial{\halfspace}.
\end{pdeq}
\end{align*}
By \cite[Theorem 2.1]{KyedSauer_Heat} (see also \cite[Theorem 1.3]{KyedSauer_ADN1}), it is known that this problem has a unique purely oscillatory solution in the space
$\LR{q}\bp{{\mathbb T};\HSR{2}{q}\np{{\halfspace}}}\cap\HSR{1}{q}\bp{{\mathbb T};\LR{q}\np{{\halfspace}}}$, which satisfies
\begin{align*}
&\norm{u}_{\HSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\HSR{2}{q}({\halfspace})}}\\
&\qquad \leq \Ccn{C} \bp{\norm{\nabla p}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}} +
\norm{H}_{\WSR{1-\frac{1}{2q}}{q}\np{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\np{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}}}.
\end{align*}
Combining this estimate with \eqref{PurelyOscProblem_HomoBrdData_EstimateForGradPresure}, we conclude \eqref{PurelyOscProblem_HomoBrdData_Est}.
\end{proof}
In the next step, we consider the resolution of the fully non-homogeneous system \eqref{SHGP}, that is, \eqref{SH} with purely oscillatory data, and establish $\LR{q}$ estimates. This is the final step towards the proof of the main result of the article.
\begin{thm}\label{PurelyOscProblem_HomoDataThm}
Let $q\in(1,\infty)$ and $n\geq 2$. For all
\begin{align}
\begin{aligned}
&f\in\calp_\bot\LR{q}\bp{{\mathbb T}; \LR{q}({\halfspace})}^n,\\
&g\in\calp_\bot\LR{q}\bp{{\mathbb T}; \WSR{1}{q}(\R^n_+)}\cap\calp_\bot\WSR{1}{q}\bp{{\mathbb T}; \WSRD{-1}{q}(\R^n_+)},\\
&h\in\calp_\bot\WSR{1-\frac{1}{2q}}{q}\bp{{\mathbb T};\LR{q}(\wholespace)}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}^n
\end{aligned}
\end{align}
with
\begin{align}
&h_{n}\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}
\end{align}
there is a solution $(u,p)$ to \eqref{SH} with
\begin{align}
\begin{aligned}\label{PurelyOscProblem_HomoDataThm_SolReg}
&u\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T};\LR{q}({\halfspace})}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2}{q}({\halfspace})}^n,\\
&p\in\calp_\bot\LR{q}\bp{{\mathbb T};\WSRD{1}{q}({\halfspace})},
\end{aligned}
\end{align}
which satisfies
\begin{align}\label{PurelyOscProblem_HomoDataThm_ProjComplEst}
\begin{aligned}
&\norm{u}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\halfspace})}}
+ \norm{\nabla p}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}} \\
&\qquad \leq \Ccn{C}\,
\bp{\norm{f}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}}+\norm{g}_{\LR{q}\np{{\mathbb T}; \WSR{1}{q}(\R^n_+)}\cap\WSR{1}{q}\np{{\mathbb T}; \WSRD{-1}{q}(\R^n_+)}}\\
&\qquad\qquad +\norm{h}_{\WSR{1-\frac{1}{2q}}{q}\np{{\mathbb T};\LR{q}(\wholespace)}\cap\LR{q}\np{{\mathbb T};\WSR{2-\frac{1}{q}}{q}(\wholespace)}}
+ \norm{h_n}_{\WSR{1}{q}\np{{\mathbb T}; \WSRD{-\frac{1}{q}}{q}(\wholespace)}}\,
}
\end{aligned}
\end{align}
with $\Ccn{C}=\Ccn{C}(n,q,\tay)$. Moreover, if
$(\tilde{u},\tilde{p})$ is another solution to \eqref{SH} in the class \eqref{PurelyOscProblem_HomoDataThm_SolReg}, then
$u=\tilde{u}$ and $p=\tilde{p} + d(t)$
for some function $d$ that depends only on time.
\end{thm}
\begin{proof}
Let
$v\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T};\LR{q}({\halfspace})}^n\cap\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{2}{q}({\halfspace})}^n$ be
the solution to the purely oscillatory time-periodic n-dimensional heat equation
\begin{align*}
\begin{pdeq}
\partial_tv - \Delta v &= f && \text{in }{\mathbb T}\times\R^n_+, \\
v &= 0 && \text{on }{\mathbb T}\times\partial{\halfspace}.
\end{pdeq}
\end{align*}
The existence of such a solution $v$ that satisfies
\begin{align*}
\norm{v}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\halfspace})}} \leq \Ccn{C} \norm{f}_{\LR{q}\np{{\mathbb T}; \LR{q}({\halfspace})}}
\end{align*}
follows from \cite[Theorem 2.1]{KyedSauer_Heat}. Denote by $G$ the extension of $g-\Div v$ to ${{\mathbb T}\times{\R^n}}$ by even reflection in
the $x_n$ variable. Then
$G\in\calp_\bot\LR{q}\bp{{\mathbb T};\WSR{1}{q}\np{{\R^n}}}$. Moreover, identifying $\WSRD{-1}{q}\np{{\R^n}}$ as the dual of $\WSRD{1}{q'}\np{{\R^n}}$ and recalling that
$g\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T}; \WSRD{-1}{q}(\R^n_+)}$,
one directly verifies that $G\in\calp_\bot\WSR{1}{q}\bp{{\mathbb T};\WSRD{-1}{q}\np{{\R^n}}}$ with
\begin{multline*}
\norm{G}_{\WSR{1}{q}\np{{\mathbb T};\WSRD{-1}{q}\np{{\R^n}}}\cap\LR{q}\np{{\mathbb T};\WSR{1}{q}\np{{\R^n}}}}\\
\leq \Ccn{C}\bp{\norm{g}_{\WSR{1}{q}\np{{\mathbb T};\WSRD{-1}{q}\np{{\halfspace}}}\cap\LR{q}\np{{\mathbb T};\WSR{1}{q}\np{{\halfspace}}}}+
\norm{v}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\halfspace})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\halfspace})}}}.
\end{multline*}
A solution
to the purely oscillatory Stokes system
\begin{align}\label{PurelyOscProblem_HomoDataThm_StokesRnReduction}
\begin{pdeq}
\partial_tw-\Delta w + \nabla\pi &=0 && \text{in }{{\mathbb T}\times{\R^n}},\\
\Div w &= G &&\text{in }{{\mathbb T}\times{\R^n}}
\end{pdeq}
\end{align}
is obtained via the solution formulas
\begin{align*}
w \coloneqq \mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\Bb{\frac{-i\xi}{\snorm{\xi}^2}\,\mathscr{F}_{{{\mathbb T}\times{\R^n}}}\nb{G}},\quad
\pi \coloneqq \mathscr{F}^{-1}_{{{\mathbb T}\times{\R^n}}}\Bb{\frac{ik+\snorm{\xi}^2}{\snorm{\xi}^2}\,\mathscr{F}_{{{\mathbb T}\times{\R^n}}}\nb{G}}.
\end{align*}
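These formulas can be verified directly on the Fourier side: the divergence equation corresponds to the identity $i\xi\cdot\frac{-i\xi}{\snorm{\xi}^2}=1$, while the momentum equation follows from
\begin{align*}
\bp{ik+\snorm{\xi}^2}\,\frac{-i\xi}{\snorm{\xi}^2} + i\xi\,\frac{ik+\snorm{\xi}^2}{\snorm{\xi}^2} = 0.
\end{align*}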
From these formulas, we immediately obtain the estimate
\begin{align*}
\norm{w}_{\WSR{1}{q}\np{{\mathbb T};\LR{q}({\R^n})}\cap\LR{q}\np{{\mathbb T};\WSR{2}{q}({\R^n})}}
&+ \norm{\nabla\pi}_{\LR{q}\np{{\mathbb T}; \LR{q}(\mathbb{R}^n)}} \\
&\qquad\qquad \leq \Ccn{C} \norm{G}_{\LR{q}\np{{\mathbb T};\WSR{1}{q}\np{{\R^n}}}\cap\WSR{1}{q}\np{{\mathbb T};\WSRD{-1}{q}\np{{\R^n}}}}.
\end{align*}
By the symmetry of $G$, the vector field $\widetilde{w}$ obtained by odd reflection with respect to $x_n$ of the $n$-th component of $w$, that is,
\begin{align*}
\widetilde{w}\np{t,x',x_n}\coloneqq \bp{w_1\np{t,x',x_n},\ldots,w_{n-1}\np{t,x',x_n},-w_n\np{t,x',-x_n}},
\end{align*}
is also a solution to \eqref{PurelyOscProblem_HomoDataThm_StokesRnReduction} corresponding to the same pressure term $\pi$.
This means that $w$ and $\widetilde{w}$ both solve the same time-periodic heat equation in the whole-space ${{\mathbb T}\times{\R^n}}$. By
\cite[Theorem 2.1]{KyedSauer_Heat}, $w=\widetilde{w}$. It follows that
$\trace_{{\mathbb T}\times\wholespace}\nb{w_n}=0$. Consequently, $H\coloneqq h-\trace_{{\mathbb T}\times\wholespace}\nb{w}$ belongs
to the space \eqref{PurelyOscProblem_HomoBrdData_DataReg} (see for example \cite{KyedSauer_ADN1} for a rigorous definition of the trace operator in this setting).
Let $(U,\mathfrak{P})$ be the corresponding solution from Proposition \ref{PurelyOscProblem_HomoBrdData}. It follows that
$\np{u,p}\coloneqq\np{U+w+v,\mathfrak{P}+\pi}$ is a solution
to \eqref{SH} in the class \eqref{PurelyOscProblem_HomoDataThm_SolReg} satisfying \eqref{PurelyOscProblem_HomoDataThm_ProjComplEst}.
It remains to show uniqueness, which follows from a standard duality argument. To this end, assume that $(\tilde{u},\tilde{p})$ is a solution
in the class \eqref{PurelyOscProblem_HomoDataThm_SolReg} to the homogeneous Stokes problem
\begin{align*}
\begin{pdeq}
\partial_t\tilde{u} - \Delta\tilde{u} + \nabla \tilde{p} &= 0 && \text{in }{\mathbb T}\times\R^n_+, \\
\Div\tilde{u} &= 0 && \text{in }{\mathbb T}\times\R^n_+, \\
\tilde{u} &= 0 && \text{on }{\mathbb T}\times\partial\R^n_+.
\end{pdeq}
\end{align*}
Let $\phi\in\CR \infty_0\np{{\mathbb T}\times{\halfspace}}^n$. With exactly the same arguments as above, one can establish existence of a solution
\begin{align*}
&\psi\in\calp_\bot\WSR{1}{q'}\bp{{\mathbb T};\LR{q'}({\halfspace})}^n\cap\calp_\bot\LR{q'}\bp{{\mathbb T};\WSR{2}{q'}({\halfspace})}^n,\\
&\eta\in\calp_\bot\LR{q'}\bp{{\mathbb T};\WSRD{1}{q'}({\halfspace})},
\end{align*}
to the adjoint Stokes problem
\begin{align*}
\begin{pdeq}
\partial_t\psi + \Delta\psi + \nabla \eta &= \phi && \text{in }{\mathbb T}\times\R^n_+, \\
\Div\psi &= 0 && \text{in }{\mathbb T}\times\R^n_+, \\
\psi &= 0 && \text{on }{\mathbb T}\times\partial\R^n_+,
\end{pdeq}
\end{align*}
where $q'$ denotes the H\"older conjugate of $q$.
Integration by parts yields
\begin{align*}
\int_{\mathbb T}\int_{\halfspace} \tilde{u}\cdot\phi\,{\mathrm d}x{\mathrm d}t =
\int_{\mathbb T}\int_{\halfspace} \tilde{u}\cdot\bp{\partial_t\psi + \Delta\psi + \nabla \eta}\,{\mathrm d}x{\mathrm d}t= 0.
\end{align*}
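Spelled out, the integration by parts relies on the time periodicity (which eliminates all boundary terms in time), on the homogeneous boundary conditions satisfied by $\tilde{u}$ and $\psi$, and on $\Div\tilde{u}=\Div\psi=0$:
\begin{align*}
\int_{\mathbb T}\int_{\halfspace} \tilde{u}\cdot\bp{\partial_t\psi + \Delta\psi + \nabla \eta}\,{\mathrm d}x{\mathrm d}t
&= \int_{\mathbb T}\int_{\halfspace} \bp{-\partial_t\tilde{u} + \Delta\tilde{u}}\cdot\psi\,{\mathrm d}x{\mathrm d}t\\
&= \int_{\mathbb T}\int_{\halfspace} \nabla\tilde{p}\cdot\psi\,{\mathrm d}x{\mathrm d}t
= -\int_{\mathbb T}\int_{\halfspace} \tilde{p}\,\Div\psi\,{\mathrm d}x{\mathrm d}t = 0.
\end{align*}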
Since this identity holds for all $\phi\in\CR \infty_0\np{{\mathbb T}\times{\halfspace}}^n$, it follows that $\tilde{u}=0$. In turn, we deduce $\nabla\tilde{p}=0$, whence
$\tilde{p}\in\calp_\bot\LR{q}\np{{\mathbb T}}$, that is, $\tilde{p}$ depends only on time.
\end{proof}
\begin{proof}[Proof of Theorem \ref{MainThm}]
Let $f,g,h$ be vector fields in the class \eqref{MainThm_Data}, with $h$ satisfying \eqref{MainThm_DataCompCond}.
By \cite[Theorem IV.3.2]{Galdi}, the steady-state Stokes problem \eqref{SHGS}
admits a solution $\np{v,\Pi}\in\WSRD{2}{q}\np{{\halfspace}}^n\times\WSRD{1}{q}\np{{\halfspace}}$ that satisfies
\begin{multline*}
\norm{\nabla^2v}_{\LR{q}({\halfspace})} + \norm{\nabla\Pi}_{\LR{q}({\halfspace})}\leq \Ccn{C}
\bp{\norm{\calp f}_{\LR{q}({\halfspace})} + \norm{\calp g}_{\WSR{1}{q}({\halfspace})}+\norm{\calp h}_{\WSR{2-\frac{1}{q}}{q}(\wholespace)}}.
\end{multline*}
By Theorem \ref{PurelyOscProblem_HomoDataThm}, the purely oscillatory Stokes problem \eqref{SHGP} admits a solution
$(w,\pi)$ in the class \eqref{PurelyOscProblem_HomoDataThm_SolReg} satisfying \eqref{PurelyOscProblem_HomoDataThm_ProjComplEst}.
Putting $(u,p)\coloneqq(v+w,\Pi+\pi)$, we obtain a solution to \eqref{SH} that satisfies \eqref{MainThm_ProjEst} and
\eqref{MainThm_ProjComplEst}. Finally,
if $(\tilde{u},\tilde{p})$ is another solution to \eqref{SH} in the class \eqref{MainThm_SolReg}, then
$\calp_\bot u=\calp_\bot\tilde{u}$ by Theorem \ref{PurelyOscProblem_HomoDataThm}, and $\calp u=\calp\tilde{u}+(a_1x_n,\ldots,a_{n-1}x_{n},0)$ for some vector $a\in\mathbb{R}^{n-1}$ by \cite[Theorem IV.3.2]{Galdi}. It follows that $\nabla p=\nabla\tilde{p}$, and thus $p=\tilde{p} + d(t)$
for some function $d$ that depends only on time.
\end{proof}
\bibliographystyle{plainurl}
\section{Introduction}
Metastability is a dynamical phenomenon observed in many different contexts, such as physics, chemistry, biology, climatology, and economics.
Despite the variety of scientific areas, the common feature of all these situations is the existence of multiple,
well-separated \emph{time scales}. On short time scales the system is in a
quasi-equilibrium within a single region, while on long time scales it undergoes
rapid transitions between quasi-equilibria in different regions. A rigorous description of metastability in the setting of stochastic dynamics is relatively recent, dating back to
the pioneering paper \cite{CGOV}, and has experienced substantial progress in the last decades. See \cite{BL,Bo, BdH,OV} for reviews and for a list of the most important papers on this subject.
One of the big challenges in the rigorous study of metastability is understanding the dependence of the metastable behaviour and of the nucleation process of the stable phase
on the dynamics. The nucleation process of the critical droplet, i.e. the configuration triggering the crossover, has indeed been studied in different dynamical regimes: serial (\cite{BM, CO}) vs.
parallel dynamics (\cite{BCLS,CN,CNS01}); non-conservative (\cite{BM, CO}) vs. conservative dynamics (\cite{HNT,HNT1,HOS}); finite (\cite{BHN}) vs. infinite volumes (\cite{BHS}); competition (\cite{CNS02,CNS03,I,SLF}) vs. non-competition of metastable phases (\cite{CN2013,CNS2015}). All previous studies assumed that the microscopic interaction is of short-range type.
The present paper pushes further this investigation, studying the dependence of the metastability scenario on the \emph{range} of the interaction of the model. Long-range Ising models in low dimensions are known to behave like higher-dimensional short-range models. For instance in \cite{Dys, Cas} (and later generalized by \cite{Picco, WBruno}) it was shown that long-range Ising models undergo a phase transition already in one dimension, and this transition persists in fast enough decaying fields. Furthermore, Dobrushin interfaces are rigid already in two dimensions for anisotropic long-range Ising models, see \cite{Loren}.
We consider the question: does a \emph{long-range} interaction indeed substantially change the nucleation process? Are we able to define in this framework a critical configuration triggering the crossover towards the stable phase?
In \cite{MCAW} the author already considered the \emph{Dyson-like} long-range models, i.e. the one-dimensional lattice model of Ising spins with interaction decaying with a power $\alpha$, in an external magnetic field. Despite the long-range potential, the author showed, by \emph{instanton} arguments, that the system has a finite-sized critical droplet.
In this manuscript we make this claim rigorous for a general long-range interaction, showing as well that the long-range interaction completely changes the metastability scenario: in the short-range one-dimensional Ising model a droplet of size one already nucleates the stable phase.
We show instead that for a given external field $h$ and pair long-range potential $J(n)$ we can define a nucleation droplet which gets larger for smaller $h$. For finite-range interactions in $d=1$, inserting a minus interval of size $\ell$ in the plus phase costs a finite energy, uniform in the length of the interval; almost the same is true for a fast-decaying interaction, since the energy cost of an interval is uniformly bounded. Thus, for low temperature, there is a diverging time scale, and we will speak in this case (perhaps by abuse of terminology) of metastability.
The spatial scale of a nucleating interval, however, defined as an interval which lowers its energy when growing, is finite for finite range interactions, but diverges as $h\to0$ for infinite range.
The Dyson model has energy and spatial scale of nucleating droplet diverging as $h$ goes to zero.
We will show that, depending on the value of $h$, the critical droplet can be \emph{macroscopic} or \emph{mesoscopic}.
Roughly speaking, an interval of minuses of length $\ell$ which grows to $\ell+1$ gains energy $2h$, but loses $E_\ell= \sum_{ n=\ell }^{\infty} J(n)$. $E_\ell$ converges to zero as $\ell\rightarrow \infty$, but the smaller $h$ is, the larger the size of the critical droplet. Moreover, by taking $h$ volume-dependent, going to zero with $N$ as $ N^{-\delta}$, one can make the nucleation interval mesoscopic (e.g. $O(N^\delta)$, with
$\delta\in(0,1)$) or macroscopic (i.e. $O(N)$).
The paper is organised as follows. In Section~2 we describe the lattice model and give the main definitions; in Section~3 the main results of the paper are stated, while in Sections~4 and 5 the proofs of the model-dependent results are given.
\section{The model and main definitions}
Let $\Lambda$ be a finite interval of $\mathbb{Z}$, and let us denote by $h$ a positive external field. Given a configuration
$\sigma$ in $\Omega_{\Lambda} = \{-1,1\}^{\Lambda}$, we define the \textit{Hamiltonian} with respect to free boundary condition by
\begin{equation}\label{ham1}
H_{\Lambda, h}(\sigma) = -\sum_{\{i,j\} \subseteq \Lambda} J(|i-j|)\sigma_{i}\sigma_{j} - \sum_{i \in \Lambda}h \sigma_{i},
\end{equation}
where $J: \mathbb{N} \rightarrow \mathbb{R}$, the \textit{pair interaction}, is assumed to be positive and decreasing. The class of interactions that we want to include in the present analysis are of \textit{long-range type}, for instance,
\begin{enumerate}
\item exponential decay: $J(|i-j|)= J \cdot \lambda^{-|i-j|}$ with constants $J>0$ and $\lambda >1$;
\item polynomial decay: $J(|i-j|)= J \cdot |i-j|^{-\alpha}$, where $\alpha>0$ is a parameter.
\end{enumerate}
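For concreteness, the Hamiltonian in equation (\ref{ham1}) is straightforward to evaluate numerically. The following Python sketch is purely illustrative (the function name is our own and plays no role in the analysis); it computes $H_{\Lambda,h}(\sigma)$ for an arbitrary pair interaction $J$:
\begin{verbatim}
def hamiltonian(sigma, J, h):
    """H_{Lambda,h}(sigma) of eq. (ham1) with free boundary conditions;
    sigma is a sequence of +/-1 spins and J(n) the pair interaction."""
    N = len(sigma)
    pair_term = -sum(J(j - i) * sigma[i] * sigma[j]
                     for i in range(N) for j in range(i + 1, N))
    return pair_term - h * sum(sigma)

# Example: polynomial decay J(n) = n^{-3/2}; the droplet L^{(2)} for N = 5
print(hamiltonian([1, 1, -1, -1, -1], lambda n: n ** -1.5, 0.1))
\end{verbatim}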
The {\it finite-volume Gibbs measure} will be denoted by
\begin{equation}
\label{e:gibbs}
\mu_{\Lambda}(\sigma) = \frac{1}{Z_{\Lambda}} \exp \left(-\beta H_{\Lambda, h}(\sigma)\right ),
\end{equation}
where $\beta>0$ is proportional to the inverse temperature and $Z_{\Lambda}$ is a normalizing constant. The set of \emph{ground states} $\mathscr{X}^{s}$ is defined as
$\mathscr{X}^{s}: =\textnormal{argmin}_{\sigma\in \Omega_\Lambda} H_{\Lambda, h}(\sigma)$. Note that for the class of interactions considered
$\mathscr{X}^{s} = \{\mathbf{+1}\}$, where $\mathbf{+1}$ {stands for} the configuration with all spins equal to $+1$.
\\
Given an integer $k \in \{0, \dots, \#\Lambda\}$, we consider $\mathcal{M}_k:=\{\sigma\in \Omega_{\Lambda}: \# \{i: \sigma_{i} = 1\} = k\}$ consisting of configurations in $\Omega_{\Lambda}$ with $k$
positive spins, and we define the configurations $L^{(k)}$ and $R^{(k)}$ as follows. Let
\begin{equation}
L_{i}^{(k)} =
\begin{cases}
+1 & \text{if $1 \leq i \leq k $, and}\\
-1 & \text{otherwise,}
\end{cases}
\end{equation}
and
\begin{equation}
R_{i}^{(k)} =
\begin{cases}
-1 & \text{if $1 \leq i \leq \# \Lambda-k$, and}\\
+1 & \text{otherwise,}
\end{cases}
\end{equation}
i.e., the configurations with $k$ positive spins on the {\it left} side of the interval and on the {\it right} one, respectively. We will show that $L^{(k)}$ and $R^{(k)}$ are the {minimizers} of the energy
function $H_{\Lambda,h}$ on $\mathcal{M}_k$ (see {Proposition}~\ref{csgo2}). Let us denote {by $\mathcal{P}^{(k)}$ the set $\mathcal{P}^{(k)}:= \{L^{(k)}, R^{(k)}\}$ consisting} of the {minimizers} of the energy on $\mathcal{M}_k$. With abuse of notation we will indicate with
$H_{\Lambda,h}(\mathcal{P}^{(k)})$ the energy of the elements of the set, {that is, $H_{\Lambda,h}(\mathcal{P}^{(k)}):=H_{\Lambda,h}({L}^{(k)})=H_{\Lambda, h}({R}^{(k)})$.}
We {choose} the evolution of the system {to be described} by a discrete-time Markov chain {$X=(X(t))_{t \geq 0}$}, in particular, we consider the discrete-time serial Glauber dynamics given by the Metropolis weights, i.e.,
{the transition matrix of such dynamics is given by}
$$
p(\sigma,\eta):= c(\sigma,\eta)e^{-\beta[H_{\Lambda,h}(\eta)-H_{\Lambda,h}(\sigma)]_+},
$$
where $[\cdot]_+$ denotes the positive part, and $c(\cdot,\cdot)$ is its connectivity matrix {that is equal to
1/|\Lambda|$ in case the two configurations $\sigma$ and $\eta$ coincide up to the value of a single spin, and zero otherwise.} Notice that this dynamics is reversible with respect to the Gibbs measure defined in
(\ref{e:gibbs}). Let us define the \emph{hitting time} {$\tau_{\eta}^{\sigma}$} of a configuration $\eta$ of the chain $X$ started at $\sigma$ as
\begin{equation}
\label{e:hit}
{\tau_{\eta}^{\sigma}:=\inf\{t>0: X(t)=\eta\}.}
\end{equation}
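To make the dynamics concrete, the following Python sketch (again purely illustrative, with helper names of our own choosing; it is not an optimised simulator) performs one step of the chain $X$ and estimates the hitting time (\ref{e:hit}) of $\mathbf{+1}$ started from $\mathbf{-1}$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def metropolis_step(sigma, J, h, beta):
    """One step of the serial Glauber dynamics with Metropolis weights:
    pick a site uniformly (connectivity 1/|Lambda|) and flip it with
    probability exp(-beta * [Delta H]_+)."""
    N = len(sigma)
    k = int(rng.integers(N))
    # energy cost of flipping spin k: 2*sigma_k*(sum_j J(|k-j|)*sigma_j + h)
    local_field = sum(J(abs(k - j)) * sigma[j] for j in range(N) if j != k)
    dH = 2 * sigma[k] * (local_field + h)
    if rng.random() < np.exp(-beta * max(dH, 0.0)):
        sigma[k] *= -1
    return sigma

def hitting_time(N, J, h, beta, tmax=10**6):
    """Empirical hitting time of +1 started from -1, cf. eq. (e:hit),
    capped at tmax steps."""
    sigma = [-1] * N
    for t in range(1, tmax + 1):
        sigma = metropolis_step(sigma, J, h, beta)
        if all(s == 1 for s in sigma):
            return t
    return None
\end{verbatim}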
For any positive integer $n$, {a sequence $\gamma = (\sigma^{(1)}, \dots, \sigma^{(n)})$
such that $\sigma^{(i)}\in \Omega_\Lambda $ and $c(\sigma^{(i)},\sigma^{(i+1)})>0$} for all $i=1,\dots,n-1$
is called a \emph{path} joining {$\sigma^{(1)}$ to $\sigma^{(n)}$};
we also say that $n$ is the length of the path.
For any path {$\gamma$} of length $n$, we let
\begin{equation}
\label{height}
{\Phi_\gamma :=\max_{i=1,\dots,n} H_{\Lambda, h}(\sigma^{(i)})}
\end{equation}
be the \emph{height} of the path. {We also define
the \emph{communication height}
between $\sigma$ and $\eta$ by
\begin{equation}
\label{communication}
\Phi(\sigma,\eta)
:=
\min_{\gamma\in\Omega(\sigma,\eta)}
\Phi_\gamma,
\end{equation}
where the minimum is restricted to the set $\Omega(\sigma,\eta)$ of all paths joining $\sigma$ to $\eta$.}
By reversibility, it easily follows that
\begin{equation}
\label{rev02}
\Phi(\sigma,\eta)=\Phi(\eta,\sigma)
\end{equation}
for all $\sigma,\eta\in \Omega_\Lambda$.
{We extend the previous definition for sets $\mathcal{A},\mathcal{B}\subseteq \Omega_\Lambda$ by letting}
\begin{equation}
\label{communication-set}
\Phi(\mathcal{A},\mathcal{B})
:=
{\min_{\gamma\in\Omega(\mathcal{A},\mathcal{B})}\Phi_\gamma}
=
\min_{\sigma\in \mathcal{A},\eta\in \mathcal{B}}\Phi(\sigma,\eta),
\end{equation}
where {$\Omega(\mathcal{A},\mathcal{B})$ denotes} the set of paths joining
a state in $\mathcal{A}$ to a state in $\mathcal{B}$.
The {\it communication cost} of passing from
$\sigma$ to $\eta$ is given by the quantity $\Phi(\sigma,\eta)-H_{\Lambda, h}(\sigma)$.
{Moreover, if we define $\mathscr{I}_\sigma$ as the set of all states $\eta$ in $\Omega_\Lambda$
such that $H_{\Lambda, h}(\eta)< H_{\Lambda, h}(\sigma)$, then the
\emph{stability level} of any $\sigma\in \Omega_\Lambda \setminus \mathscr{X}^{s}$ is given by
\begin{equation}
\label{stability}
V_\sigma:=\Phi(\sigma,\mathscr{I}_\sigma)-H_{\Lambda, h}(\sigma)
\ge 0.
\end{equation}
}
Following \cite{MNOS}, we now introduce the notion of
\emph{maximal stability level}.
{Assuming that} $\Omega_\Lambda\setminus \mathscr{X}^{s}\neq\emptyset$, we let
the \emph{maximal stability level} be
\begin{equation}
\label{gamma}
\Gamma_\textnormal{m}:=\sup_{\sigma\in \Omega_\Lambda\setminus \mathscr{X}^{s}}V_\sigma.
\end{equation}
We give the following definition.
\begin{definition}
\label{def1}
We call metastable set $\mathscr{X}^{m}$, the set
\begin{equation}
\label{metastabile}
\mathscr{X}^{m}
:=
\{\sigma\in \Omega_\Lambda\setminus \mathscr{X}^{s}:\,V_\sigma=\Gamma_{\textnormal{m}}\}.
\end{equation}
\end{definition}
Following \cite{MNOS}, we shall call
$\mathscr{X}^{m}$ the set of \emph{metastable} states of the
system {and refer to each of its elements as \emph{metastable}.}
We denote {by} $\Gamma$ the quantity
\begin{equation}
\label{e:gamma}
{\Gamma:=\max_{k=0, \dots, \#\Lambda} H_{\Lambda,h}(\mathcal{P}^{(k)})- H_{\Lambda,h}(\mathbf{-1}).}
\end{equation}
We will show in Corollary~\ref{t:meta} that {under certain assumptions} $\Gamma=\Gamma_\textnormal{m}$.
\section{Main Results}
\subsection{Mean exit time}
In this section we will study the first hitting time of the configuration $\mathbf{+1}$ when the system is prepared in $\mathbf{-1}$, in the limit $\beta\to \infty$.
We will restrict our analysis to the case given by the following condition.
\begin{condition}
\label{c:condition}
{Let $N$ be an integer such that $N \geq 2$. We consider $\Lambda = \{1, \dots, N\}$ and $h$ such that
\begin{equation}\label{field}
0 < h < \sum_{n = 1}^{N-1}J(n).
\end{equation} }
\end{condition}
By using the general theory developed in \cite{MNOS}, we first need to solve two \emph{model-dependent} problems: the calculation of the \emph{minimax}
between {$\mathbf{-1}$} and $\mathbf{+1}$ ({item} \ref{minmax1} of Theorem~\ref{t:minmax})
and the proof of a
\emph{recurrence} property in the energy landscape ({item} \ref{minmax2} of Theorem~\ref{t:minmax}).
\begin{teo}
\label{t:minmax}
Assume that Condition~\ref{c:condition} is satisfied. {Then, we have}
\begin{enumerate}
\item $\Phi(-\mathbf{1},\mathbf{+1})=\Gamma+H_{\Lambda,h}(\mathbf{-1})$, \label{minmax1}
\item {$V_{\mathbf{-1}}= \Gamma > 0$}, and \label{minmax3}
\item $V_\sigma<\Gamma$ for any {$\sigma\in \Omega_\Lambda\setminus \{\mathbf{-1}, \mathbf{+1}\}$.\label{minmax2}}
\end{enumerate}
\end{teo}
As a corollary we have that $\mathbf{-1}$ is the only metastable state for this model.
\begin{cor}
\label{t:meta}
Assume that Condition~\ref{c:condition} is satisfied. {It follows that
\begin{equation}
\Gamma = \Gamma_\textnormal{m},
\end{equation}
and
\begin{equation}
\mathscr{X}^{m}=\{-\mathbf{1}\}.
\end{equation}
}
\end{cor}
Therefore, the asymptotics of the exit time for the system
started at the metastable state {is given by the following theorem.}
\begin{teo}
\label{t:meantime}
Assume that Condition~\ref{c:condition} is satisfied. {It follows that}
\begin{enumerate}
\item for any $\epsilon>0$
$$
{\lim_{\beta\to\infty} \mathbb{P}\left(e^{\beta(\Gamma-\epsilon)}<\tau_{\mathbf{+1}}^{\mathbf{-1}}<e^{\beta(\Gamma+\epsilon)}\right)=1,}
$$
\item the limit
$$
{\lim_{\beta\to\infty}\frac{1}{\beta}\log\left(\mathbb{E}\left(\tau_{\mathbf{+1}}^{\mathbf{-1}}\right)\right)=\Gamma}
$$
holds.
\end{enumerate}
\end{teo}
Once the model-dependent results in Theorem~\ref{t:minmax} have been proven, the proof of Theorem~\ref{t:meantime} easily follows from the general
theory presented in \cite{MNOS}: {item} 1 follows from Theorem~4.1 in \cite{MNOS} and {item} 2 from Theorem~4.9 in \cite{MNOS}.
\subsection{Nucleation of the metastable phase}
We are going to show that for small enough external magnetic field, the size of the critical droplet is a macroscopic fraction of the system, while for $h$ sufficiently large, the critical configuration will be a {mesoscopic} fraction of the system.
Let us define
$L := \left\lfloor \frac{N}{2} \right\rfloor$, and {let $h_{k}^{(N)}$ be}
\begin{equation}
{h_{k}^{(N)} := \sum_{n=1}^{N-k-1} J(n) - \sum_{n=1}^{k} J(n)}
\end{equation}
for each $k = 0,\dots, L-1$. {One can easily verify that
\begin{equation}
0 < h_{L-1}^{(N)} < \dots < h_{1}^{(N)} < h_{0}^{(N)} = \sum_{n = 1}^{N-1}J(n).
\end{equation}
}
\begin{prop}\label{critdrop}
{Under the assumption that Condition~\ref{c:condition} is satisfied, the following statements hold.
\begin{enumerate}
\item If $h < h_{L-1}^{(N)}$, then
\[H_{\Lambda,h}(\mathcal{P}^{(L)}) > \max_{\substack{0 \leq k \leq N \\ k \neq L}} H_{\Lambda,h}(\mathcal{P}^{(k)}).\]
\item If $h_{k}^{(N)} < h < h_{k-1}^{(N)}$ for some $k \in \{1,\dots, L-1\}$, then
\[H_{\Lambda,h}(\mathcal{P}^{(k)}) > \max_{\substack{0 \leq i \leq N \\ i \neq k}} H_{\Lambda,h}(\mathcal{P}^{(i)}).\]
\item If $h = h_{k}^{(N)}$ for some $k \in \{1,\dots, L-1\}$, then
\[H_{\Lambda,h}(\mathcal{P}^{(k)}) = H_{\Lambda,h}(\mathcal{P}^{(k+1)}) > \max_{\substack{0 \leq i \leq N \\ i \neq k,\, i \neq k+1}} H_{\Lambda,h}(\mathcal{P}^{(i)}).\]
\end{enumerate}
}
\end{prop}
The first point of Proposition~\ref{critdrop} describes the less interesting and, in a way, artificial situation of very low external magnetic fields:
in this regime the \emph{bulk} term is negligible, so that the energy of the droplet increases until the positive spins are the majority
(i.e. {$k=L$}, see Figure~\ref{fig:test2}). Therefore, {the second point} contains the most interesting situation, where there is an interplay between the
bulk and the \emph{surface} term.
The following corollary is a consequence of Proposition~\ref{critdrop} for $N$ large enough and gives a characterisation of the size $k_c$ of the critical droplet.
\begin{cor}\label{corcrit}
{If we assume that $\sum_{n = 1}^{\infty}J(n)$ converges and
\begin{equation}
0 < h < \sum_{n = 1}^{\infty}J(n),
\end{equation}
then, the size of the critical droplet will be given by
\begin{equation}\label{crit}
k_{c} = \min\left\{k \in \mathbb{N}: \sum_{n = k+1}^{\infty}J(n) \leq h\right\}
\end{equation}
whenever $N$ is sufficiently large.}
\end{cor}
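Numerically, the characterisation (\ref{crit}) amounts to a simple search. The sketch below is a hypothetical helper in which the infinite tail is truncated at a large cutoff, so its output is trustworthy only when the neglected tail is much smaller than $h$:
\begin{verbatim}
def critical_droplet_size(J, h, cutoff=10**7):
    """k_c = min{k : sum_{n=k+1}^infty J(n) <= h}, cf. eq. (crit)."""
    tail = sum(J(n) for n in range(1, cutoff + 1))
    k = 0
    while tail > h:
        k += 1
        tail -= J(k)   # tail is now (approximately) sum_{n=k+1}^infty J(n)
    return k

print(critical_droplet_size(lambda n: n ** -1.5, 0.21))   # ~ 91
\end{verbatim}
For $J(n)=n^{-3/2}$ and $h=0.21$ this gives $k_c\approx 91$ (the truncation leaves the last unit uncertain), consistent with Figure~\ref{fig:test1} below.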
As a consequence {of Corollary \ref{corcrit}}, the set of \emph{critical configurations} $\mathcal{P}_c$ is given by {
\begin{equation}
\label{critical_conf}
\mathcal{P}_c:= \{L^{(k_c)}, R^{(k_c)}\}
\end{equation}
for $N$ large enough.} {The following result shows the reason why configurations in $\mathcal{P}_c$ are referred to as
\emph{critical} configurations: they indeed trigger the transition towards the stable phase.}
\begin{lemma}
\label{nucleation_1}
Under the conditions stated above, we have
\begin{enumerate}
\item any path {$\gamma\in \Omega(\mathbf{-1},\mathbf{+1})$ such that $\Phi_\gamma- H_{\Lambda,h}(-\mathbf{1})=\Gamma$} visits $\mathcal{P}_c$, and
\item {the limit}
{
$$
\lim_{\beta\to\infty} \mathbb{P}(\tau_{\mathcal{P}_c}^{-\mathbf{1}}<\tau_{+\mathbf{1}}^{-\mathbf{1}})=1
$$
holds.}
\end{enumerate}
\end{lemma}
The proof of the previous lemma is a straightforward consequence of Theorem~5.4 in \cite{MNOS}.
\subsection{Examples}
Let us give two interesting examples of the general theory so far developed.
\subsubsection{Example 1: exponentially decaying coupling}
We consider
{$$J(n) = \frac{J}{\lambda^{n-1}},$$}
where $J$ and $\lambda$ are positive real numbers with $\lambda > 1$.
\begin{prop}
\label{expo}
{Under the same hypotheses as Corollary~\ref{corcrit}, we have that the critical droplet length $k_c$ is equal to
\begin{equation}
k_{c} = \left\lceil \log_{\lambda}\left(\frac{J}{h(1-\lambda^{-1})}\right) \right\rceil
\end{equation}
whenever N is sufficiently large.}
\end{prop}
\begin{proof}
{By Corollary~\ref{corcrit}, we have
\[ J \sum_{n = k_{c}+1}^{\infty}\lambda^{-(n-1)} \leq h < J \sum_{n = k_{c}}^{\infty}\lambda^{-(n-1)}\]
which implies
\[\frac{\lambda^{-k_{c}}}{1-\lambda^{-1}} \leq \frac{h}{J} < \frac{\lambda^{-(k_{c}-1)}}{1-\lambda^{-1}}.\]
Thus,
\begin{equation}
k_{c} - 1 < - \frac{\log\left(\frac{h(1-\lambda^{-1})}{J}\right)}{\log \lambda} \leq k_{c}.
\end{equation}
}
\end{proof}
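Since the geometric tail $\sum_{n=k+1}^{\infty}J\lambda^{-(n-1)} = J\lambda^{-k}/(1-\lambda^{-1})$ is available in closed form, the value of $k_c$ in Proposition~\ref{expo} can be computed exactly; a minimal sketch (ours, for illustration only):
\begin{verbatim}
import math

def k_c_exponential(J, lam, h):
    """Critical length of Proposition (expo); the geometric tail is
    exact, so no truncation is involved."""
    return math.ceil(math.log(J / (h * (1 - 1 / lam)), lam))

# J = 1, lam = 2, h = 0.21 (the parameters of Figure expo22):
print(k_c_exponential(1.0, 2.0, 0.21))
# -> 4, since tail(3) = 0.25 > h while tail(4) = 0.125 <= h
\end{verbatim}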
{As a remark we notice that in the case of exponential decay of the interaction, the system behaves essentially as the nearest-neighbour one-dimensional
Ising model. Note that
\begin{equation}
\lim_{\lambda \to \infty} J(n) =
\begin{cases}
J &\text{if $n =1$, and}\\
0 &\text{otherwise;}
\end{cases}
\end{equation}
moreover, if $h < J = \lim_{\lambda \to \infty} \sum_{n =1}^{\infty}J(n)$, then $k_{c} = 1$ whenever $\lambda$ is large enough. So,
we conclude that typically a single plus spin in the lattice will trigger the nucleation of the stable phase.
As one can see in Figure~\ref{expo22}, the energy excitations
$H_{\Lambda,h}(\mathcal{P}^{(k)})-H_{\Lambda,h}(\mathbf{-1})$ are strictly decreasing in $k$, as expected.}
\begin{figure}
\centering
\includegraphics[height=7cm]{N1000_exponential}
\caption{Blue line is the excitation energy
$H_{\Lambda,h}(\mathcal{P}^{(k)})-H_{\Lambda,h}(\mathbf{-1})$ for $N=1000$,
$\lambda=2, h=0.21,J=1$;
red line is the critical droplet.}
\label{expo22}
\end{figure}
\subsubsection{Example 2: polynomially decaying coupling}
Let the coupling constants be given by
$$J(n) = J\cdot n^{-\alpha},$$
where $J$ and $\alpha$ are positive real numbers with $\alpha > 1$. As shown in Figures~\ref{fig:test1} and \ref{fig:test2}, for the polynomially decaying coupling model
we have that for $h$ small enough the critical droplet is essentially the half interval, while for a large enough external magnetic field the critical droplet is the configuration with $k_c$ plus spins at one of the two sides, with
$k_c\approx \left(\frac{J}{h(\alpha -1)}\right)^{\frac{1}{\alpha -1}} $.
\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{N10000}
\captionof{figure}{Blue line is the excitation energy \\
$H_{\Lambda,h}(\mathcal{P}^{(k)})-H_{\Lambda,h}(\mathbf{-1})$ for $N=10000$,\\
$\alpha=3/2, h=0.21,J=1$; {the
red line represents the \\ critical length $k_c\approx 91$.}}
\label{fig:test1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.9\linewidth]{N500}
\captionof{figure}{
Blue line is the excitation energy \\
$H_{\Lambda,h}(\mathcal{P}^{(k)})-H_{\Lambda,h}(\mathbf{-1})$ for $N=500$,\\
$\alpha=3/2, h=0.0001,J=1$; {the
red line represents the \\critical length $k_c= 250$.}}
\label{fig:test2}
\end{minipage}
\end{figure}
We can prove indeed the following proposition.
\begin{prop}\label{dyson}
{Under the same hypotheses as Corollary~\ref{corcrit}, we have that $k_c$ satisfies
\begin{equation}
\left | k_{c} - \left(\frac{J}{h(\alpha -1)}\right)^{\frac{1}{\alpha -1}} \right |< 1
\end{equation}
whenever $N$ is large enough.}
\end{prop}
\begin{proof}
{By Corollary~\ref{corcrit}}, it follows that
$$
J \sum_{n = k_{c}+1}^{\infty}n^{-\alpha} \leq h < J \sum_{n = k_{c}}^{\infty}n^{-\alpha}.
$$
Moreover, note that
$$
\int_{k_{c}+1}^{\infty}\frac{1}{x^{\alpha}} dx < \sum_{n = k_{c}+1}^{\infty}n^{-\alpha}
$$
and
\[ \sum_{n = k_{c}}^{\infty}n^{-\alpha} < \int_{k_{c}-1}^{\infty}\frac{1}{x^{\alpha}} dx \]
so that
\[\frac{(k_{c}+1)^{1-\alpha}}{\alpha - 1} < \frac{h}{J} < \frac{(k_{c}-1)^{1-\alpha}}{\alpha - 1}. \]
Hence,
\begin{equation}
(k_{c} - 1)^{\alpha - 1} < \frac{J}{h(\alpha -1)} < (k_{c} + 1)^{\alpha - 1} .
\end{equation}
\end{proof}
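The estimate above is easy to probe numerically. Reusing the \texttt{critical\_droplet\_size} sketch given after Corollary~\ref{corcrit} (so the fragment below is not self-contained), one may compare the exact $k_c$ with the explicit approximation:
\begin{verbatim}
J0, alpha, h = 1.0, 1.5, 0.21
approx = (J0 / (h * (alpha - 1))) ** (1 / (alpha - 1))       # ~ 90.7
k_c = critical_droplet_size(lambda n: J0 * n ** -alpha, h)   # ~ 91
assert abs(k_c - approx) < 1
\end{verbatim}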
\section{Proof of Theorem~\ref{t:minmax}}
We start the proof of the main theorem by giving some general results about the control of the energy of a general configuration.
First of all we note that, with the convention $J(0)=0$, equation (\ref{ham1}) can be written as
\begin{eqnarray*}
H_{\Lambda, h}(\sigma) &=& -\frac{1}{2}\sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|)\sigma_{i}\sigma_{j} - h\sum_{i \in \Lambda} \sigma_{i}\\
&=& \sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|)\left(\frac{1 - \sigma_{i}\sigma_{j}}{2}\right) - h\sum_{i \in \Lambda} \sigma_{i} - \frac{1}{2}\sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|)\\
&=& \sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|) \mathds{1}_{\{\sigma_{i} \neq \sigma_{j}\}} - h\sum_{i \in \Lambda}\sigma_{i} - \frac{1}{2}\sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|).
\end{eqnarray*}
Moreover, given an integer $k \in \{0, \dots, N\}$, if $\sigma\in\mathcal{M}_k$, then
\begin{equation}\label{main}
H_{\Lambda, h}(\sigma) = \sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|) \mathds{1}_{\{\sigma_{i} \neq \sigma_{j}\}} + h (N - 2k) - \frac{1}{2}\sum_{i \in \Lambda}\sum_{j \in \Lambda} J(|i-j|).
\end{equation}
Therefore, restricting ourselves to configurations that contain exactly $k$ spins with the value $1$, in order to find such configurations with minimal energy
it is sufficient to
minimize the first term on the right-hand side of equation (\ref{main}).
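The rewriting (\ref{main}) can be sanity-checked against (\ref{ham1}) on random configurations; the following sketch (illustrative only, with the convention $J(0)=0$ built in by skipping the diagonal terms) does so:
\begin{verbatim}
import itertools, random

def H_pairs(sigma, J, h):
    # eq. (ham1): sum over unordered pairs, free boundary conditions
    return (-sum(J(j - i) * sigma[i] * sigma[j]
                 for i, j in itertools.combinations(range(len(sigma)), 2))
            - h * sum(sigma))

def H_disagreements(sigma, J, h):
    # eq. (main), with k the number of plus spins; J(0) never enters
    N = len(sigma)
    allpairs = sum(J(abs(i - j)) for i in range(N)
                   for j in range(N) if i != j)
    mismatch = sum(J(abs(i - j)) for i in range(N) for j in range(N)
                   if i != j and sigma[i] != sigma[j])
    k = sigma.count(1)
    return mismatch + h * (N - 2 * k) - allpairs / 2

J = lambda n: n ** -1.5
for _ in range(100):
    sigma = [random.choice((-1, 1)) for _ in range(12)]
    assert abs(H_pairs(sigma, J, 0.3) - H_disagreements(sigma, J, 0.3)) < 1e-9
\end{verbatim}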
\begin{prop}\label{csgo}
Let $N$ be a positive integer and $k \in \{0, \dots, N\}$. If we restrict to $\sigma\in\mathcal{M}_k$, then
\begin{equation}\label{csgo1}
\sum_{i=1}^{N}\sum_{j=1}^{N} J(|i-j|) \mathds{1}_{\{\sigma_{i} \neq \sigma_{j}\}} \geq 2 \sum_{i=1}^{k}\sum_{j=k+1}^{N} J(|i-j|).
\end{equation}
Under this restriction, the equality in the equation above holds if and only if $\sigma = L^{(k)}$ or $\sigma = {R}^{(k)}$.
\end{prop}
\begin{proof}
Let us prove the result by induction. Let $\mathcal{H}_{N}$ be defined by
\begin{equation}
\mathcal{H}_{N}(\sigma_{1}, \dots,\sigma_{N}) = \sum_{i=1}^{N}\sum_{j=1}^{N} J(|i-j|) \mathds{1}_{\{\sigma_{i} \neq \sigma_{j}\}} = 2 \sum_{i :\,\sigma_{i} = 1}\sum_{j :\,\sigma_{j} = -1} J(|i-j|).
\end{equation}
Note that the result is trivial if $N=1$. Assuming that it holds for some $N \geq 1$, let us prove that it also holds for $N+1$.
In case $\sigma_{1} = 1$, applying our induction hypothesis and Lemma \ref{lero}, we have
\begin{eqnarray}
\mathcal{H}_{N+1}(1,\sigma_{2}, \dots,\sigma_{N+1}) &=&
2 \sum_{j = 1}^{N} J(j) \mathds{1}_{\{\sigma_{j+1} = -1\}} + \mathcal{H}_{N}(\sigma_{2}, \dots,\sigma_{N+1}) \\
\label{awp1}&\geq& 2 \sum_{j = k}^{N} J(j) + 2 \sum_{i=1}^{k-1}\sum_{j=k}^{N} J(|i-j|)\\
&=& 2 \sum_{i=1}^{k}\sum_{j=k+1}^{N+1} J(|i-j|).
\end{eqnarray}
If equality holds in equation (\ref{awp1}), it follows that
\begin{equation}
0 \leq \mathcal{H}_{N}(\sigma_{2}, \dots,\sigma_{N+1}) - 2 \sum_{i=1}^{k-1}\sum_{j=k}^{N} J(|i-j|)
= 2 \sum_{j = k}^{N} J(j) - 2 \sum_{j = 1}^{N} J(j) \mathds{1}_{\{\sigma_{j+1} = -1\}} \leq 0,
\end{equation}
hence,
\begin{equation}
\sum_{j = 1}^{k-1} J(j) - \sum_{j = 1}^{N} J(j) \mathds{1}_{\{\sigma_{j+1} = 1\}} = 0.
\end{equation}
Using Lemma \ref{lero} again, we conclude that $\sigma_{j} = 1$ whenever $1 \leq j \leq k$,
and $\sigma_{j} = -1$ whenever $k+1 \leq j \leq N+1$.
Now, in case $\sigma_{1} = -1$, we write $\mathcal{H}_{N+1}(-1,\sigma_{2}, \dots,\sigma_{N+1})$ as
\begin{eqnarray}
\mathcal{H}_{N+1}(-1,\sigma_{2}, \dots,\sigma_{N+1}) = \mathcal{H}_{N+1}(1,-\sigma_{2}, \dots,-\sigma_{N+1})
\end{eqnarray}
and apply our previous result in order to obtain
\begin{equation}
\mathcal{H}_{N+1}(-1,\sigma_{2}, \dots,\sigma_{N+1}) \geq 2 \sum_{i=1}^{N+1-k}\sum_{j=N+2-k}^{N+1} J(|i-j|) = 2 \sum_{i=1}^{k}\sum_{j=k+1}^{N+1} J(|i-j|),
\end{equation}
where the equality holds only if $\sigma_{j} = -1$ whenever $1 \leq j \leq N+1-k$,
and $\sigma_{j} = 1$ whenever $N+2-k \leq j \leq N+1$.
\end{proof}
As an immediate consequence of Proposition \ref{csgo}, the next result follows.
\begin{teo}
\label{csgo2}
Given an integer $k \in \{0, \dots, N\}$, if we restrict to $\sigma\in\mathcal{M}_k$, then
\begin{equation}
H_{\Lambda, h}(\sigma) \geq 2 \sum_{i=1}^{k}\sum_{j=k+1}^{N} J(|i-j|) + h (N - 2k) - \frac{1}{2}\sum_{i=1}^{N}\sum_{j =1}^{N} J(|i-j|).
\end{equation}
Under this restriction, the equality in the equation above holds if and only if $\sigma = R^{(k)}$ or $\sigma = {L}^{(k)}$.
\end{teo}
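Theorem~\ref{csgo2} reduces the energy landscape over the sets $\mathcal{M}_k$ to the one-dimensional profile $k\mapsto H_{\Lambda,h}(\mathcal{P}^{(k)})$, which makes $\Gamma$ and the critical size directly computable; a short sketch with helper names of our own:
\begin{verbatim}
def excitation(k, N, J, h):
    """H(P^{(k)}) - H(-1), using the closed form above; the
    configuration-independent constant cancels in the difference."""
    interface = 2 * sum(J(j - i) for i in range(1, k + 1)
                        for j in range(k + 1, N + 1))
    return interface - 2 * h * k

def gamma_and_critical_size(N, J, h):
    """Gamma of eq. (e:gamma) and the droplet size attaining it."""
    exc = [excitation(k, N, J, h) for k in range(N + 1)]
    k0 = max(range(N + 1), key=exc.__getitem__)
    return exc[k0], k0
\end{verbatim}
For instance, for $N=500$, $J(n)=n^{-3/2}$ and $h=10^{-4}$, the maximum is attained at $k_0=250$, consistent with Figure~\ref{fig:test2}.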
\subsection{Proof of Theorem \ref{t:minmax}.\ref{minmax1}(minimax)}
\begin{proof}[Proof of Theorem \ref{t:minmax}.\ref{minmax1}]
Define $f: \{0,\dots, N\} \rightarrow \mathbb{R}$ as
\begin{equation}
f(k) = H_{\Lambda, h}(\mathcal{P}^{(k)}).
\end{equation}
It follows that
\begin{eqnarray*}
\Delta f (k) &=& f(k+1) - f(k) \\
&=& 2\left( \sum_{i=1}^{k+1} \sum_{j=k+2}^{N} J(|i-j|) - \sum_{i=1}^{k} \sum_{j=k+1}^{N} J(|i-j|) - h \right) \\
&=& 2\left( \sum_{j=k+2}^{N} J(|k+1-j|) + \sum_{i=1}^{k} \sum_{j=k+2}^{N} J(|i-j|) - \sum_{i=1}^{k} \sum_{j=k+1}^{N} J(|i-j|) - h \right) \\
&=& 2\left( \sum_{j=k+2}^{N} J(|k+1-j|) - \sum_{i=1}^{k} J(|i-(k+1)|) - h \right) \\
&=& 2\left( \sum_{i=1}^{N-k-1} J(i) - \sum_{i=1}^{k} J(i) - h \right)
\end{eqnarray*}
holds for all $k$ such that $0 \leq k \leq N-1$, and
\begin{eqnarray*}
\Delta^{2} f (k) &=& \Delta f(k+1) - \Delta f(k) \\
&=& 2\left( \sum_{i=1}^{N-k-2} J(i) - \sum_{i=1}^{N-k-1} J(i) - \sum_{i=1}^{k+1} J(i) + \sum_{i=1}^{k} J(i) \right) \\
&=& -2(J(N-k-1) + J(k+1))
\end{eqnarray*}
holds whenever $0 \leq k \leq N-2$.
Note that
\begin{equation}\label{omg1}
\Delta f (0) = 2\left( \sum_{i=1}^{N-1} J(i) - h \right) > 0,
\end{equation}
$1 \leq \left\lfloor \frac{N}{2} \right\rfloor\leq N-1$, and
\begin{equation}\label{omg2}
\Delta f \left(\left\lfloor \frac{N}{2} \right\rfloor\right) < 0.
\end{equation}
It follows from $\Delta^{2} f < 0 $ and equations (\ref{omg1}) and (\ref{omg2}) that $f$ satisfies
\begin{equation}\label{gammapos}
f(0) < f(1)
\end{equation}
and
\begin{equation}
f\left(\left\lfloor \frac{N}{2} \right\rfloor\right) > \dots > f(N),
\end{equation}
therefore, $f(k_{0}) = \max_{0 \leq k \leq N} f(k)$ for some $k_{0} \in \{1,\dots, \left\lfloor \frac{N}{2} \right\rfloor\}$.
Defining the path $\gamma: \mathbf{-1} \rightarrow \mathbf{+1}$ by $\gamma = (L^{(0)}, L^{(1)}, \dots, L^{(N)})$, it is easy to see that
\begin{equation}
\Phi(\mathbf{-1}, \mathbf{+1}) = \max_{\sigma \in \gamma}H_{\Lambda,h}(\sigma) = \max_{0 \leq k \leq N}H_{\Lambda, h}(\mathcal{P}^{(k)}) =\Gamma+H_{\Lambda,h}(\mathbf{-1}).
\end{equation}
\end{proof}
\subsection{Proof of Theorem \ref{t:minmax}.\ref{minmax3} and \ref{t:minmax}.\ref{minmax2}}
Before giving the proof of the second and third points of the main theorem, we give some results about the control of the energy of a spin-flipped configuration.
Given a configuration $\sigma$ and $k \in \Lambda$, the \emph{spin-flipped} configuration $\theta_{k}\sigma$ is defined as:
\begin{equation}
(\theta_{k}\sigma)_{i} =
\begin{cases}
-\sigma_{k} &\text{if $i = k$, and}\\
\sigma_{i} &\text{otherwise.}
\end{cases}
\end{equation}
Note that the energetic cost to flip the spin at position $k$ from the configuration $\sigma$ is given by
\begin{eqnarray*}
H_{\Lambda,h}(\theta_{k}\sigma) - H_{\Lambda,h}(\sigma) &=& \sum_{\{i,j\} \subseteq \Lambda} J(|i-j|)(\sigma_{i}\sigma_{j} - (\theta_{k}\sigma)_{i}(\theta_{k}\sigma)_{j})
+ h \sum_{i \in \Lambda} (\sigma_{i} - (\theta_{k}\sigma)_{i}) \\
&=& \left(\sum_{j \in \Lambda} J(|k-j|)2\sigma_{k}\sigma_{j}
+ 2h \sigma_{k} \right) \\
&=& 2\sigma_{k}\left(\sum_{j \in \Lambda} J(|k-j|)\sigma_{j} + h \right).
\end{eqnarray*}
\begin{prop}
Under Condition~\ref{c:condition}, if a configuration $\sigma$ satisfies
\begin{equation}
\label{property}
H_{\Lambda,h}(\theta_{k}\sigma) - H_{\Lambda,h}(\sigma) \geq 0
\end{equation}
for every $k \in \{1,\dots,N\}$, then
either $\sigma = \mathbf{-1}$ or $\sigma = \mathbf{+1}$.
\end{prop}
\begin{proof}
Let $k \in \{1,\dots,N-1\}$, and let $\sigma$ be a configuration such that $\sigma_{i}= +1$ whenever $1 \leq i \leq k$ and $\sigma_{k+1} = -1$. In the following,
we show that no such $\sigma$ can satisfy property (\ref{property}). If property (\ref{property}) were satisfied, then
\begin{equation}
\begin{cases}
H_{\Lambda,h}(\theta_{k}\sigma) - H_{\Lambda,h}(\sigma) \geq 0 \\
H_{\Lambda,h}(\theta_{k+1}\sigma) - H_{\Lambda,h}(\sigma) \geq 0
\end{cases}
\end{equation}
that is,
\begin{equation}
\begin{cases}
\sum_{i=1}^{k-1}J(|k-i|) - J(1) + \sum_{i=k+2}^{N}J(|k-i|)\sigma_{i} + h \geq 0 \\
-\left(\sum_{i=1}^{k}J(|k+1-i|) + \sum_{i=k+2}^{N}J(|k+1-i|)\sigma_{i} + h \right)\geq 0.
\end{cases}
\end{equation}
Summing both equations above, we have
\begin{eqnarray*}
0 &\leq& -J(k) - J(1) + \sum_{i=k+2}^{N}(J(i-k) - J(i-k-1))\sigma_{i} \\
&\leq& -J(k) - J(1) + \sum_{i=k+2}^{N}(J(i-k-1) - J(i-k))\\
&=& -J(k) - J(1) + \sum_{i=1}^{N-k-1}(J(i) - J(i+1))\\
&=& -J(k)-J(N-k)
\end{eqnarray*}
which is a contradiction. Analogously, for every configuration $\sigma$ such that $\sigma_{i}= -1$ whenever $1 \leq i \leq k$ and $\sigma_{k+1} = 1$ for some
$k \in \{1,\dots,N-1\}$, property (\ref{property}) cannot be satisfied. Therefore, we conclude that for every $\sigma$ different from
$\mathbf{-1}$ and $\mathbf{+1}$, property (\ref{property}) does not hold.
The proof of the converse statement is straightforward.
\end{proof}
As an immediate consequence of the result above, the next result follows.
\begin{cor}\label{path}
Under Condition~\ref{c:condition}, for every configuration $\sigma$ different from
$\mathbf{-1}$ and $\mathbf{+1}$, there is a path $\gamma = (\sigma^{(1)}, \dots, \sigma^{(n)})$, where $\sigma^{(1)} = \sigma$ and $\sigma^{(n)} \in \{\mathbf{-1},\mathbf{+1}\}$,
such that $H_{\Lambda,h}(\sigma^{(i+1)}) < H_{\Lambda,h}(\sigma^{(i)})$ for all $i = 1, \dots, n-1$.
\end{cor}
We now have all the elements for proving items \ref{minmax3} and \ref{minmax2} of Theorem~\ref{t:minmax}.
\begin{proof}[Proof of Theorem \ref{t:minmax}.\ref{minmax3}]
First, note that it follows from inequality (\ref{gammapos}) that $\Gamma > 0$. Now, let us show that $V_{\mathbf{-1}}$ satisfies
\begin{equation}
V_{\mathbf{-1}} = \Phi(\mathbf{-1},\mathbf{+1}) - H_{\Lambda,h}(\mathbf{-1}).
\end{equation}
Since $\mathbf{+1}\in \mathscr{I}_{\mathbf{-1}}$, we have
\begin{equation}
\label{v1}
V_{\mathbf{-1}} \leq \Phi(\mathbf{-1},\mathbf{+1}) - H_{\Lambda,h}(\mathbf{-1}).
\end{equation}
So, we conclude the proof if we show that
\begin{equation}
\label{v2}
\Phi(\mathbf{-1},\mathbf{+1}) \leq \Phi(\mathbf{-1},\eta)
\end{equation}
holds for every $\eta\in \mathscr{I}_{\mathbf{-1}}$. Let $\gamma_{1}: \mathbf{-1} \rightarrow \eta$ be a path from $\mathbf{-1}$ to $\eta$ given by
$\gamma_{1} = (\sigma^{(1)}, \dots, \sigma^{(n)})$,
then, according to Corollary \ref{path}, there is a path
$\gamma_{2} : \eta \rightarrow \mathbf{+1}$, say $\gamma_{2} = (\eta^{(1)},\dots, \eta^{(m)})$, along which the energy decreases. Hence, the path
$\gamma: \mathbf{-1} \rightarrow \mathbf{+1}$ given by
\begin{equation}
\gamma = (\sigma^{(1)},\dots, \sigma^{(n-1)}, \eta^{(1)},\dots, \eta^{(m)})
\end{equation}
satisfies
\begin{equation}
\Phi_{\gamma} = \Phi_{\gamma_1}
\vee \Phi_{\gamma_2}
=\Phi_{\gamma_1}.
\end{equation}
Hence, the inequality
\begin{equation}
\Phi(\mathbf{-1},\mathbf{+1}) \leq \Phi_{\gamma_{1}}
\end{equation}
holds for every path $\gamma_{1}: \mathbf{-1} \rightarrow \eta$, and equation (\ref{v2}) follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t:minmax}.\ref{minmax2}]
Given $\sigma \notin \{\mathbf{-1}, \mathbf{+1}\}$, let us show now that
\begin{equation}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) < V_{\mathbf{-1}}
\end{equation}
holds for any $\eta\in \mathscr{I}_{\sigma}$. Let us consider the following cases.
\begin{enumerate}
\item Case $\eta = \mathbf{+1}$. According to Corollary~\ref{path}, there is a path $\gamma = (\sigma^{(1)}, \dots, \sigma^{(n)})$ from
$\sigma^{(1)}= \sigma$ to $\sigma^{(n)} \in \{\mathbf{-1},\mathbf{+1}\}$ along which the energy decreases.
\begin{itemize}
\item[(a)] If $\sigma^{(n)} = \mathbf{-1}$, then the path $\gamma_{0}: \sigma \rightarrow \eta$
given by $\gamma_{0} = (\sigma^{(1)}, \dots, \sigma^{(n-1)}, L^{(0)}, \dots, L^{(N)})$ satisfies
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma_{0}}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&\leq& \left(\max_{\zeta \in \gamma}H_{\Lambda,h}(\zeta)\right)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) - H_{\Lambda,h}(\sigma) \\
&=& 0
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\sigma)\right) \\
&<& \max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\mathbf{-1}) \\
&=& V_{\mathbf{-1}}.
\end{eqnarray*}
\item[(b)] Otherwise, if $\sigma^{(n)} = \mathbf{+1}$, then
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&=& 0 \\
&<& V_{\mathbf{-1}}.
\end{eqnarray*}
\end{itemize}
\item Case $\eta = \mathbf{-1}$. According to Corollary~\ref{path}, there is a path $\gamma = (\sigma^{(1)}, \dots, \sigma^{(n)})$ from
$\sigma^{(1)}= \sigma$ to $\sigma^{(n)} \in \{\mathbf{-1},\mathbf{+1}\}$ along which the energy decreases.
\begin{itemize}
\item[(a)] If $\sigma^{(n)} = \mathbf{+1}$, then the path $\gamma_{0}: \sigma \rightarrow \eta$
given by $\gamma_{0} = (\sigma^{(1)}, \dots, \sigma^{(n-1)}, L^{(N)}, \dots, L^{(0)})$ satisfies
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma_{0}}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&\leq& \left(\max_{\zeta \in \gamma}H_{\Lambda,h}(\zeta)\right)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) - H_{\Lambda,h}(\sigma) \\
&=& 0
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\sigma)\right) \\
&<& \max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\mathbf{-1}) \\
&=& V_{\mathbf{-1}}.
\end{eqnarray*}
\item[(b)] Otherwise, if $\sigma^{(n)} = \mathbf{-1}$, then
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&=& 0 \\
&<& V_{\mathbf{-1}}.
\end{eqnarray*}
\end{itemize}
\item Case $\eta \notin \{\mathbf{-1}, \mathbf{+1}\}$. Let $\gamma_{1} = (\sigma^{(1)}, \dots, \sigma^{(n)})$ and $\gamma_{2} = (\eta^{(1)}, \dots, \eta^{(m)})$ be
paths from $\sigma^{(1)} = \sigma$ to $\sigma^{(n)} \in \{\mathbf{-1},\mathbf{+1}\}$ and from $\eta^{(1)} = \eta$ to $\eta^{(m)} \in \{\mathbf{-1},\mathbf{+1}\}$, respectively, along which
the energy decreases.
\begin{itemize}
\item[(a)] If $\sigma^{(n)} = \eta^{(m)}$, define the path $\gamma_{0}: \sigma \rightarrow \eta$ given by $\gamma_{0} = (\sigma^{(1)}, \dots, \sigma^{(n-1)}, \eta^{(m)},\dots, \eta^{(1)})$
in order to obtain
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma_{0}}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&=& \left(\max_{\zeta \in \gamma_{1}}H_{\Lambda,h}(\zeta)\right)
\vee \left(\max_{\zeta \in \gamma_{2}}H_{\Lambda,h}(\zeta)\right) - H_{\Lambda,h}(\sigma) \\
&=& H_{\Lambda,h}(\sigma)
\vee H_{\Lambda,h}(\eta) - H_{\Lambda,h}(\sigma) \\
&=& 0 \\
&<& V_{\mathbf{-1}}.
\end{eqnarray*}
\item[(b)] If $\sigma^{(n)} = \mathbf{-1}$ and $\eta^{(m)} = \mathbf{+1}$, then the path $\gamma_{0}: \sigma \to \eta$ given by
\begin{equation}
\gamma_{0} = (\sigma^{(1)}, \dots, \sigma^{(n-1)}, L^{(0)}, \dots, L^{(N)}, \eta^{(m-1)}, \dots, \eta^{(1)})
\end{equation}
satisfies
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma_{0}}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&=& \left(\max_{\zeta \in \gamma_{1}}H_{\Lambda,h}(\zeta)\right)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) \vee \left(\max_{\zeta \in \gamma_{2}}H_{\Lambda,h}(\zeta)\right) - H_{\Lambda,h}(\sigma) \\
&=& H_{\Lambda,h}(\sigma)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) \vee H_{\Lambda,h}(\eta) - H_{\Lambda,h}(\sigma) \\
&=& 0
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\sigma)\right) \\
&<& \max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\mathbf{-1}) \\
&=& V_{\mathbf{-1}}.
\end{eqnarray*}
\item[(c)] If $\sigma^{(n)} = \mathbf{+1}$ and $\eta^{(m)} = \mathbf{-1}$, then the path $\gamma_{0}: \sigma \to \eta$ given by
\begin{equation}
\gamma_{0} = (\sigma^{(1)}, \dots, \sigma^{(n-1)}, L^{(N)}, \dots, L^{(0)}, \eta^{(m-1)}, \dots, \eta^{(1)})
\end{equation}
satisfies
\begin{eqnarray*}
\Phi(\sigma,\eta) - H_{\Lambda,h}(\sigma) &\leq& \max_{\zeta \in \gamma_{0}}H_{\Lambda,h}(\zeta) - H_{\Lambda,h}(\sigma) \\
&=& \left(\max_{\zeta \in \gamma_{1}}H_{\Lambda,h}(\zeta)\right)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) \vee \left(\max_{\zeta \in \gamma_{2}}H_{\Lambda,h}(\zeta)\right) - H_{\Lambda,h}(\sigma) \\
&=& H_{\Lambda,h}(\sigma)
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)})\right) \vee H_{\Lambda,h}(\eta) - H_{\Lambda,h}(\sigma) \\
&=& 0
\vee \left(\max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\sigma)\right) \\
&<& \max_{0 \leq k \leq N}H_{\Lambda,h}(L^{(k)}) - H_{\Lambda,h}(\mathbf{-1}) \\
&=& V_{\mathbf{-1}}.
\end{eqnarray*}
\end{itemize}
\end{enumerate}
We conclude that for every $\sigma \notin \{\mathbf{-1},\mathbf{+1}\}$, we have $V_{\sigma} < V_{\mathbf{-1}}$.
\end{proof}
\section{Proofs of the critical droplets results}
\begin{proof}[Proof of Proposition~\ref{critdrop}]
As in the proof of Theorem~\ref{t:minmax}, let us define $f: \{0,\dots, N\} \rightarrow \mathbb{R}$ as
\begin{equation}
f(i) = H_{\Lambda, h} (L^{(i)}),
\end{equation}
and recall that
\begin{equation}
\Delta f (i) = 2\left( \sum_{n=1}^{N-i-1} J(n) - \sum_{n=1}^{i} J(n) - h \right).
\end{equation}
In the first case, we have $\Delta f(L-1) = 2(h_{L-1}^{(N)} - h) > 0$,
thus, since $f$ decreases for all $i$ greater than $L$, and since $\Delta^2 f<0$, we conclude that $f$ attains a unique strict global maximum at $L$. In the second case,
we have $\Delta f(k-1) = 2(h_{k-1}^{(N)} - h) > 0$ and $\Delta f(k) = 2(h_{k}^{(N)} - h) < 0$,
so, $f$ attains a unique strict global maximum at $k$. Finally, in the third case, we have $\Delta f(k) = 0$,
that is, $f(k) = f(k+1)$. Using the fact that $\Delta f(k+1) < 0 < \Delta f(k-1)$, we conclude that the global maximum of $f$ can only be reached
at $k$ and $k+1$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{corcrit}]
Since $\sum_{n = 1}^{\infty}J(n)$ converges, it follows that the set in equation (\ref{crit}) is nonempty, thus $k_{c}$ is well defined. Then, we have
\begin{equation}
\sum_{n = k_{c}+1}^{\infty}J(n) \leq h < \sum_{n = k_{c}}^{\infty}J(n).
\end{equation}
For all $N$ sufficiently large such that $\left\lfloor \frac{N}{2} \right\rfloor > k_{c}$ and
\begin{equation}
\sum_{n = N - k_{c}+1}^{\infty}J(n) < \sum_{n = k_{c}}^{\infty}J(n) - h,
\end{equation}
we have
\begin{equation}
h < \sum_{n = k_{c}}^{\infty}J(n) - \sum_{n = N - k_{c}+1}^{\infty}J(n) = h_{k_{c}-1}^{(N)}
\end{equation}
and
\begin{equation}
h_{k_{c}}^{(N)} = \sum_{n = k_{c}+1}^{\infty}J(n) - \sum_{n = N - k_{c}}^{\infty}J(n) < h.
\end{equation}
Therefore, by means of Proposition \ref{critdrop}, we conclude that for $N$ large enough, $k_{c}$ satisfies
\begin{equation}
H_{\Lambda,h}(\mathcal{P}^{(k_{c})}) > \max_{\substack{0 \leq i \leq N \\ i \neq k_{c}}} H_{\Lambda,h}(\mathcal{P}^{(i)}).
\end{equation}
\end{proof}
\section{INTRODUCTION}
\label{sec:intro}
The {\it Chandra X-ray Observatory} (CXO) was launched on 23 July 1999
on the Space Shuttle {\it Columbia}. An overview of the mission and
its instruments are presented in
Weisskopf~{\em et al.}~(2000)~\cite{weisskopf2000} and an update on
the mission was provided in Weisskopf~{\em et al.}~(2012)\cite{weisskopf2012}.
The CXO carries two imaging instruments, the {\it Advanced CCD Imaging
Spectrometer} (ACIS) discussed in Garmire~{\em et al.}~(1992)~\cite{garmire92}
and Garmire~{\em et al.}~(2003)~\cite{garmire03} and the {\it High
Resolution Camera} (HRC) discussed in
Murray~{\em et al.}~(1997)~\cite{murray1997}.
In addition, the CXO
carries two gratings instruments known as the {\it High Energy
Transmission Grating} (HETG) described in
Canizares~{\em et al.}~(2000)~\cite{canizares2000} and the {\it Low
Energy Transmissions Grating} (LETG)
Brinkman~{\em et al.}~(2000)~\cite{brinkman2000}.
ACIS is the primary instrument on the CXO with a nominal bandpass of
0.3--10.0~keV, conducting over 90\% of the
observations. ACIS contains 10 CCDs arranged into two arrays. One
array, the ACIS Imaging array (ACIS-I), consists of four frontside
illuminated (FI) CCDs arranged in a $2\times2$ array, and the other
array, the ACIS Spectroscopy array (ACIS-S), consists of four FI CCDs
and two backside illuminated (BI) CCDs arranged
in a $1\times6$ array. The ACIS-I array is used primarily for imaging
spectroscopy and the ACIS-S array is used primarily as the readout
detector for the HETG and LETG, although the ACIS-S is also used for
imaging spectroscopy. The BI CCDs have higher quantum efficiency at
low energies than the FI CCDs and are therefore preferred over the FI
CCDs for some imaging observations.
In order to suppress optical to infrared photons but to transmit the
X-ray photons of interest, both ACIS arrays have an {\em Optical
Blocking Filter} (OBF) inserted in the optical path. The filters
were produced by ${\tt
Luxel^{TM}}$ and are made of polyimide with Al deposited on both
sides of the polyimide. The two filters are of slightly different
thicknesses: the ACIS-S OBF (OBF-S) is 100/200/30 nm of
Al/polyimide/Al and the ACIS-I OBF (OBF-I) is 130/200/30 nm of
Al/polyimide/Al. The OBFs sit about 12~mm in front of the CCDs facing
the mirrors on the CXO. The volume around the CCDs is effectively
isolated from the interior of the spacecraft, while the surface of
the OBFs facing the mirrors is exposed to the interior of the
spacecraft. The CCD focal plane is regulated at a temperature of
-120~C. The OBFs are positioned at the top of the ACIS Camera Body (CB)
which was regulated at -60~C early in the mission, but has been
unregulated since April 2008, fluctuating between -72~C and
-60~C. The CB was regulated at -60~C from August 2015 until July 2016
but has since been unregulated (see
Plucinsky~{\em et al.}~2016~\cite{plucinsky2016} for details).
In normal operations, the centers of the
filters are warmer by $\sim2-4$ degrees due to the radiative heat load
of the warm mirrors (+20~C) and the optical bench assembly.
It was noticed early in the mission\cite{plucinsky2003} that the low
energy sensitivity of the ACIS instrument was decreasing with time.
It was quickly determined that this loss of detection efficiency was
the result of a contamination layer building up on the surface of the
OBFs facing the spacecraft interior. The contamination layer
continues to accumulate even after 18 years on orbit. The
accumulation rate, the chemical composition, and the spatial
distribution of the contaminant have all varied with time over the
mission. The accumulation rate exhibited a steep rise at the
beginning of the mission, a flattening from 2003 until 2010, and
then another steep rise from 2010 onwards. We reported in
2016\cite{plucinsky2016} on our efforts to reduce the accumulation
rate by turning on the ACIS Detector Housing (DH) heater which regulates
the CB and hence OBF edges at -60~C. There was no
measurable effect on the accumulation rate due to the DH heater
regulating the CB at -60~C. In this paper we report that the
accumulation rate has decreased significantly starting in 2017
and we discuss our current understanding of the
time-variable accumulation rate and chemical composition.
\section{ACIS Contamination Layer}
\label{sec:contam}
\subsection{Discovery and Initial Characterization}
\label{sec:discovery}
The existence of the contamination layer was discovered in
2002\cite{plucinsky2003} as a gradual decrease in the low energy
detection efficiency of all of the CCDs. The growth of
the contamination layer was tracked by repeated observations of the
{\em external calibration source}~(ECS) which has lines of Al-K~(1.5
keV), Ti-K~(4.5~keV), and Mn-K~(5.9~keV) from an ${\rm Fe^{55}}$
radioactive source with a half-life of 2.7~yr. The ECS also produced a
line complex from Mn-L around 0.67 keV. The ratio of the Mn-L/Mn-K
count rates
on the S3 BI CCD became the most useful measure of the declining
sensitivity at low energies. Unfortunately, the observed flux from the
Mn-L line complex decreased with time due to the decay of the
radioactive source and the increasing thickness of the contamination
layer. Eventually the uncertainties on the measurements became so
large that they were no longer useful to track the growth of the
contamination layer. As
the mission progressed, we transitioned to using celestial sources to
monitor the growth of the contamination layer. We used celestial
sources that are believed to be constant (or nearly constant on human
time scales), such as clusters of galaxies and supernova remnants
(SNRs), to monitor the change in low energy detection efficiency. We
also used bright, variable sources with the HETG and LETG to constrain
the absorption as a function of energy produced by the contaminant.
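Schematically, the virtue of the Mn-L/Mn-K ratio is that both lines decay with the same 2.7~yr half-life, so the ratio cancels the source decay, while the Mn-K line at 5.9~keV is barely absorbed by the contaminant. Under these simplifying assumptions (a sketch, not the actual calibration pipeline, and with purely hypothetical numbers), the optical depth at 0.67~keV follows directly from the measured ratio:
\begin{verbatim}
import math

def tau_from_ecs_ratio(ratio_now, ratio_at_launch):
    """Optical depth at Mn-L (0.67 keV), assuming the Mn-K rate is
    unabsorbed and the rest of the response is unchanged, so that
    ratio(t) = ratio(0) * exp(-tau)."""
    return -math.log(ratio_now / ratio_at_launch)

print(tau_from_ecs_ratio(0.15, 1.0))  # ratio down to 15% -> tau ~ 1.9
\end{verbatim}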
Early efforts to determine the chemical composition of the
contaminant\cite{marshall2004} identified absorption edges of C, O,
and F that were in excess of the edges in the ACIS OBF. The ACIS OBF
has absorption edges of C and O, but no edge due to F. The ACIS
detection efficiency as a function of energy was carefully calibrated
before launch\cite{bautz1998,nousek1998} including the transmission and
absorption edges of the OBFs. The flight measurements used a bright
continuum source dispersed with the HETG and/or LETG to achieve the
highest spectral resolution possible with the CXO. In these high
resolution spectra, it became obvious that some absorption edges were
deeper than in the pre-flight measurements or only appeared (in the
case of F) after launch. The newly-detected absorption edges were
also found to be increasing in time. C was by far the dominant species
in the contaminant while the O and F were approximately equal in
concentration. We believe the contaminant started accumulating as
soon as the ACIS door was opened and the OBFs were exposed to the
interior of the spacecraft. The contaminant has continued to
accumulate for the entire 19 year mission of the CXO, see
Section~\ref{sec:accumulation} for a detailed time history.
\subsection{OBF and Camera Body Temperatures}
\label{sec:heater}
ACIS has two separate filters, one for the Imaging array, OBF-I, and
one for the spectroscopy array, OBF-S. For diagrams and pictures of
the flight hardware, see the figures in
Plucinsky~{\em et al.}~(2004)\cite{plucinsky2004}. Both OBFs are secured to
the top surface of the ACIS Camera Body (CB). The OBFs have no active
thermal control but respond to the environment around them. The
edges of the filter are in good thermal contact with the CB and are
therefore at the same temperature as the CB. At the beginning of the
mission, the CB was held at -60~C. The centers of the
filter are warmer than the edges due to the radiative heat load from
the warm mirrors and optical bench cavity. The center of the OBF-I
is modeled to achieve a temperature of $\sim-56$~C while the center
of the OBF-S is at about $\sim-58$~C. There is no temperature
sensor on the OBFs themselves.
In April 2008, it was decided to turn off the ACIS Detector Housing
(DH) heater, which kept the CB temperature at -60~C. With the DH
heater off, the CB temperature fluctuated between -72~C and -62.5~C
depending on the orientation of the CXO spacecraft. The cooler CB
temperature provided more margin for keeping the CCDs in the focal
plane at -120~C.
From
launch in 1999 until April 2008, the CB temperature regulated at
-59.9~C except for a few excursions during special activities.
After April 2008, the CB temperature was unregulated and varied
with the orientation of the spacecraft.
In August 2015, it was decided to turn the
ACIS DH heater back on with the hope that the accumulation rate of
the contaminant would decrease. But as we reported in
2016\cite{plucinsky2016}, the warmer CB temperatures had no effect
on the accumulation rate of the contaminant. Therefore, it was
decided in July 2016 to turn the DH heater back off and leave the
CB temperatures unregulated.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics{fig1.eps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:akos_aciss} Optical depth at 0.67 keV for the ACIS-S
aimpoint as
determined by fits to the E0102(blue) and A1795(red) data. The
black line is the model for the optical depth in the
N0010 contamination model.
}
\end{figure}
\subsection{Time Dependence of the Accumulation Rate}
\label{sec:accumulation}
As mentioned in Section~\ref{sec:discovery}, the accumulation rate of
the contamination layer was monitored with the ECS until the
radioactive source became too faint to produce reliable results. At
this point, we switched to using the brightest SNR in the Small Magellanic
Cloud, 1E~0102.2-7219 (hereafter E0102), a bright cluster of galaxies
known as Abell~1795 (hereafter A1795), and a bright blazar called
Markarian 421 (hereafter Mkn~421). E0102 has a soft, line-dominated
spectrum and we have used it throughout the mission to characterize the
contamination layer\cite{plucinsky2008,plucinsky2012}. The development
of the standard IACHEC model for E0102 and its application to the
current generation of X-ray instruments is presented in our 2017
paper\cite{plucinsky2017}. A1795 has a
harder thermal spectrum with some significant line emission. Mkn~421
has a continuum spectrum described by a curved power-law model.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics{fig2.eps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:akos_acisi} Optical depth at 0.67 keV for the ACIS-I
aimpoint as
determined by fits to the E0102 (blue) and A1795 (red) data. The
black line is the model for the optical depth in the
N0010 contamination model.
}
\end{figure}
We have used the A1795 and E0102 data on ACIS-S and ACIS-I to measure the
optical depth of the contaminant at 0.67~keV (the energy of the Mn-L
complex in the ECS). The results for the ACIS-S aimpoint are plotted in
Figure~\ref{fig:akos_aciss}. The blue data points are derived from
the E0102 data, the red data points are derived from the A1795 data
and the black curve shows the expected increase in the contamination
layer that is contained in the current release of the CXC
contamination file ``{\tt acisD1999-08-13contamN0010.fits}'', called
``N0010'' for short. The measured optical depths from
the E0102 and A1795 data are consistent within the uncertainties.
The accumulation history of the contaminant is
shown in this figure: a steep rise early in the mission, a reduction
in the rate from 2003 to 2010, another sharp increase after 2010, and
an apparent decrease starting in 2017. The data in 2017 begin to deviate
from the expected accumulation rate and the trend continues into
2018. The decrease in the accumulation rate
is not correlated with the DH heater which was on from 11 August 2015
until 20 July 2016. The behavior at the aimpoint on ACIS-I is even
more dramatic and shown in Figure~\ref{fig:akos_acisi}. The first
data point to deviate from the expectation is in late 2017. Perhaps
more interestingly, the last data point in 2018 is consistent with no
accumulation over the last 6 months. The uncertainties are relatively
large so future measurements will be necessary to confirm this
result. Note that the maximum optical depth is about 3.0 on the OBF-I
and is about 2.5 on the OBF-S. The contaminant has apparently
accumulated more rapidly at the center of the OBF-I than at the center
of the OBF-S. This can be seen more clearly in
Figure~\ref{fig:od_diff} which shows the difference in the optical
depth at the aimpoints on ACIS-I and ACIS-S as a function of time.
For most of the mission,
the optical depths were within 0.2 of each other. But starting in
2015, the contaminant grew more rapidly near the aimpoint on ACIS-I
reaching a maximum difference of 0.6 optical depths. Curiously, the
most recent data point in 2018 shows the difference between OBF-I and
OBF-S is decreasing. This suggests that the accumulation rate on OBF-I
is close to zero while the accumulation rate is still positive and
small on OBF-S.
Future observations will be necessary to determine if this trend will
continue.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics{fig3.eps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:od_diff} The difference in the optical depth at 0.67
keV at the ACIS-I and ACIS-S aimpoints. The contaminant grew
more quickly on OBF-I than OBF-S from 2015 until 2017.
}
\end{figure}
\subsection{Time Dependence of the Chemical Composition}
\label{sec:composition}
The high resolution spectra provided by the HETG have been
used\cite{marshall2004} to constrain the chemical composition of the
contaminant and how it has changed with time. The contaminant is composed
mostly of C, with some O and F. One limitation of the HETG data is
that they only provide information on the contaminant for the OBF-S
filter. The optical depth of the contaminant
for each element (C, O, \& F) is modeled as a function of time and
position with a two component model:
\begin{center}
$\tau(t;x,y) = \tau_0(t) + [\tau_1(t) \times f(x,y)]$
\end{center}
\noindent where $\tau_0(t)$ represents a time-variable, spatially uniform component,
$\tau_1(t)$ represents a time-variable, spatially variable component,
and $ f(x,y)$ is the spatial distribution for the spatially variable
component.
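As a concrete illustration, this decomposition can be sketched numerically; in the minimal Python example below, the functional forms chosen for $\tau_0(t)$, $\tau_1(t)$ and $f(x,y)$ are hypothetical placeholders, not the calibrated CXC curves.
\begin{verbatim}
import numpy as np

def tau0(t_yr):
    # Hypothetical spatially uniform optical depth vs. time since launch (yr).
    return 1.0 * (1.0 - np.exp(-t_yr / 4.0))

def tau1(t_yr):
    # Hypothetical amplitude of the spatially variable component vs. time.
    return 0.5 * (1.0 - np.exp(-t_yr / 8.0))

def f_xy(x, y, cx=512.0, cy=512.0, scale=700.0):
    # Hypothetical spatial pattern: zero at the filter center,
    # increasing toward the (colder) filter edges.
    return ((x - cx)**2 + (y - cy)**2) / scale**2

def tau(t_yr, x, y):
    # Two-component model: tau(t;x,y) = tau0(t) + tau1(t) * f(x,y).
    return tau0(t_yr) + tau1(t_yr) * f_xy(x, y)

# Optical depth 15 yr into the mission, at the center and near an edge:
print(tau(15.0, 512.0, 512.0), tau(15.0, 512.0, 50.0))
\end{verbatim}
Figure~\ref{fig:ck_tau0} shows the time dependence of the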
$\tau_0(t)$ and $\tau_1(t)$ components for C near the aimpoint on
the ACIS-S detector derived from HETG observations of Mkn~421. The
time dependence of $\tau_0(t)$ for C matches that of the N0010 model
until the last few data points which are significantly
below the line. This is similar to the behavior seen for A1795 and
E0102 shown in Figure~\ref{fig:akos_aciss}.
The time dependence of $\tau_1(t)$ for C matches
that of the N0010 model in shape, but the N0010 model might be
slightly under-predicting at late times. The $\tau_1(t)$ component
has been mostly flat
with time from 2015 onwards, while the $\tau_0(t)$ continues to
accumulate, albeit at a lower rate than predicted by the N0010 model.
One interpretation of this behavior is that the spatially uniform
component and the spatially variable component correspond to separate
materials and the spatially variable component has ceased to
accumulate.
Figure~\ref{fig:ok_tau0} shows the time dependence $\tau_0(t)$ and
$\tau_1(t)$ components for O, again near the aimpoint on the ACIS-S
detector. The time dependence of $\tau_0(t)$ for O matches that of
the N0010 model until the last few data points
which are significantly above the line.
The time dependence
of $\tau_1(t)$ for O matches that of the N0010 model in shape and
amplitude. However, the data since 2015 are consistent with no growth in this
component, so the N0010 model may be over-predicting the contaminant at
late times, but the uncertainties are still large enough that the case
is not definitive.
The $\tau_0(t)$ result indicates that the N0010
model has less O than it should. But note that the total optical
depth of O is significantly less than that of C, $\sim2.0$ versus
$\sim15$, so that any error in the O optical depth has less effect on
the observed spectra and is therefore more difficult to discern.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=2.4in,angle=90]{fig4l.ps}
\includegraphics[width=2.4in,angle=90]{fig4r.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:ck_tau0} LEFT: The optical depth at C-K of the spatially
uniform component near the center of the ACIS-S array. RIGHT: The
optical depth at C-K of the spatially
variable component near the center of the ACIS-S array.
The solid line for both is the prediction from the N0010 contamination
model.
}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=2.4in,angle=90]{fig5l.ps}
\includegraphics[width=2.4in,angle=90]{fig5r.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:ok_tau0} LEFT: The optical depth at O-K of the spatially
uniform component near the center of the ACIS-S array. RIGHT:
The optical depth at O-K of the spatially
variable component near the center of the ACIS-S array.
The solid line for both is the prediction from the N0010 contamination
model.
}
\end{figure}
\section{Performance of the Current Contamination File}
\label{sec:verification}
\subsection{CXC Calibration Files}
\label{sec:calibration_files}
The CXC calibration group is responsible for providing calibration
files that accurately model the additional absorption produced by the
contamination layer. As mentioned above the characterization of the
contamination layer is complicated by the temporal variation of the
thickness, the chemical composition and the spatial distribution. The
CXC regularly acquires calibration data of standard targets such as
E0102, A1795, and Mkn~421 to verify the current contamination
calibration file. If deficiencies are found, a new calibration file
is created to address those deficiencies. The ACIS contamination
file has been updated 7 times over the course of the CXO mission. For
the analysis that follows, we use version {\tt N0010} of the model,
which is called {\tt acisD1999-08-13contamN0010.fits} in the CXC
{\em Calibration Database}~(CALDB). We used {\em Chandra Interactive
Analysis of Observations}~(CIAO) version~4.9 and CALDB version~4.7.8.
\subsection{E0102 Model}
\label{sec:e0102_model}
We have defined a standard model for E0102 as part of the activities
of the IACHEC. We have used this model
extensively~\cite{plucinsky2008,plucinsky2012,plucinsky2017} to test and improve
the ACIS response model earlier in the mission. The model is
intended for calibration analyses and is not intended to provide any
insight into E0102 as a SNR. The model is
empirical in that it uses 52 Gaussians to model the line emission.
It uses a two component absorption model, one component for the
Galactic contribution and one for the Small Magellanic Cloud (SMC) contribution.
We modeled the continuum using a modified version of the
{\em Astrophysical Plasma Emission Code}~({\tt
APEC})~\cite{smith2001} called the {\tt ``No-Line''} model.
This model excludes all line emission, while retaining all continuum
processes including bremsstrahlung, radiative recombination continua
(RRC), and the two-photon continuum from hydrogenic and helium-like
ions (from the strictly forbidden ${}^2S_{1/2} 2s \rightarrow$\ gnd
and ${}^1S_0 1s2s \rightarrow$\ gnd transitions, respectively).
We included two continuum components of this type in the E0102 model.
For details of the model and the parameters assumed see
Plucinsky~{\em et al.}~(2017)\cite{plucinsky2017}.
\input{s3tab.tex}
Although the standard IACHEC model has many parameters, most of them
are held fixed when we fit the data for calibration purposes. The
continuum components are fixed and the interstellar absorption components
are held fixed. All the line energies and widths are also held fixed.
Typically, we freeze all line normalizations except for the four
normalizations of the brightest lines/line complexes. We allow the
normalizations for the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$ line,
the \ion{Ne}{ix}~He$\alpha$~{\em r} line, and \ion{Ne}{X}~Ly$\alpha$
line to vary in the fit. For the \ion{O}{vii}~He$\alpha$ and
\ion{Ne}{ix}~He$\alpha$ triplets, we link the normalizations of the
{\em f}, {\em i}, and {\em r} lines to each other and only allow one of them
to vary during the fitting process. In this way, the triplet can
increase or decrease its normalization as a group but the
normalizations of the individual lines in the triplet can not vary
independently of each other. There is also a global constant that
multiplies the entire spectrum that is allowed to vary. In this
manner, we allow only 5 of the 208 parameters in the IACHEC model to
vary when we fit. We are essentially allowing the normalizations of the
four brightest line/line complexes to vary while freezing the weaker
lines and the continuum. We assume that E0102 has
not changed significantly over the 19 year lifetime of the CXO
mission, such that the flux from the source in 1999 is not
significantly different from the flux in 2018; therefore we
assume the total flux in a given line has changed very
little over the 19 year mission.
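The linked-normalization scheme can be sketched as follows; the line energies and {\em f}:{\em i}:{\em r} ratios below are illustrative placeholders rather than the actual IACHEC values, and a real analysis would of course be performed in a spectral-fitting package.
\begin{verbatim}
import numpy as np

# Placeholder rest energies (keV) and fixed relative strengths for a
# He-alpha triplet; only the overall triplet normalization is free.
TRIPLET_E     = {"r": 0.574, "i": 0.569, "f": 0.561}
TRIPLET_RATIO = {"r": 1.00,  "i": 0.25,  "f": 0.70}

def triplet_model(energy, norm_r, sigma=0.002):
    # Sum of three Gaussians whose normalizations are linked: each line
    # carries a fixed multiple of the single free parameter norm_r, so
    # the triplet scales up and down as a group during the fit.
    model = np.zeros_like(energy)
    for line, e0 in TRIPLET_E.items():
        amp = norm_r * TRIPLET_RATIO[line]
        model += amp * np.exp(-0.5 * ((energy - e0) / sigma)**2)
    return model

e = np.linspace(0.55, 0.59, 200)
print(triplet_model(e, 1.0).max(), triplet_model(e, 2.0).max())
\end{verbatim}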
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=4.0in,angle=270]{fig6.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:s3_e0102_sp} ACIS-S3 spectra of E0102 from OBSIDs
18418(2016), 19850(2017) and 20639(2018). The 2016 data are fit with the
standard IACHEC model and that model is overplotted on the 2017
and 2018 data.
}
\end{figure}
\subsection{ACIS-S Results}
\label{sec:acis_s}
E0102 has been observed many times with ACIS-S since the beginning of
the mission. The mirrors on the CXO mission produce such sharp X-ray
images that observations of a point source can be affected by
``pileup''. ``Pileup'' is defined as two photons interacting with the CCD
within one detection cell (typically a $3\times3$ pixel region) within
one readout frame of the CCD. Even though E0102 is an extended
source, some of the bright filaments in E0102
are bright enough to have significant pileup. Most of the
observations of E0102 early in the mission were executed in
``full-frame'' mode with an integration time of 3.2~s. We have
excluded the ``full-frame'' observations from our analysis and selected
only the ``subarray'' observations with shorter frametimes of 1.1~s,
0.8~s, and 0.4~s in order to minimize the effects of pileup on our
data. There are 32 subarray observations of E0102 on S3 included in
our analysis listed in Table~\ref{tab:s3obs}. Most of these
observations are near the center of the CCD with {\tt chipy} values
around 512, but 13 of the observations are at different {\tt chipy}
positions.
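To make the frametime dependence concrete, the pileup fraction in a single detection cell can be estimated with a simple Poisson model; the per-cell count rate below is an arbitrary illustrative number, not a measured E0102 rate.
\begin{verbatim}
import numpy as np

def pileup_fraction(rate_per_cell, frametime):
    # Probability of >= 2 photons in one detection cell in one readout
    # frame, assuming Poisson arrivals at the given per-cell rate.
    lam = rate_per_cell * frametime
    return 1.0 - np.exp(-lam) * (1.0 + lam)

# Hypothetical bright-filament rate of 0.1 counts/s per 3x3-pixel cell:
for ft in (3.2, 1.1, 0.8, 0.4):   # full-frame and subarray frametimes (s)
    print(ft, pileup_fraction(0.1, ft))
\end{verbatim}
Shortening the frametime from 3.2~s to 0.4~s reduces the estimated pileup fraction by roughly a factor of 50 in this toy example.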
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=5.5in]{fig7.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:e0102S3} Line normalizations from E0102 on S3 as a
function of time. The solid black line is the average of the
data points near the on-axis aimpoint. The red dashed
lines are $\pm10\%$ above and below the average. The points away
from the nominal aimpoint are indicated in green and blue.
}
\end{figure}
We have fit all of the S3 observations with the standard IACHEC model
allowing only the global normalization and the normalizations for the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$ line,
the \ion{Ne}{ix}~He$\alpha$~{\em r} line, and \ion{Ne}{X}~Ly$\alpha$
line to vary. Figure~\ref{fig:s3_e0102_sp} shows an example of these fits
for the three most recent observations near the S3 aimpoint from 2016,
2017 and 2018. The model was fit to the 2016 data and then frozen
for the 2017 and 2018 observations to demonstrate deficiencies in the
time-dependent contamination model. The large residuals in the 2018
spectrum at the \ion{O}{viii}~Ly$\alpha$ and \ion{Ne}{X}~Ly$\alpha$
lines indicate that the contaminant is over-estimated.
Note that the difference between the 2017 and 2018 observations is not
as large as the model predicts. The residuals
indicate that the \ion{O}{viii}~Ly$\alpha$ line and the
\ion{Ne}{X}~Ly$\alpha$ are not well fitted in 2018 but the
\ion{Ne}{ix}~He$\alpha$~{\em r} line is well fitted. This will be
challenging to correct with a revised contamination model.
We compared the fitted line normalizations for the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$
line, the \ion{Ne}{ix}~He$\alpha$~{\em r} line, and the \ion{Ne}{X}~Ly$\alpha$
line as a function of time. The results are plotted in
Figure~\ref{fig:e0102S3}. The solid black line is the average of the
on-axis data and the black dashed lines are $\pm10\%$ from the
average.
Figure~\ref{fig:e0102S3} shows that the line
normalizations are mostly consistent to within $\pm10\%$ from 2003
through 2016 for the on-axis data points with the exception of the
\ion{O}{vii}~He$\alpha$~{\em r} line. After 2016, the
\ion{O}{vii}~He$\alpha$~{\em r} line and \ion{O}{viii}~Ly$\alpha$
deviate dramatically from the previous values. The 2018
normalizations on-axis for the \ion{O}{viii}~Ly$\alpha$ line
and \ion{O}{vii}~He$\alpha$~{\em r} line are
$\sim28\%$ and $\sim49\%$ higher than the average value.
The \ion{Ne}{ix}~He$\alpha$~{\em r} line, and
\ion{Ne}{X}~Ly$\alpha$ line normalizations are consistent with the
average within 10\%
so the problem on S3 appears to affect energies below 0.9~keV.
\subsection{ACIS-I Results}
\label{sec:acis_i}
There are 16 subarray observations of E0102 on the I3 CCD in the
ACIS-I array. Table~\ref{tab:i3obs} lists the observations with their
locations on the CCD and exposure times and count rates. Unlike the
S3 CCD where the aimpoint is near the middle of the CCD, the aimpoint
on the I3 CCD is near the top, right corner (high {\tt chipx} and
{\tt chipy}). Hence most of these observations have {\tt chipx} of
$\sim875$ and {\tt chipy} values of $\sim930$. This position is close
to the center of the OBF-I filter, so the contamination layer is
thinner at this position than near the edges. Only 3 of these
observations are at positions other than the nominal aimpoint.
\input{i3tab.tex}
Figure~\ref{fig:i3_e0102_sp} shows an example of these fits
for the three most recent observations near the I3 aimpoint from 2016,
2017 and 2018. Again, the model was fit to the 2016 data and then frozen
for the 2017 and 2018 data to demonstrate deficiencies in the time-dependent
contamination model. Note the dramatic
difference in the expected model spectrum for the 2018 data.
The N0010 contamination model is over-estimating the contamination by a
large amount at the aimpoint on I3. This is partly due to the fact
that the accumulation rate has decreased but it is also due to
the fact that the N0010 contamination model predicted significantly
more contamination at the aimpoint on I3 than S3 (see
Figures~\ref{fig:akos_aciss} and ~\ref{fig:akos_acisi}).
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=4.0in,angle=270]{fig8.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:i3_e0102_sp} ACIS-I3 spectra of E0102 from OBSIDs
18417(2016), 19849
(2017) and 20638(2018). The 2016 data are fit with the
standard IACHEC model and that model is overplotted on the 2017
and 2018 data.
}
\end{figure}
We compared the fitted line normalizations for the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$
line, the \ion{Ne}{ix}~He$\alpha$~{\em r} line, and the \ion{Ne}{X}~Ly$\alpha$
line as a function of time. The results are plotted in
Figure~\ref{fig:e0102I3}. The over-correction for the contamination
layer in 2018 is large. The normalizations for the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$ line, the
\ion{Ne}{ix}~He$\alpha$~{\em r} line, and the
\ion{Ne}{X}~Ly$\alpha$ line
are over-estimated by $\sim98\%$, $\sim125\%$, $\sim32\%$, and
$\sim25\%$ respectively.
The data from 2016 and earlier are mostly consistent with
each other to within 10\%. The discrepancy begins in 2017 and dramatically worsens
in 2018. The revised contamination file soon to be released by the
CXC should address most of this discrepancy.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=5.5in]{fig9.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:e0102I3} Line normalizations from E0102 on I3 as a
function of time. The solid black line is the average of the
data points near the on-axis aimpoint. The red dashed
lines are $\pm10\%$ above and below the average. The points away
from the nominal aimpoint are indicated in red and blue.
}
\end{figure}
\section{Possible Explanations For the Reduction in the Accumulation Rate}
The analyses presented up to this point measure the accumulation rate
of the contaminant which is the difference between the deposition rate
and the vaporization rate. If the accumulation rate changes, we do not
know if the deposition rate changed or the vaporization rate changed
or both. Over the course of the mission, many components on the CXO
spacecraft have increased in temperature, reaching mission high
values within the last few years. It is conceivable that a component
on the spacecraft was not out-gassing significantly early in the
mission, but as its temperature increased it began to out-gas at a
higher rate. Perhaps
the out-gassing from this component has now started to decrease, as the
source of the contaminant has diminished.
Another possibility is that the temperature distributions on the
filters have changed with time. Figure~\ref{fig:tice_emittance} shows
the expected temperature distributions on the filters in the absence of
contamination, when the emittance is expected to be 0.05. In this
case, the center of the OBF-I is at -55.8~C and the center of the
OBF-S is at $\sim -58.0$~C. As contaminant accumulates on the filters
and the surrounding surfaces the temperature distribution will change,
with the centers of the filters becoming warmer. For an emittance of
0.20, the center of the OBF-I increases to -41.7~C and the center of the
OBF-S increases to $\sim -46.0$~C. The temperatures of the OBFs increase as
the emittance increases because the OBFs are more coupled to the
temperature of the warm optical bench assembly (+20~C). But as the
emittance continues increasing the OBF temperatures start to decrease
again because in this model, the surfaces around the OBF are also
accumulating a contamination layer and those surfaces have a higher
emittance which results in better coupling between those relatively
cold surfaces and the OBFs.
As shown previously\cite{odell2013}, the
vaporization rate of materials is a steep function of temperature with
the vaporization rate increasing by roughly one order of magnitude for
every 5~C increase in temperature. Therefore, it is possible that the
vaporization rate has increased by about two to three orders of magnitude
in the centers of the filters as the temperatures have increased from
$\sim -56$~C to $\sim-42$~C. This could be part of the explanation
for the reduction in the accumulation rate that has been observed. This
would be consistent with the center of the OBF-I showing a larger
reduction in the accumulation rate than the center of the OBF-S since
the center of the OBF-I is warmer than the center of the OBF-S.
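The implied size of this effect can be checked directly from the stated rule of thumb; the following estimate is an order-of-magnitude sketch only, not a fit to laboratory volatility data.
\begin{verbatim}
def vaporization_factor(t_cold_c, t_warm_c, decades_per_5c=1.0):
    # Multiplicative increase in vaporization rate between two
    # temperatures, assuming ~one order of magnitude per 5 C.
    return 10.0 ** (decades_per_5c * (t_warm_c - t_cold_c) / 5.0)

# Warming of the OBF centers from ~ -56 C to ~ -42 C:
print(vaporization_factor(-56.0, -42.0))   # ~6e2, i.e. 2-3 decades
\end{verbatim}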
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[trim={0.7cm 0.5cm 1.3cm 0.5cm},clip,width=6.5in]{fig10.ps}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:tice_emittance} The expected temperature distributions
on the OBF-S and OBF-I as a function of emittance.
}
\end{figure}
\section{Future Work}
\label{sec:future}
The temporal model of the contamination correction in the {\tt N0010} file
{\tt acisD1999-08-13contamN0010.fits} contained in CALDB~4.7.8 needs
modification to predict less absorption near the center and edges of the OBFs.
This is clear from the E0102 line normalizations presented in
Figures~\ref{fig:e0102S3} and~\ref{fig:e0102I3}. The CXC calibration
team is working on a
revision to the {\tt N0010} model that will change the time dependence
of the spatial distribution and will update the chemical composition
as a function of time. We expect this revised contamination model to
be released in two stages (both in 2018), one release for the OBF-I
and one for OBF-S.
The accumulation rate of the contaminant will need to be monitored
more frequently in the coming years. The accumulation rate at the
centers and edges of the OBFs for both ACIS-S and ACIS-I have all
changed in unexpected ways over the past two years. The continued
characterization of these accumulation rates with time may provide
constraints on the deposition and vaporization rates.
The CXO project considered a ``Bakeout'' of the ACIS
instrument\cite{plucinsky2004} soon after the contamination layer was
discovered in 2004. The project decided at that time that a Bakeout
was not worth the risk. There have been several papers written
describing models of an ACIS Bakeout, see
O'Dell~{\em et al.}~(2005)\cite{odell2005}, O'Dell~{\em et al.}~(2013)\cite{odell2013}, and
O'Dell~{\em et al.}~(2015)\cite{odell2015}. These papers predict a range of
outcomes from successful to unsuccessful depending on the assumed
volatilities for the contaminants. The recently discovered reduction
in the accumulation rate makes it less likely the project will
consider a Bakeout worth the risk. Nevertheless, we will continue to
monitor the accumulation rate and spatial distribution of the contaminant
to constrain the volatilities of the possible contaminants and thereby
constrain the range of possible outcomes for a Bakeout. If the
contaminant were observed to decrease in the center of the OBF-I and
OBF-S, we would know that the vaporization rate is larger than the
deposition rate at the current temperatures. Such a result would
indicate that a Bakeout is likely to be successful, even at
temperatures not much higher than the current range of -70~C to -42~C.
\section{Conclusions}
\label{sec:conclusions}
We have presented the accumulation rate of the ACIS contamination
layer as a function of time. The accumulation rate decreased from launch
until 2005, was fairly linear from 2005 to 2010, increased
after 2010 but has sharply decreased since 2016. The chemical
composition of the contamination has changed
with time, possibly indicating that multiple sources are responsible
for the contamination. The C, O, and F all exhibit different time
dependencies again indicating that multiple materials have accumulated
at different rates over the course of the mission. Nevertheless, all
three have shown a dramatic decrease over the past year. The
explanation for this sudden decrease is not clear. It could be that
the deposition rate has decreased or the vaporization rate has
increased, or both. The CXC will need to monitor the contamination
layer frequently with dedicated calibration observations over the
coming years to accurately model the contamination layer.
We tested the current contamination model {\tt N0010} with the SNR
E0102. We find that the fitted values for the normalizations of the
\ion{O}{vii}~He$\alpha$~{\em r} line, the \ion{O}{viii}~Ly$\alpha$ line,
the \ion{Ne}{ix}~He$\alpha$~{\em r} line, and \ion{Ne}{X}~Ly$\alpha$
line are mostly consistent to within $\pm10\%$ for both ACIS-S and
ACIS-I near the
aimpoint from 2003 through 2016. After 2016, the line normalizations
begin to deviate from the average value, with deviations as large as
49\% at the aimpoint on ACIS-S and 125\% at the aimpoint on ACIS-I
for the \ion{O}{viii}~Ly$\alpha$ line. The CXC is preparing a revised
contamination file
that will significantly improve the agreement from 2016 onwards for
release this year.
\acknowledgments
We acknowledge support under NASA contract NAS8-03060.
The {\em Chandra X-ray Observatory} is operated by the
Smithsonian Astrophysical Observatory under contract to the NASA
Marshall Space Flight Center (MSFC). The Advanced CCD Imaging
Spectrometer (ACIS) was developed by the Massachusetts Institute of
Technology and the Pennsylvania State University.\\
We sincerely thank all of our colleagues in the IACHEC that
contributed to the development of the highly successful spectral model
for E0102.
\section{Introduction}\label{secintro}
{Based on over three decades of large-scale investigations \cite{Sta80,Gut81,Lin82,Alb,Lin83,Lin86a,Lin86b}, the inflationary paradigm has become one of the cornerstones of modern cosmology. Inflation describes the evolution of the early Universe successfully and also remedies three vital problems faced by the old big bang theory, i.e. the flatness, horizon and heavy-monopole problems \cite{Lid00,Bas,Lem,Kin,Bau09,Bau14}.
Moreover, inflation appears to be requisite for obtaining a correct tensor-to-scalar ratio and, in general, the correct behavior of the primordial perturbations \cite{Liddle0,Langl,Lyth,Guth00,Lidsey97,Bas,Mukhanov-etal,Haidar,Haidar2}.
In the standard inflation model, the potential term of the Lagrangian dominates over the canonical kinetic term during inflation \cite{Lid00,Bas,Bau09,Bau14}. However, there exist inflationary models in which the kinetic term differs from the canonical one, namely non-canonical models, such as the Dirac-Born-Infeld (DBI) action, where a non-canonical kinetic term is attributed to the scalar field. The DBI scalar field model can be regarded as a subset of the k-inflation scenario \cite{Arm,Gar,Li,Hwa,Fra10a,Fra10b,Unn12,Unn13,Zha14a,Gol,Nazavari}. The observational constraints on $k$-inflation and its perturbations have been considered in the literature \cite{Gar} and \cite{Li}.
Here we should emphasize that, despite the huge number of inflationary models, the precise data from the Cosmic Microwave Background (CMB) have dramatically reduced the number of viable inflationary models \cite{Mar13,Mar14,Hos14a}.
Additionally, several other notable studies have been carried out in the context of the non-canonical inflationary scenario; we refer the reader to \cite{Hwa,Fra10a,Fra10b,Unn12,Unn13,Zha14a,Gol}.
In \cite{Unn12} it was shown that one can reduce the values of the slow-roll parameters, so that the slow-roll regime is attained more easily with a non-canonical Lagrangian than in the canonical case. In addition, it has been shown that the steep potentials associated with dark energy in the canonical setting can drive inflation in the non-canonical framework \cite{Unn12}. In the non-canonical setup, power-law inflation is consistent with the observational results, and one obtains a way to end inflation without changing the form of the power law of the scale factor around the horizon exit \cite{Unn13}.\\
There exist various ways, in slow-roll inflationary scenarios, to obtain expressions for observables such as the tensor and scalar spectral indices, their runnings and the tensor-to-scalar ratio. Among them, one can introduce different types of scale factor to drive inflation and then test the results against observations. Here we are interested in the intermediate type, which is generic and well known, and was first presented in \cite{Barrow11}.
In this procedure, the scale factor is introduced as an exponential function of the cosmic time, i.e. $a(t) = \exp\big( \kappa t^f \big)$, $\kappa>0$, where usually $0<f<1$ \cite{Vallinotto,Starobinsky}. It leads to a potential that is asymptotically a negative power law; see, for instance, the steep potentials of \cite{Rendall}. The reason these scale factors are called intermediate is that the resulting expansion of the Universe is faster than that of power-law inflation, i.e. $a(t)=t^p$, $p>1$, and slower than de Sitter inflation ($a(t)=\exp(Ht)$, $H=\mathrm{constant}$)}. It is also interesting to mention that in Einstein gravity intermediate inflation with $f=2/3$ creates scale-invariant perturbations
\cite{Barrow11,Barrow-etal,Vallinotto,Starobinsky}. One important reason to consider intermediate inflation is that its results for the tensor-to-scalar ratio and the scalar spectral index are in good agreement with the CMB data \cite{BarrowNunes}. Due to the advantages of the intermediate proposal in solving problems of inflation, this scenario retains an appropriate place in the community; for more details we refer the reader to the literature \cite{Muslimov,BarowLiddle02,BarrowLiddle,kk,mohammadi} and references therein.\\
The majority of investigations of the dynamical evolution of inflationary models have been carried out in a homogeneous and isotropic background, for instance the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric. However, a small deviation from isotropy, at the level of $10^{-5}$, was proposed by Bennett \textit{et al.}, and this suggestion was subsequently confirmed by the high-resolution Wilkinson Microwave Anisotropy Probe (WMAP) \cite{ch1:9,WM2}. We should emphasize that although, according to recent studies, the anisotropy should be small, its imprints on large-scale structure formation are considerable. To justify this claim, the effects of anisotropy on the evolution of the early Universe, and especially on the primary seeds of structure formation, have been investigated in the framework of Bianchi type I (BI) models \cite{ko,ku,Y1,17,18,aghaohamadi}. Among the different Bianchi types we can refer to the Kasner type as a specific one, in which the cosmological scale factors evolve as power-law functions of time. In General Relativity (GR), the vacuum Kasner solutions \cite{r7} and their fluid-filled counterparts, the
BI models, serve effectively as a starting point for the investigation of the
structure of anisotropic models. Barrow and Clifton \cite{r8,r9} have shown that it
is possible to find solutions of the Kasner type for $R^n$-gravity.
Recently, the authors of \cite{18a} discussed the effects of low anisotropy on interacting Dark Energy (DE) models and showed the advantages of their model compared to the standard FLRW, $\Lambda$ Cold Dark Matter ($\Lambda$CDM) and $w$CDM results. Additionally, they showed that the anisotropy should take a non-zero value at the present time. Turning our attention back to the BI Universe: the BI model is a straightforward extension of the flat FLRW metric, and it is the simplest model of an anisotropic but homogeneous Universe with spatial flatness. Unlike the FLRW Universe, which has the same scale factor in all three spatial directions, in the BI Universe the scale factors can evolve independently in different directions. Hence the study of inflation in an anisotropic Universe is more general than in the isotropic one. Therefore, based on the aforementioned reasons, in this work we consider the anisotropic model to investigate the effects of intermediate inflation with a non-canonical scalar field \cite{BarowLiddle02,BarrowLiddle}.
We should also answer the question of why we deal with a non-canonical model instead of a purely canonical one. An immediate answer is that intermediate inflation in the standard canonical setting has some drawbacks and fails when compared with observations. In Refs.\cite{BarowLiddle02,BarrowLiddle} it was shown that, in light of the observations from the Cosmic Background Explorer (COBE), the scalar and tensor
power spectra produced by intermediate inflation are not satisfactory. Besides these drawbacks, without adding extra mechanisms to the model, intermediate inflation never ceases and behaves like eternal inflation \cite{BarrowLiddle}.
In the present work, we seek a possible remedy for the problems faced by
canonical, and even non-canonical, intermediate inflation, but in an anisotropic framework.\\
This work is organized as follows: In Sec.\,\ref{secnon-can}, we express the main dynamical equations for a non-canonical Lagrangian in an anisotropic metric. In Sec.\,\ref{secInfnon-can}, by virtue of an intermediate scale factor, we evaluate the inflationary observables and compare the results with the Planck $2015$ data as a well-known criterion. Also, for the asymptotic regimes, i.e. canonical intermediate inflation and the isotropic background, we show that the results are not in as good agreement with the Planck data as the general proposal at hand. Finally, Sec.\,\ref{seccon} is devoted to conclusions and discussions.
\section{Non-canonical model in an anisotropic metric}\label{secnon-can}
Non-canonical inflation can usually be described by the following action
\begin{equation}
\label{action}
S = \int {{{\rm{d}}^4}} x\;\sqrt{-g}~\mathcal{L}(X,\phi),
\end{equation}
where the Lagrangian ${\cal L}$ is a function of the scalar field $\phi$ and its derivatives through the kinetic term $X \equiv {g^{\mu\nu}\nabla _\mu
}\phi {\nabla_\nu }\phi /2$. By varying the action, and after some algebra, the energy density
$\rho_{\phi}$ and pressure $p_{\phi}$ are obtained as follows \cite{Arm,Gar,Li,Hwa,Fra10a,Fra10b,Unn12,Unn13,Zha14a}:
\begin{eqnarray}
\label{rhodef}
{\rho _\phi } &=&
2X\left( {\frac{{\partial {\cal L}}}{{\partial X}}} \right) - {\cal
L}~,
\\
\label{pdef}
{p_\phi } &=& {\cal L}~.
\end{eqnarray}
\label{sec2}
As mentioned in the introduction, BI cosmology refers to a spatially homogeneous but not necessarily isotropic background. Note that we consider BI cosmology throughout this work except where otherwise stated. The metric of the BI model is given by
\begin{equation}\label{1}
ds^2=dt^{2}-A^{2}(t)dx^{2}-B^{2}(t)dy^{2}-C^{2}(t)dz^{2},
\end{equation}
where the metric components $A, B$ and $C$ are functions of time only; for more details about the Lie algebra and isometry group of the BI metric we refer the reader to \cite{35a}. The energy-momentum tensor of a perfect fluid is expressed by
\begin{eqnarray}\label{4}
T^{\mu}_{\nu}=diag[\rho,-p,-p,-p],
\end{eqnarray}
where $\rho$ and $p$ represent the energy density and pressure, respectively. Additionally, the field equations in the axially symmetric BI metric are obtained as \cite{36,37,38}
\begin{eqnarray}
3H^{2}-\sigma^{2}&=&\frac{1}{M_p^2}(\rho_{\phi}), \label{Fri} \\
3H^2+2\dot{H}+\sigma^{2}&=&-\frac{1}{M_p^2}\left(p_{\phi}\right), \label{Fri2}
\end{eqnarray}
where $M_p^2=1/(8\pi G)$ is the reduced Planck mass, and $\sigma^2=\sigma_{ij}\sigma^{ij}/2 $ in which
$\sigma_{ij}=u_{i,j}+\frac{1}{2}(u_{i;k}u^{k}u_{j}+u_{j;k}u^{k}u_{i})+\frac{1}{3}\theta(g_{ij}+u_{i}u_{j})$
is the shear tensor. This tensor satisfies $\sigma_{ij}u^j=0$ and $\sigma^i_{~i}=0$ and describes the rate of distortion of the
matter flow. The scalar expansion is introduced by $3H=u^{j}_{;j}$, where $u^j$ is the 4-velocity, given in comoving coordinates by $u^i=\delta^i_0$.
The Hubble parameter and the shear for the metric (\ref{1}) are expressed as \cite{36}
\begin{eqnarray}
H&=&\frac{1}{3}(\frac{\dot{A}}{A}+\frac{\dot{B}}{B}+\frac{\dot{C}}{C}), \label{14} \\
\sigma^{2}&=&3H^2-(\frac{\dot{A}\dot{B}}{AB}+\frac{\dot{B}\dot{C}}{BC}+\frac{\dot{A}\dot{C}}{AC}). \label{15}
\end{eqnarray}
If one takes $A=B^{\lambda}$ with $B=C$, the mean scale factor becomes $a = (ABC)^{1/3}=B^{(\lambda+2)/3} $, where $\lambda$ is a real constant. Then, defining $H_2 = \frac{\dot{B} }{B}$, the Hubble parameter and the shear reduce to the simplified expressions
\begin{eqnarray}
H = \frac{2 + \lambda }{3}H_2,\label{1an}\\
\sigma^2 = \frac{(\lambda-1)^2H_2^2}{3}.\label{2an}
\end{eqnarray}
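These reductions follow directly from Eqs.(\ref{14}) and (\ref{15}) and can be verified symbolically, for instance with the short SymPy sketch below.
\begin{verbatim}
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
B = sp.Function('B')(t)
A, C = B**lam, B                # axially symmetric ansatz A = B^lambda, C = B
H2 = sp.diff(B, t) / B          # H_2 = Bdot / B

H = sp.Rational(1, 3)*(sp.diff(A, t)/A + sp.diff(B, t)/B + sp.diff(C, t)/C)
sigma2 = 3*H**2 - (sp.diff(A, t)*sp.diff(B, t)/(A*B)
                   + sp.diff(B, t)*sp.diff(C, t)/(B*C)
                   + sp.diff(A, t)*sp.diff(C, t)/(A*C))

print(sp.simplify(H - (lam + 2)/3*H2))             # -> 0
print(sp.simplify(sigma2 - (lam - 1)**2*H2**2/3))  # -> 0
\end{verbatim}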
Combining the Friedmann equations with the Klein-Gordon equation, the conservation equation results as
\begin{equation}
\label{rhophidot}
{\dot \rho _\phi } + 3H\left( {{\rho _\phi } + {p_\phi }} \right) = 0.
\end{equation}
\section{Intermediate inflation for an anisotropic Universe and a non-canonical Lagrangian}\label{secInfnon-can}
In this section the inflationary behaviour for an intermediate scale factor, by means of a non-canonical Lagrangian in a BI Universe, is studied. To investigate the inflationary evolution in this framework, we begin by introducing the first and second slow-roll parameters, viz.
\begin{eqnarray}
\label{eps}
\varepsilon &=& - \frac{{\dot H}}{{{H^2}}},
\\
\label{eta}
\eta &=& \frac{\dot{\varepsilon}}{H \varepsilon}.
\end{eqnarray}
To obtain an accelerating phase in the early Universe, i.e. $\ddot a>0$, one immediately sees from Eq.(\ref{eps}) that the first slow-roll parameter should satisfy $\varepsilon<1$. As mentioned above, one of the big triumphs of the inflationary paradigm is its remedy for the horizon problem; inflation must last long enough, which requires $\varepsilon$ to remain well below unity. Hence, inflation occurs and persists if and only if both $\varepsilon $ and
$\left| \eta \right|$ are much less than unity; in the literature these conditions are usually called the slow-roll approximations. Another critical parameter for driving inflation in the expected way is the number of e-folds, defined as
\begin{equation}
\label{N}
N = \int_{t_i}^t H dt = \int_{\phi_i}^{\phi} \frac{H}{{\dot \phi }}d\phi.
\end{equation}
In order to solve the horizon problem, the number of e-folds should be at least about 60 \cite{kk}. Having introduced the necessary ingredients for running inflation, we can return to the Lagrangian. The Lagrangian density which we shall consider is the following \cite{Unn12,Unn13}
\begin{equation}
\label{Lag}
\mathcal{L}(X,\phi ) = X{\left( {\frac{X}{{{M^4}}}}
\right)^{\alpha - 1}} - \;V(\phi ),
\end{equation}
where $M$ has the dimension of mass and $\alpha$ is a dimensionless parameter; the canonical case is recovered for $\alpha=1$, i.e. ${\cal
L}(X,\phi ) = X - V(\phi )$. Additionally, the Lagrangian (\ref{Lag}) satisfies the requirements
$\partial {\cal L}/\partial X \ge 0 $ and ${\partial ^2}{\cal
L}/\partial {X^2} > 0$, which guarantee the null-energy condition and the physical propagation of perturbations, respectively \cite{Fra10a}. This type of Lagrangian has been taken into account in a number of prior works
to investigate some steep
potentials for chaotic or other inflationary scenarios \cite{Unn12}. To refine the power law
inflation in light of Planck $2013$ this Lagrangian has been considered as well \cite{Unn13}.\\
Now let us start the calculations based on the Lagrangian introduced in Eq.(\ref{Lag}). Substituting the Lagrangian (\ref{Lag}) into Eqs.(\ref{rhodef}) and (\ref{pdef}), the energy density and pressure of the scalar field $\phi$ are given by
\begin{eqnarray}
\label{rhophi}
{\rho _\phi } &=& \left( {2\alpha - 1}
\right)X{\left( {\frac{X}{{{M^4}}}} \right)^{\alpha - 1}} +
V(\phi),
\\
\label{p}
{p_\phi } &=& X{\left( {\frac{X}{{{M^4}}}} \right)^{\alpha - 1}} - V(\phi ).
\end{eqnarray}
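As a quick consistency check, Eqs.(\ref{rhophi}) and (\ref{p}) follow from Eqs.(\ref{rhodef}) and (\ref{pdef}); a minimal SymPy verification:
\begin{verbatim}
import sympy as sp

X, M, V, alpha = sp.symbols('X M V alpha', positive=True)
L = X*(X/M**4)**(alpha - 1) - V          # Lagrangian (Lag)

rho = 2*X*sp.diff(L, X) - L              # Eq. (rhodef)
p = L                                    # Eq. (pdef)

expected_rho = (2*alpha - 1)*X*(X/M**4)**(alpha - 1) + V
print(sp.simplify(rho - expected_rho))   # -> 0
\end{verbatim}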
In addition, the dynamical equation of the scalar field, i.e. the Klein-Gordon equation, obtained by inserting Eqs.(\ref{rhophi}) and (\ref{p}) into the conservation equation
(\ref{rhophidot}), is expressed as follows
\begin{equation}
\label{phiddot}
\ddot \phi + \frac{{3H\dot \phi }}{{2\alpha - 1}}
+ \left( {\frac{{V'(\phi )}}{{\alpha (2\alpha - 1)}}}
\right){\left( {\frac{{2{M^4}}}{{{{\dot \phi }^2}}}} \right)^{\alpha
- 1}} = 0.
\end{equation}
Since there is no interaction of the non-minimally coupled chameleonic type here, varying the Lagrangian with respect to the scalar field yields the above equation, following the same procedure as in \cite{Kho2,MOF1,ch19,ch20,ch1:20a}. Now, substituting Eqs.(\ref{14},\ref{15},\ref{1an},\ref{2an}) into Eqs.(\ref{rhophi}) and (\ref{p}), the slow-roll parameters, i.e. Eqs.(\ref{eps}) and (\ref{eta}), expressed in terms of the potential $V(\phi)$ read
\begin{eqnarray}
\label{epsV}
{\varepsilon _V} &=&\frac{\sqrt{3(2\lambda +1)}}{2+\lambda} {\left[ {\frac{1}{\alpha }{{\left(
{\frac{{3{M^4}}}{V(\phi)}} \right)}^{\alpha - 1}}{{\left(
{\frac{{{M_P}V'(\phi)}}{{\sqrt 2 \;V(\phi)}}} \right)}^{2\alpha }}}
\right]^{\frac{1}{{2\alpha - 1}}}},
\\
\label{etaV}
{\eta _V} &=& \frac{\sqrt{3(2\lambda +1)}}{2+\lambda}\left( {\frac{{\alpha {\varepsilon
_V}}}{{2\alpha - 1}}} \right)\left( {\frac{{2V(\phi )V''(\phi
)}}{{V'{{(\phi )}^2}}} - 1} \right).
\end{eqnarray}
Eqs.(\ref{epsV}) and (\ref{etaV}) are the so-called first and second potential-based
slow-roll parameters, respectively. In addition, the slow-roll approximation implies that
the potential energy is much larger than the kinetic energy, and therefore the
Friedmann equation (\ref{Fri}) reduces to
\begin{equation}
\label{Frisr}
H^2\left(\phi\right) =\frac{(2+\lambda)^2}{9(2\lambda+1)}
\frac{1}{{M_P^2}}V(\phi ).
\end{equation}
Meanwhile, under the slow-roll condition, the dynamical equation of the
scalar field, (\ref{phiddot}), takes the form
\begin{equation}
\label{phidot}
\dot \phi = -
\theta {\left \{\frac{\sqrt{3(2\lambda+1)}}{2+\lambda} {\left( {\frac{{{M_P}}}{{\sqrt 3 \alpha }}}
\right)\left( {\frac{{\theta V'(\phi )}}{{\sqrt {V(\phi )} }}}
\right){{\left( {2{M^4}} \right)}^{\alpha - 1}}}
\right\}^{\frac{1}{{2\alpha - 1}}}},
\end{equation}
where $\theta = \pm 1$ according to the sign of $V'(\phi)$ \cite{Unn12,kk}.
As mentioned in the previous sections, the main aim of this study is to investigate intermediate
inflation in an anisotropic Universe, i.e. the BI Universe. The scale factor is expressed as $a = (ABC)^{1/3}=(B)^{(\lambda+2)/3} $, in which the parameter $\lambda$ is introduced to indicate the deviation from the isotropic background and can be taken slightly larger or smaller than unity. Hence the term low anisotropy refers to these small deviations. The component $B$ of the metric in intermediate inflation is expressed as
\begin{equation}
\label{at}
B(t) = a_i \exp \left[ {{\kappa }{{\left( {{M_P}t} \right)}^f}}
\right],
\end{equation}
where $a_i$ is the scale factor in the $y$-axis direction, i.e. the $g_{22}$ component of the metric tensor, at the initial time of inflation. Thereupon, the mean scale factor is obtained as
$a=(a_i \exp [ {{\kappa }{{\left( {{M_P}t} \right)}^f}}
])^{(\lambda+2)/3}$.
By virtue of this definition, the Hubble and shear parameters are obtained as follows
\begin{equation}
\label{Hubble}
H^2=\frac{{{\kappa}^2{f^2}{{({M_p}t)}^{2f}}{{(2 + \lambda )}^2}}}{{9{t^2}}},
\end{equation}
and
\begin{equation}
\label{shear}
\sigma^2=\frac{{{\kappa}^2{f^2}{{({M_p}t)}^{2f}}{{( - 1 + \lambda )}^2}}}{{3{t^2}}}.
\end{equation}
In the above expressions we have the constraints ${\kappa }>0$ and $0<f<1$ \cite{BarowLiddle02,BarrowLiddle}. For convenience, the scale factor is normalized to its present-time value as $a_0=1$. Using Eqs.(\ref{Fri}) and (\ref{Fri2}) with the
intermediate scale factor (\ref{at}) one receives
\begin{eqnarray}
\label{rhot}
\rho _\phi &=& \frac{\kappa^{2} f^{2} M_p^2(M_pt)^{2f}(1 + 2\lambda )}{t^2}
,
\\
\label{pt}
{p_\phi } &=& - \frac{{{\kappa}fM_p^2{{({M_p}t)}^f}[2( - 1 + f)(2 + \lambda ) + {\kappa }f{{({M_p}t)}^f}(5 + 2\lambda (1 + \lambda ))]}}{{3{t^2}}}.
\end{eqnarray}
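As a consistency check, Eqs.(\ref{rhot}) and (\ref{pt}) can be recovered symbolically from the Friedmann equations (\ref{Fri}) and (\ref{Fri2}) together with Eqs.(\ref{Hubble}) and (\ref{shear}); a minimal SymPy sketch:
\begin{verbatim}
import sympy as sp

t, f, kappa, lam, Mp = sp.symbols('t f kappa lambda M_p', positive=True)

H = kappa*f*(Mp*t)**f*(2 + lam)/(3*t)                       # Eq. (Hubble)
sigma2 = kappa**2*f**2*(Mp*t)**(2*f)*(lam - 1)**2/(3*t**2)  # Eq. (shear)

rho = Mp**2*(3*H**2 - sigma2)                   # from Eq. (Fri)
p = -Mp**2*(3*H**2 + 2*sp.diff(H, t) + sigma2)  # from Eq. (Fri2)

rho_expected = kappa**2*f**2*Mp**2*(Mp*t)**(2*f)*(1 + 2*lam)/t**2
print(sp.simplify(rho - rho_expected))          # -> 0
print(sp.factor(sp.simplify(p)))                # reproduces Eq. (pt)
\end{verbatim}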
Considering the slow-roll condition, i.e. ${\rho _\phi } = V(\phi )$, and Eq.(\ref{rhot}) we obtain
\begin{equation}
\label{Vt1}
V(\phi )=\frac{{{\kappa ^2}{f^2}{M_P^2}{{\left( {M_P t} \right)}^{2f}}\left( {1 + 2\lambda } \right)}}{{{t^2}}}.
\end{equation}
Substituting Eq.(\ref{Vt1}) into (\ref{phidot}), we obtain a first-order differential equation for the scalar field as follows
\begin{equation}
\label{phit}
\dot \phi(t) = {\left( { - \frac{{{2^\alpha }\left( { - 1 + f} \right){{\left( {{M^4}} \right)}^{ - 1 + \alpha }}{M_P}\sqrt {1 + 2\lambda } \sqrt {\frac{{{\kappa ^2}{f^2}M_P^2{{\left( {{M_P}t} \right)}^{2f}}\left( {1 + 2\lambda } \right)}}{{{t^2}}}} }}{{\alpha \left( {2 + \lambda } \right)t}}} \right)^{\frac{1}{2\alpha}}}.
\end{equation}
Now, by integrating Eq.(\ref{phit}), and after some manipulation, the time $t$ can be obtained as a function of $\phi$,
\begin{eqnarray}
\label{tphi} \nonumber
t(\phi ) &= &{2^{ - \frac{{2\alpha }}{{ - 2 + f + 2\alpha }}}}\\
&\times&{\left( { - \frac{{\left( {2 - f - 2\alpha } \right)\phi }}{{{\left( { - \frac{{{2^\alpha }\left( { - 1 + f} \right){{\left( {{M^4}} \right)}^{ - 1 + \alpha }}{M_P}\sqrt {1 + 2\lambda } \sqrt {{\kappa ^2}{f^2}M_P^{2 + 2f}\left( {1 + 2\lambda } \right)} }}{{\alpha \left( {2 + \lambda } \right)}}} \right)}^{\frac{1}{2\alpha}}}\alpha }} \right)^{\frac{{2\alpha }}{{ - 2 + f + 2\alpha }}}}.
\end{eqnarray}
Then, to find the form of the potential, we substitute the above solution into Eq.(\ref{Vt1}), which gives
\begin{equation}
\label{Vt2}
V(\phi ) = {V_0}{\phi ^s},
\end{equation}
where
\[\begin{array}{l}
{V_0} = {\kappa ^2}{f^2}M_P^4\left( {1 + 2\lambda } \right)\\
\times \left( {{M_P}{{\left( { \frac{{-\left( {2 - f - 2\alpha } \right)}}{{2\alpha {{\left( { \frac{{{2^\alpha }\left( { 1 - f} \right){{\left( {{M^4}} \right)}^{ - 1 + \alpha }}\sqrt {{\kappa ^2}{f^2}M_P^{4 + 2f}{{\left( {1 + 2\lambda } \right)}^2}} }}{{\alpha \left( {2 + \lambda } \right)}}} \right)}^{\frac{1}{2\alpha}}}}}} \right)}^{\frac{{2\alpha }}{{ - 2 + f + 2\alpha }}}}} \right)^{-2+2f}\equiv { V_0}^\ast{\kappa ^\frac{4\alpha-2}{-2+2\alpha+f}}
\end{array}\]
is a constant and
\begin{equation}
\label{s}
s = \frac{{2\alpha ( - 2 + 2f)}}{{ - 2 + f + 2\alpha }}.
\end{equation}
It is obvious that the obtained potential behaves like a power-law potential \cite{BarowLiddle02,BarrowLiddle}. The parameter $f$ of the intermediate scale factor (\ref{at}) takes values between $0$ and $1$ \cite{Muslimov,BarowLiddle02,BarrowLiddle,kk,mohammadi}. Hence, from Eq.(\ref{s}) one can conclude that the parameter $s$ in (\ref{Vt2}) must lie in the range $-2\alpha/(\alpha - 1) < s < 0$ for intermediate inflation to exist, given that $\alpha>1$ according to Eq.(\ref{Lag}). In the standard canonical setting ($\alpha =1$), the parameter $s$ varies in the range $-\infty< s <0$.
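For concreteness, the range of $s$ can be checked numerically with the parameter values adopted later in the analysis ($\alpha=3$, $f=10^{-4}$):
\begin{verbatim}
def s_exponent(alpha, f):
    # Potential exponent s = 2*alpha*(2f - 2)/(f + 2*alpha - 2), Eq. (s).
    return 2.0*alpha*(2.0*f - 2.0)/(f + 2.0*alpha - 2.0)

alpha, f = 3.0, 1.0e-4
s = s_exponent(alpha, f)
lower = -2.0*alpha/(alpha - 1.0)   # boundary value approached as f -> 0
print(s, lower)                    # s ~ -2.9997, just inside (-3, 0)
\end{verbatim}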
Now, given the inverse power-law potential as the source of inflation under the slow-roll condition,
we can obtain the necessary relations
for determining the inflationary observables.
The scalar and tensor power spectra in the slow-roll regime are given by \cite{Gar}
\begin{eqnarray}
\label{psk}
{{\cal P}_s}&=&\left(\frac{H^2}{2\pi\left(c_s(\rho_{\phi}+p_{\phi})\right)^{1/2}}\right)^2_{aH_{iso}=c_sk}
,
\\
\label{ptk}
{{\cal P}_t} &=&\frac{8}{M_p^2}\left(\frac{H}{2\pi}\right)^2_{aH_{iso}=k}.
\end{eqnarray}
Considering the Lagrangian (\ref{Lag}) together with Eqs.(\ref{1an}, \ref{epsV}) in the anisotropic metric, the above equations become
\cite{Unn12,Unn13}
\begin{eqnarray}
\label{Ps}
{{\cal P}_s} &=&\frac{(2+\lambda)^3}{(3(2\lambda+1))^{3/2}} \frac{1}{{72{\pi ^2}{c_s}}}\left(
{\frac{{{6^\alpha }\alpha V{{(\phi )}^{5\alpha -
2}}}}{{M_P^{14\alpha - 8}{{ M}^{4(\alpha - 1)}}V'{{(\phi
)}^{2\alpha }}}}} \right)_{aH_{ani} = {c_s}k}^{\frac{1}{{2\alpha - 1}}}.\\
{{\cal P}_t}&=&\frac{(2+\lambda)^2}{3(2\lambda+1)}\Big(\frac{2V(\phi)}{3{\pi}^2 M_p^4}\Big)_{aH_{ani}= k}.\label{PTV}
\end{eqnarray}
To obtain Eqs.(\ref{Ps}) and (\ref{PTV}) we used $H_{ani}=\frac{2+\lambda}{\sqrt{3(2\lambda+1)}}H_{iso}$, where the subscripts ani and iso refer to the anisotropic and isotropic cases, respectively. Let us also explain the constraint $aH_{ani} = {c_s}k$ in the above equations: the scalar power spectrum should be evaluated at the sound horizon exit, specified by $aH_{ani} = {c_s}k$, where $k$ is the comoving wave number and
$c_{s}$ is the sound speed \cite{Arm,Gar,Li,Hwa,Fra10a,Fra10b,Unn12,Unn13,Zha14a,kk}. The sound speed is defined as
\begin{equation}
\label{csdef} c_s^2 \equiv \frac{{\partial {p_\phi }/\partial
X}}{{\partial {\rho _\phi }/\partial X}} = \frac{{\partial {\cal
L}(X,\phi )/\partial X}}{{\left( {2X} \right){\partial ^2}{\cal
L}(X,\phi )/\partial {X^2} + \partial {\cal L}(X,\phi )/\partial
X}}.
\end{equation}
For our investigation here it takes the form
\begin{equation}
\label{cs}
{c_s} = \frac{1}{{\sqrt{2\alpha - 1} }},
\end{equation}
which is simply a constant.
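This value follows from the definition (\ref{csdef}) applied to the Lagrangian (\ref{Lag}); a minimal SymPy check:
\begin{verbatim}
import sympy as sp

X, M, V, alpha = sp.symbols('X M V alpha', positive=True)
L = X*(X/M**4)**(alpha - 1) - V

num = sp.diff(L, X)                          # dp/dX
den = 2*X*sp.diff(L, X, 2) + sp.diff(L, X)   # drho/dX, Eq. (csdef)
print(sp.simplify(num/den))                  # -> 1/(2*alpha - 1)
\end{verbatim}
For example, $\alpha=3$ gives $c_s=1/\sqrt{5}\approx0.45$. Replacing the potential (\ref{Vt2}) into Eqs.(\ref{Ps}) and (\ref{PTV}), after some algebra, gives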
\begin{eqnarray}\label{Psphi}
{P_s} &= &{(\frac{{\left( {2 + \lambda } \right)}}{{\sqrt {3\left( {2\lambda + 1} \right)} }})^3}\\ \nonumber
&\times &\frac{{{{\left( {{6^\alpha }M_P^{8 - 14\alpha }{\mkern 1mu} \alpha {\mkern 1mu} {\mu ^{4 - 4\alpha }}{{\left( {\frac{{2\alpha \left( { - 2 + 2f} \right)}}{{ - 2 + f + 2\alpha }}} \right)}^{ - 2\alpha }}{{\left( {{V_0}} \right)}^{ - 2 + 3\alpha }}} \right)}^{\frac{1}{{ - 1 + 2\alpha }}}}}}{{72{\pi ^2}{c_s}}}(\phi )_{aH_{ani} = {c_s}k}^{\frac{{\alpha \left( {6f - 4} \right)}}{{2\alpha + f - 2}}},
\end{eqnarray}
and
\begin{equation}
\label{PTphi}
{{\cal P}_t} = {(\frac{{\left( {2 + \lambda } \right)}}{{\sqrt {3\left( {2\lambda + 1} \right)} }})^2}\frac{{2{V_0}}}{{3M_P^4{\pi ^2}}}(\phi )_{aH_{ani} = k}^{\frac{{4\left( { - 1 + f} \right)\alpha }}{{ - 2 + f + 2\alpha }}},
\end{equation}
where $\mu= M/M_p$. From Eq.(\ref{Psphi}) it can be seen that for $f=2/3$ the
scalar power spectrum becomes independent of $\phi$, and so
it resembles the scale-invariant Harrison-Zel$'$dovich spectrum. Now, in order to calculate the evolution of the power spectrum in terms of $N$, we need the scalar field as a function of the number of e-folds. Hence, we first calculate the value of the scalar field at the onset of inflation, namely $\phi_{i}$. To this end, according to the slow-roll parameter definition, i.e. Eq.(\ref{epsV}), we have
\begin{equation}
\label{epsilonphi}
{\varepsilon _V}=\sqrt {3\left( {2\lambda + 1} \right)} \frac{{{{\left( {{\alpha ^{ - 1}}{2^{ - \alpha }}{3^{ - 1 + \alpha }}M_P^{2\alpha }{{\left( {\frac{{2\alpha \left( { - 2 + 2f} \right)}}{{ - 2 + f + 2\alpha }}} \right)}^{2\alpha }}{{\left( {\frac{{{M^4}}}{{{V_0}}}} \right)}^{ - 1 + \alpha }}} \right)}^{\frac{1}{{ - 1 + 2\alpha }}}}}}{{2 + \lambda }}{\phi ^{\frac{{s - \alpha s - 2\alpha }}{{ - 1 + 2\alpha }}}}.
\end{equation}
Using the fact that at the beginning of inflation $\varepsilon _V=1$, we easily obtain the corresponding value of the scalar field as
\begin{equation}
\label{phibegin}
\phi_i = {{\Big[}{{\Big(}{\alpha ^{ - 1}}{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4\alpha - 4}}{V_o}^{1 - \alpha }M_p^{2\alpha }{\Big)}^{ - \frac{1}{{ - 1 + 2\alpha }}}}\chi{\Big]}^{\frac{{1 - 2\alpha }}{{ - s + 2\alpha + s\alpha }}}},
\end{equation}
where $\chi = \frac{{(2 + \lambda )}}{{\sqrt {3(2\lambda + 1)} }}$. Taking into account Eq.(\ref{N}), we get
\begin{equation}
\label{NN}
\phi = {{\Big(}\phi_i^{\frac{{2 - f}}{{ - 2 + f + 2\alpha } + s}} + \frac{N}{\Lambda }{\Big)}^{\frac{{ - 2 + f + 2\alpha }}{{(2 - f)(1 - s) + 2\alpha s}}}},
\end{equation}
where $$\Lambda = \frac{{2\alpha (2 + \lambda )\sqrt {{V_0}} {{({\gamma ^{ - \frac{{2\alpha }}{{ - 2 + f + 2\alpha }}}})}^{ - \frac{{ - 2 + f}}{{2\alpha }}}}}}{{3{M_p}( - 2 + f + 2\alpha )(1 + \frac{s}{2} - \frac{{ - 2 + f}}{{ - 2 + f + 2\alpha }})\gamma \sqrt {1 + 2\lambda } }},$$ and
$$\gamma = \frac{{2\alpha {{( - \frac{{{2^\alpha }( - 1 + f){M^{ - 4 + 4\alpha }}{M_p}\sqrt {1 + 2\lambda } \sqrt {{\kappa ^2}{f^2}{M_p}^{2 + 2f}(1 + 2\lambda )} }}{{\alpha (2 + \lambda )}})}^{1/2\alpha }}}}{{ - 2 + f + 2\alpha }}.$$
Now, by virtue of Eqs.(\ref{Psphi}) and (\ref{NN}),
the scalar power spectrum in
terms of the number of e-folds is given by
\begin{eqnarray}
\label{Psk}
{{\cal P}_s}={(\frac{{\left( {2 + \lambda } \right)}}{{\sqrt {3\left( {2\lambda + 1} \right)} }})^3}\frac{{{{\left( {{6^\alpha }M_P^{8 - 14\alpha }\,\alpha \,{\mu ^{4 - 4\alpha }}{{\left( {\frac{{2\alpha \left( { - 2 + 2f} \right)}}{{ - 2 + f + 2\alpha }}} \right)}^{ - 2\alpha }}{{\left( {{V_0}} \right)}^{ - 2 + 3\alpha }}} \right)}^{\frac{1}{{ - 1 + 2\alpha }}}}}}{{72{\pi ^2}{c_s}}}\times\cr
{{\Big(}{\Big(}\phi_i^{\frac{{2 - f}}{{ - 2 + f + 2\alpha } + s}} + \frac{N}{\Lambda }{\Big)}^{\frac{{ - 2 + f + 2\alpha }}{{(2 - f)(1 - s) + 2\alpha s}}}}{\Big)}^{\frac{{\alpha \left( {6f - 4} \right)}}{{2\alpha + f - 2}}}.
\end{eqnarray}
Both $H$, during slow-roll inflation, and $c_s$, in this work, are (nearly) constant. Consequently, at the sound horizon exit, $aH = c_s k$, we have \cite{Unn13}
\begin{equation}
\label{exithorizon}
\frac{{\rm{d}}}{{{\rm{d ln k}}}} \simeq -\frac{{\rm{d}}}{{{\rm{dN}}}}.
\end{equation}
The importance of this relation lies in the calculation of the spectral indices. For the scalar spectral index we can write
\begin{equation}
\label{nsdef}
{n_s} - 1 \equiv \frac{{d\ln {{\cal P}_s}}}{{d\ln k}},
\end{equation}
in which by using Eq. (\ref{Psk}) we get
\begin{eqnarray}
\label{nsk}
n_s&=&1 - \frac{{( - 4 + 6f)\alpha }}{{(2 - f)(1 - s) + 2\alpha s)\Lambda }}\cr
&\times&\frac{1}{{\frac{N}{\Lambda } + {{\Big{(}{{({{(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })}^{\frac{1}{{1 - 2\alpha }}}}\chi )}^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}}\Big{)}}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}}}.
\end{eqnarray}
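Given the cumbersome closed form, it is often convenient to evaluate the spectral index and its running numerically from the sampled power spectrum, using $d/d\ln k \simeq -\,d/dN$ from Eq.(\ref{exithorizon}); a generic sketch, with an arbitrary toy $\mathcal{P}_s(N)$ standing in for Eq.(\ref{Psk}):
\begin{verbatim}
import numpy as np

def ns_and_running(Ps_of_N, N, dN=1e-3):
    # Numerical n_s and alpha_s from P_s given as a function of the
    # e-fold number, using d/dlnk ~ -d/dN (Eq. exithorizon).
    lnP = lambda n: np.log(Ps_of_N(n))
    dlnP_dN = (lnP(N + dN) - lnP(N - dN)) / (2.0*dN)
    ns = 1.0 - dlnP_dN                   # n_s - 1 = -dlnP_s/dN
    d2 = (lnP(N + dN) - 2.0*lnP(N) + lnP(N - dN)) / dN**2
    alpha_s = d2                         # alpha_s = dn_s/dlnk = +d2lnP/dN2
    return ns, alpha_s

toy_Ps = lambda N: 2.2e-9 * (N/55.0)**0.04   # smooth toy spectrum
print(ns_and_running(toy_Ps, 55.0))
\end{verbatim}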
Another important parameter for investigating the behavior and evolution of the early cosmos is the running of the scalar spectral index, which we can write as
\begin{eqnarray}
\label{dnsk}
\alpha_s&=&\frac{{d{n_s}}}{{d\ln k}} =\frac{{( - 4 + 6f)\alpha }}{{{\Lambda ^2}(2 + f( - 1 + s) + 2s( - 1 + \alpha ))}}\cr
&\times&
{{\Bigg[\frac{N}{\Lambda } + {{\Big{(}{{({{(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })}^{\frac{1}{{1 - 2\alpha }}}}\chi )}^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}}\Big{)}}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}\Bigg]^{-2}}}
\end{eqnarray}
Having completed the scalar part, we now turn to the tensor part. In terms of the number of e-folds, the tensor power spectrum, by means of Eqs.(\ref{Lag}) and (\ref{PTphi}), is given by
\begin{equation}
\label{Pt}
{{\cal P}_t}=\frac{{2V_0{\chi ^2}{{3{M_p}^{-4}{\pi^{-2}}}}}}{\Bigg[{{\Big{(}\frac{N}{\Lambda } + {{({{({{(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })}^{\frac{1}{{1 - 2\alpha }}}}\chi )}^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}})}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}\Big{)}}^{\frac{1}{{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}}}\Bigg]^{-s}}.
\end{equation}
The tensor spectral index is defined as
\begin{equation}
\label{ntdef}
{n_t} \equiv \frac{{d\ln
{{\cal P}_t}}}{{d\ln k}}.
\end{equation}
Now by using Eqs.(\ref{exithorizon}), (\ref{Pt}), and the above equation one can obtain
\begin{eqnarray}
\label{ntn}
{n_t} &=& \frac{{-s( - 2 + f + 2\alpha )}}{{((2 - f)(1 - s) + 2\alpha s)\Lambda }}\cr
&\times&\frac{1}{{\frac{{N}}{\Lambda } + {{\Big{(}{{({{(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })}^{\frac{1}{{1 - 2\alpha }}}}\chi )}^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}}\Big{)}}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}}}.
\end{eqnarray}
To measure the amplitude of the primordial fluctuations we need to calculate the tensor-to-scalar ratio which is defined as
\begin{equation}
\label{rdef}
r \equiv \frac{{{{\cal P}_t}}}{{{{\cal P}_s}}},
\end{equation}
where by using Eqs. (\ref{Psk}), (\ref{Pt}) and (\ref{rdef}), it can be expressed by
\begin{eqnarray}
\label{rn}
r& =& {\Bigg[{\Big[\frac{N}{\Lambda } + {\Big{(}{({(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })^{\frac{1}{{1 - 2\alpha }}}}\chi )^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}}\Big{)}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}\Big]^{\frac{1}{{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}}}\Bigg]^{s + \frac{{(4 - 6f)\alpha }}{{ - 2 + f + 2\alpha }}}}\cr
&\times&\frac{{48{c_{s}}{V_0}{{({6^\alpha }M_{_p}^{^{8 - 14\alpha }}V_0^{^{ - 2 + 3\alpha }}{{(s)}^{ - 2\alpha }}\alpha {\mu ^{4 - 4\alpha }})}^{\frac{1}{{1 - 2\alpha }}}}}}{{M_p^4\chi }}.
\end{eqnarray}
The consistency relation between observable $r$ and $n_t$ in the non-canonical inflation is as follows
\begin{equation}
\label{rnt3}
r \approx -8 c_s n_t,
\end{equation}
which has an extra $c_s$ coefficient compared to the canonical case, i.e. $r=-8n_t$ \cite{Unn12,Unn13}.
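For the parameter values used below ($\alpha=3$, hence $c_s=1/\sqrt{5}$), this relation can be evaluated directly; the value of $n_t$ in the following snippet is illustrative only.
\begin{verbatim}
def r_from_nt(nt, alpha):
    # Non-canonical consistency relation r ~ -8 c_s n_t, Eq. (rnt3).
    cs = (2.0*alpha - 1.0)**-0.5
    return -8.0*cs*nt

print(r_from_nt(-0.02, 3.0))   # ~0.072 for an illustrative n_t
\end{verbatim}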
Substituting Eq.(\ref{ntn}) into Eq.(\ref{rnt3}) gives
\begin{eqnarray}
\label{rnt2}
r &\approx & \frac{{8 c_s s( - 2 + f + 2\alpha )}}{{((2 - f)(1 - s) + 2\alpha s)\Lambda }}\cr
&\times&\frac{1}{{\frac{{N}}{\Lambda } + {{\Big{(}{{({{(\frac{{{2^{ - \alpha }}{3^{ - 1 + \alpha }}{M^{4( - 1 + \alpha )}}V_0^{1 - \alpha }{{({M_p}s)}^{2\alpha }}}}{\alpha })}^{\frac{1}{{1 - 2\alpha }}}}\chi )}^{\frac{{1 - 2\alpha }}{{s( - 1 + \alpha ) + 2\alpha }}}}\Big{)}}^{s + \frac{{2 - f}}{{ - 2 + f + 2\alpha }}}}}}.
\end{eqnarray}
Subsequently, we check the accuracy and consistency of our theoretical results by comparing them with observations. One of the best criteria for this purpose is the data from Planck $2013$ and $2015$ \cite{Planck2015}. One of the most important results of the Planck data is the $r-n_s$ diagram, and the validity of different models relies on their compatibility with this observation.
Therefore, by virtue of
Eqs.(\ref{nsk}) and (\ref{rnt2}) we depict the $r-n_s$ diagram for our scenario. This diagram is shown in figure \ref{fignsr}, together with the marginalized likelihoods at the 68\% and 95\% Confidence Levels (CLs) allowed by the
Planck $2015$ TT, TE, EE+lowP data \cite{Planck2015}. The predictions of our model are indicated by the solid black line for the values $\alpha=3$, $\kappa=3.02\times 10^{-12}$, $f=10^{-4}$ and $\lambda=3.5$. From figure \ref{fignsr} it can be seen that our results lie in acceptable ranges compared to the observations. It can therefore be concluded that this scenario can be regarded as a valid case for explaining inflation.
\begin{figure}[ht]
\centering
\includegraphics[scale=.50]{wnsas}
\caption{{\it{The $r-n_s$ diagram showing the prediction of
the non-canonical intermediate inflationary model in an anisotropic background for the specified values $\alpha=3$, $\kappa=3.02\times 10^{-12}$, $f=10^{-4}$, $M_p=10^{18}$, $M=10^{12}$ and $\mu=10^{-6}$,
in comparison with the observational results of Planck 2015. The contours show the likelihoods of Planck 2013 (grey), Planck TT+lowP (red) and Planck TT,TE,EE+lowP (blue); the thick black line
indicates the predictions of our model, in which the small and large dots mark the value of $n_s$ at the e-fold numbers $N=55$ and $N=66$.}}}
\label{fignsr}
\end{figure}
The grey, red and blue CLs correspond to Planck $2013$, Planck $2015$ TT+lowP and Planck $2015$ TT, TE, EE+lowP data, respectively \cite{Planck2015}.
In Table \ref{FT} we study the behaviour of the parameter $\lambda$, i.e. the effect of a low anisotropy; one can observe that the predictions of the model for the perturbation parameters rely on the specific values of the free parameters $f,~\alpha,~\kappa$ but on different values of the parameter $\lambda$. Given Table \ref{FT}, it is clear that the $r-n_s$ result for $\alpha=3$ and $\lambda=3.5$ is most consistent with the Planck $2015$ TT, TE, EE+lowP results \cite{Planck2015}. Meanwhile, for $\lambda=1$ and $\alpha=3$ the results correspond to a non-canonical but isotropic Universe, for which $\mathcal{P}_s$ deviates slightly from the value observed by Planck, $\mathcal{P}^{\ast}_s=2.17\times 10^{-9}$.
\begin{table}[h]
\centering
{\footnotesize
\begin{tabular}{p{1.2cm}p{1.5cm}p{0.8cm}p{0.8cm}p{1.2cm}p{2.5cm}p{2.5cm}}
\hline
$\alpha$ & $\kappa~[10^{-12}]$ & $f$ & $\lambda$ & $n_s$ & $r$ & $\mathcal{P}_s$ \\[1mm]
\hline
$3$ & $3.02$ & $10^{-4}$ & $3.5$ & $0.978$ & $0.078$ & $2.17\times 10^{-9}$ \\[2mm]
$3$ & $3.02$ & $10^{-4}$ & $1$ & $0.978$ & $0.078$ & $6.199\times 10^{-10}$ \\[2mm]
$1$ & $3.02$ & $10^{-4}$ & $3.5$ & $0.964$ & $0.167$ & $2.7\times 10^{590950}$ \\[0.1mm]
$2$ & $3.02$ & $10^{-4}$ & $2.5$ & $0.976$ & $0.290$ & $7293.78$ \\[0.1mm]
$1$ & $3.02$ & $10^{-4}$ & $1$ & $0.964$ & $0.130$ & $9.07\times 10^{587695}$ \\[0.1mm]
\end{tabular}
}
\caption{\footnotesize The predictions of the model for the perturbation parameters $n_s$, $r$ and $\mathcal{P}_s$ for different values of the free parameters $\lambda$ and $\alpha$, with the other free parameters fixed. We also used $M_p=10^{18}$, $M=10^{12}$ and $N=55$. The analysis shows the best behaviour for the first row of the table. The second row shows the behaviour of the non-canonical model in an isotropic background, in which $\mathcal{P}_s$ deviates by almost one order of magnitude. The third row examines the canonical case under anisotropic conditions; the result is very far from the observed values, especially for $\mathcal{P}_s$. The values of the free parameters in the fourth row are supplied for further comparison. Finally, the last row considers the canonical case in an isotropic background; the results again do not agree with the values expected from observations.}\label{FT}
\end{table}
We now turn to the behaviour of the running spectral index, $\alpha_s=d{n_s}/dN -{n_s}$, in comparison with the observational results of the Planck data. We again take $\alpha=3$, $\kappa=3.02\times 10^{-12}$, $f=10^{-4}$, $M_p=10^{18}$, $M=10^{12}$, $\mu=10^{-6}$ and $\lambda=3.5$. Then, by using Eqs.~(\ref{nsk}) and
(\ref{dnsk}), we plot $d{n_s}/dN$ versus $n_s$. Figure \ref{fignsdns1} shows that the prediction of the model lies inside the joint 68\% CL region of the Planck $2015$ TT, TE, EE+lowP data, in agreement with the observations \cite{Planck2015}.
\begin{figure}[ht]
\centering
\includegraphics[scale=.50]{nsdns}
\caption{{\it{
The $d{n_s}/dN - {n_s}$ diagram showing the prediction of
the non-canonical intermediate inflationary model in an anisotropic background for the specified values $\alpha=3$, $\kappa=3.02\times 10^{-12}$, $f=10^{-4}$, $M_p=10^{18}$, $M=10^{12}$ and $\lambda=3.5$,
in comparison with the observational results of Planck 2015. The contours show the likelihoods of Planck $2013$ (grey), Planck $2015$ TT+lowP (red) and Planck $2015$ TT,TE,EE+lowP (blue); the thick black line
indicates the predictions of our model, in which the small and large dots mark the value of $n_s$ at the e-fold numbers $N=55$ and $N=65$.}}}
\label{fignsdns1}
\end{figure}
\newpage
\section{Conclusions}\label{seccon}
A well-known class of scale factors, namely the intermediate ones, has been investigated for a non-canonical Lagrangian in an anisotropic background. The main motivation for such an investigation is to cope with the drawbacks of the canonical and isotropic versions of inflationary scenarios. Despite some complications in the formulas and calculations raised by this extension of the model, it has been shown, without imposing any hand-made conditions, that the obtained potential automatically takes a steep form, i.e. $V = V_0 \: \phi^{s}$ with $s<0$.
This class of potentials, as has been shown, can be considered a suitable candidate to drive inflation in agreement with observational constraints. To examine our proposal we followed the slow-roll method, and all necessary parameters were estimated against a powerful criterion, namely the Planck $2015$ data. Among the observables we focused on the amplitudes of the scalar and tensor perturbations, their ratio, the scalar and tensor spectral indices, and their runnings. By combining the resulting potential with the slow-roll approach we examined the accuracy of our estimations and the claims about the success of the non-canonical anisotropic model. One of the most important results of the Planck data is the $r-n_s$ diagram, and the validity of theoretical models relies on their compatibility with this criterion. Having obtained all the necessary ingredients, we plotted our results on top of the original figures of the Planck collaboration papers, e.g. \cite{Planck2015}. Based on our $r-n_s$ analysis, the diagram in Fig.\ref{fignsr} shows that non-canonical anisotropic inflation with an intermediate scale factor can be considered a suitable candidate to drive inflation. Subsequently, in Table \ref{FT}, different asymptotic behaviours, based on the definitions of the Lagrangian and the BI metric, were evaluated for their compatibility with the values of $n_s$ and $\mathcal{P}_s$ reported by Planck. The best free parameters were found to be $\alpha=3$, $\lambda=3.5$ and $f=10^{-4}$, using the specific values $\kappa=3.02\times 10^{-12}$, $M_p=10^{18}$ and $M=10^{12}$ for the other parameters. To visualize the aforementioned asymptotic behaviour we first set $\lambda=1$ to recover the isotropic background; it was concluded that even for the well-motivated non-canonical Lagrangian the results in the isotropic universe show some deviation from the data. The situation is far worse for the canonical Lagrangian, even in an anisotropic background. Finally, in Fig.\ref{fignsdns1}, the predictions for the running of the spectral index also lie in acceptable ranges compared with the observational data, falling inside the joint 68\% CL region of the Planck 2015 TT, TE, EE+lowP data \cite{Planck2015}.
\section{Acknowledgement}
The authors thank the anonymous referee for his/her useful comments and suggestions, which have resulted in an improved version of the manuscript. HS would like to thank IPM, and especially H. Firouzjahi, for their hospitality and constructive discussions during his visit there. He is also grateful to ICTP, where the Summer School 2018 gave him constructive ideas about inflation and primordial fluctuations. He thanks G. Ellis, A. Weltman and UCT for arranging his short visit there and for good discussions about the primordial universe. He is also grateful to his wife, E. Avirdi, for her valuable notes and patience during their stay in South Africa.
\section{\label{Sec: Introduction} Introduction}
The cochlea is a highly sensitive device that is capable of sensing sound waves across a broad spectrum of frequencies ($20-20000 \si{Hz}$) and across a wide range of sound intensities ranging from $0 \si{dB}$ (threshold of hearing) up to $120 \si{dB}$ (sound of a jet engine). The cochlea was believed to be a passive device that acts like a Fourier analyzer: each frequency causes a vibration at a particular location on the basilar membrane (BM). This mechanism was discovered by the Nobel Prize winner George von B\'ek\'esy, who carried out his experiments on cochleae of human cadavers. However, in 1948, Thomas Gold hypothesized that the ear is rather an active device that has a component termed the cochlear amplifier. Although Gold's hypothesis was rejected by von B\'ek\'esy, David Kemp validated it thirty years later by measuring emissions from the ear. These emissions, termed otoacoustic emissions (OAEs), are sound waves that are produced by the cochlea and can be measured in the ear canal.
It is widely accepted that the outer hair cells, anchored on the cochlear partition, are responsible for the active gain in the cochlea that produces these emissions. However, the underlying mechanism is still not well understood. For example, spontaneous otoacoustic emissions (SOAEs) -- emissions generated in the absence of any stimulus -- are studied in \cite{fruth2014active} and \cite{ku2008statistics}. The remarkably high sensitivity of the cochlea makes it vulnerable to stochastic perturbations that are believed to be the cause of these emissions. Particularly, in \cite{ku2008statistics}, the authors studied the instabilities that arise in a linear biomechanical cochlear model with spatially random active gain profiles that are static in time. In \cite{fruth2014active}, similar analysis was carried out on simplified cochlear models comprised of coupled active nonlinear oscillators. The randomness, or disorder, was introduced via static variations of a bifurcation parameter. In these previous works, the analysis was carried out through Monte Carlo simulations by studying the stability of different randomly generated active gain (or bifurcation) profiles.
In this paper, we carry out a \textit{simulation-free} stability analysis of the linearized dynamics of a nonlinear model of the cochlea. Our analysis employs structured stochastic uncertainty theory (\cite{bamieh2018structured}, \cite{filo2018structured}, \cite{lu2002mean}, \cite{elia2005remote}) rather than Monte Carlo simulations, where the active gain is stochastic in space and time and may have a spatially-varying expectation and/or covariance. It turns out that letting the active gain be a stochastic process puts the model in a standard setting of linear time-invariant (LTI) systems in feedback with a diagonal stochastic process that enters the dynamics multiplicatively (see Figure~\ref{Fig: Feedback Block Diagram}). This analysis allows us to predict the locations on the BM where the dynamics are more likely to destabilize due to the underlying uncertainties. It also provides a bound on the variance of the perturbations allowed such that stability is maintained.
The rest of the paper is organized as follows: we start by providing a brief description of a class of biomechanical models of the cochlea in section \ref{Section: Brief Model Description}. Then, in section \ref{Section: DSS Formulation}, we recast this class of models in a descriptor state space (DSS) form using operator language (i.e. in continuous space-time). In section \ref{Section: Active Gain Uncertainties}, we reformulate the DSS form in a standard setting that is particularly useful to carry out our stochastic uncertainty analysis. We also provide the conditions for mean-square stability (MSS). In section \ref{Section: Cochlear Instabilities}, we present the numerical results of the possible instabilities caused by stochastic gain profiles with different statistical properties. To validate our analysis, we show a stochastic simulation for the full nonlinear model in section~\ref{Section: Simulations}. Finally, before we conclude, we provide a discussion in section~\ref{Section: Discussion}, giving a physical interpretation of our results and some comments on previous works.
\section{Biomechanical Model of the Cochlea}
Throughout the literature, cochlear modeling attempts varied depending on two main factors. The first is concerned with the degree of biological realism of the mathematical model. This is realized by the incorporation of various biological structures (\cite{geisler1995cochlear}, \cite{lamar2006signal}, \cite{neely1986model}) and the dimensionality of the fluid filling the cochlear chambers (\cite{steele1979comparison}, \cite{givelberg2003comprehensive}). The second factor is concerned with the computational aspect of the models. Different numerical methods were devised to approach the spatio-temporal nature of the cochlea (\cite{neely1981finite}, \cite{elliott2013wave}). Particularly, \cite{elliott2007state} used a finite difference method developed in \cite{neely1981finite} to discretize space and formulate the model in state space form. Moreover, computationally efficient methods and model reduction techniques were developed for fast simulations of cochlear response (\cite{bertaccini2011fast}, \cite{filo2016order}). This section starts by describing the mathematical model adopted in this paper. Then, we reformulate the latter in a continuous space-time descriptor state space form, using operator language. This form has two advantages: (a) it encompasses a wider class of cochlear models and (b) it makes the dynamics more transparent by treating the exact model and its finite dimensional approximation (i.e. discretizing space by some numerical method) separately \cite{filo2017topics}.
\subsection{Mathematical Model Description} \label{Section: Brief Model Description}
The mathematical model can be divided into two main blocks as illustrated in Figure~\ref{Fig: Block Diagram of the Ear}(a). For a detailed derivation of the governing mechanics, refer to \cite{elliott2007state} and \cite{filo2016order} for a one and two dimensional modeling of the fluid stage, respectively.
\begin{figure}[h]
\centering
\begin{tabular}{l}
\includegraphics[scale = 0.7]{Figures/EarBlockDiagram.pdf} \\ \footnotesize{(a) Block Diagram of the Cochlea} \\
\includegraphics[scale = 0.7]{Figures/MicroMechanics.pdf} \\ \footnotesize{(b) Detailed Schematic Representing the Membranes Block }
\end{tabular}
\caption{\footnotesize{(a) The cochlea processes the acceleration of the stapes $\ddot s(t)$, in two stages, to produce the vibrations at every location of the BM, $u(x,t)$. The first stage is governed by the fluid that is stimulated by both the stapes and BM accelerations to yield a pressure $p(x,t)$ acting on every location of the BM. The second stage is governed by the dynamics of the membranes. The two stages are in feedback through the BM acceleration. (b) This figure is a schematic of a cross section (at a location $x$) of the cochlear partition showing the membranes governing the dynamics of the micro-mechanical stage. The spatially varying parameters $m_i$, $c_i(x)$ and $k_i(x)$ are the mass, damping coefficient and stiffness of the BM and TM for $i = 1$ and $2$, respectively. Furthermore, $c_3(x)$ and $k_3(x)$ are the mutual damping coefficient and stiffness, respectively; while $c_4(x)$ and $k_4(x)$ are the damping coefficient and stiffness associated with the active feedback gain from the outer hair cells (OHC) to the BM. The spring and damper between the BM and the OHC have variable negative values to capture the effect of the active force acting only on the BM without any direct effect on the TM. Their values depend on the BM displacement $u$ via the nonlinear gain $\mathcal G(u)$. Equation (\ref{Eqn: Micromechanics}) describes the underlying dynamics.}}
\label{Fig: Block Diagram of the Ear}
\end{figure}
The fluid block, commonly referred to as the macro-mechanical stage, is linear and memoryless under the appropriate assumptions and approximations (refer to Appendix-\ref{Section: Mass Operators}). This block introduces spatial coupling along the different locations on the BM. Its output is the pressure $p(x,t)$ acting on each location of the BM. The governing equation can be written as a general expression, regardless of the dimensionality of the fluid and the numerical method used, as
\begin{equation} \label{Eqn: BM Pressure}
p(x,t) = -[\mathcal M_f \ddot u](x,t) - [\mathcal M_s \ddot s](t),
\end{equation}
where $\ddot{ }$ represents the second time derivative operation, and $\mathcal M_f$ and $\mathcal M_s$ are linear spatial operators associated with the fluid and stapes mass, respectively. Refer to Appendix-\ref{Section: Mass Operators} for a more detailed discussion of these mass operators and their finite dimensional approximations as matrices $M_f$ and $M_s$, respectively. The second block, commonly referred to as the micro-mechanical stage, takes the distributed pressure $p(x,t)$ as an input to produce the BM vibrations $u(x,t)$ at every location according to the following differential equations
\begin{equation} \label{Eqn: Micromechanics}
\begin{aligned}
\begin{bmatrix} \frac{g}{b} m_1 & 0 \\ 0 & m_2 \end{bmatrix}
\begin{bmatrix} \ddot u \\ \ddot v \end{bmatrix} +
\begin{bmatrix} \frac{g}{b}(c_1 + c_3 - \mathcal G(u)c_4) & \mathcal G(u) c_4 - c_3 \\ -\frac{g}{b}c_3 & c_2 + c_3 \end{bmatrix}
\begin{bmatrix} \dot u \\ \dot v \end{bmatrix} \\ +
\begin{bmatrix} \frac{g}{b}(k_1 + k_3 - \mathcal G(u)k_4) & \mathcal G(u) k_4 - k_3 \\ -\frac{g}{b}k_3 & k_2 + k_3 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix} =
\begin{bmatrix} p \\ 0 \end{bmatrix},
\end{aligned}
\end{equation}
where $v(x,t)$ is the tectorial membrane (TM) vibration (refer to Figure~\ref{Fig: Block Diagram of the Ear}(b)). Note that the space and time variables $(x,t)$ are dropped where necessary for notational compactness. The constant $b$ is the ratio of the average to maximum vibration along the width of the BM, and $g$ is the BM to outer hair cells lever gain. Refer to \cite{neely1986model} for a detailed explanation of the parameters. Finally, $\mathcal G$ is the nonlinear active gain operator that captures the active nature of the outer hair cells, commonly referred to as the cochlear amplifier. In the spirit of \cite{lamar2006signal}, the action of $\mathcal G$ on a distributed BM displacement profile $u$ is given by
\begin{equation} \label{Eqn: Nonlinear Gain}
\left[\mathcal G(u)\right](x,t) = \frac{\gamma(x)}{1 + \theta \left[\Phi_\eta\left(\frac{u^2}{R^2}\right)\right](x,t)},
\end{equation}
where the gain coefficient $\gamma(x)$ represents the gain at a location $x$, in the absence of any stimulus ($u(x,t) = 0$). The constants $\theta$ and $R$ are the nonlinear coupling coefficient and BM displacement normalization factor, respectively. The operator $\Phi_{\eta}$ is a normalized Gaussian operator such that its action on $u$ is defined as
\begin{align}
[\Phi_\eta(u)](x,t) &:= \frac{\int_0^L \phi_{\eta}(x-\xi) u(\xi,t) d\xi}{\int_0^L \phi_{\eta}(x-\xi)d\xi}; \label{Eqn: Guassian Weighing Operator}\\
\phi_{\eta}(x) &:= \frac{1}{\eta\sqrt{2\pi}}e^{\frac{-x^2}{2\eta^2}}\label{Eqn: Gaussian Kernel},
\end{align}
where $L$ is the length of the BM and $\phi_{\eta}$ is the Gaussian kernel with a width $\eta$.
Note that $\eta = 0.5345 \si{mm}$ corresponds to the equivalent rectangular bandwidth on the BM (refer to Appendix-\ref{Section: ERB} for a detailed explanation).
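For concreteness, the following Python sketch discretizes $\Phi_\eta$ on a uniform grid, approximating the integrals in (\ref{Eqn: Guassian Weighing Operator}) by Riemann sums; the BM length, grid size and test profile are placeholder choices.
\begin{verbatim}
# Minimal sketch of the normalized Gaussian operator Phi_eta on a grid.
import numpy as np

L, Nx, eta = 35e-3, 400, 0.5345e-3   # BM length [m], grid, width [m]
x = np.linspace(0.0, L, Nx + 1)
dx = x[1] - x[0]

def phi(z, width):
    # Gaussian kernel of Eq. (Gaussian Kernel)
    return np.exp(-z**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

# Kernel matrix: entry (i, j) weighs u(x_j) in the average at x_i.
K = phi(x[:, None] - x[None, :], eta) * dx
K /= K.sum(axis=1, keepdims=True)    # divide by the kernel integral

u = np.sin(2 * np.pi * x / L)        # arbitrary test displacement profile
u_smoothed = K @ u                   # [Phi_eta(u)](x) on the grid
\end{verbatim}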
Observe that the spatial coupling in the micro-mechanical stage appears only in the nonlinear active gain (\ref{Eqn: Nonlinear Gain}).
\subsection{Deterministic Descriptor State Space Formulation of the Linearized Dynamics in Continuous Space-Time} \label{Section: DSS Formulation}
This section gives a Descriptor State Space (DSS) formulation of the cochlear model described in (\ref{Eqn: BM Pressure}) and (\ref{Eqn: Micromechanics}). The DSS form is given for the linearized dynamics around the only fixed point which is the origin.
It can be shown (Appendix-\ref{Section: System Linearization}) that the linearized dynamics can be achieved by simply replacing the nonlinear active gain $[\mathcal G(u)](x,t)$ in (\ref{Eqn: Micromechanics}) by its gain coefficient $\gamma(x)$. First, define the state space variable $\psi(x,t)$ in continuous space-time as
\begin{equation} \label{Eqn: State Space Variable}
\psi(x,t) := \begin{bmatrix} u(x,t) & v(x,t) & \dot u(x,t) & \dot v(x,t) \end{bmatrix}^T.
\end{equation}
Then the DSS form of the linearized dynamics is
\begin{equation} \label{Eqn: Descriptor State Space Operator Form}
\begin{aligned}
\mathcal E \frac{\partial}{\partial t} \psi(x,t) &= \mathcal A_{\gamma} \psi(x,t) + \mathcal B \ddot s(t) \\
u(x,t) &= \mathcal C \psi(x,t),
\end{aligned}
\end{equation}
where $\mathcal E$, $\mathcal A_{\gamma}$ and $\mathcal B$ are matrices of linear spatial operators defined as follows
\begin{align*}
&\mathcal E :=
\begin{bmatrix}
\mathcal I & 0 & 0 & 0 \\
0 & \mathcal I & 0 & 0 \\
0 & 0 & \frac{g}{b}m_1 \mathcal I + \mathcal M_f & 0 \\
0 & 0 & 0 & m_2 \mathcal I \\
\end{bmatrix}; \quad \mathcal B :=
\begin{bmatrix} 0 \\ 0 \\ -\mathcal M_s \\ 0 \end{bmatrix}; \\
&\mathcal A_{\gamma} := \mathcal A_0 + \mathcal B_0 \gamma \mathcal C_0; \qquad
\mathcal C := \begin{bmatrix} \mathcal I & 0 & 0 & 0 \end{bmatrix}; \\
&\Scale[0.97]{\mathcal A_0 := \resizebox{0.92\hsize}{!}{$
\begin{bmatrix}
0 & 0 & \mathcal I & 0\\
0 & 0 & 0 & \mathcal I\\
-\frac{g}{b}(k_1+k_3) & k_3 & -\frac{g}{b}(c_1+c_3) & c_3\\
\frac{g}{b}k_3 & -(k_2+k_3) & \frac{g}{b}c_3 &-(c_2+c_3)
\end{bmatrix}$};} \\
&\mathcal B_0 := \begin{bmatrix} 0 & 0 & \mathcal I & 0 \end{bmatrix}^T; \qquad
\mathcal C_0 := \begin{bmatrix} \frac{g}{b}k_4 & -k_4 & \frac{g}{b}c_4 & -c_4 \end{bmatrix};
\end{align*}
and $\mathcal I $ is the identity operator. The equations in (\ref{Eqn: Descriptor State Space Operator Form}) represent a deterministic evolution differential equation and an output equation that provides the distributed displacement of the BM $u(x,t)$. Other outputs can be selected, such as the TM displacement, by appropriately constructing the $\mathcal C$ operator. In the subsequent section, we slightly modify the dynamical equations to account for stochastic perturbations in the gain coefficient $\gamma(x)$.
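To make the block structure above concrete, the following Python sketch assembles finite-dimensional approximations of $\mathcal E$, $\mathcal A_{\gamma}$, $\mathcal B_0$ and $\mathcal C_0$ on a spatial grid. The scalar values and parameter profiles ($m_1$, $m_2$, the $c_i$ and $k_i$, $g/b$, $\bar\gamma$) and the fluid-mass matrix \texttt{Mf} are placeholders; the actual values and the construction of $M_f$ are given in the appendices.
\begin{verbatim}
# Sketch: finite-dimensional DSS matrices (placeholder parameters).
import numpy as np

Nx = 400
n = Nx + 1
I, Z, D = np.eye(n), np.zeros((n, n)), np.diag
g_b, m1, m2 = 1.0, 1.0, 1.0                 # placeholders for g/b, m1, m2
c1 = c2 = c3 = c4 = np.ones(n)              # placeholder damping profiles
k1 = k2 = k3 = k4 = np.ones(n)              # placeholder stiffness profiles
Mf = np.zeros((n, n))                       # placeholder fluid-mass matrix
gamma_bar = np.ones(n)                      # gain coefficient expectation

E = np.block([[I, Z, Z, Z],
              [Z, I, Z, Z],
              [Z, Z, g_b * m1 * I + Mf, Z],
              [Z, Z, Z, m2 * I]])
A0 = np.block([[Z, Z, I, Z],
               [Z, Z, Z, I],
               [-g_b * D(k1 + k3), D(k3), -g_b * D(c1 + c3), D(c3)],
               [g_b * D(k3), -D(k2 + k3), g_b * D(c3), -D(c2 + c3)]])
B0 = np.vstack([Z, Z, I, Z])                # injects pressure into BM rows
C0 = np.hstack([g_b * D(k4), -D(k4), g_b * D(c4), -D(c4)])
A_gamma = A0 + B0 @ D(gamma_bar) @ C0       # A_gamma = A0 + B0 gamma C0
\end{verbatim}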
\section{Stochastic Uncertainties in the Active Gain} \label{Section: Active Gain Uncertainties}
This section investigates the Mean Square Stability (MSS, which we will formally define in section \ref{Section: Stochastic Feedback Interconnection}) of the linearized cochlear dynamics when the gain coefficient is a \textit{spatio-temporal stochastic process}. The stochastic gain coefficient, now denoted by $\gamma(x,t)$ to account for spatio-temporal perturbations, enters the dynamics (\ref{Eqn: Descriptor State Space Operator Form}) multiplicatively. We first reformulate the dynamics as an LTI system in feedback with a diagonal stochastic gain, which is a standard setting in robust control theory \cite[Section 10.3]{zhou1996robust}. Then we carry out our MSS analysis based on \cite{filo2018structured}. By tracking the evolution of the instantaneous spatial covariances, MSS analysis allows us to predict the locations on the BM that are more likely to become unstable due to the underlying stochastic uncertainty. We conclude this section by defining and analyzing a linear operator, whose spectral radius provides a condition for MSS.
\subsection{Stochastic Feedback Interconnection} \label{Section: Stochastic Feedback Interconnection}
The purpose of this section is to separate the stochastic portion of the gain coefficient in a feedback interconnection. We assume that $\gamma(x,t)$ is a \textit{spatio-temporal stochastic process} that is white in time (but may be colored in space), and whose expectation and covariance are independent of time. More precisely, let $\bar \gamma(x)$ be the expectation of $\gamma(x,t)$ and $\tilde \gamma(x,t)$ be a temporally independent, zero mean stochastic perturbation, such that
\begin{equation} \label{Eqn: Stochastic Gain}
\begin{aligned}
\gamma(x,t) &= \bar \gamma(x) + \epsilon \tilde \gamma(x,t), \\
\text{with } & \left\{
\begin{aligned}
&\mathbb E[\gamma(x,t)] = \bar \gamma(x) \\
&\mathbb E[\tilde \gamma(x,t) \tilde \gamma(\xi,\tau)] = \mathbf \Gamma(x,\xi) \delta(t-\tau)
\end{aligned}\right. ~~ \forall t\geq 0,
\end{aligned}
\end{equation}
where $\mathbb E[.]$ denotes the expectation, $\epsilon$ is a perturbation parameter, $\delta(t)$ is the Dirac Delta function, and $\mathbf \Gamma(x,\xi)$ is a positive semi-definite covariance kernel.
Substituting (\ref{Eqn: Stochastic Gain}) in (\ref{Eqn: Descriptor State Space Operator Form}) yields
\begin{equation} \label{Eqn: Stochastic Feedback}
\begin{aligned}
\mathcal E \frac{\partial}{\partial t}\psi(x,t) &= (\mathcal A_{\bar \gamma} + \epsilon \mathcal B_0 \tilde \gamma \mathcal C_0) \psi(x,t) + \mathcal B \ddot s(t)\\
u(x,t) &= \mathcal C \psi(x,t).
\end{aligned}
\end{equation}
The evolution equation in (\ref{Eqn: Stochastic Feedback}) is a Stochastic Partial Differential Equation (SPDE) that is given an It\=o interpretation in the time variable. For more details on It\=o calculus, refer to \cite{oksendal2003stochastic}.
Define a secondary output related to the difference in BM and TM displacements and velocities as
\begin{equation} \label{Eqn: Secondary Output}
\begin{aligned}
y(x,t) &:= \epsilon \mathcal C_0 \psi(x,t).
\end{aligned}
\end{equation}
Furthermore, define the active feedback pressure resulting from the stochastic perturbations to be
\begin{equation} \label{Eqn: Feedback Pressure}
p_a(x,t) := \tilde \gamma(x,t) y(x,t).
\end{equation}
Therefore, using (\ref{Eqn: Stochastic Feedback}), (\ref{Eqn: Secondary Output}) and (\ref{Eqn: Feedback Pressure}), construct the feedback block diagram depicted in Figure \ref{Fig: Feedback Block Diagram}.
\begin{figure}[h]
\centering
\includegraphics[scale = 0.9]{Figures/FeedbackDiagram.pdf}
\caption{\footnotesize{The linearized cochlear model in feedback with multiplicative stochastic gain. The block to the top represents the deterministic portion of the linearized cochlear dynamics casted in a descriptor state space form. The feedback block is a diagonal spatial operator that represents the multiplicative stochastic gain. $y(x,t)$ is the differential vibration and velocity between the BM and TM as given by (\ref{Eqn: Secondary Output}). $p_a(x,t)$ is the active pressure that results from the stochastic component of the active gain.}}
\label{Fig: Feedback Block Diagram}
\end{figure}
This is a standard setting \cite{filo2018structured} for structured stochastic uncertainty analysis, where the feedback gain is a diagonal spatial operator. This configuration is used to investigate the MSS of the cochlea, which is formally defined next.
\textit{Definition}: The feedback system in Figure~\ref{Fig: Feedback Block Diagram} is MSS if, in the absence of an input (i.e. $\ddot s(t) = 0$), the state $\psi(x,t)$ and the active feedback pressure $p_a(x,t)$ have bounded variances for all time.
Therefore, to study MSS, we need to track the temporal evolution of the variances and look at their steady state limits as $t$ goes to $+\infty$. This is the topic of the next subsection.
\subsection{Temporal Evolution of the Covariance Operators} \label{Section: Covariance Evolution}
This section tracks the time evolution of the covariance operators in the absence of any input (i.e. we set $\ddot s(t) = 0$ for the rest of the paper). We use the term covariance ``operators'' rather than covariance matrices because the spatial variables $x$ and $\xi$ are continuous. After using some numerical method to discretize space, the covariance operators can be approximated by covariance matrices.
With a slight abuse of notation, we use the same symbol to denote a covariance operator and its associated kernel. Define the following instantaneous spatial covariance kernels
\begin{equation} \label{Eqn: Covariance Definitions}
\begin{aligned}
\mathcal X(x,\xi;t) &:= \mathbb E[\psi(x,t) \psi(\xi,t)] \\
\mathcal Y(x,\xi;t) &:= \mathbb E[y(x,t) y(\xi,t)] \\
\mathcal P(x,\xi;t) &:= \mathbb E[p_a(x,t) p_a(\xi,t)] \\
\mathcal U(x,\xi;t) &:= \mathbb E[u(x,t) u(\xi,t)] \\
\mathbf \Gamma(x,\xi) &:= \mathbb E[\tilde \gamma(x,t) \tilde \gamma(\xi,t)] \qquad \forall t\geq 0.
\end{aligned}
\end{equation}
Given that the stochastic perturbations $\tilde \gamma$ are temporally independent, it can be shown \cite[Section V]{filo2018structured} that the time evolution of the covariance operators is governed by the following operator-valued, \textit{differential algebraic} equations
\begin{equation} \label{Eqn: Covariance Evolution Equations}
\begin{aligned}
\mathcal E\dot {\mathcal X} \mathcal E^* &= \mathcal A_{\bar \gamma} \mathcal X \mathcal E^* + \mathcal E \mathcal X \mathcal A_{\bar \gamma}^* + \mathcal B_0 \mathcal P \mathcal B_0^* \\
\mathcal Y &= \epsilon ^2 \mathcal C_0 \mathcal X \mathcal C_0^*\\
\mathcal P &= \mathbf \Gamma \circ \mathcal Y,
\end{aligned}
\end{equation}
where $*$ is the adjoint operation and $\circ$ is the Hadamard product; i.e. the element-by-element multiplication of the kernels $\mathcal P(x,\xi;t) = \mathbf \Gamma(x,\xi) \mathcal Y(x,\xi;t)$.
In order to study the MSS, we need to look at the steady state limit of the covariances. We denote the asymptotic limit of a covariance operator, when it exists, by an overbar. That is
\begin{equation} \label{Eqn: Covariance Limits}
\begin{aligned}
\bar {\mathcal X} := \lim_{t\to\infty} \mathcal X(t); ~~
\bar {\mathcal Y} := \lim_{t\to\infty} \mathcal Y(t); ~~
\bar {\mathcal P} := \lim_{t\to\infty} \mathcal P(t) .
\end{aligned}
\end{equation}
At the steady state, the covariances become constant in time and thus their time derivatives go to zero. Hence, the steady state covariances, if they exist, are governed by the following operator-valued \textit{algebraic} equations:
\begin{equation} \label{Eqn: State State Covariance Equations}
\begin{aligned}
&\mathcal A_{\bar \gamma} \bar{\mathcal X} \mathcal E^* + \mathcal E \bar{\mathcal X} \mathcal A_{\bar \gamma}^* + \mathcal B_0 \bar{\mathcal P} \mathcal B_0^* = 0 \\
&\bar{\mathcal Y} = \epsilon^2 \mathcal C_0 \bar{\mathcal X} \mathcal C_0^*\\
&\bar{\mathcal P} = \mathbf \Gamma \circ \bar{\mathcal Y}.
\end{aligned}
\end{equation}
In the next section, we will use (\ref{Eqn: State State Covariance Equations}) to define a new operator as a tool to check the boundedness of the steady state covariances.
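In a finite-dimensional approximation, the first equation of (\ref{Eqn: State State Covariance Equations}) becomes a generalized Lyapunov matrix equation, which can be solved directly, e.g. by Kronecker vectorization using $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$. The Python sketch below does this with small random placeholder matrices in place of the cochlear operators.
\begin{verbatim}
# Sketch: solve A X E^T + E X A^T + B0 P B0^T = 0 by vectorization.
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable placeholder
E = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # placeholder E
B0 = rng.standard_normal((n, 5))
P = np.eye(5)                                       # input covariance
Q = B0 @ P @ B0.T

# vec(A X E^T) = (E kron A) vec(X) for column-stacked vec (order="F")
K = np.kron(E, A) + np.kron(A, E)
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")

assert np.allclose(A @ X @ E.T + E @ X @ A.T + Q, 0, atol=1e-8)
\end{verbatim}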
\subsection{Loop Gain Operator \& MSS}
Using (\ref{Eqn: State State Covariance Equations}), define the loop gain operator $\mathbb{L}_{\mathbf \Gamma}$, parametrized by the perturbation covariance $\mathbf \Gamma$, as
\begin{equation} \label{Eqn: Loop Gain Operator}
\begin{aligned}
\mathbb{L}_{\mathbf \Gamma}(\bar{\mathcal P}_{\text{in}}) = \bar{\mathcal P}_{\text{out}} \Longleftrightarrow \left\{
\begin{aligned}
&\bar{\mathcal P}_{\text{out}} = \mathbf \Gamma \circ (\mathcal C_0 \bar{\mathcal X} \mathcal C_0^*) \\
&\mathcal A_{\bar \gamma} \bar{\mathcal X} \mathcal E^* + \mathcal E \bar{\mathcal X} \mathcal A_{\bar \gamma}^* + \mathcal B_0 \bar{\mathcal P}_{\text{in}} \mathcal B_0^* = 0.
\end{aligned} \right.
\end{aligned}
\end{equation}
The MSS condition is given in terms of the spectral radius of the loop gain operator as explained next.
\textit{\textbf{Theorem}}: \textit{Consider the system in Figure~\ref{Fig: Feedback Block Diagram} where $\tilde \gamma$ is a temporally independent multiplicative noise, interpreted in the sense of It\=o, with instantaneous spatial covariance $\mathbf \Gamma$, and $\mathcal M$ is a stable causal LTI system. The feedback system is MSS if and only if the spectral radius of the loop gain operator is strictly less than one, i.e.
\begin{equation} \label{Eqn: MSS Condition}
\epsilon^2 \rho(\mathbb L_{\mathbf \Gamma}) < 1,
\end{equation}
where $\mathbb L_{\mathbf \Gamma}$ is defined in (\ref{Eqn: Loop Gain Operator}) and $\rho(\mathbb L_{\mathbf \Gamma})$ is its spectral radius.}
The proof of this theorem is given in \cite{filo2018structured}. This theorem will be used to find an upper bound on the perturbation constant $\epsilon$ above which MSS is violated.
\subsection{Worst-Case Covariances}
The loop gain operator maps a covariance operator $\bar{\mathcal P}_{\text{in}}$ into another covariance operator $\bar{\mathcal P}_{\text{out}}$. Hence, the eigenvectors of $\mathbb L_{\mathbf \Gamma}$ are themselves operators. When a finite dimensional approximation of $\mathbb L_{\mathbf \Gamma}$ is carried out using some numerical method, these eigenvectors can be approximated as matrices. We are particularly interested in the eigenvector (or eigen-operator) of $\mathbb L_{\mathbf \Gamma}$ associated with the largest eigenvalue because it has a significant meaning explained in this subsection.
First, since the loop gain operator is a monotone operator \cite{bamieh2018structured}, it is guaranteed to have a real largest eigenvalue equal to $\rho(\mathbb L_{\mathbf \Gamma})$. It is also guaranteed that the eigen-operator associated with the largest eigenvalue is positive semidefinite, i.e. there exists a positive semidefinite covariance operator $\textbf{P}$ such that
\begin{equation} \label{Eqn: Worst Case Covariance}
\mathbb L_{\mathbf \Gamma}(\textbf{P}) = \rho(\mathbb L_{\mathbf \Gamma}) \textbf{P}.
\end{equation}
Note that $\textbf{P}$ is the operator counterpart of the Perron-Frobenius eigenvector for matrices with non-negative entries. Refer to \cite[Thm 2.3]{bamieh2018structured} for a proof of the aforementioned guarantees. If the stability condition (\ref{Eqn: MSS Condition}) is violated, $\textbf{P}$ will be the covariance mode that has the highest growth rate, hence the name ``worst-case'' covariance. This provides information about the locations on the BM that are more likely to destabilize due to the stochastic perturbations of the gain. Particularly, since we are interested in the instabilities at the BM, the worst-case covariance of the BM vibrations, denoted by $\mathbf U$, can be computed by propagating the worst-case pressure covariance $\textbf{P}$ through the cochlear dynamics (at steady state) as follows
\begin{equation} \label{Eqn: BM Worst Case Covariance}
\begin{aligned}
&\mathcal A_{\bar \gamma} \textbf{X} \mathcal E^* + \mathcal E \textbf{X} \mathcal A_{\bar \gamma}^* + \mathcal B_0 \textbf{P} \mathcal B_0^* = 0 \\
&\textbf{U} = \mathcal C \textbf{X} \mathcal C^*,
\end{aligned}
\end{equation}
where $\textbf{X}$ denotes the worst-case covariance operator corresponding to the state space variable $\psi$.
\section{Instabilities in Linearized Cochlear Dynamics} \label{Section: Cochlear Instabilities}
This section contains the main results on the effects of stochastic uncertainties on cochlear instabilities. The analysis is carried out for three different scenarios of the perturbation covariance $\mathbf \Gamma(x,\xi)$:
\begin{itemize}
\item $\textbf{S}_1$: spatially uncorrelated uncertainties, i.e. ${\mathbf \Gamma(x,\xi) = \delta(x-\xi)}$
\item $\textbf{S}_2$: spatially correlated uncertainties with a correlation length $\lambda$, i.e. $\mathbf \Gamma(x,\xi) = \phi_{\lambda}(x-\xi)$
\item $\textbf{S}_3$: spatially localized and uncorrelated uncertainties, i.e. $\mathbf \Gamma(x,\xi) = \phi_{\sigma}(x-\mu) \delta(x-\xi)$,
\end{itemize}
where $\phi_\lambda$ and $\phi_{\sigma}$ are the Gaussian kernels defined in (\ref{Eqn: Gaussian Kernel}) such that $\lambda$ is the spatial correlation length and $\sigma$ is the spatial localization length. In the subsequent analysis, scenarios $\textbf{S}_1$ and $\textbf{S}_2$ are treated simultaneously because, in both cases, the perturbation covariance is a Toeplitz operator since $\mathbf \Gamma(x,\xi)$ depends solely on the difference $x-\xi$ rather than the absolute locations $x$ and $\xi$. However, in scenario $\textbf{S}_3$, the perturbation covariance is spatially localized and $\mathbf \Gamma(x,\xi)$ depends on the absolute locations, and thus it is treated separately in subsection \ref{Section: Spatially Variant Covariance}. Recall that the linearized cochlear dynamics exclude micro-mechanical spatial coupling along different locations of the BM, whereas scenario $\textbf{S}_2$ effectively reintroduces spatial coupling via the spatial correlations of the stochastic active gain.
The condition of MSS (\ref{Eqn: MSS Condition}) can be rewritten as
\begin{equation} \label{Eqn: MSS Condition for S1, S2 and S3}
\begin{aligned}
\epsilon &< \frac{1}{\sqrt{\rho(\mathbb L_{\mathbf{\Gamma}})}},
\end{aligned}
\end{equation}
for scenarios $\mathbf S_1, \mathbf S_2$ and $\mathbf S_3$. This bound is the maximum allowed perturbation in (\ref{Eqn: Stochastic Feedback}) such that MSS is maintained. In this section, we compute the upper bound on $\epsilon$ and the ``worst-case'' covariance $\textbf{U}$ for the linearized cochlear dynamics.
\subsection{Numerical Considerations}
This section describes the numerical considerations of the model and the numerical method used to compute the spectral radius and worst-case covariance of $\mathbb L_{\mathbf \Gamma}$.
The numerical values of the parameters in this paper are taken from Table~I in \cite{ku2008statistics} for the linear cochlea. However, the expectation of the gain coefficient, $\bar \gamma(x)$ (which was considered to be spatially constant in \cite{ku2008statistics}), is left as a spatially distributed parameter to be tuned. The fluid block in Figure~\ref{Fig: Block Diagram of the Ear}(a) considered here is the one-dimensional traveling wave as described in Appendix-\ref{Section: Mass Operators}. A spatial discretization grid of step size $\Delta_x := L/N_x$, where $N_x = 400$, is used to give a finite dimensional approximation of the operators (as matrices) describing the dynamics in Figure~\ref{Fig: Feedback Block Diagram} (refer to Appendix-\ref{Section: Finite Realizations}).
Special care has to be taken when dealing with spatially white continuous processes (Scenario $\textbf{S}_1$). Let $\Gamma$ denote a matrix approximation of the uncertainty covariance operator $\mathbf \Gamma$ and approximate the Dirac delta function as
\begin{equation} \label{Eqn: Delta Function}
\begin{aligned}
\delta(x) &\approx \frac{1}{\Delta_x} \text{rect}_{\Delta_x}(x) \\
\text{such that}, \quad
&\text{rect}_{\Delta_x}(x) :=
\begin{cases}
1, & \text{if}\ -\frac{\Delta_x}{2} \leq x \leq \frac{\Delta_x}{2} \\
0, & \text{otherwise}
\end{cases}.
\end{aligned}
\end{equation}
Hence, the finite dimensional approximation of the perturbation covariance needs to be scaled with the discretization step $\Delta_x$ as follows
\begin{equation} \label{Eqn: Discretized Covariance}
\Gamma = \frac{1}{\Delta_x}I,
\end{equation}
where $I$ is the identity matrix.
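The following Python sketch assembles the resulting finite-dimensional perturbation covariances for the three scenarios; the grid length, step and kernel parameters are placeholder values.
\begin{verbatim}
# Sketch: discretized perturbation covariances for S1, S2 and S3.
import numpy as np

L, Nx = 35e-3, 400                       # placeholder length and grid
x = np.linspace(0.0, L, Nx + 1)
dx = x[1] - x[0]

def phi(z, width):
    return np.exp(-z**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

# S1: spatially white, delta(x - xi) -> I / dx
Gamma1 = np.eye(Nx + 1) / dx

# S2: correlated, truncated beyond |x - xi| > d to preserve sparsity
lam, d = 1e-3, 5e-3
diff = x[:, None] - x[None, :]
Gamma2 = np.where(np.abs(diff) <= d, phi(diff, lam), 0.0)

# S3: localized around mu with spread sigma, white in space
mu, sigma = L / 2, L / 30
Gamma3 = np.diag(phi(x - mu, sigma)) / dx
\end{verbatim}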
Furthermore, our analysis requires the computation of the largest eigenvalue of the loop gain operator and its associated eigenvector (or eigen-operator). The matrices that approximate the spatial operators have a size of $4(N_x+1) = 1604$, and keeping track of the underlying sparsity of all the approximated operators is essential for carrying out the computations efficiently. Note that to maintain the sparsity of (\ref{Eqn: Loop Gain Operator}) for scenario $\textbf{S}_2$, we use a truncated Gaussian kernel to approximate $\phi_{\lambda}$ given in (\ref{Eqn: Gaussian Kernel}), i.e. $ \phi_\lambda(x-\xi) \approx 0, \text{ for } |x-\xi| > d$, where $d$ is a pre-specified constant that represents a compromise between computational accuracy and sparsity.
Finally, the power iteration method is employed for eigenvalue and eigenmatrix computations as recommended by \cite{parrilo2000cone}. This requires solving the Lyapunov-like equation in (\ref{Eqn: Loop Gain Operator}) at each iteration.
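A minimal Python sketch of this power iteration is shown below; small random matrices stand in for the cochlear operators, the Lyapunov-like equation is solved by Kronecker vectorization for brevity (a practical implementation would exploit sparsity), and the Frobenius norm is used for normalization.
\begin{verbatim}
# Sketch: power iteration on the loop gain operator of
# Eq. (Loop Gain Operator), with placeholder matrices.
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable placeholder
E = np.eye(n)
B0 = rng.standard_normal((n, n))
C0 = rng.standard_normal((n, n))
Gamma = np.eye(n)                       # e.g. S1 up to the 1/dx scaling
Kvec = np.kron(E, A) + np.kron(A, E)    # vectorized Lyapunov operator

def loop_gain(P):
    # One application of L_Gamma: P_in -> Gamma o (C0 X C0^T)
    Q = B0 @ P @ B0.T
    X = np.linalg.solve(Kvec, -Q.flatten(order="F"))
    X = X.reshape((n, n), order="F")
    return Gamma * (C0 @ X @ C0.T)      # '*' is the Hadamard product

P = np.eye(n)                           # positive semidefinite start
for _ in range(200):
    LP = loop_gain(P)
    rho = np.linalg.norm(LP)            # converges to the spectral radius
    P = LP / rho                        # converges to the worst-case P
eps_max = 1.0 / np.sqrt(rho)            # MSS bound, Eq. (MSS Condition)
\end{verbatim}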
\subsection{Stochastic Gain Coefficient with a Spatially Constant Expectation} \label{Section: Constant Expectation}
In this section, we set the expectation of the gain coefficient to one everywhere along the BM, i.e. $\bar \gamma(x) = 1$. To study the effects of the spatial correlations in the gain coefficient, we compare scenarios $\mathbf S_1$ and $\mathbf S_2$, keeping in mind that $\mathbf S_1$ can be seen as a special case of $\mathbf S_2$ in the limit as $\lambda$ goes to zero. First, we compute the upper bounds on $\epsilon$ in (\ref{Eqn: MSS Condition for S1, S2 and S3}) such that MSS is maintained. Then we compute the worst-case covariance $\textbf{U}$ in (\ref{Eqn: BM Worst Case Covariance}).
By applying the power iteration method on (\ref{Eqn: Worst Case Covariance}), we compute the spectral radii $\rho(\mathbb L_{\mathbf \Gamma})$ and their associated eigen-operators $\mathbf P$ for scenarios $\textbf{S}_1$ and $\textbf{S}_2$ with different correlation lengths $\lambda$. Then, (\ref{Eqn: MSS Condition for S1, S2 and S3}) yields the upper bounds on $\epsilon$. The results are illustrated in Figure~\ref{Fig: Sweeping lambda} showing the small upper bounds on $\epsilon$. This reflects the high sensitivity of the model to such stochastic perturbations. As one would expect, a larger correlation length $\lambda$ requires a larger perturbation to destabilize the linearized cochlea.
\begin{figure}[h]
\centering
\includegraphics[scale = .25]{Figures/Sweeping_lambda_ConstantGain.pdf}
\caption{\footnotesize{Mean Square Stability Curve: Upper bound on the perturbation parameter, $\epsilon$, of the stochastic gain (\ref{Eqn: Stochastic Gain}) whose expectation is $\bar \gamma(x) = 1$. The black dot corresponds to scenario $\textbf{S}_1$ (uncorrelated gain perturbations) and the solid black line corresponds to scenario $\textbf{S}_2$ (correlated gain perturbations) for different spatial correlation lengths $\lambda$. The figure shows that larger correlation lengths make the model more immune to stochastic perturbations.}}
\label{Fig: Sweeping lambda}
\end{figure}
The eigen-operator $\textbf{P}$ computed by the power iteration method is the worst-case pressure covariance. The corresponding worst-case covariance of the BM displacement $\textbf{U}$ is then computed using (\ref{Eqn: BM Worst Case Covariance}). Figure~\ref{Fig: Sweeping lambda Covariance}(a) shows $\textbf{U}$ for scenario $\textbf{S}_1$, zoomed in for $0 \leq x,\xi \leq L/10$. The intensity plot shows two sets of axes. The first axis represents the location on the BM and the second represents the corresponding characteristic frequency at each location, calculated using the Greenwood location-to-frequency mapping \cite{greenwood1990cochlear}. Observe that the covariance is band limited and the diagonal entries are dominant near the stapes ($x = 0$). This shows that instabilities essentially occur at high frequencies. Figure~\ref{Fig: Sweeping lambda Covariance}(b) plots the diagonal entries of $\textbf{U}$ for scenarios $\textbf{S}_1$ and $\textbf{S}_2$ for different correlation lengths $\lambda$.
\begin{figure}[h!]
\centering
\begin{tabular}{c}
\includegraphics[scale = .25]{Figures/CovarianceUncorrelated_ConstantGain.pdf} \\ \footnotesize{(a) Worst-Case Covariance of BM Displacement $\textbf{U}(x,\xi)$} \\ \\
\includegraphics[scale = .25]{Figures/DiagonalEntries.pdf} \\ \footnotesize{(b) Diagonal Entries of $\textbf{U}$}\\ \\
\includegraphics[scale = .25]{Figures/Eigenfunctions_ConstantGain.pdf} \\ \footnotesize{(c) Dominant Eigenfunction of $\textbf{U}$}
\end{tabular}
\caption{\footnotesize{Figure (a) shows an intensity plot of the worst-case covariance $\textbf{U}$ for scenario $\textbf{S}_1$ (uncorrelated gain perturbation) zoomed in for $0 \leq x,\xi \leq 3.5\si{mm}$. The axes correspond to the physical location $x$ in mm on the BM and the corresponding characteristic frequency $f$ in kHz. Figure (b) shows the diagonal entries of $\textbf{U}$ for scenarios $\textbf{S}_1$ and $\textbf{S}_2$ for different correlation lengths $\lambda$. Figure (c) depicts the dominant eigenfunction of $\textbf{U}$ for the different cases indicating the insignificant effect of $\lambda$ on the shape of the dominant eigenfunctions.}}
\label{Fig: Sweeping lambda Covariance}
\end{figure}
A smaller correlation length gives a slightly broader spectrum of unstable frequencies. However, for small $\epsilon$, the effect of the correlation length on the shape of the unstable BM modes is negligible. This is illustrated in Figure~\ref{Fig: Sweeping lambda Covariance}(c), where the dominant eigenfunction of $\textbf{U}$ is plotted for different cases.
\subsection{Stochastic Gain Coefficient with a Spatially Varying Expectation}
This section shows that the frequencies of instabilities (or, equivalently, the locations on the BM) can shift depending on the shape of the expectation of the gain coefficient $\bar \gamma(x)$. For illustration purposes, four different profiles of $\bar{\gamma}(x)$ are generated as
\begin{equation} \label{Eqn: Gain Mean Profiles}
\bar{\gamma}(x) = \frac{\tanh(x/10) + \beta}{\tanh(L/10) + \beta},
\end{equation}
where $x$ and $L$ are expressed in mm and $\beta = 0,2,4$ and $6$. First, we show the MSS curves, similar to Figure~\ref{Fig: Sweeping lambda} for the four different profiles generated using (\ref{Eqn: Gain Mean Profiles}). Figure~\ref{Fig: Sweeping lambda beta}(b) clearly shows that the shape of $\bar \gamma(x)$ affects the margin of MSS. Particularly, the larger the dip in the gain coefficient, the higher $\epsilon$ needs to be to destabilize the linearized dynamics in the MSS sense.
\begin{figure}[h]
\centering
\begin{tabular}{l}
\includegraphics[scale = .25]{Figures/GainProfiles.pdf} \\ \footnotesize{(a) Gain Coefficient Expectation Profiles} \\
\includegraphics[scale = .25]{Figures/Sweeping_lambda_VariableGain.pdf}\\ \footnotesize{(b) Corresponding MSS Curves} \\
\includegraphics[scale = .25]{Figures/Sweeping_beta_Eigenfunctions.pdf}\\ \footnotesize{(c) Eigenfunctions for scenario $\mathbf S_1$} \\
\end{tabular}
\caption{\footnotesize{Mean Square Stability curves for different gain coefficient expectation profiles: Figure (a) shows four different profiles of $\bar{\gamma}(x)$ generated as examples of spatially varying gain coefficients using (\ref{Eqn: Gain Mean Profiles}). The same values of $\beta$ are used in figures (b) and (c). Particularly, Figure (b) shows the upper bound on the perturbation parameter $\epsilon$ for the corresponding profiles of $\bar \gamma(x)$ in Figure (a). The circles correspond to scenario $\textbf{S}_1$ (uncorrelated gain perturbations) and the solid lines correspond to scenario $\textbf{S}_2$ (correlated gain perturbations) for different spatial correlation lengths $\lambda$. Figure (c) shows the eigenfunctions of the worst-case covariance operator $\textbf{U}$ corresponding to the different profiles of $\bar \gamma(x)$. The peaks of the eigenfunctions shift consistently with the shape of the gain profiles.}}
\label{Fig: Sweeping lambda beta}
\end{figure}
Since the correlation length for small values of $\epsilon$ has a negligible effect on the shape of the unstable modes as shown in Figure~\ref{Fig: Sweeping lambda Covariance}(c), we only present the worst-case covariances for scenario $\textbf{S}_1$. In fact, the correlation length only affects the margin of stability as illustrated in Figure~\ref{Fig: Sweeping lambda beta}(b). Figure~\ref{Fig: Sweeping lambda beta}(c) depicts the dominant eigenfunctions of $\textbf{U}$ for the four different profiles of $\bar \gamma(x)$.
Clearly, the peaks of the unstable modes of the BM shift depending on the shape of $\bar \gamma(x)$. In fact, as the dip in $\bar \gamma(x)$ is increased, the peaks shift farther from the stapes resulting in instabilities of lower frequencies.
\subsection{Stochastic Gain Coefficient with a Spatially Localized Covariance} \label{Section: Spatially Variant Covariance}
We now treat the case where the gain coefficient $\gamma(x,t)$ in (\ref{Eqn: Stochastic Feedback}) has a spatially constant expectation, but spatially localized covariance given in scenario $\textbf{S}_3$, i.e.
$$ \bar \gamma(x) = 1 \qquad \text{and} \qquad \mathbf \Gamma(x,\xi) = \phi_\sigma(x-\mu) \delta(x-\xi),$$
for different values of $\sigma$ and $\mu$. Observe that for this form of $\mathbf \Gamma(x,\xi)$, the covariance is localized around $\mu$. Hence, this section investigates the cochlear instabilities that emerge as a result of stochastic perturbations localized around a particular location on the BM.
In particular, we are interested in tracking the unstable BM modes for different values of $\mu$ and $\sigma$, where $\mu$ is the location of the perturbation and $\sigma$ represents the local spread of the perturbation in the neighborhood of $\mu$. Following the same calculations of the previous sections, we compute the dominant eigenfunction of the worst-case covariance of the BM displacement $\mathbf U$ for different values of $\mu$ and $\sigma$. The results are depicted in Figure~\ref{Fig: Spatially Localized Covariance}.
\begin{figure}[h]
\centering
\begin{tabular}{c}
\includegraphics[scale = .25]{Figures/Sweeping_x0_sigma1.pdf} \\ (a) \\
\includegraphics[scale = .25]{Figures/Sweeping_x0_sigma2.pdf} \\ (b) \\
\includegraphics[scale = .25]{Figures/Sweeping_x0_sigma3.pdf} \\ (c)
\end{tabular}
\caption{\footnotesize{Eigenfunctions of the worst-case covariance operator $\textbf{U}$ for different localized gain coefficient perturbations. These figures show the dominant eigenfunctions of the worst-case covariance operators for three different values of $\mu$ and $\sigma$. Particularly, in each figure, we fix $\sigma$ and vary $\mu$. Each thin curve represents a particular uncertainty spread function $\phi_{\sigma}(x-\mu)$ (not drawn to scale in the vertical axis) and each thick curve (with the same color) represents the corresponding dominant eigenfunction of the worst-case covariance operator. This figure illustrates the ``basal shifting'' observation that resembles the phenomenon of detuning.}}
\label{Fig: Spatially Localized Covariance}
\end{figure}
Observe that localized perturbations of the active gain coefficient at some location $\mu$ of the BM cause instabilities in that neighborhood. Particularly, for a relatively small spread $\sigma = L/100$, the instabilities emerge at the same locations as the perturbations, as shown in Figure~\ref{Fig: Spatially Localized Covariance}(a). However, as the spread of the uncertainty is increased to $\sigma = L/30$ and $L/10$, the location of the instability shifts towards the stapes. In fact, the wider the spread, the larger the shift, as illustrated in Figures~\ref{Fig: Spatially Localized Covariance}(b) and (c).
This ``basal shifting'' resembles the phenomenon of detuning observed in the cochlea. Acting as a frequency analyzer (or ``inverse piano''), each location on the BM vibrates in response to a sound stimulus at a particular frequency. Thus, the BM has a frequency-to-location map such that every stimulus frequency has a preferred place on the BM called the Characteristic Place (CP). The detuning phenomenon is observed as the shifting of the CP towards the stapes as the intensity of the stimulus (in dB) is increased. In this section, we showed that increasing the spread of the stochastic perturbations also shifts the BM vibrations towards the stapes. Nonlinear dynamics are necessary to model the detuning phenomenon; however, modeling this ``detuning-like'' phenomenon does not require nonlinearities: a locally perturbed active gain is sufficient to explain it.
It is believed that these instabilities in the BM reflect back to the middle ear causing SOAEs \cite{nuttall2004spontaneous}. It is also believed that if these BM vibrations are intense enough, they can be perceived as tinnitus. Our results suggest a mechanism that explains the frequencies that can be detected in the ear canal due to SOAEs and/or perceived as tinnitus. As a matter of fact, the shape of the statistics (expectation and covariance) of the gain coefficient is a factor that controls the bands of the frequencies that are emitted as SOAEs. These emissions arise due to (a) \textit{spatially variant inhomogeneities} along the cochlear partition and (b) \textit{temporal stochastic perturbations} that give rise to structured stochastic uncertainties.
\section{Nonlinear Stochastic Simulations} \label{Section: Simulations}
So far, the MSS analysis is carried out on the linearized dynamics. In this section, we carry out stochastic simulations of the nonlinear model to validate the predictions of our analysis of the linearized dynamics.
\subsection{Nonlinear Descriptor State Space Formulation in Continuous Space-Time}
We first start by formulating the nonlinear dynamics in a DSS form similar to that given in section \ref{Section: DSS Formulation}. Recall that the nonlinear deterministic active gain is given by (\ref{Eqn: Nonlinear Gain}) with $\gamma(x)$ representing the gain coefficient. To include stochastic perturbations, we substitute (\ref{Eqn: Stochastic Gain}) in (\ref{Eqn: Nonlinear Gain}) so that the nonlinear stochastic active gain can be written as
\begin{equation} \label{Eqn: Stochastic Nonlinear Active Gain}
\begin{aligned}
\left[\mathcal G(u)\right](x,t) &=\frac{\bar\gamma(x) + \epsilon \tilde \gamma(x,t)}{1 + \theta \left[\Phi_\eta\left(\frac{u^2}{R^2}\right)\right](x,t)} \\
& =: \bigg(\bar\gamma(x) + \epsilon \tilde \gamma(x,t)\bigg) \left[\tilde{\mathcal G}(u)\right](x,t).
\end{aligned}
\end{equation}
Recall that $\Phi_{\eta}$ is the Gaussian spatial operator given by (\ref{Eqn: Guassian Weighing Operator}), $\theta = 0.5$, $R = 1\si{nm}$ and $\eta = 0.5345\si{mm}$. By substituting (\ref{Eqn: Stochastic Nonlinear Active Gain}) in (\ref{Eqn: Micromechanics}), we can rewrite the nonlinear model in a nonlinear DSS form as
\begin{equation} \label{Eqn: Nonlinear DSS}
\mathcal E \frac{\partial}{\partial t} \psi(x,t) = \big(\mathcal A_{\bar \gamma}(u) + \epsilon \tilde{\mathcal{B}}_0(u)\tilde \gamma \mathcal C_0\big) \psi(x,t),
\end{equation}
where $\mathcal A_{\bar{\gamma}}(u) := \mathcal A_0 + \tilde{\mathcal B}_0(u) \bar \gamma \mathcal C_0$ and $\tilde{\mathcal{B}}_0(u)\tilde \gamma \mathcal C_0$ are nonlinear spatial operators that represent the deterministic and stochastic portions of the dynamics, respectively. Note that $\mathcal E, \mathcal A_0$, and $\mathcal C_0$ are all defined in (\ref{Eqn: Descriptor State Space Operator Form}), and $\tilde{\mathcal B}_0(u) = \begin{bmatrix} 0 & 0 & \tilde{\mathcal G}(u) & 0 \end{bmatrix}^T$.
Therefore, (\ref{Eqn: Nonlinear DSS}) represents the nonlinear stochastic dynamics in a DSS operator form, where the spatial variable is continuous. This is really an SPDE that needs to be discretized in space and time in order to carry out our simulations.
\subsection{Description of the Numerical Method for Simulations}
In this section, we discretize (\ref{Eqn: Nonlinear DSS}) in space and time so that numerical simulations become fairly straightforward to implement. As a side note, if the stochastic perturbation $\tilde \gamma = 0$, (\ref{Eqn: Nonlinear DSS}) becomes a deterministic Partial Differential Equation (PDE). This can be easily integrated by discretizing space using a spatial grid and then employing a time-marching solver such as ODE45 in MATLAB. However, for an SPDE, one has to carefully treat the scaling of the covariances with the discretization steps.
Space and time are discretized as $x_i = i\Delta_x$ and $t_n = n\Delta_t$ with discretization steps $\Delta_x = L/N_x$ and $ \Delta_t = t_f/N_t$ for $i = 0, 1, ..., N_x$ and $ n = 0, 1, ..., N_t$, where $t_f$ is the final time. Let the BM and TM displacements on the discretized space-time grid be denoted by the vectors $u_n$ and $v_n \in \mathbb R^{N_x+1}$, respectively such that
\begin{align*}
u_n &:= \begin{bmatrix} u(x_0,t_n) & \cdots & u(x_{N_x},t_n) \end{bmatrix}^T \\
v_n &:= \begin{bmatrix} v(x_0,t_n) & \cdots & v(x_{N_x},t_n) \end{bmatrix}^T.
\end{align*}
Then the discretized state space variable can be expressed by $\psi_n \in \mathbb R^{4(N_x+1)}$ as
$$ \psi_n := \begin{bmatrix} u_n^T & v_n^T & \dot u_n^T & \dot v_n^T \end{bmatrix}^T.$$
For scenarios $\textbf{S}_1$ and $\textbf{S}_3$, $\tilde \gamma(x,t)$ is a zero-mean white process in space and time. It can be approximated at the spatial grid points $\{x_i\}_{i=0,1,...,N_x}$ and at time $t_n$ as follows
\begin{equation*}
\begin{bmatrix}
\tilde \gamma(x_0,t_n) & \tilde \gamma(x_1,t_n) & \cdots & \tilde \gamma(x_{N_x},t_n)
\end{bmatrix}^T\approx \frac{1}{\sqrt{\Delta_x \Delta_t}} w_n,
\end{equation*}
where $w_n \in \mathbb R^{N_x+1}$ is a zero-mean Gaussian random vector with a covariance matrix $\mathbb E\left[w_n w_n^T\right] = I$ for $\mathbf S_1$ and $\mathbb E\left[w_n w_n^T\right] = \mathcal D\left( \begin{bmatrix} \phi_{\sigma}(x_0-\mu) & \cdots & \phi_{\sigma}(x_{N_x}-\mu) \end{bmatrix}\right)$ for $\mathbf S_3$, where $\mathcal D$ is the diagonal operator such that $\mathcal D(w_n)$ is a diagonal matrix with $w_n$ arranged on its diagonal entries.
For scenario $\textbf{S}_2$, $\tilde \gamma(x,t)$ is a stochastic process that is white in time but ``colored" in space with a spatial covariance $\mathbf \Gamma(x,\xi) = \epsilon^2 \phi_\lambda(x-\xi)$. In this scenario, the noise is smooth in space and there is no need to scale the covariance by the spatial discretization step. More precisely, $\tilde \gamma(x,t)$ can be approximated as
\begin{equation*}
\begin{bmatrix}
\tilde \gamma(x_0,t_n) & \tilde \gamma(x_1,t_n) & \cdots & \tilde \gamma(x_{N_x},t_n)
\end{bmatrix}^T\approx \frac{1}{\sqrt{\Delta_t}} w_n,
\end{equation*}
where $\mathbb E\left[w_n w_n^T\right]$ is now a symmetric matrix whose $(i,j)^{\text{th}}$ entry is given by $\phi_{\lambda}(x_i-x_j)$.
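As an illustration (not part of the original implementation), such spatially correlated samples can be drawn by factoring the covariance matrix once and reusing the factor at every time step. The following Python sketch assumes $\phi_\lambda$ is a Gaussian kernel of width $\lambda$; the grid values and names are placeholders.
\begin{verbatim}
import numpy as np

def colored_noise_sampler(x, lam, jitter=1e-10):
    # Covariance matrix with (i,j)-th entry phi_lam(x_i - x_j);
    # a Gaussian kernel is assumed here for illustration.
    phi = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / lam**2)
    # Cholesky factor; the small jitter keeps the factorization stable.
    L = np.linalg.cholesky(phi + jitter * np.eye(len(x)))
    # If w = L z with z standard normal, then Cov[w] = L L^T = phi.
    return lambda rng: L @ rng.standard_normal(len(x))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 35.0, 201)        # illustrative spatial grid (mm)
draw = colored_noise_sampler(x, lam=1.0)
w_n = draw(rng)                        # one draw, smooth in space
\end{verbatim}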
Therefore, a first-order approximation of (\ref{Eqn: Nonlinear DSS}) can be carried out in the spirit of the Euler-Maruyama method \cite{maruyama1955continuous} to obtain
\begin{equation}\label{Eqn: Nonlinear Numerics}
E \psi_{n+1} = E\psi_n + \Delta_t A_{\bar \gamma}(u_n) \psi_n + \alpha \tilde B_0(u_n) \mathcal D(w_n) C_0\psi_n,
\end{equation}
where $\alpha = \epsilon \sqrt{\Delta_t/\Delta_x}$ for $\mathbf S_1$ and $\mathbf S_3$; and $\alpha = \epsilon \sqrt{\Delta_t}$ for $\mathbf S_2$.
The matrices $E , A_{\bar{\gamma}}(u_n) , \tilde B_0(u_n)$ and $C_0 $ are all finite dimensional approximations of the operators $\mathcal E, \mathcal A_{\bar \gamma}(u), \tilde{\mathcal B}_0(u)$ and $\mathcal C_0$, respectively (Appendix-\ref{Section: Finite Realizations}).
Equation (\ref{Eqn: Nonlinear Numerics}) represents the recursive numerical method used to solve (\ref{Eqn: Nonlinear DSS}) for all three scenarios with the appropriate choice of $\alpha$ and $\mathbb E[w_nw_n^T]$.
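To make the recursion concrete, the following Python sketch implements (\ref{Eqn: Nonlinear Numerics}) for scenario $\mathbf S_1$; the assembly of $E$, $A_{\bar\gamma}$, $\tilde B_0$, and $C_0$ (Appendix-\ref{Section: Finite Realizations}) is abstracted behind user-supplied callables, and the discretized $E$ is assumed invertible here (otherwise the solve must be replaced accordingly).
\begin{verbatim}
import numpy as np

def euler_maruyama_s1(E, A_of_u, B0_of_u, C0, psi0,
                      eps, dx, dt, n_steps, rng):
    # E psi_{n+1} = E psi_n + dt*A(u_n) psi_n
    #               + alpha*B0(u_n) diag(w_n) C0 psi_n,
    # with alpha = eps*sqrt(dt/dx) and E[w_n w_n^T] = I (scenario S1).
    alpha = eps * np.sqrt(dt / dx)
    E_inv = np.linalg.inv(E)   # assumes the discretized E is invertible
    psi = psi0.copy()
    n_x = C0.shape[0]          # number of spatial samples of u
    for _ in range(n_steps):
        u = C0 @ psi                      # current BM displacement u_n
        w = rng.standard_normal(n_x)      # white-in-time forcing
        rhs = (E @ psi + dt * (A_of_u(u) @ psi)
               + alpha * (B0_of_u(u) @ (w * u)))
        psi = E_inv @ rhs
    return psi
\end{verbatim}
Scenarios $\mathbf S_2$ and $\mathbf S_3$ only change $\alpha$ and the law of $w_n$ as described above.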
\subsection{Simulation of the Nonlinear Stochastic Model}
To validate our MSS analysis of the linearized dynamics and evaluate how well it carries over to the nonlinear dynamics, we carry out a simulation of (\ref{Eqn: Nonlinear DSS}). This section considers scenario $\textbf{S}_1$; hence, the numerical method used here is that given in (\ref{Eqn: Nonlinear Numerics}) with $\alpha = \epsilon \sqrt{\Delta_t/\Delta_x}$ and $\mathbb E[w_nw_n^T] = I$.
The nonlinear stochastic simulation shown here is for $\bar \gamma(x)$ given in (\ref{Eqn: Gain Mean Profiles}) with $\beta = 2$. All other scenarios are in agreement with our MSS analysis; this particular case study ($\beta = 2$) is chosen here to illustrate the effectiveness of our analysis. Observe from Figure~\ref{Fig: Sweeping lambda beta}(b) that for $\beta = 2$, the MSS condition is violated if $\epsilon \geq 9.1\times 10^{-6}$. We choose $\epsilon = 1.1 \times 10^{-5}$, which slightly violates the MSS condition for the linearized dynamics and allows the nonlinearity to kick in and saturate the response. The spatio-temporal response of the BM is depicted in Figure~\ref{Fig: Nonlinear Stochastic Simulation}(a) for $t \in [0, t_f]$ with $t_f = 200\si{ms}$. The response is maximal in a limited spatial band $10\si{mm} < x < 20\si{mm}$, which corresponds to a frequency range roughly between $1\si{kHz}$ and $5\si{kHz}$.
\begin{figure}[h!]
\centering
\begin{tabular}{c}
\includegraphics[scale = .22]{Figures/Stochastic_Simulation.pdf} \\ \footnotesize{(a) Spatio-Temporal Stochastic Evolution of the BM} \\
\includegraphics[scale = .22]{Figures/Empirical_Covariance.pdf} \\ \footnotesize{(b) Empirical Covariance $\mathbf U_{\text{Emp}}(x,\xi)$} \\
\includegraphics[scale = .22]{Figures/Theoretical_Covariance.pdf} \\ \footnotesize{(c) Predicted Worst-Case Covariance $\mathbf U(x,\xi)$} \\
\begin{tabular}{ll}
\includegraphics[scale =.119]{Figures/Eigenvalues.pdf} &
\includegraphics[scale =.119]{Figures/EmpiricalTheoretical_Comparison.pdf} \\
\includegraphics[scale=.119]{Figures/Second_EV.pdf} &
\includegraphics[scale=.119]{Figures/Third_EV.pdf}
\end{tabular}\\
\footnotesize{(d) Empirical and Theoretical Dominant Eigenvalues/functions}
\end{tabular}
\caption{\footnotesize{Nonlinear Stochastic Simulation. Figure (a) shows the BM response to spatially uncorrelated stochastic active gain (scenario $\mathbf S_1$) with an expectation given by (\ref{Eqn: Gain Mean Profiles}) where $\beta = 2$ and a perturbation of $\epsilon = 1.1\times 10^{-5}$. Figures (b) and (c) show a comparison between the empirical and predicted covariances. The predicted covariance is computed for the linearized dynamics via the power iteration method applied on the loop gain operator (\ref{Eqn: Loop Gain Operator}). The empirical covariance is computed using the data obtained from one nonlinear stochastic simulation using (\ref{Eqn: Nonlinear Numerics}) and integrated in time using (\ref{Eqn: Ergodic Integration}) assuming ergodicity. Figure (d) shows a comparison between the dominant eigenvalues/functions of the empirical and predicted covariances shown in Figures (b) and (c), respectively. This eigen-decomposition is referred to as the \textit{Karhunen--Lo\`eve} decomposition. Clearly the theoretical predictions match the empirical data, thus suggesting that the nonlinearities only saturate the response without significantly deforming the waveforms.}}
\label{Fig: Nonlinear Stochastic Simulation}
\end{figure}
To be more precise, we compute the empirical covariance $\textbf{U}_{\text{Emp}}(x,\xi)$ as follows
\begin{equation} \label{Eqn: Ergodic Integration}
\textbf{U}_{\text{Emp}}(x,\xi) = \frac{1}{t_f} \int_0^{t_f} u(x,\tau) u(\xi, \tau) d\tau.
\end{equation}
The time averaging replaces the expectation assuming ergodicity. Figures~\ref{Fig: Nonlinear Stochastic Simulation}(b) and (c) compare the empirical covariance to the predicted worst-case covariance. By visual inspection, we observe that the empirical results are in good agreement with our theoretical predictions.
For a more precise comparison, we plot the first twenty dominant eigenvalues and the first three dominant eigenfunctions of both the predicted and empirical covariances in Figure~\ref{Fig: Nonlinear Stochastic Simulation}(d). This eigen-decomposition is referred to as the \textit{Karhunen--Lo\`eve decomposition}. The eigenfunctions are the modes of BM vibrations that have the highest growth rate and are most likely to destabilize under small perturbations of the active gain. The plots do not show any significant difference between the empirical and theoretical results. In fact, although the nonlinear active gain slightly deforms the response, its fundamental role (in the absence of a stimulus) is to saturate the linearized instabilities to form oscillations that remain bounded in time.
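For completeness, a short Python sketch of this post-processing (a Riemann-sum version of (\ref{Eqn: Ergodic Integration}) followed by the eigen-decomposition; the array names are illustrative):
\begin{verbatim}
import numpy as np

def empirical_covariance(U_snap, dt, t_f):
    # U_snap has shape (N_t, N_x+1): row n holds the snapshot u_n.
    # Riemann sum of (1/t_f) * int_0^{t_f} u(x,t) u(xi,t) dt.
    return (dt / t_f) * (U_snap.T @ U_snap)

def karhunen_loeve(U_emp, n_modes=3):
    # Leading eigenvalues/eigenfunctions of the symmetric covariance.
    evals, evecs = np.linalg.eigh(U_emp)
    order = np.argsort(evals)[::-1]
    return evals[order[:n_modes]], evecs[:, order[:n_modes]]
\end{verbatim}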
\section{Discussion} \label{Section: Discussion}
The mechanisms underlying cochlear instabilities such as SOAEs and tinnitus are still controversial and not well understood. This paper suggests a new possible source of cochlear instabilities: spatio-temporal stochastic perturbations of the active gain.
It is widely accepted that Outer Hair Cells (OHC) are responsible for the active gain in the cochlea. This work proposes a \textit{simulation-free}, system theoretic framework to analyze the effects of small stochastic perturbations that may occur at the level of the OHCs. These perturbations can have several physical origins such as noisy nearby neuronal activities, cellular activities, blood flow, etc.
Studying the effects of randomness in the active gain is not new \cite{fruth2014active}, \cite{ku2008statistics}. However, previous studies on this matter considered random spatial perturbations that are time-invariant. This type of randomness is referred to as ``frozen" or \textit{quenched disorder} in the statistical physics community. In fact, \cite{ku2008statistics} investigated the effects of frozen spatial randomness by carrying out Monte Carlo simulations to study the statistics of the instabilities. However, to achieve a broad spectrum of unstable frequencies, the authors allowed severe perturbations of the active gain, which is not realistic. Without these severe perturbations, the unstable frequencies would be limited to a band of high frequencies only (Section~\ref{Section: Constant Expectation}). This does not agree with experimental observations where, for example, SOAEs are mainly found between $0.5$ and $4.5\si{kHz}$.
A more realistic case is to treat the active gain as a stochastic process, where the randomness may occur in space and time simultaneously. In addition, only small perturbations of the active gain are considered (three to four orders of magnitude smaller than \cite{ku2008statistics}). A major advantage of our analysis is that it is \textit{simulation-free}: no Monte Carlo simulations are required to study the statistics of the emerging instabilities. In our analysis, we also show that the band of unstable frequencies can be controlled by tuning the structural parameters of the cochlea, such as the active gain coefficient. Hence, we show that even for very small perturbations, the unstable frequencies can be shifted dramatically. Furthermore, examining localized stochastic perturbations in the active gain allowed us to observe local instabilities that shift toward the stapes as the localization length or spread increases. This observation resembles the detuning phenomenon present in the cochlea.
\section{Conclusion and Future Work}
This paper examines the instabilities that occur in the linearized dynamics due to spatio-temporal stochastic perturbations in the distributed structure of the cochlear partition. The simulation-free analysis is carried out through a structured stochastic uncertainty framework. It is shown that the spatial shape of the expectation and covariance of the gain coefficient affect the locations of the instabilities on the basilar membrane. These instabilities eventually saturate to form bounded oscillations due to the saturation nonlinearity of the active gain (\ref{Eqn: Nonlinear Gain}) producing spontaneous basilar membrane vibrations. It is believed that these instabilities are reflected to the middle ear as spontaneous otoacoustic emissions (SOAEs) \cite{nuttall2004spontaneous} with frequencies corresponding to the location of the instability on the basilar membrane. This analysis also suggests an explanation of one possible source of tinnitus, which is less addressed in the literature. Particularly, if the spontaneous BM vibrations were intense enough, they may be perceived as tinnitus. Future work will address instabilities that may occur due to stochastic uncertainties in structural parameters other than the active gain coefficient, such as the cochlear fluid density.
\bibliographystyle{ieeetr}
\section{Introduction}
Consider $n$ possible treatments, say, drugs in a clinical trial, where each treatment either has a positive expected effect relative to a baseline (actual positive), or no difference (null), with a goal of identifying as many actual positive treatments as possible.
If evaluating the $i$th treatment results in a noisy outcome (e.g., due to variance in the actual measurement or diversity in the population), then, given a total measurement budget of $B$, it is standard practice to execute and average $B/n$ measurements of each treatment, and then output a set of predicted actual positives based on the measured effect sizes.
False alarms (i.e. nulls predicted as actual positives) are controlled by either controlling \emph{family-wise error rate (FWER)}, where one bounds the probability that at least one of the predictions is null, or \emph{false discovery rate (FDR)}, where one bounds the expected proportion of the number of predicted nulls to the number of predictions.
FDR is a weaker condition than FWER but is often used in favor of FWER because of its higher \emph{statistical power}: more actual positives are output as predictions using the same measurements.
In the pursuit of even greater statistical power, there has recently been increased interest in the biological sciences to reject the uniform allocation strategy of $B/n$ trials to the $n$ treatments in favor of an \emph{adaptive} allocation.
Adaptive allocations partition the budget $B$ into sequential rounds of measurements in which the measurements taken at one round inform which measurements are taken in the next \cite{hao2008drosophila,rocklin}.
Intuitively, if the effect size is relatively large for some treatment, fewer trials will be necessary to identify that treatment as an actual positive relative to the others, and that savings of measurements can be allocated towards treatments with smaller effect sizes to boost the signal.
However, both \cite{hao2008drosophila,rocklin} employed ad-hoc heuristics that may not only have sub-optimal statistical power but may also result in more false alarms than expected.
As another example, in the domain of A/B/n testing in online environments, the desire to understand and maximize click-through-rate across treatments (e.g., web-layouts, campaigns, etc.) has become ubiquitous across retail, social media, and headline optimization for the news.
And in this domain, the desire for statistically rigorous adaptive sampling methods with high statistical power is explicit \cite{johari2015always}.
In this paper we propose an adaptive measurement allocation scheme that achieves near-optimal statistical power subject to FWER or FDR false alarm control.
Perhaps surprisingly, we show that even if the treatment effect sizes of the actual positives are identical, adaptive measurement allocation can still substantially improve statistical power.
That is, more actual positives can be predicted using an adaptive allocation relative to the uniform allocation under the same false alarm control.
\subsection{Problem Statement}\label{sec:problem_statement}
Consider $n$ distributions (or arms) and a game where at each time $t$, the player chooses an arm $i \in [n] := \{1,\dots,n\}$ and immediately observes a reward $X_{i,t} \overset{iid}{\sim} \nu_i$ where $X_{i,t} \in [0,1]$\footnote{All results without modification apply to unbounded, sub-Gaussian random variables.} and $\mathbb{E}_{\nu_i}[X_{i,t}] = \mu_i$.
For a \emph{known} threshold $\mu_0$, define the sets\footnote{All results generalize to the case when $\mathcal{H}_0 = \{ i : \mu_i \leq \mu_0 \}$.}
\begin{align*}
\mathcal{H}_1 = \{ i \in [n]: \mu_i > \mu_0 \} \quad \text{ and } \quad \mathcal{H}_0 = \{ i \in [n] : \mu_i = \mu_0 \} = [n] \setminus \mathcal{H}_1 .
\end{align*}
The value of the means $\mu_i$ for $i\in[n]$ and the cardinality of $\mathcal{H}_1$ are \emph{unknown}.
The arms (treatments) in $\mathcal{H}_1$ have means greater than $\mu_0$ (positive effect) while those in $\mathcal{H}_0$ have means equal to $\mu_0$ (no effect over baseline).
At each time $t$, after the player plays an arm, she also outputs a set of indices $\mathcal{S}_t \subseteq [n]$ that are interpreted as \emph{discoveries} or rejections of the null-hypothesis (that is, if $i \in \mathcal{S}_t$ then the player believes $i \in \mathcal{H}_1$).
For as small a $\tau \in \mathbb{N}$ as possible, the goal is to have the number of true detections $|\mathcal{S}_t \cap \mathcal{H}_1|$ be approximately $|\mathcal{H}_1|$ for all $t \geq \tau$, subject to the number of false alarms $|\mathcal{S}_t \cap \mathcal{H}_0|$ being small uniformly over all times $t \in \mathbb{N}$.
We now formally define our notions of false alarm control and true discoveries.
\begin{definition}[False Discovery Rate, FDR-$\delta$]
Fix some $\delta \in (0,1)$.
We say an algorithm is FDR-$\delta$ if for all possible problem instances $(\{\nu_i\}_{i=1}^n, \mu_0)$ it satisfies $\displaystyle\mathbb{E}[\tfrac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t|\vee 1}] \leq \delta$ for all $t \in \mathbb{N}$ simultaneously.
\end{definition}
\begin{definition}[Family-wise Error Rate, FWER-$\delta$]
Fix some $\delta \in (0,1)$.
We say an algorithm is FWER-$\delta$ if for all possible problem instances $(\{\nu_i\}_{i=1}^n, \mu_0)$ it satisfies $\mathbb{P}(\bigcup_{t=1}^\infty \{ \mathcal{S}_t \cap \mathcal{H}_0 \neq \emptyset\} ) \leq \delta$.
\end{definition}
\noindent Note FWER-$\delta$ implies FDR-$\delta$, the former being a stronger condition than the latter.
Allowing a relatively small number of false discoveries is natural, especially if $|\mathcal{H}_1|$ is relatively large.
Because $\mu_0$ is known, there exist schemes that guarantee FDR-$\delta$ or FWER-$\delta$ even if the arm means $\mu_i$ and the cardinality of $\mathcal{H}_1$ are unknown (see Section~\ref{sec:false_alarm_control}).
It is also natural to relax the goal of identifying \emph{all} arms in $\mathcal{H}_1$ to simply identifying a \emph{large proportion} of them.
\begin{definition}[True Positive Rate, TPR-$\delta,\tau$]
Fix some $\delta \in (0,1)$.
We say an algorithm is TPR-$\delta,\tau$ on an instance $(\{\nu_i\}_{i=1}^n, \mu_0)$ if $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_1|}{|\mathcal{H}_1|}] \geq 1-\delta$ for all $t \geq \tau$.
\end{definition}
\begin{definition}[Family-wise Probability of Detection, FWPD-$\delta,\tau$]
Fix some $\delta \in (0,1)$.
We say an algorithm is FWPD-$\delta,\tau$ on an instance $(\{\nu_i\}_{i=1}^n, \mu_0)$ if $\mathbb{P}(\mathcal{H}_1 \subseteq \mathcal{S}_t ) \geq 1- \delta$ for all $t \geq \tau$.
\end{definition}
Note that FWPD-$\delta,\tau$ implies TPR-$\delta,\tau$, the former being a stronger condition than the latter.
Also note $\mathbb{P}( \bigcup_{t=1}^\infty \{ \mathcal{S}_t \cap \mathcal{H}_0 \neq \emptyset\} ) \leq \delta$ and $\mathbb{P}( \mathcal{H}_1 \subseteq \mathcal{S}_\tau) \geq 1-\delta$ together imply $\mathbb{P}(\mathcal{H}_1 = \mathcal{S}_\tau) \geq 1-2\delta$.
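To make these criteria concrete, the per-time quantities being averaged or bounded can be written in a few lines of Python (sets represented as Python sets; a sketch for exposition only):
\begin{verbatim}
def fdp(S, H0):
    # |S intersect H0| / max(|S|, 1): false discovery proportion at t.
    return len(S & H0) / max(len(S), 1)

def tpp(S, H1):
    # |S intersect H1| / |H1|: true positive proportion at t.
    return len(S & H1) / len(H1)

# FDR-delta bounds E[fdp(S_t, H0)] at every t, while FWER-delta bounds
# the probability that fdp(S_t, H0) > 0 at any t.  TPR-(delta,tau) asks
# E[tpp(S_t, H1)] >= 1 - delta for all t >= tau, while FWPD-(delta,tau)
# asks P(H1 is a subset of S_t) >= 1 - delta for all t >= tau.
\end{verbatim}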
We will see that it is possible to control the number of false discoveries $|\mathcal{S}_t \cap \mathcal{H}_0|$ regardless of how the player selects arms to play.
It is the rate at which $\mathcal{S}_t$ includes $\mathcal{H}_1$ that can be thought of as the statistical power of the algorithm, which we formalize as its \emph{sample complexity}:
\begin{definition}[Sample Complexity]
Fix some $\delta \in (0,1)$ and an algorithm $\mathcal{A}$ that is FDR-$\delta$ (or FWER-$\delta$) over all possible problem instances.
Fix a particular problem instance $(\{\nu_i\}_{i=1}^n, \mu_0)$.
At each time $t \in \mathbb{N}$, $\mathcal{A}$ chooses an arm $i \in [n]$ to obtain an observation from, and before proceeding to the next round outputs a set $\mathcal{S}_t \subseteq [n]$.
The \emph{sample complexity} of $\mathcal{A}$ on this instance is the smallest time $\tau \in \mathbb{N}$ such that $\mathcal{A}$ is TPR-$\delta,\tau$ (or FWPD-$\delta,\tau$).
\end{definition}
The sample complexity and value of $\tau$ of an algorithm will depend on the particular instance $(\{\nu_i\}_{i=1}^n, \mu_0)$.
For example, if $\mathcal{H}_1 = \{i \in [n] : \mu_i =\mu_0 + \Delta\}$ and $\mathcal{H}_0 = [n] \setminus \mathcal{H}_1$, then we expect the sample complexity to increase as $\Delta$ decreases since at least $\Delta^{-2}$ samples are necessary to determine whether an arm has mean $\mu_0$ versus $\mu_0 + \Delta$. The next section will give explicit cases.
\begin{remark}[Impossibility of stopping time] We emphasize that just as in the non-adaptive setting, at no time can an algorithm \emph{stop} and declare that it is TPR-$\delta,\tau$ or FWPD-$\delta,\tau$ for any finite $\tau \in \mathbb{N}$.
This is because there may be an arm in $\mathcal{H}_1$ with a mean infinitesimally close to $\mu_0$ but distinct such that no algorithm can determine whether it is in $\mathcal{H}_0$ or $\mathcal{H}_1$.
Thus, the algorithm must run indefinitely or until it is stopped externally.
However, using an anytime confidence bound (see Section~\ref{sec:algorithm}) one can always make statements like ``either $\mathcal{H}_1 \subseteq \mathcal{S}_t$, or $\max_{i \in \mathcal{H}_1 \setminus \mathcal{S}_t} \mu_i-\mu_0 \leq \epsilon$'' where the $\epsilon$ will depend on the width of the confidence interval.
\end{remark}
\subsection{Contributions and Informal Summary of Main Results}\label{sec:contributions}
\begin{table}
\begin{center}
\begin{tabular}{c | c c }
& \multicolumn{2}{c}{\textbf{False alarm control}}\\[2pt]
& \makecell{FDR-$\delta$ \\ $\max_t \mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t| \vee 1}] \leq \delta$} & \makecell{FWER-$\delta$ \\ $\mathbb{P}( \bigcup_{t=1}^\infty \{ \mathcal{S}_t \cap \mathcal{H}_0 \neq \emptyset\} ) \leq \delta$} \\ \hline \\[-8pt]
\makecell{\textbf{Detection Probability}\\[4pt] TPR-$\delta,\tau$ \\ $\mathbb{E}[\frac{|\mathcal{S}_\tau \cap \mathcal{H}_1|}{|\mathcal{H}_1|}] \geq 1-\delta$\\[8pt] } & \makecell{ {\small Theorem~\ref{thm:FDR_TPR}}\\[-0pt] $n\Delta^{-2}$} & \makecell{ {\small Theorem~\ref{thm:FWER_TPR}}\\[-0pt] $(n-k)\Delta^{-2} + k \Delta^{-2} \log(n-k)$} \\[16pt]
\makecell{FWPD-$\delta,\tau$ \\ $\mathbb{P}( \mathcal{H}_1 \subseteq \mathcal{S}_\tau) \geq 1-\delta$} & \makecell{ {\small Theorem~\ref{thm:FDR_FWPD}}\\[-0pt] $\quad (n-k) \Delta^{-2} \log(k) + k \Delta^{-2} \quad$} & \makecell{ {\small Theorem~\ref{thm:FWER_FWPD}}\\[-0pt] $(n-k) \Delta^{-2}\log(k) + k \Delta^{-2}\log(n-k)$}
\end{tabular}
\end{center}
\caption{\small Informal summary of sample complexity results proved in this paper for $|\mathcal{H}_1|=k$, constant $\delta$ (e.g., $\delta=.05$) and $\Delta=\min_{i \in \mathcal{H}_1} \mu_i - \mu_0$. Uniform sampling across all settings requires at least $n \Delta^{-2} \log(n/k)$ samples, and in the FWER+FWPD setting requires $n \Delta^{-2}\log(n)$.
Constants and $\log\log$ factors are ignored.\label{tab:complexity_table}}
\vspace{-.25in}
\end{table}
In Section~\ref{sec:algorithm} we propose an algorithm that handles all four combinations of \{FDR-$\delta$, FWER-$\delta$\} and \{TPR-$\delta,\tau$, FWPD-$\delta,\tau$\}.
A reader familiar with the multi-armed bandit literature would expect an adaptive sampling algorithm to have a large advantage over uniform sampling when there is a large diversity in the means of $\mathcal{H}_1$ since larger means can be distinguished from $\mu_0$ with fewer samples.
However, one should note that to declare all of $\mathcal{H}_1$ as discoveries, one must sample every arm in $\mathcal{H}_0$ \emph{at least} as many times as the \emph{most sampled} arm in $\mathcal{H}_1$, otherwise they are statistically indistinguishable.
As discoveries typically uncover rare phenomena, it is common to assume $|\mathcal{H}_1| = n^\beta$ for $\beta \in (0,1)$ \cite{castro2014adaptive,rabinovich2017optimal}, or $|\mathcal{H}_1| = o(n)$. This implies that the number of samples taken from the arms in $\mathcal{H}_1$, regardless of how samples are allocated among those arms, will almost always be dwarfed by the number of samples allocated to the arms in $\mathcal{H}_0$, since there are $\Omega(n)$ of them.
This line of reasoning, in part, is what motivates us to give our sample complexity results in terms of the quantities that best describe the contributions from those arms in $\mathcal{H}_0$, namely, the cardinality $|\mathcal{H}_1| = n-|\mathcal{H}_0|$, the confidence parameter $\delta$ (e.g., $\delta=.05$), and the gap $\Delta := \min_{i \in \mathcal{H}_1} \mu_i - \mu_0$ between the means of the arms in $\mathcal{H}_0$ and the smallest mean in $\mathcal{H}_1$.
Reporting sample complexity results in terms of $\Delta$ also allows us to compare to known lower bounds in the literature \cite{2017arXiv170306222R,castro2014adaptive,malloy2014sequential,simchowitz2017simulator}.
Nevertheless, we do address the case where the means of $\mathcal{H}_1$ are varied in Theorem~\ref{thm:FDR_TPR}.
An informal summary of the sample complexity results proven in this work are found in Table~\ref{tab:complexity_table} for $|\mathcal{H}_1|=k$.
For the least strict setting of FDR+TPR, the upper-left quadrant of Table~\ref{tab:complexity_table} matches the lower bound of \cite{castro2014adaptive}, a sample complexity of just $\Delta^{-2}n$.
In this FDR+TPR setting (which requires the fewest samples of the four settings), uniform sampling, which pulls each arm an equal number of times, has a sample complexity of at least $n \Delta^{-2} \log(n/|\mathcal{H}_1|)$
(see Theorem \ref{thm:uniform_fdr_lower_bound} in Appendix~\ref{sec:succ-elim}), which exceeds all results in Table~\ref{tab:complexity_table} demonstrating the statistical power gained by adaptive sampling.
For the most strict setting of FWER+FWPD, the lower-right quadrant of Table~\ref{tab:complexity_table} matches the lower bounds of \cite{malloy2014sequential,kalyanakrishnan2012pac,simchowitz2017simulator}, a sample complexity of
$(n-k) \Delta^{-2}\log(k) + k \Delta^{-2}\log(n-k)$.
Uniform sampling in the FWER+FWPD setting has a sample complexity lower bounded by $n \Delta^{-2} \log(n)$ (see Theorem~\ref{thm:uniform_fwer_lower_bound} in Appendix~\ref{sec:succ-elim}).
The settings of FDR+FWPD and FWER+TPR are sandwiched between these results, and we are unaware of existing lower bounds for these settings.
All the results in Table~\ref{tab:complexity_table} are novel, and to the best of our knowledge are the first non-trivial sample complexity results for an adaptive algorithm in the \emph{fixed confidence} setting where a desired confidence $\delta$ is set, and the algorithm attempts to minimize the number of samples taken to meet the desired conditions.
We also derive tools that we believe may be useful outside this work: for always-valid $p$-values (c.f. \cite{johari2015always,yang2017framework}) we show that FDR is controlled for all times using the Benjamini-Hochberg procedure \cite{benjamini1995controlling} (see Lemma~\ref{lem:expected_FDR}), and we also provide an anytime high-probability bound on the false discovery proportion (see Lemma~\ref{lem:fdr_high_prob}).
Finally, as a direct consequence of the theoretical guarantees proven in this work and the empirical performance of the FDR+TPR variant of the algorithm on real data, an algorithm faithful to the theory was implemented and is in use in production at a leading A/B testing platform \cite{optimizely}.
\subsection{Related work}\label{sec:related_work}
Identifying arms with means above a threshold, or equivalently, multiple testing via rejecting null-hypotheses with small $p$-values, is a ubiquitous problem in the biological sciences.
In the standard setup, each arm is given an equal number of measurements (i.e., a uniform sampling strategy), a $p$-value $P_i$ is produced for each arm where $\mathbb{P}(P_i \leq x) \leq x$ for all $x \in (0,1]$ and $i \in \mathcal{H}_0$, and a procedure is then run on these $p$-values to declare small $p$-values as rejections of the null-hypothesis, or discoveries.
For a set of $p$-values $P_1 \leq P_2 \leq \dots \leq P_n$, the so-called Bonferroni selection rule selects $\mathcal{S}_{BF} = \{ i : P_{i} \leq \delta/n \}$.
The fact that FWER control implies FDR control, $\mathbb{E}[ | \mathcal{S}_{BF} \cap \mathcal{H}_0| ] \leq \mathbb{P}(\bigcup_{i \in \mathcal{H}_0} \{ P_{i} \leq \delta/n\} ) \leq \delta \frac{|\mathcal{H}_0|}{n} \leq \delta$,
suggests that greater statistical power (i.e. more discoveries) could be achieved with procedures designed specifically for FDR.
The BH procedure \cite{benjamini1995controlling} is one such procedure to control FDR and is widely used in practice (with its many extensions \cite{2017arXiv170306222R} and performance investigations \cite{rabinovich2017optimal}).
Recall that a uniform measurement strategy where every arm is sampled the same number of times requires
$n \Delta^{-2} \log(n/k)$ samples in the FDR+TPR setting, and $n \Delta^{-2}\log(n)$ samples in the FWER+FWPD setting (Theorems~\ref{thm:uniform_fdr_lower_bound} and \ref{thm:uniform_fwer_lower_bound} in Appendix~\ref{sec:succ-elim}), which can be substantially worse than our adaptive procedure (see Table~\ref{tab:complexity_table}).
Adaptive sequential testing has been previously addressed in the \emph{fixed budget} setting: the procedure takes a sampling budget as input, and the guarantee states that if the given budget is larger than a problem dependent constant, the procedure drives the error probability to zero and the detection probability to one.
One of the first methods called \emph{distilled sensing} \cite{haupt2011distilled} assumed that arms from $\mathcal{H}_0$ were Gaussian with mean at most $\mu_0$, and successively discarded arms after repeated sampling by thresholding at $\mu_0$--at most the median of the null distribution--thereby discarding about half the nulls at each round.
The procedure made guarantees about FDR and TPR, which were later shown to be nearly optimal \cite{castro2014adaptive}.
Specifically, \cite[Corollary 4.2]{castro2014adaptive} implies that any procedure with $FDR+(1-TPR) \leq \delta$ requires a budget of at least $\Delta^{-2} n \log(1/\delta)$, which is consistent with our work.
Later, another thresholding algorithm for the fixed budget setting addressed the FWER and FWPD metrics \cite{malloy2014sequential}.
In particular, if their procedure is given a budget exceeding $(n-|\mathcal{H}_1|) \Delta^{-2}\log(|\mathcal{H}_1|) + |\mathcal{H}_1| \Delta^{-2} \log(n-|\mathcal{H}_1|)$ then the FWER is driven to zero, and the FWPD is driven to one.
By appealing to the optimality properties of the SPRT (which knows the distributions precisely) it was argued that this is optimal.
These previous works mostly focused on the asymptotic regime as $n \rightarrow \infty$ and $|\mathcal{H}_1| = o(n)$.
Our paper, in contrast to these previous works considers the \emph{fixed confidence} setting: the procedure takes a desired FDR (or FWER) and TPR (or FWPD) and aims to minimize the number of samples taken before these constraints are met. To the best of our knowledge, our paper is the first to propose a scheme for this problem in the fixed confidence regime with near-optimal sample complexity guarantees.
A related line of work is the threshold bandit problem, where all the means of $\mathcal{H}_1$ are assumed to be strictly above a given threshold, and the means of $\mathcal{H}_0$ are assumed to be strictly below the threshold \cite{locatelli2016optimal,kano2017good}. To identify this partition, each arm must be pulled a number of times inversely proportional to the square of its deviation from the threshold. This contrasts with our work, where the majority of arms may have means \emph{equal} to the threshold and the goal is to identify arms with means greater than the threshold subject to discovery constraints. If the arms in $\mathcal{H}_0$ are assumed to be strictly below the threshold, it is possible to declare arms to be in $\mathcal{H}_0$. In our setting we can only ever determine that an arm is in $\mathcal{H}_1$ and not $\mathcal{H}_0$; it is impossible to detect that an arm is in $\mathcal{H}_0$ and not in $\mathcal{H}_1$.
Note that the problem considered in this paper is very related to the top-$k$ identification problem where the objective is to identify the unique $k$ arms with the highest means with high probability \cite{chen2017nearly,kalyanakrishnan2012pac,simchowitz2017simulator}.
Indeed, if we knew $|\mathcal{H}_1|$, then our FWER+FWPD setting is equivalent to the top-$k$ problem with $k=|\mathcal{H}_1|$. Lower bounds derived for the top-$k$ problem assume the algorithm has knowledge of the values of the means, just not their indices \cite{chen2017nearly, simchowitz2017simulator}. Thus, these lower bounds also apply to our setting and are what are referenced in Section \ref{sec:contributions}.
As pointed out by \cite{locatelli2016optimal}, both our setting and the threshold bandit problem can be posed as a combinatorial bandits problem as studied in \cite{chen2014combinatorial,cao2017disagreement}, but such generality leads to unnecessary $\log$ factors.
The techniques used in this work aim to reduce extraneous $\log$ factors, a topic of recent interest in the top-$1$ and top-$k$ arm identification problem \cite{even2006action,karnin2013almost,jamieson2014lil,chen2017towards,chen2017nearly,simchowitz2017simulator}.
While these works are most similar to exact identification (FWER+FWPD), there also exist examples of \emph{approximate} top-$k$ where the objective is to find any $k$ means that are each within $\epsilon$ of the best $k$ means \cite{kalyanakrishnan2012pac}.
Approximate recovery is also studied in a ranking context with a symmetric difference metric \cite{heckel2018approximaterank}, which is more similar to the FDR and TPR setting, but neither work subsumes the other.
Finally, maximizing the number of discoveries subject to a FDR constraint has been studied in a sequential setting in the context of A/B testing with uniform sampling \cite{johari2015always}.
This work popularized the concept of an always valid $p$-value that we employ here (see Section~\ref{sec:algorithm}).
The work of \cite{yang2017framework} controls FDR over a \emph{sequence} of independent bandit problems that each outputs at most one discovery.
While \cite{yang2017framework} shares much of the same vocabulary as this paper, the problem settings are very different.
\section{Algorithm and Discussion}\label{sec:algorithm}
\begin{algorithm}[t]
\textbf{Input:} Threshold $\mu_0$, confidence $\delta\in (0, e^{-1}]$, confidence interval $\phi(\cdot,\cdot)$\\
\textbf{Initialize:} Pull each arm $i \in [n]$ once and let $T_i(t)$ denote the number of times arm $i$ has been pulled up to time $t$. Set $\mathcal{S}_{n+1} = \emptyset$, $\mathcal{R}_{n+1} = \emptyset$, and\\
\textbf{If } TPR\\
\hspace*{.25in}$\xi_t=1$, \hspace{.2in} and \hspace{.2in} $\nu_t=1 \quad \forall t$\\
\textbf{Else if } FWPD\\
\hspace*{.25in}$\xi_t=\max\{ 2|\mathcal{S}_t|, \tfrac{5}{3(1-4\delta)} \log(1/\delta) \}$, \hspace{.2in} and \hspace{.2in} $\nu_t=\max\{ |\mathcal{S}_t|, 1\} \quad \forall t$ \\[4pt]
\textbf{For} $t = n+1,n+2,\dots$ \\
\hspace*{.25in}\textbf{Pull arm} $\displaystyle I_t = \arg\max_{i \in [n] \setminus \mathcal{S}_t } \widehat{\mu}_{i,T_i(t)} + \phi(T_i(t),\tfrac{\delta}{\xi_t})$, \\
\hspace*{.25in} \textbf{Apply} Benjamini-Hochberg \cite{benjamini1995controlling} selection at level $\delta' = \tfrac{\delta}{6.4\log(36/\delta)}$ to obtain $\delta$ FDR-controlled set $\mathcal{S}_t$:\\[2pt]
\hspace*{.5in}$s(k) = \{ i \in [n] : \widehat{\mu}_{i,T_i(t)} - \phi(T_i(t),\delta' \tfrac{k}{n}) \geq \mu_0 \}$, $\forall k \in [n]$ \\
\hspace*{.5in}$\displaystyle\mathcal{S}_{t+1} = s(\widehat{k}) \text{ where }\widehat{k} = \max \{ k \in [n]: |s(k)| \geq k \}$ (if $\not\exists \widehat{k}$ set $\mathcal{S}_{t+1} = \mathcal{S}_t$)\\[4pt]
\hspace*{.25in}\textbf{If } FWER and $\mathcal{S}_t \neq \emptyset$:\\
\hspace*{.5in}\textbf{Pull arm} $\displaystyle J_t = \arg\max_{i \in \mathcal{S}_t\setminus \mathcal{R}_t} \widehat{\mu}_{i,T_i(t)} + \phi(T_i(t),\tfrac{\delta}{\nu_t})$ \\
\hspace*{.5in} \textbf{Apply} Bonferroni-like selection to obtain FWER-controlled set $\mathcal{R}_t$:\\[2pt]
\hspace*{.75in}$\chi_t = n - (1-2\delta'(1+4\delta')) |\mathcal{S}_t| + \tfrac{4(1+4\delta')}{3}\log(5\log_2(n/\delta')/\delta')$\\
\hspace*{.75in}$\displaystyle\mathcal{R}_{t+1} = \mathcal{R}_t \cup \{ i \in \mathcal{S}_t : \widehat{\mu}_{i,T_i(t)} - \phi(T_i(t), \tfrac{\delta}{\chi_t}) \geq \mu_0 \}$ \\
\caption{\small An algorithm for identifying arms with means above a threshold $\mu_0$ using as few samples as possible subject to false alarm and true discovery conditions.
The set $\mathcal{S}_t$ is designed to control FDR at level $\delta$.
The set $\mathcal{R}_t$ is designed to control FWER at level $\delta$. \label{alg:FDRMAB}}
\end{algorithm}
Throughout, we will assume the existence of an \emph{anytime confidence interval}.
Namely, if $\widehat{\mu}_{i,t}$ denotes the empirical mean of the first $t$ bounded i.i.d. rewards in $[0,1]$ from arm $i$, then we assume the existence of a function $\phi$ such that for any $\delta \in (0,1)$ we have
$\mathbb{P}\left( \bigcap_{t=1}^\infty \{ |\widehat{\mu}_{i,t} - \mu_i| \leq \phi(t,\delta) \} \right) \geq 1-\delta$.
We assume that $\phi( t , \delta )$ is non-increasing in its second argument and that there exists an absolute constant $c_\phi$ such that $\phi( t , \delta ) \leq \sqrt{\frac{ c_\phi \log( \log_2(2 t)/\delta)}{t}}$.
It suffices to define $\phi$ with this upper bound with $c_\phi=4$ but there are much sharper known bounds that should be used in practice (e.g., they may take empirical variance into account), see \cite{jamieson2014lil,Balsubramani2014,kaufmann2016complexity,tanczosnowak2017}.
Anytime bounds constructed with such a $\phi(t,\delta)$ are known to be tight in the sense that $\mathbb{P}(\bigcup_{t=1}^\infty \{|\widehat{\mu}_{i,t} - \mu_i| \geq \phi(t,\delta)\}) \leq \delta$ and that there exists an absolute constant $h \in (0,1)$ such that $\mathbb{P}(\{|\widehat{\mu}_{i,t} - \mu_i| \geq h \, \phi(t,\delta)\text{ for infinitely many $t \in \mathbb{N}$}\}) = 1$ by the Law of the Iterated Logarithm \cite{HartmanWintnerLIL}.
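For concreteness, the loose-but-sufficient choice above with $c_\phi = 4$ reads as follows in Python (a sketch; in practice one of the sharper cited bounds, e.g. with empirical variance, would replace it):
\begin{verbatim}
import numpy as np

def phi(t, delta, c_phi=4.0):
    # Anytime confidence radius sqrt(c_phi * log(log2(2t)/delta) / t),
    # valid for t >= 1 and delta in (0,1).
    return np.sqrt(c_phi * np.log(np.log2(2.0 * t) / delta) / t)
\end{verbatim}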
Consider Algorithm~\ref{alg:FDRMAB}.
Before entering the for loop, time-dependent variables $\xi_t$ and $\nu_t$ are defined that should be updated at each time $t$ for different settings.
If just FDR control is desired, the algorithm merely loops over the three lines following the for loop, pulling the arm $I_t$ not in $\mathcal{S}_t$ that has the highest upper confidence bound; such strategies are common for pure-exploration problems \cite{jamieson2014lil,yang2017framework}.
But if FWER control is desired then at most one additional arm $J_t$ is pulled per round to provide an extra layer of filtering and evidence before an arm is added to $\mathcal{R}_t$.
Below we describe the main elements of the algorithm and along the way sketch out the main arguments of the analysis, shedding light on the constants $\xi_t$ and $\nu_t$.
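To fix ideas before that discussion, here is a minimal Python sketch of the FDR+TPR variant ($\xi_t = \nu_t = 1$, no FWER filter), with BH run at level $\delta$ as recommended later for practice; \texttt{pull(i)} is a stand-in for drawing one reward from arm $i$, and all names are illustrative.
\begin{verbatim}
import numpy as np

def phi(t, delta, c_phi=4.0):
    return np.sqrt(c_phi * np.log(np.log2(2.0 * t) / delta) / t)

def run_fdr_tpr(pull, n, mu0, delta, n_rounds):
    T = np.ones(n)                       # pull counts T_i(t)
    mu_hat = np.array([pull(i) for i in range(n)], dtype=float)
    S = set()                            # current discovery set S_t
    for _ in range(n_rounds):
        # Pull the arm outside S with the largest upper confidence bound.
        ucb = mu_hat + phi(T, delta)
        if S:
            ucb[list(S)] = -np.inf
        i = int(np.argmax(ucb))
        mu_hat[i] = (T[i] * mu_hat[i] + pull(i)) / (T[i] + 1)
        T[i] += 1
        # BH-style selection: S = s(k_hat), k_hat = max{k : |s(k)| >= k};
        # if no such k exists, S is left unchanged.
        for k in range(n, 0, -1):
            s_k = np.where(mu_hat - phi(T, delta * k / n) >= mu0)[0]
            if len(s_k) >= k:
                S = set(s_k.tolist())
                break
    return S
\end{verbatim}
For instance, with \texttt{rng = np.random.default\_rng()} and a mean vector \texttt{mu}, calling \texttt{run\_fdr\_tpr(lambda i: rng.normal(mu[i], 1), n, 0.0, 0.05, 10**4)} mimics the Gaussian setting used in the experiments.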
\subsection{False alarm control}\label{sec:false_alarm_control}
\textbf{$\mathcal{S}_t$ is FDR-controlled.}
In addition to its use as a confidence bound, we can also use $\phi(t,\delta)$ to construct:
\begin{align}\label{eqn:anytime_pvalue}
P_{i,t} := \sup \{ \alpha \in (0,1] : \widehat{\mu}_{i,t}-\mu_0 \leq \phi(t,\alpha) \} \leq \log_2(2t) \exp(-t (\widehat{\mu}_{i,t}-\mu_0)^2/c_\phi).
\end{align}
Proposition~1 of \cite{yang2017framework} (and the proof of our Lemma~\ref{lem:expected_FDR}) shows that if $i \in \mathcal{H}_0$ so that $\mu_i = \mu_0$ then $P_{i,t}$ is an \emph{anytime, sub-uniformly distributed $p$-value} in the sense that $\mathbb{P}( \bigcup_{t=1}^\infty \{ P_{i,t} \leq x \} ) \leq x$.
Sequences that have this property are sometimes referred to as \emph{always-valid} $p$-values \cite{johari2015always}.
Note that if $i \in \mathcal{H}_1$ so that $\mu_i > \mu_0$, we would intuitively expect the sequence $\{P_{i,t}\}_{t=1}^\infty$ to be point-wise smaller than if $\mu_i = \mu_0$ by the property that $\phi(\cdot,\cdot)$ is non-increasing in its second argument.
This leads to the intuitive rule to reject the null-hypothesis (i.e., declare $i \notin \mathcal{H}_0$) for those arms $i \in [n]$ where $P_{i,t}$ is very small.
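For the specific $\phi$ bound above, this supremum has the closed form displayed in (\ref{eqn:anytime_pvalue}); a direct Python sketch (capped at $1$ so the output is a valid $p$-value):
\begin{verbatim}
import numpy as np

def anytime_p_value(mu_hat, mu0, t, c_phi=4.0):
    # log2(2t) * exp(-t * (mu_hat - mu0)^2 / c_phi), capped at 1.
    # Only positive deviations are evidence against mu_i = mu0.
    dev = max(mu_hat - mu0, 0.0)
    return min(1.0, np.log2(2.0 * t) * np.exp(-t * dev**2 / c_phi))
\end{verbatim}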
The Benjamini-Hochberg (BH) procedure introduced in \cite{benjamini1995controlling} proceeds by first sorting the $p$-values so that $P_{(1),T_{(1)}(t)} \leq P_{(2),T_{(2)}(t)} \leq \dots \leq P_{(n),T_{(n)}(t)}$, then defines $\widehat{k} = \max\{ k : P_{(k),T_{(k)}(t)} \leq \delta \tfrac{k}{n} \}$, and sets $\mathcal{S}_{BH} = \{ i : P_{i,T_i(t)} \leq \delta \tfrac{\widehat{k}}{n} \}$.
Note that this procedure is identical to defining sets
\begin{align*}
s(k) = \{ i : P_{i,T_i(t)} \leq \delta \tfrac{k}{n} \} = \{ i : \widehat{\mu}_{i,T_i(t)} - \phi(T_i(t),\delta\tfrac{k}{n}) \geq \mu_0 \},
\end{align*}
setting $\widehat{k} = \max\{ k : |s(k)| \geq k \}$, and $\mathcal{S}_{BH} = s(\widehat{k})$, which is exactly the set $\mathcal{S}_t = \mathcal{S}_{BH}$ in Algorithm~\ref{alg:FDRMAB}.
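In the classical $p$-value form, the same selection can be sketched in a few lines of Python (a standard BH implementation, included for concreteness):
\begin{verbatim}
import numpy as np

def bh_select(p, delta):
    # Reject the k_hat smallest p-values, where
    # k_hat = max{k : p_(k) <= delta*k/n} (empty set if no such k).
    p = np.asarray(p)
    n = len(p)
    order = np.argsort(p)
    ok = np.nonzero(p[order] <= delta * np.arange(1, n + 1) / n)[0]
    return set() if ok.size == 0 else set(order[:ok[-1] + 1].tolist())
\end{verbatim}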
Thus, $\mathcal{S}_t$ in Algorithm~\ref{alg:FDRMAB} is equivalent to applying the BH procedure at a level $O(\delta/\log(1/\delta))$ to the anytime $p$-values of \eqref{eqn:anytime_pvalue}. We now discuss the extra logarithmic factor.
Because the algorithm pulls arms sequentially, some dependence between the $p$-values may be introduced; since the anytime $p$-values are not necessarily independent, the BH procedure at level $\delta$ does not directly guarantee FDR control at level $\delta$.
However, it has been shown \cite{benjamini2001control} that for even arbitrarily dependent $p$-values the BH procedure at level $\delta$ controls FDR at level $\delta \log(n)$ (and that it is nearly tight).
Similarly, the following theorem, which may be of independent interest,
is a significant improvement when applied to our setting.
\begin{theorem}\label{thm:newFDRcontrol}
Fix $\delta \in (0,e^{-1})$.
Let $p_1,\dots,p_n$ be random variables such that $\{ p_i \}_{i\in \mathcal{H}_0}$ are independent and sub-uniformly distributed so that $\max_{i \in \mathcal{H}_0} \mathbb{P}(p_i \leq x ) \leq x$. For any $k \in \{0,1,\dots,n\}$, let $R_k := \{ i : p_i \leq \delta \frac{k}{n} \}$ and $\widehat{FDP}(R_k) := \tfrac{n \max_{p_i \in R_k} p_i }{|R_k| \vee 1}$.
\begin{align*}
\mathbb{E}\left[ \max_{k : \widehat{FDP}(R_k) \leq \delta} FDP(R_k) \right]
&\leq \frac{|\mathcal{H}_0|\delta}{n} \left( 2 \log( \tfrac{2n}{|\mathcal{H}_0|\delta }) + \log( 8 e^5\log( \tfrac{8n}{|\mathcal{H}_0| \delta} ) ) \right) \\
&\leq 4 \delta \log( 9/\delta)
\end{align*}
In other words, any procedure that chooses a set $\{ i : p_i \leq \frac{\delta k}{n} \}$ satisfying $|\{ i : p_i \leq \frac{\delta k}{n} \}| \geq k$ is FDR controlled at level $O(\delta \log(1/\delta))$.
\end{theorem}
Recall that if $\widehat{k} = \max\{ k : \widehat{FDP}(R_k) \leq \delta \}$ then $\mathbb{E}[ FDP(R_{\widehat{k}})] \leq \delta$ by the standard BH result.
When running the algorithm we recommend using BH at level $\delta$, not level $O(\delta/\log(1/\delta))$.
As $T_i$ gets very large, $P_{i,T_i(t)} \rightarrow P_{i,*}$ and we know that if BH is run on $P_{i,*}$ at level $\delta$ then FDR would be controlled at level $\delta$. We believe this inflation to be somewhat of an artifact of our proofs.
\textbf{$\mathcal{R}_t$ is FWER-controlled.}
A core obstacle in our analysis is the fact that we don't know the cardinality of $\mathcal{H}_1$.
If we did know $|\mathcal{H}_1|$ (and equivalently know $|\mathcal{H}_0| = n - |\mathcal{H}_1|$) then a FWER+FWPD algorithm is equivalent to the so-called top-$k$ multi-armed bandit problem \cite{kalyanakrishnan2012pac,simchowitz2017simulator} and controlling FWER would be relatively simple using a Bonferroni correction:
\begin{align*}
\mathbb{P}\Big( \bigcup_{i \in \mathcal{H}_0} \cup_{t=1}^\infty \{ \widehat{\mu}_{i,t} - \phi(t, \tfrac{\delta}{n-|\mathcal{H}_1|}) \geq \mu_0\}\Big) \leq \sum_{i \in \mathcal{H}_0} \mathbb{P}\left(\cup_{t=1}^\infty \{ \widehat{\mu}_{i,t} - \phi(t, \tfrac{\delta}{|\mathcal{H}_0|}) \geq \mu_0\} \right)\leq |\mathcal{H}_0| \tfrac{\delta}{|\mathcal{H}_0|}
\end{align*}
which implies FWER-$\delta$.
Comparing the first expression immediately above to the definition of $\mathcal{R}_t$ in the algorithm, it is clear our strategy is to use $|\mathcal{S}_t|$ as a surrogate for $|\mathcal{H}_1|$.
Note that we could use the bound $|\mathcal{H}_0| = n - |\mathcal{H}_1| \leq n$ to guarantee FWER-$\delta$, but this could be very loose and induce an $n\log(n)$ sample complexity.
Using $|\mathcal{S}_t|$ as a surrogate for $|\mathcal{H}_1|$ in $\mathcal{R}_t$ is intuitive because by the FDR guarantee, we know $|\mathcal{H}_1| \geq \mathbb{E}[|\mathcal{S}_t \cap \mathcal{H}_1|] = \mathbb{E}[|\mathcal{S}_t|] - \mathbb{E}[|\mathcal{S}_t \cap \mathcal{H}_0|] \geq (1-\delta) \mathbb{E}[ |\mathcal{S}_t|]$, implying that $|\mathcal{H}_0| = n - |\mathcal{H}_1| \leq n - (1-\delta) \mathbb{E}[|\mathcal{S}_t|]$ which may be much tighter than $n$ if $\mathbb{E}[|\mathcal{S}_t|] \rightarrow |\mathcal{H}_1|$.
Because we only know $|\mathcal{S}_t|$ and not its expectation, the extra factors in the surrogate expression used in $\mathcal{R}_t$ are used to ensure correctness with high-probability (see Lemma~\ref{lem:fwer}).
\subsection{Sampling strategies to boost statistical power}
The above discussion about controlling false alarms for $\mathcal{S}_t$ and $\mathcal{R}_t$ holds for \emph{any} choice of arms $I_t$ and $J_t$ that may be pulled at time $t$.
Thus, $I_t$ and $J_t$ are chosen in order to minimize the amount of time necessary to add arms into $\mathcal{S}_t$ and $\mathcal{R}_t$, respectively, and optimize the sample complexity.
\textbf{TPR-$\delta,\tau$ setting} implies $\xi_t=\nu_t=1$.
Define the random set $\mathcal{I} = \{ i \in \mathcal{H}_1 : \widehat{\mu}_{i,T_i(t)} + \phi(T_i(t),\delta) \geq \mu_i \ \ \forall t \in \mathbb{N} \}$.
Because $\phi$ is an anytime confidence bound, $\mathbb{E}\left[ \left| \mathcal{I} \right| \right] \geq (1-\delta) |\mathcal{H}_1|$.
If $\Delta = \min_{i \in \mathcal{H}_1} \mu_i - \mu_0$, then $\min_{i \in \mathcal{I}} \mu_i \geq \mu_0 + \Delta$ and we claim that with probability at least $1-O(\delta)$ (Section~\ref{sec:FDR_TPR_proof})
\begin{align*}
\textstyle\sum_{t=1}^\infty \mathbf{1}\{I_t \in \mathcal{H}_0, \mathcal{I} \not\subseteq \mathcal{S}_t \} &\leq \textstyle\sum_{t=1}^\infty \mathbf{1}\{I_t \in \mathcal{H}_0, \widehat{\mu}_{I_t,T_{I_t}(t)} + \phi(T_{I_t}(t),\delta) \geq \mu_0 + \Delta \}\\
&\leq c|\mathcal{H}_0| \Delta^{-2}\log(\log(\Delta^{-2})/\delta).
\end{align*}
Thus once this number of samples has been taken, either $\mathcal{I} \subseteq \mathcal{S}_t$, or arms in $\mathcal{I}$ will be repeatedly sampled until they are added to $\mathcal{S}_t$, since each arm $i \in \mathcal{I}$ has its upper confidence bound larger than those of the arms in $\mathcal{H}_0$ by definition.
It is clear that an arm in $\mathcal{H}_1$ that is repeatedly sampled will eventually be added to $\mathcal{S}_t$ since its anytime $p$-value of \eqref{eqn:anytime_pvalue} approaches $0$ at an exponential rate as it is pulled, and BH selects for low $p$-values.
A similar argument holds for $J_t$ and adding arms to $\mathcal{R}_t$.
\begin{remark}
While the main objective of Algorithm~\ref{alg:FDRMAB} is to identify all arms with means above a given threshold, we note that prior to adding an arm to $\mathcal{S}_t$ in the TPR setting (i.e., when $\xi_t=1$) Algorithm~\ref{alg:FDRMAB} behaves identically to the nearly optimal best-arm identification algorithm lil'UCB of \cite{jamieson2014lil}.
Thus, whether the goal is best-arm identification or to identify all arms with means above a certain threshold, Algorithm~\ref{alg:FDRMAB} is applicable.
\end{remark}
\textbf{FWPD-$\delta,\tau$ setting} is more delicate and uses inflated values of $\xi_t$ and $\nu_t$.
This time, we must ensure that $\{\mathcal{H}_1 \not\subseteq \mathcal{S}_t\} \implies \max_{i \in \mathcal{H}_1 \cap \mathcal{S}_t^c} \widehat{\mu}_{i,T_i(t)} + \phi(T_i(t),\delta) \geq \min_{i \in \mathcal{H}_1 \cap \mathcal{S}_t^c} \mu_i \geq \mu_0 + \Delta$.
Then we could argue that either $\mathcal{H}_1 \subseteq \mathcal{S}_t$, or only arms in $\mathcal{H}_1$ are sampled until they are added to $\mathcal{S}_t$ (mirroring the TPR argument).
As in the FWER setting above, if we knew the value of $|\mathcal{H}_1|$ then we could set $\xi_t \geq |\mathcal{H}_1|$ to observe that
\begin{align*}
\textstyle\mathbb{P}( \bigcup_{i \in \mathcal{H}_1} \cup_{t=1}^\infty \{ \widehat{\mu}_{i,t} + \phi(t, \tfrac{\delta}{\xi_t}) < \mu_i\} ) \leq \sum_{i \in \mathcal{H}_1} \mathbb{P}\left(\cup_{t=1}^\infty \{ \widehat{\mu}_{i,t} + \phi(t, \tfrac{\delta}{\xi_t}) < \mu_i\} \right)\leq |\mathcal{H}_1| \tfrac{\delta}{\xi_t}
\end{align*}
which is at most $\delta$, guaranteeing such a condition.
But we don't know $|\mathcal{H}_1|$ so we use $|\mathcal{S}_t|$ as a surrogate, resulting in the inflated definitions of $\xi_t$ and $\nu_t$ relative to the TPR setting.
The key argument is that either $\mathcal{I} \not\subseteq \mathcal{S}_t$ so that $\max_{i \in \mathcal{I} \cap \mathcal{S}_t^c} \widehat{\mu}_{i,T_i(t)} + \phi(T_i(t),\tfrac{\delta}{\xi_t}) \geq \mu_0+\Delta$ by the definition of $\mathcal{I}$ (since $\xi_t \geq 1$), or $\mathcal{I} \subset \mathcal{S}_t$ and $|\mathcal{S}_t| \geq \frac{1}{2} |\mathcal{H}_1|$ with high probability which implies $\xi_t = \max\{ 2|\mathcal{S}_t|, \tfrac{5}{3(1-4\delta)} \log(1/\delta) \} \geq |\mathcal{H}_1|$ and the union bound of the display above holds.
\section{Main Results}\label{sec:main_results}
In what follows, we say $f \lesssim g$ if there exists a $c >0$, independent of all problem parameters, such that $f \leq c g$.
The theorems provide an upper bound on the sample complexity $\tau \in \mathbb{N}$ as defined in Section~\ref{sec:problem_statement} for TPR-$\delta,\tau$ or FWPD-$\delta,\tau$ that holds with probability at least $1-c \delta$ for different values of $c$\footnote{
Each theorem relies on different events holding with high probability, and consequently a different $c$ for each. To have $c=1$ for each of the four settings, we would have had to define different constants in the algorithm for each setting. We hope the reader forgives us for this attempt at minimizing clutter.}.
We begin with the least restrictive setting, resulting in the smallest sample complexity of all the results presented in this work.
Note the slight generalization in the theorem below, where the means of $\mathcal{H}_0$ are assumed to be no greater than $\mu_0$.
\begin{theorem}[FDR, TPR]\label{thm:FDR_TPR}
Let $\mathcal{H}_1 = \{ i \in [n]: \mu_i > \mu_0\}$, $\mathcal{H}_0 = \{ i \in [n]: \mu_i \leq \mu_0 \}$. Define $\Delta_i = \mu_i - \mu_0$ for $i \in \mathcal{H}_1$, $\Delta = \min_{i \in \mathcal{H}_1} \Delta_i$, and $\Delta_i=\min_{j \in \mathcal{H}_1} \mu_j - \mu_i = \Delta + (\mu_0 - \mu_i)$ for $i\in\mathcal{H}_0$.
For all $t \in \mathbb{N}$ we have $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t| \vee 1}] \leq \delta$.
Moreover, with probability at least $1-2\delta$ there exists a $T$ such that
\begin{align*}
\textstyle T \lesssim \min\big\{&n \Delta^{-2} \log( \log(\Delta^{-2})/\delta), \\
&\textstyle\sum_{i \in \mathcal{H}_0} \Delta_i^{-2} \log( \log(\Delta_i^{-2})/\delta) + \textstyle\sum_{i \in \mathcal{H}_1} \Delta_i^{-2} \log( n \log(\Delta_i^{-2})/\delta)\big\}
\end{align*}
and $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_1|}{|\mathcal{H}_1|}] \geq 1-\delta$ for all $t \geq T$.
Neither argument of the minimum follows from the other.
\end{theorem}
If the means of $\mathcal{H}_1$ are very diverse so that $\max_{i \in \mathcal{H}_1} \mu_i-\mu_0 \gg \min_{i \in \mathcal{H}_1} \mu_i-\mu_0$ then the second argument of the min in Theorem~\ref{thm:FDR_TPR} can be tighter than the first.
But as discussed above, this advantage is inconsequential if $|\mathcal{H}_1| = o(n)$.
The remaining theorems are given in terms of just $\Delta$.
The $\log\log(\Delta^{-2})$ dependence is due to inverting the $\phi$ confidence interval and is unavoidable on at least one arm when $\Delta$ is unknown a priori due to the law of the iterated logarithm \cite{HartmanWintnerLIL,jamieson2014lil,chen2017towards}.
Informally, Theorem~\ref{thm:FDR_TPR} states that if detecting most of the true positives suffices, while not making too many mistakes, then $O(n)$ samples suffice.
The first argument of the min is known to be tight in a minimax sense up to doubly logarithmic factors due to the lower bound of \cite{castro2014adaptive}.
As a consequence of this work, an algorithm inspired by Algorithm~\ref{alg:FDRMAB} in this setting is now in production at one of the largest A/B testing platforms on the web.
The full proof of Theorem~\ref{thm:FDR_TPR} (and all others) is given in the Appendix due to space constraints.
\begin{theorem}[FDR, FWPD]\label{thm:FDR_FWPD}
For all $t \in \mathbb{N}$ we have $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t| \vee 1}] \leq \delta$.
Moreover, with probability at least $1-5\delta$, there exists a $T$ such that
\begin{align*}
T \lesssim (n-|\mathcal{H}_1|) \Delta^{-2} \log ( \max\{|\mathcal{H}_1|,& \log\log(n/\delta)\} \log(\Delta^{-2})/ \delta) + |\mathcal{H}_1| \Delta^{-2} \log ( \log(\Delta^{-2})/ \delta)
\end{align*}
and $\mathcal{H}_1 \subseteq \mathcal{S}_t$ for all $t \geq T$.
\end{theorem}
Here $T$ roughly scales like $(n-|\mathcal{H}_1|) \max\{\log(|\mathcal{H}_1|), \log\log\log(n/\delta)\} + |\mathcal{H}_1|$ where the $\log\log\log(n/\delta)$ term comes from a high probability bound on the false discovery proportion for anytime $p$-values (in contrast to just expectation) in Lemma~\ref{lem:fdr_high_prob} that may be of independent interest.
While negligible for all practical purposes, it appears unnatural and we suspect that this is an artifact of our analysis.
We note that if $|\mathcal{H}_1| = \Omega(\log(n))$ then the sample complexity sheds this awkwardness\footnote{In the asymptotic $n$ regime, it is common to study the case when $|\mathcal{H}_1| = n^\beta$ for $\beta \in (0,1)$ \cite{castro2014adaptive,haupt2011distilled}.}.
The next two theorems are concerned with controlling FWER on the set $\mathcal{R}_t$ and determining how long it takes before the claimed detection conditions are satisfied on the set $\mathcal{R}_t$.
Note we still have that FDR is controlled on the set $\mathcal{S}_t$ but now this set feeds into $\mathcal{R}_t$.
\begin{theorem}[FWER, FWPD]\label{thm:FWER_FWPD}
For all $t$ we have $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t| \vee 1}] \leq \delta$.
Moreover, with probability at least $1-6\delta$, we have $\mathcal{H}_0 \cap \mathcal{R}_t = \emptyset$ for all $t \in \mathbb{N}$ and there exists a $T$ such that
\begin{align*}
T \lesssim& (n-|\mathcal{H}_1|) \Delta^{-2} \log ( \max\{|\mathcal{H}_1|, \log\log(n/\delta)\} \log(\Delta^{-2}) / \delta) \\
&+ |\mathcal{H}_1| \Delta^{-2} \log ( \max\{n-(1-2\delta(1+4\delta))|\mathcal{H}_1|, \log\log(n/\delta)\}\log(\Delta^{-2}) / \delta)
\end{align*}
and $\mathcal{H}_1 \subseteq \mathcal{R}_t$ for all $t \geq T$.
Note, together this implies $\mathcal{H}_1 = \mathcal{R}_t$ for all $t \geq T$.
\end{theorem}
Theorem~\ref{thm:FWER_FWPD} has the strongest conditions, and therefore the largest sample complexity.
Ignoring $\log\log\log(n)$ factors, $T$ roughly scales as $(n-|\mathcal{H}_1|) \log(|\mathcal{H}_1|) + |\mathcal{H}_1| \log(n-(1-2\delta(1+4\delta))|\mathcal{H}_1|)$.
Inspecting the top-k lower bound of \cite{simchowitz2017simulator} where the arms' means in $\mathcal{H}_1$ are equal to $\mu_0 + \Delta$, the arms' means in $\mathcal{H}_0$ are equal to $\mu_0$, and the algorithm has knowledge of the cardinality of $\mathcal{H}_1$, a necessary sample complexity of $(n-|\mathcal{H}_1|)\log(|\mathcal{H}_1|) + |\mathcal{H}_1| \log(n-|\mathcal{H}_1|)$ is given.
It is not clear whether this small difference of $\log(n-(1-2\delta(1+4\delta)) |\mathcal{H}_1|)$ versus $\log(n-|\mathcal{H}_1|)$ is an artifact of our analysis, or a fundamental limitation when the cardinality $|\mathcal{H}_1|$ is unknown.
We now state our final theorem.
\begin{theorem}[FWER, TPR]\label{thm:FWER_TPR}
For all $t$ we have $\mathbb{E}[\frac{|\mathcal{S}_t \cap \mathcal{H}_0|}{|\mathcal{S}_t| \vee 1}] \leq \delta$.
Moreover, with probability at least $1-7\delta$ we have $\mathcal{H}_0 \cap \mathcal{R}_t = \emptyset$ for all $t \in \mathbb{N}$ and there exists a $T$ such that
\begin{align*}
T \lesssim& (n-|\mathcal{H}_1|) \Delta^{-2} \log ( \log(\Delta^{-2})/ \delta) \\
&+ |\mathcal{H}_1| \Delta^{-2} \log ( \max\{n-(1-\eta)|\mathcal{H}_1|, \log\log(n\log(1/\delta)/\delta)\}\log(\Delta^{-2}) / \delta)
\end{align*}
and $\mathbb{E}[\frac{|\mathcal{R}_t \cap \mathcal{H}_1|}{|\mathcal{H}_1|}] \geq 1-\delta$ for all $t \geq T$, where $\eta=( 1 - 3\delta- \sqrt{2\delta \log(1/\delta)/|\mathcal{H}_1|})$.
\end{theorem}
\section{Experiments}
The distribution of each arm equals $\nu_i = \mathcal{N}(\mu_i,1)$ where $\mu_i = \mu_0 = 0$ if $i \in \mathcal{H}_0$, and $\mu_i>0$ if $i \in \mathcal{H}_1$.
We consider three algorithms: $i$) uniform allocation with anytime BH selection as done in Algorithm~1, $ii$) successive elimination (SE) (see Appendix~\ref{sec:succ-elim})\footnote{Inspired by the best-arm identification literature \cite{even2006action}.} that performs uniform allocation on only those arms that have not yet been selected by BH, and $iii$) Algorithm 1 (UCB).
Algorithm 1 and the BH selection rule for all algorithms use $\phi(t,\delta) = \sqrt{\tfrac{2\log(1/\delta)+6 \log\log(1/\delta) + 3 \log(\log(e t/2))}{t}}$ from \cite[Theorem 8]{kaufmann2016complexity}. In addition, we ran BH at level $\delta$ instead of $\delta/(6.4\log(36/\delta))$, as discussed in Section~\ref{sec:main_results}.
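For concreteness, the following minimal sketch (our own code, not part of the released implementation) evaluates this confidence radius; it simply transcribes the displayed formula, and the printed evaluation points are our choice.
\begin{verbatim}
// Minimal sketch (our code) of the anytime confidence radius
// phi(t, delta) quoted above from [kaufmann2016complexity, Thm. 8];
// the radius shrinks roughly like sqrt(log log t / t).
#include <cmath>
#include <cstdio>

double phi(double t, double delta) {
    return std::sqrt((2.0 * std::log(1.0 / delta)
                    + 6.0 * std::log(std::log(1.0 / delta))
                    + 3.0 * std::log(std::log(std::exp(1.0) * t / 2.0)))
                   / t);
}

int main() {
    for (int t : {10, 100, 1000, 10000})
        std::printf("phi(%5d, 0.05) = %.4f\n", t, phi(t, 0.05));
}
\end{verbatim}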
Here we present the sample complexity for TPR+FDR with $\delta=0.05$ and different parameterizations of $\mu$, $n$, $|\mathcal{H}_1|$.\\[6pt]
\begin{tabular}{l l l l l}
\includegraphics[width=.252\textwidth,trim={.6cm .7cm .7cm .7cm},clip]{plot_TPR_time.png} &
\includegraphics[width=.201\textwidth,trim={.3cm .7cm .7cm .4cm},clip]{plot_H1eq2.png} &
\includegraphics[width=.18\textwidth,trim={1.3cm .7cm .7cm .4cm},clip]{plot_H1sqrtN.png} &
\includegraphics[width=.18\textwidth,trim={1.3cm .7cm .7cm .4cm},clip]{plot_H1eqN5.png} &
\includegraphics[width=.18\textwidth,trim={1.3cm .7cm .7cm .4cm},clip]{plot_H1var.png}
\end{tabular}
The first panel shows an empirical estimate of $\mathbb{E}[ \frac{|\mathcal{S}_t \cap \mathcal{H}_1|}{|\mathcal{H}_1|} ]$ at each time $t$ for each algorithm, averaged over 1000 trials. The black dashed line on the first panel denotes the level $\mathbb{E}[ \frac{|\mathcal{S}_t \cap \mathcal{H}_1|}{|\mathcal{H}_1|} ]=1-\delta = .95$, and corresponds to the dashed black line on the second panel. The right four panels show the number of samples each algorithm takes before the true positive rate exceeds $1-\delta=.95$, relative to the number of samples taken by UCB, for various parameterizations.
Panels two, three, and four have $\Delta_i=\Delta$ for $i \in \mathcal{H}_1$ while panel five is a case where the $\Delta_i$'s are linear for $i \in \mathcal{H}_1$.
While the differences are clearest on the second panel, where $|\mathcal{H}_1| =2= o(n)$, across all cases UCB uses at least ${\approx}3$ times fewer samples than uniform and SE.
For FDR+TPR, Appendix~\ref{sec:succ-elim} shows uniform sampling roughly has a sample complexity that scales like $n \Delta^{-2} \log(\tfrac{n}{|\mathcal{H}_1|})$ while SE's is upper bounded by $\min\{ n \Delta^{-2} \log(\tfrac{n}{|\mathcal{H}_1|}), (n-|\mathcal{H}_1|)\Delta^{-2} \log(\tfrac{n}{|\mathcal{H}_1|}) + \sum_{i \in \mathcal{H}_1} \Delta_i^{-2} \log(n )\}$. Comparing with Theorem~\ref{thm:FDR_TPR} for the different cases (i.e., $|\mathcal{H}_1| =2, \sqrt{n}, n/5$) provides insight into the relative differences between UCB, uniform, and SE on the different panels.
\subsection*{Acknowledgments}
This work was informed and inspired by early discussions with Aaditya Ramdas on methods for controlling the false discovery rate (FDR) in multiple testing; we are grateful to have learned from a leader in the field, and we thank him for his careful reading and feedback. We also thank Martin J. Zhang for his input.
We also thank the leading experimentation and A/B testing platform on the web, \emph{Optimizely}, for its support, insight into its customers' needs, and for committing engineering time to implementing this research into their platform \cite{optimizely}.
In particular, we thank Whelan Boyd, Jimmy Jin, Pete Koomen, Sammy Lee, Ajith Mascarenhas, Sonesh Surana, and Hao Xia at Optimizely for their efforts.
\clearpage
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
As we approach the end of the semiconductor roadmap~\cite{semiconductor2015}, we are entering a regime in which fundamental thermodynamic considerations limit the sub-threshold slope, practical switching voltages, and gate energies---implying that further downscaling of device sizes and gate capacitances will soon no longer yield improvements in energy efficiency for conventional logic.
Industry's shift towards 3D geometries~\cite{semiconductor2015} will somewhat reduce parasitic energy losses in circuit structures, but once that line of improvements is played out, the only remaining approach to further increase energy efficiency will be to begin applying techniques of energy recovery.
In this regard, resonant circuit techniques that recycle and reuse logic signal energies, rather than dissipating the entire $\frac{1}{2}CV^2$ circuit node energy on each logic-level transition, are promising. Unlike for all other options, no fundamental theoretical limits on the ultimate energy efficiency of energy recovery are currently known---offering a path towards future growth of computing performance within any given energy dissipation constraints.
However, the ideal of 100\% energy recovery implies that all switching activity of a device must be carried out in a manner that is asymptotically adiabatic---avoiding any abrupt loss of signal energy to heat. This motivates the consideration of \emph{adiabatic circuits}, which allow for computations with asymptotically close to zero energy dissipation (at the expense of a slower execution).
Due to Landauer's limit~\cite{Landauer61}, this in turn implies that the computational function of the switching circuit must be \emph{logically reversible}, in the appropriately generalized sense discussed in~\cite{DBLP:conf/rc/Frank17}. Otherwise, the information lost from a conventional circuit leads to an entropy increase and, therefore, to an irreducible energy dissipation. This view was recently also advocated to a larger community in~\cite{frank2017reversible}, which states that the future of computing depends on reversible computation.
While these concepts have already been around for a while---general techniques for designing fully-adiabatic and reversible circuits were introduced in the 1990s and resulted in a large body of literature (see e.g.~\cite{koller1992adiabatic,hall1992electroid,merkle1992towards,younis1993practical})---most of the adiabatic design families that have been proposed contain flaws preventing them from being truly adiabatic~\cite{DBLP:conf/csreaESA/Frank03}. In this regard, \emph{two-level adiabatic logic} (2LAL, as proposed in~\cite{anantharam2004driving}) represents a very promising, \mbox{fully-adiabatic} transmission-gate logic family that relies on simple but rather efficient building blocks.
However, realizing correct adiabatic and reversible circuit designs that could truly approach arbitrarily low levels of energy dissipation
requires satisfying certain \emph{switching rules} which differ from those of conventional circuit design---crying out for automated approaches for the design of such adiabatic circuits. This direction has recently also gained relevance in industry---triggered e.g.~by investments of funding agencies and national departments~\cite{frank2017reversible}. Accordingly, researchers have started to work towards such solutions.
However, previously proposed approaches either focus on their electrical realization (see e.g.~\cite{younis1993practical,anantharam2004driving}) or on designing purely reversible building blocks like Toffoli gates (see e.g.~\cite{DBLP:journals/tcad/MorrisonR14,rauchenecker2017exploiting} in combination with corresponding synthesis approaches such as~\cite{zulehner2017one,ZulehnerW18Exploiting}). While the former approaches are restricted to small and \mbox{hand-crafted} circuits only, relying on purely reversible building blocks results in an unnecessarily large overhead.
Instead, recent findings (summarized in~\cite{DBLP:conf/rc/Frank17}) show that conditional reversibility is sufficient for adiabatic circuits. Thus far, however, no design automation approach for adiabatic circuits exists which exploits this weaker requirement.
In this work, we overcome this issue by combining expertise from both adiabatic circuits and design automation. More precisely,
we review the theoretical and technical background of adiabatic circuits and, based on that, propose
an automatic \emph{and} dedicated design flow for this promising technology.
We consider two complementary design styles (namely retractile and fully-pipelined), which allow for generating adiabatic circuits that focus either on reducing the number of gates or on keeping the number of so-called power clocks small. Furthermore, we propose optimizations for both design styles
which utilize application-specific properties and thereby allow, e.g., for a reduction in the number of gates by approx.~37\% and 30\% on average for the retractile and fully-pipelined design styles, respectively.
Evaluations confirm the benefits and applicability of the proposed solution.
The remainder of this work is structured as follows:
Section~\ref{sec:background} provides a review of the theoretical and technical background of adiabatic circuits. Based on that, the proposed design flow is introduced in Section~\ref{sec:flow}, followed by descriptions of the mapping methods for the retractile and the fully-pipelined design style in Section~\ref{sec:retractile} and Section~\ref{sec:pipelined}, respectively. Finally, a summary of the results from our evaluations is given in Section~\ref{sec:exp} and the paper is concluded in Section~\ref{sec:conclusions}.
\vfill
\section{Adiabatic Circuits}
\label{sec:background}
In this work, we consider design automation for adiabatic circuits according to the \emph{two-level adiabatic logic} (2LAL,~\cite{anantharam2004driving}) circuit family. This type of adiabatic circuit uses only two different voltage levels and heavily relies on transmission gates.
Furthermore, a dual-rail encoding is used for the signals of the circuit, i.e.~each signal occurs in uncomplemented as well as in complemented form.\footnote{The uncomplemented form of a signal is labeled with a subscript $N$ and the complemented form is labeled with subscript $P$.}
Fig.~\ref{fig:transmission_gate} provides the notation for transmission gates: If the signal $P$ is 1 (the gate is \emph{turned on}), $A$ and $B$ are connected.\footnote{Note that logic 1 (i.e.~$X=1$) is realized by $X_N = 1$ and $X_P=0$ since a dual-rail encoding is employed.} Otherwise (the gate is \emph{turned off}), $A$ and $B$ are not connected. Since $A$ and~$B$ are both encoded in a dual-rail fashion and, thus, have an uncomplemented as well as complemented form, two transmission gates as shown in the right-hand side of Fig.~\ref{fig:transmission_gate} are required.\footnote{For sake of simplicity, we abstract the two transmission gates in the following illustrations and use the more compact form as shown in the left-hand side of Fig.~\ref{fig:transmission_gate} instead.}
The general \emph{switching rules} for transistors in adiabatic circuits (e.g.~outlined in~\cite{anantharam2004driving,DBLP:conf/csreaESA/Frank03}) imply that
a transmission gate shall never be turned on if $A$ and~$B$ have different values.
\begin{figure}\centering
\scalebox{0.9}{
\begin{tikzpicture}
\draw[line width=0.75pt] (0,0) rectangle ++(1,0.5);
\draw (-0.25,0.25) node[left] {A} -- (0, 0.25);
\draw (1,0.25) -- (1.25,0.25) node[right] {$B$};
\draw (0.5,0.5) -- (0.5, 0.75) node[above] {$P$} ;
\draw (3.25,0.25) node[left] {$A_P$} -| (3.5,0.4) -| (4,0.25) -- (4.25,0.25) node[right] {$B_P$};
\draw (3.5,0.25) |- (4,0.1) -- (4, 0.25);
\draw (3.5,0.05) -- (4, 0.05);
\draw (3.5,0.45) -- (4, 0.45);
\draw[fill=white] (3.75,0.5) circle(0.05);
\draw (3.75,0.55) -- (3.75,0.7);
\draw (3.75,0.05) -- (3.75,-0.1);
\draw[line width = 1pt] (2,0.125) -- (2.25,0.125);
\draw[line width = 1pt] (2,0.25) -- (2.25,0.25);
\draw[line width = 1pt] (2,0.375) -- (2.25,0.375);
\draw (5.5,0.25) node[left] {$A_N$} -| (5.75,0.4) -| (6.25,0.25) -- (6.5,0.25) node[right] {$B_N$};
\draw (5.75,0.25) |- (6.25,0.1) -- (6.25, 0.25);
\draw (5.75,0.05) -- (6.25, 0.05);
\draw (5.75,0.45) -- (6.25, 0.45);
\draw[fill=white] (6,0.5) circle(0.05);
\draw (6,0.55) -- (6,0.7);
\draw (6,0.05) -- (6,-0.1);
\draw (6,-0.1) -- (3.75,-0.1);
\draw (4.875,-0.1) -- (4.875,-0.25) node[below] {$P_N$};
\draw (6,0.7) -- (3.75,0.7);
\draw (4.875,0.7) -- (4.875,0.85) node[above] {$P_P$};
\end{tikzpicture}}
\vspace*{-3mm}
\caption{Transmission gate for dual-rail signals}
\label{fig:transmission_gate}
\vspace*{-3mm}
\end{figure}
Besides transmission gates, so-called \emph{power clocks} (denoted $\phi_i$) are utilized to realize typical functions such as OR or AND.
More precisely, the inputs of the gate control a network of transmission gates which connect the output $Y$ of the gate to one of the power clocks~$\phi_i$ in case the function to be realized evaluates to 1.
To obey the switching rules, the output $Y$ of the gate as well as the power clock $\phi_i$ are assumed to be 0 initially. By transitioning the power clock to 1, the output of the gate is set to the desired value.
Moreover, when resetting all inputs of a gate to 0 (and, thus, disconnecting $\phi_i$ and $Y$) while $\phi_i$ is still 1, the output preserves its value (even when $\phi_i$ is reset to 0 afterwards). This allows for an inherent \emph{latching} of an output value to be used by following gates.
An example illustrates the idea:
\begin{myexample}
Fig.~\ref{fig:gates} shows the 2LAL realization of an OR gate and an AND gate. The OR gate is composed of two parallel transmission gates whose outputs are connected.
In case $A=1$ ($B=1$), the upper (the lower) transmission gate is turned on and connects the power clock~$\phi_i$ to the output $Y$. Consequently, $Y$ is connected to $\phi_i$ if \mbox{$A+B=1$}. Transitioning $\phi_i$ to 1 sets $Y$ to the desired value. If we now reset the inputs $A$ and $B$ to 0, the output $Y$ is latched---its value is preserved even when setting $\phi_i$ back to 0 afterwards.
The AND gate is realized similarly as a sequence of two transmission gates. Note that a second output $Y_2=A$ is required here to operate the gate in an adiabatic fashion when $A=1$ and $B=0$ and the gate is used in a \mbox{fully-pipelined} circuit (cf.~Section~\ref{sec:pipelined}).
\end{myexample}
\begin{figure}
\begin{subfigure}[b]{0.44\linewidth}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\draw[line width=0.75pt] (0,0) rectangle ++(1,0.5);
\draw[line width=0.75pt] (0,-0.75) rectangle ++(1,0.5);
\draw (0, 0.25) -- (-0.25,0.25) |- (0, -0.5) ;
\draw (1,0.25) -- (1.25,0.25) |- (1,-0.5);
\draw (0.5,0.5) -- (0.5, 0.75) node[above] {$A$};
\draw (0.5,-0.75) -- (0.5, -1) node[below] {$B$};
\draw (-0.25,-0.125) -- (-0.5,-0.125) node[left] {$\phi_i$};
\draw (1.25,-0.125) -- (1.5,-0.125) node[right] {$Y=A+B$};
\end{tikzpicture}}
\vspace*{-1mm}
\caption{OR gate}\label{fig:gates_or}
\end{subfigure}
\begin{subfigure}[b]{0.55\linewidth}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\draw[line width=0.75pt] (0,0) rectangle ++(1,0.5);
\draw[line width=0.75pt] (1.5,0) rectangle ++(1,0.5);
\draw (0, 0.25) -- (-0.25,0.25) node[left] {$\phi_i$};
\draw (1,0.25) -- (1.5,0.25);
\draw (0.5,0.5) -- (0.5, 0.75) node[above] {$A$};
\draw (2,0.5) -- (2, 0.75) node[above] {$B$};
\draw (2.5,0.25) -- (2.75,0.25) node[right] {$Y_1=A\cdot B$};
\draw (1.25,0.25) |- (2.75,-0.25) node[right]{$Y_2=A$};
\draw[draw=none] (0.5, -0.625) node[below] {\phantom{$B$}};
\end{tikzpicture}}
\vspace*{-1mm}
\caption{AND gate}
\end{subfigure}
\vspace*{-7mm}
\caption{Adiabatic gates}
\label{fig:gates}
\vspace*{-3mm}
\end{figure}
Once the output of a gate is not needed anymore (e.g.~by a following gate),
an essential step for adiabatic circuits is the ability to decompute it---feeding charge back to the power clocks.
If the output was not latched (i.e.~the output is still connected to the power clock), it is decomputable by simply resetting the power clock to 0 (as discussed above).
If the output is latched (i.e.~it was disconnected from the power clock by setting the inputs back to~0), the power clock has to be transitioned to 1 before the inputs are re-applied, in order to obey the switching rules.
Then, the output is decomputable by transitioning the power clock back to 0.
\begin{myexample}\label{ex:decompute}
Consider again the 2LAL realization of an OR gate
(cf.~Fig.~\ref{fig:gates_or}). Assume that the output $Y=A+B$ of the gate is latched and that all other signals are set to~0. To unlatch the output $Y$, we first have to set the power clock $\phi_i$ to 1. This way, $\phi_i$ and $Y$ have the same value when they get connected by re-applying the original input values.
Then, $Y$ is decomputable by changing the power clock $\phi_i$ back to 0---the charge representing $Y=1$ is fed back to the power supply.
\end{myexample}
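The compute--latch--decompute cycle just described can be made concrete with a small behavioural model. The following sketch is our own abstraction (the dual-rail encoding is collapsed to single Booleans) and not part of any actual design tool; the assertion encodes the switching rule reviewed above.
\begin{verbatim}
// Toy behavioural model (our abstraction) of the 2LAL OR gate;
// dual-rail signals are collapsed to plain bools.
#include <cassert>
#include <cstdio>

struct OrGate2LAL {
    bool A = false, B = false;  // inputs (control the transmission gates)
    bool phi = false;           // power clock
    bool Y = false;             // output node

    void setInputs(bool a, bool b) {
        // Switching rule: a transmission gate may only be turned on
        // if it connects two nodes carrying the same value.
        if ((a || b) && !(A || B)) assert(Y == phi);
        A = a; B = b;
        if (A || B) Y = phi;    // output tracks the clock while connected
    }
    void setClock(bool v) {
        phi = v;
        if (A || B) Y = phi;    // adiabatic ramp while connected
    }
};

int main() {
    OrGate2LAL g;
    g.setInputs(true, false);   // A=1 connects Y to phi (both 0: legal)
    g.setClock(true);           // compute: Y = A + B = 1
    g.setInputs(false, false);  // disconnect: Y stays latched
    g.setClock(false);          // clock back to 0, Y still 1
    std::printf("latched Y = %d\n", g.Y);
    g.setClock(true);           // decompute: raise the clock first, ...
    g.setInputs(true, false);   // ...re-apply inputs (Y == phi: legal)
    g.setClock(false);          // charge flows back: Y decomputed to 0
    g.setInputs(false, false);
    std::printf("after decompute Y = %d\n", g.Y);
}
\end{verbatim}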
Following this main principle allows for conducting operations with asymptotically close to zero energy dissipation (at the expense of a slower execution, since more steps have to be conducted). In fact, in contrast to conventional circuits, in which energy is frequently ``grounded'', adiabatic circuits allow for feeding energy back to the clocks providing the power supply.
However, this concept of feeding back charge to the power clocks by decomputing signals demands for a logical reversibility of the underlying computations. This is because, in order to not violate the switching rules, the original input assignments have to be applied so that signals with different values are never connected (cf.~Example~\ref{ex:decompute}).
While in the past a purely reversible scheme has been assumed (see e.g.~\cite{DBLP:journals/tcad/MorrisonR14,rauchenecker2017exploiting}), findings recently summarized in~\cite{DBLP:conf/rc/Frank17} showed that conditional reversibility is actually sufficient for adiabatic circuits. Again, this is illustrated by means of an example:
\begin{myexample}
Consider again the
OR gate shown in Fig.~\ref{fig:gates_or}. Considering the state of the signals $A$, $B$, and $Y$, the gate describes a function $f:\mathbb{B}^3 \rightarrow \mathbb{B}^3$ with $f(A,B,Y) = (A,B,A+B)$.
This function is not reversible in general, since the initial value of $Y$ cannot be computed from the output values. However, the function is conditionally reversible under the precondition that the value of $Y$ is initially set to 0, i.e.~an input combination like e.g.~$(1,0,1)$ can never occur. Conditional reversibility is a much weaker constraint than unconditional reversibility (as e.g.~considered in~\cite{DBLP:journals/tcad/MorrisonR14,rauchenecker2017exploiting})---allowing the realization of adiabatic gates as e.g.~shown in Fig.~\ref{fig:gates}.
\end{myexample}
Obviously, conducting computations in such a fashion requires the corresponding circuits to be designed significantly differently from conventional circuitry. Besides the generation of a proper netlist composed of transmission gates, this additionally requires dedicated power clocks which trigger the required operations at the correct point in time. Moreover, the design objectives change. While the number of required (transmission) gates is still a factor (e.g.~to approximate the required area), their impact on energy consumption is smaller than for conventional circuits.
This is because energy is never grounded in adiabatic circuits but frequently fed back to the power supply as described above.
In contrast, the number of power clocks is much more crucial, as the clocks are the entities which actually require energy and whose waveforms might be hard to generate. Besides that, more clocks usually also imply longer execution times.
\section{Proposed Design Flow}\label{sec:flow}
As discussed above, previous methods for designing adiabatic circuits (e.g.~\cite{DBLP:journals/tcad/MorrisonR14,rauchenecker2017exploiting})
assumed the requirement of full reversibility. As recently discussed in~\cite{DBLP:conf/rc/Frank17}, this leads to a significant overhead and is not necessarily needed. In fact, conditional reversibility as reviewed above is sufficient and constitutes a much weaker constraint.
However, thus far, no design automation for this kind of adiabatic circuits exists. Also, solely employing conventional design solutions is not an option since, beyond the pure functionality, a dedicated mapping and clocking scheme is required.
In this work, we present different methods which address these issues. All of them thereby employ a two-stage process.
The first step is similar to the design of conventional circuits: We realize the function to be synthesized with respect to a certain logic gate library. Afterwards, the resulting netlist is mapped to an adiabatic circuit which respectively satisfies and optimizes the rules and objectives reviewed in Section~\ref{sec:background}.
For the first part, we utilize a solution based on \emph{AND-Inverter Graphs} (AIGs~\cite{KPKG:2002}) which realize the function to be synthesized in terms of NAND gates.\footnote{Note that the design methods proposed in this work can correspondingly be adjusted to any other synthesis solution and, hence, logic gate library as well.}
AIGs allow for a graph-based representation of Boolean functions. The graph has one root node for each output of the function. The inputs of the function are provided as terminals. The intermediate nodes of an AIG represent an AND operation and, thus, have two successors each. To gain universality, the inputs of the AND operation can be inverted. This is denoted by black circles on the respective edges.
Equal nodes occur frequently and can be shared---allowing for a compact representation of the function to be realized.
\begin{myexample}
Fig.~\ref{fig:aig} shows the AIG of a 3-input 2-output Boolean function with inputs $x_2$, $x_1$, and $x_0$ as well as outputs
$y_1$ and $y_0$ which represent $y_1 = \overline{x}_2\overline{x}_1 + \overline{x}_2x_0 + \overline{x}_1x_0$ and $y_0 = \overline{x}_2x_1+x_1x_0+x_2\overline{x}_1\overline{x}_0$ in terms of an AIG and, hence, NAND operations.
\end{myexample}
How to determine and optimize an AIG (e.g.~minimizing its number of nodes/gates) has been intensely studied in the literature (see e.g.~\cite{DBLP:conf/dac/MishchenkoCB06}) and, hence, is not covered further in the following. Instead, we focus on the second step, i.e.~how to map the resulting NAND netlist to an adiabatic circuit, i.e.~a network of transmission gates and the corresponding power clocks.
To this end, we translate the AIG into an \emph{OR-Inverter graph} (OIG) so that a NOR gate netlist results. An OIG can easily be derived from an AIG by simply applying De Morgan's laws, i.e.~by relabeling the inner nodes from AND to OR and inverting the polarity of the edges to the terminals and the edges to the root nodes (cf.~Fig.~\ref{fig:oig}).
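A minimal sketch of this conversion is shown below. The graph data structures are our own illustrative choice (ABC's internal representation differs); only the relabeling and polarity flips described above are implemented.
\begin{verbatim}
// Sketch (our own data structures) of the AIG-to-OIG conversion via
// De Morgan's laws: relabel inner nodes from AND to OR and flip the
// polarity of edges to terminals and of edges to the root nodes.
#include <cstddef>
#include <vector>

struct Edge { std::size_t target; bool inverted; };

struct Node {
    enum class Kind { Terminal, And, Or } kind = Kind::Terminal;
    Edge left{}, right{};           // unused for terminals
};

struct Graph {
    std::vector<Node> nodes;
    std::vector<Edge> roots;        // one edge per primary output
};

void aigToOig(Graph& g) {
    for (auto& n : g.nodes) {
        if (n.kind != Node::Kind::And) continue;
        n.kind = Node::Kind::Or;    // relabel the operation
        for (Edge* e : {&n.left, &n.right})
            if (g.nodes[e->target].kind == Node::Kind::Terminal)
                e->inverted = !e->inverted;  // flip edges to terminals
    }
    for (auto& r : g.roots)
        r.inverted = !r.inverted;   // flip edges to the root nodes
}

int main() {
    Graph g;
    g.nodes = { {Node::Kind::Terminal}, {Node::Kind::Terminal},
                {Node::Kind::And, {0, false}, {1, false}} };
    g.roots = { {2, true} };        // output = NAND(x0, x1)
    aigToOig(g);                    // now an OR node with inverted
                                    // terminal edges, plain root edge
}
\end{verbatim}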
\begin{figure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.75}{
\centering
\begin{tikzpicture}[terminal/.style={draw,rectangle,inner sep=2pt}]
\matrix[matrix of nodes,ampersand replacement=\&,every node/.style={vertex},column sep={1.5cm,between origins},row sep={0.9cm,between origins}] (qmdd) {
\node[regular polygon,regular polygon sides=4, inner sep=1pt] (y1) {$y_1$}; \& \node[regular polygon,regular polygon sides=4, inner sep=1pt] (y0) {$y_0$}; \& \\
\node(n1a) {$\wedge_6$}; \& \node(n1b) {$\wedge_7$}; \& \\
\& \node (n2a) {$\wedge_4$}; \& \node (n2b) {$\wedge_5$}; \\
\node(n3a) {$\wedge_2$}; \& \node(n3b) {$\wedge_3$}; \& \\
\& \node(n4) {$\wedge_1$}; \& \& \\
\node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x2) {$x_2$}; \& \node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x1) {$x_1$}; \& \node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x0) {$x_0$}; \\
};
\draw (x2.north) -- (n3a);
\draw (x2.north) -- (n4) node[midway, circle, inner sep=1pt, fill] {};
\draw (x1.north) -- (n3a);
\draw (x1.north) -- (n4) node[midway, circle, inner sep=1pt, fill] {};
\draw (x1.north) -- (n2b);
\draw (x0.north) -- (n3b) node[midway, circle, inner sep=1pt, fill] {};
\draw (x0.north) -- (n2b);
\draw (n4) -- (n3b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3a) -- (n1a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3a) -- (n2a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3b) -- (n1a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3b) -- (n2a);
\draw (n2a) -- (n1b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n2b) -- (n1b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n1a) -- (y1);
\draw (n1b) -- (y0) node[midway, circle, inner sep=1pt, fill] {};
\end{tikzpicture}}
\caption{AND-Inverter graph}
\label{fig:aig}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.75}{
\centering
\begin{tikzpicture}[terminal/.style={draw,rectangle,inner sep=2pt}]
\matrix[matrix of nodes,ampersand replacement=\&,every node/.style={vertex},column sep={1.5cm,between origins},row sep={0.9cm,between origins}] (qmdd) {
\node[regular polygon,regular polygon sides=4, inner sep=1pt] (y1) {$y_1$}; \& \node[regular polygon,regular polygon sides=4, inner sep=1pt] (y0) {$y_0$}; \& \\
\node(n1a) {$\vee_6$}; \& \node(n1b) {$\vee_7$}; \& \\
\& \node (n2a) {$\vee_4$}; \& \node (n2b) {$\vee_5$}; \\
\node(n3a) {$\vee_2$}; \& \node(n3b) {$\vee_3$}; \& \\
\& \node(n4) {$\vee_1$}; \& \& \\
\node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x2) {$x_2$}; \& \node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x1) {$x_1$}; \& \node[regular polygon,regular polygon sides=3,draw,inner sep=0pt] (x0) {$x_0$}; \\
};
\draw (x2.north) -- (n3a) node[midway, circle, inner sep=1pt, fill] {};
\draw (x2.north) -- (n4);
\draw (x1.north) -- (n3a) node[midway, circle, inner sep=1pt, fill] {};
\draw (x1.north) -- (n4);
\draw (x1.north) -- (n2b) node[midway, circle, inner sep=1pt, fill] {};
\draw (x0.north) -- (n3b);
\draw (x0.north) -- (n2b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n4) -- (n3b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3a) -- (n1a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3a) -- (n2a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3b) -- (n1a) node[midway, circle, inner sep=1pt, fill] {};
\draw (n3b) -- (n2a);
\draw (n2a) -- (n1b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n2b) -- (n1b) node[midway, circle, inner sep=1pt, fill] {};
\draw (n1a) -- (y1) node[midway, circle, inner sep=1pt, fill] {};
\draw (n1b) -- (y0);
\end{tikzpicture}}
\caption{OR-Inverter graph}
\label{fig:oig}
\end{subfigure}
\vspace*{-2mm}
\caption{Graph representations for Boolean functions}
\vspace*{-2mm}
\end{figure}
Now, the nodes of an OIG can directly be mapped to the adiabatic OR gates introduced before in Fig.~\ref{fig:gates_or}.
However, it remains open and non-trivial how to connect these gates to the power clocks and how to generate a corresponding waveform of these clocks (again, following the switching rules and optimization objectives reviewed in Section~\ref{sec:background}).
To this end, two (complementary) design styles are considered: \emph{retractile} circuits (cf.~\cite{hall1992electroid}) as well as \mbox{\emph{fully-pipelined}} circuits (cf.~\cite{younis1993practical,anantharam2004driving,DBLP:conf/rc/Frank17}).
Note that for both design styles the conditional reversibility is inherently satisfied by preserving the inputs of the signals throughout the whole computation and by assuming that all additional (intermediate) signals are initially set to 0.
In the following sections, we discuss advantages and disadvantages of both design styles and present according (automatic) mapping schemes.
More precisely, for each design style we first describe a straightforward mapping scheme (conveying the main idea of the design style) followed by an advanced mapping scheme (which results in a significantly smaller number of gates as well as, in the case of retractile circuits, a smaller number of power clocks).
These considerations motivate the implementation of different methods for design automation of adiabatic circuits, whose performance is eventually discussed in Section~\ref{sec:exp}.
\section{Retractile Circuits}
\label{sec:retractile}
\subsection{Straightforward Solution}
\label{sec:retractile_sf}
The straightforward mapping for retractile circuits is similar to conventional circuitry, where an AIG or OIG is directly mapped to the target technology. In fact, we can realize each node of the OIG with an OR gate and negations with inverters.
Moreover, in case of adiabatic circuits, the inverters come ``for free'' since we are operating on dual-rail signals and, hence, an inverted input can easily be realized with no further hardware by swapping the rails of the signal.
\begin{myexample}
Consider again the OIG depicted in Fig.~\ref{fig:oig}. Mapping the OIG to conventional gates results in the circuit shown in Fig.~\ref{fig:circuit_retractile_gates}.
When this mapping is done for adiabatic circuits following the retractile design style, each OR gate is realized with two transmission gates as discussed in Section~\ref{sec:background}.
\end{myexample}
To operate the circuit in an adiabatic fashion, all intermediate signals are first initialized with~0. Furthermore,
each stage $s_i$ ($0 \le i < N$) of the circuit with depth $N$
has an associated dual-rail encoded clock $\phi_i$---allowing the individual stages to be computed sequentially. Then, the computations are started by transitioning the $0^{th}$ clock from 0 to 1---triggering the desired operations of the first stage.
Once stable, the operations of the next stages are sequentially triggered.
To allow for decomputing the intermediate results, the clocks transition back to 0 in reverse order, i.e.~first the $(N-1)^{th}$ clock is set back to~0, then the other ones.
This way, all intermediate results are decomputed and restored back to 0. Overall, this requires $2N+1$ time steps
for a single computation (assuming one additional time step is required to process the outputs of the circuit). During these time steps, the inputs have to remain constant---yielding a rather low throughput.
\addtocounter{myexample}{-1}
\begin{myexample}[continued]
Since the resulting circuit has four stages (the OIG has a depth of 4), we need four different clocks (eight if we take the dual-rail encoding into account). The waveforms of these clocks are shown in Fig.~\ref{fig:clocks}. Overall, a single computation of this circuit hence requires 9 time steps.
\end{myexample}
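The resulting clock schedule can also be sketched programmatically. The step indexing below is our own convention, chosen to match the waveforms of Fig.~\ref{fig:clocks}: clock $\phi_i$ ramps up in step $i+1$ and back down in step $2N+1-i$.
\begin{verbatim}
// Sketch (our step convention) of the retractile clock schedule for a
// circuit of depth N: phi_i ramps up in step i+1 and down in step
// 2N+1-i, giving 2N+1 time steps per computation.
#include <cstdio>

int main() {
    const int N = 4;                    // circuit depth (stages)
    for (int s = 1; s <= 2 * N + 1; ++s) {
        std::printf("step %d:", s);
        for (int i = 0; i < N; ++i) {
            char state = 'L';                          // low
            if (s == i + 1)              state = 'R';  // ramping up
            else if (s == 2 * N + 1 - i) state = 'F';  // ramping down
            else if (s > i + 1 && s < 2 * N + 1 - i)
                                         state = 'H';  // high
            std::printf(" phi%d=%c", i, state);
        }
        std::printf("\n");
    }
}
\end{verbatim}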
\begin{figure}
\begin{subfigure}{0.58\linewidth}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\node (x2) at (0, 2) {$x_2$};
\node (x1) at (0, 1) {$x_1$};
\node (x0) at (0, 0) {$x_0$};
\node[or gate US, draw, anchor=input 1] at ($(x2) + (2.5, 0)$) (or2) {$g_2$};
\node[or gate US, draw, anchor=input 2] at ($(x1) + (1, 0)$) (or1) {$g_1$};
\node[or gate US, draw, anchor=input 1] at (2.5,0 |- or1.output) (or3) {$g_3$};
\node[or gate US, draw, anchor=input 2] at (4,0 |- or3.output) (or4) {$g_4$};
\node[or gate US, draw, anchor=input 2] at ($(x0) + (4, 0)$) (or5) {$g_5$};
\node[or gate US, draw, anchor=input 1] at (5.5,0 |- or2.output) (or6) {$g_6$};
\coordinate (pos7) at ($(or4.output)!.5!(or5.output)$);
\node[or gate US, draw, anchor=output] at (or6.output |- pos7) (or7) {$g_7$};
\draw (x2) -- (or2.input 1);
\draw (x1) -- (or1.input 2);
\draw (or1.output) -- (or3.input 1);
\draw (x0) -- (or5.input 2);
\draw (or3.output) -- (or4.input 2);
\draw (or2.output) -- (or6.input 1);
\draw ($(x2)+(0.75,0)$) node[branch] {} |- ($(or1.input 1)$);
\draw ($(x1)+(0.5,0)$) node[branch] {} |- ($(or2.input 2)$);
\draw ($(x0)+(2,0)$) node[branch] {} |- ($(or3.input 2)$);
\draw ($(or2.output)+(0.5,0)$) node[branch] {} |- ($(or4.input 1)$);
\draw ($(or3.output)+(0.25,0)$) node[branch] {} |- ($(or6.input 2)$);
\draw ($(x1)+(0.5,0)$) |- ($(or5.input 1)$);
\draw (or5.output) -- ++ (0.5,0) |- ($(or7.input 2)$);
\draw (or4.output) -- ++ (0.5,0) |- ($(or7.input 1)$);
\draw (or6.output) -- ++ (0.25,0) node[anchor=west] {$y_1$};
\draw (or7.output) -- ++ (0.25,0) node[anchor=west] {$y_0$};
\draw[line width=0.4, fill=white] ($(or2.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or2.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or3.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or4.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or5.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or5.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or7.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or7.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or5.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.output)+(0.07,0)$) circle (0.07);
\draw[dashed] (1.75,-0.25) -- (1.75,2.5);
\node[above] at (1,2.25) {$s_0$};
\draw[dashed] (3.25,-0.25) -- (3.25,2.5);
\node[above] at (2.5,2.25) {$s_1$};
\draw[dashed] (4.75,-0.25) -- (4.75,2.5);
\node[above] at (4,2.25) {$s_2$};
\node[above] at (5.5,2.25) {$s_3$};
\end{tikzpicture}}
\caption{Circuit}
\label{fig:circuit_retractile_gates}
\end{subfigure}
\begin{subfigure}{0.39\linewidth}
\centering
\scalebox{0.6}{
\begin{tikzpicture}
\draw[->](0,0) -- (5.0, 0) node[right] {$t$};
\draw[-](0, 0) -- (0, 2.5) node[above] {};
\draw[domain=0:.5, smooth, variable=\x, red]
plot ({\x}, {(\x)});
\draw[domain=.5:4, smooth, variable=\x, red]
plot ({\x}, {.5});
\draw[domain=4:4.5, smooth, variable=\x, red]
plot ({\x}, {-\x+4.5});
\draw[domain=0:0.5, smooth, variable=\x, red]
plot ({\x}, {0.6});
\draw[domain=0.5:1, smooth, variable=\x, red]
plot ({\x}, {0.1+(\x)});
\draw[domain=1:3.5, smooth, variable=\x, red]
plot ({\x}, {1.1});
\draw[domain=3.5:4, smooth, variable=\x, red]
plot ({\x}, {-\x+4.6});
\draw[domain=4:4.5, smooth, variable=\x, red]
plot ({\x}, {0.6});
\draw[domain=0:1, smooth, variable=\x, red]
plot ({\x}, {1.2});
\draw[domain=1:1.5, smooth, variable=\x, red]
plot ({\x}, {0.2+(\x)});
\draw[domain=1.5:3, smooth, variable=\x, red]
plot ({\x}, {1.7});
\draw[domain=3:3.5, smooth, variable=\x, red]
plot ({\x}, {-\x+4.7});
\draw[domain=3.5:4.5, smooth, variable=\x, red]
plot ({\x}, {1.2});
\draw[domain=0:1.5, smooth, variable=\x, red]
plot ({\x}, {1.8});
\draw[domain=1.5:2, smooth, variable=\x, red]
plot ({\x}, {0.3+(\x)});
\draw[domain=2:2.5, smooth, variable=\x, red]
plot ({\x}, {2.3});
\draw[domain=2.5:3, smooth, variable=\x, red]
plot ({\x}, {-\x+4.8});
\draw[domain=3:4.5, smooth, variable=\x, red]
plot ({\x}, {1.8});
\node[anchor = center] at (-0.25,0.25) {$\phi_0$};
\node[anchor = center] at (-0.25,0.85) {$\phi_1$};
\node[anchor = center] at (-0.25,1.45) {$\phi_2$};
\node[anchor = center] at (-0.25,2.05) {$\phi_3$};
\draw (0,-0.1) node[below] {0};
\foreach \x in {1,...,9} {
\draw (0.5*\x,0.1) -- (0.5*\x,-0.1) node[below]{\x};
}
\end{tikzpicture}
}
\caption{Power clocks}
\label{fig:clocks}
\end{subfigure}
\vspace*{-2mm}
\caption{Synthesized retractile circuit}
\label{fig:circuit_retractile}
\vspace*{-2mm}
\end{figure}
\subsection{Advanced Solution}
\label{sec:retractile_opt}
The straightforward mapping described above can be significantly optimized to reduce the number of required transmission gates and power clocks.
The optimized mapping scheme is motivated by an analysis of the realization of an OR gate, which is composed of two parallel buffers (i.e.~transmission gates) whose outputs are connected (cf.~Fig.~\ref{fig:gates_or}).
Consequently, an OR gate with multiple inputs can be generated by adding further buffers in parallel.
This way, each OIG node whose children both have a fanout of 1 (and which, thus, represents a 4-input OR gate) can be realized in a single stage of the circuit, composed of two 2-input OR gates whose outputs are connected.
A similar optimization can be performed for OIG nodes where only one of the children has a fanout of 1. Here, one buffer is required for the child which has a fanout larger than one (in order to avoid sneak paths). Additionally, the gate representing the child with fanout 1 has to be lifted to the next stage of the circuit since both the buffer and the gate have to be operated by the same power clock
to allow for an adiabatic computation. The optimization rules are shown in Fig.~\ref{fig:retractile_opt_rules} and denoted \emph{Rule 1} and \emph{Rule 2} in the following.
\begin{figure}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\node[or gate US, draw] (or1) {};
\node[or gate US, draw, below = 0.25cm of or1] (or2) {};
\coordinate (pos7) at ($(or1.output)!.5!(or2.output)$);
\node[or gate US, draw] at (1.15,0 |- pos7) (or3) {};
\node[above=-0.1cm of or1,xshift=0.4cm] {fanout $=$ 1};
\node[below=-0.1cm of or2,xshift=0.4cm] {fanout $=$ 1};
\draw (or1.output) -- ++ (0.25,0) |- ($(or3.input 1)$);
\draw (or2.output) -- ++ (0.25,0) |- ($(or3.input 2)$);
\draw (or1.input 1) -- ++ (-0.25,0);
\draw (or1.input 2) -- ++ (-0.25,0);
\draw (or2.input 1) -- ++ (-0.25,0);
\draw (or2.input 2) -- ++ (-0.25,0);
\draw (or3.output) -- ++ (0.25,0);
\draw[dashed] ($(or1.input 1)+(-0.15,0.5)$) |- ($(or2.input 2)+(1.95,-0.5)$) |- ($(or1.input 1)+(-0.15,0.5)$);
\node[or gate US, draw, right=2.9cm of or1] (or4) {};
\node[or gate US, draw, below = 0.25cm of or4] (or5) {};
\coordinate (pos8) at ($(or4.output)!.5!(or5.output)$);
\draw (or4.output) -- ++ (0.25,0) |- ($(pos8)+(0.55,0)$);
\draw (or5.output) -- ++ (0.25,0) |- ($(pos8)+(0.55,0)$) node[midway, branch] {};
\draw (or4.input 1) -- ++ (-0.25,0);
\draw (or4.input 2) -- ++ (-0.25,0);
\draw (or5.input 1) -- ++ (-0.25,0);
\draw (or5.input 2) -- ++ (-0.25,0);
\draw[dashed] ($(or4.input 1)+(-0.15,0.5)$) |- ($(or5.input 2)+(1.1,-0.5)$) |- ($(or4.input 1)+(-0.15,0.5)$);
\path[draw=black,solid,line width=2mm,fill=black,
preaction={-triangle 90,thin,draw,shorten >=-1mm}
]($(or3.output)+(0.4,0)$)-- ++ (0.8,0) node[above, midway]{Rule 1};
\node[or gate US, draw, right = 5.5 cm of or1] (or6) {};
\node[or gate US, draw, below = 0.25cm of or6] (or7) {};
\coordinate (pos1) at ($(or6.output)!.5!(or7.output)$);
\node[or gate US, draw] at (7.3,0 |- pos1) (or8) {};
\node[above=-0.1cm of or6,xshift=0.4cm] {fanout $\neq$ 1};
\node[below=-0.1cm of or7,xshift=0.4cm] {fanout $=$ 1};
\draw (or6.output) -- ++ (0.25,0) |- ($(or8.input 1)$);
\draw (or7.output) -- ++ (0.25,0) |- ($(or8.input 2)$);
\draw (or6.input 1) -- ++ (-0.25,0);
\draw (or6.input 2) -- ++ (-0.25,0);
\draw (or7.input 1) -- ++ (-0.25,0);
\draw (or7.input 2) -- ++ (-0.25,0);
\draw (or8.output) -- ++ (0.25,0);
\draw ($(or6.output)+(0.25,0)$) node[branch] {} |- ($(or6.output)+(0.7,0.75)$);
\draw[dashed] ($(or6.input 1)+(-0.15,0.5)$) |- ($(or7.input 2)+(1.95,-0.5)$) |- ($(or6.input 1)+(-0.15,0.5)$);
\node[or gate US, draw, right = 2.9cm of or6] (or9) {};
\node[or gate US, draw=none, below = 0.25cm of or9] (or10) {};
\coordinate (pos11) at ($(or9.output)!.5!(or10.output)$);
\node[or gate US, draw] at ($(or10)+(0.9,0)$) (or11) {};
\node[buffer gate US, draw, anchor=output] at (or11.output |- or9.output) (buf) {};
\draw (or9.output) -- (buf.input);
\draw ($(or10.input 1)+(-0.25,0)$) -- (or11.input 1);
\draw ($(or10.input 2)+(-0.25,0)$) -- (or11.input 2);
\draw (or9.input 1) -- ++ (-0.25,0);
\draw (or9.input 2) -- ++ (-0.25,0);
\coordinate (pos2) at ($(or11.output)!.5!(buf.output)$);
\draw (buf.output) -- ++ (0.2,0) |- ($(pos2)+(0.55,0)$) node[midway, branch] {};
\draw (or11.output) -- ++ (0.2,0) |- ($(pos2)+(0.55,0)$);
\draw ($(or9.output)+(0.2,0)$) node[branch] {} |- ($(or9.output)+(0.7,0.75)$);
\draw[dashed] ($(or9.input 1)+(-0.15,0.5)$) |- ($(or10.input 2)+(1.95,-0.5)$) |- ($(or9.input 1)+(-0.15,0.5)$);
\path[draw=black,solid,line width=2mm,fill=black,
preaction={-triangle 90,thin,draw,shorten >=-1mm}
]($(or8.output)+(0.4,0)$)-- ++ (0.8,0) node[above, midway]{Rule 2};
\end{tikzpicture}}
\caption{Rules for optimization}
\label{fig:retractile_opt_rules}
\end{figure}
Note that one has to be careful when applying the rules if the corresponding input of the gate is inverted. In this case, the inversion has to be pushed towards the inputs. This is possible by applying De Morgan's law ($\overline{a+b} = \overline{a}\cdot\overline{b}$). Consequently, we have to invert the inputs on this level and replace the OR gate by an AND gate.\footnote{Note that this is also possible if there are several subsequent nodes for which the rules can be applied.}
\begin{myexample}
Consider again the circuit shown in Fig.~\ref{fig:circuit_retractile}. The children of gate $g_7$ (i.e.~$g_4$ and $g_5$) both have a fanout of 1. Consequently, we can apply \emph{Rule 1} to remove $g_7$. Furthermore, one child of gate $g_3$ has a fanout of 1 (i.e.~the input $x_0$). Consequently, we can apply \emph{Rule 2} for gate $g_3$. The resulting (optimized) circuit is shown in Fig.~\ref{fig:circuit_retractile_opt}. Since both inputs of $g_7$ and one input of $g_3$ are inverted, we have to apply De Morgan's law. Consequently, the gates $g_1$, $g_4$, and $g_5$ are transformed into AND gates. The resulting circuit requires only 11 transmission gates and has only two stages (and, thus, requires only two different dual-rail encoded clocks).
\end{myexample}
\begin{figure}[t]
\begin{minipage}[c]{0.5\linewidth}
\centering
\scalebox{0.7}{\begin{tikzpicture}
\node (x2) at (0, 2) {$x_2$};
\node (x1) at (0, 1) {$x_1$};
\node (x0) at (0, 0) {$x_0$};
\node[or gate US, draw, anchor=input 1] at ($(x2) + (1, 0)$) (or2) {$g_2$};
\node[and gate US, draw, anchor=input 2] at ($(x1) + (1, 0)$) (or1) {$g_1$};
\node[buffer gate US, draw, anchor=output] at (or1.output |- 0,0.5) (buf) {$g_3$};
\node[and gate US, draw, anchor=input 2] at (2.5,0 |- or1.output) (or4) {$g_4$};
\node[and gate US, draw, anchor=input 2] at ($(x0) + (2.5, 0)$) (or5) {$g_5$};
\node[or gate US, draw, anchor=input 1] at (2.5,0 |- or2.output) (or6) {$g_6$};
\coordinate (pos7) at ($(or4.output)!.5!(or5.output)$);
\draw (x2) -- (or2.input 1);
\draw (x1) -- (or1.input 2);
\draw (x0) -- (or5.input 2);
\draw (or1.output) -- (or4.input 2);
\draw (or2.output) -- (or6.input 1);
\draw ($(x2)+(0.75,0)$) node[branch] {} |- ($(or1.input 1)$);
\draw ($(x1)+(0.5,0)$) node[branch] {} |- ($(or2.input 2)$);
\draw ($(x0)+(0.75,0)$) node[branch] {} |- ($(buf.input)$);
\draw ($(buf.output)$) -| ($(or1.output)+(0.25,0)$) node[branch] {};
\draw ($(or2.output)+(0.5,0)$) node[branch] {} |- ($(or4.input 1)$);
\draw ($(or1.output)+(0.25,0)$) |- ($(or6.input 2)$);
\draw ($(x1)+(0.5,0)$) |- ($(or5.input 1)$);
\draw (or5.output) -| ($(or4.output)+(0.25,0)$) node[branch] {};
\draw (or4.output) -- ++ (0.5,0) node[anchor=west] {$y_0$};
\draw (or6.output) -- ++ (0.5,0) node[anchor=west] {$y_1$};
\draw[line width=0.4, fill=white] ($(or1.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or1.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or2.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or2.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or4.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.input 1)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.input 2)-(0.07,0)$) circle (0.07);
\draw[line width=0.4, fill=white] ($(or6.output)+(0.07,0)$) circle (0.07);
\end{tikzpicture}}
\end{minipage}
\begin{minipage}[c]{0.49\linewidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\draw[->](0,0) -- (3.0, 0) node[right] {$t$};
\draw[-](0, 0) -- (0, 1.5) node[above] {};
\draw[domain=0:.5, smooth, variable=\x, red]
plot ({\x}, {(\x)});
\draw[domain=.5:2, smooth, variable=\x, red]
plot ({\x}, {.5});
\draw[domain=2:2.5, smooth, variable=\x, red]
plot ({\x}, {-\x+2.5});
\draw[domain=0:0.5, smooth, variable=\x, red]
plot ({\x}, {0.6});
\draw[domain=0.5:1, smooth, variable=\x, red]
plot ({\x}, {0.1+(\x)});
\draw[domain=1:1.5, smooth, variable=\x, red]
plot ({\x}, {1.1});
\draw[domain=1.5:2, smooth, variable=\x, red]
plot ({\x}, {-\x+2.6});
\draw[domain=2:2.5, smooth, variable=\x, red]
plot ({\x}, {0.6});
\node[anchor = center] at (-0.25,0.25) {$\phi_0$};
\node[anchor = center] at (-0.25,0.85) {$\phi_1$};
\draw (0,-0.1) node[below] {0};
\foreach \x in {1,...,5} {
\draw (0.5*\x,0.1) -- (0.5*\x,-0.1) node[below]{\x};
}
\end{tikzpicture}}
\end{minipage}
\vspace*{-2mm}
\caption{Optimized retractile circuit}
\label{fig:circuit_retractile_opt}
\vspace*{-2mm}
\end{figure}
\section{Fully-Pipelined Circuits}
\label{sec:pipelined}
The main disadvantages of the retractile circuits considered in Section~\ref{sec:retractile} are that many different power clocks are required (one for each stage) and that a computation can be conducted only every $2N+1$ time steps---resulting in a rather low throughput.
These issues can be avoided by using fully-pipelined circuits.
In conventional design, this would require a register after each stage of the circuit. For the adiabatic circuits considered here, however, this is not necessary, because the gates inherently allow for latching their output (cf.~Section~\ref{sec:background}).
In fact, we only have to compute the outputs of a stage $s_i$ while decomputing the signals of stage $s_{i-1}$ (i.e.~resetting them back to 0). This way, only two different power clocks (four if we take the dual-rail encoding into account) are required (independent of the circuit depth) and computations can be conducted in a pipelined fashion (leading to a much higher throughput).
To realize this, however,
the functions computed in the individual stages have to be (conditionally) reversible. This can easily be achieved by forwarding all the input signals of stage $s_{i-1}$ to stage $s_i$ using buffers.
The following example illustrates the idea of such buffers.
\begin{myexample}
Fig.~\ref{fig:buffer} shows the structure of a buffer that sets \mbox{$x_{t} = x_{t-1}$} while decomputing $x_{t-1}$ (i.e.~while resetting~$x_{t-1}$ back to 0). Initially, both clocks $\phi_0$ and $\phi_1$ as well as $x_{t}$ are set to~0.
If $x_{t-1}=1$, the transmission gate on the right connects $\phi_1$ with $x_{t}$. In the first time step, $\phi_0$ transitions to 1 (cf.~Fig.~\ref{fig:clocks_fully_pipelined}). Afterwards, $\phi_1$ transitions to 1, setting $x_{t} = x_{t-1}$. If $x_{t}=x_{t-1}=1$, the transmission gate on the left-hand side in Fig.~\ref{fig:buffer} connects $\phi_0$ with $x_{t-1}$.
This does not violate the switching rules discussed in Section~\ref{sec:background} since $\phi_0$ is also 1. In the next time step, $\phi_0$ transitions back to 0---decomputing $x_{t-1}$ and, thus, disconnecting $\phi_1$ and $x_{t}$. Consequently, the output $x_{t}$ remains at its voltage level when eventually transitioning $\phi_1$ back to 0---the output is latched.
\end{myexample}
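The four-step cycle of the example can be traced in a few lines of code; the sketch below is our own abstraction of the buffer of Fig.~\ref{fig:buffer}, with the dual-rail signals again collapsed to Booleans.
\begin{verbatim}
// Trace (our abstraction) of the four-step pipeline buffer cycle.
#include <cstdio>

int main() {
    bool xPrev = true;            // x_{t-1}, latched by the last stage
    bool xNext = false;           // x_t, initially 0
    bool phi0 = false, phi1 = false;
    phi0 = true;                  // step 1: left gate (driven by x_t=0)
                                  //         is still off
    phi1 = true; xNext = xPrev;   // step 2: right gate on (x_{t-1}=1),
                                  //         x_t follows phi_1; the left
                                  //         gate turns on legally since
                                  //         phi_0 == x_{t-1} == 1
    phi0 = false; xPrev = false;  // step 3: x_{t-1} is decomputed,
                                  //         turning the right gate off
    phi1 = false;                 // step 4: x_t stays latched at 1
    std::printf("x_{t-1}=%d, x_t=%d\n", xPrev, xNext);
}
\end{verbatim}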
\begin{figure}
\centering
\begin{subfigure}[b]{.43\linewidth}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\draw (0.25,-1.25) node[below] {$\phi_0$} |- (0.75, 0.5);
\draw (0.25,0.5) -- (-0.25,0.5) node[left] {$x_{t-1}$};
\draw (1.,1.25) node[above] {$\phi_1$} |- (0.5, -0.5);
\draw (1,-0.5) -- (1.5,-0.5) node[right] {$x_{t}$};
\draw[line width=0.75pt, fill=white] (0,0) rectangle ++(0.5,-1);
\draw[line width=0.75pt, fill=white] (0.75,0) rectangle ++(0.5,1);
\end{tikzpicture}}
\captionof{figure}{Transmission gates}
\label{fig:buffer}
\end{subfigure}
\begin{subfigure}[b]{0.54\linewidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\draw[->](0,0) -- (2.5, 0) node[right] {$t$};
\draw[-](0, 0) -- (0, 1.5) node[above] {};
\draw (0,-0.1) node[below] {0};
\foreach \x in {1,...,4} {
\draw (0.5*\x,0.1) -- (0.5*\x,-0.1) node[below]{\x};
}
\draw[domain=0:0.5, smooth, variable=\x, red]
plot ({\x}, {0.6});
\draw[domain=0.5:1, smooth, variable=\x, red]
plot ({\x}, {.1+\x)});
\draw[domain=1:1.5, smooth, variable=\x, red]
plot ({\x}, {1.1});
\draw[domain=1.5:2, smooth, variable=\x, red]
plot ({\x}, {-\x+2.6});
\draw[domain=0:0.5, smooth, variable=\x, red]
plot ({\x}, {\x});
\draw[domain=0.5:1, smooth, variable=\x, red]
plot ({\x}, {.5});
\draw[domain=1:1.5, smooth, variable=\x, red]
plot ({\x}, {1.5-\x});
\draw[domain=1.5:2, smooth, variable=\x, red]
plot ({\x}, {0});
\node[anchor = east] at (-0.0,0.85) {$\phi_1$};
\node[anchor = east] at (-0.0,0.25) {$\phi_0$};
\end{tikzpicture}}
\captionof{figure}{Clocks}
\label{fig:clocks_fully_pipelined}
\end{subfigure}
\vspace*{-3mm}
\caption{Buffer element for fully-pipelined circuits}
\vspace*{-3mm}
\end{figure}
To allow for inverted inputs of gates, a quad-rail encoding of the signals is required to properly decompute the inputs~\cite{anantharam2004driving}. Here, each signal $X$ is represented by two dual-rail signals (one for $X=1$ and one for $X=0$). Initially, both dual-rail signals are set to 0. This again allows for realizing inverters without any transmission gates---by just swapping the two dual-rail signals for $X=1$ and $X=0$.
In the following we again abstract this fact when illustrating the required transmission gates.
\subsection{Straightforward Solution}
\label{sec:pipelined_sf}
As for retractile circuits, we again map the OIG nodes to adiabatic realizations of an OR gate. As mentioned above, this requires realizing each OR gate as shown in Fig.~\ref{fig:or_pipelined}.\footnote{Signals with fanout do not have to be buffered multiple times.}
This way, the signals from stage $s_{t-1}$ (e.g.~$A_{t-1}$ and $B_{t-1}$) serve as input to compute $(A+B)_t = A_{t-1} + B_{t-1}$.
Since $(A+B)_{t}$ is driven by clock $\phi_1$, its value is inherently latched. In fact, the input signals $A_{t-1}$ and $B_{t-1}$ are reset to 0 by the corresponding buffers (disconnecting $\phi_1$ and $(A+B)_t$) before the clock $\phi_1$ is transitioned back to 0.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\begin{tikzpicture}
\draw[line width=0.75pt] (0.1,0) rectangle ++(0.25,0.5);
\draw[line width=0.75pt] (0.85,0) rectangle ++(0.25,0.5);
\draw[line width=0.75pt] (1.35,0.5) rectangle ++(0.25,0.5);
\draw[line width=0.75pt] (-0.1,0) rectangle ++(-0.25,0.5);
\draw[line width=0.75pt] (-0.85,0) rectangle ++(-0.25,0.5);
\draw[line width=0.75pt] (-1.35,0.5) rectangle ++(-0.25,0.5);
\draw (0.35,0.25) -| (0.45,1.25) node[above] {$B_{t-1}$};
\draw (0.45,0.75) -- (1.35,0.75);
\draw (0.975,0.5) -- (0.975,0.75);
\draw (1.1,0.25) -| (1.475,0.5);
\draw (0.975,0) -- (0.975,-0.25) node[below] {$\phi_0$};
\draw (1.475,0.25) -| (1.475,-0.5) node[below] {$B_t$};
\draw (1.475,1) -| (1.475,1.25) node[above] {$\phi_1$};
\draw (0.225,0) |- (-0.225,-0.1) -- ++(0,0.1);
\draw (0,-0.1) -- (0,-0.5)node[below]{$(A+B)_{t}$};
\draw (0.225,0.5) |- (-0.225,0.6) -- ++(0,-0.1);
\draw (0,0.6) -- (0,.85)node[above]{$\phi_1$};
\draw (-0.35,0.25) -| (-0.45,1.25) node[above] {$A_{t-1}$};
\draw (-0.45,0.75) -- (-1.35,0.75);
\draw (-0.975,0.5) -- (-0.975,0.75);
\draw (-1.1,0.25) -| (-1.475,0.5);
\draw (-0.975,0) -- (-0.975,-0.25) node[below] {$\phi_0$};
\draw (-1.475,0.25) -| (-1.475,-0.5) node[below] {$A_t$};
\draw (-1.475,1) -| (-1.475,1.25) node[above] {$\phi_1$};
\end{tikzpicture}
\caption{Computing OR}
\label{fig:or_pipelined}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\begin{tikzpicture}
\draw[line width=0.75pt] (0.1,0) rectangle ++(0.25,-0.5);
\draw[line width=0.75pt] (0.85,0) rectangle ++(0.25,-0.5);
\draw[line width=0.75pt] (1.35,-0.5) rectangle ++(0.25,-0.5);
\draw[line width=0.75pt] (-0.1,0) rectangle ++(-0.25,-0.5);
\draw[line width=0.75pt] (-0.85,0) rectangle ++(-0.25,-0.5);
\draw[line width=0.75pt] (-1.35,-0.5) rectangle ++(-0.25,-0.5);
\draw (0.35,-0.25) -| (0.45,-1.25) node[below] {$B_{t}$};
\draw (0.45,-0.75) -- (1.35,-0.75);
\draw (0.975,-0.5) -- (0.975,-0.75);
\draw (1.1,-0.25) -| (1.475,-0.5);
\draw (0.975,0) -- (0.975,0.25) node[above] {$\phi_1$};
\draw (1.475,-0.25) -| (1.475,0.5) node[above] {$B_{t-1}$};
\draw (1.475,-1) -| (1.475,-1.25) node[below] {$\phi_0$};
\draw (0.225,0) |- (-0.225,0.1) -- ++(0,-0.1);
\draw (0,0.1) -- (0,0.5)node[above]{$(A+B)_{t-1}$};
\draw (0.225,-0.5) |- (-0.225,-0.6) -- ++(0,0.1);
\draw (0,-0.6) -- (0,-.85)node[below]{$\phi_0$};
\draw (-0.35,-0.25) -| (-0.45,-1.25) node[below] {$A_{t}$};
\draw (-0.45,-0.75) -- (-1.35,-0.75);
\draw (-0.975,-0.5) -- (-0.975,-0.75);
\draw (-1.1,-0.25) -| (-1.475,-0.5);
\draw (-0.975,0) -- (-0.975,0.25) node[above] {$\phi_1$};
\draw (-1.475,-0.25) -| (-1.475,0.5) node[above] {$A_{t-1}$};
\draw (-1.475,-1) -| (-1.475,-1.25) node[below] {$\phi_0$};
\end{tikzpicture}
\caption{Decomputing OR}
\label{fig:or_pipelined_decompute}
\end{subfigure}
\vspace*{-3mm}
\caption{OR gate for fully-pipelined circuits}
\vspace*{-3mm}
\end{figure}
Now, in contrast to retractile circuits, new hardware is required to decompute the result (after e.g.~copying it elsewhere) since the stages of the pipeline already contain the values of the next computation.
The (conditionally) reversible function calculated by the pipeline is $F = f_{N-1}\circ f_{N-2} \circ \cdots \circ f_0$, where $f_i$ is the conditionally reversible function computed by stage $s_i$.\footnote{Note that $\circ$ denotes functional composition, i.e.~$(g\circ f)(x) = g(f(x))$.} Since the function $f_i$ computed by each stage is conditionally reversible, the inverse of $F$ (i.e.~$F^{-1}$) exists and is determined by $F^{-1} = f_0^{-1} \circ f_1^{-1} \circ \cdots \circ f_{N-1}^{-1}$.
The inverse $f_i^{-1}$ of the function $f_i$ computed by stage $s_i$ can be easily realized by duplicating the hardware for stage $s_i$ and connecting the power clocks $\phi_0$ and $\phi_1$ in opposite fashion (as shown for an OR gate in~Fig.~\ref{fig:or_pipelined_decompute}).
Consequently, decomputing the results requires to double the depth of the pipeline and, thus, doubles the number of required transmission gates.
\begin{myexample}
Consider again the circuit shown in Fig.~\ref{fig:circuit_retractile}. The first stage contains a single OR gate. Additionally, three buffers are required to forward the inputs $x_2$, $x_1$, and $x_0$ to stage $s_1$ (while decomputing them in stage $s_0$). Consequently, $(1+3)\cdot 2=8$ transmission gates are required. The second stage has four input signals and requires two OR gates. Therefore, $(4+2)\cdot 2 = 12$ transmission gates are required to realize stage $s_1$. The third stage then has 6 inputs and two OR gates, requiring $(6+2)\cdot 2 = 16$ transmission gates. Finally, the last stage has 8 inputs and two OR gates, requiring $(8+2)\cdot 2 = 20$ transmission gates. Overall, this sums up to $56$ transmission gates. The reverse cascade of the stages again requires 56 transmission gates. Consequently, a total of 112 transmission gates is required (448 if we take the quad-rail encoding into account) to realize the function in a fully-pipelined fashion---a huge overhead compared to the retractile design methodology. However, the circuit has a higher throughput and only requires two different clocks to be operated (four if we take into account that their complement is also needed due to the dual-rail encoding).
\end{myexample}
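The bookkeeping of this example can be reproduced with the following snippet; the per-stage counts are taken from the example above, and the naming is ours.
\begin{verbatim}
// Reproducing the transmission-gate counts of the example (our code):
// each stage needs (forwarded signals + OR gates) elements, each
// costing two transmission gates; the reverse cascade doubles the
// total, and the quad-rail encoding multiplies it by four.
#include <cstdio>

int main() {
    const int buffers[] = {3, 4, 6, 8};  // signals forwarded per stage
    const int gates[]   = {1, 2, 2, 2};  // OR gates per stage
    int forward = 0;
    for (int i = 0; i < 4; ++i)
        forward += (buffers[i] + gates[i]) * 2;
    int total = 2 * forward;             // plus the reverse cascade
    std::printf("forward: %d, total: %d, quad-rail: %d\n",
                forward, total, 4 * total);   // 56, 112, 448
}
\end{verbatim}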
\subsection{Advanced Solution}
\label{sec:pipelined_opt}
The mapping scheme discussed above yields circuits with a huge overhead since many signals are pushed through the whole pipeline---even though they are not required as outputs, nor to obtain reversibility of a stage. Hence, we propose to decompute such unnecessary signals as soon as possible.
As shown in Fig.~\ref{fig:or_pipelined_decompute}, the inputs of a gate have to be present until its output is decomputed.
This means that the signals resulting from the gates in the next-to-last stage can be decomputed while computing the outputs of the function to be realized. Afterwards, the signals generated in the stage before can be decomputed---eventually resulting in the mapping scheme discussed in the previous subsection. Hence, no signal can be decomputed before the final outputs of the function to be realized are determined.
However, we can easily circumvent this problem by choosing some signals that shall not be decomputed.\footnote{Note that, in the end, all signals are decomputed since each stage is duplicated as discussed in Section~\ref{sec:pipelined_sf}.}
To this end, we mark the corresponding OIG nodes that generate these signals. This allows several other signals to be decomputed earlier---while continuing to compute the outputs of the function. Consequently, fewer signals are pushed through the pipeline---reducing the number of required transmission gates.
Recall that each node $v$ of the OIG is translated to an OR gate on a certain stage of the circuit.
To determine when the signal resulting from $v$ can be decomputed we traverse all parents (denoted $p_j$ in the following). For each parent node $p_j$ we determine the stage in which the signal generated by $v$ can be decomputed at the earliest. Then, we take the stage with the largest index, since the constraints for all parents have to be satisfied.
If $p_j$ is a marked node, we can decompute the signal generated by $v$ already in the stage in which $p_j$ is computed (since the signal computed by $p_j$ is not decomputed afterwards).
If $p_j$ is not marked, we can decompute the signal generated by $v$ at the earliest one stage after the signal generated by $p_j$ can be decomputed (because the signal generated by $v$ is required to decompute the signal generated by $p_j$).
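The following Python sketch makes this traversal concrete (an illustration with hypothetical data structures, not the actual implementation): \texttt{parents[v]} lists the nodes reading the signal of $v$, \texttt{stage[v]} is the stage in which $v$ is computed, and \texttt{marked} contains the marked nodes as well as the outputs.
\begin{verbatim}
def earliest_decompute_stage(v, parents, stage, marked, memo=None):
    """Earliest stage in which the signal of the unmarked OIG node v
    can be decomputed (outputs count as marked nodes)."""
    if memo is None:
        memo = {}
    if v not in memo:
        candidates = []
        for p in parents[v]:
            if p in marked:
                # p's signal survives, so v can already be decomputed
                # in the stage in which p is computed
                candidates.append(stage[p])
            else:
                # v is needed to decompute p, so it must survive one
                # stage longer than p
                candidates.append(earliest_decompute_stage(
                    p, parents, stage, marked, memo) + 1)
        # the constraints of all parents have to be satisfied
        memo[v] = max(candidates)
    return memo[v]
\end{verbatim}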
\begin{myexample}
\label{ex:pipelined_opt}
Consider again the OIG shown in Fig.~\ref{fig:oig} (as well as the corresponding circuit shown in Fig.~\ref{fig:circuit_retractile}). Assume that we marked the nodes labeled $\vee_2$ and $\vee_3$ (the nodes labeled $\vee_6$ and $\vee_7$ are inherently marked since they are directly connected to an output).
Consequently, we want to decompute the signals generated by the OIG nodes labeled $\vee_1$, $\vee_4$, and $\vee_5$ as soon as possible.
In the second stage (i.e.~$s_1$) of the circuit, we compute the result of the nodes labeled $\vee_2$ and $\vee_3$. Since the signal generated by node $\vee_1$ is not required anymore (its single parent labeled $\vee_3$ is marked), it can be decomputed in the second stage as well. Consequently, we can save the buffers for this signal in the third and fourth stage of the circuit. Furthermore, the signals generated by nodes labeled $\vee_4$ and $\vee_5$ can be decomputed while computing the outputs of the function (in stage $s_3$). Since this is the last stage of the circuit, no buffers can be saved. However, fewer output signals result.
Considering the fact that each pipeline stage has to be duplicated, a reduction of four buffers (i.e.~8 transmission gates) can be obtained.
\end{myexample}
This leads to the question of how to determine a suitable marking scheme for the nodes, i.e.~a marking scheme that results in a circuit with a smaller number of transmission gates.
A very simple but also effective marking scheme is to mark all nodes of the OIG with a depth that is a multiple of a constant $k\in \mathbb{N}$. For $k=2$, this means to mark all nodes with an even depth (as done in Example~\ref{ex:pipelined_opt}). The experimental evaluations summarized in Section~\ref{sec:exp} show that significant improvements can be obtained by using this marking scheme.
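Using the names of the previous sketch, this marking scheme amounts to the following (again only an illustration; \texttt{oig\_nodes}, \texttt{depth}, and \texttt{outputs} are hypothetical containers):
\begin{verbatim}
k = 2  # mark every node whose depth is a multiple of k
marked = {v for v in oig_nodes if depth[v] % k == 0} | outputs

memo = {}
for v in oig_nodes:
    if v not in marked:
        earliest_decompute_stage(v, parents, stage, marked, memo)
# memo now holds, for every unmarked node, the stage in which
# its signal is decomputed
\end{verbatim}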
\section{Evaluation}
\label{sec:exp}
In this section, we summarize and discuss the results obtained by our evaluations of the proposed design methods for adiabatic circuits. To this end, we implemented the approaches discussed in Section~\ref{sec:retractile} and Section~\ref{sec:pipelined} in C++
and used the tool ABC~\cite{DBLP:conf/cav/BraytonM10} to generate the initially required AIGs/OIGs (to reduce the number of AIG nodes, we used the synthesis command \emph{dc2}).
Afterwards, we evaluated the resulting methods using benchmarks taken from the ISCAS~\cite{ISCAS:89} and IWLS~\cite{McE:93} benchmark suites.
Table~\ref{tab:results} summarizes the obtained results. The first columns show the name of the benchmark as well as the number of primary inputs~\emph{PI} and primary outputs \emph{PO}.
Then, we list the results obtained for retractile and fully-pipelined adiabatic circuits. For each design style, we list the number of required transmission gates (denoted~$\left|tg\right|$) and the number of required power clocks (denoted~$\left|\phi\right|$) of the straightforward solution as well as the advanced solution (columns denoted \emph{Straight-forward} and \emph{Advanced}, respectively). The numbers listed for the required transmission gates take the dual-rail encoding (for retractile circuits) or quad-rail encoding (for fully-pipelined circuits) into account, and the listed numbers of power clocks account for the fact that each power clock has to be supplied in two polarities (i.e.~power clocks are dual-rail encoded for both types of circuits).
For the sake of completeness, we also list the parameter $k$ used in the solution discussed in Section~\ref{sec:pipelined_opt}.
The runtime is not listed in Table~\ref{tab:results} since all methods produce these results in negligible runtime (i.e.~a fraction of a second).
\begin{table}[t]
\caption{Evaluation}
\label{tab:results}
\vspace*{-2mm}
\centering
\scriptsize
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lrr||rr|rr||rr|rrr}
\multicolumn{3}{c||}{} & \multicolumn{4}{c||}{Retractile (Section~\ref{sec:retractile})} & \multicolumn{5}{c}{Fully-pipelined (Section~\ref{sec:pipelined})} \\
\multicolumn{3}{c||}{} & \multicolumn{2}{c|}{Straight-forward} & \multicolumn{2}{c||}{Advanced} & \multicolumn{2}{c|}{Straight-forward} & \multicolumn{3}{c}{Advanced} \\
\multicolumn{3}{c||}{} & \multicolumn{2}{c|}{(Section~\ref{sec:retractile_sf})} & \multicolumn{2}{c||}{(Section~\ref{sec:retractile_opt})} & \multicolumn{2}{c|}{(Section~\ref{sec:pipelined_sf})} & \multicolumn{3}{c}{(Section~\ref{sec:pipelined_opt})} \\
Name & $PI$ & $PO$ & $\left|\phi\right|$ & $\left|tg\right|$ & $\left|\phi\right|$ & $\left|tg\right|$ & $\left|\phi\right|$ & $\left|tg\right|$ & $k$ & $\left|\phi\right|$ & $\left|tg\right|$ \\ \hline
\csvreader[
late after line=\\,
late after last line=\\,
]{results.csv}
{1=\Name,2=\In, 3=\Out, 4=\Dsf, 5=\Dopt, 6=\retractileSF, 7=\retractileOPT, 8=\pipelinedSF, 9=\kopt, 10=\pipelinedOPT}
{\Name & \In & \Out & \Dsf & \optnum{\retractileSF} & \Dopt & \optnum{\retractileOPT} & 4 & \optnum{\pipelinedSF} & \kopt & 4 & \optnum{\pipelinedOPT}}
\end{tabular}\\
\raggedright{$\left|\phi\right|$: \#required clocks \hspace*{0.4cm} $\left|tg\right|$: \#transmission gates \hspace*{0.4cm} $k$: parameter discussed in Section~\ref{sec:pipelined_opt}}
\vspace*{-5mm}
\end{table}
First, the results nicely show the impact of the chosen design style.
Retractile circuits are clearly the better choice when it comes to reducing the number of gates, while pipelined circuits are efficient with respect to the number of power clocks and, consequently, the throughput. At first glance, it might seem that the cost of having fewer power clocks in pipelined circuits is not acceptable (in fact, orders of magnitude more gates are required). However, if area is not an issue, this might still be acceptable since, as discussed in Section~\ref{sec:background}, gates in adiabatic circuits do not affect the energy consumption as much as they do in conventional circuits.
Hence, each design style has its own advantages and disadvantages and, eventually, the user is presented with complementary solutions out of which the best suitable can be chosen.
Besides that, the results clearly show the improvement achieved by the advanced schemes. For retractile circuits, we obtain an average improvement of approx.~42\% in the number of required power clocks and of approx.~37\% in the number of required transmission gates.
For the fully-pipelined circuits, we observe a reduction in the number of transmission gates of approx.~30\% on average.
Overall, these results clearly confirm the benefit and applicability of the proposed design automation techniques for this kind of circuit. While previously considered circuits were either handcrafted (following approaches e.g.~proposed in~\cite{younis1993practical,anantharam2004driving}) or relied on fully reversible realizations which led to an unnecessarily large overhead (as done in~\cite{DBLP:journals/tcad/MorrisonR14,rauchenecker2017exploiting} and discussed in~\cite{DBLP:conf/rc/Frank17}),
the proposed design flow allows for generating the desired adiabatic circuits in an automatic fashion while, at the same time, satisfying the switching rules using conditional reversibility only. The improvements obtained by the advanced schemes additionally show the further potential that can be exploited following this direction.
\section{Conclusions}
\label{sec:conclusions}
In this work, we proposed an automatic and dedicated design flow for adiabatic circuits which explicitly takes recent findings in this domain (namely that conditional reversibility is sufficient for adiabatic circuits) into account. The proposed flow first realizes the desired functionality in terms of an AIG/OIG and, afterwards, dedicatedly maps the resulting structure to an adiabatic description. For the latter step, two complementary schemes (namely retractile and fully-pipelined) are considered which allow the designer to focus either on reducing the number of gates or on keeping the number of power clocks small. Furthermore, optimizations are proposed which allow for a reduction in the number of gates by approx.~37\% and 30\% on average for the two design styles, respectively.
In this way, expertise from both adiabatic circuits and design automation is combined, yielding an automatic \emph{and} dedicated design scheme for this promising technology. This eventually provides the basis for further studies including, among others, more sophisticated optimizations, the design and use of larger building blocks, as well as
the application of the proposed design flow in the physical implementation of adiabatic circuits.
\begin{acks}
This work has partially been supported by the European Union through the
COST Action IC1405. M. Frank was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories and by the Advanced Simulation and Computing program under the U.S. Department of Energy’s National Nuclear Security Administration (NNSA). Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for NNSA under contract DE-NA0003525. Approved for public release, SAND2018-9936 O.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The Schr\"odinger equation for a free particle has attracted the search for wave functions that evolve without distortion. Berry and Balasz have shown that an Airy wave function keeps its form under evolution, just showing some acceleration \cite{Berry}. However, Airy wave functions are not square integrable functions and therefore are not proper wave functions. If one wants to use them, they need to be apodized, either by cutting them or by super-imposing a Gaussian function; i.e., instead considering a Gauss-Airy beam. In such case, it is too much to say that they loose their shape as they evolve, and therefore, their beauty. Effects such as focusing of waves may occur when particles go through a single slit \cite{Schleich2}, as it has been shown by studying the time dependent wave function in position space and its Wigner function \cite{Schleich1}.\\
In this contribution, we want to show that by adding a positive quadratic phase to an arbitrary initial wave function, its free evolution maintains an invariant structure, while it spreads by the action of a squeeze operator. This means that the effect of passing a beam of particles (for instance electrons \cite{elec}, neutrons \cite{neut} or atoms \cite{atom}) through a negative lens provides the wave function with the property of evolution invariance, while it diffracts by the application of a squeeze operator to the initial state \cite{Yuen,Caves,Satya,Vidiella,Knight,Schleich}.\\
In the following, we revisit Airy beams and Airy-Gauss beams in order to show that the latter deform as they evolve. In Section III, we show that the acquisition of a quadratic phase helps any field to become invariant under free evolution; in Section IV, we give some examples, namely initial Sinc and Bessel functions, while Section V is left for conclusions.
\section{Revisiting Airy beams}
Berry and Balazs \cite{Berry} have shown that an initial wave function of the form (for simplicity we set $\hbar=1$)
\begin{equation}\label{Airy0}
\psi(x,0)=\mathrm{Ai}(\epsilon x),
\end{equation}
where $\epsilon$ is an arbitrary real constant, evolves according to the Schr\"odinger equation for a free particle of mass $m=1$
\begin{equation}\label{schr0}
i\frac{\partial \psi(x,t)}{\partial t}=\frac{\hat{p}^2}{2}\psi(x,t),
\end{equation}
as
\begin{equation}
\psi(x,t)=\mathrm{Ai}\left[\epsilon \left(x-\frac{\epsilon^3t^2}{4}\right)\right]
\exp\left[ i \frac{\epsilon^3t}{2}\left( x-\frac{\epsilon^3t^2}{6}\right)\right],
\end{equation}
as can be verified by substitution into (\ref{schr0}). It is clear from this solution that the Airy wave packet is conserved, meaning that it evolves without spreading. Moreover, the evolution shows an acceleration, which may also be obtained for some other initial wave packets, such as half-Bessel functions \cite{Aleahmad}. Propagation of Airy wavelet-related patterns has also been considered in \cite{Torre0}, where it was shown that they provide \textquotedblleft source functions\textquotedblright\ for freely propagating paraxial fields. The acceleration may be corrected by propagating the Airy function in a linear potential \cite{Chavez}. Unfortunately, the Airy wave packet is not a proper wave function, as it is not square integrable. A possibility for making it normalizable would be to cut it (apply a window) or to {\it apodize} it by multiplying it by a Gaussian function, which effectively cuts it. If, instead of the initial state (\ref{Airy0}), we consider as initial condition the normalizable wave function
\begin{equation}\label{Airy1}
\psi(x,0)=\textrm{Ai}(\epsilon x)\exp\left( -\beta x^2\right),
\end{equation}
with $\beta$ another arbitrary real constant, the solution then reads
\begin{equation}\label{Airy1Sol}
\psi(x,t)=\frac{1}{\sqrt{1-2i\beta t}}\textrm{Ai}\left[ \zeta(x,t)\right] \exp\left( \frac{\beta x^2}{2i\beta t -1}\right) \exp\left[i\gamma(x,t)\right]
\end{equation}
with
\begin{equation}
\zeta(x,t)=\frac{\epsilon^4t^2+\epsilon x(2i\beta t -1)}{(2\beta t+i)^2}, \qquad
\gamma(x,t)=\frac{3\epsilon^3xt(2\beta t+i)-2i\epsilon^6t^3}{3(2\beta t+i)^3};
\end{equation}
again, this can be proved by direct substitution into Eq.~(\ref{schr0}). In Figure 1, we plot the probability density $|\psi(x,t)|^2$ of Eq.~(\ref{Airy1Sol}) for different times. We can see that for $\beta=0.01$, the Airy-Gauss beam still accelerates, but it loses its shape.
\begin{figure}[H]
\centering{}
\includegraphics[width=12cm]{victor1}
\caption{Plot of the probability density $|\psi(x,t)|^2$ of the wave function in equation (\ref{Airy1Sol}) for the parameters $\epsilon=1$ and $\beta=0.01$ at (a) $t=0$, (b) $t=1$ and (c) $t=2$.}
\label{fig1}
\end{figure}
\section{Evolution invariant beams}
Now consider an initial condition of the form
\begin{equation}
\psi(x,0)=\exp\left( i\alpha x^2\right) \phi(x,0),
\end{equation}
where $\alpha$ is a real parameter which must be set in each specific case \cite{victor18}. The solution of the Schr\"odinger equation then reads
\begin{equation}\label{sol}
\psi(x,t)=\exp\left( -i\frac{t}{2}\hat{p}^2\right) \exp\left( i\alpha x^2\right) \phi(x,0).
\end{equation}
Writing the identity operator as $\hat{I}=\exp\left( i\frac{t}{2}\hat{p}^2\right) \exp\left( -i\frac{t}{2}\hat{p}^2\right)$, the previous equation can be cast as
\begin{equation}\label{ec090}
\psi(x,t)=\exp\left( -i\frac{t}{2}\hat{p}^2\right)
\exp\left( i\alpha x^2\right)
\exp \left( i\frac{t}{2}\hat{p}^2\right) \exp\left( -i\frac{t}{2}\hat{p}^2\right) \phi(x,0).
\end{equation}
As is well known, $\exp\left( -i\frac{t}{2}\hat{p}^2\right) x \exp\left( i\frac{t}{2}\hat{p}^2\right) =x-t\hat{p}$, and this implies that
\begin{eqnarray}
\exp\left( -i\frac{t}{2}\hat{p}^2\right)
\exp\left( i\alpha x^2\right)
\exp \left( i\frac{t}{2}\hat{p}^2\right)&=&\exp \left[ i \alpha \left( x-t\hat{p}\right) ^2 \right]
\\ \nonumber
&=&\exp\left\lbrace i\alpha [x^2-t(x\hat{p}+\hat{p}x)+t^2\hat{p}^2]\right\rbrace,
\end{eqnarray}
which substituted in equation (\ref{ec090}) gives us
\begin{equation}
\psi(x,t)=\exp\left\lbrace i\alpha [x^2-t(x\hat{p}+\hat{p}x)+t^2\hat{p}^2]\right\rbrace \exp\left( -i\frac{t}{2}\hat{p}^2\right) \phi(x,0).
\end{equation}
It is not difficult to show that the first exponential above may be factorized as \cite{metop}
\begin{equation}
\exp \left[ if_1(t)x^2\right] \exp\left[ i f_2(t)(x\hat{p}+\hat{p}x)\right] \exp\left[ i f_3(t)\hat{p}^2\right] ,
\end{equation}
with
\begin{equation}
f_1(t)= \frac{\alpha}{1+2\alpha t}, \qquad f_2(t)=-\frac{1}{2} \ln(1+2\alpha t), \qquad f_3(t)= \frac{\alpha t^2}{1+2\alpha t}.
\end{equation}
This allows us to give a final form for equation (\ref{sol}) as
\begin{equation}
\psi(x,t)=\exp\left[ if_1(t)x^2\right] \exp\left[ if_2(t)(x\hat{p}+\hat{p}x)\right] \exp\left[ i f_4(t)\hat{p}^2\right] \phi(x,0)
\end{equation}
with $f_4(t)=f_3(t)-t/2=-\frac{t}{2(1+2\alpha t)}$.\\
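The functions $f_1$, $f_2$ and $f_3$ can be cross-checked with the $2\times 2$ matrices that quadratic exponentials induce on $(x,\hat{p})$ in the Heisenberg picture: $\exp\left(icx^2\right)$ maps $(x,\hat{p})\to(x,\hat{p}+2cx)$, $\exp\left(ic\hat{p}^2\right)$ maps $(x,\hat{p})\to(x-2c\hat{p},\hat{p})$, and $\exp\left[ic(x\hat{p}+\hat{p}x)\right]$ maps $(x,\hat{p})\to(e^{-2c}x,e^{2c}\hat{p})$, with operator products composing as matrix products in the same order. The short SymPy script below (a verification sketch of ours, not part of the original derivation) confirms the factorization:
\begin{verbatim}
import sympy as sp

a, t = sp.symbols('alpha t', positive=True)

Mx = lambda c: sp.Matrix([[1, 0], [2*c, 1]])       # exp(i c x^2)
Mp = lambda c: sp.Matrix([[1, -2*c], [0, 1]])      # exp(i c p^2)
Md = lambda c: sp.diag(sp.exp(-2*c), sp.exp(2*c))  # exp(i c (xp+px))

f1 = a / (1 + 2*a*t)
f2 = -sp.log(1 + 2*a*t) / 2
f3 = a * t**2 / (1 + 2*a*t)

# exp(i a (x - t p)^2) = exp(-i t p^2/2) exp(i a x^2) exp(i t p^2/2)
lhs = Mp(-t/2) * Mx(a) * Mp(t/2)
rhs = Mx(f1) * Md(f2) * Mp(f3)
print((lhs - rhs).applyfunc(sp.simplify))  # the zero matrix
\end{verbatim}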
We now examine the behaviour of $f_4(t)$ as a function of the parameter $\alpha$. The Taylor series of $f_4(t)$ for $\alpha \approx 0$ is
\begin{equation}\label{0150}
f_4(t) = -\frac{t}{2}+t^2 \alpha-2t^3 \alpha^2+\mathrm{O}\left(\alpha^3\right)
\end{equation}
and for $\alpha \approx \infty$ is
\begin{equation}\label{0160}
f_4(t) = -\frac{1}{4\alpha}+\frac{1}{8t\alpha^2}+\mathrm{O}\left(\frac{1}{\alpha^3}\right).
\end{equation}
In Figure 2, we plot $f_4(t)$ as a function of time for different values of the $\alpha$ parameter. It may be seen that $f_4(t)$ remains bounded, staying closer to zero as $\alpha$ increases, as expected from the approximation in Equation~(\ref{0160}).
\begin{figure}[H]
\centering{}
\includegraphics[width=10cm]{figura2.jpg}
\caption{Plot of the function $f_4(t)$ for $\alpha=10$ (dotted line), $\alpha=5.0$ (dashed line) and $\alpha=0.5$ (continuous line).}
\label{fig2}
\end{figure}
Thus, for small values of $\alpha$, we take the first two terms in the Taylor expansion of the operator
$\exp\left[ i f_4(t)\hat{p}^2\right]$ and we get
\begin{equation} \label{appsol}
\psi_1(x,t)\approx \exp\left[ if_1(t)x^2\right] \exp\left[ if_2(t)(x\hat{p}+\hat{p}x)\right] \left[1+if_4(t)\hat{p}^2\right]\phi(x,0).
\end{equation}
For $\alpha$ large enough, we completely disregard the term $\exp\left[ i f_4(t)\hat{p}^2\right]$ and, to a very good approximation (as will be seen below), we simply write the zeroth-order solution
\begin{equation}\label{appsol2}
\psi_0(x,t)\approx \exp\left[ if_1(t)x^2\right] \exp\left[ i f_2(t)(x\hat{p}+\hat{p}x)\right] \phi(x,0).
\end{equation}
The operator $\exp\left[i f_2(t)(x\hat{p}+\hat{p}x)\right]$ is the squeeze operator, and by applying it to the initial function, the equation above may be cast as
\begin{equation}\label{appsol3}
\psi_0(x,t)=\frac{1}{\sqrt{1+2\alpha t}} \exp\left[ if_1(t)x^2\right] \phi\left(\frac{x}{1+2\alpha t},0\right).
\end{equation}
It is clear that the above wave function gives a probability density that remains invariant during evolution
\begin{equation}\label{invariant}
|\psi_0(x,t)|^2=\frac{1}{{1+2\alpha t}}\left\vert \phi\left(\frac{x}{1+2\alpha t},0\right)\right\vert^2.
\end{equation}
The choice of the $\alpha$ parameter depends on the problem being studied and on the propagation distance that must be considered, as will be shown in the examples below. From Eqs. (\ref{0150}) and (\ref{0160}), it is also clear that different values of $\alpha$ must be considered depending on whether the zeroth-order or the first-order solution is going to be used. In \cite{victor18} we present a discussion on the choice of this parameter in the realm of classical optics.
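As a quick numerical illustration of Eq. (\ref{invariant}), one may propagate a chirped packet exactly in Fourier space and compare with the squeezed initial density. The following sketch (with an arbitrary Gaussian test profile and grid parameters of our own choosing, not taken from the examples below) shows a deviation at the percent level for $\alpha=3$:
\begin{verbatim}
import numpy as np

N, L = 16384, 400.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

alpha, t = 3.0, 5.0
phi0 = np.exp(-x**2 / 2)                 # arbitrary test profile
psi0 = np.exp(1j * alpha * x**2) * phi0  # add the quadratic phase

# free evolution (hbar = m = 1) is exact in Fourier space
psi_t = np.fft.ifft(np.exp(-1j * t * k**2 / 2) * np.fft.fft(psi0))

s = 1 + 2 * alpha * t                    # squeeze factor
pred = np.exp(-(x / s)**2) / s           # |phi(x/s)|^2 / s

err = np.max(np.abs(np.abs(psi_t)**2 - pred)) / np.max(pred)
print(f"relative deviation: {err:.1e}")  # ~1e-2 for these values
\end{verbatim}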
\section{Some examples}
In this section, we study some examples where we apply our approximation and compare it with the exact solution.
\subsection{Sinc function}
We start with an initial (unnormalized, but normalizable) wave packet of the form
\begin{equation}
\psi(x,0)= \exp\left( i\alpha x^2\right) \mathrm{Sinc}(bx),
\end{equation}
where $b$ is an arbitrary real constant and where we define the Sinc function as
\begin{equation}
\mathrm{Sinc}(bx)=\frac{1}{b}\int_{-b}^b \exp\left( i u x\right) du.
\label{sinc}
\end{equation}
We write the approximations to zeroth and first order as
\begin{equation}
\psi_0(x,t)=\frac{\exp\left[ if_1(t)x^2\right]} {b\sqrt{1+2\alpha t}} \int_{-b}^b \exp\left( iu\frac{x}{1+2\alpha t}\right) du,
\end{equation}
and
\begin{equation}
\psi_1(x,t)=\psi_0(x,t)+i\frac{f_4(t) \exp\left[ if_1(t)x^2\right]} {b\sqrt{1+2\alpha t}} \int_{-b}^b u^2 \exp\left( iu\frac{x}{1+2\alpha t}\right) du,
\end{equation}
respectively. For the sake of comparison, we can also write the exact solution as
\begin{equation}
\psi(x,t)=\frac{\exp\left[ if_1(t)x^2\right] }{b\sqrt{1+2\alpha t}} \int_{-b}^b \exp\left[ if_4(t)u^2\right] \exp\left( iu\frac{x}{1+2\alpha t}\right) du.
\end{equation}
We plot in Figure \ref{fig3} (a) and (c) the probability densities for the zeroth-order and exact solutions, showing that they match very well for $\alpha=0.3$ and show excellent agreement for a larger value ($\alpha=3$). In Figure \ref{fig3} (b) and (d), the quantities $|\psi_0(x,t)|^2$ (dashed line) and $|\psi_1(x,t)-\psi_0(x,t)|^2$ (solid line) are plotted in order to show that the first-order correction is negligible already for such small values of $\alpha$.
\begin{figure}[H]
\centering{}
\includegraphics[width=11cm]{victorsinc} \caption{\label{fig3} Plot of the probability densities for an initial Sinc function as given in equation (\ref{sinc}) with $b=1$ and the parameters (a) $\alpha=0.3$ and (c) $\alpha=3$ for $t=5$ for the exact (solid line) and approximate solutions (dashed line). Panels (b) and (d) compare the zeroth-order (dashed line) and first-order correction (solid line) contributions.}
\end{figure}
\subsection{Bessel function}
We consider now the initial wave function given by a Bessel function \cite{Leija,Optica}
\begin{equation}
\psi(x,0)=\exp \left( i\alpha x^2\right) J_n(x),
\end{equation}
with $J_n(x)$ a Bessel function of order $n$, defined as \cite{Arfken}
\begin{equation}
J_n(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi} \exp\left( in\theta\right) \exp\left( -ix\sin\theta\right) d\theta.
\end{equation}
It is not difficult to show that the zeroth order solution is given by
\begin{equation}
\psi_0(x,t)=\frac{\exp\left[ if_1(t)x^2\right]} {\sqrt{1+2\alpha t}} J_n\left(\frac{x}{1+2\alpha t}\right),
\end{equation}
while the solution to first order reads
\begin{eqnarray}
& & \psi_1 \left( x,t\right)= \frac{\exp\left[ i f_1\left( t \right) x^2\right]}{\sqrt{1+2\alpha t}} \times
\nonumber \\
& & \left\lbrace
\left[1+\frac{i f_4\left( t \right) }{2}\right]
J_n\left( \frac{x}{1+2\alpha t} \right)
-i \frac{f_4\left( t \right) }{4}
\left[ J_{n+2}\left( \frac{x}{1+2\alpha t} \right)+
J_{n-2}\left( \frac{x}{1+2\alpha t} \right)
\right]
\right\rbrace.
\end{eqnarray}
In order to show that the approximation is good, we write also the exact solution as
\begin{eqnarray}
\psi(x,t)=\frac{ \exp\left[ if_1(t)x^2\right] }{2\pi\sqrt{1+2\alpha t}}\int_{-\pi}^{\pi} \exp\left( in\theta\right) \exp\left( -ix\sin\theta\right) \exp\left[ if_4(t)\sin^2\theta\right] d\theta,
\end{eqnarray}
which is a so-called generalized Bessel function \cite{Leija,Dattoli,Torre}. In Figure \ref{fig4}, we plot the probability densities for the exact (solid lines) and zeroth order solutions (dashed lines) which again show an excellent agreement.
\begin{figure}[H]
\centering{}\includegraphics[width=12cm]{VictorBessel} \caption{\label{fig4} Plot of the probability densities for an initial wave function given by a Bessel function $J_0(x)$ with parameters (a) $\alpha=10$, (b) $\alpha=5$ and (c) $\alpha=0.5$ for $t=5$ for the exact (solid lines) and first order approximate solutions (dashed lines). }
\end{figure}
\section{Conclusions}
We have shown that by adding a quadratic phase to an initial wave packet, its structure may be kept invariant through free evolution. The main result of this contribution is equation (\ref{invariant}), which clearly shows this fact. Although the invariance is an approximation, it was shown to match the exact evolution very well. The price that has to be paid is the usual spread of the wave function due to free evolution, which is given here by the application of the squeeze operator to the initial wave function.\\
\section{Introduction}
Ultra-faint dwarf galaxies (UFDs) are the least luminous galaxies known.
They have only been discovered relatively recently, after deep, wide-area photometric surveys like the Sloan Digital Sky Survey, Pan-STARRS, and the Dark Energy Survey found several low surface-brightness satellites of the Milky Way \citep[e.g.,][]{Willman05a,Belokurov07,Laevens15a,Bechtol15,Koposov15a}.
Though at first it was unclear if such objects were dwarf galaxies or globular clusters \citep{Willman05a}, subsequent spectroscopic followup found most of them displayed velocity dispersions implying mass-to-light ratios $>100$ and large metallicity spreads \citep[e.g.,][]{Simon07}.
These properties contrast with globular clusters, which display no evidence for dark matter or large metallicity spreads \citep{Willman12}.
UFDs are now understood to be the natural result of galaxy formation in small dark matter halos in standard $\Lambda$CDM cosmology.
Theoretically, these galaxies begin forming at $z \sim 10$ in small $\sim 10^8~M_\odot$ dark matter halos \citep{Bromm11}.
Supernova feedback is especially effective in these small galaxies \citep[e.g.,][]{BlandHaw15},
so they form stars inefficiently for $1-2$ Gyr before their star formation is quenched by reionization \citep{Bullock00,Benson02}.
All observed properties of UFDs are also consistent with this picture.
Color-magnitude diagrams show they contain uniformly old stellar populations \citep{Brown14,Weisz14a}.
Spectroscopy shows their stars have low metallicities that extend the mass-metallicity relation all the way to $M_\star \sim 1000 M_\odot$ \citep{Kirby08,Kirby13b}.
At such tiny stellar masses, the chemical abundances of individual UFDs will not even sample a full initial mass function's worth of supernovae \citep[e.g.,][]{Koch08,Simon10,Lee13}, let alone rarer nucleosynthesis events like neutron star mergers \citep{Ji16b}.
Given the likely association between UFDs and small scale dark matter substructure, it is extremely important to distinguish between UFDs and globular clusters.
Currently, the largest telescopes can perform spectroscopy to establish velocity and metallicity dispersions from a reasonable number of stars in the closest and/or most luminous UFDs \citep[e.g.,][]{Simon07}.
However, many of the most recently discovered UFDs are very faint and/or far away.
In such cases, only a handful of stars are accessible for followup spectroscopy, so it is difficult to clearly establish a velocity or metallicity dispersion for these galaxy candidates \citep[e.g.,][]{Koch09,Koposov15b,Kirby15,Kirby15b,Martin16,Martin16b}.
Exacerbating this concern is the presence of unresolved binary stars, which can inflate velocity dispersions and can therefore lead to premature UFD classifications \citep{McConnachie10,Ji16a,Kirby17}.
As a result, many UFD candidates still do not have clear velocity and/or metallicity dispersions \citep{Kirby15,Kirby17,Martin16,Martin16b,Walker16,Simon17}.
For some UFDs, an alternative is to examine the detailed chemical abundances of the brightest stars.
The first high-resolution spectroscopic abundances of stars in UFDs revealed that most elemental abundances in UFDs follow the average trends defined by metal-poor Milky Way halo stars, with the obvious exception of neutron-capture elements (e.g. Sr, Ba, Eu) that were extremely \emph{low} \citep{Koch08,Koch13,Frebel10b,Frebel14,Simon10}.
This view was recently revised by the discovery that some UFDs (Reticulum~II and Tucana~III) have extremely high abundances of neutron-capture elements synthesized in the $r$-process \citep{Ji16b,Roederer16b,Hansen17}.
In stark contrast, neutron-capture elements in globular clusters closely follow the abundance trends of the Milky Way halo \citep[e.g.,][]{Gratton04,Gratton12,Pritzl05}, including the globular clusters that display some internal neutron-capture abundance scatter \citep{Roederer11b}.
Extreme neutron-capture element abundances have thus been suggested to be a distinguishing factor between UFDs and globular clusters \citep{Frebel15}.
Here we study the detailed chemical abundances of the dwarf galaxy candidates Grus~I (Gru~I) and Triangulum~II (Tri~II).
Gru~I was discovered in Dark Energy Survey data by \citet{Koposov15a}.
\citet{Walker16} identified seven likely members of this galaxy, but did not resolve a metallicity or velocity dispersion.
Tri~II was discovered by \citet{Laevens15a} in Pan-STARRS.
As one of the closest but also least luminous galaxy candidates ($d_\odot = 28.4$ kpc, $M_V = -1.2$; \citealt{Carlin17}), Tri~II has already been the subject of numerous spectroscopic studies \citep{Kirby15,Kirby17,Martin16,Venn17}.
We report the first detailed chemical abundances of two stars in Gru~I and a reanalysis of two stars in Tri~II with additional data.
We describe our observations and abundance analysis in Sections~\ref{s:obs} and \ref{s:analysis}.
Section~\ref{s:abunds} details the results for individual elements.
We consider the classification of Gru~I and Tri~II in Section~\ref{s:discussion}, with an extended discussion of the origin and interpretation of neutron-capture elements in UFDs, larger dSph satellites, and globular clusters.
We conclude in Section~\ref{s:conclusion}.
\vspace{1cm}
\section{Observations and Data Reduction} \label{s:obs}
Our program stars were observed from two telescopes with two different echelle spectrographs.
Details of the observations can be found in Table~\ref{tbl:obs}.
Selected spectral regions of these four stars are shown in Figure~\ref{f:spec}.
The Gru~I stars were selected as the two brightest probable members of Gru~I from \citet{Walker16}.
We observed these stars with the Magellan Inamori Kyocera Echelle (MIKE) spectrograph \citep{Bernstein03} on the Magellan-Clay telescope in Aug 2017 with the 1\farcs0 slit, providing resolution $R \sim 28,000$ from ${\sim}3900-5000${\AA} on the blue arm and $R \sim 22,000$ from ${\sim}5000-9000${\AA} on the red arm. Individual exposures were 50-55 minutes long.
The data were reduced with CarPy \citep{Kelson03}.
Heliocentric corrections were determined with \texttt{rvcor} in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.}.
The two stars in Tri~II were observed with the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES) \citep{Donati03, Chene14}\footnote{See \url{http://www.gemini.edu/sciops/instruments/visiting/graces} for more details}. These stars were selected as the brightest probable members of Tri~II from \citet{Kirby15} and \citet{Martin16}.
We combined data from two programs\footnote{GN-2015B-DD-2 (PI Venn) and GN-2016B-Q-44 (PI Ji)} that both used the 2-fiber object+sky GRACES mode providing $R \sim 40,000$ from ${\sim}5000-10,000${\AA}.
The GRACES throughput for these faint stars was worse than predicted by the integration time calculator, especially at wavelengths $<6000${\AA} where the signal-to-noise ratio (S/N) was less than half that expected.
The data were reduced with the OPERA pipeline for ESPaDOnS that was adapted for GRACES \citep{Martioli12}.
This pipeline automatically includes a heliocentric velocity correction.
\begin{deluxetable*}{lccrcrlrrrl}
\tablecolumns{11}
\tablewidth{0pt}
\tabletypesize{\footnotesize}
\tablecaption{Observing Details\label{tbl:obs}}
\tablehead{
\colhead{Star} & \colhead{$\alpha$} & \colhead{$\delta$} & \colhead{$V$} & \colhead{Observation Date} & \colhead{$t_{\rm exp}$} & \colhead{$v_{\rm hel}$} & \colhead{S/N} & \colhead{S/N} &\colhead{S/N} & \colhead{Instrument}\\
\colhead{} & \colhead{(J2000)} & \colhead{(J2000)} & \colhead{(mag)} & \colhead{} & \colhead{(min)} & \colhead{(km s$^{-1}$)} & \colhead{(4500\AA)} & \colhead{(5300\AA)} & \colhead{(6500\AA)} & \colhead{}
}
\startdata
GruI-032 & 22 56 58.1 & $-$50 13 57.9 & 18.1 & 2017 Aug 16,25 & 165 & $-139.8 \pm 0.7$ & 22 & 25 & 60 & MIKE 1\farcs0 slit\\
GruI-038 & 22 56 29.9 & $-$50 04 33.3 & 18.7 & 2017 Aug 15,16,25 & 430 & $-143.9 \pm 0.4$ & 20 & 22 & 55 & MIKE 1\farcs0 slit\\
TriII-40 & 02 13 16.5 & $+$36 10 45.9 & 17.3 & 2015 Dec 15 & 60 & $-381.5 \pm 1.3$ & 5 & 15 & 35 & GRACES 2-fiber\\
& & & & 2016 Sep 8 & 80 & $-381.5$ & & & & GRACES 2-fiber\\
TriII-46 & 02 13 21.5 & $+$36 09 57.6 & 18.8 & 2015 Dec 16,17 & 160 & $-396.5 \pm 3.2$ & 1 & 7 & 17 & GRACES 2-fiber\\
& & & & 2016 Sep 7 & 120 & $-381.5 \pm 5.0$ & & & & GRACES 2-fiber\\
\enddata
\tablecomments{S/N values are per pixel. S/N values for Tri~II stars were determined after coadding. Velocity precision is computed with coadded spectra except for TriII-46, where each visit is measured separately because of the binary orbital motion.}
\end{deluxetable*}
We used IRAF and SMH \citep{Casey14} to coadd, normalize, stitch orders, and Doppler correct the reduced spectra.
We estimated the S/N per pixel on coadded spectra by running a median absolute deviation filter across the normalized spectra in a ${\approx}$5{\AA} window.
The signal-to-noise at the order center closest to rest wavelengths of 4500{\AA}, 5300{\AA}, and 6500{\AA} is given in Table~\ref{tbl:obs}.
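A compact sketch of this estimate (our own illustration rather than the code actually used; the factor 1.4826 converting MAD to a Gaussian $\sigma$ is our assumption):
\begin{verbatim}
import numpy as np

def snr_mad(wave, flux, window=5.0):
    """Per-pixel S/N of a normalized spectrum from a running
    median-absolute-deviation (MAD) filter of given width (in AA)."""
    snr = np.empty_like(flux)
    for i in range(flux.size):
        sel = np.abs(wave - wave[i]) < window / 2
        mad = np.median(np.abs(flux[sel] - np.median(flux[sel])))
        snr[i] = 1.0 / (1.4826 * mad)  # MAD -> Gaussian sigma
    return snr
\end{verbatim}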
Radial velocities were determined by cross correlating the Mg b triplet against a MIKE spectrum of HD122563.
\citet{Venn17} found one of the stars in Tri~II (TriII-46) to be a binary, so we Doppler shifted spectra from each visit to rest frame before coadding.
The implications of this binary star were previously considered in \citet{Venn17} and \citet{Kirby17}. Our added velocity measurement does not affect their conclusions.
Other than TriII-46, the velocities are consistent with constant heliocentric velocity in our data and with previous velocity measurements \citep{Kirby15,Kirby17,Martin16,Walker16,Venn17}.
Velocity precision was estimated using the coadded spectra by cross-correlating all orders from $5000-6500${\AA} for MIKE and $4500-6500${\AA} for GRACES against HD122563.
We excluded orders where the velocity was not within 10~km~s$^{-1}$ of the Mg b velocity, then took the standard deviation of the remaining order velocities.
This value was added in quadrature to the combined statistical velocity uncertainty to obtain the velocity uncertainties listed in Table~\ref{tbl:obs}.
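Schematically (a sketch with our own variable names, not the actual pipeline):
\begin{verbatim}
import numpy as np

def velocity_error(order_v, v_mgb, sigma_stat):
    """order_v: per-order cross-correlation velocities (km/s);
    v_mgb: Mg b velocity; sigma_stat: statistical uncertainty."""
    good = order_v[np.abs(order_v - v_mgb) < 10.0]  # reject outliers
    return np.hypot(np.std(good), sigma_stat)       # quadrature sum
\end{verbatim}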
The most discrepant velocity other than TriII-46 is that of GruI-032, which is ${\approx}1$~km~s$^{-1}$ away from the measurement in \citet{Walker16} ($-138.4 \pm 0.4$~km~s$^{-1}$), but the difference is not large enough for us to consider this star a clear binary candidate.
Note that the two Gru~I stars differ by ${\approx}4$ km s$^{-1}$, which could be consistent with a significant velocity dispersion.
\begin{figure*}
\centering
\includegraphics[height=8cm]{spec_mgb.pdf}
\includegraphics[height=8cm]{spec_ba.pdf}
\caption{
\emph{Left panels}: Spectra of the target stars around the Mg b triplet. Mg b, Ti II, and Fe I lines are labeled in black, blue, and red, respectively. Notice the large drop in S/N in TriII-46 at the red end due to reaching the order edge.
\emph{Right panels}: our four stars near the Ba line at 6497{\AA}. For Gru~I stars, the solid red curve indicates our best-fit synthesis, while the dotted red curves indicate $\pm 0.15$ dex. For Tri~II stars, the dashed red curves indicate upper limits. In all panels, the dashed blue line indicates $\mbox{[Ba/Fe]} = 0$ for comparison.
\label{f:spec}}
\end{figure*}
\section{Abundance Analysis}\label{s:analysis}
We analyzed all four stars using the 2011 version of the 1D LTE radiative transfer code MOOG \citep{Sneden73, Sobeck11} with the \citet{Castelli04} (ATLAS) model atmospheres.
We measured equivalent widths and ran MOOG with SMH \citep{Casey14}.
The abundance of most elements was determined from equivalent widths.
We used spectral synthesis to account for blends, molecules, and hyperfine structure for the species CH, Sc, Mn, Sr, Ba, and Eu.
Atomic data references can be found in table~3 of \citet{Roederer10}.
Measurements and uncertainties of individual features are in Table~\ref{tbl:eqw}.
Stellar parameters and uncertainties for this work and previous measurements are in Table~\ref{tbl:sp}.
Final abundances and uncertainties are in Table~\ref{tbl:abunds}.
Detailed abundance uncertainties due to stellar parameter variations are in Table~\ref{tbl:spabunderr}.
\input{ew_short}
\subsection{Standard analysis for brighter stars}
For three of our stars (GruI-032, GruI-038, and TriII-40), our spectra are of sufficient quality for a standard equivalent width analysis.
We first fit Gaussian profiles to the line list in \citet{Roederer10}.
We applied the formula from \citet{Battaglia08} to determine equivalent width uncertainties.
The S/N per pixel was calculated with median absolute deviation in a running 5{\AA} window.
Varying the window size affected the S/N estimates by only 2-3\%, but we conservatively add an additional 10\% uncertainty to each equivalent width.
Using this estimate, we rejected most lines with equivalent width uncertainties larger than 30\%.
The exceptions were lines of Al, Si, Cr, Co, and Zn that otherwise would have had all lines of that element rejected; and some clean lines near regions of large true variation (e.g., near CH bands) where the S/N was clearly underestimated.
We propagate these to a $1\sigma$ abundance uncertainty for each line (Table~\ref{tbl:eqw}).
Synthesis uncertainties are calculated by varying abundances until the entire synthesized profile encompasses the spectrum noise around the feature, corresponding to $1\sigma$ uncertainties.
We derived the effective temperature, surface gravity, and microturbulence ($T_{\rm eff}$, $\log g$, $\nu_t$) with excitation, ionization, and line strength balance of Fe lines. We then applied the $T_{\rm eff}$ correction from \citet{Frebel13} and redetermined $\log g$ and $\nu_t$.
Statistical uncertainties for $T_{\rm eff}$ and $\nu_t$ correspond to the $1\sigma$ error on the fitted slopes of abundance with respect to excitation potential and reduced equivalent width, respectively.
The statistical uncertainty for $\log g$ was derived by varying the parameter to match the combined standard error of the Fe\,I and Fe\,II abundances.
We then further adopt systematic uncertainties of 150 K for $T_{\rm eff}$ from scatter in the \citet{Frebel13} calibration, as well as 0.3 dex for $\log g$ and 0.3 km s$^{-1}$ for $\nu_t$ to reflect this systematic temperature uncertainty.
We use the standard deviation of Fe\,I lines as the statistical uncertainty in the stellar atmosphere's model metallicity.
We add the statistical and systematic uncertainties in quadrature to obtain the stellar parameter uncertainties in Table~\ref{tbl:sp}.
These three stars are all $\alpha$-enhanced, so we used the $\mbox{[$\alpha$/Fe]} = +0.4$ \citet{Castelli04} model atmospheres.
\subsection{Analysis of TriII-46}
The data for star TriII-46 has very low signal-to-noise (Table~\ref{tbl:obs}) and thus requires special care.
We rebin the spectra by a factor of 2 to improve the signal-to-noise.
This allowed us to measure equivalent widths for lines at the center of echelle orders with wavelengths $>5000${\AA}.
After keeping only lines with equivalent width uncertainty less than 30\%, we have 18 Fe\,I lines and only one Fe\,II line.
For this small number of lines, spectroscopic determination of stellar parameters is subject to many degeneracies based on line selection.
Still, we examine here what parameters would be derived with the information from Fe lines.
If we apply the same procedure as for the other three stars (i.e., excitation, ionization, and line strength balance with the \citealt{Frebel13} correction but using only these 19 lines), we obtain $T_{\rm eff}=5260$\,K, $\log g=2.1$\,dex, $\nu_t=2.60$\,km/s, and $\mbox{[Fe/H]} = -2.01$.
However, the ionization equilibrium is set by a single Fe\,II line with equivalent width $164 \pm 50$ m{\AA}, so this is extremely unreliable.
Ignoring the Fe\,II line and using a $\mbox{[Fe/H]}=-2$ Yonsei-Yale isochrone to set $\log g$ as a function of $T_{\rm eff}$ \citep{Kim02}, we obtain $T_{\rm eff}=5260$\,K, $\log g=2.7$\,dex, $\nu_t=2.50$\,km/s, and $\mbox{[Fe/H]} = -2.01$.
The statistical errors are large: 240K, 0.6 dex, 0.5 km/s, and 0.3 dex respectively.
We summarize this and other derived stellar parameters for this star in Table~\ref{tbl:sp}.
For comparison, \citet{Venn17} derived $T_{\rm eff}=5050$\,K, $\log g=2.6$\,dex, and $\nu_t=2.5$\,km/s for TriII-46 using photometry, distance, and a modified scaling relation for $\nu_t$.
An updated distance modulus \citep{Carlin17} would slightly increase $\log g$ to 2.7\,dex.
\citet{Kirby17} derived $T_{\rm eff}=5282$\,K, $\log g=2.74$\,dex, and $\nu_t=1.5$\,km/s using photometry and distance to set $\log g$ and $\nu_t$ but allowing $T_{\rm eff}$ to vary to fit their spectrum.
Our stellar parameters are somewhat in between their values, preferring the higher temperature from \citet{Kirby17} but with the higher microturbulence from \citet{Venn17}.
Our data for this star are insufficient to make any further refinements, so we decided to adopt intermediate values with large uncertainties that encompass other stellar parameter determinations: $T_{\rm eff}=5150 \pm 200$\,K, $\log g=2.7 \pm 0.5$\,dex, and $\nu_t=2.0 \pm 0.5$\,km/s.
Regardless of the stellar parameters, this star is not $\alpha$-enhanced so we use the \citet{Castelli04} model atmospheres with $\mbox{[$\alpha$/Fe]} = 0$.
We propagate these uncertainties through to the final abundance uncertainties.
\subsection{Final Abundances and Uncertainties}
Table~\ref{tbl:abunds} contains the final abundance results for our stars.
For each element, $N$ is the number of lines measured.
$\log \epsilon(X)$ is the average abundance of those lines weighted by the abundance uncertainty.
Letting $\log\epsilon_i$ and $\sigma_i$ be the abundance and uncertainty of line~$i$, we define $w_i = 1/\sigma_i^2$ and $\log\epsilon(X) = \sum_i (w_i \log\epsilon_i) / \sum_i w_i$.
$\sigma$ is the standard deviation of those lines.
$\sigma_{\rm w}$ is the standard error from propagating individual line uncertainties, i.e., $1/\sigma_{\rm w}^2 = \sum_i w_i$ \citep{McWilliam95}.
$\mbox{[X/H]}$ is the abundance relative to solar abundances from \citet{Asplund09}.
$\mbox{[X/Fe]}$ is calculated using either [Fe\,I/H] or [Fe\,II/H], depending on whether X is neutral or ionized; except for TriII-46, where all [X/Fe] are calculated relative to [Fe\,I/H] because of an unreliable Fe\,II abundance.
$\sigma_{\rm [X/H]}$ is the quadrature sum of $\sigma/\sqrt{N}$, $\sigma_{\rm w}$, and abundance uncertainties due to $1\sigma$ stellar parameter variations.
Detailed abundance variations from changing each stellar parameter are given in Table~\ref{tbl:spabunderr}.
$\sigma_{\rm [X/Fe]}$ is similar to $\sigma_{\rm [X/H]}$, but when calculating the stellar parameter uncertainties we include variations in Fe.
We use the difference in Fe\,I abundance for neutral species and the difference in Fe\,II abundance for ionized species to calculate this error.
The [X/Fe] error is usually smaller than the [X/H] error, since abundance differences from changing $T_{\rm eff}$ and $\log g$ usually (but not always) affect X and Fe in the same direction when using the same ionization state.
Since most of our elements have very few lines, we adopt the standard deviation of the Fe\,I lines as the minimum $\sigma$ when calculating $\sigma_{\rm [X/H]}$ and $\sigma_{\rm [X/Fe]}$.
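For reference, the per-element bookkeeping described above amounts to the following sketch (names are ours; the stellar-parameter terms of Table~\ref{tbl:spabunderr} and the minimum-$\sigma$ rule would be applied on top of this):
\begin{verbatim}
import numpy as np

def combine_lines(logeps, sigma):
    """logeps, sigma: abundances and 1-sigma errors of the N lines."""
    w = 1.0 / sigma**2
    mean = np.sum(w * logeps) / np.sum(w)  # weighted log eps(X)
    sd = np.std(logeps, ddof=1)            # sigma: line-to-line scatter
    sigma_w = 1.0 / np.sqrt(np.sum(w))     # weighted standard error
    # sigma_[X/H] before stellar-parameter terms:
    partial = np.hypot(sd / np.sqrt(logeps.size), sigma_w)
    return mean, sd, sigma_w, partial
\end{verbatim}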
Upper limits were derived by spectrum synthesis.
Using several features of each element (Table~\ref{tbl:eqw}), we found the best-fit synthesis to the observed spectrum to determine a reference $\chi^2$ and smoothing for the synthetic spectrum.
The minimum smoothing was calculated using $\rm{FWHM} = \lambda/R$ where $\lambda$ is the line wavelength.
Holding the continuum, smoothing, and radial velocity fixed, we increased the abundance until $\Delta\chi^2 = 25$. This is formally a $5\sigma$ upper limit, though it does not include uncertain continuum placement.
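Schematically, the scan looks as follows, where \texttt{chi2\_of\_synth} is a hypothetical stand-in for evaluating the $\chi^2$ of a synthetic spectrum at fixed continuum, smoothing, and radial velocity:
\begin{verbatim}
def abundance_upper_limit(chi2_of_synth, best_abund, step=0.02):
    """Raise the abundance until Delta chi^2 = 25 (formally 5 sigma)."""
    chi2_ref = chi2_of_synth(best_abund)
    a = best_abund
    while chi2_of_synth(a) - chi2_ref < 25.0:
        a += step
    return a
\end{verbatim}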
\begin{deluxetable}{lcrrllllll}
\tablecolumns{12}
\tablewidth{\linewidth}
\tabletypesize{\footnotesize}
\tablecaption{Stellar Parameters\label{tbl:sp}}
\tablehead{
\colhead{Star} & \colhead{Ref} & \colhead{$T_{\rm eff}$} & \colhead{$\sigma$} & \colhead{$\log g$} & \colhead{$\sigma$} & \colhead{$\nu_t$}\tablenotemark{a} & \colhead{$\sigma$} & \colhead{[Fe/H]} & \colhead{$\sigma$}}
\startdata
GruI-032 & TW & 4495 & 155 & 0.85 & 0.37 & 2.60 & 0.32 & $-$2.57 & 0.19 \\
GruI-032 &W16\tablenotemark{b} & 4270 & 69 & 0.72 & 0.22 & 2.0\ & \nodata & $-$2.69 & 0.10 \\
\hline
GruI-038 & TW & 4660 & 158 & 1.45 & 0.39 & 2.40 & 0.32 & $-$2.50 & 0.24 \\
GruI-038 &W16\tablenotemark{b} & 4532 & 100 & 0.87 & 0.31 & 2.0 & \nodata & $-$2.42 & 0.15 \\
\hline
TriII-40 & TW & 4720 & 175 & 1.35 & 0.42 & 2.48 & 0.34 & $-$2.95 & 0.21 \\
TriII-40 &V17 & 4800 & 50 & 1.80 & 0.06 & 2.7 & 0.2 & $-$2.87 & 0.19 \\
TriII-40 &K17\tablenotemark{c} & 4816 & \nodata & 1.64 & \nodata & 2.51 & \nodata & $-$2.92 & 0.21 \\
TriII-40 &K17\tablenotemark{d} & 4917 & \nodata & 1.89 & \nodata & 1.70 & \nodata & $-$2.78 & 0.11 \\
\hline
TriII-46 & TW & 5150 & 200 & 2.7 & 0.5 & 2.00 & 0.5 & $-$1.96 & 0.28 \\
TriII-46 &V17 & 5050 & 50 & 2.60 & 0.06 & 2.5 & \nodata & $-$2.5 & 0.2 \\
TriII-46 &K17\tablenotemark{d} & 5282 & \nodata & 2.74 & \nodata & 1.50 & \nodata & $-$1.91 & 0.11 \\
TriII-46 &Spec\tablenotemark{e} & 5260 & 240 & 2.7 & 0.6 & 2.5 & 0.5 & $-$2.01 & 0.26 \\
\enddata
\tablerefs{TW = this work; W16 = \citealt{Walker16}; V17 = \citealt{Venn17}; K17 = \citealt{Kirby17}}
\tablenotetext{a}{$\nu_t$ for W16 is always 2 km/s \citep{Lee08a}. $\nu_t$ for DEIMOS data in K17 is set according to the equation $\nu_t = 2.13 - 0.23 \log g$ \citep{Kirby09}}
\tablenotetext{b}{[Fe/H] for W16 stars have a 0.32 dex offset removed; see text}
\tablenotetext{c}{HIRES data}
\tablenotetext{d}{DEIMOS data}
\tablenotetext{e}{Spectroscopic balances in this work using isochrones to determine $\log g$}
\end{deluxetable}
\subsection{Comparison to literature measurements}\label{s:litcomp}
For the two Gru~I stars, \citet{Walker16} determined stellar parameters and metallicities from high-resolution M2FS spectra near the Mg b triplet using a large synthesized grid. The grid fixes $\nu_t = 2.0$ \citep{Lee08a}. \citet{Walker16} increased all their [Fe/H] measurements by $0.32$ dex, which is the offset they obtained from fitting twilight spectra of the Sun. It is not clear that the same offset should be applied for both dwarf stars (like the Sun) and giants. If we remove the offset, our stellar parameters and metallicities are in good agreement (also see \citealt{Ji16d}).
\citet{Venn17} analyzed both stars in Tri~II, and we have combined their previous GRACES data with additional observations\footnote{\citet{Venn17} labeled the stars as Star~40 and Star~46 instead of TriII-40 and TriII-46. We have retained the number but changed the label to TriII for clarity.}.
For TriII-40, we find good agreement for all stellar parameters except $\log g$. This is because we determined our $\log g$ spectroscopically, while \citet{Venn17} did so photometrically using the distance to Tri~II.
Adjusting for the different $\log g$, our abundances for this star agree within $1\sigma$.
For TriII-46, \citet{Venn17} fixed stellar parameters with photometry and used spectral synthesis to measure all abundances.
We measured $\mbox{[Fe/H]} = -2.01 \pm 0.37$, while \citet{Venn17} obtained $\mbox{[Fe/H]} = -2.5 \pm 0.2$.
Our large abundance uncertainty means these are only $1.2 \sigma$ discrepant, but we might expect better agreement given that so much of the data overlaps.
Detailed investigation of the discrepancy shows that 0.3 dex of the difference is due to differences in stellar parameters (mostly $T_{\rm eff}$ and $\nu_t$).
The remaining 0.2 dex is attributable to systematic differences in continuum placement that are individually within $1\sigma$ uncertainties.
Finally, we note that the stellar parameter uncertainties in \citet{Venn17} reflect statistical photometric errors, but could be larger due to systematic uncertainties in photometric calibrations, filter conversions, and reddening maps.
\citet{Kirby17} determined abundances of TriII-40 with a high-resolution, high signal-to-noise Keck/HIRES spectrum.
Our abundances agree within 0.15 dex, except for Cr which is still within $1\sigma$.
\citet{Kirby17} also analyzed the Mg, Ca, Ti, and Fe abundance of TriII-46 by matching a synthetic grid to an $R \sim 7000$ Keck/DEIMOS spectrum.
They measured $\mbox{[Mg/Fe]}=+0.21 \pm 0.28$, $\mbox{[Ca/Fe]}=-0.39 \pm 0.15$, and $\mbox{[Ti/Fe]}=-0.79 \pm 0.76$.
There are some significant discrepancies, especially for Mg.
One possible reason for these differences is that we used stronger blue lines with lower excitation potentials for Mg and Ti, while the synthetic grid is driven by combining multiple higher excitation potential lines that we could not individually measure in our spectrum. This explanation is supported by the fact that our Ca abundances agree better because they are derived from similar spectral features.
\section{Abundance Results}\label{s:abunds}
In Gru~I we measured the abundances of C, Na, Mg, Al, Si, K, Ca, Sc, Ti, Cr, Mn, Fe, Co, Ni, Sr, and Ba.
In Tri~II we were only able to measure Mg, K, Ca, Ti, Cr, Fe, Co, Ni, and Ba due to a combination of lower S/N and the fact that the strongest features of other elements are found at $\lambda<5000${\AA}.
Figures~\ref{f:grid1}, \ref{f:kmg}, and \ref{f:grid2} show the abundances of our four stars compared to other UFDs and a literature sample of halo stars (\citealt{Frebel10}, and \citealt{Roederer14c} for K).
The UFDs are Bootes~I \citep{Feltzing09,Norris10a,Gilmore13,Ishigaki14,Frebel16}, Bootes~II \citep{Ji16b}, CVn~II \citep{Francois16}, Coma Berenices \citep{Frebel10b}, Hercules \citep{Koch08, Koch13}, Hor~I \citep{Nagasawa18}, Leo~IV \citep{Simon10,Francois16}, Reticulum~II \citep{Ji16c,Roederer16b}, Segue~1 \citep{Frebel14}, Segue~2 \citep{Roederer14a}, Tuc~II \citep{Ji16d,Chiti18}, Tuc~III \citep{Hansen17}, and UMa~II \citep{Frebel10b}.
Overall, the two Gru~I stars have the same [Fe/H] to within our abundance uncertainties, and all [X/Fe] ratios are very similar except for Ba.
The metallicities of the Tri~II stars differ by more than $2\sigma$ and display different abundance ratios.
We now discuss each element in more detail.
\startlongtable
\input{abund_final}
\begin{figure*}
\centering
\includegraphics[width=18cm]{lightgrid.pdf}
\caption{Abundance of light elements in Gru~I (red squares) and Tri~II (red triangles) compared to halo stars (gray points) and other UFDs (colored points).
Upper limits are indicated as open symbols with arrows.
The element $X$ is indicated in the top-left corner of each panel.
Tri~II stars are not plotted for C. Limits on Tri~II abundances are above the top axis for Sc and Mn.
Essentially all [X/Fe] ratios in these two galaxies follow trends defined by the Milky Way halo stars and other UFDs. The notable exceptions are the Na and Ni in TriII-40, and the low Mg and Ca in TriII-46.
\label{f:grid1}}
\end{figure*}
\subsection{Carbon}
Spectral synthesis of the $G$-band features at 4313{\AA} and 4323{\AA} was used to measure carbon in the Gru~I stars (using a list from B. Plez 2007, private communication).
The oxygen abundance can affect molecular equilibrium, but since oxygen cannot be measured in these stars we assume $\mbox{[O/Fe]}=0.4$.
Since they are red giant branch stars, some C has been converted to N.
The corrections from \citet{Placco14} were applied to estimate the natal abundances, which are $\mbox{[C/Fe]} = +0.21$ and $+0.57$ for GruI-032 and GruI-038, respectively.
Varying $\log g$ by the uncertainty in Table~\ref{tbl:sp} causes the correction to change by $\pm 0.1$ dex.
Both stars are carbon-normal ($\mbox{[C/Fe]} < 0.7$) even after this carbon correction. Note that the uncorrected carbon abundances are used in Figure~\ref{f:grid1} and Table~\ref{tbl:abunds}.
We were unable to place any constraints on carbon in Tri~II. The GRACES spectra are not usable below 4800{\AA}, so the $G$-band cannot be measured.
The CH lists from \citet{Masseron14,Kurucz11} do suggest that strong CH features should exist at 5893{\AA} and 8400{\AA}, which were used to place a [C/Fe] upper limit by \citet{Venn17}; however, we could not find these features in several carbon-enhanced metal-poor stars or in atlas spectra of the Sun and Arcturus \citep{Hinkle03}\footnote{\url{ftp://ftp.noao.edu/catalogs/arcturusatlas}}.
No other C features are available.
\citet{Kirby17} were able to measure $\mbox{[C/Fe]} \sim -0.1$ for TriII-40 from their HIRES spectrum, so this star is not carbon enhanced.
\subsection{$\alpha$-elements: Mg, Si, Ca, Ti}
The abundances of these four $\alpha$-elements are determined from equivalent widths.
The magnesium abundance is determined from $3-6$ lines, but always using two of the Mg b lines.
Silicon can only be measured in the Gru~I stars, using the 4102{\AA} line that is in the wing of H$\delta$. The abundance uncertainty from only this single line is quite large, ${\approx}0.6$ dex.
Neutral calcium is well-determined by a large number of lines, and it should be considered the most reliable $\alpha$-element.
Titanium has several strong lines in both the neutral and singly ionized state, though only a handful ($1-5$) of Ti lines can be measured in the Tri~II stars.
The abundance of Ti\,I is affected by NLTE effects \citep[e.g.,][]{Mashonkina17}, so we only plot Ti\,II abundances in Figure~\ref{f:grid1} both to avoid NLTE effects and because a Ti\,II line can be measured in all four of our stars. The literature sample also uses Ti\,II whenever possible.
\subsection{Odd-Z elements: Na, Al, K, Sc}
Sodium is measured from the Na D lines for GruI-032, GruI-038, and TriII-40.
While we can identify the presence of Na D lines in TriII-46, the lines are too noisy for a reliable abundance measurement. An upper limit $\mbox{[Na/Fe]} < 1.04$ is found from the subordinate Na lines near 8190\,{\AA}, and for completeness we include the best estimate of equivalent widths for the Na D lines in Table~\ref{tbl:eqw}.
NLTE corrections are not applied since most stars in the literature comparison sample do not have these corrections, but the grid from \citet{Lind11} gives corrections of $-0.28$ for GruI-032, $-0.32$ for GruI-038, and $-0.06$ for TriII-40.
The two Gru~I stars have solar ratios of Na, following the usual halo trend.
In contrast, TriII-40 has significantly subsolar $\mbox{[Na/Fe]} =-0.79 \pm 0.22$ that is an outlier from the halo trend, as first reported by \citet{Venn17}.
A similarly low [Na/Fe] ratio has previously been seen in one of three stars in the UFD Coma Berenices \citep{Frebel10b}. The primordial (first generation) population of stars in globular clusters also has low Na, but all with $\mbox{[Na/Fe]} > -0.5$ \citep{Gratton12}.
Aluminum and scandium are only measured in the Gru~I stars.
Al is determined from a single line at 3961{\AA}. Given the low S/N in this region, Al is the least certain abundance of all elements measured here. The measurement is consistent with that of other halo stars at $\mbox{[Fe/H]} \approx -2.5$, but it is not a meaningful constraint.
Sc lines in Gru~I are synthesized due to hyperfine structure \citep{Kurucz95}, and the abundances are also similar to other halo stars.
For completeness, we place Sc upper limits in the Tri~II stars with some weak red lines that provide no interesting constraint.
\begin{figure*}
\centering
\includegraphics[width=18cm]{spec_k.pdf}
\caption{Spectrum around K lines for our four stars.
Black lines are the data, solid red lines indicate synthesis fit, dotted red lines indicate uncertainty.
In the third row (TriII-40), dark blue lines indicate data from Dec 2015, while cyan lines indicate data from Sep 2016, showing how the location of telluric absorption shifts relative to the K line. The 7699 line is cleanly detected in Sep 2016 (telluric lines are at 7699.5 {\AA} and 7701.8 {\AA}). The same abundance is synthesized at the expected strength of the 7665 line, but we do not use that line because it is too blended with telluric absorption.
In the fourth row (TriII-46), the dashed red line is the K upper limit, and the dashed blue line indicates [K/Fe]=1.
\label{f:kspec}}
\end{figure*}
Potassium has two strong lines at 7665\,{\AA} and 7699\,{\AA}. These lines are located near several telluric absorption features.
Figure~\ref{f:kspec} shows these two lines and the best-fit synthetic spectrum or upper limits.
The top two spectra are of the Gru~I stars, whose observations were conducted within the span of one month, so the telluric features do not move much due to heliocentric corrections. Both Gru~I stars have K lines that are easily distinguished from the telluric features.
The bottom two spectra of Figure~\ref{f:kspec} are Tri~II observations, which were conducted in Dec 2015 and Sep 2016.
The heliocentric correction differs between these epochs by ${\sim}40$ km/s, so the telluric features shift by ${\approx}1$\,{\AA} between 2015 and 2016.
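For reference, this shift is simply the Doppler displacement of the stellar lines relative to the (fixed) telluric features:
\[
\Delta\lambda = \lambda\,\frac{\Delta v}{c}
\approx 7699\,\mbox{\AA} \times \frac{40~\mathrm{km\,s^{-1}}}{3\times 10^{5}~\mathrm{km\,s^{-1}}}
\approx 1.0\,\mbox{\AA}.
\]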
We emphasize this for TriII-40 by showing individual frames from Dec 2015 (thin blue lines) and Sep 2016 (thin cyan lines).
Note that we used \texttt{scombine} in IRAF with \texttt{avsigclip} rejection to obtain the coadded black spectra, so the coadd tends to follow the telluric lines from Sep 2016 (four exposures) rather than Dec 2015 (two exposures).
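To make the effect of the unequal epoch weighting explicit, the following minimal sketch (in Python with \texttt{numpy}) illustrates average-sigma-clipped combination; it is a rough analogue of, not a reimplementation of, IRAF's \texttt{avsigclip}, and the frame counts simply mirror our four Sep 2016 and two Dec 2015 exposures. Pixels where the minority epoch deviates strongly (e.g., shifted telluric absorption) tend to be rejected, so the coadd follows the majority epoch.
\begin{verbatim}
import numpy as np

def avsigclip_combine(frames, nlo=3.0, nhi=3.0, iters=3):
    # frames: array-like of shape (n_frames, n_pixels), spectra on a
    # common wavelength grid (e.g., 4 Sep 2016 + 2 Dec 2015 exposures).
    data = np.asarray(frames, dtype=float)
    mask = np.zeros(data.shape, dtype=bool)
    for _ in range(iters):
        m = np.ma.array(data, mask=mask)
        mean = m.mean(axis=0).filled(np.nan)
        sigma = m.std(axis=0).filled(np.inf)
        resid = data - mean
        # Reject pixels deviating by more than nlo/nhi sigma.
        mask |= (resid < -nlo * sigma) | (resid > nhi * sigma)
    return np.ma.array(data, mask=mask).mean(axis=0).filled(np.nan)
\end{verbatim}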
The 7699\,{\AA} line is detected in TriII-40. It is significantly blended with a telluric line in the Dec 2015 observations (see \citealt{Venn17} figure 4, dark blue lines here), but cleanly separated in the Sep 2016 observations.
We find [K/Fe] = 0.8 in TriII-40, in agreement with the measurement by \citet{Venn17}.
The 7665\,{\AA} line is severely blended with telluric lines in both epochs, so we do not use it but just highlight its position in Figure~\ref{f:kspec} with a synthesized K line.
Neither K line is detected for TriII-46, and an upper limit $\mbox{[K/Fe]} < 0.77$ is set with the 7699\,{\AA} line.
We could not account for the telluric lines when setting this upper limit, but this makes the limit more conservative.
NLTE corrections have not been applied, but they can be large (as high as $-0.4$ dex for the most K-enhanced stars in LTE, \citealt{Andrievsky10}).
\begin{figure}
\centering
\includegraphics[width=8cm]{KMg.pdf}
\caption{K and Mg abundances of stars in the Tri~II and Gru~I from this work, compared to K and Mg in the stellar halo \citep{Roederer14c}, NGC2419 \citep{Mucciarelli12} and other UFDs (Boo~II, \citealt{Ji16a}; Ret~II, \citealt{Ji16c}; Tuc~II, \citealt{Ji16d}; Segue~2, \citealt{Roederer14a}; and Tuc~III, \citealt{Hansen17}).
The K abundance of TriII-46 is not enhanced, so Tri~II does not follow the strange K-Mg anticorrelation in NGC2419.
Note that the halo sample here is different than in Figures~\ref{f:grid1} and \ref{f:grid2} because our usual halo compilation does not have K abundances \citep{Frebel10}.
Adapted from \citet{Venn17}.
\label{f:kmg}}
\end{figure}
\subsection{Iron-peak elements: Cr, Mn, Co, Ni, Zn}
The Fe-peak abundances were determined with equivalent widths, except for Mn, which is synthesized due to hyperfine structure \citep{Kurucz95}.
In Gru~I we can constrain Cr, Mn, Co, and Ni, finding that both stars have essentially identical abundances of these elements.
Though the Cr and Mn abundances are similar to those in metal-poor stars in other UFDs or in the Milky Way halo, the Co and Ni abundances are somewhat higher.
However, this difference is not very significant, especially for Co which is derived from only a few bluer lines.
One Zn line is marginally detected in GruI-032 with an abundance consistent to the halo trend, though with large uncertainty.
In Tri~II, we can detect Cr and Ni in TriII-40 and provide upper limits in TriII-46. Mn, Co, and Zn are unconstrained as they only have strong lines blueward of 5000{\AA}.
The upper limits for Cr and Ni in TriII-46 are uninteresting.
For TriII-40, we detect a normal [Cr/Fe] ratio, but Ni appears significantly enhanced ($\mbox{[Ni/Fe]} = 0.57 \pm 0.16$), in agreement with \citet{Kirby17} and \citet{Venn17}.
\subsection{Neutron-capture elements: Sr, Ba, Eu}
Strontium is detected only in Gru~I, as the strong Sr\,II lines at 4077{\AA} and 4215{\AA} are out of the range of the Tri~II (GRACES) spectra. The abundance of both lines is determined with spectrum synthesis.
The Sr abundances in these two stars are very similar, $\mbox{[Sr/Fe]} \approx -2$, which is much lower than what is found in most halo stars but similar to most UFDs (Figure~\ref{f:grid2}).
Barium is measured with four different lines in the Gru~I stars including hyperfine structure and isotope splitting \citep{McWilliam98}.
We use solar isotope ratios \citep{Sneden08}, but given the low overall abundance, changing this to $r$- or $s$-process ratios does not significantly affect our abundances.
GruI-032 has a low $\mbox{[Ba/Fe]} \approx -1.6$, but GruI-038 has a much higher Ba abundance $\mbox{[Ba/Fe]} \approx -1.0$.
This is formally only 1.6$\sigma$ different, but differential comparison of the line strengths (e.g., the 6497{\AA} line in Figure~\ref{f:spec}) suggests that the difference is real.
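The quoted significance follows from standard propagation of the two measurement uncertainties; the per-star errors below are representative values chosen to reproduce the quoted $1.6\sigma$, not the exact entries of our abundance tables:
\[
\frac{|\Delta\mbox{[Ba/Fe]}|}{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}
\approx \frac{0.6}{\sqrt{0.26^{2}+0.26^{2}}} \approx 1.6.
\]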
We discuss this more in Section~\ref{s:discoutlier}, but both Ba abundances are low and similar to those in most UFDs.
Ba is not detected in either Tri~II star, so we instead place $5\sigma$ upper limits. The Ba limit for TriII-40 is $\mbox{[Ba/Fe]} < -1.25$, suggesting a low Ba abundance similar to other UFDs.
\citet{Kirby17} determined $\mbox{[Sr/Fe]}=-1.5$ and $\mbox{[Ba/Fe]}=-2.4$ from their HIRES spectrum of this star, consistent with our upper limit and showing TriII-40 clearly has very low neutron-capture element abundances.
The Ba limit for TriII-46 is only $\mbox{[Ba/Fe]} \lesssim -0.2$, but this is still at the lower envelope of the halo trend (Figure~\ref{f:grid2}).
Eu is not detected in any of these four stars, as expected given the low Sr and Ba abundances. Upper limits are placed from the 4129{\AA} line for the Gru~I stars (MIKE data) and from the 6645{\AA} line for the Tri~II stars (GRACES data).
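For context on how such limits scale with data quality, equivalent-width detection thresholds are commonly estimated with the classical Cayrel (1988) formula (we quote it here only as a guide; the precise procedure can differ),
\[
\sigma_{EW} \approx \frac{1.5}{S/N}\,\sqrt{\mathrm{FWHM}\times\delta x},
\]
where $\delta x$ is the pixel width and $S/N$ the signal-to-noise ratio per pixel; a $5\sigma$ limit then corresponds to $EW < 5\,\sigma_{EW}$.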
\begin{figure*}
\centering
\includegraphics[width=18cm]{ncap_1x3_v2.pdf}
\caption{Abundances of Sr and Ba in UFDs compared to halo stars.
Symbols are as in Figure~\ref{f:grid1}.
The left two panels show the abundance trend with respect to [Fe/H].
Note that there is no constraint on Sr for Tri~II stars.
The rightmost panel shows that most halo stars cluster near $\mbox{[Sr,Ba/Fe]} \approx 0$, but most UFDs are clearly offset to lower Sr and Ba.
\label{f:grid2}}
\end{figure*}
\section{Discussion}
\label{s:discussion}
\subsection{Abundance Anomalies}\label{s:discoutlier}
The abundance ratios of the two stars in Gru~I are nearly identical to each other, and similar to typical UFD stars at $\mbox{[Fe/H]} \approx -2.5$. The most notable exception is the Ba abundance, where GruI-038 has 0.6 dex higher [Ba/Fe] than GruI-032. After applying corrections from \citet{Placco14}, GruI-038 also has a higher carbon abundance than GruI-032 ($\mbox{[C/Fe]} = +0.57$ vs $+0.21$, respectively).
Both differences are of somewhat low significance, and it is reasonable to consider these two stars chemically identical.
However if the differences are real, one possible explanation is that GruI-038 formed from gas that had been polluted by more AGB stars compared to GruI-032.
A lower mass ($1-4 M_\odot$) AGB star could add significant Ba and C without changing the Sr abundance too much \citep[e.g.,][]{Lugaro12}.
Since AGB winds are low velocity, their C and Ba production would be more inhomogeneously distributed in the star-forming gas of a UFD progenitor \citep[e.g.,][]{Emerick18}.
However, many more stars in Gru~I would be needed to test this scenario.
We confirm the result from \citet{Venn17} that TriII-46, the more Fe-rich star in Tri~II, has very low [Mg/Fe] and [Ca/Fe] ratios.
The standard interpretation is that TriII-46 must have formed after significant enrichment by Type~Ia supernovae, and this star does follow the decreasing [$\alpha$/Fe] trend of other stars in Tri~II \citep{Kirby17}.
Indeed, most other UFDs show a similar downturn in [$\alpha$/Fe] ratios as [Fe/H] increases \citep{Vargas13}, though Horologium~I is unique in that all known stars in the system have low [$\alpha$/Fe] \citep{Nagasawa18}, and Segue~1 is unique in that it shows no downturn in $\alpha$-elements at high [Fe/H] \citep{Frebel14}.
It is actually somewhat surprising that the very low-luminosity Tri~II appears to have formed stars long enough to be enriched by Type~Ia supernovae, since its luminosity is very similar to Segue~1.
If Tri~II were significantly tidally stripped by now \citep{Kirby15,Kirby17,Martin16} this would help reconcile enrichment by Type~Ia supernovae with the small present-day luminosity.
However, the orbital pericenter of Tri~II is 20 kpc, where tidal effects are not too strong \citep{Simon18}; and there are no visible signs of tidal disruption in deep imaging \citep{Carlin17}.
An alternate explanation could be the presence of very prompt Type~Ia supernovae \citep[e.g.,][]{Mannucci06}.
If this is the case, it may have implications for the single-degenerate vs. double-degenerate debate of Type~Ia supernova progenitors.
Short detonation delay times (${\sim}100$s of Myr) are a common feature of double-degenerate models, and are less common (though still possible) in single-degenerate models \citep[e.g.,][]{Maoz14}.
One way to distinguish these models in Tri~II would be to examine Fe-peak elements like Mn, Co, and Ni \citep[see][]{McWilliam18}; but these elements are unavailable in our GRACES spectra.
\citet{Venn17} first noticed that the K and Mg abundances in Tri~II could match the unusual globular cluster NGC2419, which displays a K-Mg anticorrelation of unknown origin \citep{Cohen12,Mucciarelli12}.
If so, then TriII-46 should have very high $1 < \mbox{[K/Fe]} < 2$ (Figure~\ref{f:kmg}).
Our new limit of $\mbox{[K/Fe]} \lesssim 0.8$ in TriII-46 suggests that Tri~II probably does not display the same K-Mg anticorrelation as NGC2419.
[K/Fe] is often enhanced in LTE, both for UFD stars and halo stars \citep{Roederer14c}.
NLTE effects tend to amplify the strengths of the resonance lines for K-enhanced stars, so they likely contribute to the apparent overabundance of K in these stars \citep{Andrievsky10}.
We also confirm results from \citet{Kirby17} and \citet{Venn17} that TriII-40 has very low $\mbox{[Na/Fe]} = -0.79 \pm 0.22$ and somewhat high $\mbox{[Ni/Fe]} = 0.57 \pm 0.16$.
This star has $\mbox{[Fe/H]} \sim -3$ and enhanced $\alpha$-elements, so we would nominally expect its abundance ratios to predominantly reflect the yields of metal-poor core-collapse supernovae (CCSNe).
It is somewhat counterintuitive to find enhanced Ni and depressed Na in a CCSN, as the production of both elements is positively correlated with the neutron excess in a supernova \citep[e.g.,][]{Venn04,Nomoto13}.
However, this appears to break down at the lowest metallicities, and the online \emph{Starfit} tool\footnote{\url{http://starfit.org/}} finds that a Pop\,III supernova progenitor (11.3 $M_\odot$, $E=3 \times 10^{51}$\,erg, from the supernova yield grid of \citealt{Heger10}) provides a decent fit to the Mg, Ca, Ti, Fe, and Ni abundances, predicting $\mbox{[Na/Fe]} \approx -1.0$ and $\mbox{[Ni/Fe]} \approx +0.2$.
An alternate possibility is that this $\mbox{[Fe/H]} \sim -3$ star formed from gas already affected by Type~Ia supernovae, as Chandrasekhar-mass explosions can produce high [Ni/Fe] \citep[e.g.,][]{Fink14} while reducing [Na/Fe] by adding iron. It seems very unlikely that a white dwarf could form and explode so early in this galaxy's history, but age and metallicity may be decoupled at early times due to inhomogeneous metal mixing \citep[e.g.,][]{Frebel12,Leaman12,Nomoto13}. A very prompt population of Type~Ia supernovae with merging delay times as low as 30 Myr could also exist \citep{Mannucci06}.
We note that the Na and Ni lines in our spectrum of TriII-46 are very noisy and cannot provide a reliable abundance, but the best-fit abundance estimates (Table~\ref{tbl:eqw}) do suggest this star also has low [Na/Fe] and enhanced [Ni/Fe].
\subsection{Classification as dwarf galaxy or globular cluster} \label{s:discclassify}
In this paper, we consider three criteria that can be used to classify Tri~II and Gru~I as either ultra-faint dwarf galaxies or globular clusters:
\begin{enumerate}
\item a velocity dispersion indicating the presence of dark matter,
\item an [Fe/H] spread implying the ability to form multiple generations of stars despite supernova feedback, or significant internal mixing, and
\item unusually low neutron-capture element abundances compared to halo stars.
\end{enumerate}
The first two criteria were codified by \citet{Willman12} and imply that the stellar system is the result of extended star formation in a dark matter halo.
The third criterion is based on previous studies of UFDs confirmed by the other two criteria \citep[e.g.,][]{Frebel10b, Frebel14, Frebel15, Simon10, Koch13}, and it has recently been used as a way to distinguish UFD stars from other stars \citep[e.g.,][]{Kirby17,Casey17,Roederer17}.
Unlike the first two criteria, this is a criterion specifically for the lowest mass galaxies, rather than defining galaxies in general.
Note that violating the criterion also does not preclude an object from being a UFD, as is evident from the $r$-process outliers Ret~II and Tuc~III that experienced rare $r$-process enrichment events. However, when multiple stars are observed in the same UFD, the majority of stars do tend to have similar neutron-capture element abundances.
We discuss possible explanations for criterion (3) in Section~\ref{s:discwhy}, but first accept it as an empirical criterion.
\subsubsection{Triangulum~II}
The case of Tri~II was already extensively discussed by \citet{Kirby15,Kirby17,Martin16,Venn17,Carlin17}, generally finding that it is most likely a UFD rather than a star cluster.
Our high-resolution abundance results are consistent with the discussion in \citet{Venn17} and \citet{Kirby17}, namely that we find a difference in [Fe/H] between these two stars at about $2\sigma$ significance, and TriII-46 has lower [$\alpha$/Fe] ratios compared to TriII-40.
\citet{Kirby17} previously found very low Sr and Ba abundances in TriII-40, and our Ba limit on TriII-46 is consistent with overall low neutron-capture element abundances in Tri~II (though additional data is needed to confirm that TriII-46 is well below the halo scatter).
Tri~II thus likely satisfies criteria (2) and (3), though it is unclear if it satisfies criterion (1) (see \citealt{Kirby17}, figure~2).
Our main additional contribution here is a more stringent upper limit on K in TriII-46
(Figure~\ref{f:kmg}) as discussed above in Section~\ref{s:discoutlier}, which shows Tri~II does not have the abundance signature found in the globular cluster NGC2419.
\subsubsection{Grus~I}
\citet{Walker16} identified seven probable members in Gru~I. This sample was insufficient to resolve either a velocity dispersion or metallicity dispersion.
Our high-resolution followup of two stars has found that those stars have indistinguishable [Fe/H].
Thus, Gru~I does not currently satisfy criteria (1) or (2) to be considered a galaxy.
However, we have found that the neutron-capture element abundances in Gru~I are both low and similar to those of other UFDs, satisfying criterion (3). Gru~I thus appears most likely to be a UFD, and we expect that further spectroscopic study of Gru~I will reveal both metallicity and velocity dispersions.
We note that the velocity difference in our two Gru~I stars alone does already suggest a potentially significant velocity dispersion.
The mean metallicity determined by \citet{Walker16} for Gru~I is $\mbox{[Fe/H]} \sim -1.4 \pm 0.4$, which places it far from the luminosity--metallicity trend of other dSph galaxies, whereas globular clusters do not follow such a relationship.
However, the two brightest stars, analyzed here, both have $\mbox{[Fe/H]} \sim -2.5$ that would be consistent with the mean trend.
Only ${\sim}0.3$\,dex of the difference can be attributed to their metallicity zero-point offset (see Section~\ref{s:litcomp}). The rest of the discrepancy is due to the fact that \citet{Walker16} found their other five members of Gru~I to have a much higher [Fe/H] than these two stars, ranging from [Fe/H] = $-2$ to $-1$.
Those five stars are over 1 mag fainter than our two stars and currently out of reach for high-resolution spectroscopic abundances, so we cannot test the true metallicity of Gru~I with our data.
However, those stars also have very low S/N, and inferred effective temperatures that are much higher than expected based on photometry alone.
We thus suggest the metallicity of Gru~I is probably closer to the value measured from our two stars.
Recently, \citet{Jerjen18} published deep photometry of Gru~I, with isochrone-based metallicities of $\mbox{[Fe/H]} = -2.5 \pm 0.3$.
\subsection{Why do most UFDs have low neutron-capture element abundances?}\label{s:discwhy}
\begin{figure*}
\includegraphics[width=18cm]{cldw_ncapfe_allsame_2x2.pdf}
\caption{Neutron-capture element abundances for UFDs (yellow diamonds; separating Boo~I as red diamonds; and Ret~II and Tuc~III as large dark red stars), classical dSphs (blue and orange symbols), globular clusters (large purple circles), and halo stars (grey points).
Classical dSph stars come from \citealt{Aoki09,Cohen09,Cohen10,Frebel10a,Fulbright04,Geisler05,JHansen18,Jablonka15,Kirby12,Norris17b,Shetrone01,Shetrone03,Simon15Scl,Skuladottir15,Tafelmeyer10,Tsujimoto15a,Tsujimoto17,Ural15,Venn12}.
\label{f:ncapcomp}}
\end{figure*}
\begin{figure}
\includegraphics[width=8.5cm]{dsph_Mstar_Mdyn_ncap.pdf}
\caption{Absolute $V$ magnitude vs dynamical mass within half light radius for dSphs with neutron-capture element constraints.
Galaxies are color-coded according to their [Sr/Fe] and [Ba/Fe] abundance at $-3.5 \lesssim \mbox{[Fe/H]} \lesssim -2.5$.
Yellow points have both low Sr and Ba, orange points have low Sr but regular Ba, blue points have regular Sr and Ba.
For comparison, we also show globular clusters in the \citet{Pritzl05} sample with $\mbox{[Fe/H]} \lesssim -2$.
The dynamical data and luminosity for dwarf galaxies come from \citet{Munoz18}, supplemented by \citealt{Majewski03,Bechtol15}.
Velocity dispersions are from \citealt{Bellazzini08,Kirby13a,Kirby17,Koch09,Koposov11,Simon07,Simon11,Simon15,Simon17,Simon19,Walker09c,Walker09,Walker16}.
$M_{\rm dyn}$ is computed with the equation in \citet{Walker09}.
Globular cluster data are from \citet{Harris10}.
\label{f:dsphbyncap}}
\end{figure}
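For reference, the \citet{Walker09} estimator used in Figure~\ref{f:dsphbyncap} evaluates the mass enclosed within the half-light radius from the velocity dispersion,
\[
M_{\rm dyn}(\leq r_{1/2}) \simeq \mu\,r_{1/2}\,\sigma_{V}^{2},
\qquad \mu = 580~M_{\odot}\,{\rm pc}^{-1}\,{\rm km}^{-2}\,{\rm s}^{2},
\]
which is equivalent to $M_{\rm dyn} \simeq 2.5\,\sigma_{V}^{2}\,r_{1/2}/G$.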
Figure~\ref{f:ncapcomp} shows the neutron-capture element abundances of UFD stars relative to halo stars and classical dSph stars.
Excluding Ret~II and Tuc~III, it is clear that UFDs have low neutron-capture element abundances relative to these other populations in both Sr and Ba, with the offset most apparent in Sr.
The astrophysical origin (or origins) of these low-but-nonzero neutron-capture element abundances is still an open question (see Section~\ref{s:discncap}).
However, the abundance signature of this (or these) low-yield site(s) is usually hidden in more metal-rich stars.
This is clearly seen by examining the classical dSph galaxies, which are somewhat more evolved than UFDs due to their higher mass.
In Sculptor, we can see a ${>}1$ dex rise in [Sr/Fe] from UFD levels to typical halo star levels occurring at very low metallicity $-4 < \mbox{[Fe/H]} < -3.3$; while a rise in [Ba/Fe] occurs later, at $\mbox{[Fe/H]} \sim -2.5$ \citep[also see][]{Jablonka15,Mashonkina17}.
Similar trends exist for Sagittarius, Sextans, and Ursa Minor.
We highlight Draco and Carina separately, as their stars' [Sr/Fe] ratios stay as low as in UFDs until $\mbox{[Fe/H]} \gtrsim -2.5$, but unlike UFDs their [Ba/Fe] ratios rise with [Fe/H].
The UFD Boo~I is similar to Draco and Carina and unlike most UFDs in this sense, as well.
The rise in Sr and Ba suggests the delayed onset of different, more prolific, sources of neutron-capture elements, presumably some combination of AGB stars and neutron star mergers.
These higher-yield later-onset sources of Sr and Ba will eventually dominate total Sr and Ba production.
Overall, it seems that larger galaxies manage to reach a ``normal'' halo-like neutron-capture element abundance at lower [Fe/H] than smaller galaxies, implying that they can be enriched by those dominant sources of Sr and Ba \citep[also see][]{Tafelmeyer10,Jablonka15}.
The question of why UFDs have low neutron-capture element abundances thus boils down to why these high-yield sources of neutron-capture elements do not contribute metals to most UFDs while they are forming stars.
We can imagine three possible reasons:
\begin{enumerate}
\item UFDs do not form enough stars to fully sample all metal yields from a stellar population.
If the dominant sources of Sr and Ba occur only rarely or stochastically, they will only occasionally enrich a given UFD, so most UFDs would have low [Sr,Ba/Fe] \citep[e.g.,][]{Koch08,Koch13,Simon10,Venn12,Venn17,Ji16b}.
\item UFDs form in small potential wells, so they do not retain metals very well \citep[e.g.,][]{Kirby11outflow,Venn12}. If the dominant sources of Sr and Ba are lost with higher efficiency in UFDs (relative to iron), this would result in low [Sr,Ba/Fe].
\item UFDs form stars for only a short time. If the dominant neutron-capture element sources have long delay times (e.g., neutron star mergers or AGBs), these sources may only produce metals after UFDs have finished forming stars. Then, surviving UFD stars would not preserve the metals from those sources.
\end{enumerate}
We note that Sr and Ba appear to have differing trends, so the explanations for Sr and Ba may differ as well.
As one attempt to distinguish between these possibilities, we consider whether there are correlations with stellar mass or current dynamical mass.
Figure~\ref{f:dsphbyncap} shows the absolute magnitude and inferred dynamical mass within the half light radius for several classical dSphs and UFDs.
The yellow points are UFDs that have low [Sr/Fe] and [Ba/Fe].
Blue points are classical dSphs (UMi, Sex, Scl, Sgr) that have regular Sr and Ba trends.
In orange we highlight Boo~I, Carina, and Draco, which have low Sr at $\mbox{[Fe/H]} \sim -3$, but Ba behavior similar to the more massive dSphs.
We also note that Draco, UMi, and all UFDs have CMDs indicating purely old stellar populations ($>10-12$\,Gyr old), while more luminous dSphs (Carina and above) show evidence for some late time star formation \citep{Weisz14a,Brown14}.
There is a broad transition in neutron-capture element content occurring somewhere between $-6 > M_V > -10$ and $10^6 < M_{\rm dyn}/M_\odot < 10^7$, also roughly corresponding to the purely old dSphs.
Unfortunately, given the strong correlations between luminosity, dynamical mass, and overall age in this sample, it is hard to distinguish between the three reasons listed above for low neutron-capture elements in UFDs.
Explanation (2) is somewhat disfavored if one accepts two stronger assumptions.
First, $M_{\rm dyn}(<r_{1/2})$ is not a good measure of the total halo mass, because the half light radius is only a tiny fraction of the overall halo size. Correcting for this requires extrapolating an assumed density profile to larger radii, but such extrapolations imply that UFDs and even some of the larger dSphs may all reside in dark halos of similar mass \citep{Strigari08}. A similar dark halo mass is also expected from a stellar-mass-to-halo-mass relation with large intrinsic scatter \citep[e.g.,][]{Jethwa18}.
Second, one must assume that $z=0$ halo masses are highly correlated with halo masses at the time of star formation. This is true on average in $\Lambda$CDM, but it breaks down in specific cases due to scatter in halo growth histories \citep[e.g.,][]{Torrey15} and tidal stripping from different subhalo infall times \citep[e.g.,][]{Dooley14}.
Together, these two assumptions would imply that neutron-capture element behavior is uncorrelated with halo mass, disfavoring explanation (2).
Furthermore, comparison to classical dSphs suggests the short star formation timescale (3) is unlikely for Sr: more massive dSphs like Scl and Sgr are much more efficient at forming stars, but they are already Sr-enriched at $\mbox{[Fe/H]} \sim -3$.
It may thus be the case that explanation (1) is the most likely one for Sr, i.e. that the dominant source of Sr is stochastically produced.
However, explanations (1) and (3) both remain viable for Ba; and explanation (2) remains for both Sr and Ba as well if the two stronger assumptions do not hold.
\subsection{Comparison to globular clusters}\label{s:gc}
Globular clusters (GCs) have very different neutron-capture element abundances than UFDs.
Figure~\ref{f:ncapcomp} shows the mean abundances of GCs as purple circles (compiled in \citealt{Pritzl05}\footnote{We have removed NGC 5897, NGC 6352, and NGC 6362 from this compilation, which were outliers in [Ba/Fe]. These three GCs were all observed by \citet{Gratton87} and scaled to a common $\log gf$ scale by \citet{Pritzl05}. However, the abundances derived by \citet{Gratton87} appear to conflict with the $\log gf$, and we suspect a typographical error for $\log gf$. We confirm this in NGC 5897 with more recent measurements by \citet{Koch14c}.}).
Sr is usually not measured in GCs, so we also show Y (which has nucleosynthetic origins similar to those of Sr).
It is immediately obvious that all neutron-capture elements in globular clusters closely trace the overall halo trend, as well as more metal-rich stars in classical dSphs.
In contrast, UFDs tend to lie at the extremes of the halo trend.
The origin of globular clusters is unknown, but one class of theories posits that metal-poor GCs form as the dominant stellar component of a small dark matter halo, rather than as a part of a larger galaxy \citep[e.g.,][and references therein]{Forbes18}.
Such theories usually have GCs form in the \emph{same} dark matter halos as UFDs (i.e., ${\sim}10^8 M_\odot$ dark matter halos that experience atomic line cooling), but something (e.g., a gas-rich merger) triggers them to become GCs instead of UFDs \citep[e.g.,][]{Griffen10,Trenti15,Ricotti16,Creasey18}.
However, if GCs do form in these small atomic cooling halos, their neutron-capture element enrichment should match that of UFDs, i.e. be very low, or at least show significant GC-to-GC scatter\footnote{At least one metal-poor globular cluster, M15, does show a significant \emph{internal} dispersion in neutron-capture element abundances ($>0.6$\,dex; \citealt{Sneden97}).
Some other GCs might also display such a dispersion, though it is much smaller ($0.3$\,dex) and could be due to systematic effects \citep{Roederer11b,Cohen11b,Roederer15}.
Either way, this dispersion is not enough to match the neutron-capture element deficiency seen in most UFD stars with $\mbox{[Fe/H]} \gtrsim -2.5$.}.
The difference in neutron-capture element abundances thus seems to imply that the known metal-poor GCs in the Milky Way formed as part of larger galaxies (e.g., \citealt{BoylanKolchin17}), rather than in their own dark matter halos.
Note that the neutron-capture element abundances are not affected by the multiple abundance populations usually discussed in globular clusters \citep[e.g.,][]{Gratton04,Roederer11b}.
Those variations in lighter abundances are due to an internal mechanism, rather than tracing the natal abundance of the gas the GCs formed from \citep[see e.g.,][and references therein]{Gratton12,Bastian18}.
\subsection{On the origin of the ubiquitous neutron-capture element floor} \label{s:discncap}
We briefly discuss the most viable candidates for this ubiquitous low-yield neutron-capture element source occurring at low metallicity.
This is important not just for understanding UFD enrichment, but also for the most metal-poor halo stars, where Sr and/or Ba appear to be ubiquitously present at the level $\mbox{[Sr,Ba/H]} \sim -6$
\citep{Roederer13}\footnote{To our knowledge, the only star with limits below this threshold is a star with no detected Fe, SMSS 0313$-$6708, with extremely low limits $\mbox{[Sr/H]} < -6.7$ and $\mbox{[Ba/H]} < -6.1$ \citep{Keller14}.}.
The sources must explain the ubiquitous presence of both Sr and Ba, the overall low but nonzero yield of both Sr and Ba, and the fact that the [Sr/Ba] ratio in UFDs varies over ${\sim}2$ dex.
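Throughout, abundance ratios use the standard bracket notation,
\[
\mbox{[X/Y]} \equiv \log_{10}\!\left(\frac{N_{\rm X}}{N_{\rm Y}}\right)_{\star}
- \log_{10}\!\left(\frac{N_{\rm X}}{N_{\rm Y}}\right)_{\odot},
\]
so $\mbox{[Sr/H]} \sim -6$ corresponds to a Sr-to-H number ratio of about $10^{-6}$ times the solar value.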
\emph{Neutrino-driven wind.}
The high-entropy neutrino-driven wind in CCSNe was initially thought to be a promising site for Sr and Ba production in the $r$-process \citep[e.g.,][]{Woosley92}, but contemporary simulations suggest wind entropies an order of magnitude too low to produce the full set of $r$-process elements up to uranium \citep[e.g.,][]{Arcones07}.
It still seems that this mechanism robustly produces a limited form of the $r$-process that always synthesizes Sr, but produces even a small amount of Ba only under extreme conditions (e.g., a neutron star mass ${>}2\,M_\odot$; \citealt{Wanajo13}).
Supporting this, \citet{Mashonkina17} recently argued for two types of Sr production, one of which was highly correlated with Mg, implying CCSNe could produce Sr alone.
However, current models suggest that even extreme neutrino-driven winds cannot produce $\mbox{[Sr/Ba]} \sim 0$ \citep{Arcones11,Wanajo13}, so while they may be an important factor they probably are not the only source of neutron-capture elements in most UFDs.
\emph{Magnetorotationally driven jets.}
A dying massive star with extremely strong magnetic fields and fast rotation speeds can launch a neutron-rich jet that synthesizes copious Sr and Ba in the $r$-process \citep[e.g.,][]{Winteler12,Nishimura15}.
It is still debated whether such extreme conditions can be physically achieved in massive star evolution \citep[e.g.,][]{Rembiasz16a,Rembiasz16b, Mosta17}.
However, if the conditions are less extreme, such supernovae can actually produce both Sr and Ba in a delayed jet without synthesizing the heaviest $r$-process elements \citep{Nishimura15,Nishimura17,Mosta17}.
These more moderate rotation speeds and magnetic fields may be more plausible outcomes of massive, metal-poor stellar evolution, so such moderate jet explosions could occur much more often than the extreme ones invoked to explain prolific $r$-process yields.
If so, then we propose that delayed magnetorotationally driven jets are a viable source of the low Sr and Ba abundances in UFDs.
Additional modeling focusing on the frequency of less-extreme jets is needed for a more detailed evaluation, and zinc abundances may help as well \citep{Ji18}.
\emph{Spinstars.}
Spinstars are rapidly rotating massive stars that can produce Sr and Ba in the $s$-process \citep[e.g.,][]{Meynet06}.
The amount of rotation changes the amount of internal mixing in the star, allowing these models to produce a wide range of [Sr/Ba] ratios,
though the amount of Ba is still subject to nuclear reaction rate uncertainties \citep{Cescutti13, Frischknecht16, Choplin18}.
The fiducial spinstar models in \citet{Frischknecht16} underproduce Sr and Ba by a factor of ${>}100$ relative to the observed values in UFDs \citep[e.g.,][]{Ji16d}, and having hundreds of spinstars in each UFD is unlikely given that there are only hundreds of massive stars to begin with in each galaxy.
However, extreme spinstar models with particularly fast rotation velocities and a modified nuclear reaction rate increase the abundance yields by a factor $>10$ \citep{Cescutti13,Frischknecht12,Frischknecht16}\footnote{Yields from \url{http://www.astro.keele.ac.uk/shyne/datasets/s-process-yields-from-frischknecht-et-al-12-15}}.
These models also produce [C/Sr] and [C/Ba] $\sim +2.0$, consistent with or somewhat lower than the C abundances in UFDs like Gru~I.
The [C/Fe] ratios are very high ($>3.0$), but the spinstar yields do not include any carbon or iron generated in a supernova explosion, which would reduce this extreme abundance ratio.
Thus, the extreme spinstar models are also a viable source for the neutron-capture elements found in UFDs.
Note that rotation is not the only way that neutron-capture processes can occur in metal-poor or metal-free stars; it is just one of many possible mechanisms that can induce internal mixing and thus create free neutrons. Recently, \citet{Banerjee18a} and \citet{Clarkson18} have shown that proton ingestion into convective He shells can result in a low level of $s$-, $i$-, and $r$-processes even in metal-free stars.
Some of the metal-poor models by \citet{Banerjee18a} are able to explain the low but nonzero amounts of Sr and Ba found in UFDs, as well as the diversity of [Sr/Ba] ratios.
\emph{An unknown low-yield r-process source.}
As of now, binary neutron star mergers are the only confirmed source of the full $r$-process (i.e., producing all elements from the first through third $r$-process peaks).
However, there is evidence from halo stars with low Sr and Ba that UFDs are enriched by a low-yield (or heavily diluted) version of the same abundance pattern.
\citet{Roederer17} found three halo stars with low Sr and Ba as well as Eu detections consistent with the $r$-process ($-4 < \mbox{[Eu/H]} < -3.5$).
\citet{Casey17} found a halo star with $\mbox{[Sr,Ba/H]} \approx -6$, with $\mbox{[Sr/Ba]} \sim 0$ consistent with the full $r$-process.
Assuming that these halo stars originated in now-tidally-disrupted UFDs, that might imply that a low-yield but robust $r$-process does occur.
This has long been assumed to take place in some subset of core-collapse supernovae, but as mentioned above, current models cannot achieve this reliably.
However, UFDs display variations in [Sr/Ba] that cannot be explained by just a single $r$-process.
Disentangling these different sites will require determining abundances of neutron-capture elements other than Sr and Ba in UFD stars. Given the distance to known UFDs, this will require significant time investments with echelle spectrographs on 30m class telescopes.
In the meantime, progress can be made by study of bright, nearby halo stars with low Sr and Ba abundances \citep[e.g.,][]{Roederer17}.
For this purpose, the best stars are the relatively Fe-rich but Sr- and Ba-poor stars, as these are the ones most clearly associated with UFDs (Figure~\ref{f:ncapcomp}).
Such stars are expected to comprise $1-3$\% of halo stars at $-2.5 < \mbox{[Fe/H]} < -2.0$ \citep{Brauer18}.
\section{Conclusion} \label{s:conclusion}
We present detailed chemical abundances from high-resolution spectroscopy of two stars in Gru~I and two stars in Tri~II.
The abundance ratios of these stars are generally similar to those found in other ultra-faint dwarf galaxies, including extremely low neutron-capture element abundances.
The Gru~I stars are nearly chemically identical, except for possibly a different Ba abundance.
A possible similarity between Tri~II and the cluster NGC~2419 is probably ruled out by a new K upper limit, and there may also be an anomaly in Na and Ni (Section~\ref{s:discoutlier}).
The velocity and metallicity dispersions of Gru~I and Tri~II have not been decisive about whether they are ultra-faint dwarf galaxies or globular clusters, but we conclude they are both likely UFDs rather than GCs because both systems have extremely low neutron-capture element abundances (Section~\ref{s:discclassify}).
We thus expect future observations of these systems to confirm metallicity spreads, as well as significant velocity dispersions or signs of tidal disruption.
The low neutron-capture element abundances in UFDs reflect chemical enrichment at the extreme low-mass end of galaxy formation in $\Lambda$CDM (Section~\ref{s:discwhy}), plausibly through some combination of stochastic enrichment, metal loss in winds, and short star formation durations.
The dissimilarity in neutron-capture elements also suggests that globular clusters and UFDs do not form in the same environments, and thus that globular clusters probably did not form in their own dark matter halos (Section~\ref{s:gc}).
However, the nucleosynthetic origin of the low neutron-capture element abundances in UFDs like Gru~I and Tri~II is still an open question (Section~\ref{s:discncap}).
\acknowledgments
We thank Nidia Morrell for assisting with MIKE observations of Gru~I;
Kristin Chiboucas and Lison Malo for assistance with GRACES and data reduction;
Vini Placco for computing carbon corrections;
and Projjwal Banerjee, Gabriele Cescutti, Anirudh Chiti, Brendan Griffen, Evan Kirby, Andrew McWilliam, Tony Piro, and \'Asa Sk\'ulad\'ottir for useful discussions.
A.P.J. is supported by NASA through Hubble Fellowship grant HST-HF2-51393.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
J.D.S. and T.T.H. acknowledge support from the National Science Foundation under grant AST-1714873.
A.F. acknowledges support from NSF grants AST-1255160 and AST-1716251.
K.A.V. acknowledges funding from the National Science and Engineering Research Council of Canada (NSERC), funding reference number 327292-2006.
Based on observations obtained with ESPaDOnS, located at the Canada-France-Hawaii Telescope (CFHT). CFHT is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawai'i. ESPaDOnS is a collaborative project funded by France (CNRS, MENESR, OMP, LATT), Canada (NSERC), CFHT and ESA. ESPaDOnS was remotely controlled from the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina) and Ministério da Ciência, Tecnologia e Inovação (Brazil).
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France \citep{Simbad},
and NASA's Astrophysics Data System Bibliographic Services.
\facilities{Magellan-Clay (MIKE, \citealt{Bernstein03}), Gemini-N (GRACES, \citealt{Chene14,Donati03})}
\software{CarPy \citep{Kelson03}, OPERA \citep{Martioli12}, IRAF, MOOG \citep{Sneden73,Sobeck11}, SMH \citep{Casey14}, \texttt{numpy} \citep{numpy}, \texttt{scipy} \citep{scipy}, \texttt{matplotlib} \citep{matplotlib}, \texttt{pandas} \citep{pandas}, \texttt{seaborn} \citep{seaborn}, \texttt{astropy} \citep{astropy}}
\input{sperr}
\section{Introduction}
\label{sec:intro}
As a longstanding, fundamental and challenging problem in computer vision, object detection (illustrated in Fig.~\ref{Fig:conferencekeywords}) has been an active area of research for several decades \cite{Fischler1973}. The goal of object detection is to determine whether there are any instances of objects from given categories (such as humans, cars, bicycles, dogs or cats) in an image and, if present, to return the spatial location and extent of each object instance (\emph{e.g.,} via a bounding box \cite{Everingham2010,Russakovsky2015}). As the cornerstone of image understanding and computer vision, object detection forms the basis for solving complex or high level vision tasks such as segmentation, scene understanding, object tracking, image captioning, event detection, and activity recognition. Object detection supports a wide range of applications, including robot vision, consumer electronics, security, autonomous driving, human computer interaction, content based image retrieval, intelligent video surveillance, and augmented reality.
Recently, deep learning techniques \cite{Hinton2006Reducing,LeCun15} have emerged as powerful methods for learning feature representations automatically from data. In particular, these techniques have provided major improvements in object detection, as illustrated in Fig.~\ref{fig:GODResultsStatistics}.
As illustrated in Fig.~\ref{fig:ObjectInstancevsCategory}, object detection can be grouped into one of two types \cite{Grauman2011Visual,Zhang13}:
detection of specific instances versus the detection of broad categories. The first type aims to detect instances of a particular object (such as Donald Trump's face, the Eiffel Tower, or a neighbor's dog), essentially a matching problem. The goal of the second type is to detect (usually previously unseen) instances of some predefined object categories (for example humans, cars, bicycles, and dogs). Historically, much of the effort in the field of object detection has focused on the detection of a single category (typically faces and pedestrians) or a few specific categories. In contrast, over the past several years, the research community has started moving towards the more challenging goal of building general purpose object detection systems where the breadth of object detection ability rivals that of humans.
\begin {figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{conferencekeywords.pdf}
\caption{Most frequent keywords in ICCV and CVPR conference papers from 2016 to 2018. The size of each word is proportional to the frequency of that keyword. We can see that object detection has received significant attention in recent years.}
\label{Fig:conferencekeywords}
\end {figure}
\begin {figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{ObjectInstancevsCategory.pdf}
\caption{Object detection includes localizing instances of a {\em particular} object (top), as well as generalizing to detecting object {\em categories} in general (bottom). This survey focuses on recent advances for the latter problem of generic object detection.}
\label{fig:ObjectInstancevsCategory}
\end {figure}
\begin {figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{GODResultsStatistics.pdf}
\caption{An overview of recent object detection performance: We can observe a
significant improvement in performance (measured as mean average precision) since the arrival of deep learning in 2012. (a) Detection results of winning entries in the VOC2007-2012 competitions, and (b) Top object detection competition results in ILSVRC2013-2017 (results in both panels use only the provided training data).}
\label{fig:GODResultsStatistics}
\end {figure}
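Because Fig.~\ref{fig:GODResultsStatistics} reports performance as mean average precision (mAP), we sketch the metric here for orientation; the full evaluation protocols are summarized in Section~\ref{Sec:DataEval}. The following minimal Python sketch (illustrative only, not the official benchmark code) computes the all-point interpolated average precision from a precision--recall curve; mAP is this quantity averaged over object classes.
\begin{verbatim}
import numpy as np

def average_precision(recall, precision):
    # recall: non-decreasing array; precision: raw precision values,
    # both evaluated at successive detection-score thresholds.
    r = np.concatenate(([0.0], np.asarray(recall), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision), [0.0]))
    # Enforce a monotonically non-increasing precision envelope.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate the envelope over the steps where recall increases.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
\end{verbatim}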
\begin {figure*}[!t]
\centering
\includegraphics[width=0.95\textwidth]{ADecadeOfObjectDetection.pdf}
\caption{Milestones of object detection and recognition, including feature representations \cite{Csurka2004,Dalal2005HOG,He2016ResNet,Krizhevsky2012,Lazebnik2006SPM,Lowe1999Object,Lowe2004,
Perronnin2010,Simonyan2014VGG,Sivic2003,GoogLeNet2015,Viola2001,HOGLBP2009}, detection frameworks \cite{Felzenszwalb2010b,Girshick2014RCNN,OverFeat2014,Uijlings2013b,Viola2001}, and datasets \cite{Everingham2010,Lin2014,Russakovsky2015}.
The time period up to 2012 is dominated by handcrafted features; a transition took place in 2012 with the development of DCNNs for image classification by Krizhevsky \emph{et al.} \cite{Krizhevsky2012}, and methods after 2012 are dominated by related deep networks. Most of the listed methods are highly cited and won a major ICCV or CVPR prize. See Section~\ref{Sec:Progress} for details.}
\label{fig:milestones}
\end {figure*}
In 2012, Krizhevsky \emph{et al.} \cite{Krizhevsky2012} proposed a Deep Convolutional Neural Network (DCNN) called AlexNet, which achieved record-breaking image classification accuracy in the Large Scale Visual Recognition Challenge (ILSVRC) \cite{Russakovsky2015}. Since that time, the research focus in most aspects of computer vision has been specifically on deep learning methods, indeed including the domain of generic object detection \cite{Girshick2014RCNN,He2014SPP,Girshick2015FRCNN,OverFeat2014,Ren2016a}. Although tremendous progress has been achieved, as illustrated in Fig.~\ref{fig:GODResultsStatistics}, we are unaware of any comprehensive survey of this subject over the past five years. Given the exceptionally rapid rate of progress,
this article attempts to track recent advances and summarize their achievements in order to gain a clearer picture of the current panorama in generic object detection.
\begin{table*}[!t]
\caption {Summary of related object detection surveys since 2000.}\label{Tab:Surveys}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{2pt}
\resizebox*{16cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|p{6cm}<{\centering}|c|c|c|p{9cm}<{\centering}!{\vrule width1.2bp}}
\Xhline{1pt}
\footnotesize No. & \footnotesize Survey Title & \footnotesize Ref. & \footnotesize Year & \footnotesize Venue & \footnotesize Content \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 1 }& \footnotesize Monocular Pedestrian Detection: Survey and Experiments &\raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Enzweiler2009Monocular} }
&\raisebox{-1.5ex}[0pt]{ \footnotesize 2009}
& \raisebox{-1.5ex}[0pt]{\footnotesize PAMI} & \footnotesize An evaluation of three pedestrian detectors \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 2 }& \footnotesize Survey of Pedestrian Detection for Advanced Driver Assistance Systems & \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Geronimo2010Survey}} &\raisebox{-1.5ex}[0pt]{ \footnotesize 2010 }
& \raisebox{-1.5ex}[0pt]{\footnotesize PAMI }& \footnotesize \raisebox{-1.5ex}[0pt]{ A survey of pedestrian detection for advanced driver assistance systems} \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 3} & \footnotesize Pedestrian Detection: An Evaluation of the State of The Art & \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Dollar2012Pedestrian} }
& \raisebox{-1.5ex}[0pt]{\footnotesize 2012 }& \raisebox{-1.5ex}[0pt]{\footnotesize PAMI }
& \footnotesize A thorough and detailed evaluation of detectors in monocular images \\
\hline
\footnotesize 4 & \footnotesize Detecting Faces in Images: A Survey & \footnotesize \cite{Yang2002b} & \footnotesize 2002 & \footnotesize PAMI & \footnotesize First survey of face detection from a single image \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 5 } & \footnotesize A Survey on Face Detection in the
Wild: Past, Present and Future & \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Zafeiriou2015}}
& \raisebox{-1.5ex}[0pt]{\footnotesize 2015 }& \raisebox{-1.5ex}[0pt]{ \footnotesize CVIU}& \raisebox{-1.5ex}[0pt]{ \footnotesize A survey of face detection in the wild since 2000} \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 6 }& \raisebox{-1.5ex}[0pt]{
\footnotesize On Road Vehicle Detection: A Review} &\raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Sun2006Road} }&\raisebox{-1.5ex}[0pt]{ \footnotesize 2006 }&
\raisebox{-1.5ex}[0pt]{ \footnotesize PAMI} & \footnotesize A review of vision based on-road vehicle detection systems \\
\hline
\footnotesize 7 & \footnotesize Text Detection and Recognition in Imagery: A Survey & \footnotesize \cite{Ye2015Text} & \footnotesize 2015 & \footnotesize PAMI & \footnotesize A survey of text detection and
recognition in color imagery \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 8 }&\raisebox{-1.5ex}[0pt]{
\footnotesize Toward Category Level Object Recognition} & \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Ponce2007Toward}} &\raisebox{-1.5ex}[0pt]{ \footnotesize 2007}
&\raisebox{-1.5ex}[0pt]{ \footnotesize Book } & \footnotesize Representative papers on object categorization, detection,
and segmentation \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 9 }& \footnotesize The Evolution of Object Categorization and the Challenge of Image Abstraction & \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Dickinson2009}}
&\raisebox{-1.5ex}[0pt]{ \footnotesize 2009}&\raisebox{-1.5ex}[0pt]{ \footnotesize Book } & \raisebox{-1.5ex}[0pt]{ \footnotesize A trace of the evolution of object categorization over four decades} \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 10} &
\footnotesize Context based Object Categorization: A Critical Survey &\raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Galleguillos2010}} &\raisebox{-1.5ex}[0pt]{ \footnotesize 2010}
& \raisebox{-1.5ex}[0pt]{\footnotesize CVIU} & \footnotesize A review of contextual information
for object categorization \\
\hline
\footnotesize 11 & \footnotesize 50 Years of Object Recognition: Directions
Forward & \footnotesize \cite{Andreopoulos13} & \footnotesize 2013& \footnotesize CVIU & \footnotesize A review of the evolution of object recognition
systems over five decades \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 12} & \raisebox{-1.5ex}[0pt]{
\footnotesize Visual Object Recognition} &\raisebox{-1.5ex}[0pt]{
\footnotesize \cite{Grauman2011Visual} } &\raisebox{-1.5ex}[0pt]{ \footnotesize2011}&\raisebox{-1.5ex}[0pt]{ \footnotesize Tutorial}
& \footnotesize Instance and category object recognition techniques \\
\hline
\footnotesize 13 & \footnotesize Object Class Detection: A Survey & \footnotesize \cite{Zhang13} & \footnotesize 2013 & \footnotesize ACM CS & \footnotesize Survey of generic object detection methods before 2011 \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 14} & \footnotesize Feature Representation for Statistical
Learning based Object Detection: A Review &\raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Li2015Feature}}
&\raisebox{-1.5ex}[0pt]{ \footnotesize 2015} & \raisebox{-1.5ex}[0pt]{ \footnotesize PR} & \footnotesize Feature representation methods in statistical learning
based object detection, including handcrafted and deep learning based features \\
\hline
\footnotesize 15 & \footnotesize Salient Object Detection: A Survey & \footnotesize \cite{Borji14} & \footnotesize2014 & \footnotesize arXiv & \footnotesize A survey for salient object detection \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 16}& \footnotesize Representation Learning: A Review and New Perspectives & \raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Bengio13Feature}}
&\raisebox{-1.5ex}[0pt]{ \footnotesize 2013} &\raisebox{-1.5ex}[0pt]{ \footnotesize PAMI} & \footnotesize Unsupervised feature learning and deep learning, probabilistic models, autoencoders, manifold learning, and deep networks \\
\hline
\footnotesize 17 & \footnotesize Deep Learning & \footnotesize \cite{LeCun15} & \footnotesize2015& \footnotesize Nature & \footnotesize An introduction to deep learning and applications \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 18 }&
\footnotesize A Survey on Deep Learning in Medical Image Analysis& \raisebox{-1.5ex}[0pt]{\footnotesize \cite{Litjens2017} }&\raisebox{-1.5ex}[0pt]{ \footnotesize 2017}
&\raisebox{-1.5ex}[0pt]{ \footnotesize MIA} & \footnotesize A survey of deep learning for image classification, object detection, segmentation and registration in medical image analysis \\
\hline
\raisebox{-1.5ex}[0pt]{ \footnotesize 19} & \footnotesize Recent Advances in Convolutional Neural
Networks&\raisebox{-1.5ex}[0pt]{ \footnotesize \cite{Gu2015Recent}}
&\raisebox{-1.5ex}[0pt]{ \footnotesize 2017}& \raisebox{-1.5ex}[0pt]{\footnotesize PR}& \footnotesize A broad survey of the recent advances in CNN and its applications in computer vision, speech and natural language processing \\
\hline
\footnotesize 20 & \footnotesize Tutorial: Tools for Efficient Object Detection & \footnotesize $-$ & \footnotesize 2015& \footnotesize ICCV15& \footnotesize A short course for object detection only covering recent milestones \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 21 }&\raisebox{-1.5ex}[0pt]{
\footnotesize Tutorial: Deep Learning for Objects and Scenes} &\raisebox{-1.5ex}[0pt]{
\footnotesize $-$} & \raisebox{-1.5ex}[0pt]{ \footnotesize 2017}&\raisebox{-1.5ex}[0pt]{ \footnotesize CVPR17}
& \footnotesize A high level summary of recent work on deep learning for visual recognition of objects and scenes \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 22} &\raisebox{-1.5ex}[0pt]{
\footnotesize Tutorial: Instance Level Recognition}&\raisebox{-1.5ex}[0pt]{ \footnotesize $-$ }& \raisebox{-1.5ex}[0pt]{ \footnotesize 2017} &\raisebox{-1.5ex}[0pt]{ \footnotesize ICCV17}
& \footnotesize A short course of recent advances on instance level recognition, including object detection, instance segmentation and human pose prediction \\
\hline
\raisebox{-1.5ex}[0pt]{\footnotesize 23}&\raisebox{-1.5ex}[0pt]{
\footnotesize Tutorial: Visual Recognition and Beyond} &\raisebox{-1.5ex}[0pt]{ \footnotesize $-$} & \raisebox{-1.5ex}[0pt]{\footnotesize 2018}&\raisebox{-1.5ex}[0pt]{ \footnotesize CVPR18 }& \footnotesize A tutorial on methods and principles behind image classification, object detection, instance segmentation, and semantic segmentation. \\
\hline
\footnotesize \textbf{24}& \footnotesize \textbf{Deep Learning for Generic Object Detection} & \footnotesize \textbf{Ours} & \footnotesize \textbf{2019} & \footnotesize \textbf{VISI} & \footnotesize \textbf{A comprehensive survey of deep learning for generic object detection} \\
\Xhline{1pt}
\end{tabular}
}
\end{table*}
\subsection{Comparison with Previous Reviews}
Many notable object detection surveys have been published, as summarized in Table~\ref{Tab:Surveys}. These include many excellent surveys on the problem of {\em specific} object detection, such as pedestrian detection \cite{Enzweiler2009Monocular,Geronimo2010Survey,Dollar2012Pedestrian}, face detection \cite{Yang2002b,Zafeiriou2015}, vehicle detection \cite{Sun2006Road} and text detection \cite{Ye2015Text}.
There are comparatively few recent surveys focusing directly on the problem of generic object detection, except for the work by Zhang \emph{et al.} \cite{Zhang13} who conducted a survey on the topic of object class detection. However, the research reviewed in \cite{Grauman2011Visual}, \cite{Andreopoulos13} and \cite{Zhang13} is mostly pre-2012, and therefore prior to the recent striking success and dominance of deep learning and related methods.
Deep learning allows computational models to learn fantastically complex, subtle, and abstract representations, driving significant progress in a broad range of problems such as visual recognition, object detection, speech recognition, natural language processing, medical image analysis, drug discovery and genomics. Among different types of deep neural
networks, DCNNs \cite{LeCun1998Gradient,Krizhevsky2012,LeCun15} have brought about breakthroughs in processing images, video, speech and audio. To be sure, there have been many published surveys on deep learning, including that of Bengio \emph{et al.} \cite{Bengio13Feature}, LeCun \emph{et al.} \cite{LeCun15}, Litjens \emph{et al.} \cite{Litjens2017}, Gu \emph{et al.} \cite{Gu2015Recent}, and more recently in tutorials at ICCV and CVPR.
In contrast, although many deep learning based methods have been proposed for object detection, we are unaware of any comprehensive recent survey. A thorough review and summary of existing work is essential for further progress in object detection, particularly for researchers wishing to enter the field. Since our focus is on {\em generic} object detection, the extensive work on DCNNs for {\em specific} object detection, such as face detection \cite{Li2015CasecadeCNN,Zhang2016Joint,Hu2017Finding}, pedestrian detection \cite{Zhang2016faster,Hosang2015taking}, vehicle detection \cite{Zhou2016dave} and traffic sign detection \cite{Zhu2016traffic} will not be considered.
\begin {figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{TheProblem.pdf}
\caption{Recognition problems related to generic object detection: (a) Image level object classification, (b) Bounding box level generic object detection, (c) Pixel-wise semantic segmentation, (d) Instance level semantic segmentation.}
\label{Fig:TheProblem}
\end {figure}
\subsection{Scope}
The number of papers on generic object detection based on deep learning is
breathtaking. There are so many, in fact, that compiling any comprehensive
review of the state of the art is beyond the scope of any reasonable-length paper. As a result, it has been necessary to establish selection criteria, and we have limited our focus to top journal and conference papers. Due to these limitations, we sincerely apologize to those authors whose works are not included in this paper. For surveys of work on related topics, readers are referred to the articles in Table~\ref{Tab:Surveys}.
This survey focuses on major progress of the last five years, and we restrict our attention to still pictures, leaving the important subject of video
object detection as a topic for separate consideration in the future.
The main goal of this paper is to offer a comprehensive survey of deep learning based generic object detection techniques, and to present some degree of taxonomy, a high level perspective and organization, primarily on the basis of popular datasets, evaluation metrics, context modeling, and detection proposal methods. The intention is that our categorization be helpful for readers to have an accessible understanding of similarities and differences between a wide variety of strategies. The proposed taxonomy gives researchers
a framework to understand current research and to identify
open challenges for future research.
The remainder of this paper is organized as follows. Related
background and the progress
made during the last two decades are summarized in Section~\ref{Sec:Background}. A brief introduction to deep learning is given in Section \ref{Sec:CNNintro}. Popular datasets and evaluation criteria are summarized in Section \ref{Sec:DataEval}.
We describe the milestone object detection frameworks in Section~\ref{Sec:Frameworks}. From Section \ref{Sec:DCNNFeatures} to Section \ref{sec:otherissue}, fundamental sub-problems and the relevant issues involved in designing object detectors
are discussed. Finally, in Section \ref{Sec:Conclusions}, we conclude the paper with an overall discussion of object detection, state-of-the-art performance, and future research directions.
\section{Generic Object Detection}
\label{Sec:Background}
\subsection{The Problem}
\label{Sec:TheProblem}
\emph{Generic object detection}, also called generic object category detection, object class detection, or object category detection \cite{Zhang13}, is defined as follows. Given an image, determine whether or not there are instances of objects from predefined categories (usually \emph{many} categories, \emph{e.g.,} 200 categories in the ILSVRC object detection challenge) and, if present, return the spatial location and extent of each instance. A greater emphasis is placed on detecting a broad range of natural categories, as opposed to specific object category detection, where only a narrower predefined category of interest (\emph{e.g.,} faces, pedestrians, or cars) may be present. Although thousands of objects occupy the visual world in which we live, currently the research community is primarily interested in the localization of highly structured objects (\emph{e.g.,} cars, faces, bicycles and airplanes) and articulated objects (\emph{e.g.,} humans, cows and horses) rather than unstructured scenes (such as sky, grass and cloud).
The spatial location and extent of an object can be defined coarsely using a bounding box (an axis-aligned rectangle tightly bounding the object) \cite{Everingham2010,Russakovsky2015}, a precise pixelwise segmentation mask \cite{Zhang13}, or a closed boundary \cite{Lin2014,Russell2008}, as illustrated in Fig.~\ref{Fig:TheProblem}.
To the best of our knowledge, for the evaluation of generic object detection algorithms, it is bounding boxes which are most widely used in the current literature \cite{Everingham2010,Russakovsky2015}, and therefore this is also the approach we adopt in this survey. However, as the research community moves towards deeper scene understanding (from image level object classification to single object localization, to generic object detection, and to pixelwise object segmentation), it is anticipated that future challenges will be at the pixel level \cite{Lin2014}.
There are many problems closely related to that of generic object detection\footnote{To the best of our knowledge, there is no universal agreement in the literature on the definitions of various vision subtasks. Terms such as detection, localization, recognition, classification, categorization, verification, identification, annotation, labeling, and understanding are often differently defined \cite{Andreopoulos13}.}.
The goal of \emph{object classification} or \emph{object categorization} (Fig.~\ref{Fig:TheProblem} (a)) is to assess the presence of objects from a given set of object classes in an image; \emph{i.e.,} assigning one or more object class labels to a given image, determining the presence without the need for localization. The additional requirement to locate the instances in an image makes detection a more challenging task than classification. The \emph{object recognition} problem denotes the more general problem of identifying/localizing all the objects present in an image, subsuming the problems of object detection and classification \cite{Everingham2010,Russakovsky2015,Opelt2006generic,Andreopoulos13}.
Generic object detection is closely related to \emph{semantic image segmentation} (Fig.~\ref{Fig:TheProblem} (c)), which aims to assign each pixel in an image to a semantic class label.
\emph{Object instance segmentation} (Fig.~\ref{Fig:TheProblem} (d)) aims to distinguish different instances of the same object class, as opposed to semantic segmentation which does not.
\begin {figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{challenges.pdf}
\caption{Taxonomy of challenges in generic object detection.}
\label{Fig:challenges}
\end {figure}
\begin {figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{IntraInterClass.pdf}
\caption{\footnotesize{Changes in appearance of the same class with variations in imaging conditions (a-h). There is an astonishing variation in what is meant to be a single object class (i). In contrast, the four images in (j) appear very similar, but in fact are from four {\emph different} object classes. Most images are from ImageNet \cite{Russakovsky2015} and MS COCO \cite{Lin2014}.}}
\label{Fig:IntraInterClass}
\end {figure}
\subsection{Main Challenges}
\label{Sec:MainChallenges}
The ideal of generic object detection is to develop a general-purpose algorithm that achieves the two competing goals of \emph{high quality/accuracy} and \emph{high efficiency} (Fig.~\ref{Fig:challenges}). As illustrated in Fig.~\ref{Fig:IntraInterClass}, high quality detection must accurately localize and recognize objects in images or video frames, such that the large variety of object categories in the real world can be distinguished (\emph{i.e.,} high distinctiveness), and that object instances from the same category, subject to intra-class appearance variations, can be localized and recognized (\emph{i.e.,} high robustness). High efficiency requires that the entire detection task runs in real time with acceptable memory and storage demands.
\subsubsection{Accuracy related challenges}
Challenges in detection accuracy stem from 1) the vast range of intra-class variations and 2) the huge number of object categories.
Intra-class variations can be divided into two types: intrinsic factors and imaging conditions.
In terms of intrinsic factors, each object category can have many different object instances, possibly varying in one or more of color, texture, material, shape, and size, such as the ``chair'' category shown in Fig.~\ref{Fig:IntraInterClass} (\emph{i}). Even in a more narrowly defined class, such as human or horse, object instances can appear in different poses, subject to nonrigid deformations or with the addition of clothing.
Imaging condition variations are caused by the dramatic impacts unconstrained environments can have on object appearance, such as lighting (dawn, day, dusk, indoors), physical location, weather, cameras, backgrounds, occlusion, and viewing distances. All of these conditions produce significant variations in object appearance, such as changes in illumination, pose, scale, occlusion, clutter, shading, blur and motion, with examples illustrated in Fig.~\ref{Fig:IntraInterClass} (\emph{a}-\emph{h}). Further challenges may be added by digitization artifacts, noise corruption, poor resolution, and filtering distortions.
In addition to \emph{intra}class variations, the large number of object categories, on the order of $10^4-10^5$, demands great discrimination power from the detector to distinguish between subtly different \emph{inter}class variations, as illustrated in Fig.~\ref{Fig:IntraInterClass} (j). In practice, current detectors focus mainly on structured object categories, such as the 20, 200 and 91 object classes in PASCAL VOC \cite{Everingham2010}, ILSVRC \cite{Russakovsky2015} and MS COCO \cite{Lin2014} respectively. Clearly, the number of object categories under consideration in existing benchmark datasets is much smaller than can be recognized by humans.
\subsubsection{Efficiency and scalability related challenges}
The prevalence of social media networks and mobile/wearable devices has led to increasing demands for analyzing visual data. However, mobile/wearable devices have limited computational capabilities and storage space, making efficient object detection critical.
The efficiency challenges stem from the need to localize and recognize objects, with computational complexity growing with the (possibly large) number of object categories and the (possibly very large) number of locations and scales within a single image, such as the examples in Fig.~\ref{Fig:IntraInterClass} (c, d).
A further challenge is that of scalability: A detector should be able to handle previously unseen objects, unknown situations, and high data rates. As the number of images and the number of categories continue to grow, it may become impossible to annotate them manually, forcing a reliance on weakly supervised strategies.
\begin {figure*}[!t]
\centering
\includegraphics[width=0.45\textwidth]{ConvReLuMax.pdf}
\includegraphics[width=0.45\textwidth]{VGGNet.pdf}
\caption{\textcolor{black}{(a) Illustration of three operations that are repeatedly applied by a typical CNN:
Convolution with a number of linear filters;
Nonlinearities (\emph{e.g.} ReLU);
and Local pooling (\emph{e.g.} Max Pooling). The $M$ feature maps from a previous layer are convolved with $N$ different filters (here shown as size $3\times3\times M$), using a stride of 1. The resulting $N$ feature maps are then passed through a nonlinear function (\emph{e.g.} ReLU), and pooled (\emph{e.g.} taking a maximum over $2\times2$ regions) to give $N$ feature
maps at a reduced resolution. (b) Illustration of the architecture of VGGNet \cite{Simonyan2014VGG}, a typical CNN with 11 weight layers. An image with 3 color channels is presented as
the input. The network has 8 convolutional layers, 3 fully connected layers, 5 max pooling layers and a softmax classification layer. The last three fully connected layers take features from the top convolutional layer as input in vector form. The final layer is a $C$-way softmax function, $C$ being the number of classes. The whole network can be learned from labeled training data by optimizing an objective function (\emph{e.g.} mean squared error or cross entropy
loss) via Stochastic Gradient Descent.}}
\label{fig:ConvReLuMax}
\end {figure*}
\subsection{Progress in the Past Two Decades}
\label{Sec:Progress}
Early research on object recognition was based on template matching techniques and simple part-based models \cite{Fischler1973}, focusing on specific objects whose spatial layouts are roughly rigid, such as faces. Before 1990 the leading paradigm of object recognition was based on geometric representations~\cite{Mundy2006Object,Ponce2007Toward}, with the focus later moving away from geometry and prior models towards the use of statistical classifiers (such as Neural Networks \cite{Rowley1998}, SVM \cite{Osuna1997Train} and Adaboost \cite{Viola2001,Xiao2003Boosting}) based on appearance features \cite{Murase1995,Schmid1997Local}. This successful family of object detectors set the stage for most subsequent research in this field.
The milestones of object detection in more recent years are presented in Fig.~\ref{fig:milestones}, in which two main eras (SIFT \emph{vs.} DCNN) are highlighted. The appearance features moved from global representations \cite{Murase1995Visual,Swain1991Color,Turk1991Face} to local representations that are designed to be invariant to changes in translation, scale, rotation, illumination, viewpoint and occlusion. Handcrafted local invariant features gained tremendous popularity, starting from the Scale Invariant Feature Transform (SIFT) feature \cite{Lowe1999Object}, and the progress on various visual recognition tasks was based substantially on the use of local descriptors \cite{Mikolajczyk2005} such as Haar-like features \cite{Viola2001}, SIFT \cite{Lowe2004}, Shape Contexts \cite{Belongie2002shape}, Histograms of Oriented Gradients (HOG) \cite{Dalal2005HOG}, Local Binary Patterns (LBP) \cite{Ojala02}, and region covariances \cite{Tuzel2006Region}. These local features are usually aggregated by simple concatenation or feature pooling encoders such as the Bag of Visual Words approach, introduced by Sivic and Zisserman \cite{Sivic2003} and Csurka \emph{et al.} \cite{Csurka2004}, Spatial Pyramid Matching (SPM) of BoW models \cite{Lazebnik2006SPM}, and Fisher Vectors \cite{Perronnin2010}.
For years, the multistage hand tuned pipelines of handcrafted local descriptors and discriminative classifiers dominated a variety of domains in computer vision, including object detection, until the significant turning point in 2012 when DCNNs \cite{Krizhevsky2012} achieved their record-breaking results in image classification.
The use of CNNs for detection and localization \cite{Rowley1998} can be traced back to the 1990s, with a modest number of hidden layers used for object detection \cite{Vaillant1994,Rowley1998,Sermanet2013c}, successful in restricted domains such as
face detection. However, more recently, deeper CNNs have led to record-breaking
improvements in the detection of more general object
categories, a shift which came about when the successful application
of DCNNs in image classification \cite{Krizhevsky2012} was
transferred to object detection, resulting in the milestone Region-based CNN (RCNN) detector of Girshick
\emph{et al.} \cite{Girshick2014RCNN}.
The successes of deep detectors rely heavily on vast training data and large networks with millions or even billions of parameters. The availability of GPUs with very high computational capability and large-scale detection datasets (such as ImageNet \cite{ImageNet2009,Russakovsky2015} and MS COCO \cite{Lin2014}) play a key role in their success. Large datasets have allowed researchers to target more realistic and complex problems from images with large intra-class variations and inter-class similarities \cite{Lin2014,Russakovsky2015}. However, accurate annotations are labor intensive to obtain, so detectors must consider methods that can relieve annotation difficulties or can learn with smaller training datasets.
The research community has started moving towards the challenging goal of building general purpose object detection systems whose ability to detect many object categories matches that of humans. This is a major challenge: according to cognitive scientists, human beings can identify around 3,000 entry level categories and 30,000 visual categories overall, and the number of categories distinguishable with domain expertise may be on the order of $10^5$ \cite{Biederman1987}. Despite the remarkable progress of the past years, designing an accurate, robust, efficient detection and recognition system that approaches human-level performance on $10^4-10^5$ categories is undoubtedly an unresolved problem.
\section{A Brief Introduction to Deep Learning}
\label{Sec:CNNintro}
\textcolor{black}{Deep learning has revolutionized a wide range of machine learning tasks, from image classification and video processing to speech recognition and natural language understanding. Given this tremendously rapid evolution, there exist many recent survey papers on deep learning \cite{Bengio13Feature,Goodfellow2016Deep,Gu2015Recent,LeCun15,
Litjens2017,Pouyanfar2018Survey,
Wu2019Comprehensive,Young2018Recent,
Zhang2018Deep,Zhou2018Graph,Zhu2017Deep}. These surveys have reviewed deep learning techniques from different perspectives \cite{Bengio13Feature,Goodfellow2016Deep,Gu2015Recent,
LeCun15,Pouyanfar2018Survey,Wu2019Comprehensive,Zhou2018Graph}, or with applications to medical image analysis \cite{Litjens2017},
natural language processing \cite{Young2018Recent}, speech recognition systems \cite{Zhang2018Deep}, and remote sensing \cite{Zhu2017Deep}.}
Convolutional Neural Networks (CNNs), the most representative models of deep learning, are able to exploit the basic properties underlying natural signals: translation invariance, local connectivity, and compositional hierarchies \cite{LeCun15}. A typical CNN, illustrated in Fig.~\ref{fig:ConvReLuMax}, has a hierarchical structure and is composed of a number of layers to learn representations of data with multiple levels of abstraction \cite{LeCun15}. We begin with a convolution
\begin{equation}
\textbf{\emph{x}}^{l-1} * \textbf{\emph{w}}^{l}
\end{equation}
between an input feature map $\textbf{\emph{x}}^{l-1}$ from the previous layer $l-1$ and a 2D convolutional kernel (or filter, or set of weights) $\textbf{\emph{w}}^{l}$. This convolution is repeated over a sequence of layers, each followed by a nonlinear operation $\sigma$, such that
\begin{equation}
\textbf{\emph{x}}^l_j = \sigma(\sum_{i=1}^{N^{l-1}} \textbf{\emph{x}}^{l-1}_i * \textbf{\emph{w}}^{l}_{i, j} +b^{l}_j), \label{eq:conv}
\end{equation}
with the convolution now taken between each of the $N^{l-1}$ input feature maps $\textbf{\emph{x}}^{l-1}_i$ and the corresponding kernel $\textbf{\emph{w}}^{l}_{i, j}$, plus a bias term $b^{l}_j$. The nonlinear function $\sigma(\cdot)$, applied elementwise, is typically a rectified linear unit (ReLU),
\begin{equation}
\sigma(x) = \max\{x, 0\}.
\end{equation}
Finally, pooling corresponds to the downsampling of feature maps. These three operations (convolution, nonlinearity, pooling) are illustrated in Fig. \ref{fig:ConvReLuMax} (a); CNNs having a large number of layers, a ``deep'' network, are referred to as Deep CNNs (DCNNs), with a typical DCNN architecture illustrated in Fig.~\ref{fig:ConvReLuMax} (b).
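To make Eq.~(\ref{eq:conv}) concrete, the following minimal NumPy sketch (our own illustration, not code from any cited work) applies one convolutional layer with $N$ filters to $M$ input feature maps, followed by a ReLU nonlinearity and $2\times2$ max pooling, mirroring the three operations of Fig.~\ref{fig:ConvReLuMax} (a); as in most deep learning frameworks, the ``convolution'' is implemented as cross-correlation (no kernel flip), and the loops are kept explicit for clarity rather than speed.
\begin{verbatim}
import numpy as np

def conv_relu_maxpool(x, w, b):
    # x: (M, H, W) input maps; w: (N, M, k, k) filters; b: (N,) biases.
    # 'Valid' convolution with stride 1.
    N, M, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((N, H, W))
    for j in range(N):        # for each output map j, sum over the M
        for r in range(H):    # input maps, as in the equation above
            for c in range(W):
                y[j, r, c] = np.sum(x[:, r:r+k, c:c+k] * w[j]) + b[j]
    y = np.maximum(y, 0.0)    # ReLU: sigma(x) = max(x, 0)
    Hp, Wp = H // 2, W // 2   # 2x2 max pooling with stride 2
    return y[:, :2*Hp, :2*Wp].reshape(N, Hp, 2, Wp, 2).max(axis=(2, 4))

x = np.random.randn(3, 8, 8)              # M = 3 input feature maps
w = 0.1 * np.random.randn(4, 3, 3, 3)     # N = 4 filters of size 3x3xM
print(conv_relu_maxpool(x, w, np.zeros(4)).shape)   # (4, 3, 3)
\end{verbatim}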
Most layers of a CNN consist of a number of
feature maps, within which each pixel acts like a neuron. Each neuron in a convolutional
layer is connected to feature maps of the previous
layer through a set of weights $\textbf{\emph{w}}_{i,j}$ (essentially a set of 2D filters). As can be seen in Fig.~\ref{fig:ConvReLuMax} (b),
while the early CNN layers are typically composed of convolutional and pooling layers, the later layers are normally fully connected.
From earlier to later layers, the input
image is repeatedly convolved, and with each layer, the receptive
field or region of support increases.
In general, the initial CNN layers extract low-level features (\emph{e.g.,} edges),
with later layers extracting more general features of increasing complexity \cite{ZeilerFergus2014,Bengio13Feature,LeCun15,Oquab2014Learning}.
DCNNs have a number of outstanding advantages: a hierarchical structure to learn representations of data with multiple levels of abstraction, the capacity to learn very complex functions, and learning feature representations directly and automatically from data with minimal domain knowledge. What has particularly made DCNNs successful has been the availability of large scale labeled datasets and of GPUs with very high computational capability.
Despite the great successes, known deficiencies remain. In particular, there is an extreme need for labeled training data and a requirement of expensive computing resources, and considerable skill and experience are still needed to select appropriate learning parameters and network architectures. Trained networks are poorly interpretable, there is a lack of robustness to degradations, and many DCNNs have shown serious vulnerability to attacks \cite{Goodfellow2015Explaining}, all of which currently limit the use of DCNNs in real-world applications.
\begin{table}[!t]
\caption{Most frequent object classes for each detection challenge. The size of each word is proportional to the frequency of that class in the training dataset.}
\centering
\setlength\tabcolsep{2pt}
\begin{tabular}{!{\vrule width1.2bp}c!{\vrule width1.2bp}c!{\vrule width1.2bp}}
\Xhline{1.2pt}
\includegraphics[width=0.24\textwidth]{ClassFreqpascal2012_trainval_first.pdf}&
\includegraphics[width=0.24\textwidth]{ClassFreqcoco2017_trainval_first.pdf}\\
\Xhline{1.2pt}
\multicolumn{1}{c}{(a) PASCAL VOC (20 Classes) }& \multicolumn{1}{c}{(b) MS COCO (80 Classes)}\\ \multicolumn{1}{c}{} \\
\Xhline{1.2pt}
\multicolumn{2}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\includegraphics[width=0.48\textwidth]{ClassFreqImageNet_trainval.pdf}}\\
\Xhline{1.2pt}
\multicolumn{2}{c}{(c) ILSVRC (200 Classes)}\\ \multicolumn{1}{c}{} \\
\Xhline{1.2pt}
\multicolumn{2}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\includegraphics[width=0.48\textwidth]{ClassFreqopen2018_train_first.pdf}}\\
\Xhline{1.2pt}
\multicolumn{2}{c}{(d) Open Images Detection Challenge (500 Classes)}
\end{tabular}
\label{fig:ClassFrequency}
\end {table}
\section{Datasets and Performance Evaluation}
\label{Sec:DataEval}
\subsection{Datasets}
\label{sec:datasets}
Datasets have played a key role throughout the history of object recognition research, not only as a common ground for measuring and comparing the performance of competing algorithms, but also in pushing the field towards increasingly complex and challenging problems. In particular, deep learning techniques have recently brought tremendous success to many visual recognition problems, and large amounts of annotated data have played a key role in that success. Access to large numbers of images on the Internet makes it possible to build comprehensive datasets capturing a vast richness and diversity of objects, enabling unprecedented performance in object recognition.
For generic object detection, there are four famous datasets: PASCAL VOC \cite{Everingham2010,Everingham2015}, ImageNet \cite{ImageNet2009}, MS COCO \cite{Lin2014} and Open Images \cite{Kuznetsova2018Open}. The attributes of these datasets are summarized in Table~\ref{Tab:maindatasets}, and selected sample images are shown in Fig.~\ref{fig:ObjectImages}. There are three steps to creating large-scale annotated datasets: determining the set of target object categories, collecting a diverse set of candidate images from the Internet to represent the selected categories, and annotating the collected images, typically by designing crowdsourcing strategies. Recognizing space limitations, we refer interested readers to the original papers \cite{Everingham2010,Everingham2015,Lin2014,Russakovsky2015,Kuznetsova2018Open} for detailed descriptions of these datasets in terms of construction and properties.
\begin{table*}[!t]
\caption {Popular databases for object recognition. Example images from PASCAL VOC, ImageNet, MS COCO and Open Images are shown in Fig.~\ref{fig:ObjectImages}.}\label{Tab:maindatasets}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{18cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|c|c|c|c|c|c|p{8cm}!{\vrule width1.2bp}}
\Xhline{1pt}
\scriptsize \shortstack [c] {\textbf{Dataset}\\ \textbf{Name}} & \scriptsize \shortstack [c]
{\textbf{Total} \\ \textbf{Images}} & \scriptsize \shortstack [c]
{\textbf{Categories}} & \scriptsize \shortstack [c]
{\textbf{Images Per} \\ \textbf{Category}} & \scriptsize \shortstack [c]
{\textbf{Objects Per }\\ \textbf{Image}} & \scriptsize \shortstack [c] {\textbf{Image} \\ \textbf{Size} }
& \scriptsize \shortstack [c] {\textbf{Started} \\ \textbf{Year}}
&\raisebox{1.3ex}[0pt]{ \scriptsize \shortstack [c] {$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$\textbf{Highlights}}} \\
\hline
\raisebox{-6.3ex}[0pt]{\scriptsize \shortstack [c] {PASCAL \\VOC \\ (2012) \cite{Everingham2015} } } &\raisebox{-5ex}[0pt]{\scriptsize $11,540$ }& \raisebox{-5ex}[0pt]{\scriptsize $20$} & \raisebox{-5ex}[0pt]{ \scriptsize $303\sim4087$}& \raisebox{-5ex}[0pt]{\scriptsize $2.4$}
& \raisebox{-5ex}[0pt]{\scriptsize $470\times380$}& \raisebox{-5ex}[0pt]{ \scriptsize $2005$} & \scriptsize Covers only 20 categories that are common in everyday life; Large number of training images; Close to real-world applications; Significantly larger intraclass variations; Objects in scene context; Multiple objects in one image; Contains many difficult samples. \\
\hline
\raisebox{-3.3ex}[0pt]{\scriptsize ImageNet \cite{Russakovsky2015}} &\raisebox{-4.3ex}[0pt]{\shortstack [c] {14 \\ million+} } &\raisebox{-3.3ex}[0pt]{ \scriptsize $21,841$} &\raisebox{-3.3ex}[0pt]{ \scriptsize $-$}
&\raisebox{-3.3ex}[0pt]{ \scriptsize $1.5$}
& \raisebox{-3.3ex}[0pt]{\scriptsize $500\times400$}&\raisebox{-3.3ex}[0pt]{ \scriptsize $2009$} & \scriptsize Large number of object categories; More instances and more categories of objects per image; More challenging than PASCAL VOC; Backbone of the ILSVRC challenge; Images are object-centric. \\
\hline
\raisebox{-3.3ex}[0pt]{\scriptsize MS COCO \cite{Lin2014}} & \raisebox{-3.3ex}[0pt]{\scriptsize $328,000+$ } & \raisebox{-3.3ex}[0pt]{\scriptsize $91$} & \raisebox{-3.3ex}[0pt]{ \scriptsize $-$}
& \raisebox{-3.3ex}[0pt]{ \scriptsize $7.3$}
& \raisebox{-3.3ex}[0pt]{ \scriptsize $640\times480$}& \raisebox{-3.3ex}[0pt]{\scriptsize $2014$} & \scriptsize Even closer to real world scenarios; Each image contains more instances of objects and richer object annotation information; Contains object segmentation notation data that is not available in the ImageNet dataset. \\
\hline
\raisebox{-1.3ex}[0pt]{\scriptsize Places \cite{Zhou2017Places}} &\raisebox{-2.3ex}[0pt]{\shortstack [c] {10 \\ million+} } &\raisebox{-1.3ex}[0pt]{ \scriptsize $434$ }&\raisebox{-1.3ex}[0pt]{ \scriptsize $-$}&\raisebox{-1.3ex}[0pt]{ \scriptsize $-$}
& \raisebox{-1.3ex}[0pt]{ \scriptsize $256\times256$}&\raisebox{-1.3ex}[0pt]{ \scriptsize $2014$} & \scriptsize The largest labeled dataset for scene recognition; Four subsets Places365 Standard, Places365 Challenge, Places 205 and Places88 as benchmarks. \\
\hline
\raisebox{-3.3ex}[0pt]{\scriptsize Open Images \cite{Kuznetsova2018Open}} &\raisebox{-4.3ex}[0pt]{\shortstack [c] {9 \\ million+} } &\raisebox{-3.3ex}[0pt]{ \scriptsize $6000$+ }&\raisebox{-3.3ex}[0pt]{ \scriptsize $-$}&\raisebox{-3.3ex}[0pt]{ \scriptsize $8.3$}
& \raisebox{-3.3ex}[0pt]{ \scriptsize varied}&\raisebox{-3.3ex}[0pt]{ \scriptsize $2017$} & \scriptsize Annotated with image level labels, object bounding boxes and visual relationships; Open Images V5 supports large scale object detection, object instance segmentation and visual relationship detection. \\
\Xhline{1pt}
\end{tabular}
}
\end{table*}
\begin {figure*}[!t]
\centering
\includegraphics[width=0.98\textwidth]{ObjectImages.pdf}
\caption{Some example images with object annotations from PASCAL VOC, ILSVRC, MS COCO and Open Images. See Table \ref{Tab:maindatasets} for a summary of these datasets.}
\label{fig:ObjectImages}
\end {figure*}
The four datasets form the backbone of their respective detection challenges.
Each challenge consists of a publicly available dataset of images together with ground truth annotation and standardized evaluation software, and an annual competition
and corresponding workshop. Statistics for the number of images and object instances
in the training, validation and testing datasets\footnote{The annotations on the test set are not publicly
released, except for PASCAL VOC2007.} for the detection challenges are given in
Table~\ref{Tab:detdatasets}. The most frequent object classes in VOC, COCO, ILSVRC and Open Images detection datasets are visualized in Table \ref{fig:ClassFrequency}.
\textbf{PASCAL VOC} \cite{Everingham2010,Everingham2015} is a multi-year effort devoted to the creation and maintenance of a series of benchmark datasets for classification and object detection, creating the precedent for standardized evaluation of recognition algorithms in the form of annual competitions. Starting from only four categories in 2005, the dataset has increased to 20 categories that are common in everyday life.
Since 2009, the number of images has grown every year, but with all previous images retained to allow test results to be compared from year to year. Due to the availability of larger datasets such as ImageNet, MS COCO and Open Images, PASCAL VOC has gradually fallen out of fashion.
\textbf{ILSVRC}, the ImageNet Large Scale Visual Recognition Challenge \cite{Russakovsky2015}, is derived from ImageNet \cite{ImageNet2009}, scaling up PASCAL VOC's goal of standardized training and evaluation of detection algorithms by more
than an order of magnitude in the number of object classes and images. ImageNet1000, a subset of ImageNet images with 1000 different object categories and a total of 1.2 million images, has been fixed to provide a standardized benchmark for the ILSVRC image classification challenge.
\textbf{MS COCO} is a response to the criticism of ImageNet that objects in its dataset tend
to be large and well centered, making the ImageNet dataset atypical of real-world scenarios.
To push for richer image
understanding, researchers created the MS COCO database \cite{Lin2014} containing complex everyday scenes with common objects in their natural context, closer to real life, where objects are labeled using fully-segmented instances to provide more accurate detector evaluation. The COCO object detection challenge \cite{Lin2014} features two object detection tasks: using either bounding box output or object instance segmentation output. COCO introduced three new challenges:
\begin{enumerate}
\item It contains objects at a wide range of scales, including a high percentage of small objects \cite{Singh2018SNIP};
\item Objects are less iconic and amid clutter or heavy occlusion;
\item The evaluation metric (see Table~\ref{Tab:Metrics}) encourages more accurate object localization.
\end{enumerate}
Just like ImageNet in its time, MS COCO has become the standard for object detection today.
\textbf{OICOD} (the Open Image Challenge Object Detection) is derived from Open Images V4 (now V5 in 2019) \cite{Kuznetsova2018Open}, currently
the largest publicly available object detection dataset. OICOD is different from previous large scale object detection datasets like ILSVRC and MS COCO, not merely in terms of the significantly increased number of classes, images, bounding box annotations and instance segmentation mask annotations,
but also regarding the annotation process. In ILSVRC and MS COCO,
instances of all classes in the dataset are exhaustively annotated, whereas for Open Images V4 a classifier was applied to each image and only those labels with sufficiently high scores were sent for human verification. Therefore in OICOD only the object instances of human-confirmed positive labels are annotated.
\begin{table*}[!t]
\caption {Statistics of commonly used object detection datasets. Object statistics for the VOC challenges list the non-difficult objects used in the evaluation, with the numbers in parentheses counting all annotated objects. For the COCO challenge, prior to 2017, the test set had four splits (\emph{Dev}, \emph{Standard}, \emph{Reserve}, and \emph{Challenge}), each having about 20K images. Starting in 2017, the train and val sets are arranged differently, and the test set retains only two roughly equally sized splits of about $20,000$ images each (Test Dev and Test Challenge), with the other two splits removed. Note that the 2017 Test Dev/Challenge splits contain the same images as the 2015 Test Dev/Challenge splits, so results across the years are directly comparable.}\label{Tab:detdatasets}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{2pt}
\resizebox*{14cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|c|r|r|c!{\vrule width1.2bp}r|c!{\vrule width1.2bp}r|r|c!{\vrule width1.2bp}}
\Xhline{1pt}
\multirow{2}{*}{\footnotesize Challenge} & \multirow{2}{*}{\footnotesize \shortstack [c] {Object \\Classes}}& \multicolumn{3}{c!{\vrule width1.2bp}}{\footnotesize Number of Images} & \multicolumn{2}{c!{\vrule width1.2bp}}{\footnotesize \shortstack [c] {Number of Annotated Objects}} & \multicolumn{3}{c!{\vrule width1.2bp}}{\footnotesize Summary (Train$+$Val)} \\
\cline{3-10}
\footnotesize & \footnotesize & \footnotesize Train & \footnotesize Val & \footnotesize Test & \footnotesize \shortstack [c] { Train} & \footnotesize Val & \footnotesize Images & \footnotesize Boxes & \footnotesize Boxes/Image\\
\Xhline{1pt}
\multicolumn{10}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\footnotesize PASCAL VOC Object Detection Challenge}\\
\hline
\footnotesize VOC07 & \footnotesize $20$ & \footnotesize$ 2,501$ & \footnotesize $2,510 $ & \footnotesize$4,952 $
& \footnotesize $6,301(7,844)$ & \footnotesize $6,307(7,818)$ & \footnotesize $5,011$ & \footnotesize$12,608$& \footnotesize $2.5$\\
\hline
\footnotesize VOC08 & \footnotesize $20$
& \footnotesize$2,111$ & \footnotesize$2,221$ & \footnotesize$4,133$ & \footnotesize $5,082(6,337) $ & \footnotesize$5,281(6,347) $ & \footnotesize $4,332$& \footnotesize$10,364$& \footnotesize $2.4$ \\
\hline
\footnotesize VOC09 & \footnotesize $20 $& \footnotesize$3,473$& \footnotesize $3,581$ & \footnotesize $6,650 $& \footnotesize$8,505(9,760) $ & \footnotesize$ 8,713(9,779)$& \footnotesize$7,054$& \footnotesize $17,218$& \footnotesize $2.3$\\
\hline
\footnotesize VOC10 & \footnotesize $20$ & \footnotesize$4,998 $& \footnotesize$5,105$ & \footnotesize$9,637$ & \footnotesize $11,577(13,339)$ & \footnotesize $11,797(13,352)$ & \footnotesize$10,103$& \footnotesize $23,374$& \footnotesize$2.4$ \\
\hline
\footnotesize VOC11 & \footnotesize $20$ & \footnotesize $5,717$ & \footnotesize $5,823$ & \footnotesize$10,994$ & \footnotesize$13,609 (15,774) $& \footnotesize $13,841(15,787) $ & \footnotesize$11,540$& \footnotesize$27,450$& \footnotesize $2.4$ \\
\hline
\footnotesize VOC12 & \footnotesize $20$ & \footnotesize$5,717$& \footnotesize $5,823$ & \footnotesize$10,991$& \footnotesize $13,609 (15,774) $& \footnotesize $13,841(15,787) $& \footnotesize $11,540$& \footnotesize $27,450$& \footnotesize$2.4$\\
\Xhline{1pt}
\multicolumn{10}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\footnotesize ILSVRC Object Detection Challenge}\\
\hline
\footnotesize ILSVRC13& \footnotesize $200$ & \footnotesize$395,909$ & \footnotesize $20,121 $& \footnotesize$40,152$ & \footnotesize$345,854 $& \footnotesize$55,502$& \footnotesize$416,030$& \footnotesize $401,356$& \footnotesize $1.0$\\
\hline
\footnotesize ILSVRC14 & \footnotesize$200 $& \footnotesize$456,567$ & \footnotesize$20,121$ & \footnotesize$40,152$& \footnotesize$ 478,807 $& \footnotesize$55,502$& \footnotesize $476,668$& \footnotesize$534,309$& \footnotesize$1.1$\\
\hline
\footnotesize ILSVRC15& \footnotesize $200$& \footnotesize $456,567$ & \footnotesize$20,121$& \footnotesize $51,294$ & \footnotesize$478,807$& \footnotesize $55,502$& \footnotesize$476,668$& \footnotesize $534,309$ & \footnotesize$1.1$\\
\hline
\footnotesize ILSVRC16 & \footnotesize$200$& \footnotesize$ 456,567$& \footnotesize$ 20,121$& \footnotesize $60,000$& \footnotesize$ 478,807$& \footnotesize $55,502$& \footnotesize$476,668$& \footnotesize $534,309$ & \footnotesize$1.1$\\
\hline
\footnotesize ILSVRC17 & \footnotesize$ 200$& \footnotesize $456,567 $& \footnotesize$ 20,121$& \footnotesize$ 65,500 $& \footnotesize$478,807$ & \footnotesize$55,502$& \footnotesize$476,668$& \footnotesize $534,309$& \footnotesize$1.1$\\
\Xhline{1pt}
\multicolumn{10}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\footnotesize MS COCO Object Detection Challenge}\\
\hline
\footnotesize MS COCO15 & \footnotesize $80 $& \footnotesize $82,783$ & \footnotesize $40,504 $ & \footnotesize$81,434$ & \footnotesize $604,907$ & \footnotesize$ 291,875$& \footnotesize$123,287$& \footnotesize $896,782$& \footnotesize$7.3$\\
\hline
\footnotesize MS COCO16 & \footnotesize $80$ & \footnotesize$ 82,783$ & \footnotesize $40,504$ & \footnotesize $81,434$ & \footnotesize$ 604,907 $& \footnotesize $291,875$& \footnotesize$123,287$& \footnotesize$896,782$& \footnotesize $7.3$ \\
\hline
\footnotesize MS COCO17 & \footnotesize$ 80$ & \footnotesize$118,287$ & \footnotesize$ 5,000$& \footnotesize$40,670$& \footnotesize$860,001$& \footnotesize$36,781$ & \footnotesize$123,287$& \footnotesize $896,782$& \footnotesize $7.3$ \\
\hline
\footnotesize MS COCO18 & \footnotesize$ 80$ & \footnotesize$118,287$ & \footnotesize$ 5,000$& \footnotesize$40,670$& \footnotesize$860,001$& \footnotesize$36,781$& \footnotesize$123,287$& \footnotesize $896,782$& \footnotesize $7.3$ \\
\Xhline{1pt}
\multicolumn{10}{!{\vrule width1.2bp}c!{\vrule width1.2bp}}{\footnotesize Open Images Challenge Object Detection (OICOD) (Based on Open Images V4 \cite{Kuznetsova2018Open})}\\
\hline
\footnotesize OICOD18 & \footnotesize $500$& \footnotesize $1,643,042$ & \footnotesize $100,000$ & \footnotesize$99,999$ & \footnotesize $11,498,734$ & \footnotesize $696,410$ & \footnotesize $1,743,042$& \footnotesize $12,195,144$& \footnotesize $7.0$\\
\Xhline{1pt}
\end{tabular}
}
\end{table*}
\subsection{Evaluation Criteria}
\label{sec:EvaluationCriteria}
There are three criteria for evaluating the performance
of detection algorithms: detection speed in Frames Per Second (FPS), precision, and recall.
The most commonly used metric is \emph{Average Precision} (AP), derived from precision and recall.
AP is usually evaluated in a category specific manner, \emph{i.e.}, computed for each object
category separately. To compare performance over all object categories, the \emph{mean AP} (mAP) averaged over all object categories is adopted as the final measure of performance\footnote{In object detection challenges, such as PASCAL VOC and ILSVRC, the winning entry of
each object category is that with the highest AP score, and the winner of the challenge is the team that
wins on the most object categories. The mAP is also used as the measure of a team's performance, and is
justified since the ranking of teams by mAP was always the same as the ranking by the number of object categories won \cite{Russakovsky2015}.}. More details on these metrics can be found
in \cite{Everingham2010,Everingham2015,Russakovsky2015,Hoiem2012}.
\begin {figure}[!t]
\centering
\includegraphics[width=0.49\textwidth]{Algorithm.pdf}
\caption{The algorithm for determining TPs and FPs by greedily matching object detection results to ground truth boxes.}
\label{fig:Algorithm1}
\end {figure}
The standard outputs of a detector applied to a testing image $\textbf{I}$ are the predicted detections
$\{(b_j,c_j,p_j)\}_j$, indexed by object $j$, of Bounding Box (BB) $b_j$, predicted category $c_j$, and confidence $p_j$. A predicted detection $(b,c,p)$
is regarded as a True Positive (TP) if
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item The predicted category $c$ equals the ground truth label $c_g$.
\item The overlap ratio IOU (Intersection Over Union) \cite{Everingham2010,Russakovsky2015}
\begin{equation}\label{eqn:IOU}
\textrm{IOU}(b,b^g)=\frac{area(b\cap b^g)}{area(b\cup b^g)},
\end{equation}
between the predicted BB $b$ and the ground truth $b^g$ is not smaller than a predefined threshold $\varepsilon$, where $\cap$ and $\cup$ denote intersection and union, respectively (see the sketch below).
A typical value of $\varepsilon$ is 0.5.
\end{itemize}
Otherwise, it is considered as a False Positive (FP). The confidence level $p$
is usually compared with some threshold $\beta$ to determine whether the predicted class label
$c$ is accepted.
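As a concrete illustration of the IOU criterion of Eq.~(\ref{eqn:IOU}), the following short Python sketch (our own, assuming boxes are given as $(x_1,y_1,x_2,y_2)$ corner coordinates) computes the overlap ratio of a predicted box and a ground truth box:
\begin{verbatim}
def iou(b, bg):
    # Intersection Over Union of two boxes in (x1, y1, x2, y2) format.
    iw = max(0.0, min(b[2], bg[2]) - max(b[0], bg[0]))
    ih = max(0.0, min(b[3], bg[3]) - max(b[1], bg[1]))
    inter = iw * ih
    union = ((b[2] - b[0]) * (b[3] - b[1])
             + (bg[2] - bg[0]) * (bg[3] - bg[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7, below the typical 0.5
\end{verbatim}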
AP is computed separately for each of the object classes, based on \emph{Precision} and \emph{Recall}.
For a given object class $c$ and a testing image $\textbf{I}_i$, let
$\{(b_{ij},p_{ij})\}_{j=1}^M$ denote the detections returned by a detector, ranked by confidence $p_{ij}$ in decreasing order. Each detection $(b_{ij},p_{ij})$ is either a TP or an FP, which can be determined via the algorithm\footnote{It is worth noting that for a given threshold $\beta$, multiple detections of the same object in an image are not all considered correct detections; only the detection with the highest confidence level is considered a TP, with the rest counted as FPs.} in Fig.~\ref{fig:Algorithm1}. Based on the TP and FP detections,
the precision $P(\beta)$ and recall $R(\beta)$ \cite{Everingham2010} can be computed as a function of the confidence threshold $\beta$,
so by varying the confidence threshold different pairs $(P,R)$ can be obtained, in principle allowing precision to be regarded as a function of recall, \emph{i.e.} $P(R)$,
from which the Average Precision (AP) \cite{Everingham2010,Russakovsky2015} can be found.
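To illustrate this pipeline end to end, the sketch below (our own simplified rendering of the greedy matching of Fig.~\ref{fig:Algorithm1}, for a single class on a single image, with the \texttt{iou()} helper repeated from above) marks each confidence-ranked detection as TP or FP and computes AP as the area under the interpolated precision-recall curve; the exact interpolation rules differ slightly between challenges, so this is illustrative only.
\begin{verbatim}
import numpy as np

def iou(b, g):  # as in the earlier sketch, (x1, y1, x2, y2) boxes
    iw = max(0.0, min(b[2], g[2]) - max(b[0], g[0]))
    ih = max(0.0, min(b[3], g[3]) - max(b[1], g[1]))
    inter = iw * ih
    union = (b[2]-b[0])*(b[3]-b[1]) + (g[2]-g[0])*(g[3]-g[1]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(dets, gts, eps=0.5):
    # dets: list of (box, confidence); gts: list of ground truth boxes.
    # Greedy matching: process detections in decreasing confidence;
    # each ground truth box may be matched at most once (extras are FPs).
    dets = sorted(dets, key=lambda d: -d[1])
    matched = [False] * len(gts)
    tp = np.zeros(len(dets))
    for i, (box, _) in enumerate(dets):
        ious = [iou(box, g) for g in gts]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= eps and not matched[j]:
            tp[i], matched[j] = 1.0, True
    recall = np.cumsum(tp) / max(len(gts), 1)                 # R(beta)
    precision = np.cumsum(tp) / np.arange(1, len(dets) + 1)   # P(beta)
    ap, prev_r = 0.0, 0.0
    for r in np.unique(recall[tp > 0]):
        ap += (r - prev_r) * precision[recall >= r].max()  # interpolated
        prev_r = r
    return ap
\end{verbatim}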
Since the introduction of MS COCO, more attention has been placed on the accuracy of the bounding box location. Instead of using a fixed IOU threshold, MS COCO introduces several metrics (summarized in Table~\ref{Tab:Metrics}) for characterizing the performance of an object detector. For instance, in contrast to the traditional mAP computed at a single IOU of $0.5$, $AP_{coco}$ is averaged across all object categories and multiple IOU values from $0.5$ to $0.95$ in steps of $0.05$. Because $41\%$ of the objects in MS COCO are small and $24\%$ are large, the metrics $AP_{coco}^{small}$, $AP_{coco}^{medium}$ and $AP_{coco}^{large}$ are also introduced. Finally, Table~\ref{Tab:Metrics} summarizes the main metrics used in the PASCAL, ILSVRC and MS COCO object detection challenges, with metric modifications for the Open Images challenges proposed in \cite{Kuznetsova2018Open}.
\begin{table}[!t]
\caption {Summary of commonly used metrics for evaluating object detectors.}\label{Tab:Metrics}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{9cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|c|l|l!{\vrule width1.2bp}}
\Xhline{1pt}
\footnotesize Metric & \footnotesize Meaning & \multicolumn{2}{c!{\vrule width1.2bp}}{\footnotesize Definition and Description} \\
\hline
\raisebox{1ex}[0pt]{\footnotesize TP} & \footnotesize \shortstack [c] {True \\ Positive} & \multicolumn{2}{l!{\vrule width1.2bp}}{\raisebox{1ex}[0pt]{\footnotesize A true positive detection, per Fig.~\ref{fig:Algorithm1}.}} \\
\hline
\raisebox{1ex}[0pt]{\footnotesize FP} & \footnotesize \shortstack [c] {False \\ Positive} & \multicolumn{2}{l!{\vrule width1.2bp}}{\raisebox{1ex}[0pt]{\footnotesize A false positive detection, per Fig.~\ref{fig:Algorithm1}.}} \\
\hline
\raisebox{1ex}[0pt]{\footnotesize $\beta$ } & \footnotesize \shortstack [c] {Confidence \\ Threshold} & \multicolumn{2}{l!{\vrule width1.2bp}}{\raisebox{1ex}[0pt]{\footnotesize A confidence threshold for computing $P(\beta)$ and $R(\beta)$.}} \\
\hline
\multirow{3}{*}{\footnotesize $\varepsilon$} & \multirow{3}{*}{\footnotesize \shortstack [c] {IOU \\ Threshold}} & \footnotesize VOC & \footnotesize Typically around $0.5$ \\
\cline{3-4}
& \footnotesize & \footnotesize ILSVRC & \footnotesize $\min(0.5,\frac{wh}{(w+10)(h+10)})$; $w\times h$ is the size of a GT box. \\
\cline{3-4}
& \footnotesize & \footnotesize MS COCO & \footnotesize Ten IOU thresholds $\varepsilon\in\{0.5:0.05:0.95\}$ \\
\hline
\footnotesize $P(\beta)$ & \raisebox{1.3ex}[0pt]{\footnotesize \shortstack [c] {Precision}} & \multicolumn{2}{l!{\vrule width1.2bp}}{\footnotesize \shortstack [l] {The fraction of correct detections out of the total detections returned \\ by the detector with confidence of at least $\beta$.}} \\
\hline
\footnotesize $R(\beta)$ & \raisebox{1.3ex}[0pt]{\footnotesize \shortstack [c] {Recall}}& \multicolumn{2}{l!{\vrule width1.2bp}}{\footnotesize \shortstack [l] {The fraction of all $N_c$ objects detected by the detector having a \\ confidence of at least $\beta$.}} \\
\hline
\raisebox{1ex}[0pt]{\footnotesize AP } & \footnotesize \shortstack [c] {Average \\ Precision} & \multicolumn{2}{l!{\vrule width1.2bp}}{\footnotesize \shortstack [l] {Computed over the different levels of recall achieved by varying \\ the confidence $\beta$.}} \\
\hline
\multirow{8}{*}{\footnotesize mAP} & \multirow{8}{*}{\footnotesize \shortstack [c] {mean \\Average\\Precision}} & \footnotesize VOC & \footnotesize AP at a single IOU and averaged over all classes. \\
\cline{3-4}
& \footnotesize & \footnotesize ILSVRC & \footnotesize AP at a modified IOU and averaged over all classes. \\
\cline{3-4}
& \footnotesize & \multirow{6}{*}{\footnotesize MS COCO}& \footnotesize $\bullet AP_{coco}$: mAP averaged over ten IOUs: $\{0.5:0.05:0.95\}$;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AP^{\textrm{IOU}=0.5}_{coco}$: mAP at IOU=0.50 (PASCAL VOC metric);\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AP^{\textrm{IOU}=0.75}_{coco}$: mAP at IOU=0.75 (strict metric);\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AP^{\textrm{small}}_{coco}$: mAP for small objects of area smaller than $32^2$;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AP^{\textrm{medium}}_{coco}$: mAP for objects of area between $32^2$ and $96^2$;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AP^{\textrm{large}}_{coco}$: mAP for large objects of area bigger than $96^2$; \\
\hline
\raisebox{1ex}[0pt]{\footnotesize AR} & \footnotesize \shortstack [c] {Average \\ Recall} & \multicolumn{2}{c!{\vrule width1.2bp}}{\footnotesize \shortstack [l] {The maximum recall given a fixed number of detections per image, \\ averaged over all categories and IOU thresholds.}} \\
\hline
\multirow{6}{*}{\footnotesize AR} & \multirow{6}{*}{\footnotesize \shortstack [c] {Average\\Recall}} & \multirow{6}{*}{\footnotesize MS COCO}& \footnotesize $\bullet AR^{\textrm{max}=1}_{coco}$: AR given 1 detection per image;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AR^{\textrm{max}=10}_{coco}$: AR given 10 detections per image;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AR^{\textrm{max}=100}_{coco}$: AR given 100 detections per image;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AR^{\textrm{small}}_{coco}$: AR for small objects of area smaller than $32^2$;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AR^{\textrm{medium}}_{coco}$: AR for objects of area between $32^2$ and $96^2$;\\
& \footnotesize& \footnotesize& \footnotesize$\bullet$ $AR^{\textrm{large}}_{coco}$: AR for large objects of area bigger than $96^2$; \\
\Xhline{1pt}
\end{tabular}
}
\end{table}
\section{Detection Frameworks}
\label{Sec:Frameworks}
There has been steady progress in object feature representations and classifiers for recognition, as evidenced by the dramatic change from handcrafted features \cite{Viola2001,Dalal2005HOG,Felzenszwalb08CVPR,
Harzallah2009Combining,Vedaldi09Multiple} to learned DCNN
features \cite{Girshick2014RCNN,Ouyang2015deepid,Girshick2015FRCNN,
Ren2015NIPS,Dai2016RFCN}. In contrast, in terms of localization, the basic ``sliding window'' strategy \cite{Dalal2005HOG,Felzenszwalb2010b,Felzenszwalb08CVPR}
remains mainstream, although with some efforts to avoid exhaustive search \cite{lampert2008beyond,Uijlings2013b}. However, the number of windows is large and grows
quadratically with the number of image pixels, and the need to search over multiple scales and aspect ratios
further increases the search space. Therefore, the design of efficient and effective detection frameworks plays a key role in reducing this computational cost. Commonly adopted strategies include cascading, sharing feature computation, and reducing per-window computation.
This section reviews detection frameworks, listed in Fig.~\ref{fig:MilestonesAfter2014} and Table~\ref{Tab:Detectors}, the milestone approaches appearing since deep learning entered the field, organized into two main categories:
\begin{enumerate}
\item [a.] Two stage detection frameworks, which include a preprocessing step for generating object proposals;
\item [b.] One stage detection frameworks, or region proposal free frameworks, which detect objects in a single pass, without a separate detection proposal step.
\end{enumerate}
Sections~\ref{Sec:DCNNFeatures} through~\ref{sec:otherissue} will discuss fundamental sub-problems involved in detection frameworks in greater detail, including DCNN features, detection proposals, and context modeling.
\begin {figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{MilestonesPastSeveralYears.pdf}
\caption{Milestones in generic object detection.}
\label{fig:MilestonesAfter2014}
\end {figure*}
\subsection{Region Based (Two Stage) Frameworks}
\label{Sec:RegionBased}
In a region-based framework, category-independent region proposals\footnote{Object proposals, also called region proposals or detection proposals, are a set of candidate regions or bounding boxes in an image that may potentially contain an object \cite{Chavali2016,Hosang2016}.} are generated from an image, CNN \cite{Krizhevsky2012} features are extracted from these regions, and then category-specific classifiers are used
to determine the category labels of the proposals. As can be observed from Fig.~\ref{fig:MilestonesAfter2014}, DetectorNet \cite{Szegedy2013Deep}, OverFeat \cite{OverFeat2014}, MultiBox \cite{MultiBox1} and RCNN \cite{Girshick2014RCNN} independently and almost simultaneously proposed using CNNs for generic object detection.
\begin {figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{RegionBased.pdf}
\caption{Illustration of the RCNN detection framework \cite{Girshick2014RCNN,Girshick2016TPAMI}.}
\label{fig:RegionBased}
\end {figure}
\begin {figure}[!t]
\centering
\includegraphics[width=0.46\textwidth]{RegionVsUnified1.pdf}
\includegraphics[width=0.46\textwidth]{RegionVsUnified2.pdf}
\caption{High level diagrams of the leading frameworks for generic object detection. The properties of these methods are summarized in Table \ref{Tab:Detectors}.}
\label{Fig:RegionVsUnified}
\end {figure}
\textbf{RCNN} \cite{Girshick2014RCNN}: Inspired by the breakthrough image classification results obtained by CNNs and the success of selective search in region proposal generation for handcrafted features \cite{Uijlings2013b}, Girshick \emph{et al.} were among the first to explore CNNs for generic object detection, developing RCNN \cite{Girshick2014RCNN,Girshick2016TPAMI}, which integrates AlexNet \cite{Krizhevsky2012} with the selective search region proposal method \cite{Uijlings2013b}. As illustrated in detail in Fig.~\ref{fig:RegionBased}, training an RCNN framework consists of a multistage pipeline:
\begin{enumerate}
\item \emph{Region proposal computation:} Class agnostic region proposals, which are candidate regions that might contain objects, are obtained via a selective search \cite{Uijlings2013b}.
\item \emph{CNN model finetuning:} Region proposals, which are cropped from the image and warped into the same size, are used as the input for fine-tuning a CNN model pre-trained using a large-scale dataset such as ImageNet. At this stage, all region proposals with $\geqslant0.5$ IOU \footnote{Please refer to Section \ref{sec:EvaluationCriteria} for the definition of IOU.} overlap with a ground truth box are defined as positives for that ground truth box's class and the rest as negatives.
\item \emph{Class specific SVM classifiers training:} A set of class-specific linear SVM classifiers are trained using fixed length features extracted with CNN, replacing the softmax classifier learned by fine-tuning. For training SVM classifiers, positive examples are
defined to be the ground truth boxes for each class. A region proposal with less than 0.3 IOU overlap with all ground truth instances of a class is negative for that
class. Note that the positive and negative examples defined for training the SVM classifiers are different from those for fine-tuning the CNN.
\item \emph{Class specific bounding box regressor training:} Bounding box regression is learned for each object class with CNN features (see the sketch after this list).
\end{enumerate}
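As a minimal sketch of step 4) (our own illustration of the center/size parameterization commonly used since RCNN \cite{Girshick2014RCNN}, with boxes assumed to be in $(x_1,y_1,x_2,y_2)$ format), the regressor for each class is trained to predict targets that map a proposal box $P$ onto its matched ground truth box $G$:
\begin{verbatim}
import numpy as np

def bbox_regression_targets(P, G):
    # Encode (tx, ty, tw, th): a scale-invariant translation of the box
    # center and a log-space scaling of the box width and height.
    pw, ph = P[2] - P[0], P[3] - P[1]
    px, py = P[0] + 0.5 * pw, P[1] + 0.5 * ph
    gw, gh = G[2] - G[0], G[3] - G[1]
    gx, gy = G[0] + 0.5 * gw, G[1] + 0.5 * gh
    return np.array([(gx - px) / pw, (gy - py) / ph,
                     np.log(gw / pw), np.log(gh / ph)])

print(bbox_regression_targets((0, 0, 10, 10), (1, 1, 11, 11)))
# [0.1 0.1 0.  0. ]  (shift the center by 10% of the box size)
\end{verbatim}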
In spite of achieving high object detection quality, RCNN has notable drawbacks \cite{Girshick2015FRCNN}:
\begin{enumerate}
\item Training is a multistage pipeline, slow and hard to optimize because each individual stage must be trained separately.
\item Training of the SVM classifiers and bounding box regressors is expensive in both disk space and time, because CNN features need to be extracted from every object proposal in every image, posing great challenges for large scale detection, particularly with very deep networks, such as
VGG16 \cite{Simonyan2014VGG}.
\item Testing is slow, since CNN features are extracted per object proposal in each test image, without shared computation.
\end{enumerate}
All of these drawbacks have motivated successive innovations, leading to a number of improved detection frameworks, such as SPPNet, Fast RCNN, and Faster RCNN, described in what follows.
\textbf{SPPNet} \cite{He2014SPP}: During testing, CNN feature extraction is the main bottleneck of the RCNN detection pipeline, which requires the extraction of CNN features from thousands of warped region proposals per image. As a result, He \emph{et al.} \cite{He2014SPP} introduced traditional spatial pyramid pooling (SPP) \cite{Grauman2005pyramid,Lazebnik2006SPM} into CNN architectures.
Since convolutional layers accept inputs of arbitrary sizes, the requirement of fixed-sized images in CNNs is due only to the Fully Connected (FC) layers; therefore, He \emph{et al.} added an SPP layer on top of the last convolutional (CONV) layer to obtain features of fixed length for the FC layers. With this SPPNet, RCNN obtains a significant speedup without sacrificing any detection quality, because it only needs to run the convolutional layers \emph{once} on the entire test image to generate fixed-length features for region proposals of arbitrary size. While SPPNet accelerates RCNN evaluation by orders of magnitude, it does not result in a comparable speedup of detector training. Moreover, fine-tuning in SPPNet \cite{He2014SPP} is unable to update the convolutional layers before the SPP layer, which limits the accuracy of very deep networks.
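A minimal NumPy sketch of the SPP idea (our own illustration, with an assumed pyramid of $1\times1$, $2\times2$ and $4\times4$ grids): a CONV feature map of arbitrary spatial size is max-pooled over each grid cell, and the cell responses are concatenated into a fixed-length vector for the FC layers.
\begin{verbatim}
import numpy as np

def spp(feat, levels=(1, 2, 4)):
    # feat: (C, H, W) CONV feature map with arbitrary H and W.
    # Returns a vector of fixed length C * sum(n * n for n in levels).
    C, H, W = feat.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * H // n, max((i + 1) * H // n, i * H // n + 1)
                c0, c1 = j * W // n, max((j + 1) * W // n, j * W // n + 1)
                out.append(feat[:, r0:r1, c0:c1].max(axis=(1, 2)))
    return np.concatenate(out)

# The output length is independent of the input's spatial size:
print(spp(np.random.randn(256, 13, 17)).shape)   # (256 * 21,) = (5376,)
print(spp(np.random.randn(256, 24, 31)).shape)   # (5376,) again
\end{verbatim}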
\textbf{Fast RCNN} \cite{Girshick2015FRCNN}:
Girshick proposed Fast RCNN \cite{Girshick2015FRCNN} that addresses some of the disadvantages of RCNN and SPPNet, while improving on their detection speed and quality. As illustrated in Fig.~\ref{Fig:RegionVsUnified}, Fast RCNN enables end-to-end detector training by developing a streamlined training process that
simultaneously learns a softmax classifier and class-specific bounding
box regression, rather than separately training a softmax
classifier, SVMs, and Bounding Box Regressors (BBRs) as in RCNN/SPPNet.
Fast RCNN employs the idea of sharing the computation of convolution
across region proposals, and adds a Region of Interest (RoI) pooling layer
between the last CONV layer and the first FC layer to extract a fixed-length
feature for each region proposal.
Essentially, RoI pooling uses warping at the feature level to approximate warping at the image level. The features after the RoI pooling layer are fed into a sequence of FC layers that finally branch into two sibling output layers: softmax probabilities for object category prediction, and class-specific bounding box regression offsets for proposal refinement. Compared to RCNN/SPPNet, Fast RCNN improves the efficiency considerably -- typically 3 times faster in training and 10 times faster in testing. The result is higher detection quality, a single training process that updates all network layers, and no storage required for feature caching.
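The RoI pooling layer can be viewed as a single-level SPP applied per proposal: each RoI, projected onto the shared CONV feature map, is divided into a fixed grid (assumed $7\times7$ here) and max-pooled per cell. A minimal sketch of our own, under the simplifying assumption of integer RoI coordinates in feature-map space:
\begin{verbatim}
import numpy as np

def roi_pool(feat, roi, out=7):
    # feat: (C, H, W) shared CONV feature map; roi: (x1, y1, x2, y2)
    # in feature-map coordinates. Returns a fixed (C, out, out) block.
    x1, y1, x2, y2 = roi
    h, w = max(y2 - y1, 1), max(x2 - x1, 1)
    pooled = np.zeros((feat.shape[0], out, out))
    for i in range(out):
        for j in range(out):
            r0 = y1 + i * h // out
            r1 = y1 + max((i + 1) * h // out, i * h // out + 1)
            c0 = x1 + j * w // out
            c1 = x1 + max((j + 1) * w // out, j * w // out + 1)
            pooled[:, i, j] = feat[:, r0:r1, c0:c1].max(axis=(1, 2))
    return pooled

feat = np.random.randn(512, 38, 50)            # computed once per image
print(roi_pool(feat, (10, 5, 31, 26)).shape)   # (512, 7, 7) per proposal
\end{verbatim}
Because every RoI is pooled from the same shared feature map, the expensive CONV computation is performed once per image rather than once per proposal.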
\textbf{Faster RCNN} \cite{Ren2015NIPS,Ren2016a}:
Although Fast RCNN significantly sped up the detection process,
it still relies on external region proposals, whose computation
is exposed as the new speed bottleneck in Fast RCNN.
Recent work has shown that CNNs have a remarkable ability to
localize objects in CONV layers \cite{Zhoubolei2014,Zhou2016learning,
Cinbis2017,Oquab2015object,Hariharan2016}, an ability which is weakened in the
FC layers. Therefore, the selective search can be replaced by a CNN in producing region proposals.
The Faster RCNN framework proposed by Ren \emph{et al.} \cite{Ren2015NIPS,Ren2016a}
offered an efficient and accurate Region Proposal Network (RPN) for
generating region proposals. They utilize the same backbone network, using features from the last shared convolutional
layer to accomplish the task of RPN for region proposal and Fast RCNN for region classification, as
shown in Fig.~\ref{Fig:RegionVsUnified}.
RPN first initializes $k$ reference boxes (\emph{i.e.} the so called \emph{anchors}) of different scales and aspect ratios at each CONV feature map location. The anchor \emph{positions} are image content independent, but the feature vectors extracted at the anchors are image content dependent. Each anchor is mapped to a lower dimensional vector, which is fed into two sibling FC layers --- an object category classification layer and a box regression layer. In contrast to detection in Fast RCNN, the features used for regression in RPN are of the same shape as the
anchor box, thus $k$ anchors lead to $k$ regressors. RPN shares CONV features with Fast RCNN, thus enabling highly efficient region proposal computation. RPN is, in fact, a kind of Fully Convolutional Network (FCN) \cite{FCNCVPR2015,FCNTPAMI}; Faster RCNN is thus a purely CNN based framework without using handcrafted features.
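A minimal sketch of anchor generation as just described (our own illustration, with assumed but commonly used values: a feature stride of 16 and $k=9$ anchors per location from three scales and three aspect ratios):
\begin{verbatim}
import numpy as np

def generate_anchors(fh, fw, stride=16, scales=(128, 256, 512),
                     ratios=(0.5, 1.0, 2.0)):
    # Returns (fh * fw * k, 4) anchors in (x1, y1, x2, y2) image
    # coordinates, with k = len(scales) * len(ratios) boxes per location.
    base = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)  # area s^2, aspect r
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)                          # (k, 4) centered boxes
    ys, xs = np.meshgrid(np.arange(fh), np.arange(fw), indexing="ij")
    centers = stride * np.stack([xs, ys, xs, ys], -1).reshape(-1, 1, 4)
    return (centers + base).reshape(-1, 4)

print(generate_anchors(38, 50).shape)   # (38 * 50 * 9, 4) = (17100, 4)
\end{verbatim}
The anchor coordinates depend only on the feature map geometry, not on the image content, matching the description above.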
For the VGG16 model \cite{Simonyan2014VGG}, Faster RCNN can test at 5 FPS (including all stages) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 using 300 proposals per image. The initial Faster RCNN in \cite{Ren2015NIPS} contains several alternating training stages, later simplified in \cite{Ren2016a}.
Concurrent with the development of Faster RCNN, Lenc and Vedaldi \cite{Lenc2015} challenged the role of region proposal generation methods such as selective search, studied the role of region proposal generation in CNN based detectors, and found that CNNs contain sufficient geometric information for accurate object detection in the CONV rather than FC layers. They showed the possibility of building integrated, simpler, and faster object detectors that rely exclusively on CNNs, removing region proposal generation methods such as selective search.
{\textbf{RFCN (Region based Fully Convolutional Network)}}: While Faster RCNN is an order of magnitude faster than Fast RCNN, the fact that the region-wise sub-network still needs to be applied per RoI (several hundred RoIs per image) led Dai \emph{et al.} \cite{Dai2016RFCN} to propose the RFCN detector which is \emph{fully convolutional} (no hidden FC layers) with almost all computations shared over the entire image. As shown in Fig.~\ref{Fig:RegionVsUnified}, RFCN differs from Faster RCNN only in the RoI sub-network. In Faster RCNN, the computation after the RoI pooling layer cannot be shared, so Dai \emph{et al.} \cite{Dai2016RFCN} proposed using all CONV layers to construct a shared RoI sub-network, with RoI crops taken from the last layer of CONV features prior to prediction. However, Dai \emph{et al.} \cite{Dai2016RFCN} found that this naive design has considerably inferior detection accuracy, conjectured to be because deeper CONV layers are more sensitive to category semantics and less sensitive to translation, whereas object detection needs localization representations that respect translation \emph{variance}. Based on this observation, Dai \emph{et al.} \cite{Dai2016RFCN} constructed a set of position-sensitive score maps by using a bank of specialized CONV layers as the FCN output, on top of which a position-sensitive RoI pooling layer is added. They showed that RFCN with ResNet101 \cite{He2016ResNet} could achieve accuracy comparable to Faster RCNN, often at faster running times.
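The position-sensitive pooling can be sketched as follows: with $k\times k$ bins and $C+1$ categories (including background), the FCN outputs $k^{2}(C+1)$ score maps, and bin $(i,j)$ of an RoI pools only from its own dedicated group of $C+1$ maps. A simplified NumPy sketch (ours; the real implementation also handles spatial scaling and a second bank of maps for box regression):
\begin{verbatim}
import numpy as np

def ps_roi_pool(score_maps, roi, k=3, num_cls=21):
    # score_maps: (k*k*num_cls, H, W); roi: (x1, y1, x2, y2)
    x1, y1, x2, y2 = [int(round(v)) for v in roi]
    ys = np.linspace(y1, y2 + 1, k + 1).astype(int)
    xs = np.linspace(x1, x2 + 1, k + 1).astype(int)
    bins = np.zeros((k, k, num_cls))
    for i in range(k):
        for j in range(k):
            g = (i * k + j) * num_cls  # channel group for bin (i, j)
            patch = score_maps[g:g + num_cls,
                               ys[i]:max(ys[i] + 1, ys[i + 1]),
                               xs[j]:max(xs[j] + 1, xs[j + 1])]
            bins[i, j] = patch.reshape(num_cls, -1).mean(axis=1)
    return bins.mean(axis=(0, 1))      # per-class votes for the RoI

maps = np.random.rand(3 * 3 * 21, 38, 50)        # 21 = 20 classes + bg
print(ps_roi_pool(maps, (4, 3, 20, 17)).shape)   # (21,)
\end{verbatim}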
\textbf{Mask RCNN}: He \emph{et al.} \cite{MaskRCNN2017} proposed Mask RCNN to tackle pixelwise object instance segmentation by extending Faster RCNN. Mask RCNN adopts the same two stage pipeline, with an identical first stage (RPN), but in the second stage, in parallel to predicting the class and box offset, Mask RCNN adds a branch which outputs a binary mask for each RoI. The new branch is a Fully Convolutional Network (FCN) \cite{FCNCVPR2015,FCNTPAMI} on top of a CNN feature map. In order to avoid the misalignments caused by the original RoI pooling (RoIPool) layer, a RoIAlign layer was proposed to preserve the pixel level spatial correspondence. With a backbone network ResNeXt101-FPN \cite{Xie2016Aggregated,FPN2016}, Mask RCNN achieved top results for the COCO object instance segmentation and bounding box
object detection tasks. It is simple to train, generalizes well, and adds only a small overhead to Faster RCNN, running at 5 FPS \cite{MaskRCNN2017}.
\textbf{Chained Cascade Network and Cascade RCNN}: The essence of cascade \cite{Felzenszwalb2010Cascade,Bourdev2005Robust,Li2004Floatboost} is to learn more discriminative classifiers by using multistage
classifiers, such that early stages discard a large number
of easy negative samples so that later stages
can focus on handling more difficult examples. Two-stage object detection can be considered as a cascade, the first detector removing large amounts of background, and the second stage classifying the remaining regions. Recently, end-to-end learning of more than two cascaded classifiers and DCNNs for generic object detection were proposed in the Chained Cascade Network \cite{Ouyang2017Chained}, extended in Cascade RCNN \cite{CascadeRCNN2018}, and more recently applied for simultaneous object detection and instance segmentation \cite{Chen2019Hybrid}, winning the COCO 2018 Detection Challenge.
\textbf{Light Head RCNN}:
In order to further increase the detection speed of RFCN \cite{Dai2016RFCN},
Li \emph{et al.} \cite{Li2018Light} proposed Light Head RCNN, making the head of the detection network as light as possible to reduce the RoI computation. In particular, Li \emph{et al.} \cite{Li2018Light} applied a convolution to produce thin feature maps with small channel numbers (\emph{e.g.,} 490 channels for COCO) and a
cheap RCNN sub-network, leading to an excellent trade-off between speed and accuracy.
\subsection{Unified (One Stage) Frameworks}
\label{Sec:Unified}
The region-based pipeline strategies of Section~\ref{Sec:RegionBased}
have dominated since RCNN \cite{Girshick2014RCNN}, such that the leading results on popular benchmark
datasets are all based on Faster RCNN \cite{Ren2015NIPS}.
Nevertheless, region-based approaches are computationally
expensive for current mobile/wearable devices, which have limited storage and computational capability.
Therefore, instead of trying to optimize the individual components of a complex region-based pipeline, researchers have begun to develop \emph{unified} detection strategies.
Unified pipelines refer to architectures that directly predict class probabilities and bounding box offsets from full images with a single feed-forward CNN in a monolithic setting that does not involve region proposal generation or post classification / feature resampling, encapsulating all computation in a single network. Since the whole pipeline is a single network, it can be optimized end-to-end directly on detection performance.
\textbf{DetectorNet}: Szegedy \emph{et al.} \cite{Szegedy2013Deep} were among the first to explore CNNs for object detection. DetectorNet formulated object detection as a regression problem to object bounding box masks. They use AlexNet \cite{Krizhevsky2012} and replace the final softmax classifier layer with a regression layer. Given an image window, they use one network to predict foreground pixels over a coarse grid, as well as four additional networks to predict the object's top, bottom, left and right halves. A grouping process then converts the predicted masks into
detected bounding boxes. The network needs to be trained per object type and mask type, and does not scale to multiple classes. DetectorNet must take many crops of the image, and run multiple networks for each part on every crop, thus making it slow.
\begin {figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{OverFeat.pdf}
\caption{Illustration of the OverFeat \cite{OverFeat2014} detection framework.}
\label{fig:OverFeat}
\end {figure}
\textbf{OverFeat}, proposed by Sermanet \emph{et al.} \cite{OverFeat2014} and illustrated in Fig.~\ref{fig:OverFeat}, can be considered as one of the first single-stage object detectors based on fully convolutional deep networks. It is one of the most influential object detection frameworks, winning the ILSVRC2013
localization and detection competition. OverFeat performs object detection via a single forward pass through
the fully convolutional layers in the network (\emph{i.e.} the ``Feature Extractor", shown in Fig. \ref{fig:OverFeat} (a)). The key steps of object detection at test time can be summarized as follows:
\begin{enumerate}
\item \emph{Generate object candidates by performing object classification in a sliding window fashion on multiscale images.} OverFeat uses a CNN like AlexNet \cite{Krizhevsky2012}, which would require input images of a fixed size due to its fully connected layers. In order to make the sliding window approach computationally efficient, OverFeat casts the network (as shown in Fig. \ref{fig:OverFeat} (a)) into a fully convolutional network, taking inputs of any size, by viewing fully connected layers as convolutions with kernels of size $1\times1$ (see the sketch after this list). OverFeat leverages multiscale features to improve the overall performance by passing up to six enlarged scales of the original image through the network (as shown in Fig. \ref{fig:OverFeat} (b)), resulting in a significantly increased number of evaluated context views. For each of the multiscale inputs, the classifier outputs a grid of predictions (class and confidence).
\item \emph{Increase the number of predictions by offset max pooling}. In order to increase resolution, OverFeat applies offset max pooling after the last CONV layer, \emph{i.e.} performing a subsampling operation at every offset, yielding many more views for voting, increasing robustness while remaining efficient.
\item \emph{Bounding box regression.} Once an object is identified, a single bounding box regressor is applied. The classifier and the regressor share the same feature extraction (CONV) layers, only the FC layers need to be recomputed after computing the classification network.
\item \emph{Combine predictions.} OverFeat uses a greedy merge strategy to combine the individual bounding box predictions across all locations and scales.
\end{enumerate}
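The ``convolutionalization'' of step 1 can be made concrete with a short PyTorch sketch (ours; the layer sizes are toy values rather than the actual OverFeat configuration). An FC layer that operates on a $5\times5$ feature map is numerically identical to a $5\times5$ convolution, and the converted network then accepts larger inputs and emits a grid of predictions:
\begin{verbatim}
import torch
import torch.nn as nn

fc = nn.Linear(256 * 5 * 5, 4096)           # FC head at training size
conv = nn.Conv2d(256, 4096, kernel_size=5)  # equivalent convolution
conv.weight.data = fc.weight.data.view(4096, 256, 5, 5)
conv.bias.data = fc.bias.data

x = torch.randn(1, 256, 5, 5)               # training-size input
y_fc = fc(x.flatten(1))                     # -> (1, 4096)
y_conv = conv(x)                            # -> (1, 4096, 1, 1)
print(torch.allclose(y_fc, y_conv.flatten(1), atol=1e-5))  # True

big = torch.randn(1, 256, 7, 9)             # a larger test image
print(conv(big).shape)   # (1, 4096, 3, 5): a grid of predictions
\end{verbatim}
Subsequent FC layers become $1\times1$ convolutions in the same way, which is what makes the dense sliding window evaluation nearly free.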
OverFeat has a significant speed advantage, but is less accurate than RCNN \cite{Girshick2014RCNN}, because it was difficult to train fully convolutional networks at the time. The speed advantage derives from sharing the computation of convolution between overlapping windows in the fully convolutional network. OverFeat is similar to later frameworks such as YOLO \cite{YoLo2016} and SSD \cite{Liu2016SSD}, except that the classifier and the regressors in OverFeat are trained sequentially.
\textbf{YOLO}: Redmon \emph{et al.} \cite{YoLo2016} proposed YOLO (You Only Look Once), a unified detector casting object detection as a regression
problem from image pixels to spatially separated bounding boxes and associated class probabilities, illustrated in Fig.~\ref{Fig:RegionVsUnified}.
Since the region proposal generation stage is completely dropped,
YOLO directly predicts detections using a small set of candidate regions\footnote{YOLO uses far fewer bounding boxes, only 98
per image, compared to about 2000 from Selective Search.}.
Unlike region based approaches (\emph{e.g.} Faster RCNN) that predict detections based
on features from a local region, YOLO uses features from an entire image globally.
In particular, YOLO divides an image into an $S\times S$ grid, with each grid cell predicting $C$ class probabilities,
$B$ bounding box locations, and corresponding confidence scores.
By throwing out the region proposal generation step entirely, YOLO is fast by design, running in real time at 45 FPS and Fast YOLO \cite{YoLo2016} at 155 FPS. Since YOLO sees the entire image when making predictions, it implicitly encodes contextual information about object classes, and is less likely to predict false positives in the background. YOLO makes more localization errors than Fast RCNN, resulting from the coarse division of bounding box location, scale and aspect ratio.
As discussed in \cite{YoLo2016}, YOLO may fail to localize some objects, especially small ones, possibly because of the coarse grid division, and because each grid cell can only contain one object. It is unclear to what extent YOLO can translate to good performance
on datasets with many objects per image, such as MS COCO.
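The YOLO output encoding can be made concrete with a small decoding sketch (ours; it uses the VOC configuration $S=7$, $B=2$, $C=20$ from \cite{YoLo2016} and omits training-time details such as the square-root width/height parameterization):
\begin{verbatim}
import numpy as np

S, B, C = 7, 2, 20                      # grid, boxes per cell, classes
pred = np.random.rand(S, S, B * 5 + C)  # network output: 7 x 7 x 30

boxes = []
for row in range(S):
    for col in range(S):
        cell = pred[row, col]
        class_probs = cell[B * 5:]       # Pr(class | object)
        for b in range(B):
            x, y, w, h, conf = cell[b * 5: b * 5 + 5]
            # (x, y) are offsets within the cell; (w, h) are relative
            # to the whole image; conf ~ Pr(object) * IoU
            cx, cy = (col + x) / S, (row + y) / S
            scores = conf * class_probs  # class-specific confidence
            boxes.append((cx, cy, w, h, scores.argmax(), scores.max()))

print(len(boxes))  # S*S*B = 98 candidates, matching the footnote above
\end{verbatim}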
\textbf{YOLOv2 and YOLO9000}: Redmon and Farhadi \cite{YOLO9000} proposed YOLOv2, an improved version of YOLO, in which the custom GoogLeNet \cite{GoogLeNet2015} network is replaced with the simpler DarkNet19, plus batch normalization \cite{Ioffe2015}, removing the fully connected layers, and using good anchor boxes\footnote{Boxes of various sizes and aspect ratios that serve as object candidates.} learned via $k$-means and multiscale training.
YOLOv2 achieved state-of-the-art on standard detection tasks. Redmon and Farhadi \cite{YOLO9000} also introduced YOLO9000, which can detect over 9000 object categories in real time by proposing a joint optimization method to train simultaneously on an ImageNet classification dataset and a COCO detection dataset with WordTree to combine data from multiple sources. Such joint training allows YOLO9000 to perform weakly supervised detection, \emph{i.e.} detecting object classes that do not have bounding box annotations.
\textbf{SSD}: In order to preserve real-time speed without sacrificing too much detection accuracy, Liu \emph{et al.} \cite{Liu2016SSD} proposed SSD (Single Shot Detector), faster than YOLO \cite{YoLo2016} and with an accuracy competitive with region-based detectors such as Faster RCNN \cite{Ren2015NIPS}. SSD effectively combines ideas from RPN in Faster RCNN \cite{Ren2015NIPS}, YOLO \cite{YoLo2016} and multiscale CONV features \cite{Hariharan2016} to achieve fast detection speed, while still retaining high detection quality. Like YOLO, SSD predicts a fixed number of bounding boxes and scores, followed by an NMS step to produce the final detection. The CNN network in SSD is fully convolutional, whose early layers are based on a standard architecture, such as VGG \cite{Simonyan2014VGG}, followed by several auxiliary CONV layers, progressively decreasing in size. The information in the last layer may be too coarse spatially to allow precise localization, so SSD performs detection over multiple scales by operating on multiple CONV feature maps, each of which predicts category scores and box offsets for bounding boxes of appropriate sizes. For a $300\times300$ input, SSD achieves $74.3\%$ mAP on the VOC2007 test set at 59 FPS, versus Faster RCNN ($73.2\%$ mAP at 7 FPS) or YOLO ($63.4\%$ mAP at 45 FPS).
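The NMS step used by SSD, YOLO and most other detectors is a simple greedy procedure; a minimal NumPy sketch (ours; the 0.45 overlap threshold is a typical value, not prescribed by any particular detector):
\begin{verbatim}
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    # boxes: (N, 4) as (x1, y1, x2, y2); greedy suppression by score
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # IoU of the current top-scoring box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[rest, 2] - boxes[rest, 0]) *
                  (boxes[rest, 3] - boxes[rest, 1]))
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

b = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30.]])
print(nms(b, np.array([0.9, 0.8, 0.7])))   # [0, 2]
\end{verbatim}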
\textbf{CornerNet:} Recently, Law \emph{et al.} \cite{Law2018CornerNet} questioned the dominant role that anchor boxes have come to play in SoA object detection frameworks \cite{Girshick2015FRCNN,MaskRCNN2017,YoLo2016,Liu2016SSD}. Law \emph{et al.} \cite{Law2018CornerNet} argue that the use of anchor boxes, especially in one stage detectors \cite{DSSD2016,LinICCV2017,Liu2016SSD,YoLo2016}, has drawbacks \cite{Law2018CornerNet,LinICCV2017} such as
causing a huge imbalance between positive and negative examples, slowing down training and introducing extra hyperparameters. Borrowing ideas from the work on Associative Embedding in multiperson pose estimation \cite{Newell2017Associative}, Law \emph{et al.} \cite{Law2018CornerNet} proposed CornerNet by formulating bounding box object detection as detecting paired top-left and bottom-right keypoints\footnote{The idea of using keypoints for object detection appeared previously in DeNet \cite{SmithICCV2017}. }. In CornerNet, the backbone network consists of two stacked Hourglass networks \cite{Newell2016Stacked}, with a simple corner pooling approach to better
localize corners. CornerNet achieved a $42.1\%$ AP on MS COCO, outperforming all previous one stage detectors; however, the average inference time is about 4 FPS on a Titan X GPU, significantly slower than SSD \cite{Liu2016SSD} and YOLO \cite{YoLo2016}. CornerNet can generate incorrect bounding boxes, because it is challenging to decide which pairs of keypoints should be grouped into the same object. To further improve on CornerNet, Duan \emph{et al.} \cite{Duan2019CenterNet} proposed CenterNet to detect each object as a triplet of keypoints, by introducing one extra keypoint at the centre of a proposal, raising the MS COCO AP to $47.0\%$, but with an inference speed slower than CornerNet.
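Corner pooling itself is a simple operation: for a top-left corner, each location aggregates the maximum activation over all positions to its right and all positions below it, so that corner evidence can be gathered from the object interior. A minimal NumPy sketch (ours) using running maxima:
\begin{verbatim}
import numpy as np

def top_left_corner_pool(fmap):
    # fmap: (C, H, W); suffix max along width (looking right) plus
    # suffix max along height (looking down), as in CornerNet
    h = np.flip(np.maximum.accumulate(np.flip(fmap, -1), -1), -1)
    v = np.flip(np.maximum.accumulate(np.flip(fmap, -2), -2), -2)
    return h + v

x = np.random.rand(256, 64, 64)       # a CONV feature map
print(top_left_corner_pool(x).shape)  # (256, 64, 64)
\end{verbatim}
The bottom-right counterpart uses prefix maxima (looking left and up) instead.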
\section{Object Representation}
\label{Sec:DCNNFeatures}
As one of the main components in any
detector, good feature representations are of primary importance in object detection \cite{Dickinson2009,Girshick2014RCNN,Gidaris2015,Zhu2016Do}.
In the past, a great deal of effort was devoted to designing local descriptors (\emph{e.g.,} SIFT \cite{Lowe1999Object} and HOG \cite{Dalal2005HOG}) and to exploring approaches (\emph{e.g.,} Bag of Words \cite{Sivic2003} and Fisher Vector \cite{Perronnin2010}) for grouping and abstracting descriptors into higher level representations in order to allow the discriminative parts to emerge; however, these feature representation methods required careful engineering and considerable domain expertise.
In contrast, deep learning methods (especially {\em deep} CNNs) can learn powerful feature representations with multiple levels of abstraction directly from raw images \cite{Bengio13Feature,LeCun15}. As the learning procedure reduces the dependency on specific domain knowledge and on the complex procedures needed in traditional feature engineering \cite{Bengio13Feature,LeCun15}, the burden for feature representation has been transferred to the design of better network architectures and training procedures.
The leading frameworks reviewed in Section \ref{Sec:Frameworks} (RCNN \cite{Girshick2014RCNN}, Fast RCNN \cite{Girshick2015FRCNN}, Faster RCNN \cite{Ren2015NIPS}, YOLO \cite{YoLo2016}, SSD \cite{Liu2016SSD}) have persistently promoted detection accuracy and speed, and it is generally accepted that the CNN architecture (Section \ref{Sec:PopularNetworks}, Table \ref{Tab:dcnnarchitectures} and Fig.~\ref{fig:ILSVRCclassificationResults}) plays a crucial role. As a result, most of the recent improvements in detection accuracy have been via research into the development of novel networks. Therefore we begin by reviewing popular CNN architectures used in Generic Object Detection, followed by a review of the effort devoted to improving object feature representations, such as developing invariant features to accommodate geometric variations in object scale, pose, viewpoint and part deformation, and performing multiscale analysis to improve object detection over a wide range of scales.
\begin {figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{ILSVRCclassificationResults.pdf}
\caption{\footnotesize{Performance of winning entries in the ILSVRC competitions from 2011 to 2017
in the image classification task.}}
\label{fig:ILSVRCclassificationResults}
\end {figure}
\begin{table*}[!t]
\caption {DCNN architectures that were commonly used for generic object detection. Regarding the statistics for ``\#Paras'' and ``\#Layers'', the final FC prediction layer is not taken into consideration. The ``Test Error'' column indicates the Top 5 classification test error on ImageNet1000. When ambiguous, the ``\#Paras'', ``\#Layers'', and ``Test Error'' refer to: OverFeat (accurate model), VGGNet16, ResNet101, DenseNet201 (Growth Rate 32, DenseNet-BC), ResNeXt50 (32*4d), and SE ResNet50.
}\label{Tab:dcnnarchitectures}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{2pt}
\resizebox*{18cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|c|c|c|c|c|p{9cm}!{\vrule width1.2bp}}
\Xhline{1pt}
\footnotesize No. & \footnotesize \shortstack [c] {DCNN \\ Architecture} & \footnotesize \shortstack [c] {\#Paras \\ ($\times10^6$)} & \footnotesize \shortstack [c] {\#Layers \\ (CONV+FC)} & \footnotesize \shortstack [c] {Test Error \\ (Top 5)} & \footnotesize \shortstack [c] { \shortstack [c] {First \\ Used In}} & \footnotesize Highlights \\
\Xhline{1pt}
\raisebox{-3.3ex}[0pt]{$1$} & \raisebox{-3.3ex}[0pt]{\footnotesize AlexNet \cite{AlexNet2012}}
& \raisebox{-3.3ex}[0pt]{ \footnotesize $57$}& \raisebox{-3.3ex}[0pt]{\footnotesize $5+2 $}
& \raisebox{-3.3ex}[0pt]{\footnotesize $15.3\%$}
& \raisebox{-3.3ex}[0pt]{\footnotesize \cite{Girshick2014RCNN}} & \footnotesize
The first DCNN found effective for ImageNet classification; a historic turning point from hand-crafted features to CNNs;
won the ILSVRC2012 image classification competition. \\
\hline
$2$ & \footnotesize ZFNet (fast)
\cite{ZeilerFergus2014} & \footnotesize $58$ & \footnotesize$ 5+2 $ &\footnotesize $14.8\%$ & \footnotesize \cite{He2014SPP}& \footnotesize Similar to AlexNet, but with a different convolution stride, filter sizes, and numbers of filters in some layers. \\
\hline
$3 $& \footnotesize OverFeat \cite{OverFeat2014} & \footnotesize $140 $& \footnotesize$ 6+2$ &\footnotesize $13.6\%$ & \footnotesize \cite{OverFeat2014} & \footnotesize Similar to AlexNet, but with a different convolution stride, filter sizes, and numbers of filters in some layers. \\
\hline
\raisebox{-1.3ex}[0pt]{$4$}&\raisebox{-1.3ex}[0pt]{ \footnotesize VGGNet \cite{Simonyan2014VGG}} & \raisebox{-1.3ex}[0pt]{\footnotesize$ 134$} & \raisebox{-1.3ex}[0pt]{\footnotesize $13+2 $} &
\raisebox{-1.3ex}[0pt]{\footnotesize $6.8\%$}&\raisebox{-1.3ex}[0pt]{ \footnotesize \cite{Girshick2015FRCNN}}& \footnotesize Increasing network depth significantly by stacking $3\times3$ convolution filters and increasing the network depth step by step. \\
\hline
\raisebox{-2.3ex}[0pt]{$5$}&\raisebox{-2.3ex}[0pt]{ \footnotesize GoogLeNet
\cite{GoogLeNet2015} }&\raisebox{-2.3ex}[0pt]{ \footnotesize$ 6$} & \raisebox{-2.3ex}[0pt]{\footnotesize $ 22$ } & \raisebox{-2.3ex}[0pt]{\footnotesize $6.7\%$} &\raisebox{-2.3ex}[0pt]{ \footnotesize \cite{GoogLeNet2015} }& \footnotesize Use Inception module, which uses multiple branches of convolutional layers with different filter sizes and then concatenates feature maps produced by these branches. The first inclusion of bottleneck structure and global average pooling. \\
\hline
$6$ & \footnotesize Inception v2 \cite{Ioffe2015} & \footnotesize $12$ & \footnotesize $31$ & \footnotesize $4.8\%$ & \footnotesize \cite{Howard2017MobileNets} & \footnotesize Faster training with the introduction of Batch Normalization.\\
\hline
$7$ & \footnotesize Inception v3
\cite{ Szegedy2016a} & \footnotesize $22$ & \footnotesize $47$ & \footnotesize $3.6\%$ & \footnotesize & \footnotesize Inclusion of separable convolution and spatial resolution reduction. \\
\hline
$8$ & \footnotesize YOLONet \cite{YoLo2016} & \footnotesize $64 $& \footnotesize $24+1$ & \footnotesize$-$ & \footnotesize \cite{YoLo2016} & \footnotesize A network inspired by GoogLeNet used in YOLO detector. \\
\hline
$9$& \footnotesize ResNet50
\cite{He2016ResNet} & \footnotesize$ 23.4 $& \footnotesize $49$ & \footnotesize $3.6\%$ & \footnotesize \cite{He2016ResNet} & \footnotesize With identity mapping, substantially deeper networks can be learned. \\
\cline{1-4}\cline{6-6}
$10$ & \footnotesize ResNet101
\cite{He2016ResNet} & \footnotesize $42$ & \footnotesize $100 $ & \footnotesize (ResNets) & \footnotesize \cite{He2016ResNet} & \footnotesize
Requires fewer parameters than VGG by using the global
average pooling and bottleneck introduced in GoogLeNet. \\
\hline
\raisebox{-1.3ex}[0pt]{$11$} & \raisebox{-1.3ex}[0pt]{ \footnotesize InceptionResNet v1
\cite{InceptionV4} } & \raisebox{-1.3ex}[0pt]{\footnotesize $21$ }& \raisebox{-1.3ex}[0pt]{\footnotesize $87$} &\multirow{3}{*}{ \footnotesize $3.1\%$} &\raisebox{-1.3ex}[0pt]{ \footnotesize} & \footnotesize Combination of identity mapping and Inception module, with similar computational cost of Inception v3, but faster training process. \\
\cline{1-4}\cline{6-7}
\raisebox{-1.3ex}[0pt]{$12$} &\raisebox{-1.3ex}[0pt]{ \footnotesize InceptionResNet v2
\cite{InceptionV4}} &\raisebox{-1.3ex}[0pt]{ \footnotesize $30$ }& \raisebox{-1.3ex}[0pt]{\footnotesize $95$ }& \raisebox{-0.3ex}[0pt]{ \footnotesize (Ensemble) } &\raisebox{-1.3ex}[0pt]{ \footnotesize \cite{Huang2016Speed}} & \footnotesize A costlier residual version of Inception, with significantly improved recognition performance. \\
\cline{1-4}\cline{6-7}
\raisebox{-1.3ex}[0pt]{$13 $}&\raisebox{-1.3ex}[0pt]{ \footnotesize Inception v4
\cite{InceptionV4}} & \raisebox{-1.3ex}[0pt]{\footnotesize $41$} &\raisebox{-1.3ex}[0pt]{ \footnotesize $75$ } & \raisebox{-1.3ex}[0pt]{\footnotesize} & \footnotesize & \footnotesize An Inception variant without residual connections, with roughly the same recognition performance as InceptionResNet v2, but significantly slower. \\
\hline
\raisebox{-1.3ex}[0pt]{$14$} &\raisebox{-1.3ex}[0pt]{ \footnotesize ResNeXt
\cite{ Xie2016Aggregated}} &\raisebox{-1.3ex}[0pt]{ \footnotesize $23 $}&\raisebox{-1.3ex}[0pt]{ \footnotesize $49 $} &\raisebox{-1.3ex}[0pt]{ \footnotesize $3.0\%$} &\raisebox{-1.3ex}[0pt]{ \footnotesize \cite{Xie2016Aggregated}}& \footnotesize Repeating a building block that aggregates a set of transformations with the same topology. \\
\hline
\raisebox{-3.3ex}[0pt]{$15$} & \raisebox{-3.3ex}[0pt]{\footnotesize DenseNet201
\cite{Huang2016Densely}} &\raisebox{-3.3ex}[0pt]{ \footnotesize $18$} & \raisebox{-3.3ex}[0pt]{ \footnotesize$ 200$ } &\raisebox{-3.3ex}[0pt]{ \footnotesize $-$} & \raisebox{-3.3ex}[0pt]{\footnotesize \cite{Zhou2018Scale}} & \footnotesize Concatenates each layer with every other layer in a feedforward fashion; alleviates the vanishing gradient problem, encourages feature reuse, and reduces the number of parameters.\\
\hline
$16$ & \footnotesize DarkNet
\cite{YOLO9000} & \footnotesize $20$ & \footnotesize $19$ & \footnotesize $-$& \footnotesize \cite{YOLO9000} & \footnotesize Similar to VGGNet, but with significantly fewer parameters. \\
\hline
\raisebox{-1.3ex}[0pt]{$17$} &\raisebox{-1.3ex}[0pt]{ \footnotesize MobileNet \cite{Howard2017MobileNets}} &\raisebox{-1.3ex}[0pt]{ \footnotesize $3.2$ }& \raisebox{-1.3ex}[0pt]{\footnotesize $27+1$ }& \raisebox{-1.3ex}[0pt]{\footnotesize $-$ } & \raisebox{-1.3ex}[0pt]{ \footnotesize \cite{Howard2017MobileNets}} & \footnotesize Light weight deep CNNs using depth-wise separable convolutions. \\
\hline
\raisebox{-3.3ex}[0pt]{$18$}&\raisebox{-3.3ex}[0pt]{ \footnotesize SE ResNet \cite{ Hu2018Squeeze} }& \raisebox{-3.3ex}[0pt]{\footnotesize $26$} &\raisebox{-3.3ex}[0pt]{ \footnotesize $50$}
&\raisebox{-3.3ex}[0pt]{ \shortstack [c] {$2.3\%$ \\ (SENets)} }& \raisebox{-3.3ex}[0pt]{\footnotesize \cite{ Hu2018Squeeze} }& \footnotesize Channel-wise attention by a novel block called \emph{Squeeze and Excitation}. Complementary to existing backbone CNNs. \\
\Xhline{1pt}
\end{tabular}
}
\end{table*}
\subsection{Popular CNN Architectures}
\label{Sec:PopularNetworks}
CNN architectures (Section \ref{Sec:CNNintro}) serve as
network backbones used in the detection frameworks of Section~\ref{Sec:Frameworks}. Representative frameworks include AlexNet \cite{AlexNet2012}, ZFNet \cite{ZeilerFergus2014}, VGGNet \cite{Simonyan2014VGG}, GoogLeNet \cite{GoogLeNet2015}, the Inception series \cite{Ioffe2015,Szegedy2016a,InceptionV4}, ResNet \cite{He2016ResNet}, DenseNet \cite{Huang2016Densely} and SENet \cite{ Hu2018Squeeze}, summarized in Table \ref{Tab:dcnnarchitectures}, with the improvement over time shown in Fig.~\ref{fig:ILSVRCclassificationResults}.
A further review of recent CNN advances can be found in \cite{Gu2015Recent}.
The trend in architecture evolution is toward greater depth: AlexNet has 8 layers and VGGNet 16 layers, while more recently ResNet and DenseNet both surpassed the 100-layer mark; it was VGGNet \cite{Simonyan2014VGG} and GoogLeNet \cite{GoogLeNet2015} which showed that increasing depth can improve the representational power. As can be observed from Table~\ref{Tab:dcnnarchitectures}, networks such as AlexNet, OverFeat, ZFNet and VGGNet have an enormous number of parameters, despite being only a few layers deep, since a large fraction of the parameters come from the FC layers. Newer networks like Inception, ResNet, and DenseNet, although of great depth, actually have far fewer parameters by avoiding the use of FC layers.
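A quick back-of-envelope count shows where the parameters of such networks reside (layer sizes taken from the VGG16 architecture; bias terms omitted):
\begin{verbatim}
# one 3x3 CONV layer with 256 input and 256 output channels
conv3_3 = 3 * 3 * 256 * 256      # ~0.6M parameters
# first FC layer: flattened 7x7x512 map to 4096 units
fc6 = 7 * 7 * 512 * 4096         # ~103M parameters
fc7 = 4096 * 4096                # ~17M parameters
print(conv3_3, fc6, fc7)         # 589824 102760448 16777216
\end{verbatim}
A single FC layer thus carries two orders of magnitude more parameters than a typical CONV layer, which is why FC-free designs are so much smaller.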
With the use of Inception modules \cite{GoogLeNet2015} in carefully designed topologies, the number of parameters of GoogLeNet is dramatically reduced, compared to AlexNet, ZFNet or VGGNet. Similarly, ResNet demonstrated the effectiveness of skip connections for learning extremely deep networks with hundreds of layers, winning the ILSVRC 2015 classification task. Inspired by ResNet \cite{He2016ResNet}, InceptionResNets \cite{InceptionV4} combined the Inception networks with shortcut connections, on the basis that shortcut connections can significantly accelerate network training. Extending ResNets, Huang \emph{et al.} \cite{Huang2016Densely} proposed DenseNets, which are built from dense blocks, connecting each layer
to every other layer in a feedforward fashion, leading to compelling
advantages such as parameter efficiency, implicit deep supervision\footnote{DenseNets perform deep supervision in an implicit way, \emph{i.e.} individual layers receive additional supervision from other layers through the shorter connections. The benefits of deep supervision have previously
been demonstrated in Deeply Supervised Nets (DSN) \cite{Lee2015Deeply}.}, and feature reuse.
Recently, Hu \emph{et al.} \cite{Hu2018Squeeze} proposed Squeeze and Excitation (SE) blocks, which can be combined with existing deep architectures to boost their performance at minimal additional computational
cost, adaptively recalibrating channel-wise feature responses by explicitly modeling
the interdependencies between convolutional feature channels, and which led to winning the ILSVRC 2017 classification task. Research on CNN architectures remains active, with emerging networks such as Hourglass \cite{Law2018CornerNet}, Dilated Residual Networks \cite{Yu2017Dilated}, Xception \cite{Chollet2017Xception}, DetNet \cite{Li2018DetNet}, Dual Path Networks (DPN) \cite{Chen2017Dual}, FishNet \cite{Sun2018Fishnet}, and GLoRe \cite{Chen2019Graph}.
The training of a CNN requires a large-scale labeled dataset with intraclass diversity. Unlike image classification, detection requires localizing (possibly many) objects from an image. It has been shown \cite{Ouyang2016} that pretraining a deep model with a large scale dataset having object level annotations (such as ImageNet), instead of only the image level annotations, improves the detection performance. However, collecting bounding box labels is expensive, especially for hundreds of thousands of categories. A common scenario is for a CNN to be pretrained on a large dataset (usually with a large number of visual categories) with image-level labels; the pretrained CNN can then be applied to a small dataset, directly, as a generic feature extractor \cite{Razavian2014,Azizpour2016,Donahue2014DeCAF,Yosinski2014Transferable}, which can support a wider range of visual recognition tasks. For detection, the pre-trained network is typically fine-tuned\footnote{Fine-tuning is done by initializing a network with weights
optimized for a large labeled dataset like ImageNet, and then updating the network's weights using the target-task training set.} on a given detection dataset \cite{Donahue2014DeCAF,Girshick2014RCNN,Girshick2016TPAMI}.
Several large scale image classification datasets are used for CNN pre-training, among them ImageNet1000 \cite{ImageNet2009,Russakovsky2015} with 1.2 million images of 1000 object categories, Places \cite{Zhou2017Places}, which is much larger than ImageNet1000 but with fewer classes, a recent Places-Imagenet hybrid \cite{Zhou2017Places}, or JFT300M \cite{Hinton2015Distilling,Sun2017Revisiting}.
Pretrained CNNs without fine-tuning were explored for object classification and detection in \cite{Donahue2014DeCAF,Girshick2016TPAMI,Agrawal2014}, where it was shown that detection accuracies are different for features extracted from different layers; for example, for AlexNet pre-trained on ImageNet, FC6 / FC7 / Pool5 are in descending order of detection accuracy \cite{Donahue2014DeCAF,Girshick2016TPAMI}. Fine-tuning a pre-trained network can increase detection performance significantly \cite{Girshick2014RCNN,Girshick2016TPAMI}, although in the case of AlexNet, the fine-tuning performance boost was shown to be much larger for FC6 / FC7 than for
Pool5, suggesting that Pool5 features are more general. Furthermore, the relationship between the source and target datasets plays a critical role; for example, ImageNet based CNN features show better performance for object detection than for human action recognition \cite{Zhoubolei2014,Azizpour2016}.
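A typical recipe looks as follows, in a PyTorch-style sketch (ours; the class count and freezing policy are illustrative, and newer torchvision versions pass a weights= argument instead of pretrained=True):
\begin{verbatim}
import torch.nn as nn
from torchvision import models

# start from ImageNet-pretrained weights
backbone = models.resnet50(pretrained=True)

# option 1: fixed feature extractor -- freeze all pretrained layers
for p in backbone.parameters():
    p.requires_grad = False

# replace the classification head for the target task
# (the new layer is trainable by default)
backbone.fc = nn.Linear(backbone.fc.in_features, 21)  # e.g. 20 + bg

# option 2: fine-tuning -- leave requires_grad True (possibly only
# for later blocks) and train with a small learning rate
\end{verbatim}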
\begin{table*}[!t]
\caption {Summary of properties of representative methods in improving DCNN feature representations for generic object detection. Details for Groups (1), (2), and (3) are provided in Section \ref{Sec:EnhanceFeatures}. Abbreviations: Selective Search (SS), EdgeBoxes (EB), InceptionResNet (IRN). \emph{Conv-Deconv} denotes the use of upsampling and
convolutional layers with lateral connections to supplement the standard backbone network. Detection results on VOC07, VOC12 and COCO were reported with mAP@IoU=0.5, and the additional COCO results are computed as the average of mAP for IoU thresholds from 0.5 to 0.95. Training data: ``07''$\leftarrow$VOC2007 trainval; ``07T''$\leftarrow$VOC2007 trainval and test; ``12''$\leftarrow$VOC2012 trainval; CO$\leftarrow$ COCO trainval. The COCO detection results were reported on COCO2015 Test-Dev, except for MPN \cite{Zagoruyko2016}, which was reported on COCO2015 Test-Standard.}\label{Tab:EnhanceFeatures}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{18.5cm}{!}{
\begin{tabular}{!{\vrule width1.2bp}c|c|c|c|c|c|c|c|c|c|p{8cm}<{\centering}!{\vrule width1.2bp}}
\Xhline{1.5pt}
\footnotesize & \footnotesize Detector & \footnotesize Region & \footnotesize Backbone & \footnotesize Pipelined & \multicolumn{3}{c|}{mAP@IoU=0.5} & \footnotesize mAP & \footnotesize Published & \footnotesize \\
\cline{6-9}
\footnotesize Group & \footnotesize Name & \footnotesize Proposal & \footnotesize DCNN & \footnotesize Used & \footnotesize VOC07 & \footnotesize VOC12 & \footnotesize COCO & \footnotesize COCO & \footnotesize In & \footnotesize Highlights \\
\Xhline{1.5pt}
\footnotesize \multirow{3}{*}{\rotatebox{90}{\scriptsize \shortstack [c] {\textbf{(1) Single detection }\\ \textbf{with multilayer features}$\quad$} }} &\footnotesize \raisebox{-3.5ex}[0pt]{ION \cite{Bell2016ION}} & \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] {SS+EB\\MCG+RPN} }& \footnotesize \raisebox{-3.5ex}[0pt]{VGG16} & \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] {Fast \\ RCNN}} & \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] {$79.4$\\(07+12)}} & \footnotesize \raisebox{-4.5ex}[0pt]{\shortstack [c] {$76.4$\\(07+12)}} & \footnotesize \raisebox{-3.5ex}[0pt]{$55.7$}& \footnotesize \raisebox{-3.5ex}[0pt]{$33.1$} & \footnotesize \raisebox{-3.5ex}[0pt]{CVPR16}& \footnotesize Use features from multiple layers; use spatial recurrent neural networks for modeling contextual information; the Best Student Entry and the $3^{\textrm{rd}}$ overall in the COCO detection challenge 2015. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{HyperNet \cite{HyperNet2016}} & \footnotesize\raisebox{-2ex}[0pt]{ RPN }& \footnotesize \raisebox{-2ex}[0pt]{ VGG16} & \footnotesize
\raisebox{-2.5ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] {$76.3$\\(07+12)}} & \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] {$71.4$\\(07T+12)}} & \footnotesize \raisebox{-2ex}[0pt]{$-$}& \footnotesize \raisebox{-2ex}[0pt]{$-$} & \footnotesize \raisebox{-2ex}[0pt]{CVPR16 } & \footnotesize Use features from multiple layers
for both region proposal and region classification. \\
\cline{2-11}
& \footnotesize \raisebox{-3ex}[0pt]{ PVANet \cite{PVANET2016}} & \footnotesize \raisebox{-3ex}[0pt]{RPN} & \footnotesize \raisebox{-3ex}[0pt]{PVANet} & \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {$\textbf{84.9}$\\(07+12+CO)}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] { $\textbf{84.2}$\\(07T+12+CO)}}& \footnotesize \raisebox{-3ex}[0pt]{ $-$}& \footnotesize \raisebox{-3ex}[0pt]{ $-$} & \footnotesize \raisebox{-3ex}[0pt]{NIPSW16} & \footnotesize Deep but lightweight; Combine ideas from concatenated ReLU \cite{Shang2016Understanding}, Inception \cite{GoogLeNet2015}, and HyperNet \cite{HyperNet2016}. \\
\Xhline{1.5pt}
\footnotesize \multirow{5}{*}{\rotatebox{90}{\scriptsize \shortstack [c] {\textbf{(2) Detection at multiple layers}$\quad\quad\quad\quad\quad\quad$} }} & \footnotesize \raisebox{-5ex}[0pt]{SDP+CRC \cite{Yang2016Exploit}} & \footnotesize \raisebox{-5ex}[0pt]{EB} & \footnotesize \raisebox{-5ex}[0pt]{VGG16 }& \footnotesize \raisebox{-6ex}[0pt]{ \shortstack [c] {Fast \\ RCNN}} & \footnotesize \raisebox{-6ex}[0pt]{ \shortstack [c] { $69.4$\\(07)}}& \footnotesize \raisebox{-5ex}[0pt]{$-$} & \footnotesize\raisebox{-5ex}[0pt]{ $-$}& \footnotesize\raisebox{-5ex}[0pt]{ $-$} & \footnotesize \raisebox{-5ex}[0pt]{CVPR16} & \footnotesize Use features in multiple layers to reject easy negatives via CRC, and then classify remaining proposals using SDP. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{MSCNN \cite{MSCNN2016}} & \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-2ex}[0pt]{VGG }& \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \multicolumn{4}{c|}{\raisebox{-2ex}[0pt]{\footnotesize Only Tested on KITTI}} & \footnotesize\raisebox{-2ex}[0pt]{ ECCV16 } & \footnotesize Region proposal and classification are performed at multiple layers; includes feature upsampling; end to end learning. \\
\cline{2-11}
& \footnotesize \raisebox{-5ex}[0pt]{MPN \cite{Zagoruyko2016}} & \footnotesize \raisebox{-5ex}[0pt]{SharpMask \cite{Pinheiro2016}} & \footnotesize
\raisebox{-5ex}[0pt]{VGG16} & \footnotesize \raisebox{-6ex}[0pt]{ \shortstack [c] {Fast \\ RCNN}} & \footnotesize \raisebox{-5ex}[0pt]{$-$} & \footnotesize \raisebox{-5ex}[0pt]{$-$}& \footnotesize \raisebox{-5ex}[0pt]{$51.9$} & \footnotesize \raisebox{-5ex}[0pt]{$33.2$} & \footnotesize \raisebox{-5ex}[0pt]{BMVC16} & \footnotesize Concatenate features from different convolutional layers and features of different contextual regions; loss function for multiple overlap thresholds; ranked $2^{\textrm{nd}}$ in both the COCO15 detection and segmentation challenges. \\
\cline{2-11}
& \footnotesize\raisebox{-2ex}[0pt]{ DSOD \cite{ShenICCV2017}} & \footnotesize \raisebox{-2ex}[0pt]{ Free} & \footnotesize \raisebox{-2ex}[0pt]{DenseNet} & \footnotesize \raisebox{-2ex}[0pt]{SSD} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $77.7$\\(07+12)}}
& \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $72.2$\\(07T+12)}}& \footnotesize \raisebox{-2ex}[0pt]{ $47.3$ } & \footnotesize \raisebox{-2ex}[0pt]{ $29.3$ } & \footnotesize \raisebox{-2ex}[0pt]{ICCV17} & \footnotesize Concatenate feature sequentially, like DenseNet. Train from scratch on the target dataset without pre-training. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{RFBNet \cite{Liu2017Receptive}} & \footnotesize \raisebox{-2ex}[0pt]{Free} & \footnotesize\raisebox{-2ex}[0pt]{ VGG16 }& \footnotesize \raisebox{-2ex}[0pt]{SSD} & \footnotesize\raisebox{-3ex}[0pt]{ \shortstack [c] { $82.2$\\(07+12)}}& \footnotesize \raisebox{-3ex}[0pt]{\shortstack [c] { $81.2$\\(07T+12)}} & \footnotesize \raisebox{-2ex}[0pt]{ $55.7$} & \footnotesize \raisebox{-2ex}[0pt]{ $34.4$} & \footnotesize \raisebox{-2ex}[0pt]{ ECCV18} & \footnotesize Propose a multi-branch convolutional block similar to Inception \cite{GoogLeNet2015}, but using dilated convolution. \\
\Xhline{1.5pt}
\footnotesize \multirow{8}{*}{\rotatebox{90}{\scriptsize \textbf{(3) Combination of (1) and (2) $\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$} }} & \footnotesize \raisebox{-2ex}[0pt]{DSSD \cite{DSSD2016}} & \footnotesize\raisebox{-2ex}[0pt]{ Free} & \footnotesize \raisebox{-2ex}[0pt]{ResNet101} & \footnotesize \raisebox{-2ex}[0pt]{SSD }& \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $81.5$\\(07+12)}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $80.0$\\(07T+12)}}& \footnotesize \raisebox{-2ex}[0pt]{$53.3$} & \footnotesize \raisebox{-2ex}[0pt]{$33.2$} & \footnotesize \raisebox{-2ex}[0pt]{2017 } & \footnotesize \raisebox{-2ex}[0pt]{Use Conv-Deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (c1, c2).} \\
\cline{2-11}
& \footnotesize\raisebox{-2ex}[0pt]{ FPN \cite{FPN2016}} & \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-2ex}[0pt]{ResNet101} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize\raisebox{-2ex}[0pt]{ $-$}& \footnotesize \raisebox{-2ex}[0pt]{$-$}& \footnotesize\raisebox{-2ex}[0pt]{ $59.1$} & \footnotesize \raisebox{-2ex}[0pt]{$36.2$}& \footnotesize \raisebox{-2ex}[0pt]{CVPR17} & \footnotesize Use Conv-Deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (a1, a2); Widely used in detectors. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{ TDM \cite{Shrivastava2017} }& \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {ResNet101\\
VGG16} }& \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-2ex}[0pt]{$-$} & \footnotesize \raisebox{-2ex}[0pt]{$-$} & \footnotesize \raisebox{-2ex}[0pt]{ $57.7$} & \footnotesize \raisebox{-2ex}[0pt]{$36.8$} & \footnotesize \raisebox{-2ex}[0pt]{CVPR17} & \footnotesize Use Conv-Deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (b2). \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{RON \cite{Kong2017ron}} & \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-2ex}[0pt]{VGG16} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $81.3$\\(07+12+CO)}}
& \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $80.7$\\(07T+12+CO)}} & \footnotesize \raisebox{-2ex}[0pt]{$49.5$} & \footnotesize \raisebox{-2ex}[0pt]{$27.4$} & \footnotesize \raisebox{-2ex}[0pt]{CVPR17} & \footnotesize Use Conv-deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (d2); Add the objectness prior to significantly reduce object search space. \\
\cline{2-11}
& \footnotesize \raisebox{-3ex}[0pt]{ZIP \cite{Hongyang2018Zoom}} & \footnotesize \raisebox{-3ex}[0pt]{RPN} & \footnotesize \raisebox{-3ex}[0pt]{Inceptionv2} & \footnotesize \raisebox{-4ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-4ex}[0pt]{ \shortstack [c] {$79.8$\\ (07+12)}}
& \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize\raisebox{-3ex}[0pt]{ $-$ }& \footnotesize\raisebox{-3ex}[0pt]{ IJCV18 } & \footnotesize Use Conv-Deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (f1). Propose a map attention decision (MAD) unit for features from different layers.\\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{STDN \cite{Zhou2018Scale} } & \footnotesize \raisebox{-2ex}[0pt]{Free} & \footnotesize \raisebox{-2ex}[0pt]{DenseNet169}
& \footnotesize \raisebox{-2ex}[0pt]{SSD} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {$80.9$\\(07+12)}} & \footnotesize\raisebox{-2ex}[0pt]{ $-$ }& \footnotesize \raisebox{-2ex}[0pt]{$51.0$ }& \footnotesize \raisebox{-2ex}[0pt]{$31.8$ }& \footnotesize \raisebox{-2ex}[0pt]{ CVPR18} & \footnotesize
A new scale transfer module, which resizes features of different scales to the same scale in parallel. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{RefineDet \cite{Zhang2018Single}} & \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-3ex}[0pt]{\shortstack [c] {VGG16\\ResNet101}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $83.8$\\(07+12)}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] {$83.5$\\(07T+12)}} & \footnotesize\raisebox{-2ex}[0pt]{ $ 62.9$} & \footnotesize \raisebox{-2ex}[0pt]{$41.8$} & \footnotesize \raisebox{-2ex}[0pt]{CVPR18} & \footnotesize Use cascade to obtain better and less anchors. Use Conv-deconv, as shown in Fig. \ref{fig:MultiLayerCombine} (e2) to improve features. \\
\cline{2-11}
& \footnotesize \raisebox{-5ex}[0pt]{PANet \cite{Liu2018Path} }& \footnotesize \raisebox{-5ex}[0pt]{RPN} & \footnotesize \raisebox{-6.5ex}[0pt]{\shortstack [c] {ResNeXt101\\+FPN}} & \footnotesize \raisebox{-5ex}[0pt]{Mask RCNN} & \footnotesize \raisebox{-5ex}[0pt]{ $-$ } & \footnotesize \raisebox{-5ex}[0pt]{ $-$ }& \footnotesize\raisebox{-5ex}[0pt]{ $\textbf{67.2}$ } & \footnotesize \raisebox{-5ex}[0pt]{$\textbf{47.4}$} & \footnotesize\raisebox{-5ex}[0pt]{ CVPR18}& \footnotesize Shown in Fig. \ref{fig:MultiLayerCombine} (g). Based on FPN, add another bottom-up path to pass information between lower and topmost layers; adaptive feature pooling. Ranked $1^{st}$ and $2^{nd}$ in COCO 2017 tasks. \\
\cline{2-11}
& \footnotesize \raisebox{-1ex}[0pt]{DetNet \cite{Li2018DetNet}}& \footnotesize \raisebox{-1ex}[0pt]{RPN} & \footnotesize \raisebox{-1ex}[0pt]{DetNet59+FPN} & \footnotesize \raisebox{-1ex}[0pt]{Faster RCNN} & \footnotesize \raisebox{-1ex}[0pt]{ $-$ } & \footnotesize \raisebox{-1ex}[0pt]{ $-$ }& \footnotesize\raisebox{-1ex}[0pt]{ $61.7$ } & \footnotesize \raisebox{-1ex}[0pt]{$40.2$} & \footnotesize\raisebox{-1ex}[0pt]{ ECCV18}& \footnotesize Introduces dilated convolution into the ResNet backbone to maintain high resolution in deeper layers; Shown in Fig. \ref{fig:MultiLayerCombine} (i). \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{FPR \cite{Kong2018Deep} }& \footnotesize \raisebox{-2ex}[0pt]{$-$} & \footnotesize \raisebox{-3ex}[0pt]{\shortstack [c] {VGG16\\ ResNet101}} & \footnotesize \raisebox{-2ex}[0pt]{SSD} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $82.4$\\(07+12)}} & \footnotesize \raisebox{-3ex}[0pt]{ \shortstack [c] { $81.1$\\(07T+12)}}& \footnotesize\raisebox{-2ex}[0pt]{$54.3$} & \footnotesize \raisebox{-2ex}[0pt]{ $34.6$} & \footnotesize\raisebox{-2ex}[0pt]{ ECCV18}& \footnotesize Fuse task oriented features across different spatial locations and scales, globally and locally; Shown in Fig. \ref{fig:MultiLayerCombine} (h).\\
\cline{2-11}
& \footnotesize \raisebox{-5ex}[0pt]{M2Det \cite{Zhao2019M2Det}}& \footnotesize \raisebox{-5ex}[0pt]{$-$} & \footnotesize \raisebox{-6.5ex}[0pt]{\shortstack [c] {VGG16\\ ResNet101}} & \footnotesize \raisebox{-5ex}[0pt]{SSD} & \footnotesize \raisebox{-5ex}[0pt]{ $-$ } & \footnotesize \raisebox{-5ex}[0pt]{ $-$ } & \footnotesize\raisebox{-5ex}[0pt]{ $64.6$} & \footnotesize \raisebox{-5ex}[0pt]{$44.2$} & \footnotesize\raisebox{-5ex}[0pt]{ AAAI19}& \footnotesize Shown in Fig. \ref{fig:MultiLayerCombine} (j), newly designed top down path to learn a set of multilevel features, recombined
to construct a feature pyramid for object detection. \\
\Xhline{1.5pt}
\scriptsize \multirow{6}{*}{\rotatebox{90}{ \textbf{(4) Model Geometric Transforms$\quad$} }} & \footnotesize \raisebox{-8ex}[0pt]{DeepIDNet \cite{Ouyang2015deepid} }& \footnotesize \raisebox{-8ex}[0pt]{SS+
EB} & \footnotesize \raisebox{-12ex}[0pt]{\shortstack [c] {AlexNet \\ZFNet \\OverFeat \\GoogLeNet}} & \footnotesize \raisebox{-8ex}[0pt]{
RCNN}& \footnotesize \raisebox{-9ex}[0pt]{\shortstack [c] {$69.0$ \\(07)}} & \footnotesize \raisebox{-8ex}[0pt]{$-$}& \footnotesize\raisebox{-8ex}[0pt]{$-$}& \footnotesize \raisebox{-8ex}[0pt]{$25.6$}& \footnotesize \raisebox{-8ex}[0pt]{CVPR15} & \footnotesize Introduce a deformation constrained
pooling layer, jointly learned with convolutional layers in existing DCNNs. Utilize the following modules that are not trained end to end: cascade, context modeling, model averaging, and bounding box location refinement in the multistage detection pipeline. \\
\cline{2-11}
& \footnotesize \raisebox{-2ex}[0pt]{DCN \cite{Dai17Deformable}} & \footnotesize \raisebox{-2ex}[0pt]{RPN} & \footnotesize \raisebox{-2.5ex}[0pt]{\shortstack [c] {ResNet101\\IRN}} & \footnotesize\raisebox{-2ex}[0pt]{ RFCN} & \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] {$82.6$ \\(07+12)}}& \footnotesize\raisebox{-2ex}[0pt]{ $-$ } & \footnotesize \raisebox{-2ex}[0pt]{ $58.0$} & \footnotesize\raisebox{-2ex}[0pt]{ $37.5$} & \footnotesize \raisebox{-2ex}[0pt]{CVPR17} & \footnotesize Design deformable convolution and deformable RoI pooling modules that can replace plain convolution in existing DCNNs. \\
\cline{2-11}
& \footnotesize \raisebox{-2.5ex}[0pt]{DPFCN \cite{Mordan2018End}} & \footnotesize \raisebox{-2.5ex}[0pt]{AttractioNet \cite{Gidaris2016Attend} }& \footnotesize \raisebox{-2.5ex}[0pt]{ ResNet} & \footnotesize \raisebox{-2.5ex}[0pt]{ RFCN} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$83.3$ \\(07+12)}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$81.2$\\(07T+12)}} & \footnotesize \raisebox{-2.5ex}[0pt]{$59.1$}& \footnotesize\raisebox{-2.5ex}[0pt]{$39.1$} & \footnotesize \raisebox{-2.5ex}[0pt]{IJCV18}& \footnotesize Design a deformable part based RoI pooling layer to explicitly select discriminative regions around object proposals. \\
\Xhline{1.5pt}
\end{tabular}
}
\end{table*}
\subsection{Methods For Improving Object Representation}
\label{Sec:EnhanceFeatures}
Deep CNN based detectors such as RCNN \cite{Girshick2014RCNN}, Fast RCNN \cite{Girshick2015FRCNN}, Faster RCNN \cite{Ren2015NIPS} and YOLO \cite{YoLo2016}, typically use the deep CNN architectures listed in Table \ref{Tab:dcnnarchitectures} as the backbone network and use features from the top layer of the CNN as object representations; however, detecting objects across a large {\em range} of scales is a fundamental challenge. A classical strategy to address this issue is to run the detector over a number of scaled input images (\emph{e.g.,} an image pyramid) \cite{Felzenszwalb2010b,Girshick2014RCNN,He2014SPP}, which typically produces
more accurate detection, but at an obvious cost in inference time and memory.
\subsubsection{Handling of Object Scale Variations}
\label{sec:objectscale}
Since a CNN computes its feature hierarchy layer by layer, the sub-sampling
layers in the feature hierarchy already lead to an inherent multiscale
pyramid, producing feature maps at different spatial resolutions, but subject to challenges \cite{Hariharan2016,FCNCVPR2015,Shrivastava2017}. In particular, the higher layers have a large receptive field and strong semantics, and are the most robust to variations such as object pose, illumination and part deformation, but have low resolution and lose geometric details. In contrast, lower layers have a small receptive field, high resolution and rich geometric details, but much weaker semantics. Intuitively, semantic concepts of objects can emerge in different layers, depending on the size of the objects. So if a target object is small, it requires fine detail information from earlier layers and may very well disappear in later layers, in principle making small object detection very challenging; tricks such as dilated or ``atrous'' convolution \cite{Yu2015Multiscale,Dai2016RFCN,Chen2016deeplab} have been proposed to increase feature resolution, but at increased computational complexity. On
the other hand, if the target object is large, then the semantic concept
will emerge in much later layers. A number of methods \cite{Shrivastava2017,Zhang2018Object,FPN2016,Kong2017ron} have been proposed to improve detection accuracy by exploiting multiple CNN layers, broadly falling into three types of \textbf{multiscale object detection}:
\begin{enumerate}
\item Detecting with combined features of multiple layers;
\item Detecting at multiple layers;
\item Combinations of the above two methods.
\end{enumerate}
\begin {figure}[!t]
\centering
\includegraphics[width=0.48\textwidth]{HyperFeature.pdf}
\caption{Comparison of HyperNet and ION. LRN is Local Response Normalization, which performs a kind of ``lateral inhibition'' by normalizing over local input regions \cite{Jia2014Caffe}.}
\label{fig:HyperFeature}
\end {figure}
\textbf{(1) Detecting with combined features of multiple CNN layers:} Many approaches, including Hypercolumns \cite{Hariharan2016}, HyperNet \cite{HyperNet2016}, and ION \cite{Bell2016ION}, combine features from multiple layers
before making a prediction. Such feature combination is commonly accomplished via concatenation, a classic neural network idea of stacking features from different layers along the channel dimension, an approach which has recently become popular for semantic segmentation \cite{FCNCVPR2015,FCNTPAMI,Hariharan2016}. As shown in Fig.~\ref{fig:HyperFeature} (a), ION \cite{Bell2016ION} uses RoI pooling to extract RoI features from multiple layers, and then the object proposals generated by selective search and EdgeBoxes are classified using the concatenated features. HyperNet \cite{HyperNet2016}, shown in Fig.~\ref{fig:HyperFeature} (b), follows a similar idea, and integrates deep, intermediate and shallow features to generate object proposals and to predict objects via an end-to-end joint training strategy. The combined feature is more descriptive and more beneficial for localization and classification, but at increased computational complexity.
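A minimal sketch of such multilayer feature combination (ours; the resizing and L2 normalization follow the general recipe described above rather than any one implementation -- ION, for instance, normalizes and rescales the concatenated features):
\begin{verbatim}
import torch
import torch.nn.functional as F

def combine_layers(feats, out_hw=(38, 50)):
    # bring feature maps from several CONV stages to one resolution,
    # L2-normalize each (their activation scales differ widely),
    # and concatenate along the channel dimension
    resized = [F.interpolate(f, size=out_hw, mode='bilinear',
                             align_corners=False) for f in feats]
    normed = [F.normalize(f, p=2, dim=1) for f in resized]
    return torch.cat(normed, dim=1)

# e.g. conv3 / conv4 / conv5 maps of a VGG-like backbone
c3 = torch.randn(1, 256, 76, 100)
c4 = torch.randn(1, 512, 38, 50)
c5 = torch.randn(1, 512, 19, 25)
print(combine_layers([c3, c4, c5]).shape)   # (1, 1280, 38, 50)
\end{verbatim}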
\begin {figure*}[!t]
\centering
\includegraphics[height=0.9\textheight]{MultiLayerCombine.pdf}
\caption{Hourglass architectures: Conv1 to Conv5 are the main Conv blocks in backbone networks such as VGG or ResNet. The figure compares a number of Feature Fusion Blocks (FFB) commonly used in recent approaches: FPN \cite{FPN2016}, TDM \cite{Shrivastava2017}, DSSD \cite{DSSD2016}, RON \cite{Kong2017ron}, RefineDet \cite{Zhang2018Single}, ZIP \cite{Hongyang2018Zoom}, PANet \cite{Liu2018Path}, FPR \cite{Kong2018Deep}, DetNet \cite{Li2018DetNet} and M2Det \cite{Zhao2019M2Det}. FFM: Feature Fusion Module; TUM: Thinned U-shaped Module.}
\label{fig:MultiLayerCombine}
\end {figure*}
\textbf{(2) Detecting at multiple CNN layers:} A number of recent
approaches improve detection by predicting objects of different
resolutions at different layers and then combining these
predictions: SSD \cite{Liu2016SSD}, MSCNN \cite{MSCNN2016}, RFBNet \cite{Liu2017Receptive}, and
DSOD \cite{ShenICCV2017}. SSD \cite{Liu2016SSD} spreads out default boxes of different
scales to multiple layers within a CNN, and forces
each layer to focus on predicting objects of a certain scale. RFBNet \cite{Liu2017Receptive}
replaces the later convolution layers of SSD with a Receptive Field Block (RFB) to enhance the discriminability and robustness of features. The RFB is a multibranch convolutional block,
similar to the Inception block \cite{GoogLeNet2015}, but combining multiple branches with different kernels and convolution layers \cite{Chen2016deeplab}.
MSCNN \cite{MSCNN2016} applies deconvolution on multiple layers of a
CNN to increase feature map resolution before using the
layers to learn region proposals and pool features. Similar to RFBNet \cite{Liu2017Receptive}, TridentNet \cite{Li2019Scale}
constructs a parallel multibranch architecture where each branch
shares the same transformation parameters but with different
receptive fields; dilated convolutions with different dilation rates are used to adapt the receptive fields for objects of different scales.
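A short sketch of this weight-sharing idea (ours; the dilation rates and channel sizes are illustrative): the same $3\times3$ kernel applied with different dilation rates yields branches with identical parameters but different receptive fields, since a $3\times3$ kernel with dilation $d$ covers a $(2d+1)\times(2d+1)$ window:
\begin{verbatim}
import torch
import torch.nn.functional as F

shared = torch.randn(256, 256, 3, 3)  # one kernel shared by branches
x = torch.randn(1, 256, 38, 50)
# padding = dilation keeps the spatial size unchanged for 3x3 kernels
branches = [F.conv2d(x, shared, padding=d, dilation=d)
            for d in (1, 2, 3)]
print([tuple(b.shape) for b in branches])  # all (1, 256, 38, 50)
\end{verbatim}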
\textbf{(3) Combinations of the above two methods:} Features from different layers are complementary to each other
and can improve detection accuracy, as shown by Hypercolumns \cite{Hariharan2016}, HyperNet \cite{HyperNet2016} and ION \cite{Bell2016ION}. On
the other hand, however, it is natural to detect objects of different scales
using features of approximately the same size, which
can be achieved by detecting large objects from downscaled feature maps
while detecting small objects from upscaled feature maps.
Therefore, in order to combine the best of both worlds, some recent works propose to detect objects at multiple layers, with the features at each layer obtained by combining features from different
layers. This approach has been found to be effective for segmentation \cite{FCNCVPR2015,FCNTPAMI} and human pose estimation \cite{Newell2016Stacked}, and has been widely exploited by both one-stage and two-stage detectors to alleviate problems of scale
variation across object instances. Representative methods include SharpMask \cite{Pinheiro2016},
Deconvolutional Single Shot Detector (DSSD) \cite{DSSD2016}, Feature Pyramid Network (FPN) \cite{FPN2016}, Top Down Modulation (TDM) \cite{Shrivastava2017}, Reverse connection with Objectness prior Network (RON) \cite{Kong2017ron}, ZIP \cite{Hongyang2018Zoom}, Scale Transfer Detection Network (STDN) \cite{Zhou2018Scale}, RefineDet \cite{Zhang2018Single}, StairNet \cite{Woo18StairNet}, Path Aggregation Network (PANet) \cite{Liu2018Path}, Feature Pyramid Reconfiguration (FPR) \cite{Kong2018Deep}, DetNet \cite{Li2018DetNet}, Scale Aware Network (SAN) \cite{Kim2018San}, Multiscale Location aware Kernel Representation (MLKP) \cite{Wang2018Multiscale} and M2Det \cite{Zhao2019M2Det}, as shown in Table~\ref{Tab:EnhanceFeatures} and contrasted in Fig.~\ref{fig:MultiLayerCombine}.
Early works like FPN \cite{FPN2016}, DSSD \cite{DSSD2016}, TDM \cite{Shrivastava2017}, ZIP \cite{Hongyang2018Zoom}, RON \cite{Kong2017ron} and RefineDet \cite{Zhang2018Single} construct the feature pyramid according to the inherent multiscale,
pyramidal architecture of the backbone, and achieved encouraging results. As can be observed from Fig. \ref{fig:MultiLayerCombine} (a1) to (f1), these methods have very similar detection architectures which incorporate a top-down network with lateral connections to supplement the standard bottom-up, feed-forward network. Specifically, after a bottom-up pass the final high level semantic features are transmitted back by the top-down network to combine with the bottom-up features from intermediate layers after lateral processing, and the combined features are then used for detection. As can be seen from Fig.~\ref{fig:MultiLayerCombine} (a2) to (e2), the main differences lie in the design of the simple Feature Fusion Block (FFB), which handles the selection of features from different layers and the combination of multilayer features.
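The shared top-down design of Fig.~\ref{fig:MultiLayerCombine} (a1)--(f1) can be sketched compactly in PyTorch (ours; the 256-d channel width follows the common FPN convention, and the fusion here is the simple add-after-$1\times1$-lateral variant):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDown(nn.Module):
    def __init__(self, in_ch=(512, 1024, 2048), d=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, d, 1) for c in in_ch)
        self.smooth = nn.ModuleList(nn.Conv2d(d, d, 3, padding=1)
                                    for _ in in_ch)

    def forward(self, c3, c4, c5):  # bottom-up features, fine to coarse
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2)
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2)
        # 3x3 smoothing reduces upsampling artifacts
        return [s(p) for s, p in zip(self.smooth, (p3, p4, p5))]

net = TopDown()
c3 = torch.randn(1, 512, 64, 64)
c4 = torch.randn(1, 1024, 32, 32)
c5 = torch.randn(1, 2048, 16, 16)
print([tuple(p.shape) for p in net(c3, c4, c5)])
# [(1, 256, 64, 64), (1, 256, 32, 32), (1, 256, 16, 16)]
\end{verbatim}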
FPN \cite{FPN2016}
shows significant improvement as a generic feature extractor
in several applications including object detection \cite{FPN2016,LinICCV2017} and instance segmentation \cite{MaskRCNN2017}. Using FPN in a basic Faster
RCNN system achieved state-of-the-art results on the COCO detection dataset. STDN \cite{Zhou2018Scale} used DenseNet \cite{Huang2016Densely} to combine features of different layers and designed a scale transfer module to obtain feature maps
with different resolutions. The scale transfer module can be directly embedded
into DenseNet with little additional cost.
More recent work, such as PANet \cite{Liu2018Path}, FPR \cite{Kong2018Deep}, DetNet \cite{Li2018DetNet}, and M2Det \cite{Zhao2019M2Det}, as shown in Fig. \ref{fig:MultiLayerCombine} (g-j), propose to further improve on the pyramid architectures like FPN in different ways. Based on FPN, Liu \emph{et al.} designed PANet \cite{Liu2018Path} (Fig.~\ref{fig:MultiLayerCombine} (g1)) by adding another bottom-up path with clean lateral connections from low to top levels, in order to shorten the information path and to enhance the feature pyramid. Then, an adaptive feature pooling was proposed to aggregate features from all feature levels for each proposal. In addition, in the proposal sub-network, a complementary branch capturing different views for each proposal is created to further improve mask prediction. These additional steps bring only slightly
higher computational overhead, but are effective and allowed
PANet to reach 1st place in the COCO 2017 Challenge Instance Segmentation task and 2nd place in the Object Detection task. Kong \emph{et al.} proposed FPR \cite{Kong2018Deep} by explicitly reformulating the feature pyramid construction process (\emph{e.g.} FPN \cite{FPN2016}) as feature reconfiguration functions in a highly nonlinear but efficient way. As shown in Fig.~\ref{fig:MultiLayerCombine} (h1), instead of using a top-down path to propagate strong semantic features from the topmost layer down as in FPN, FPR first extracts features from multiple layers in the backbone network by adaptive concatenation, and then designs a more complex FFB module (Fig. \ref{fig:MultiLayerCombine} (h2)) to spread strong semantics to all scales. Li \emph{et al.} proposed DetNet \cite{Li2018DetNet} (Fig. \ref{fig:MultiLayerCombine} (i1)) by introducing dilated convolutions to the later layers of the backbone network in order to maintain high spatial resolution
in deeper layers. Zhao \emph{et al.} \cite{Zhao2019M2Det} proposed a MultiLevel Feature Pyramid Network (MLFPN) to build more effective feature pyramids for detecting objects of different scales. As can be seen from Fig. \ref{fig:MultiLayerCombine} (j1), features from two different layers of the backbone are first fused as the base feature, after which a top-down path with lateral connections from the base feature is created to build the feature pyramid. As shown in Fig.~\ref{fig:MultiLayerCombine} (j2) and (j5), the FFB module is much more complex than that of FPN, in that it involves a Thinned U-shaped Module (TUM) to generate a second pyramid structure, after which the feature maps of equivalent sizes from multiple TUMs are combined for object detection. The authors proposed M2Det by integrating MLFPN into SSD, and achieved better detection performance than other one-stage detectors.
\subsection{Handling of Other Intraclass Variations}
\label{sec:Otherchanges}
Powerful object representations should combine distinctiveness and robustness. A large amount of recent work has been devoted to handling changes in object scale, as reviewed in Section~\ref{sec:objectscale}. As discussed in Section~\ref{Sec:MainChallenges} and summarized in Fig.~\ref{Fig:challenges}, object detection still requires robustness to real-world variations other than just scale, which we group into three categories:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Geometric transformations,
\item Occlusions, and
\item Image degradations.
\end{itemize}
To handle these intra-class variations, the most straightforward approach is to augment the training
datasets with a sufficient amount of variations; for example, robustness to rotation could be achieved by adding rotated objects at many orientations to the training data. Robustness can frequently be learned this way, but usually at the cost of expensive
training and increased model complexity. Therefore, researchers have proposed alternative solutions to these problems.
\textbf{Handling of geometric transformations:} DCNNs are inherently limited by their lack of spatial invariance to geometric transformations of the input data \cite{Lenc2018Understanding,Liu2017Local,Chellappa2016}. The introduction of local max pooling layers has allowed DCNNs to enjoy some translation invariance; however, the intermediate feature maps are not actually invariant to large geometric transformations of the input data \cite{Lenc2018Understanding}. Therefore, many approaches have been presented to enhance robustness, aiming at learning invariant CNN representations with respect
to different types of transformations such as scale \cite{Kim2014Locally,Bruna13Invariant}, rotation \cite{Bruna13Invariant,RIFDCNN2016,Worrall2017Harmonic,Zhou2017Oriented}, or both \cite{Jaderberg2015Spatial}. One representative work is Spatial Transformer Network (STN) \cite{Jaderberg2015Spatial}, which introduces a
new learnable module to handle scaling, cropping, rotations, as well as nonrigid deformations via a global parametric transformation. STN has now
been used in rotated text detection \cite{Jaderberg2015Spatial}, rotated face
detection and generic object detection \cite{Wang2017}.
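The core STN operation can be illustrated with the following Python sketch; here the affine parameters $\theta$ are hard-coded for illustration, whereas in STN they are predicted by a small localization network:
\begin{verbatim}
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)             # input feature map
theta = torch.tensor([[[0.7, 0.0, 0.1],   # 2x3 affine matrix:
                       [0.0, 0.7, 0.0]]]) # scale by 0.7, shift in x
grid = F.affine_grid(theta, x.size(), align_corners=False)
warped = F.grid_sample(x, grid, align_corners=False)
print(warped.shape)   # same size, spatially transformed content
\end{verbatim}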
Although rotation invariance may be attractive in certain applications, such as scene text detection \cite{He2018End,Ma2018Arbitrary}, face detection \cite{Shi2018Real}, and aerial imagery \cite{Ding2018Learning,Xia2018DOTA}, there is limited generic object detection work focusing on rotation invariance because popular benchmark detection datasets (\emph{e.g.} PASCAL VOC, ImageNet, COCO) do not actually contain many rotated objects.
Before deep learning, Deformable Part based Models (DPMs)
\cite{Felzenszwalb2010b} were successful for generic object detection, representing objects by component parts arranged in a deformable configuration. Although DPMs have been significantly outperformed by more recent object detectors, their spirit still deeply influences many recent detectors. DPM modeling is less sensitive to transformations in
object pose, viewpoint and nonrigid deformations, motivating researchers \cite{Dai17Deformable,Girshick2015DPMCNN,Mordan2018End,Ouyang2015deepid,Wan2015end}
to explicitly model object composition to improve CNN based detection.
The first attempts \cite{Girshick2015DPMCNN,Wan2015end} combined DPMs with CNNs by using deep features learned by AlexNet in DPM based detection, but without region proposals.
To give CNNs a built-in capability for modeling the deformations of object parts, a number of approaches were proposed, including DeepIDNet \cite{Ouyang2015deepid}, DCN \cite{Dai17Deformable} and DPFCN \cite{Mordan2018End} (shown in Table~\ref{Tab:EnhanceFeatures}). Although similar in spirit, these approaches compute deformations in different ways: DeepIDNet \cite{Ouyang2016} designed a deformation constrained pooling layer to replace regular max pooling, to learn the shared visual patterns and their deformation properties across different object classes; DCN \cite{Dai17Deformable} designed a deformable convolution layer and a deformable RoI pooling layer, both of which
are based on the idea of augmenting regular grid sampling locations in feature maps; and DPFCN \cite{Mordan2018End} proposed a deformable part-based RoI pooling layer which selects discriminative parts of objects around object proposals by
simultaneously optimizing latent displacements of all parts.
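The following simplified Python sketch conveys the idea behind deformable sampling: a small convolution predicts per-location 2D offsets, and features are resampled at the offset positions. For brevity we use one offset per pixel rather than one per kernel tap, so this is only a caricature of the actual deformable convolution:
\begin{verbatim}
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)
offset_pred = torch.nn.Conv2d(8, 2, 3, padding=1)  # predicts (dx, dy)
offsets = offset_pred(x).permute(0, 2, 3, 1)       # N x H x W x 2
# Regular base sampling grid in normalized [-1, 1] coordinates.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 16),
                        torch.linspace(-1, 1, 16), indexing="ij")
base = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # 1 x H x W x 2
# Sample features at offset locations (offsets kept small here).
deformed = F.grid_sample(x, base + 0.1 * torch.tanh(offsets),
                         align_corners=False)
print(deformed.shape)   # torch.Size([1, 8, 16, 16])
\end{verbatim}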
\textbf{Handling of occlusions:} In real-world images, occlusions are common, resulting in information loss from object instances.
A deformable parts idea can be useful for occlusion handling, so deformable RoI Pooling \cite{Dai17Deformable,Mordan2018End,Ouyang2013Joint} and deformable convolution \cite{Dai17Deformable} have been proposed to alleviate occlusion by giving more flexibility to the typically fixed geometric structures. Wang \emph{et al.} \cite{Wang2017} proposed learning an adversarial network that generates examples with occlusions and deformations, and context may also be helpful in dealing with occlusions \cite{Zhang2018Occluded}.
Despite these efforts, the occlusion problem is far from being solved; applying GANs to this problem may be a promising research direction.
\textbf{Handling of image degradations:}
Image noise is a common problem in many real-world applications. It is frequently caused by insufficient lighting, low quality cameras, image compression, or intentionally low-cost sensors on edge and wearable devices. While low image quality may be expected to degrade the performance of visual recognition, most current methods are evaluated in a clean, degradation-free setting, as evidenced by the fact that PASCAL VOC, ImageNet, MS COCO and Open Images all focus on relatively high quality images.
To the best of our knowledge, there is so far very limited work to address this problem.
\section{Context Modeling}
\label{sec:ContextInfo}
\begin{table*}[!t]
\caption {Summary of detectors that exploit context information, with labelling details as in Table \ref{Tab:EnhanceFeatures}.}\label{Tab:ContextMethods}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{18.5cm}{!}{
\begin{tabular}{!{\vrule width1.5bp}c|c|c|c|c|c|c|c|c|p{8cm}<{\centering}!{\vrule width1.5bp}}
\Xhline{1.5pt}
\footnotesize & \footnotesize Detector & \footnotesize Region & \footnotesize Backbone & \footnotesize Pipeline & \multicolumn{2}{c|}{mAP@IoU=0.5} & \footnotesize mAP & \footnotesize Published & \footnotesize \\
\cline{6-8}
\footnotesize Group & \footnotesize Name & \footnotesize Proposal & \footnotesize DCNN & \footnotesize Used & \footnotesize VOC07 & \footnotesize VOC12 & \footnotesize COCO & \footnotesize In & \footnotesize Highlights \\
\Xhline{1.5pt}
\footnotesize \multirow{4}{*}{\rotatebox{90}{\scriptsize \textbf{Global Context}
$\quad\quad\quad\quad\quad$}} &
\raisebox{-1.5ex}[0pt]{\footnotesize SegDeepM \cite{SegDeepM2015} }
&\raisebox{-1.5ex}[0pt]{ \footnotesize SS+CMPC}
& \raisebox{-1.5ex}[0pt]{\footnotesize VGG16}
& \raisebox{-1.5ex}[0pt]{\footnotesize RCNN}
& \raisebox{-1.5ex}[0pt]{\footnotesize VOC10}
&\raisebox{-1.5ex}[0pt]{ \footnotesize VOC12}
& \raisebox{-1.5ex}[0pt]{\footnotesize $-$}&\raisebox{-1.5ex}[0pt]{ \footnotesize CVPR15 }
& \footnotesize
Additional features extracted from an enlarged object proposal as context information.
\\
\cline{2-10}
& \footnotesize \raisebox{-1.5ex}[0pt]{DeepIDNet \cite{Ouyang2015deepid}} & \footnotesize \raisebox{-1.5ex}[0pt]{SS+EB} &\raisebox{-2.5ex}[0pt]{ \footnotesize \shortstack [c] {AlexNet\\ZFNet}} & \footnotesize \raisebox{-1.5ex}[0pt]{ RCNN } & \footnotesize \raisebox{-2.5ex}[0pt]{\shortstack [c] {$69.0$ \\(07)}} & \footnotesize \raisebox{-1.5ex}[0pt]{$-$ } & \footnotesize \raisebox{-1.5ex}[0pt]{$-$} & \footnotesize \raisebox{-1.5ex}[0pt]{CVPR15} & \footnotesize Use image classification scores as global contextual information to refine the detection scores of each object proposal.\\
\cline{2-10}
& \raisebox{-1.5ex}[0pt]{\footnotesize ION \cite{Bell2016ION} }& \raisebox{-1.5ex}[0pt]{
\footnotesize SS+EB} &\raisebox{-1.5ex}[0pt]{ \footnotesize VGG16}
&\raisebox{-2.5ex}[0pt]{ \footnotesize\shortstack [c] { Fast \\RCNN}}
& \raisebox{-1.5ex}[0pt]{\footnotesize $80.1$} & \raisebox{-1.5ex}[0pt]{\footnotesize $77.9$ }
&\raisebox{-1.5ex}[0pt]{ \footnotesize $33.1$} &\raisebox{-1.5ex}[0pt]{ \footnotesize CVPR16 }
& \footnotesize The contextual information outside the region of interest is integrated using spatial recurrent neural networks. \\
\cline{2-10}
& \raisebox{-1ex}[0pt]{\footnotesize CPF \cite{ Shrivastava2016}} & \raisebox{-1ex}[0pt]{\footnotesize RPN} & \raisebox{-1ex}[0pt]{\footnotesize VGG16}
&\raisebox{-2.5ex}[0pt]{ \footnotesize \shortstack [c] { Faster \\ RCNN} }&\raisebox{-2.5ex}[0pt]{ \footnotesize \shortstack [c] {$76.4$\\ (07+12)}}
& \raisebox{-2ex}[0pt]{\footnotesize \shortstack [c] { $72.6$\\ (07T+12) }}& \raisebox{-1ex}[0pt]{\footnotesize
$-$ }&\raisebox{-1ex}[0pt]{ \footnotesize ECCV16} & \footnotesize
\raisebox{-1ex}[0pt]{Use semantic segmentation to provide top-down feedback. } \\
\Xhline{1.5pt}
\footnotesize \multirow{7}{*}{\rotatebox{90}{\scriptsize \textbf{Local Context}$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$}} & \footnotesize \raisebox{-3ex}[0pt]{MRCNN \cite{Gidaris2015}} & \footnotesize \raisebox{-3ex}[0pt]{SS} & \footnotesize \raisebox{-3ex}[0pt]{VGG16} & \footnotesize \raisebox{-3ex}[0pt]{SPPNet} & \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {$78.2$ \\(07+12)}}& \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] { $73.9$\\(07+12)} }& \footnotesize
\raisebox{-3ex}[0pt]{$-$} & \footnotesize\raisebox{-3ex}[0pt]{ ICCV15 }& \footnotesize
Extract features from multiple regions surrounding or inside the object proposals. Integrate the semantic segmentation-aware features. \\
\cline{2-10}
& \raisebox{-6ex}[0pt]{\footnotesize GBDNet
\cite{ GBDCNN2016, Zeng2017Crafting}} & \raisebox{-6ex}[0pt]{\footnotesize CRAFT
\cite{ CRAFT2016}} &\raisebox{-7ex}[0pt]{ \footnotesize \shortstack [c] {Inception v2\\ResNet269\\PolyNet \cite{Zhang2017PolyNet} } }&\raisebox{-7ex}[0pt]{ \footnotesize \shortstack [c] {Fast \\ RCNN} }&\raisebox{-7ex}[0pt]{ \footnotesize\shortstack [c] { $77.2$\\
(07+12)} } &\raisebox{-6ex}[0pt]{ \footnotesize $-$} &\raisebox{-6ex}[0pt]{ \footnotesize $27.0$}
&\raisebox{-6ex}[0pt]{ \footnotesize \shortstack [c] {ECCV16\\ TPAMI18} } & \footnotesize
A GBDNet module to learn the relations of multiscale contextualized regions surrounding an object proposal; GBDNet passes messages among features from different context regions through convolution between neighboring support regions in two directions. \\
\cline{2-10}
& \raisebox{-2.5ex}[0pt]{\footnotesize ACCNN\cite{Li2017Attentive}} & \raisebox{-2.5ex}[0pt]{\footnotesize
SS }& \raisebox{-2.5ex}[0pt]{\footnotesize VGG16 }& \raisebox{-2.5ex}[0pt]{\footnotesize \shortstack [c] {Fast \\ RCNN} }&\raisebox{-2.5ex}[0pt]{ \footnotesize \shortstack [c] { $72.0$\\
(07+12) }}& \raisebox{-2.5ex}[0pt]{\footnotesize \shortstack [c] { $70.6$\\ (07T+12)} }&\raisebox{-5.5ex}[0pt]{ \footnotesize
$-$ }&\raisebox{-2.5ex}[0pt]{ \footnotesize TMM17 }& \footnotesize
Use LSTM to capture global context. Concatenate features from multi-scale contextual regions surrounding an object proposal. The global and local context features are concatenated for recognition. \\
\cline{2-10}
& \raisebox{-2.5ex}[0pt]{\footnotesize CoupleNet\cite{ ZhuICCV2017}} &\raisebox{-2.5ex}[0pt]{ \footnotesize RPN} &\raisebox{-2.5ex}[0pt]{ \footnotesize ResNet101}
& \raisebox{-2.5ex}[0pt]{\footnotesize RFCN }&\raisebox{-2.5ex}[0pt]{ \footnotesize \shortstack [c] { $\textbf{82.7}$\\(07+12)}}
&\raisebox{-2.5ex}[0pt]{ \footnotesize\shortstack [c] { $\textbf{80.4}$ \\(07T+12)} }& \raisebox{-2.5ex}[0pt]{\footnotesize
$34.4$}&\raisebox{-2.5ex}[0pt]{ \footnotesize ICCV17} & \footnotesize
Concatenate features from multiscale contextual regions surrounding an object proposal. Features of different contextual regions are then combined by convolution and element-wise sum. \\
\cline{2-10}
& \raisebox{-2.5ex}[0pt]{\footnotesize SMN \cite{ChenSpatial2017}} & \raisebox{-2.5ex}[0pt]{\footnotesize RPN} & \raisebox{-2.5ex}[0pt]{\footnotesize VGG16 } & \raisebox{-3.5ex}[0pt]{\footnotesize\shortstack [c] { Faster \\ RCNN} } &\raisebox{-3.5ex}[0pt]{ \footnotesize \shortstack [c] {$70.0$ \\(07)}} & \raisebox{-2.5ex}[0pt]{\footnotesize
$-$} & \raisebox{-2.5ex}[0pt]{\footnotesize $-$}& \raisebox{-2.5ex}[0pt]{\footnotesize ICCV17} & \footnotesize
Model object-object relationships efficiently through a spatial memory network. Learn the functionality of NMS automatically.
\\
\cline{2-10}
&\raisebox{-3.5ex}[0pt]{ \footnotesize ORN \cite{Hu2018Relation}}
& \raisebox{-3.5ex}[0pt]{\footnotesize RPN} &\raisebox{-5ex}[0pt]{ \footnotesize
\shortstack [c] {ResNet101\\+DCN}}& \raisebox{-5ex}[0pt]{\footnotesize
\shortstack [c] { Faster \\ RCNN}}&\raisebox{-3.5ex}[0pt]{ \footnotesize $-$}
& \raisebox{-3.5ex}[0pt]{\footnotesize $-$} & \raisebox{-3.5ex}[0pt]{\footnotesize $\textbf{39.0}$}&\raisebox{-3.5ex}[0pt]{ \footnotesize CVPR18} & \footnotesize
Model the relations of a set of object proposals through the interactions between their appearance features and geometry. Learn the functionality of NMS automatically.
\\
\cline{2-10}
&\raisebox{-3.5ex}[0pt]{ \footnotesize SIN \cite{Liu2018Structure}} & \raisebox{-3.5ex}[0pt]{\footnotesize RPN} &\raisebox{-3.5ex}[0pt]{ \footnotesize VGG16}& \raisebox{-4.5ex}[0pt]{\footnotesize \shortstack [c] { Faster \\ RCNN}}&\raisebox{-4.5ex}[0pt]{ \footnotesize \shortstack [c] {$76.0$\\(07+12)}} &
\raisebox{-4.5ex}[0pt]{\footnotesize\shortstack [c] { $73.1$\\(07T+12)}}
& \raisebox{-3.5ex}[0pt]{\footnotesize \shortstack [c] {$23.2$}}&\raisebox{-3.5ex}[0pt]{ \footnotesize CVPR18} & \footnotesize
Formulate object detection as graph-structured inference, where objects are graph nodes and relationships the edges.\\
\Xhline{1.5pt}
\end{tabular}
}
\end{table*}
\begin {figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{LocalContext.pdf}
\caption{Representative approaches that explore local surrounding contextual features:
MRCNN \cite{Gidaris2015}, GBDNet \cite{ GBDCNN2016,Zeng2017Crafting},
ACCNN \cite{Li2017Attentive} and CoupleNet \cite{ZhuICCV2017};
also see Table~\ref{Tab:ContextMethods}.}
\label{Fig:LocalContext}
\end {figure*}
In the physical world, visual objects occur in particular environments and
usually coexist with other related objects. There is strong
psychological evidence \cite{Biederman1972Contextual,Bar2004Visual} that context plays an essential role in human object recognition, and it is recognized that proper modeling of context
helps object detection and recognition \cite{Torralba2003,Oliva2007Role,Chen2016deeplab,
Chen2015Semantic,Divvala2009,Galleguillos2010},
especially when object appearance features are insufficient because of small object size,
object occlusion, or poor image quality. Many different types of context have been discussed \cite{Divvala2009,Galleguillos2010}, and
can broadly be grouped into one of three categories:
\begin{enumerate}
\item Semantic context: The likelihood of an object being found in some scenes but not in others;
\item Spatial context: The likelihood of finding an object in some positions and not others with respect to other objects in the scene;
\item Scale context: Objects have a limited set of sizes relative to other objects in the scene.
\end{enumerate}
A great deal of work \cite{Chen2015c,Divvala2009,Galleguillos2010,Malisiewicz09Beyond,
Murphy03Using,Rabinovich2007Objects,Parikh2012} preceded the prevalence of deep learning, and much of this work has yet to be explored in DCNN-based object detectors \cite{ChenSpatial2017,Hu2018Relation}.
The current state of the art in object detection \cite{Ren2015NIPS,Liu2016SSD,MaskRCNN2017}
detects objects without explicitly exploiting any contextual information. It is broadly agreed that DCNNs
make use of contextual information implicitly \cite{ZeilerFergus2014,Zheng15Conditional}
since they learn hierarchical representations with multiple levels of abstraction. Nevertheless, there is value in exploring
contextual information explicitly in DCNN based detectors
\cite{Hu2018Relation,ChenSpatial2017,Zeng2017Crafting}, so the following
reviews recent work in exploiting contextual cues in DCNN-based object detectors, organized into the categories of {\em global} and {\em local} context, motivated by earlier work in \cite{Zhang13,Galleguillos2010}. Representative approaches are
summarized in Table~\ref{Tab:ContextMethods}.
\subsection{Global Context}
Global context \cite{Zhang13,Galleguillos2010} refers to image or scene level contexts, which can serve as cues for object detection
(\emph{e.g.,} a bedroom will predict the presence of a bed).
In DeepIDNet \cite{Ouyang2015deepid}, the image classification
scores were used as contextual features, and concatenated with
the object detection scores to improve detection results.
In ION \cite{Bell2016ION}, Bell \emph{et al.} proposed to
use spatial Recurrent Neural Networks (RNNs) to explore contextual information across the entire image. In SegDeepM \cite{SegDeepM2015}, Zhu \emph{et al.} proposed a Markov random field model that scores appearance as
well as context for each detection, and allows each candidate box to select a segment out of a large pool of object segmentation proposals and score the agreement
between them. In \cite{Shrivastava2016}, semantic segmentation was used
as a form of contextual priming.
\subsection{Local Context}
Local context \cite{Zhang13,Galleguillos2010,Rabinovich2007Objects}
considers the relationship among locally
nearby objects, as well as the interactions between an object and its surrounding area. In general, modeling object relations is challenging, requiring reasoning about bounding boxes of different classes, locations, scales \emph{etc}.
Deep learning research that explicitly models object relations is quite limited,
with representative works being Spatial Memory Network (SMN) \cite{ChenSpatial2017},
Object Relation Network \cite{Hu2018Relation}, and Structure
Inference Network (SIN) \cite{Liu2018Structure}. In SMN,
spatial memory essentially assembles object instances back into a pseudo-image representation that can easily be fed into another CNN for object relationship reasoning, leading to a new sequential reasoning architecture in which the image and the memory are processed in parallel to obtain detections, which in turn update the memory.
Inspired by the recent success of attention modules
in natural language processing \cite{Vaswani2017Attention}, ORN processes a set of objects simultaneously through interactions between their appearance features and geometry. It does not require additional supervision, is easy to embed into existing networks, and is effective in improving the object recognition and duplicate removal steps in modern object detection pipelines, giving rise to the first fully end-to-end object detector.
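The underlying mechanism, scaled dot-product attention over proposal features, can be sketched in Python as follows (ORN additionally encodes pairwise box geometry, which we omit here for brevity):
\begin{verbatim}
import torch

feats = torch.randn(100, 256)   # appearance features, 100 proposals
q = k = v = feats               # queries, keys, values
attn = torch.softmax(q @ k.t() / 256 ** 0.5, dim=-1)
relations = attn @ v            # each proposal attends to all others
print(relations.shape)          # torch.Size([100, 256])
\end{verbatim}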
SIN \cite{Liu2018Structure} considered two kinds of context: scene contextual information
and object relationships within a single image. It formulates object
detection as a problem of graph inference, where the objects are treated as nodes in a graph
and relationships between objects are modeled as edges.
A wider range of methods has approached the context challenge with a simpler idea: enlarging
the detection window size to extract some form of local context. Representative approaches include
MRCNN \cite{Gidaris2015}, Gated BiDirectional CNN (GBDNet) \cite{ GBDCNN2016,Zeng2017Crafting},
Attention to Context CNN (ACCNN) \cite{Li2017Attentive}, CoupleNet \cite{ZhuICCV2017}, and
Sermanet \emph{et al.} \cite{Sermanet2013c}.
In MRCNN \cite{Gidaris2015} (Fig.~\ref{Fig:LocalContext} (a)), in addition to the features
extracted from the original object proposal at the last CONV layer of the backbone,
Gidaris and Komodakis proposed to extract features from a number of
different regions of an object proposal (half regions, border regions, central regions,
contextual region and semantically segmented regions),
in order to obtain a richer and more robust object representation.
All of these features are combined by concatenation.
Quite a number of methods, all closely related to MRCNN, have been proposed since then. The method in \cite{Zagoruyko2016} used only four contextual regions, organized in a foveal structure, where the classifiers along multiple paths are trained jointly end-to-end.
Zeng \emph{et al.}
proposed GBDNet \cite{GBDCNN2016,Zeng2017Crafting} (Fig.~\ref{Fig:LocalContext} (b))
to extract features from multiscale contextualized regions
surrounding an object proposal to improve detection performance.
In contrast to the somewhat naive approach of learning CNN features
for each region separately and then concatenating them, GBDNet passes messages
among features from different contextual regions.
Noting that message passing is not always
helpful, but dependent on individual samples,
Zeng \emph{et al.} \cite{GBDCNN2016} used gated functions
to control message transmission. Li \emph{et al.} \cite{Li2017Attentive}
presented ACCNN (Fig.~\ref{Fig:LocalContext} (c)) to utilize both global and local contextual information: the global context was captured using a Multiscale Local Contextualized (MLC) subnetwork, which recurrently generates an attention map
for an input image to highlight promising contextual locations; local context adopted a method similar to that of MRCNN \cite{Gidaris2015}.
As shown in Fig.~\ref{Fig:LocalContext} (d), CoupleNet \cite{ZhuICCV2017} is conceptually similar to ACCNN \cite{Li2017Attentive}, but is built upon RFCN \cite{Dai2016RFCN}; while RFCN captures object information with position-sensitive RoI pooling, CoupleNet adds a branch to encode the global context with regular RoI pooling.
\section{Detection Proposal Methods}
\label{Sec:DetectionProposal}
An object can be located at any position and scale in
an image. During the heyday of handcrafted feature
descriptors (SIFT \cite{Lowe2004}, HOG \cite{Dalal2005HOG} and
LBP \cite{Ojala02}), the most successful methods for object detection (\emph{e.g.} DPM \cite{Felzenszwalb08CVPR}) used \emph{sliding window} techniques \cite{Viola2001,Dalal2005HOG,Felzenszwalb08CVPR,Harzallah2009Combining,Vedaldi09Multiple}.
However, the number of windows is huge, growing with the number of pixels in an image, and the need to search at multiple scales and aspect ratios further increases the search space\footnote{Sliding window based detection requires classifying around $10^4$-$10^5$
windows per image. The number of windows grows significantly to $10^6$-$10^7$ windows per image when considering multiple scales and aspect ratios.}. Therefore, it is computationally too expensive to apply sophisticated classifiers.
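A back-of-envelope calculation, under assumed settings of a $1000\times 600$ image, a 4-pixel stride, 5 scales and 3 aspect ratios, illustrates these orders of magnitude:
\begin{verbatim}
# Rough sliding-window count under assumed settings.
positions = (1000 // 4) * (600 // 4)  # window positions at one scale
print(positions)                      # 37500, i.e. 10^4-10^5 windows
print(positions * 5 * 3)              # 562500, approaching 10^6 with
                                      # 5 scales and 3 aspect ratios
\end{verbatim}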
Around 2011, researchers proposed to relieve the tension between computational tractability and high detection quality by using
\emph{detection proposals}\footnote{We use the terminology \emph{detection proposals},
\emph{object proposals} and \emph{region proposals}
interchangeably.} \cite{Van2011SS,Uijlings2013b}.
Originating in the idea of \emph{objectness} proposed by \cite{Alexe2010Object},
object proposals are a set of candidate regions in an image that are likely to contain objects, and if high object recall can be achieved with a modest number of object proposals (like one hundred), significant speed-ups over the sliding window approach can be gained, allowing the use of more sophisticated classifiers.
Detection proposals are usually used as a pre-processing step, limiting the number of regions that need to be evaluated by the detector, and should have the following characteristics:
\begin{enumerate}
\item High recall, which can be achieved with only a few proposals;
\item Accurate localization, such that the proposals match the object bounding
boxes as accurately as possible; and
\item Low computational cost.
\end{enumerate}
The success of object detection based on detection proposals \cite{Van2011SS,Uijlings2013b} has attracted broad interest \cite{Carreira2012,Arbelaez2014,Alexe2012,Cheng2014bing,EdgeBox2014,Endres14Category,
Philipp14Geodesic,Manen2013Prime}. A comprehensive review of object proposal algorithms is beyond the scope of this paper, because object proposals have applications beyond object detection \cite{Arbelaez2012Semantic,Guillaumin2014ImageNet,Xhu2017Soft}.
We refer interested readers to the recent surveys \cite{Hosang2016,Chavali2016} which provide in-depth analysis of many classical object proposal algorithms and their impact on detection performance. Our interest here is to review object proposal methods that are based on DCNNs, output class agnostic proposals, and are related to generic object detection.
In 2014, the integration of object proposals \cite{Van2011SS,Uijlings2013b} and DCNN features \cite{Krizhevsky2012} led to the milestone RCNN \cite{Girshick2014RCNN} in generic object detection. Since then, detection proposals have quickly become a standard preprocessing step, as evidenced by the fact that all winning entries in the PASCAL VOC \cite{Everingham2010}, ILSVRC \cite{Russakovsky2015} and MS COCO \cite{Lin2014} object detection challenges since 2014 have used detection proposals \cite{Girshick2014RCNN,Ouyang2015deepid,Girshick2015FRCNN,
Ren2015NIPS,Zeng2017Crafting,MaskRCNN2017}.
Among object proposal approaches based on traditional low-level cues (\emph{e.g.,} color, texture, edge and gradients), Selective Search \cite{Uijlings2013b}, MCG \cite{Arbelaez2014} and EdgeBoxes \cite{EdgeBox2014} are among the more popular. As the domain rapidly progressed,
traditional object proposal approaches \cite{Uijlings2013b,Hosang2016,EdgeBox2014},
which were adopted as external modules independent of the detectors, became the speed bottleneck
of the detection pipeline \cite{Ren2015NIPS}. An emerging class of object proposal algorithms
\cite{MultiBox1,Ren2015NIPS,DeepBox2015,Deepproposal2015,DeepMask2015,CRAFT2016}
using DCNNs has attracted broad attention.
\begin{table*}[!t]
\caption {Summary of object proposal methods using DCNN. Blue indicates the number of object proposals. The detection results on COCO are based on mAP@IoU[0.5, 0.95], unless stated otherwise.}\label{Tab:ObjectProposals}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{18cm}{!}{
\begin{tabular}{!{\vrule width1.5bp}c|c|p{1.5cm}<{\centering}|c|c|c|c|c|c|c|c|p{9cm}<{\centering}!{\vrule width1.5bp}}
\Xhline{1.5pt}
\footnotesize $\quad\quad$ & \footnotesize Proposer & \footnotesize Backbone & \footnotesize Detector & \multicolumn{3}{c|}{Recall@IoU (VOC07)} &\multicolumn{3}{c|}{Detection Results (mAP)} & \footnotesize Published & \footnotesize \\
\cline{5-7}\cline{8-10}
\footnotesize & \footnotesize Name & \footnotesize Network & \footnotesize Tested & \footnotesize $0.5$ & \footnotesize $0.7$ & \footnotesize $0.9$ & \footnotesize VOC07 & \footnotesize VOC12 & \footnotesize COCO & \footnotesize In & \footnotesize
Highlights \\
\cline{2-12}
\footnotesize \multirow{8}{*}{\rotatebox{90}{Bounding Box Object Proposal Methods$\quad\quad\quad\quad\quad\quad\quad\quad$ }} & \footnotesize \raisebox{-3ex}[0pt]{MultiBox1\cite{ MultiBox1}} & \footnotesize \raisebox{-3.5ex}[0pt]{AlexNet} & \footnotesize \raisebox{-3ex}[0pt]{RCNN}& \footnotesize \raisebox{-3ex}[0pt]{$-$}& \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize \raisebox{-5.5ex}[0pt]{ \shortstack [c] {$29.0$ \\(\textcolor{blue}{10})\\ (12)}} & \footnotesize \raisebox{-3ex}[0pt]{ $-$} & \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize \raisebox{-3ex}[0pt]{CVPR14} & \footnotesize Learns a class agnostic regressor on a small set of 800 predefined anchor boxes. Do not share features for detection. \\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{ DeepBox
\cite{ DeepBox2015}} & \footnotesize \raisebox{-3.5ex}[0pt]{ VGG16 } & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {Fast \\ RCNN}} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.96$ \\ (\textcolor{blue}{1000})}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.84$ \\ (\textcolor{blue}{1000})}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.15$ \\ (\textcolor{blue}{1000})}}& \footnotesize \raisebox{-3.5ex}[0pt]{$-$}
& \footnotesize \raisebox{-3.5ex}[0pt]{ $-$} & \footnotesize \raisebox{-5.5ex}[0pt]{\shortstack [c] {$37.8$\\(\textcolor{blue}{500})\\(IoU@0.5)} } & \footnotesize \raisebox{-3.5ex}[0pt]{ICCV15} & \footnotesize Use a lightweight CNN to learn to rerank proposals generated by EdgeBox. Can run at 0.26s per image. Do not share features for detection. \\
\cline{2-12}
& \footnotesize\raisebox{-4.5ex}[0pt]{ RPN\cite{Ren2015NIPS,Ren2016a}} & \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] { VGG16 } }& \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}}& \footnotesize \raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.97$ \\ (\textcolor{blue}{300})\\0.98\\(\textcolor{blue}{1000})}}& \footnotesize \raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.79$ \\ (\textcolor{blue}{300})\\0.84\\(\textcolor{blue}{1000})}}& \footnotesize \raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.04$ \\ (\textcolor{blue}{300})\\0.04\\(\textcolor{blue}{1000})}}& \footnotesize \raisebox{-6.5ex}[0pt]{\shortstack [c] { $73.2$ \\ (\textcolor{blue}{300})\\(07+12)}} & \footnotesize \raisebox{-6.5ex}[0pt]{\shortstack [c] { $70.4$ \\ (\textcolor{blue}{300})\\(07++12)} }& \footnotesize \raisebox{-4.5ex}[0pt]{ \shortstack [c] { $21.9$ \\ (\textcolor{blue}{300})} }& \footnotesize\raisebox{-4.5ex}[0pt]{ NIPS15} & \footnotesize
The first to generate object proposals by sharing full image convolutional features with detection.
Most widely used object proposal method. Significant improvements in detection speed. \\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{DeepProposal\cite{ Deepproposal2015} }& \footnotesize \raisebox{-3.5ex}[0pt]{ VGG16 } & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {Fast \\ RCNN}}& \footnotesize \raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.74$ \\ (\textcolor{blue}{100})\\0.92\\(\textcolor{blue}{1000})}}& \footnotesize \raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.58$ \\ (\textcolor{blue}{100})\\0.80\\(\textcolor{blue}{1000})}}& \footnotesize\raisebox{-7.5ex}[0pt]{\shortstack [c] {$0.12$ \\ (\textcolor{blue}{100})\\0.16\\(\textcolor{blue}{1000})}}& \footnotesize \raisebox{-6.5ex}[0pt]{ \shortstack [c] {$53.2$ \\ (\textcolor{blue}{100})\\(07)}} & \footnotesize \raisebox{-3.5ex}[0pt]{ $-$}& \footnotesize \raisebox{-3.5ex}[0pt]{$-$}& \footnotesize\raisebox{-3.5ex}[0pt]{ ICCV15} & \footnotesize Generate proposals inside a DCNN in a multiscale manner.
Share features with the detection network.\\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{CRAFT \cite{ CRAFT2016}} & \footnotesize \raisebox{-3.5ex}[0pt]{ VGG16} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.98$ \\ (\textcolor{blue}{300}) }}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.90$ \\ (\textcolor{blue}{300}) }}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.13$ \\ (\textcolor{blue}{300}) }}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$75.7$ \\ (07+12) }}& \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {71.3\\ (12)}} & \footnotesize \raisebox{-3.5ex}[0pt]{ $-$} & \footnotesize\raisebox{-3.5ex}[0pt]{ CVPR16} & \footnotesize
Introduced a classification network (\emph{i.e.} two class Fast RCNN) cascade that comes after the RPN. Not sharing features extracted for detection.\\
\cline{2-12}
& \footnotesize\raisebox{-3ex}[0pt]{ AZNet \cite{ Lu2016Adaptive} }& \footnotesize
\raisebox{-3ex}[0pt]{VGG16 }& \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {Fast \\ RCNN}} & \footnotesize\raisebox{-4ex}[0pt]{ \shortstack [c] {$0.91$ \\ (\textcolor{blue}{300}) }} & \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {$0.71$ \\ (\textcolor{blue}{300}) }}& \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {$0.11$ \\ (\textcolor{blue}{300}) }}& \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {$70.4$\\
(07)} }& \footnotesize \raisebox{-3.5ex}[0pt]{$-$}& \footnotesize \raisebox{-3.5ex}[0pt]{$22.3$} & \footnotesize \raisebox{-3.5ex}[0pt]{CVPR16 }& \footnotesize
Use coarse-to-fine search: start from large regions, then recursively search for subregions that may contain objects. Adaptively guide computational resources to focus on likely subregions. \\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{ ZIP \cite{Hongyang2018Zoom}} & \footnotesize \raisebox{-3.5ex}[0pt]{Inception v2} & \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {Faster \\ RCNN}} & \footnotesize \raisebox{-5.5ex}[0pt]{\shortstack [c] {$0.85$ \\ (\textcolor{blue}{300})\\COCO }} & \footnotesize \raisebox{-5.5ex}[0pt]{\shortstack [c] {$0.74$ \\ (\textcolor{blue}{300})\\COCO }} & \footnotesize\raisebox{-5.5ex}[0pt]{\shortstack [c] {$0.35$ \\ (\textcolor{blue}{300})\\COCO }} & \footnotesize\raisebox{-3ex}[0pt]{ \shortstack [c] { $79.8$\\ (07+12)}}
& \footnotesize \raisebox{-3ex}[0pt]{ $-$ }& \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize\raisebox{-3ex}[0pt]{ IJCV18 } & \footnotesize
Generate proposals using conv-deconv network with multilayers; Proposed a map attention decision (MAD) unit to assign the weights for features from different layers. \\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{DeNet\cite{SmithICCV2017}} & \footnotesize \raisebox{-3.5ex}[0pt]{ResNet101 } & \footnotesize\raisebox{-4.5ex}[0pt]{ \shortstack [c] {Fast \\ RCNN}} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.82$ \\ (\textcolor{blue}{300})}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.74$ \\ (\textcolor{blue}{300})}}& \footnotesize\raisebox{-3.5ex}[0pt]{\shortstack [c] {$0.48$ \\ (\textcolor{blue}{300})}} & \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {$77.1$ \\ (07+12)}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] { $73.9$ \\ (07++12)} }& \footnotesize \raisebox{-3.5ex}[0pt]{$33.8$} & \footnotesize \raisebox{-3.5ex}[0pt]{ ICCV17 }& \footnotesize A lot faster than Faster RCNN; Introduces a bounding box corner
estimation for predicting object proposals efficiently
to replace RPN; Does not require
predefined anchors.\\
\Xhline{1.5pt}
\footnotesize & \footnotesize \shortstack [c] {Proposer\\Name} & \footnotesize \shortstack [c] {Backbone\\Network} & \footnotesize \shortstack [c] { Detector \\ Tested}& \multicolumn{3}{c|}{\shortstack [c] {Box Proposals \\ (AR, COCO)}} & \multicolumn{3}{c|}{\shortstack [c] {Segment Proposals\\ (AR, COCO)}} & \footnotesize \shortstack [c] { \shortstack [c] {Published \\ In}} & \footnotesize Highlights \\
\cline{2-12}
\footnotesize \multirow{5}{*}{\rotatebox{90}{Segment Proposal Methods $\quad\quad$ }} & \footnotesize \raisebox{-3.5ex}[0pt]{DeepMask \cite{DeepMask2015}}& \footnotesize
\raisebox{-3.5ex}[0pt]{VGG16 }& \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {Fast \\ RCNN}} &\multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.33$ (\textcolor{blue}{100}), $0.48(\textcolor{blue}{1000})$}} &\multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.26$ (\textcolor{blue}{100}), $0.37(\textcolor{blue}{1000})$}} & \footnotesize \raisebox{-3.5ex}[0pt]{ NIPS15 }& \footnotesize First to generate object mask proposals with DCNN; Slow inference time; Need segmentation annotations for training; Not sharing features with detection network; Achieved mAP of $69.9\%$ (\textcolor{blue}{500}) with Fast RCNN. \\
\cline{2-12}
& \footnotesize \raisebox{-3ex}[0pt]{InstanceFCN \cite{Dai2016Instance}}& \footnotesize
\raisebox{-3ex}[0pt]{ VGG16}& \footnotesize \raisebox{-3ex}[0pt]{$-$}&\multicolumn{3}{c|}{\raisebox{-3ex}[0pt]{$-$}} &\multicolumn{3}{c|}{\raisebox{-3ex}[0pt]{$0.32$ (\textcolor{blue}{100}), $0.39(\textcolor{blue}{1000})$}} & \footnotesize\raisebox{-3ex}[0pt]{ECCV16} & \footnotesize
Combines ideas of FCN \cite{FCNCVPR2015} and DeepMask \cite{DeepMask2015}. Introduces instance sensitive score maps. Needs segmentation annotations to train the network. \\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{SharpMask \cite{Pinheiro2016} }& \footnotesize
\raisebox{-3.5ex}[0pt]{ MPN \cite{Zagoruyko2016}}& \footnotesize \raisebox{-3.5ex}[0pt]{\shortstack [c] {Fast \\ RCNN}}&\multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.39$ (\textcolor{blue}{100}), $0.53(\textcolor{blue}{1000})$}} &\multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.30$ (\textcolor{blue}{100}), $0.39(\textcolor{blue}{1000})$}}& \footnotesize\raisebox{-3.5ex}[0pt]{ ECCV16} & \footnotesize Leverages features at multiple convolutional layers by introducing a top-down refinement module. Does not share features with detection network. Needs segmentation annotations for training.
\\
\cline{2-12}
& \footnotesize \raisebox{-3.5ex}[0pt]{FastMask\cite{ Hu2017FastMask} }& \footnotesize \raisebox{-3.5ex}[0pt]{ResNet39} & \footnotesize \raisebox{-3.5ex}[0pt]{$-$} &\multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.43$ (\textcolor{blue}{100}), $0.57(\textcolor{blue}{1000})$}} & \multicolumn{3}{c|}{\raisebox{-3.5ex}[0pt]{$0.32$ (\textcolor{blue}{100}), $0.41(\textcolor{blue}{1000})$}} & \footnotesize \raisebox{-3.5ex}[0pt]{ CVPR17} & \footnotesize Generates instance segment proposals efficiently in one-shot manner similar to SSD \cite{Liu2016SSD}. Uses multiscale convolutional features. Uses segmentation annotations for training. \\
\Xhline{1.5pt}
\end{tabular}
}
\end{table*}
Recent DCNN based object proposal methods generally fall into two categories:
{\em bounding box} based and {\em object segment} based, with representative methods summarized in Table \ref{Tab:ObjectProposals}.
\begin {figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{Anchor.pdf}
\caption{Illustration of the Region Proposal Network (RPN) introduced in \cite{Ren2015NIPS}.}
\label{fig:anchor}
\end {figure}
\textbf{Bounding Box Proposal Methods} are best exemplified by the RPN method \cite{Ren2015NIPS} of Ren \emph{et al.}, illustrated in Fig.~\ref{fig:anchor}.
RPN predicts object proposals by sliding a small network over the feature map of the last shared CONV layer. At each sliding window location, $k$ proposals are predicted by using $k$ anchor boxes, where each anchor box\footnote{The concept of ``anchor'' first appeared in \cite{Ren2015NIPS}.} is centered at some location in the image, and is associated with a particular scale and aspect ratio.
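For illustration, anchor generation can be sketched as follows, with scales and ratios chosen as plausible placeholders rather than the exact settings of \cite{Ren2015NIPS}:
\begin{verbatim}
import itertools
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for i, j in itertools.product(range(feat_h), range(feat_w)):
        # Center of this sliding-window location in the image plane.
        cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
        for s, r in itertools.product(scales, ratios):
            w, h = s * np.sqrt(r), s / np.sqrt(r)  # here r = w / h
            anchors.append([cx - w / 2, cy - h / 2,
                            cx + w / 2, cy + h / 2])
    return np.array(anchors)  # k = len(scales)*len(ratios) per location

print(make_anchors(2, 2).shape)   # (2 * 2 * 9, 4) = (36, 4)
\end{verbatim}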
Ren \emph{et al.} \cite{Ren2015NIPS} proposed integrating RPN and Fast RCNN into a single network by sharing their convolutional layers, leading to Faster RCNN, the first end-to-end detection pipeline. RPN has been broadly selected as the proposal method by many state-of-the-art object detectors, as can be observed from Tables \ref{Tab:EnhanceFeatures} and \ref{Tab:ContextMethods}.
Instead of fixing {\em a priori} a set of anchors as MultiBox \cite{MultiBox1,MultiBox2} and RPN \cite{Ren2015NIPS}, Lu \emph{et al.} \cite{Lu2016Adaptive} proposed generating anchor locations by using a recursive search strategy which can adaptively guide computational resources to focus on sub-regions likely to contain objects. Starting with the whole image,
all regions visited during the search process serve as anchors.
For any anchor region encountered during the search procedure, a
scalar zoom indicator is used to decide whether to
further partition the region, and a set of bounding
boxes with objectness scores is computed by an Adjacency and Zoom Network (AZNet), which extends RPN by adding a branch to
compute the scalar zoom indicator in parallel with the existing branch.
Further work attempts to generate object proposals
by exploiting multilayer convolutional features.
Concurrent with RPN \cite{Ren2015NIPS}, Ghodrati \emph{et al.} \cite{Deepproposal2015} proposed DeepProposal, which generates object proposals
by using a cascade of multiple convolutional features, building an inverse cascade to select the most promising object locations and to refine
their boxes in a coarse-to-fine manner. An improved variant of RPN, HyperNet \cite{HyperNet2016} designs Hyper Features which aggregate multilayer convolutional features and shares them both in generating proposals
and detecting objects via an end-to-end joint training strategy.
Yang \emph{et al.} proposed CRAFT \cite{ CRAFT2016}
which also used a cascade strategy, first
training an RPN network to generate object proposals and then using them
to train another binary Fast RCNN network to further distinguish objects from background.
Li \emph{et al.} \cite{Hongyang2018Zoom} proposed ZIP to improve RPN by predicting object proposals with multiple
convolutional feature maps at different network depths to integrate both low level
details and high level semantics. The backbone used in ZIP is a ``zoom out and in''
network inspired by the conv and deconv structure \cite{FCNCVPR2015}.
Finally, recent work deserving mention includes DeepBox \cite{DeepBox2015}, which proposed a lightweight CNN to learn to rerank proposals generated by EdgeBox, and DeNet \cite{SmithICCV2017}
which introduces bounding box corner
estimation to predict object proposals efficiently
to replace RPN in a Faster RCNN style detector.
\textbf{Object Segment Proposal Methods} \cite{DeepMask2015,Pinheiro2016}
aim to generate segment proposals that are likely to correspond to objects.
Segment proposals are more informative than bounding box proposals, and
take a step further towards object instance segmentation \cite{Hariharan2014,Dai2016Aware,Li2017Fully}. In addition, using instance segmentation supervision can improve the performance of bounding box object detection. The pioneering work of DeepMask, proposed by Pinheiro \emph{et al.} \cite{DeepMask2015}, learns segment proposals directly from raw image data with a deep network. Similarly to RPN,
after a number of shared convolutional layers DeepMask splits the network into two branches in order to predict a class agnostic mask and an associated objectness score. Also similar to the efficient sliding window strategy in OverFeat \cite{OverFeat2014}, the trained DeepMask network is applied in a sliding window manner to an image (and its rescaled versions) during inference. More recently, Pinheiro \emph{et al.} \cite{Pinheiro2016} proposed SharpMask by augmenting
the DeepMask architecture with a refinement module, similar to the architectures shown in Fig.~\ref{fig:MultiLayerCombine} (b1) and (b2),
augmenting the feed-forward network with a top-down refinement process. SharpMask can efficiently integrate spatially rich information from early features with strong semantic information encoded in later layers to generate high fidelity object masks.
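An illustrative (and deliberately simplified) two-branch head in the spirit of DeepMask can be written as follows; the channel and mask sizes are placeholder assumptions, not the released configuration:
\begin{verbatim}
import torch
import torch.nn as nn

class MaskProposalHead(nn.Module):
    def __init__(self, in_ch=512, mask_size=56):
        super().__init__()
        # Branch 1: class-agnostic mask logits for the window.
        self.mask_branch = nn.Sequential(
            nn.Conv2d(in_ch, 128, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(128 * 14 * 14, mask_size * mask_size))
        # Branch 2: a single objectness score for the window.
        self.score_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, 1))

    def forward(self, trunk_feat):             # N x in_ch x 14 x 14
        return (self.mask_branch(trunk_feat),  # flattened mask logits
                self.score_branch(trunk_feat)) # objectness logit

m, s = MaskProposalHead()(torch.randn(2, 512, 14, 14))
print(m.shape, s.shape)
\end{verbatim}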
Motivated by Fully Convolutional Networks (FCN) for semantic segmentation \cite{FCNCVPR2015} and DeepMask \cite{DeepMask2015}, Dai \emph{et al.} proposed InstanceFCN \cite{Dai2016Instance} to generate instance segment proposals. Similar to DeepMask, the InstanceFCN network is split into two fully convolutional branches, one to generate instance sensitive score maps, the other to
predict the objectness score. Hu \emph{et al.} proposed FastMask \cite{Hu2017FastMask} to efficiently generate instance segment proposals in a one-shot manner, similar to SSD \cite{Liu2016SSD},
in order to make use of multiscale convolutional features. Sliding windows extracted densely from
multiscale convolutional feature maps were input to a scale-tolerant attentional head module in order to predict segmentation masks and objectness scores. FastMask is claimed to run at 13 FPS on
$800\times600$ images.
\begin{table*}[!t]
\caption {Representative methods for training strategies and class imbalance handling. Results on COCO are reported with Test Dev. The detection results on COCO are based on mAP@IoU[0.5, 0.95].}\label{Tab:ClassImbalance}
\centering
\renewcommand{\arraystretch}{1.2}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{18.5cm}{!}{
\begin{tabular}{!{\vrule width1.5bp}c|c|c|c|c|c|c|c|p{8cm}<{\centering}!{\vrule width1.5bp}}
\Xhline{1.5pt}
\footnotesize \shortstack [c] {Detector \\ Name} & \footnotesize \shortstack [c] {Region \\ Proposal} & \footnotesize \shortstack [c] {Backbone \\ DCNN} & \footnotesize \shortstack [c] {Pipeline \\ Used} & \footnotesize \shortstack [c] {VOC07 \\ Results} & \footnotesize \shortstack [c] {VOC12 \\ Results} & \footnotesize \shortstack [c] {COCO \\ Results} & \footnotesize \shortstack [c] {Published \\ In} & \footnotesize
Highlights \\
\Xhline{1.5pt}
\raisebox{-4ex}[0pt]{MegDet \cite{Peng2018MegDet}} & \footnotesize \raisebox{-4ex}[0pt]{ RPN} & \footnotesize \raisebox{-5ex}[0pt]{ \shortstack [c] { ResNet50\\+FPN }}& \footnotesize
\raisebox{-5ex}[0pt]{ \shortstack [c] { Faster\\RCNN}} & \footnotesize \raisebox{-4ex}[0pt]{$-$}& \footnotesize \raisebox{-4ex}[0pt]{$-$} & \footnotesize \raisebox{-4ex}[0pt]{$52.5$} & \footnotesize \raisebox{-4ex}[0pt]{CVPR18} & \footnotesize Allow training with much larger minibatch size than before by introducing cross GPU batch normalization; Can finish the COCO training in 4 hours on 128 GPUs and achieved improved accuracy; Won COCO2017 detection challenge. \\
\hline
\raisebox{-2.5ex}[0pt]{SNIP
\cite{Singh2018SNIP} }& \footnotesize \raisebox{-2.5ex}[0pt]{RPN }& \footnotesize \raisebox{-3.5ex}[0pt]{ \shortstack [c] {DPN \cite{Chen2017Dual}\\+DCN \cite{Dai17Deformable}}} & \footnotesize \raisebox{-2.5ex}[0pt]{RFCN} & \footnotesize \raisebox{-2.5ex}[0pt]{$-$ }& \footnotesize \raisebox{-2.5ex}[0pt]{$-$}& \footnotesize \raisebox{-2.5ex}[0pt]{$48.3$}
& \footnotesize \raisebox{-2.5ex}[0pt]{CVPR18} & \footnotesize
A new multiscale training scheme.
Empirically examined the effect of up-sampling for small object detection. During training, only select objects that fit the scale of features as positive samples. \\
\hline
\raisebox{-1ex}[0pt]{SNIPER
\cite{Singh2018sniper}} & \footnotesize \raisebox{-1ex}[0pt]{RPN} & \footnotesize \raisebox{-2.5ex}[0pt]{\shortstack [c] {ResNet101\\+DCN}} & \footnotesize \raisebox{-2.5ex}[0pt]{\shortstack [c] {Faster \\ RCNN}} & \footnotesize\raisebox{-1ex}[0pt]{ $-$} & \footnotesize \raisebox{-1ex}[0pt]{$-$} & \footnotesize \raisebox{-1ex}[0pt]{$47.6$} & \footnotesize\raisebox{-1ex}[0pt]{
2018 }&
An efficient multiscale training strategy. Process context regions around ground-truth instances at the appropriate scale. \\
\hline
\raisebox{-1.5ex}[0pt]{OHEM \cite{Shrivastava2016OHEM} }& \footnotesize \raisebox{-1.5ex}[0pt]{SS} & \footnotesize \raisebox{-1.5ex}[0pt]{VGG16} & \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] { Fast \\ RCNN} }& \footnotesize\raisebox{-2.5ex}[0pt]{ \shortstack [c] {$78.9$\\
(07+12)}}& \footnotesize \raisebox{-2.5ex}[0pt]{\shortstack [c] { $76.3$\\
(07++12)}}& \footnotesize \raisebox{-1.5ex}[0pt]{$22.4$}& \footnotesize
\raisebox{-1.5ex}[0pt]{CVPR16}& \footnotesize
A simple and effective Online Hard Example Mining algorithm to improve training of region based detectors. \\
\hline
\raisebox{-1.5ex}[0pt]{ FactorNet \cite{Ouyang2016Factors} }& \footnotesize \raisebox{-1.5ex}[0pt]{SS} & \footnotesize \raisebox{-1.5ex}[0pt]{GoogLeNet} & \footnotesize \raisebox{-2.5ex}[0pt]{ \shortstack [c] {RCNN} }& \footnotesize\raisebox{-2.5ex}[0pt]{$-$ } & \footnotesize \raisebox{-2.5ex}[0pt]{$-$ } & \raisebox{-2.5ex}[0pt]{$-$ } & \footnotesize
\raisebox{-1.5ex}[0pt]{CVPR16}& \footnotesize Identify the imbalance in the number of samples for different object categories; propose a divide-and-conquer feature learning scheme. \\
\hline
\raisebox{-5.5ex}[0pt]{Chained Cascade \cite{CascadeRCNN2018} }& \footnotesize \raisebox{-7ex}[0pt]{\shortstack [c] {SS \\ CRAFT} } & \footnotesize \raisebox{-7ex}[0pt]{\shortstack [c] {VGG \\ Inceptionv2}} & \footnotesize \raisebox{-7.5ex}[0pt]{ \shortstack [c] {Fast RCNN, \\ Faster RCNN} }& \footnotesize\raisebox{-8.5ex}[0pt]{ \shortstack [c] {$80.4$\\
(07+12) \\ (SS+VGG)}}& \footnotesize \raisebox{-5.5ex}[0pt]{$-$}& \footnotesize \raisebox{-5.5ex}[0pt]{$-$}& \footnotesize
\raisebox{-5.5ex}[0pt]{ICCV17}& \footnotesize Jointly learn DCNN and multiple stages of cascaded classifiers. Boost detection accuracy on PASCAL VOC 2007 and ImageNet for both fast RCNN and Faster RCNN using different region proposal methods. \\
\hline
\raisebox{-5.5ex}[0pt]{Cascade RCNN \cite{CascadeRCNN2018} }& \footnotesize \raisebox{-5.5ex}[0pt]{RPN} & \footnotesize \raisebox{-8.5ex}[0pt]{\shortstack [c] {VGG\\ResNet101\\+FPN}} & \footnotesize \raisebox{-6.5ex}[0pt]{ \shortstack [c] {Faster RCNN}}& \footnotesize\raisebox{-6.5ex}[0pt]{ \shortstack [c] {$-$}}& \footnotesize \raisebox{-6.5ex}[0pt]{\shortstack [c] {$-$}}& \footnotesize \raisebox{-5.5ex}[0pt]{$42.8$}& \footnotesize
\raisebox{-5.5ex}[0pt]{CVPR18}& \footnotesize Jointly learn DCNN and multiple stages of cascaded classifiers, which are learned using different localization accuracy for selecting positive samples. Stack bounding box regression at multiple stages. \\
\hline
\raisebox{-3ex}[0pt]{RetinaNet \cite{LinICCV2017}} & \footnotesize \raisebox{-4ex}[0pt]{$-$} & \footnotesize \raisebox{-4ex}[0pt]{\shortstack [c] {ResNet101\\+FPN }}& \footnotesize \raisebox{-3ex}[0pt]{RetinaNet }& \footnotesize \raisebox{-3ex}[0pt]{$-$}& \footnotesize \raisebox{-3ex}[0pt]{$-$} & \footnotesize \raisebox{-3ex}[0pt]{$39.1$} & \footnotesize
\raisebox{-3ex}[0pt]{ICCV17} & \footnotesize Propose a novel Focal Loss which focuses training on hard examples. Handles well the problem of imbalance of positive and negative samples when training a one-stage detector. \\
\Xhline{1.5pt}
\end{tabular}
}
\end{table*}
\section{Other Issues}
\label{sec:otherissue}
\textbf{Data Augmentation.} Performing data augmentation for learning DCNNs \cite{Chatfield2014,Girshick2015FRCNN,Girshick2014RCNN} is generally recognized to be important for visual recognition. Trivial data augmentation refers to perturbing an image by
transformations that leave the underlying category unchanged, such as
cropping, flipping, rotating, scaling, translating, color perturbations, and adding noise. By artificially enlarging the number of samples, data augmentation helps in reducing overfitting and improving generalization.
It can be used at training time, at test time, or both.
Nevertheless, it has the obvious limitation that the time required for training increases significantly. Data augmentation may synthesize completely new training images \cite{Peng2015Learning,Wang2017}; however, it is hard to guarantee that the synthetic images generalize well to real ones. Some researchers \cite{Dwibedi2017Cut,Gupta2016Synthetic} proposed augmenting datasets by pasting real segmented objects into natural images; indeed, Dvornik \emph{et al.} \cite{Dvornik2018Modeling} showed
that appropriately modeling the visual context surrounding objects is
crucial to place them in the right environment, and proposed a context model to automatically find appropriate locations on images to place new objects for data augmentation.
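As a minimal sketch, a label-preserving augmentation pipeline of the kind described above can be assembled with torchvision transforms; note that for detection the bounding box coordinates must be transformed consistently with the image, which is omitted here:
\begin{verbatim}
from torchvision import transforms

# Label-preserving augmentations (illustrative parameter choices).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])
# augmented = augment(pil_image)   # apply to each training image
\end{verbatim}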
\textbf{Novel Training Strategies.} Detecting objects under a wide range of scale variations, especially the detection of very small objects, stands out as a key challenge.
It has been shown \cite{Huang2016Speed,Liu2016SSD}
that image resolution has a considerable impact on detection accuracy; therefore, scaling is among the most commonly used augmentations, since higher resolutions increase the possibility of detecting small objects \cite{Huang2016Speed}.
Recently, Singh \emph{et al.} proposed advanced and efficient
data augmentation methods, SNIP \cite{Singh2018SNIP} and SNIPER \cite{Singh2018sniper}, to address the scale invariance problem, as summarized in Table \ref{Tab:ClassImbalance}. Motivated by the intuitive understanding that
small and large objects are difficult to detect at smaller
and larger scales, respectively, SNIP introduces a novel training
scheme that can reduce scale variations during training, but without
reducing training samples; SNIPER allows for efficient multiscale training, only processing context regions around ground truth objects at the appropriate scale, instead of processing a whole image pyramid.
Peng \emph{et al.} \cite{Peng2018MegDet} studied a key factor in training, the minibatch size, and proposed MegDet, a Large MiniBatch Object Detector, to enable training with a much larger minibatch size than before (from 16 to 256). To avoid convergence failure and to significantly speed up training, they proposed a learning rate policy and Cross GPU Batch Normalization, effectively utilizing 128 GPUs to finish COCO training in 4 hours and winning the COCO 2017 Detection Challenge.
\begin {figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{IOU.pdf}
\caption{Localization error could stem from insufficient overlap or duplicate detections. Localization error is a frequent cause of false positives.}
\label{fig:iou}
\end {figure}
\textbf{Reducing Localization Error.} In object detection, the Intersection Over Union\footnote{Please refer to Section \ref{sec:EvaluationCriteria} for more details on the definition of IOU.} (IOU) between a detected bounding box and its ground truth box is the most popular evaluation metric, and an IOU threshold (\emph{e.g.} a typical value of $0.5$) is required to define positives and negatives. As can be seen from Fig.~\ref{Fig:RegionVsUnified}, in most state of the art detectors \cite{Girshick2015FRCNN,Liu2016SSD,MaskRCNN2017,Ren2015NIPS,YoLo2016} object detection is formulated as a multitask learning problem, \emph{i.e.,} jointly optimizing a softmax classifier, which assigns class labels to object proposals, and bounding box regressors, which localize objects by maximizing the IOU or another metric between detections and ground truth. Bounding boxes are only a crude approximation for articulated objects; consequently, background pixels are almost invariably included in a bounding box, which affects the accuracy of classification and localization. The study in \cite{Hoiem2012} shows that object localization error is one of the most influential forms of error, in addition to confusion between similar objects. Localization error can stem from insufficient overlap (smaller than the required IOU threshold, such as the green box in Fig.~\ref{fig:iou}) or from duplicate detections (\emph{i.e.,} multiple overlapping detections for an object instance). Usually, a post-processing step like Non-Maximum Suppression (NMS) \cite{Bodla2017Soft,Hosang2017Learning} is used to eliminate duplicate
detections. However, due to misalignments the bounding box with better localization could be suppressed during NMS, leading to poorer localization quality (such as the purple box shown in Fig.~\ref{fig:iou}). Therefore, there are quite a few methods aiming at improving detection performance by reducing localization error.
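For reference, the IOU of two axis-aligned boxes is straightforward to compute; the following is a minimal sketch, with boxes given as $(x_1, y_1, x_2, y_2)$ corner coordinates.
\begin{verbatim}
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
\end{verbatim}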
MRCNN \cite{Gidaris2015} introduces iterative bounding box regression, where an RCNN is applied several times. CRAFT \cite{CRAFT2016} and AttractioNet \cite{Gidaris2016Attend} use a multi-stage detection sub-network
to generate accurate proposals before forwarding them to Fast RCNN. Cai and Vasconcelos proposed Cascade RCNN \cite{CascadeRCNN2018}, a multistage extension of RCNN in which a sequence of detectors is trained with increasing IOU thresholds, based on the observation that the output of a detector trained with a certain IOU threshold is a good distribution for training the detector with the next higher threshold, so that the cascade becomes progressively more selective against close false positives. This approach can be built with any RCNN-based detector, and is demonstrated to achieve consistent gains (about 2 to 4 points), independent of baseline detector strength, at only a marginal increase in computation.
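At inference time the structure of such a cascade is simple to convey; the sketch below assumes a hypothetical per-stage head interface \texttt{stage(image, boxes)} and is meant only to illustrate the progressive refinement, not the details of \cite{CascadeRCNN2018}.
\begin{verbatim}
def cascade_forward(image, proposals, stages):
    """Sketch of multistage (Cascade RCNN style) inference. `stages`
    is a list of detection heads, each trained with a higher IOU
    threshold than the last; the boxes refined by one stage become
    the input of the next, so candidates are progressively tightened.
    The stage(image, boxes) interface is assumed for illustration."""
    boxes = proposals
    for stage in stages:
        scores, boxes = stage(image, boxes)
    return scores, boxes
\end{verbatim}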
There is also recent work \cite{Jiang2018Acquisition,Rezatofighi2019Generalized,Huang2019Mask} formulating IOU directly as the optimization objective, as well as work on improved NMS \cite{Bodla2017Soft,He2019Bounding,Hosang2017Learning,Smith2018Improving}, such as Soft NMS \cite{Bodla2017Soft} and learning NMS \cite{Hosang2017Learning}.
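The difference between classical NMS and Soft NMS can be made concrete with a short single-class sketch of the linear-decay variant, reusing the \texttt{iou} helper above; this is a simplification for illustration, not the reference implementation of \cite{Bodla2017Soft}.
\begin{verbatim}
def soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    """Single-class Soft NMS with linear score decay. Classical NMS
    would discard every box overlapping the current best box by more
    than iou_thresh; Soft NMS merely decays its score, so a
    well-localized but overlapping detection can survive."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        i = max(range(len(scores)), key=lambda k: scores[k])
        best_box, best_score = boxes.pop(i), scores.pop(i)
        if best_score < score_thresh:
            break
        keep.append((best_box, best_score))
        for k in range(len(boxes)):
            overlap = iou(best_box, boxes[k])
            if overlap > iou_thresh:
                scores[k] *= (1.0 - overlap)  # decay, do not remove
    return keep
\end{verbatim}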
\textbf{Class Imbalance Handling.} Unlike image classification, object detection has another unique problem: the serious imbalance between the number of labeled object instances
and the number of background examples (image regions
not belonging to any object class of interest). Most background examples are easy negatives; however, this imbalance can make training very inefficient, and the large number of easy negatives tends to overwhelm it. In the past, this issue has typically been addressed via techniques such as
bootstrapping \cite{Sung1996Learning}. More recently, this problem has also seen some attention \cite{Li2019Gradient,LinICCV2017,Shrivastava2016OHEM}.
Because the region proposal stage rapidly filters out most background regions and proposes
a small number of object candidates, this class imbalance issue is mitigated to some extent in two-stage detectors \cite{Girshick2014RCNN,Girshick2015FRCNN,Ren2015NIPS,MaskRCNN2017}, although example mining approaches, such as Online Hard
Example Mining (OHEM) \cite{Shrivastava2016OHEM}, may be used to maintain a
reasonable balance between foreground and background. In the
case of one-stage object detectors \cite{YoLo2016,Liu2016SSD}, this imbalance is extremely serious (\emph{e.g.} 100,000 background examples to every
object). Lin \emph{et al.} \cite{LinICCV2017} proposed Focal Loss to address this by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Li \emph{et al.} \cite{Li2019Gradient} studied this issue from the perspective of the gradient norm distribution, and proposed a Gradient Harmonizing Mechanism (GHM) to handle it.
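In the binary case, the focal loss of \cite{LinICCV2017} rescales the cross entropy by a modulating factor $(1-p_t)^\gamma$; a minimal sketch with the commonly reported defaults $\gamma=2$ and $\alpha=0.25$ follows.
\begin{verbatim}
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction. p is the predicted
    probability of the positive class and y the label in {0, 1};
    p_t is the probability assigned to the true class. The factor
    (1 - p_t)**gamma shrinks the loss of well-classified examples,
    so the many easy negatives no longer dominate training."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
\end{verbatim}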
\section{Discussion and Conclusion}
\label{Sec:Conclusions}
Generic object detection is an important and challenging problem in computer vision and has received considerable attention. Thanks to remarkable developments in deep learning techniques, the field of object detection has dramatically evolved. As a comprehensive
survey on deep learning for generic object detection, this paper has highlighted the recent achievements, provided a structural taxonomy for methods according
to their roles in detection, summarized existing popular datasets and evaluation criteria, and discussed performance for the most representative methods. We conclude this review with a discussion of the state of the art in Section~\ref{Sec:Performance}, an overall discussion of key issues in Section~\ref{Sec:Discussion}, and finally suggestions for future research directions in Section~\ref{Sec:Directions}.
\subsection{State of the Art Performance}
\label{Sec:Performance}
A large variety of detectors has appeared in the last few years, and the introduction of standard benchmarks, such as PASCAL VOC \cite{Everingham2010,Everingham2015}, ImageNet
\cite{Russakovsky2015} and COCO \cite{Lin2014}, has made it easier to compare detectors. As can be seen from our earlier discussion in Sections~\ref{Sec:Frameworks} through~\ref{sec:otherissue}, it may be misleading to compare detectors in terms of their originally reported performance (\emph{e.g.} accuracy, speed), as they can differ in fundamental / contextual respects, including the following choices:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Meta detection frameworks, such as RCNN \cite{Girshick2014RCNN}, Fast RCNN \cite{Girshick2015FRCNN}, Faster RCNN \cite{Ren2015NIPS}, RFCN \cite{Dai2016RFCN}, Mask RCNN \cite{MaskRCNN2017}, YOLO \cite{YoLo2016} and SSD \cite{Liu2016SSD};
\item Backbone networks such as VGG \cite{Simonyan2014VGG}, Inception \cite{GoogLeNet2015,Ioffe2015,Szegedy2016a}, ResNet \cite{He2016ResNet}, ResNeXt \cite{Xie2016Aggregated}, and Xception \cite{Chollet2017Xception} \emph{etc.} listed in Table \ref{Tab:dcnnarchitectures};
\item Innovations such as multilayer feature combination \cite{FPN2016,Shrivastava2017,DSSD2016}, deformable
convolutional networks \cite{Dai17Deformable}, deformable RoI pooling \cite{Ouyang2015deepid,Dai17Deformable}, heavier heads \cite{Ren2016NOC,Peng2018MegDet}, and lighter heads \cite{Li2018Light};
\item Pretraining with datasets such as ImageNet \cite{Russakovsky2015}, COCO \cite{Lin2014}, Places \cite{Zhou2017Places}, JFT \cite{Hinton2015Distilling} and Open Images \cite{OpenImages2017};
\item Different detection proposal methods and different numbers of object proposals;
\item Train/test data augmentation, novel multiscale training strategies \cite{Singh2018SNIP,Singh2018sniper} \emph{etc.}, and model ensembling.
\end{itemize}
Although it may be impractical to compare every recently proposed
detector, it is nevertheless valuable to integrate representative and publicly available detectors into a common platform and to compare them in a unified manner. There has been very limited work in this regard, except for the study by Huang \emph{et al.} \cite{Huang2016Speed} of the three main families of detectors (Faster RCNN \cite{Ren2015NIPS}, RFCN \cite{Dai2016RFCN} and SSD \cite{Liu2016SSD}), which varied the backbone network, image resolution, and the number of box proposals.
\begin {figure}[!t]
\centering
\includegraphics[width=0.49\textwidth]{cocoresults.pdf}
\caption{Evolution of object detection performance on COCO (Test-Dev results). Results are quoted from \cite{Girshick2015FRCNN,MaskRCNN2017,Ren2016a}. The backbone network, the design of the detection framework, and the availability of good, large scale datasets are the three most important factors in detection accuracy.}
\label{fig:cocoresults}
\end {figure}
In Tables~\ref{Tab:EnhanceFeatures}, \ref{Tab:ContextMethods}, \ref{Tab:ObjectProposals}, \ref{Tab:ClassImbalance} and~\ref{Tab:Detectors}, we have summarized the best reported performance of many methods on three widely used standard benchmarks. The results of these methods were obtained on the same test benchmarks, although the methods differ in one or more of the aspects listed above.
Figs.~\ref{fig:GODResultsStatistics} and~\ref{fig:cocoresults} present a very brief
overview of the state of the art, summarizing the best detection results on the PASCAL VOC, ILSVRC and MSCOCO challenges; more results can be found at the detection challenge websites \cite{ILSVRCResults,COCOResults,VOCResults}. The winner of the Open Images Challenge object detection task achieved $61.71\%$ mAP on the public leaderboard and $58.66\%$ mAP on the private leaderboard, obtained by combining the detection results of several two-stage detectors including Fast RCNN \cite{Girshick2015FRCNN}, Faster RCNN \cite{Ren2015NIPS}, FPN \cite{FPN2016}, Deformable RCNN \cite{Dai17Deformable}, and Cascade RCNN \cite{CascadeRCNN2018}. In summary, the backbone network, the detection framework, and the availability of large scale datasets are the three most important factors in detection accuracy. Ensembles of multiple models, the incorporation of context features, and data augmentation all help to achieve better accuracy.
In less than five years, since AlexNet \cite{Krizhevsky2012} was proposed, the
Top5 error on ImageNet classification \cite{Russakovsky2015} with 1000 classes has dropped from
16\% to 2\%, as shown in Fig.~\ref{fig:ILSVRCclassificationResults}. However, the mAP of the best performing detector \cite{Peng2018MegDet} on COCO \cite{Lin2014}, trained to detect only 80 classes, is only about $73\%$, even at an IOU threshold of $0.5$, illustrating how much harder object detection is than image classification. The accuracy and robustness achieved by state-of-the-art detectors are far from satisfying the requirements of real world applications, so there remains significant room for future improvement.
\subsection{Summary and Discussion}
\label{Sec:Discussion}
With hundreds of references and many dozens of methods discussed throughout this paper, we would now like to focus on the key factors which have emerged in generic object detection based on deep learning.
\textbf{(1) Detection Frameworks: Two Stage vs. One Stage} \\
In Section~\ref{Sec:Frameworks} we identified two major categories of detection frameworks: region based (two stage) and unified (one stage):
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item When a large computational cost is acceptable, two-stage detectors generally produce higher detection accuracies than one-stage ones, as evidenced by the fact that the winning approaches in popular detection challenges are predominantly based on two-stage frameworks, whose structure is more flexible and better suited for region based classification. The most widely used frameworks are Faster RCNN \cite{Ren2015NIPS}, RFCN \cite{Dai2016RFCN} and Mask RCNN \cite{MaskRCNN2017}.
\item It has been shown in \cite{Huang2016Speed} that the detection accuracy of one-stage SSD \cite{Liu2016SSD} is less sensitive to the quality of the backbone network than representative two-stage frameworks.
\item One-stage detectors like YOLO \cite{YoLo2016} and SSD \cite{Liu2016SSD} are generally faster than two-stage ones, because they avoid proposal preprocessing, use lightweight backbone networks, predict with fewer candidate regions, and have a fully convolutional classification subnetwork. However, two-stage detectors can run in real time with the introduction of similar techniques. In any event, whether one stage or two, the most time consuming step is the feature extractor (backbone network) \cite{Law2018CornerNet,Ren2015NIPS}.
\item It has been shown \cite{Huang2016Speed,YoLo2016,Liu2016SSD} that one-stage frameworks like YOLO and SSD typically have much poorer performance when detecting small objects than two-stage architectures like Faster RCNN and RFCN, but are competitive in detecting large objects.
\end{itemize}
There have been many attempts to build better (faster, more accurate, or more robust) detectors by attacking each stage of the detection
framework. No matter whether one, two or multiple stages, the design of the detection framework has converged towards a number of crucial design choices:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item A fully convolutional pipeline
\item Exploring complementary information from other correlated tasks, \emph{e.g.}, Mask RCNN \cite{MaskRCNN2017}
\item Sliding windows \cite{Ren2015NIPS}
\item Fusing information from different layers of the backbone.
\end{itemize}
The recent success of cascades for object detection \cite{CascadeRCNN2018,Cheng2018Decoupled,Cheng18Revisiting} and for instance segmentation on COCO~\cite{Chen2019Hybrid} and other challenges suggests that multistage object detection could become a framework of choice for the speed-accuracy trade-off. A preliminary investigation of this direction is underway in the 2019 WIDER Challenge~\cite{Loy2019Wider}.
\textbf{(2) Backbone Networks} \\
As discussed in Section~\ref{Sec:PopularNetworks}, backbone networks are one of the main driving forces behind the rapid improvement of detection performance, because of the key role played by discriminative object feature representation.
Generally, deeper backbones such as ResNet \cite{He2016ResNet}, ResNeXt \cite{Xie2016Aggregated}, and InceptionResNet \cite{InceptionV4} perform better; however, they are computationally more expensive and require much more data and massive computing for training. Some backbones \cite{Howard2017MobileNets,SqueezeNet2016,Zhang18ShuffleNet} were instead designed with speed in mind, such as MobileNet \cite{Howard2017MobileNets}, which has been shown to achieve VGGNet16 accuracy on ImageNet at roughly $\frac{1}{30}$ of the computational cost and model size.
Backbone training from scratch may become possible
as more training data and better training strategies are available
\cite{Wu2018Group,Luo2019Switchable,Luo2018Towards}.
\textbf{(3) Improving the Robustness of Object Representation}\\
The variation of real world images is a key challenge in object recognition. The variations include lighting, pose, deformations, background clutter, occlusions, blur, resolution, noise,
and camera distortions.
\textbf{(3.1) Object Scale and Small Object Size} \\
Large variations of object scale, particularly those of small objects, pose a great challenge. Here we summarize and discuss the main strategies identified in Section~\ref{Sec:EnhanceFeatures}:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Using image pyramids: They are simple and effective, helping to enlarge small objects and to shrink large ones. They are computationally expensive, but are nevertheless commonly used during inference for better accuracy (a simple inference-time sketch is given after this list).
\item Using features from convolutional layers of different resolutions: In early work like SSD \cite{Liu2016SSD}, predictions are performed independently, and no information from other layers is combined or merged. Now it is quite standard to combine features from different layers, e.g. in FPN \cite{FPN2016}.
\item Using dilated convolutions \cite{Li2018DetNet,Li2019Scale}: A simple and effective method to incorporate broader context and maintain high resolution feature maps.
\item Using anchor boxes of different scales and aspect ratios: The drawbacks are the large number of additional parameters, and the fact that the scales and aspect ratios of anchor boxes are usually determined heuristically.
\item Up-scaling: Particularly for the detection of small objects, high-resolution networks \cite{Sun2019Deep,Sun2019High} can be developed. It remains unclear whether super-resolution techniques improve detection accuracy or not.
\end{itemize}
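As a simple illustration of the image pyramid strategy mentioned above, the sketch below runs a detector at several scales, maps the boxes back to the original resolution, and merges them; the \texttt{detector(image)} interface is hypothetical, and the \texttt{iou} and \texttt{soft\_nms} helpers from earlier sketches are reused.
\begin{verbatim}
def multiscale_detect(image, detector, scales=(0.5, 1.0, 2.0)):
    """Illustrative inference over an image pyramid. `detector` is a
    hypothetical callable returning (boxes, scores) for an image, and
    `image` is assumed to be a PIL-style object with width/height and
    a resize((w, h)) method. Boxes found at each scale are mapped
    back to the original resolution and merged with Soft NMS."""
    all_boxes, all_scores = [], []
    for s in scales:
        w, h = int(image.width * s), int(image.height * s)
        boxes, scores = detector(image.resize((w, h)))
        # Map boxes back into the original coordinate frame.
        all_boxes += [[coord / s for coord in b] for b in boxes]
        all_scores += list(scores)
    return soft_nms(all_boxes, all_scores)  # helper sketched earlier
\end{verbatim}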
Despite recent advances, the detection accuracy for small objects is still much lower than that of larger ones. Therefore, the detection of small objects remains one of the key challenges in object detection. Perhaps localization requirements need to be generalized as a function of scale, since certain applications, e.g. autonomous driving, only require the identification of the existence of small objects within a larger region, and exact localization is not necessary.
\textbf{(3.2) Deformation, Occlusion, and other factors} \\
As discussed in Section~\ref{Sec:MainChallenges}, there are approaches to handling geometric transformation, occlusions, and deformation mainly based on
two paradigms. The first is a spatial transformer network, which uses regression to obtain a deformation field and then warps features according to that field \cite{Dai17Deformable}.
The second is based on a deformable part-based model \cite{Felzenszwalb2010b},
which finds the maximum response to a part filter with spatial constraints taken into
consideration \cite{Ouyang2015deepid, Girshick2015DPMCNN,Wan2015end}.
Rotation invariance may be attractive in certain applications, but there is limited generic object detection work focusing on it, because popular benchmark detection datasets (PASCAL VOC, ImageNet, COCO) do not exhibit large variations in rotation. Occlusion handling is intensively studied in face detection and pedestrian detection, but very little work has been devoted to occlusion handling for generic object detection. In general, despite recent advances, deep networks are still limited by the lack of
robustness to a number of variations, which significantly constrains their real-world applications.
\textbf{(4) Context Reasoning} \\
As introduced in Section \ref{sec:ContextInfo}, objects in the wild typically coexist with other objects and environments.
It has been recognized that contextual information
(object relations, global scene statistics)
helps object detection and recognition \cite{Oliva2007Role}, especially for small objects, occluded objects, and with poor image quality. There was extensive work preceding deep learning \cite{Malisiewicz09Beyond,Murphy03Using,Rabinovich2007Objects,Divvala2009,Galleguillos2010}, and also quite a few works in the era of deep learning \cite{Gidaris2015,GBDCNN2016,Zeng2017Crafting,ChenSpatial2017,Hu2018Relation}.
How to efficiently and effectively incorporate contextual information remains to be explored, possibly guided by how human vision uses context, based on scene graphs \cite{Li2017Scene}, or via the full segmentation of objects and scenes using panoptic segmentation \cite{Kirillov2018Panoptic}.
\textbf{(5) Detection Proposals} \\
Detection proposals significantly reduce search spaces. As recommended in \cite{Hosang2016}, future detection proposals will surely have to improve in repeatability, recall, localization accuracy, and speed.
Since the success of RPN \cite{Ren2015NIPS}, which integrated proposal generation and detection into a common framework, CNN based detection proposal generation methods have dominated region proposal.
It is recommended that new detection proposals be assessed by their effect on final detection performance, rather than by evaluating the proposals in isolation.
\textbf{(6) Other Factors} \\
As discussed in Section~\ref{sec:otherissue}, there are many other factors affecting object detection quality: data augmentation, novel training strategies, combinations of backbone models, multiple detection frameworks, incorporating information from other related tasks, methods for reducing localization error,
handling the huge imbalance between positive and negative samples,
mining of hard negative samples, and improving loss functions.
\subsection{Research Directions}
\label{Sec:Directions}
Despite the recent tremendous progress in the field of object detection, the technology remains significantly more primitive than human vision and cannot yet satisfactorily address real-world challenges like those of Section~\ref{Sec:MainChallenges}. We see a number of long-standing challenges:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item Working in an open world: being robust to any number of environmental changes, being able to evolve or adapt.
\item Object detection under constrained conditions: learning from weakly labeled data or few bounding box annotations, wearable devices, unseen object categories etc.
\item Object detection in other modalities: video, RGBD images, 3D point clouds, lidar, remotely sensed imagery \emph{etc}.
\end{itemize}
Based on these challenges, we see the following directions of future research:
\textbf{(1) Open World Learning:} The ultimate goal is to develop object detection capable of accurately and efficiently recognizing and localizing instances in thousands or more object categories in open-world scenes, at a level competitive with the human visual system. Object detection algorithms are unable, in general, to recognize object categories outside of their training dataset, although ideally there should be the ability to recognize novel object categories \cite{Lake2015Human,Hariharan2017Low}. Current detection datasets \cite{Everingham2010,Russakovsky2015,Lin2014} contain only a few dozen to hundreds
of categories, significantly fewer than those which can be recognized by humans. New larger-scale datasets \cite{Hoffman2014lsda,Singh2018RFCN,YOLO9000} with significantly more categories will need to be developed.
\textbf{(2) Better and More Efficient Detection Frameworks:} One of the reasons for the success in generic object detection has been the development of superior detection frameworks, both region-based (RCNN \cite{Girshick2014RCNN}, Fast RCNN \cite{Girshick2015FRCNN}, Faster RCNN \cite{Ren2015NIPS}, Mask RCNN \cite{MaskRCNN2017}) and
one-stage detectors (YOLO \cite{YoLo2016}, SSD \cite{Liu2016SSD}). Region-based detectors have higher accuracy, while one-stage detectors are generally faster and simpler.
Object detectors depend heavily on the underlying backbone networks, which have been optimized for image classification, possibly causing a learning bias; learning object detectors from scratch could be helpful for new detection frameworks.
\textbf{(3) Compact and Efficient CNN Features:} CNNs have increased remarkably in depth, from several layers (AlexNet \cite{AlexNet2012}) to hundreds
of layers (ResNet \cite{He2016ResNet}, DenseNet
\cite{Huang2016Densely}). These networks have millions to hundreds of millions of parameters, requiring massive data and GPUs for training. In order to reduce or remove network redundancy, there has been growing research interest in designing
compact and lightweight networks \cite{Chen2017Learning,Alvarez2016Learning,CondenseNet18,Howard2017MobileNets,
Lin2017Towards,Yu2017NISP} and network acceleration
\cite{Cheng2018Model,Hubara2016Binarized,Han2016Deep,Li2017Pruning,Li2017Mimicking,Wei2018Quantization}.
\textbf{(4) Automatic Neural Architecture Search:} Deep learning bypasses manual feature engineering, which requires human experts with strong domain knowledge; however, designing DCNN architectures requires similarly significant expertise. It is natural to consider the automated design of detection backbone architectures, as in the recent Automated Machine Learning (AutoML) \cite{Quanming2018Taking}, which has been applied to image classification and object detection \cite{Cai2018Path,Chen2019DetNAS,Ghiasi2019NASFPN,Liu2018Progressive,
Zoph2016Neural,Zoph2018Learning}.
\textbf{(5) Object Instance Segmentation:}
For a richer and more
detailed understanding of image content, there is a need to
tackle pixel-level object instance segmentation \cite{Lin2014,MaskRCNN2017,Ronghang2018}, which can play an important role in potential
applications that require the precise boundaries of individual objects.
\textbf{(6) Weakly Supervised Detection:}
Current state-of-the-art detectors employ fully supervised models learned from labeled data with object bounding boxes or segmentation masks
\cite{Everingham2015,Lin2014,Russakovsky2015}. However,
fully supervised learning has serious limitations, particularly where the collection of bounding box annotations is labor intensive and where the number of images is large. Fully supervised learning
is not scalable in the absence of fully labeled training data, so it is essential
to understand how the power of CNNs can be leveraged where only weakly / partially annotated data are provided
\cite{Bilen2016Weakly,Diba2017Weakly,Shi2017PAMI}.
\textbf{(7) Few / Zero Shot Object Detection:}
The success of deep detectors relies heavily on gargantuan
amounts of annotated training data. When the labeled data
are scarce, the performance of deep detectors frequently deteriorates, and they fail to
generalize well. In contrast, humans (even children) can learn a visual concept quickly from very few given examples and can often generalize well \cite{Biederman1987Recognition,Lake2015Human,Fei2006One}. Therefore, the ability to learn from only few examples, \emph{few} shot detection, is very appealing \cite{Chen2018LSTD,Dong2018Few,Finn2017Model, Kang2018Few,Lake2015Human,Ren2018Meta, Schwartz2019RepMet}.
Even more constrained, \emph{zero} shot object detection localizes and recognizes object classes that have never been seen\footnote{Although side information may be provided, such as a wikipedia page or an attributes vector.} before \cite{Bansa2018Zero,Demirel2018Zero,Rahman2018Zero,Rahman2018Polarity},
essential for life-long learning machines that need to intelligently and incrementally discover new
object categories.
\textbf{(8) Object Detection in Other Modalities:}
Most detectors are based on still 2D images; object detection in other modalities can be highly relevant in domains such as autonomous vehicles,
unmanned aerial vehicles, and robotics. These modalities raise new challenges in effectively
using depth \cite{Chen20153D,Pepik2015GCPR,Xiang2014Beyond,Wu20153D}, video \cite{Feichtenhofer17Detect,Kang2016Object}, and point clouds \cite{Qi2017PointNet,Qi2018Frustum}.
\textbf{(9) Universal Object Detection:} Recently, there has been increasing effort in learning \emph{universal representations}, those which are effective in multiple image domains, such as natural images, videos, aerial images, and medical CT images \cite{Rebuffi2017Learning,Rebuffi2018Efficient}. Most such research focuses on image classification, rarely targeting
object detection \cite{Wang2019Towards}, and the detectors developed are usually domain specific. Object detection independent of image domain and cross-domain object detection represent important future directions.
The research field of generic object detection is still far from complete. However, given the breakthroughs over the past five years, we are optimistic about future developments and opportunities.
\begin{table*}[!t]
\begin{sideways}
\begin{minipage}{\textheight}
\centering
\caption {Summary of properties and performance of milestone detection frameworks for generic object detection. See Section~\ref{Sec:Frameworks} for a detailed discussion. Some architectures are illustrated in Fig.~\ref{Fig:RegionVsUnified}. The properties of the backbone DCNNs can be found in Table~\ref{Tab:dcnnarchitectures}. }\label{Tab:Detectors}
\renewcommand{\arraystretch}{1}
\setlength\arrayrulewidth{0.2mm}
\setlength\tabcolsep{1pt}
\resizebox*{!}{16cm}{
\begin{tabular}{!{\vrule width1.5bp}c|c|c|c|c|c|c|c|c|c|p{10cm}!{\vrule width1.5bp}}
\Xhline{1.5pt}
&\scriptsize \shortstack [c] { Detector\\Name} & \scriptsize RP & \scriptsize \shortstack [c] { Backbone\\DCNN} & \scriptsize \shortstack [c] {Input \\ ImgSize} & \scriptsize \shortstack [c] {VOC07 \\Results}& \scriptsize \shortstack [c] { VOC12\\Results }& \scriptsize \shortstack [c] { Speed \\(FPS) }
& \scriptsize \shortstack [c] {Published \\ In}& \scriptsize \shortstack [c] { Source\\Code } & \scriptsize Highlights and Disadvantages \\
\Xhline{1.5pt}
\multirow{7}*{\hfil \rotatebox{90}{\footnotesize \textbf{Region based} (Section \ref{Sec:RegionBased})$\quad\quad\quad\quad\quad\quad\quad\quad$ }}
&\raisebox{-4.3ex}[0pt]{ \scriptsize RCNN \cite{Girshick2014RCNN} }&\raisebox{-4.3ex}[0pt]{\scriptsize SS} & \raisebox{-4.3ex}[0pt]{\scriptsize AlexNet }
& \raisebox{-4.3ex}[0pt]{\scriptsize Fixed } &\raisebox{-5.3ex}[0pt]{ \scriptsize \shortstack [c] { $58.5$ \\ (07)}} & \scriptsize \raisebox{-6.3ex}[0pt]{ \scriptsize \shortstack [c] { $53.3$ \\ (12)}} &\raisebox{-4.3ex}[0pt]{ \scriptsize $<0.1$} &\raisebox{-4.3ex}[0pt]{ \scriptsize CVPR14}
&\raisebox{-4.3ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Matlab}} & \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} First to integrate CNN with RP methods; Dramatic performance improvement over previous state of the art.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Multistage pipeline of sequentially trained components (external RP computation, CNN fine-tuning, passing each warped RP through the CNN, SVM and BBR training);
Training is expensive in space and time; Testing is slow. \\
\cline{2-11}
&\raisebox{-5ex}[0pt]{ \scriptsize SPPNet \cite{He2014SPP} }&\raisebox{-5ex}[0pt]{\scriptsize SS }&\raisebox{-5ex}[0pt]{ \scriptsize ZFNet } &\raisebox{-5ex}[0pt]{ \scriptsize Arbitrary } & \scriptsize \raisebox{-7ex}[0pt]{ \scriptsize \shortstack [c] { $60.9$ \\ (07)}} & \scriptsize \raisebox{-7ex}[0pt]{ \scriptsize $-$ }&\raisebox{-5ex}[0pt]{ \scriptsize $<1$}&\raisebox{-5ex}[0pt]{ \scriptsize ECCV14}
&\raisebox{-5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Matlab}} & \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} First to introduce SPP into CNN architecture; Enable convolutional feature sharing;
Accelerate RCNN evaluation by orders of magnitude without sacrificing performance;
Faster than OverFeat.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Inherit disadvantages of RCNN; Does not result in much training speedup;
Fine-tuning not able to update the CONV layers before SPP layer. \\
\cline{2-11}
& \raisebox{-4.5ex}[0pt]{\scriptsize Fast RCNN \cite{Girshick2015FRCNN}} & \raisebox{-4.5ex}[0pt]{\scriptsize SS} & \raisebox{-7ex}[0pt]{ \scriptsize \shortstack [c] { AlexNet\\VGGM\\VGG16 }}&\raisebox{-4.5ex}[0pt]{ \scriptsize Arbitrary }
& \raisebox{-7ex}[0pt]{ \scriptsize \shortstack [c] { $70.0$ \\ (VGG)\\(07+12)}} &\raisebox{-7ex}[0pt]{ \scriptsize \shortstack [c] { $68.4$ \\ (VGG)\\(07++12)}}& \raisebox{-4.5ex}[0pt]{\scriptsize $<1$ }& \raisebox{-4.5ex}[0pt]{\scriptsize ICCV15 }
&\raisebox{-4.5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Python}} & \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} First to enable end-to-end detector training (ignoring RP generation);
Design a RoI pooling layer; Much faster and more accurate than SPPNet; No disk storage required for feature caching.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} External RP computation is exposed as the new bottleneck; Still too slow for real time applications. \\
\cline{2-11}
&\raisebox{-6.5ex}[0pt]{ \scriptsize Faster RCNN \cite{Ren2015NIPS} }&\raisebox{-6.5ex}[0pt]{\scriptsize RPN } & \raisebox{-7ex}[0pt]{ \scriptsize \shortstack [c] { ZFnet\\VGG }}&\raisebox{-6.5ex}[0pt]{ \scriptsize Arbitrary }
& \raisebox{-9ex}[0pt]{ \scriptsize \shortstack [c] { $73.2$ \\ (VGG)\\(07+12)}} &\raisebox{-9ex}[0pt]{ \scriptsize \shortstack [c] { $70.4$ \\ (VGG)\\(07++12)}}& \raisebox{-6.5ex}[0pt]{\scriptsize $<5$}& \raisebox{-6.5ex}[0pt]{\scriptsize NIPS15}
&\raisebox{-6.5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Matlab\\Python}}& \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} Propose RPN for generating nearly cost-free and high quality RPs instead of selective search;
Introduce translation invariant and multiscale anchor boxes as references in RPN;
Unify RPN and Fast RCNN into a single network by sharing CONV layers;
An order of magnitude faster than Fast RCNN without performance loss;
Can run testing at 5 FPS with VGG16.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Training is complex, not a streamlined process;
Still falls short of real time. \\
\cline{2-11}
& \raisebox{-2ex}[0pt]{\scriptsize RCNN$\ominus$R \cite{Lenc2015} }& \raisebox{-2ex}[0pt]{\scriptsize New} & \raisebox{-4ex}[0pt]{ \scriptsize \shortstack [c] { ZFNet\\+SPP}}&\raisebox{-2ex}[0pt]{ \scriptsize Arbitrary }
& \raisebox{-4ex}[0pt]{ \scriptsize \shortstack [c] { $59.7$ \\(07)}} & \raisebox{-2ex}[0pt]{ \scriptsize $-$ }& \raisebox{-2ex}[0pt]{ \scriptsize $<5$ }& \raisebox{-2ex}[0pt]{ \scriptsize BMVC15}
&\raisebox{-2ex}[0pt]{\scriptsize $-$} & \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} Replace selective search with static RPs;
Prove the possibility of building integrated, simpler and faster detectors that rely exclusively on CNN.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Falls short of real time; Decreased accuracy from poor RPs. \\
\cline{2-11}
&\raisebox{-3.5ex}[0pt]{ \scriptsize RFCN \cite{Dai2016RFCN} }&\raisebox{-3.5ex}[0pt]{ \scriptsize RPN} & \raisebox{-3.5ex}[0pt]{\scriptsize ResNet101 } &\raisebox{-3.5ex}[0pt]{ \scriptsize Arbitrary } & \raisebox{-6.5ex}[0pt]{ \scriptsize \shortstack [c] { $80.5$\\(07+12)\\$83.6$\\(07+12+CO)}} &\raisebox{-6.5ex}[0pt]{ \scriptsize \shortstack [c] { $77.6$ \\(07++12)\\$82.0$\\(07++12+CO)}}&\raisebox{-3.5ex}[0pt]{ \scriptsize $<10$}&\raisebox{-3.5ex}[0pt]{ \scriptsize NIPS16}
&\raisebox{-3.5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Matlab}}& \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} Fully convolutional detection network;
Design a set of position sensitive score maps using a bank of specialized CONV layers;
Faster than Faster RCNN without sacrificing much accuracy.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Training is not a streamlined process;
Still falls short of real time. \\
\cline{2-11}
&\raisebox{-4.5ex}[0pt]{ \scriptsize Mask RCNN \cite{MaskRCNN2017}} & \raisebox{-4.5ex}[0pt]{\scriptsize RPN }& \raisebox{-5.5ex}[0pt]{\scriptsize \shortstack [c] { ResNet101 \\ ResNeXt101 } } &\raisebox{-4.5ex}[0pt]{ \scriptsize Arbitrary }& \multicolumn{2}{|c|}{\raisebox{-7ex}[0pt]{\scriptsize \shortstack [c] { $50.3$\\(ResNeXt101)\\ (COCO Result)}}} & \raisebox{-4.5ex}[0pt]{\scriptsize $<5$} & \raisebox{-4.5ex}[0pt]{\scriptsize ICCV17 } &\raisebox{-4.5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Matlab\\Python}}& \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} A simple, flexible, and effective framework for object instance segmentation; Extends Faster RCNN by adding another branch for predicting an object mask in parallel with the existing branch for BB prediction; Feature Pyramid Network (FPN) is utilized; Outstanding performance.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Falls short of real time applications. \\
\Xhline{1.5pt}
\multirow{4}{*}{\rotatebox{90}{\footnotesize \textbf{Unified } (Section \ref{Sec:Unified}) $\quad\quad\quad\quad\quad\quad$ }} & \raisebox{-5.5ex}[0pt]{\scriptsize OverFeat \cite{OverFeat2014} } & \raisebox{-5.5ex}[0pt]{ \scriptsize $-$}
& \raisebox{-5.5ex}[0pt]{\scriptsize AlexNet like }&\raisebox{-5.5ex}[0pt]{ \scriptsize Arbitrary }&\raisebox{-5.5ex}[0pt]{$-$}&\raisebox{-5.5ex}[0pt]{$-$}& \raisebox{-5.5ex}[0pt]{\scriptsize $<0.1$ }& \raisebox{-5.5ex}[0pt]{\scriptsize ICLR14 } &\raisebox{-5.5ex}[0pt]{\scriptsize c++}& \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} Convolutional feature sharing;
Multiscale image pyramid CNN feature extraction; Won the ILSVRC2013 localization competition;
Significantly faster than RCNN.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Multi-stage pipeline sequentially trained;
Single bounding box regressor; Cannot handle multiple object instances of the same class;
Too slow for real time applications. \\
\cline{2-11}
& \raisebox{-4.5ex}[0pt]{\scriptsize YOLO \cite{YoLo2016} } & \raisebox{-4.5ex}[0pt]{\scriptsize $-$ }& \raisebox{-4.5ex}[0pt]{ \scriptsize \shortstack [c] { GoogLeNet\\like }}
&\raisebox{-4.5ex}[0pt]{ \scriptsize Fixed }& \raisebox{-5ex}[0pt]{ \scriptsize \shortstack [c] { $66.4$ \\(07+12)}} &\raisebox{-5ex}[0pt]{ \scriptsize \shortstack [c] { $57.9$ \\(07++12)}} & \raisebox{-4.5ex}[0pt]{\scriptsize \shortstack [c] { $<25$\\(VGG)}} & \raisebox{-4.5ex}[0pt]{\scriptsize CVPR16} &\raisebox{-4.5ex}[0pt]{\scriptsize DarkNet}
& \scriptsize \textcolor{DarkGreen}{\textbf{Highlights:}} First efficient unified detector;
Drop RP process completely; Elegant and efficient detection framework; Significantly faster than previous detectors; YOLO runs at 45 FPS, Fast YOLO at 155 FPS;
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Accuracy falls far behind state of the art detectors; Struggle to localize small objects. \\
\cline{2-11}
& \raisebox{-3.5ex}[0pt]{ \scriptsize YOLOv2\cite{YOLO9000}} & \raisebox{-3.5ex}[0pt]{\scriptsize $-$ } & \raisebox{-3.5ex}[0pt]{\scriptsize DarkNet} &\raisebox{-3.5ex}[0pt]{ \scriptsize Fixed } & \raisebox{-4ex}[0pt]{ \scriptsize \shortstack [c] { $78.6$ \\(07+12)}} &\raisebox{-4ex}[0pt]{ \scriptsize \shortstack [c] { $73.5$ \\(07++12)}} & \scriptsize $<50$ & \raisebox{-3.5ex}[0pt]{ \scriptsize CVPR17} &\raisebox{-3.5ex}[0pt]{\scriptsize DarkNet} & \scriptsize
\textcolor{DarkGreen}{\textbf{Highlights:}} Propose a faster DarkNet19; Use a number of existing strategies to improve both speed and accuracy; Achieve high accuracy and high speed; YOLO9000 can detect over 9000 object categories in real time.
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Not good at detecting small objects. \\
\cline{2-11}
& \raisebox{-3.5ex}[0pt]{\scriptsize SSD \cite{Liu2016SSD} }& \raisebox{-3.5ex}[0pt]{ \scriptsize $-$ } & \raisebox{-3.5ex}[0pt]{ \scriptsize VGG16} &\raisebox{-3.5ex}[0pt]{ \scriptsize Fixed } & \raisebox{-6.5ex}[0pt]{ \scriptsize \shortstack [c] { $76.8$\\(07+12)\\$81.5$\\(07+12+CO)}} &\raisebox{-6.5ex}[0pt]{ \scriptsize \shortstack [c] { $74.9$ \\(07++12)\\$80.0$\\(07++12+CO)}}& \raisebox{-3.5ex}[0pt]{ \scriptsize $<60$}& \raisebox{-3.5ex}[0pt]{ \scriptsize ECCV16} &\raisebox{-3.5ex}[0pt]{\scriptsize \shortstack [c] {Caffe \\ Python}} & \scriptsize
\textcolor{DarkGreen}{\textbf{Highlights:}} First accurate and efficient unified detector;
Effectively combine ideas from RPN and YOLO to perform detection at multi-scale CONV layers; Faster and significantly more accurate than YOLO; Can run at 59 FPS;
\par \textcolor{DarkRed}{\textbf{Disadvantages:}} Not good at detecting small objects. \\
\Xhline{1.5pt}
\multicolumn{9}{c}{$\quad$}\\
\end{tabular}
}
\par
\raggedright \small{\emph{Abbreviations in this table: Region Proposal (RP), Selective Search (SS), Region Proposal Network (RPN), RCNN$\ominus$R represents ``RCNN minus R'' and used a trivial RP method. Training data: ``07''$\leftarrow$VOC2007 trainval; ``07T''$\leftarrow$VOC2007 trainval and test; ``12''$\leftarrow$VOC2012 trainval; ``CO''$\leftarrow$COCO trainval. The ``Speed'' column roughly estimates the detection speed with a single Nvidia Titan X GPU.}}
\end{minipage}
\end{sideways}
\end{table*}
\section{Acknowledgments}
The authors would like to thank the pioneering researchers in
generic object detection and other related fields. The authors would also like to express their sincere appreciation to Professor Ji\v{r}\'{\i} Matas, the associate editor
and the anonymous reviewers for their comments and suggestions. This work has been supported by the Center for Machine Vision and Signal Analysis at the University of Oulu (Finland) and the National Natural Science Foundation of China under Grant 61872379.
\bibliographystyle{spbasic}
\footnotesize
\section*{Introduction}
For a compact complex space $X$, Kobayashi hyperbolicity is equivalent to the fact that every holomorphic map $\mathbb{C}\to X$ is constant, thanks to a classical result of Brody. When $X$ is moreover projective (or, more generally, compact K\"ahler), hyperbolicity is further expected to be completely characterized by (algebraic) positivity properties of $X$ and of its subvarieties. More precisely, we have the following conjecture, due to S.~Lang.
\begin{conj}\cite[Conjecture 5.6]{Lan86}
A projective variety $X$ is hyperbolic if and only if every subvariety (including $X$ itself) is of general type.
\end{conj}
Recall that a projective variety $X$ is of general type if the canonical bundle of any smooth projective birational model of $X$ is big, \textsl{i.e.}~has maximal Kodaira dimension. This is for instance the case when $X$ is smooth and \emph{canonically polarized}, \textsl{i.e.} with an ample canonical bundle $K_X$.
Note that Lang's conjecture in fact implies that every smooth hyperbolic projective manifold $X$ is canonically polarized, as conjectured in 1970 by S.~Kobayashi. It is indeed a well-known consequence of the Minimal Model Program that any projective manifold of general type without rational curves is canonically polarized (see for instance~\cite[Theorem A]{BBP}).
Besides the trivial case of curves and partial results for surfaces~\cite{MM83,DES79,GG80,McQ98}, Lang's conjecture is still almost completely open in higher dimension as of this writing. General projective hypersurfaces of high degree in projective space form a remarkable exception: they are known to be hyperbolic~\cite{Bro17} (see also~\cite{McQ99,DEG00,DT10,Siu04,Siu15,RY18}), and they satisfy Lang's conjecture~\cite{Cle86,Ein88,Xu94,Voi96,Pac04}.
\medskip
It is natural to test Lang's conjecture for the following two basic classes of manifolds, known to be hyperbolic since the very beginning of the theory:
\begin{itemize}
\item[(N)] compact K\"ahler manifolds $X$ with negative holomorphic sectional curvature;
\item[(B)] compact, free quotients $X$ of bounded domains $\Omega\Subset\mathbb{C}^n$.
\end{itemize}
In case (N), ampleness of $K_X$ was established in~\cite{WY16a,WY16b,TY17} (see also~\cite{DT16}). By curvature monotonicity, this implies that every smooth subvariety of $X$ also has ample canonical bundle. More generally, Guenancia recently showed~\cite{Gue18} that each (possibly singular) subvariety of $X$ is of general type, thereby verifying Lang's conjecture in that case. One might even more generally consider the case where $X$ carries an arbitrary Hermitian metric of negative holomorphic sectional curvature, which seems to be still open.
\medskip
In this note, we confirm Lang's conjecture in case (B). While the case of quotients of bounded \emph{symmetric} domains has been widely studied (see, just to cite a few,~\cite{Nad89,BKT13,Bru16,Cad16,Rou16,RT18}), the general case seems to have somehow passed unnoticed. Instead of bounded domains, we consider more generally the following class of manifolds, which comprises relatively compact domains in Stein manifolds, and has the virtue of being stable under passing to an \'etale cover or a submanifold.
\begin{defiint} We say that a complex manifold $M$ is \emph{of bounded type} if it carries a bounded, strictly plurisubharmonic function $\varphi$.
\end{defiint}
By a well-known result of Richberg, any \emph{continuous} bounded strictly psh function on a complex manifold $M$ can be written as a decreasing limit of smooth strictly psh functions, but this fails in general for discontinuous functions~\cite[p.66]{For}, and it is thus unclear to us whether every manifold of bounded type should carry also a \emph{smooth} bounded strictly psh function.
\begin{thmA}\label{thm:main}
Let $X$ be a compact K\"ahler manifold admitting an \'etale (Galois) cover $\tilde X\to X$ of bounded type. Then:
\begin{itemize}
\item[(i)] $X$ is Kobayashi hyperbolic;
\item[(ii)] $X$ has large fundamental group;
\item[(iii)] $X$ is projective and canonically polarized;
\item[(iv)] every subvariety of $X$ is of general type.
\end{itemize}
\end{thmA}
Note that $\tilde X$ can always be replaced with the universal cover of $X$, and hence can be assumed to be Galois.
\medskip
By~\cite[3.2.8]{Kob98}, (i) holds iff $\tilde X$ is hyperbolic, which follows from the fact that manifolds of bounded type are Kobayashi hyperbolic~\cite[Theorem 3]{Sib81}. Alternatively, any entire curve $f:\mathbb{C}\to X$ lifts to $\tilde X$, and the pull-back to $\mathbb{C}$ of the bounded, strictly psh function carried by $\tilde X$ has to be constant, showing that $f$ itself is constant.
\smallskip
By definition, (ii) means that the image in $\pi_1(X)$ of the fundamental group of any subvariety $Z\subseteq X$ is infinite~\cite[\S 4.1]{Kol}, and is a direct consequence of the fact that manifolds of bounded type do not contain nontrivial compact subvarieties.
According to the Shafarevich conjecture, $\tilde X$ should in fact be Stein; in case $\tilde X$ is a bounded domain of $\mathbb{C}^n$, this is indeed a classical result of Siegel~\cite{Sie50} (see also ~\cite[Theorem 6.2]{Kob59}).
\smallskip
By another classical result, this time due to Kodaira~\cite{Kod}, any compact complex manifold $X$ admitting a Galois \'etale cover $\tilde X\to X$ biholomorphic to a bounded domain in $\mathbb{C}^n$ is projective, with $K_X$ ample. Indeed, the Bergman metric of $\tilde X$ is non-degenerate, and it descends to a positively curved metric on $K_X$. Our proof of (iii) and (iv) is a simple variant of this idea, inspired by~\cite{CZ02}. For each subvariety $Y\subseteq X$ with desingularization $Z\to Y$ and induced Galois \'etale cover $\tilde Z\to Z$, we use basic H\"ormander--Andreotti--Vesentini--Demailly $L^2$-estimates for $\overline{\partial}$ to show that the Bergman metric of $\tilde Z$ is generically non-degenerate. It then descends to a psh metric on $K_Z$, smooth and strictly psh on a nonempty Zariski open set, which is enough to conclude that $K_Z$ is big, by~\cite{Bou02}.
\medskip
As a final comment, note that K\"ahler hyperbolic manifolds, \textsl{i.e.} compact K\"ahler manifolds $X$ carrying a K\"ahler metric $\omega$ whose pull-back to the universal cover $\pi:\tilde X\to X$ satisfies $\pi^*\omega=d\a$ with $\a$ bounded, also satisfy (i)--(iii) in Theorem A \cite{Gro}. It would be interesting to check Lang's conjecture for such manifolds as well.
\begin{ackn} This work was started during the first-named author's stay at SAPIENZA Universit\`a di Roma. He is very grateful to the mathematics department for its hospitality, and to INdAM for financial support.
Both authors would also like to thank Stefano Trapani for helpful discussions, in particular for pointing out the reference~\cite{For}.
\end{ackn}
\section{The Bergman metric and manifolds of general type}
\subsection{Non-degeneration of the Bergman metric}
Recall that the \emph{Bergman space} of a complex manifold $M$ is the separable Hilbert space $\mathcal{H}=\mathcal{H}(M)$ of holomorphic forms $\eta\in H^0(M,K_M)$ such that
$$
\|\eta\|_\mathcal{H}^2:=i^{n^2}\int_{M}\eta\wedge\bar\eta<\infty,
$$
with $n=\dim M$. Assuming $\mathcal{H}\ne\{0\}$, we get an induced (possibly singular) psh metric $h_M$ on $K_M$, invariant under $\Aut(M)$, characterized pointwise by
$$
h/h_M=\sup_{\eta\in\mathcal{H}\setminus\{0\}}\frac{|\eta|^2_h}{\|\eta\|_\mathcal{H}^2}=\sum_j |\eta_j|^2_h,
$$
for any choice of smooth metric $h$ on $K_M$ and orthonormal basis $(\eta_j)$ for $\mathcal{H}$ (see for instance~\cite[\S 4.10]{Kob98}).
The curvature current of $h_M$ is classically called the \lq\lq Bergman metric\rq\rq{} of $M$; it is a \textsl{bona fide} K\"ahler form precisely on the Zariski open subset of $M$ consisting of points at which $\mathcal{H}$ generates $1$-jets~\cite[Proposition 4.10.11]{Kob98}.
\begin{defi} We shall say that a complex manifold $M$ has a \emph{non-degenerate (resp.~generically non-degenerate) Bergman metric} if its Bergman space $\mathcal{H}$ generates $1$-jets at each (resp.~some) point of $M$.
\end{defi}
We next recall the following standard consequence of $L^2$-estimates for $\overline{\partial}$.
\begin{lem}\label{lem:1jet} Let $M$ be a complete K\"ahler manifold with a bounded psh function $\varphi$. If $\varphi$ is strictly psh on $M$ (resp.~at some point of $M$), then the Bergman metric of $M$ is non-degenerate (resp.~generically non-degenerate).
\end{lem}
\begin{proof} Pick a complete K\"ahler metric $\omega$ on $M$. Assume $\varphi$ strictly psh at $p\in M$, and fix a coordinate ball $(U,z)$ centered at $p$ with $\varphi$ strictly psh near $\overline U$. Pick also $\chi\in C^\infty_c(U)$ with $\chi\equiv 1$ near $p$. Since $\chi\log|z|$ is strictly psh in an open neighbourhood $V$ of $p$, smooth on $U\setminus\overline V$, and compactly supported in $U$, we can then choose $A\gg 1$ such that
$$
\psi:=(n+1)\chi\log|z|+A\varphi
$$
is psh on $M$, with $dd^c\psi\ge\omega$ on $U$. Note that $\psi$ is also bounded above on $M$, $\varphi$ being assumed to be bounded.
For an appropriate choice of holomorphic function $f$ on $U$, the smooth $(n,0)$-form $\eta:=\chi f\,dz_1\wedge\dots\wedge dz_n$, which is compactly supported in $U$ and holomorphic in a neighborhood of $p$, will have any prescribed jet at $p$. The $(n,1)$-form $\bar\partial\eta$ is compactly supported in $U$, and identically zero in a neighborhood of $p$, so that $|\overline{\partial}\eta|_\omega e^{-\psi}\in L^2(U)$. Since $dd^c\psi\ge\omega$ on $U$,~\cite[Th\'eor\`eme 5.1]{Dem82} yields an $L^2_{\mathrm{loc}}$ $(n,0)$-form $u$ on $M$ such that $\overline{\partial} u=\overline{\partial}\eta$ and
\begin{equation}\label{equ:l2}
i^{n^2} \int_M u\wedge\bar u\,e^{-2\psi}\le\int_U|\overline{\partial}\eta|^2_\omega e^{-2\psi}dV_\omega.
\end{equation}
As a result, $v:=\eta-u$ is a holomorphic $n$-form on $M$. Since $u=\eta-v$ is holomorphic near $p$ and $\psi$ has an isolated singularity of type $(n+1)\log|z|$ at $p$, (\ref{equ:l2}) forces $u$ to vanish to order $2$ at $p$, so that $v$ and $\eta$ have the same $1$-jet at $p$. Finally, (\ref{equ:l2}) and the fact that $\psi$ is bounded above on $M$ show that $u$ is $L^2$. Since $\eta$ is clearly $L^2$ as well, $v$ belongs to the Bergman space $\mathcal{H}$, with the given $1$-jet at $p$, and we are done.
\end{proof}
\subsection{Manifolds of general type}
Let $X$ be a compact complex manifold, $\tilde X\to X$ a Galois \'etale cover, and assume that the Bergman metric of $\tilde X$ is non-degenerate, so that the canonical metric $h_{\tilde X}$ on $K_{\tilde X}$ defined by $\mathcal{H}(\tilde X)$ is smooth, strictly psh. Being invariant under automorphisms, this metric descends to a smooth, strictly psh metric on $K_X$, and the latter is thus ample by \cite{Kod}. This argument, which goes back to the same paper by Kodaira, admits the following variant.
\begin{lem}\label{lem:big} Let $X$ be a compact K\"ahler manifold admitting a Galois \'etale cover $\tilde X\to X$ with generically non-degenerate Bergman metric. Then $X$ is projective and of general type.
\end{lem}
\begin{proof} The assumption now means that the psh metric $h_{\tilde X}$ on $K_{\tilde X}$ is smooth and strictly psh on a non-empty Zariski open subset. It descends again to a psh metric on $K_X$, smooth and strictly psh on a non-empty Zariski open subset, and we conclude that $K_X$ is big by~\cite[\S 2.3]{Bou02} (see also~\cite[\S 1.5]{BEGZ10}). Being both Moishezon and K\"ahler, $X$ is then projective.
\end{proof}
\section{Proof of Theorem A}
Let $X$ be a compact K\"ahler manifold with an \'etale cover $\pi:\tilde X\to X$ of bounded type, which may be assumed to be Galois after replacing $\tilde X$ by the universal cover of $X$. Since $\tilde X$ is also complete K\"ahler, its Bergman metric is non-degenerate by Lemma~\ref{lem:1jet}, and $X$ is thus projective and canonically polarized by \cite{Kod}.
Now let $Y\subseteq X$ be an irreducible subvariety. On the one hand, pick any connected component $\tilde Y$ of the preimage $\pi^{-1}(Y)\subset\tilde X$, so that $\pi$ induces a Galois \'etale cover $\pi|_{\tilde Y}\colon\tilde Y\to Y$. On the other hand, let $\mu\colon Z\to Y$ be a projective modification with $Z$ smooth and $\mu$ isomorphic over $Y_{\reg}$, whose existence is guaranteed by Hironaka. Since $Y$ is K\"ahler and $\mu$ is projective, $Z$ is then a compact K\"ahler manifold. The fiber product $\tilde Z=Z\times_{Y }\tilde Y$ sits in the following diagram
$$
\xymatrix{
\tilde Z \ar[dr]^{\tilde\mu}\ar[dd]_\nu & &\\
& \tilde Y \ar@{^{(}->}[r] \ar[dd]^{\pi|_{\tilde Y}}& \tilde X\ar[dd]^\pi \\
Z \ar[dr]_\mu & &\\
& Y \ar@{^{(}->}[r] & X.
}
$$
Being a base change of a Galois \'etale cover, $\nu$ is a Galois \'etale cover, and $\tilde\mu$ is a resolution of singularities of $\tilde Y$. Since $\pi$ is \'etale, we have $\tilde Y_{\reg}=\pi^{-1}(Y_{\reg})$, and $\tilde\mu$ is an isomorphism over $\tilde Y_{\reg}$. The pull-back of $\varphi$ to $\tilde Z$ is thus a bounded psh function, strictly psh at any point $p\in\tilde\mu^{-1}(\tilde Y_{\reg})$. Since $Z$ is compact K\"ahler, $\tilde Z$ is complete K\"ahler. By Lemma~\ref{lem:1jet}, the Bergman metric of $\tilde Z$ is generically non-degenerate, and $Z$ is thus of general type, by Lemma~\ref{lem:big}.
\section{Introduction}
\label{sec:intro}
In the field of natural language processing (NLP), one of the most prevalent neural approaches to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner.
Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. language modeling \citep{mikolov2010rnnlm,graves2013generating}, machine translation \citep{cho2014nmt,sutskever2014sequence,bahdanau2015nmt}, text classification \citep{zhou2015c,tang2015document}, and parsing \citep{kiperwasser2016parsing,dyer2016rnng}.
Among several variants of the original RNN \citep{elman1990finding},
gated recurrent architectures such as long short-term memory (LSTM) \citep{hochreiter1997long} and gated recurrent unit (GRU) \citep{cho2014nmt} have been accepted as de-facto standard choices for RNNs
due to their capability of addressing the vanishing and exploding gradient problem and considering long-term dependencies.
Gated RNNs achieve these properties by introducing additional gating units that learn to control the amount of information to be transferred or forgotten \citep{goodfellow2016deeplearningboook},
and are proven to work well without relying on complex optimization algorithms or careful initialization \citep{sutskever2013training}.
Meanwhile, the common practice for further enhancing the expressiveness of RNNs is to stack multiple RNN layers, each of which has distinct parameter sets (stacked RNN) \citep{schmidhuber1992learning,el1996hierarchical}.
In stacked RNNs, the hidden states of a layer are fed as input to the subsequent layer, and they are shown to work well due to increased depth \citep{pascanu2014construct} or their ability to capture hierarchical time series \citep{hermans2013training} which are inherent to the nature of the problem being modeled.
\begin{figure}[tb]
\centering
\subfigure{%
\includegraphics[width=0.36\textwidth]{plain_stacked_simple.pdf}
}
\quad
\subfigure{%
\includegraphics[width=0.36\textwidth]{ca_stacked_simple.pdf}
}
\caption{
Visualization of (\textit{a}) plain stacked LSTM and (\textit{b}) CAS-LSTM.
The red nodes indicate the blocks whose cell states directly affect the cell state $\mathbf{c}_t^l$.
}
\label{fig:comparison}
\end{figure}
However, this way of stacking RNNs might preclude more sophisticated structures, since the information from lower layers is simply treated as input to the next layer, rather than as another class of state that participates in core RNN computations.
Especially for gated RNNs such as LSTMs and GRUs, this means that the vertical layer-to-layer connections cannot fully benefit from the carefully constructed gating mechanism used in temporal transitions.
In this paper, we study a method of constructing multi-layer LSTMs where memory cell states from the previous layer are used in controlling the vertical information flow.
This system utilizes states from the left and the lower context equally in the computation of the new state; thus the information from lower layers is elaborately filtered and reflected through a soft gating mechanism.
Our method is easy-to-implement, effective, and can replace conventional stacked LSTMs without much modification of the overall architecture.
We call this architecture Cell-aware Stacked LSTM, or CAS-LSTM, and evaluate our method on multiple benchmark tasks: natural language inference, paraphrase identification, sentiment classification, and machine translation.
From experiments we show that the CAS-LSTMs consistently outperform typical stacked LSTMs, opening the possibility of performance improvement of architectures based on stacked LSTMs.
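Before the formal description in \S\ref{sec:caslstm}, a minimal PyTorch-style sketch of one step of such a layer may help fix the idea. It assumes all layers share one hidden size, that the lower layer's cell state enters through an additional forget gate symmetric to the temporal one, and that the two gated cell contributions are simply averaged; these are simplifying assumptions for illustration, not the exact formulation.
\begin{verbatim}
import torch
import torch.nn as nn

class CellAwareStepSketch(nn.Module):
    """One time step of an upper (l > 1) cell-aware stacked LSTM
    layer. h_below, c_below come from the lower layer at the same
    time step; h_prev, c_prev from this layer at the previous step.
    The extra gate f_below filters the lower layer's cell state, so
    the vertical connection is gated just like the temporal one."""
    def __init__(self, hidden_dim):
        super().__init__()
        # Five gates: input, temporal forget, vertical forget,
        # output, and candidate.
        self.linear = nn.Linear(2 * hidden_dim, 5 * hidden_dim)

    def forward(self, h_below, c_below, h_prev, c_prev):
        z = self.linear(torch.cat([h_below, h_prev], dim=-1))
        i, f_left, f_below, o, g = z.chunk(5, dim=-1)
        i, f_left, f_below, o = (torch.sigmoid(t)
                                 for t in (i, f_left, f_below, o))
        g = torch.tanh(g)
        # Both the temporal and the vertical cell states are gated
        # before entering the new cell state (averaged here).
        c = i * g + 0.5 * (f_left * c_prev + f_below * c_below)
        h = o * torch.tanh(c)
        return h, c
\end{verbatim}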
Our contribution is summarized as follows.
Firstly, we bring the idea of utilizing states coming from multiple directions to the construction of stacked LSTMs and apply it to the problem of sentence representation learning.
There is some prior work addressing the idea of incorporating more than one type of state \citep{graves2007mdrnn,kalchbrenner2016grid,zhang2016highway}; however, to the best of our knowledge, there is little work on applying the idea to modeling sentences for better understanding of natural language text.
Secondly, we conduct extensive evaluation of the proposed method and empirically prove its effectiveness.
The CAS-LSTM architecture provides consistent performance gains over the stacked LSTM in all benchmark tasks: natural language inference, paraphrase identification, sentiment classification, and machine translation.
Especially in SNLI, SST-2, and Quora Question Pairs datasets, our models outperform or at least are on par with the state-of-the-art models.
We also conduct thorough qualitative analysis to understand the dynamics of the suggested approach.
This paper is organized in the following way.
We study prior work related to our objective in \S\ref{sec:related}, and
\S\ref{sec:caslstm} gives a detailed description about the proposed method.
Experimental results are given in \S\ref{sec:experiments}, and
\S\ref{sec:conclusion} concludes this paper.
\section{Related Work}
\label{sec:related}
In this section, we summarize prior work related to the proposed method.
We group the previous work that motivated ours into three classes: i) methods enhancing interaction between vertical layers, ii) RNN architectures that accept latticed data, and iii) tree-structured RNNs.
\paragraph{Stacked RNNs.}
There is some prior work on methods of stacking RNNs beyond the plain stacked RNNs \citep{schmidhuber1992learning,el1996hierarchical}.
Residual LSTMs \citep{kim2017residual,tran2017stack} add residual connections between the hidden states computed at each LSTM layer, and shortcut-stacked LSTMs \citep{nie2017shortcut} concatenate hidden states from all previous layers to make the backpropagation path short.
In our method, the lower context is aggregated via a gating mechanism, and we believe it modulates the amount of information to be transmitted in a more efficient and effective way than vector addition or concatenation.
Also, compared to concatenation, our method does not significantly increase the number of parameters.\footnote{The $l$-th layer of a typical stacked LSTM requires $(d_{l-1} + d_l + 1) \times 4d_l$ parameters, and the $l$-th layer of a shortcut-stacked LSTM requires $(\sum_{k=0}^{l-1} {d_k} + d_l + 1) \times 4d_l$ parameters. CAS-LSTM uses $(d_{l-1} + d_l + 1) \times 5d_l$ parameters at the $l$-th ($l>1$) layer.}
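For instance, with $d_{l-1} = d_l = 300$, a stacked LSTM layer uses $601 \times 1200 \approx 0.72$M parameters, while a CAS-LSTM layer ($l>1$) uses $601 \times 1500 \approx 0.90$M.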
Highway LSTMs \citep{zhang2016highway} and depth-gated LSTMs \citep{yao2015depth} are similar to our proposed models in that they use cell states from the previous layer, and they are successfully applied to the field of automatic speech recognition and language modeling.
However, in contrast to CAS-LSTM, where the additional forget gate aggregates the previous layer's states so that the contexts from the left and below participate in the computation equitably, Highway LSTMs and depth-gated LSTMs do not consider the states from the previous time step when computing vertical gates.
The comparison of our method and this architecture is presented in \S\ref{exp:variations}.
\paragraph{Multidimensional RNNs.}
There is another line of research that aims to extend RNNs to operate with multidimensional inputs.
Grid LSTMs \citep{kalchbrenner2016grid} are a general $n$-dimensional LSTM architecture that accepts $n$ sets of hidden and cell states as input and yields $n$ sets of states as output, in contrast to our architecture, which emits a single set of states.
In their work, the authors utilize 2D and 3D Grid LSTMs in character-level language modeling and machine translation respectively and achieve performance improvement.
Multidimensional RNNs \citep{graves2007mdrnn,graves2009offline} have a formulation similar to ours, except that they reflect cell states via simple summation and the weights for all columns (vertical layers in our case) are tied.
However, they have been employed only to model multidimensional data such as images of handwritten text, rather than to stack RNN layers for modeling sequential data.
From this view, CAS-LSTM could be interpreted as an extension of two-dimensional LSTM architecture that accepts a 2D input $\{\mathbf{h}_t^l\}_{t=1,l=0}^{T,L}$ where $\mathbf{h}_t^l$ represents the hidden state at time $t$ and layer $l$.
\paragraph{Tree-structured RNNs.}
The idea of having multiple states is also related to tree-structured RNNs \citep{goller1996learning,socher2011parsing}.
Among them, tree-structured LSTMs (tree-LSTMs) \citep{tai2015treelstm,zhu2015treelstm,le2015treelstm} are similar to ours in that they use both hidden and cell states of children nodes.
In tree-LSTMs, states of children nodes are regarded as input, and they participate in computing the states of a parent node equally through weight-shared or weight-unshared projection.
From this perspective, each CAS-LSTM layer can be seen as a binary tree-LSTM where the structures it operates on are fixed to right-branching trees.
Indeed, our work is motivated by recent analyses \citep{williams2018do,shi2018ontree} of latent tree learning models \citep{yogatama2017learning,choi2018learning}, which have shown that tree-LSTM models outperform sequential LSTM models even when the induced parsing strategy generates strictly left- or right-branching parses, in which case a tree-LSTM reads words in a manner identical to a sequential LSTM.
We argue that the active use of the cell state in computation could be one reason for these counter-intuitive results, and we empirically support this hypothesis in this work.
\section{Model Description}
\label{sec:caslstm}
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{ca_block.pdf}
\caption{Schematic diagram of a CAS-LSTM block.}
\label{fig:diagram}
\end{figure}
In this section, we give the detailed formulation of architectures used in experiments.
\subsection{Stacked LSTMs}
\label{ssec:stacked-lstm}
While there exist various versions of LSTM formulation, in this work we use the following, the most common variant:
\begin{align}
\mathbf{i}_t^l &= \sigma(\mathbf{W}_i^l \mathbf{h}_t^{l-1} + \mathbf{U}_i^l \mathbf{h}_{t-1}^l + \mathbf{b}_i^l) \\
\mathbf{f}_t^l &= \sigma(\mathbf{W}_f^l \mathbf{h}_t^{l-1} + \mathbf{U}_f^l \mathbf{h}_{t-1}^l + \mathbf{b}_f^l) \\
\tilde{\mathbf{c}}_t^l &= \tanh(\mathbf{W}_c^l \mathbf{h}_t^{l-1} + \mathbf{U}_c^l \mathbf{h}_{t-1}^l + \mathbf{b}_c^l) \\
\mathbf{o}_t^l &= \sigma(\mathbf{W}_o^l \mathbf{h}_t^{l-1} + \mathbf{U}_o^l \mathbf{h}_{t-1}^l + \mathbf{b}_o^l) \\
\mathbf{c}_t^l &= \mathbf{i}_t^l \odot \tilde{\mathbf{c}}_t^l + \mathbf{f}_t^l \odot \mathbf{c}_{t-1}^l \label{eq:conventional-cell}\\
\mathbf{h}_t^l &= \mathbf{o}_t^l \odot \tanh(\mathbf{c}_t^l),
\end{align}
where $t \in \{1,\cdots,T\}$ and $l \in \{1,\cdots,L\}$. $\mathbf{W}_{\cdot}^{l} \in \mathbb{R}^{d_l \times d_{l-1}}$, $\mathbf{U}_{\cdot}^{l} \in \mathbb{R}^{d_l \times d_l}$, $\mathbf{b}_{\cdot}^{l} \in \mathbb{R}^{d_l}$ are trainable parameters,
and $\sigma(\cdot)$ and $\tanh(\cdot)$ are the sigmoid and the hyperbolic tangent function respectively.
Also we assume that $\mathbf{h}_t^0=\mathbf{x}_t \in \mathbb{R}^{d_0}$ where $\mathbf{x}_t$ is the $t$-th element of an input sequence.
The input gate $\mathbf{i}_t^l$ and the forget gate $\mathbf{f}_t^l$ control the amount of information transmitted from $\tilde{\mathbf{c}}_t^l$ and $\mathbf{c}_{t-1}^l$, the candidate cell state and the previous cell state, to the new cell state $\mathbf{c}_t^l$.
Similarly the output gate $\mathbf{o}_t^l$ soft-selects which portion of the cell state $\mathbf{c}_t^l$ is to be used in the final hidden state.
We can clearly see that the cell states $\mathbf{c}_{t-1}^l$, $\tilde{\mathbf{c}}_t^l$, $\mathbf{c}_t^l$ play a crucial role in forming horizontal recurrence.
However, the current formulation does not consider the cell state of the $(l-1)$-th layer ($\mathbf{c}_t^{l-1}$), and thus the lower context is reflected only in a rudimentary way, precluding finer control of the vertical information flow.
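To make the formulation concrete, the following is a minimal sketch of a single step of one stacked-LSTM layer, assuming NumPy; the weight containers \texttt{W}, \texttt{U}, and \texttt{b} are hypothetical placeholders (dictionaries keyed by gate name), and the code mirrors the equations above rather than any optimized implementation.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_below, h_prev, c_prev, W, U, b):
    # h_below = h_t^{l-1}; (h_prev, c_prev) = (h_{t-1}^l, c_{t-1}^l).
    # W, U, b are dicts of weight matrices/vectors keyed by 'i', 'f', 'c', 'o'.
    i = sigmoid(W['i'] @ h_below + U['i'] @ h_prev + b['i'])
    f = sigmoid(W['f'] @ h_below + U['f'] @ h_prev + b['f'])
    c_tilde = np.tanh(W['c'] @ h_below + U['c'] @ h_prev + b['c'])
    o = sigmoid(W['o'] @ h_below + U['o'] @ h_prev + b['o'])
    c = i * c_tilde + f * c_prev   # conventional cell state update
    h = o * np.tanh(c)             # new hidden state
    return h, c
\end{verbatim}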
\subsection{Cell-aware Stacked LSTMs}
Now we extend the stacked LSTM formulation defined above to address the problem noted in the previous subsection.
To enhance the interaction between layers in a way similar to how LSTMs keep and forget the information from the previous time step, we introduce the \textit{additional forget gate} $\mathbf{g}_t^l$ that determines whether to accept or ignore the signals coming from the previous layer.
The proposed Cell-aware Stacked LSTM (CAS-LSTM) architecture is defined as follows:
\begin{align}
\mathbf{i}_t^l &= \sigma(\mathbf{W}_i^l \mathbf{h}_t^{l-1} + \mathbf{U}_i^l \mathbf{h}_{t-1}^l + \mathbf{b}_i^l) \\
\mathbf{f}_t^l &= \sigma(\mathbf{W}_f^l \mathbf{h}_t^{l-1} + \mathbf{U}_f^l \mathbf{h}_{t-1}^l + \mathbf{b}_f^l) \\
\mathbf{g}_t^l &= \sigma(\mathbf{W}_g^l \mathbf{h}_t^{l-1} + \mathbf{U}_g^l \mathbf{h}_{t-1}^l + \mathbf{b}_g^l) \label{eq:new-forget-gate}\\
\tilde{\mathbf{c}}_t^l &= \tanh(\mathbf{W}_c^l \mathbf{h}_t^{l-1} + \mathbf{U}_c^l \mathbf{h}_{t-1}^l + \mathbf{b}_c^l) \\
\mathbf{o}_t^l &= \sigma(\mathbf{W}_o^l \mathbf{h}_t^{l-1} + \mathbf{U}_o^l \mathbf{h}_{t-1}^l + \mathbf{b}_o^l) \\
\mathbf{c}_t^l &= \mathbf{i}_t^l \odot {\tilde{\mathbf{c}}}_t^l + (\bm{1 - \lambda})\odot\mathbf{f}_t^l \odot \mathbf{c}_{t-1}^l + \bm{\lambda}\odot\mathbf{g}_t^l \odot \mathbf{c}_t^{l-1} \\
\mathbf{h}_t^l &= \mathbf{o}_t^l \odot \tanh(\mathbf{c}_t^l),
\end{align}
where $l > 1$ and $d_l=d_{l-1}$.
$\bm\lambda$ can either be a vector of constants or parameters.
When $l=1$, the equations defined in the previous subsection are used.
Therefore, it can be said that each non-bottom layer of CAS-LSTM accepts two sets of hidden and cell states---one from the left context and the other from the context below.
The left and lower contexts participate in the computation through equivalent procedures, so that the information from lower layers can be propagated efficiently.
Fig. \ref{fig:comparison} compares CAS-LSTM to the conventional stacked LSTM architecture,
and Fig. \ref{fig:diagram} depicts the computation flow of the CAS-LSTM.
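As a minimal sketch under the same assumptions as the block in \S\ref{ssec:stacked-lstm} (reusing \texttt{sigmoid} and the hypothetical weight containers, now with the additional key \texttt{'g'}), one CAS-LSTM step for a layer $l>1$ can be written as:
\begin{verbatim}
def cas_lstm_step(h_below, c_below, h_prev, c_prev, W, U, b, lam=0.5):
    # c_below = c_t^{l-1} is the extra input compared to a plain stacked LSTM.
    i = sigmoid(W['i'] @ h_below + U['i'] @ h_prev + b['i'])
    f = sigmoid(W['f'] @ h_below + U['f'] @ h_prev + b['f'])
    g = sigmoid(W['g'] @ h_below + U['g'] @ h_prev + b['g'])  # additional forget gate
    c_tilde = np.tanh(W['c'] @ h_below + U['c'] @ h_prev + b['c'])
    o = sigmoid(W['o'] @ h_below + U['o'] @ h_prev + b['o'])
    c = i * c_tilde + (1.0 - lam) * f * c_prev + lam * g * c_below
    h = o * np.tanh(c)
    return h, c
\end{verbatim}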
We argue that considering $\mathbf{c}_t^{l-1}$ in computation is beneficial for the following reasons.
First, contrary to $\mathbf{h}_t^{l-1}$, $\mathbf{c}_t^{l-1}$ contains information which is not filtered by $\mathbf{o}_t^{l-1}$.
Thus a model that directly uses $\mathbf{c}_t^{l-1}$ does not rely solely on $\mathbf{o}_t^{l-1}$ for extracting information, since it has access to the raw information in $\mathbf{c}_t^{l-1}$, just as in temporal transitions.
In other words, $\mathbf{o}_t^{l-1}$ no longer has to take all responsibility for selecting useful features for both horizontal and vertical transitions, and the burden of selecting information is shared with $\mathbf{g}_t^l$.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{ca_path.pdf}
\caption{
Visualization of paths between $\mathbf{c}_t^{l-1}$ and $\mathbf{c}_t^l$.
In CAS-LSTM, the direct connection between $\mathbf{c}_t^{l-1}$ and $\mathbf{c}_t^l$ exists (denoted as red dashed lines).
}
\label{fig:ca_path}
\end{figure}
Another advantage of using $\mathbf{c}_t^{l-1}$ lies in the fact that it directly connects $\mathbf{c}_t^{l-1}$ and $\mathbf{c}_t^l$.
This direct connection could facilitate and stabilize training, since the terminal error signals can be backpropagated to the model parameters along a shortened path.
Fig. \ref{fig:ca_path} illustrates paths between the two cell states.
Regarding $\bm\lambda$, we find experimentally that there is little difference between a constant and a trainable vector bounded in $(0, 1)$, and in practice setting $\lambda_i=0.5$ works well across multiple experiments.
We also experimented with an architecture without $\bm\lambda$, i.e.\ one in which the two cell states are combined by unweighted summation similar to multidimensional RNNs \citep{graves2009offline}, and found that it leads to performance degradation and unstable convergence, likely due to the mismatch in the range of cell state values between layers ($(-2, 2)$ for the first layer and $(-3, 3)$ for the others).
Experimental results on various $\bm\lambda$ are presented in \S\ref{exp:variations}.
\subsection{Sentence Encoders}
For text classification tasks, a variable-length sentence should be represented as a fixed-length vector.
We describe the sentence encoder architectures used in experiments in this subsection.
First, we assume that a sequence of $T$ one-hot word vectors is given as input: $(\mathbf{w}_1, \cdots, \mathbf{w}_T)$, $\mathbf{w}_t \in \mathbb{R}^{|V|}$ where $V$ is the vocabulary set.
The words are projected to corresponding word representations: $\mathbf{X}=(\mathbf{x}_1, \cdots, \mathbf{x}_T)$ where $\mathbf{x}_{t} = \mathbf{E}^\top \mathbf{w}_t \in \mathbb{R}^{d_0}$, $\mathbf{E}\in \mathbb{R}^{|V| \times d_0}$.
Then $\mathbf{X}$ is fed to an $L$-layer CAS-LSTM model, resulting in the representations $\mathbf{H}=(\mathbf{h}_1^L, \cdots, \mathbf{h}_T^L)\in \mathbb{R}^{T\times d_L}$.
The encoded sentence representation $\mathbf{s} \in \mathbb{R}^{d_L}$ is computed by max-pooling $\mathbf{H}$ over time as in the work of \citet{conneau2017infersent}.
Consistent with their results, our preliminary experiments showed that max-pooling performs consistently better than mean-pooling and last-pooling.
For better modeling of semantics, a bidirectional CAS-LSTM network may also be used.
In the bidirectional case, the representations obtained by left-to-right reading $\mathbf{H}=(\mathbf{h}_1^L, \cdots, \mathbf{h}_T^L) \in \mathbb{R}^{T\times d_L}$
and those by right-to-left reading $\widehat{\mathbf{H}}=(\widehat{\mathbf{h}}_1^L, \cdots, \widehat{\mathbf{h}}_T^L) \in \mathbb{R}^{T \times d_L}$ are concatenated and max-pooled to yield the sentence representation $\mathbf{s} \in \mathbb{R}^{2d_L}$.
We call this bidirectional architecture Bi-CAS-LSTM in experiments.
To predict the final task-specific label, we apply a task-specific feature extraction function $\phi$ to the sentence representation(s) and feed the extracted features to a classifier network.
For the classifier network, a multi-layer perceptron (MLP) with the ReLU activation followed by the linear projection and the softmax function is used:
\begin{equation}
P(\mathbf{y}|\mathbf{X}) = \text{softmax}(\mathbf{W}_c\text{MLP}(\phi(\cdot))),
\end{equation}
where $\mathbf{W}_c \in \mathbb{R}^{|L|\times d_h}$, $|L|$ is the number of label classes, and $d_h$ the dimension of the MLP output.
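A minimal sketch of this encoding and classification pipeline, again assuming NumPy; the single-hidden-layer MLP weights \texttt{W\_mlp}, \texttt{b\_mlp}, and \texttt{W\_c} are hypothetical placeholders:
\begin{verbatim}
def encode(H):
    # H: (T, d_L) array of top-layer hidden states; max-pool over time.
    return H.max(axis=0)

def classify(features, W_mlp, b_mlp, W_c):
    hidden = np.maximum(0.0, W_mlp @ features + b_mlp)  # ReLU MLP layer
    logits = W_c @ hidden                               # linear projection
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                              # softmax over labels
\end{verbatim}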
\section{Experiments}
\label{sec:experiments}
We evaluate our method on three benchmark tasks on sentence encoding: natural language inference (NLI), paraphrase identification (PI), and sentiment classification.
To further demonstrate the general applicability of our method on text generation, we also evaluate the proposed method on machine translation.
In addition, we conduct analyses on gate values and model variations for a better understanding of the architecture.
We refer readers to the supplemental material for detailed experimental settings.
The code will be made public for reproduction.
For the NLI and PI tasks, there exist architectures specializing in sentence pair classification.
However, in this work we confine our model to an architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the architectures in extracting semantics.
That said, the applicability of CAS-LSTM is not limited to sentence encoder--based approaches.
\subsection{Natural Language Inference}
\begin{table*}[t]
\centering
\begin{tabular}{l r r}
\hline
\bf{Model} & \bf{Acc. (\%)} & \bf {\# Params} \\
\hline
300D LSTM \citep{bowman2016spinn} & 80.6 & 3.0M \\
300D TBCNN \citep{mou2016snli} & 82.1 & 3.5M \\
300D SPINN-PI \citep{bowman2016spinn} & 83.2 & 3.7M \\
600D BiLSTM with intra-attention \citep{liu2016learning} & 84.2 & 2.8M \\
4096D BiLSTM with max-pooling \citep{conneau2017infersent} & 84.5 & 40M \\
300D BiLSTM with gated pooling \citep{chen2017gated} & 85.5 & 12M \\
300D Gumbel Tree-LSTM \citep{choi2018learning} & 85.6 & 2.9M \\
600D Shortcut stacked BiLSTM \citep{nie2017shortcut} & 86.1 & 140M \\
300D Reinforced self-attention network \citep{shen2018reinforced} & 86.3 & 3.1M \\
600D BiLSTM with generalized pooling \citep{chen2018generalized} & 86.6 & 65M \\
\hline
300D 2-layer CAS-LSTM (ours) & 86.4 & 2.9M \\
300D 2-layer Bi-CAS-LSTM (ours) & {86.8} & 6.8M \\
300D 3-layer CAS-LSTM (ours) & 86.4 & 4.8M \\
300D 3-layer Bi-CAS-LSTM (ours) & \bf{87.0} & 8.6M \\
\hline
\end{tabular}
\caption{Results of the models on the SNLI dataset.}
\label{table:snli}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{l r r r}
\hline
\bf{Model} & \bf{In (\%)} & \bf{Cross (\%)} & \bf{\# Params} \\
\hline
CBOW \citep{williams2018mnli} & 64.8 & 64.5 & - \\
BiLSTM \citep{williams2018mnli} & 66.9 & 66.9 & - \\
Shortcut stacked BiLSTM \citep{nie2017shortcut}$^\ast$ & \bf{74.6} & 73.6 & 140M \\
BiLSTM with gated pooling \citep{chen2017gated} & 73.5 & 73.6 & 12M \\
BiLSTM with generalized pooling \citep{chen2018generalized} & 73.8 & \bf{74.0} & 18M$^{\ast\ast}$ \\
\hline
2-layer CAS-LSTM (ours) & 74.0 & 73.3 & 2.9M \\
2-layer Bi-CAS-LSTM (ours) & \bf{74.6} & 73.7 & 6.8M \\
3-layer CAS-LSTM (ours) & 73.8 & 73.1 & 4.8M \\
3-layer Bi-CAS-LSTM (ours) & 74.2 & 73.4 & 8.6M \\
\hline
\end{tabular}
\caption{Results of the models on the MultiNLI dataset.
`In' and `Cross' represent accuracy calculated from the matched and mismatched test set respectively.
$^\ast$: SNLI dataset is used as additional training data. $^{\ast\ast}$: computed from hyperparameters provided by the authors.}
\label{table:mnli}
\end{table*}
For the evaluation of performance of the proposed method on the NLI task, SNLI \citep{bowman2015snli} and MultiNLI \citep{williams2018mnli} datasets are used.
The objective of both datasets is to predict the relationship between a premise and a hypothesis sentence: \textit{entailment}, \textit{contradiction}, and \textit{neutral}.
SNLI and MultiNLI datasets are composed of about 570k and 430k premise-hypothesis pairs respectively.
GloVe pretrained word embeddings\footnote{\url{https://nlp.stanford.edu/projects/glove/}} \citep{pennington2014glove} are used and remain fixed during training.
The dimension of encoder states ($d_l$) is set to 300 and a 1024D MLP with one or two hidden layers is used.
We apply dropout \citep{srivastava2014dropout} to the word embeddings and the MLP layers.
The features used as input to the MLP classifier are extracted by the following equation:
\begin{equation}
\label{eq:nli-matching}
\phi(\mathbf{s}_1, \mathbf{s}_2) = \mathbf{s}_1 \oplus \mathbf{s}_2 \oplus |\mathbf{s}_1 - \mathbf{s}_2| \oplus (\mathbf{s}_1 \odot \mathbf{s}_2),
\end{equation}
where $\oplus$ is the vector concatenation operator.
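In code, the matching function of Eq.\ \ref{eq:nli-matching} amounts to a single concatenation; a minimal sketch assuming NumPy sentence vectors:
\begin{verbatim}
def nli_features(s1, s2):
    # concatenation, absolute difference, and element-wise product
    return np.concatenate([s1, s2, np.abs(s1 - s2), s1 * s2])
\end{verbatim}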
Table \ref{table:snli} and \ref{table:mnli} contain results of the models on SNLI and MultiNLI datasets.
Along with other state-of-the-art models, the tables include several stacked LSTM--based models to facilitate comparison of our work with prior related work.
\citet{liu2016learning,chen2017gated,chen2018generalized} adopt advanced pooling algorithms motivated by the attention mechanism to obtain a fixed-length sentence vector.
\citet{nie2017shortcut} use the concatenation of all outputs from previous layers as input to the next layer.
In SNLI, our best model achieves the accuracy of 87.0\%, which is the new state-of-the-art among the sentence encoder--based models, with relatively fewer parameters.
Similarly in MultiNLI, our models match the accuracy of state-of-the-art models in both in-domain (matched) and cross-domain (mismatched) test sets.
Note that only the GloVe word vectors are used as word representations, as opposed to some models that introduce character-level features.
It is also notable that our proposed architecture does not restrict the selection of pooling method; the performance could further be improved by replacing max-pooling with other advanced algorithms e.g. intra-sentence attention \citep{liu2016learning} and generalized pooling \citep{chen2018generalized}.
\subsection{Paraphrase Identification}
\begin{table}[t]
\centering
\begin{tabular}{l r}
\hline
\bf{Model} & \bf{Acc. (\%)} \\
\hline
CNN \citep{wang2017bilateral} & 79.6 \\
LSTM \citep{wang2017bilateral} & 82.6 \\
Multi-Perspective LSTM \citep{wang2017bilateral} & 83.2 \\
LSTM + ElBiS \citep{choi2018elbis} & 87.3 \\
REGMAPR (BASE+REG) \citep{brahma2018regmapr} & 88.0 \\
\hline
CAS-LSTM (ours) & 88.4 \\
Bi-CAS-LSTM (ours) & \bf{88.6} \\
\hline
\end{tabular}
\caption{Results of the models on the Quora Question Pairs dataset.}
\label{table:quora}
\end{table}
We use the Quora Question Pairs dataset \citep{wang2017bilateral} in evaluating the performance of our method on the PI task.
The dataset consists of over 400k question pairs, and each pair is annotated with whether the two sentences are paraphrases of each other or not.
Similarly to the NLI experiments, GloVe pretrained vectors, 300D encoders, and 1024D MLP are used.
The number of CAS-LSTM layers is fixed to 2 in PI experiments.
Two sentence vectors are aggregated using the following equation and fed as input to the classifier.
\begin{equation}
\label{eq:pi-matching}
\phi(\mathbf{s}_1, \mathbf{s}_2) = |\mathbf{s}_1 - \mathbf{s}_2| \oplus (\mathbf{s}_1 \odot \mathbf{s}_2)
\end{equation}
The results on the Quora Question Pairs dataset are summarized in Table \ref{table:quora}.
Again we can see that our models outperform other models, especially compared to conventional LSTM--based models.
Also note that Multi-Perspective LSTM \citep{wang2017bilateral}, LSTM + ElBiS \citep{choi2018elbis}, and REGMAPR (BASE+REG) \citep{brahma2018regmapr} in Table \ref{table:quora} are approaches that focus on designing a more sophisticated function for aggregating two sentence vectors; their aggregation functions could also be applied to our work for further improvement.
\subsection{Sentiment Classification}
\begin{table*}[t]
\centering
\begin{tabular}{l r r}
\hline
\bf{Model} & \bf{SST-2 (\%)} & \bf{SST-5 (\%)} \\
\hline
Recursive Neural Tensor Network \citep{socher2013recursive} & 85.4 & 45.7 \\
2-layer LSTM \citep{tai2015treelstm} & 86.3 & 46.0 \\
2-layer BiLSTM \citep{tai2015treelstm} & 87.2 & 48.5 \\
Constituency Tree-LSTM \citep{tai2015treelstm} & 88.0 & 51.0 \\
Constituency Tree-LSTM with recurrent dropout \citep{looks2017deep} & 89.4 & 52.3 \\
byte mLSTM \citep{radford2017learning}$^\ast$ & \underline{91.8} & 52.9 \\
Gumbel Tree-LSTM \citep{choi2018learning} & 90.7 & \bf{53.7} \\
BCN + Char + ELMo \citep{peters2018elmo}$^\ast$ & - & \underline{54.7} \\
\hline
2-layer CAS-LSTM (ours) & 91.1 & 53.0 \\
2-layer Bi-CAS-LSTM (ours) & \bf{91.3} & 53.6 \\
\hline
\end{tabular}
\caption{Results of the models on the SST dataset. $^\ast$: models pretrained on large external corpora are used.}
\label{table:sst}
\end{table*}
In evaluating sentiment classification performance, the Stanford Sentiment Treebank (SST) \citep{socher2013recursive} is used.
It consists of about 12,000 binary-parsed sentences where constituents (phrases) of each parse tree are annotated with a sentiment label (\textit{very positive}, \textit{positive}, \textit{neutral}, \textit{negative}, \textit{very negative}).
Following the convention of prior work, all phrases and their labels are used in training but only the sentence-level data are used in evaluation.
In evaluation we consider two settings, namely SST-2 and SST-5, the two differing only in their level of granularity with regard to labels.
In SST-2, data samples annotated with `neutral' are excluded from training and evaluation.
The two positive labels (very positive, positive) are considered as the same label, and similarly for the two negative labels.
As a result 98,794/872/1,821 data samples are used in training/validation/test, and the task is considered as a binary classification problem.
In SST-5, all 318,582/1,101/2,210 data samples are used and the task is a 5-class classification problem.
Since the task is a single-sentence classification problem, we use the sentence representation itself as input to the classifier.
We use 300D GloVe vectors, 2-layer 150D or 300D encoders, and a 300D MLP classifier for the models;
however, unlike in the previous experiments, we tune the word embeddings during training.
The results on SST are listed in Table \ref{table:sst}.
Our models clearly outperform plain LSTM- and BiLSTM-based models, and are competitive to other state-of-the-art models, without utilizing parse tree information.
\subsection{Machine Translation}
\begin{table}[tb]
\centering
\begin{tabular}{l l}
\hline
\bf{Model} & \bf{BLEU} \\
\hline
256D LSTM & 28.1 $\pm$ 0.22 \\
256D CAS-LSTM & 28.8 $\pm$ 0.04$^\ast$ \\
247D CAS-LSTM & 28.7 $\pm$ 0.07$^\ast$ \\
\hline
\end{tabular}
\caption{
Results of the models on the IWSLT 2014 de-en dataset.
$^\ast$: $p < 0.0005$ (one-tailed paired t-test).
}
\label{table:mt}
\end{table}
We use the IWSLT 2014 machine translation evaluation campaign dataset \citep{cettolo2014report} in the machine translation experiments.
We use the fairseq library\footnote{\url{https://github.com/pytorch/fairseq}} \citep{gehring2017fairseq} for experiments.
The Moses tokenizer\footnote{\url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl}} is used for word tokenization, and byte pair encoding \citep{sennrich2016bpe} is applied to limit the vocabulary size to 10,000.
Similar to \citet{wiseman2016nmt}, a 2-layer 256D sequence-to-sequence LSTM model with the attentional decoder is used as baseline, and we replace the encoder and the decoder network with the proposed architecture for the evaluation of performance improvement.
For decoding, beam search with $B=10$ is used.
For fair comparison, we tune hyperparameters for all models based on performance on the validation dataset and train each model five times with different random seeds.
Also, to cancel out the effect of the increased number of parameters, we experiment with a 247D CAS-LSTM model, which has roughly the same number of parameters as the baseline model (8.2M).
From Table \ref{table:mt}, we can see that the CAS-LSTM models bring significant performance gains over the baseline model.
\subsection{Forget Gate Analysis}
\begin{figure}[t]
\centering
\subfigure{%
\label{fig:gate-a}
\includegraphics[width=0.3\textwidth]{g2.pdf}
}
\quad
\subfigure{%
\label{fig:gate-b}
\includegraphics[width=0.3\textwidth]{g3.pdf}
}
\par\bigskip
\subfigure{%
\label{fig:gate-c}
\includegraphics[width=0.3\textwidth]{g2_range.pdf}
}
\quad
\subfigure{%
\label{fig:gate-d}
\includegraphics[width=0.3\textwidth]{g3_range.pdf}
}
\par\bigskip
\subfigure{%
\label{fig:gate-e}
\includegraphics[width=0.3\textwidth]{g2o1.pdf}
}
\quad
\subfigure{%
\label{fig:gate-f}
\includegraphics[width=0.3\textwidth]{g3o2.pdf}
}
\caption{
(\textit{a}) $g^2_i$, (\textit{b}) $g^3_i$, (\textit{c}) $R(\mathbf{g}^2_\cdot)$,
(\textit{d}) $R(\mathbf{g}^3_\cdot)$, (\textit{e}) $\vert g^2_i - o^1_i \vert$,
(\textit{f}) $\vert g^3_i - o^2_i \vert$.
(\textit{a}), (\textit{b}): Histograms of vertical forget gate values.
(\textit{c}), (\textit{d}): Histograms of the ranges of vertical forget gate per time step.
(\textit{e}), (\textit{f}): Histograms of the absolute difference between the previous output gate and the current vertical forget gate values.
}
\label{fig:gate-analysis}
\end{figure}
To inspect the effect of the additional forget gate, we investigate how the values of vertical forget gates are distributed.
We sample 1,000 random sentences from the development set of the SNLI dataset, and use the 3-layer CAS-LSTM model trained on the SNLI dataset to compute gate values.
If all values from a vertical forget gate $\mathbf{g}_t^l$ were to be 0, this would mean that the introduction of the additional forget gate is meaningless and the model would reduce to a plain stacked LSTM.
On the contrary if all values were 1, meaning that the vertical forget gates were always \textit{open}, it would be impossible to say that the information is modulated effectively.
Fig. \ref{fig:gate-a} and \ref{fig:gate-b} represent histograms of the vertical forget gate values from the second and the third layer.
From the figures we can validate that the trained model does not fall into the degenerate case where vertical forget gates are ignored.
Also the figures show that the values are right-skewed, which we conjecture to be a result of focusing more on a strong interaction between adjacent layers.
To further verify that the gate values are diverse enough within each time step, we compute the distribution of the range of values per time step, $R(\mathbf{g}_t^l)=\max_i{g_{t,i}^l} - \min_i{{g}_{t,i}^l}$, where $\mathbf{g}_t^l=[g_{t,1}^l, \cdots, g_{t,d_l}^l]^\top$.
We plot the histograms in Fig. \ref{fig:gate-c} and \ref{fig:gate-d}.
From the figures we see that the vertical forget gate controls the amount of information flow effectively, making diverse decisions of retaining or discarding signals across dimensions.
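The range statistic itself is straightforward to compute from sampled gate activations; a minimal sketch assuming the gate values of one sentence are collected in a NumPy array:
\begin{verbatim}
def gate_range(g):
    # g: (T, d_l) vertical forget gate values for one sentence; returns
    # R(g_t^l) = max_i g_{t,i}^l - min_i g_{t,i}^l for every time step.
    return g.max(axis=1) - g.min(axis=1)
\end{verbatim}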
Finally, to investigate the argument presented in \S\ref{sec:caslstm} that the additional forget gate helps the previous output gate with reducing the burden of extracting all needed information, we inspect the distribution of the values from $\vert \mathbf{g}_t^l - \mathbf{o}_t^{l-1} \vert$.
This distribution indicates how differently the vertical forget gate and the previous output gate select information from $\mathbf{c}_t^{l-1}$.
From Fig. \ref{fig:gate-e} and \ref{fig:gate-f} we can see that the two gates make fairly different decisions, from which we demonstrate that the direct path between $\mathbf{c}_t^{l-1}$ and $\mathbf{c}_t^l$ enables a model to utilize signals overlooked by $\mathbf{o}_t^{l-1}$.
\subsection{Model Variations}
\label{exp:variations}
In this subsection, we examine the influence of each component of the model on performance by removing or replacing components.
The SNLI dataset is used for experiments, and the best performing configuration is used as the baseline for modifications.
We consider the following variants: (\textit{i}) models with different $\bm\lambda$, (\textit{ii}) models without $\bm\lambda$, and (\textit{iii}) models that integrate lower contexts via peephole connections.
Variant (\textit{iii}) computes and applies the forget gate $\mathbf{g}_t^l$, which takes charge of integrating lower contexts, via the equations below, following the work of \citet{zhang2016highway}:
\begin{align}
\begin{split}
\mathbf{g}_t^l &= \sigma(\mathbf{W}_g^l \mathbf{h}_t^{l-1}
+ \mathbf{p}_{g_1}^l \odot \mathbf{c}_{t-1}^l + \mathbf{p}_{g_2}^l \odot \mathbf{c}_{t}^{l-1} + \mathbf{b}_g^l) \label{eq:peephole1}
\end{split} \\
\mathbf{c}_t^l &= \mathbf{i}_t^l \odot \tilde{\mathbf{c}}_t^l + \mathbf{f}_t^l \odot \mathbf{c}_{t-1}^l + \mathbf{g}_t^l \odot \mathbf{c}_t^{l-1}, \label{eq:peephole2}
\end{align}
where $\mathbf{p}_\cdot^l \in \mathbb{R}^{d_l}$ represent peephole weight vectors that take cell states into account.
We can see that the computation formulae for $\mathbf{f}_t^l$ and $\mathbf{g}_t^l$ are not consistent, in that $\mathbf{h}_{t-1}^l$ does not participate in computing $\mathbf{g}_t^l$, and that the left and lower contexts are reflected in $\mathbf{g}_t^l$ only via element-wise multiplications, which do not consider interactions among dimensions.
By contrast, ours uses the analogous formulae in calculating $\mathbf{f}_t^l$ and $\mathbf{g}_t^l$, considers $\mathbf{h}_{t-1}^{l}$ in calculating $\mathbf{g}_t^l$, and introduces the scaling factor $\bm{\lambda}$.
Table \ref{table:variations} summarizes the results of model variants.
From the results of \textit{baseline} and \textit{(i)}, we validate that the selection of $\bm\lambda$ does not significantly affect performance but introducing $\bm\lambda$ is beneficial (\textit{baseline vs. (ii)}) possibly due to its effect on normalizing information from multiple sources, as mentioned in \S\ref{sec:caslstm}.
Also, from the comparison between \textit{baseline} and \textit{(iii)}, we show that the proposed way of combining the left and the lower contexts leads to better modeling of sentence representations than that of \citet{zhang2016highway}.
\begin{table}[t]
\centering
\begin{tabular}{l r r}
\hline
\bf{Model} & \bf{Acc. (\%)} & \bf{$\Delta$} \\
\hline
Bi-CAS-LSTM (\textit{baseline}) & 87.0 & \\
\quad \textit{(i) Diverse $\bm\lambda$} & & \\
\qquad \textit{(a) $\lambda_i=0.25$} & 86.8 & -0.2 \\
\qquad \textit{(b) $\lambda_i=0.75$} & 86.8 & -0.2 \\
\qquad \textit{(c) Trainable $\bm\lambda$} & 86.9 & -0.1 \\
\quad \textit{(ii) No $\bm\lambda$} & 86.6 & -0.4 \\
\quad \textit{(iii) Integration through peepholes} & 86.5 & -0.5 \\
\hline
\end{tabular}
\caption{Results of model variants.}
\label{table:variations}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM.
It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way.
We evaluated the proposed method on various benchmark tasks: natural language inference, paraphrase identification, sentiment classification, and machine translation.
Our models outperformed plain LSTM-based models in all experiments and were competitive with other state-of-the-art models.
The proposed architecture can replace any stacked LSTM under only one weak restriction---the size of states should be identical across all layers.
For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks.
Various problems such as sequence labeling and language modeling might benefit from sophisticated modulation on context integration.
Aggregating diverse contexts from sequential data, e.g. those from forward and backward reading of text, could also be an intriguing research direction.
\section*{Acknowledgments}
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (NRF2016M3C4A7952587).
\section{Introduction}
An ever-increasing interest in the field of automatic summarization has led to a plethora of extractive techniques, with a lack of clarity about which technique is best. Unclear implementation details and variation in evaluation setups only make the problem worse. However, there is no doubt that no single technique will always work: not only is there a difference in performance across different datasets, there is also a good amount of variation across documents within the same dataset. In several cases this difference is also attributed to the fact that different ROUGE setups are used for evaluation, which can result in substantial variation in the scores. \cite{hong2014repository} propose using a fixed set of parameters for ROUGE and report comparable results for several summarization algorithms. Even with this normalization, system performance still varies a lot, and there is a possibility of exploiting this variation to generate better performing systems. To give an example, we show a simple comparison of two extractive summarization systems, picked from the extremes, in terms of performance, of those used by \cite{hong2014repository} in their experiments. The FreqSum system \cite{nenkova2006compositional}, which has the weakest performance, was compared to the DPP system \cite{kulesza2012determinantal}, which was the best performing system amongst those compared. On the DUC 2004 dataset, FreqSum performed better on more than 10$\%$ of the document clusters: there are documents for which a system that is very weak overall outperforms a system with very good average performance. Going a step further, an oracle over just the five baseline systems outperforms DPP in a little over fifty percent of the document clusters. The argument is clear: an ensemble of several systems can definitely improve performance compared to individual systems. The ideal way of forming an ensemble summary would be to select the relevant information from each candidate while discarding the rest.\\
In this work we propose a new method for estimating the authority of a particular system for a given document, while at the same time estimating the importance of each sentence within the summary generated by that system. The ensemble summary is then a function of the authority of each candidate system as well as the relative importance of each sentence in the candidate summaries. The fact that informative or \emph{summary worthy} sentences in a document cluster are far fewer in number than non-informative ones forms the basis of our hypothesis. We argue that since this content is much scarcer, any substantial overlap between two summaries will likely be due to the \emph{important} content rather than the redundant one. Simply because there is less content to choose from when it comes to the important sentences, two good summaries will have a good amount of overlap in content. At the same time, it is highly unlikely that two summaries will also choose the same unimportant content. Keeping this argument in mind, we associate higher similarity in content between two summaries with the summaries having more informative content. \\
We use graph-based ranking that takes into account the similarity of a candidate summary with other candidates to generate its local (or document-specific) ranking. We also determine the overall global ranking of a system from its ROUGE score on a development dataset. In the same way, the \emph{informativeness} of a sentence is linked to its overlap with sentences of other summaries. The HybridRank model proposed here combines these three factors to generate a new aggregate ranking of sentences.
\section{Related Work}
In contrast to the amount of attention automatic summarization, and especially extractive techniques, has received from researchers, aggregation techniques have been explored little. This is counter-intuitive given several studies which show that even in cases where two systems achieve a comparable ROUGE score, the actual content can be quite different \cite{hong2014repository}. Existing aggregation techniques can be broadly classified into two categories: \emph{rank aggregation} and \emph{summary aggregation}. These techniques are used post-summarization, i.e. each candidate system is first used without any modification. The outputs, whether ranked lists or summaries, are then used for aggregation. The latter method relies solely on candidate summaries, without any information or assumption about the original sentence rankings. In contrast, the former combines existing ranked lists to generate a new aggregate ranking. Apart from these two, there is another type of aggregation which combines various aspects of the candidates and incorporates them into the algorithm itself.\\
\cite{pei2012supervised} and \cite{hongsystem} are two instances of the summary aggregation techniques. \cite{pei2012supervised} use SVM-Rank to learn the optimum ranking of sentences from candidate systems. Each sentence is labeled as $-1,0,1$ depending on its \emph{summary worthiness} and then it is used to learn pairwise ranking for each sentence. As opposed to this, \cite{hongsystem} attempt to generate rankings of entire summaries to find out the combination of sentences that maximizes ROUGE-1 or ROUGE-2. They use summaries from four different systems to begin with. Next they combine these summaries and list out all possible candidate summaries by selecting a fixed number of sentences from the combined set. Several word and summary level features are then used to train a SVM-Rank algorithm that can learn to rank candidate summaries to maximize ROUGE scores. \\
Excluding common techniques like \emph{Round Robin}, \emph{Borda Count} or \emph{Reciprocal Rank}, the only notable attempt at using rank aggregation was made by \cite{wang2012weighted}. The \emph{weighted consensus summarization} system proposed by them treats rank aggregation as an optimization problem. This approach also introduced the concept of consensus between candidate summaries. They create a weighted combination of ranked lists, under the constraint that the aggregate ranking be as close to the original rankings as possible. Unlike the approach proposed in this paper, \cite{wang2012weighted} do not differentiate candidates based on their \emph{trustworthiness}, and instead try to retain as much information from each candidate as possible. Apart from not taking into account the content of candidate summaries, there are two major limitations to this approach. First, it uses the $L_1$ norm for computing similarity (or distance) between candidate rankings, which can be a sub-optimal choice compared to traditional metrics like \emph{Kendall's Tau}. Second, this technique tries to optimize rankings over all sentences in the document cluster. So even if the systems agree on the ranking of the top-$k$ sentences, which actually go into the summary, the aggregation still tries to optimize over the entire rankings. This is not only unnecessary, but can also affect performance adversely.\\
Neither \emph{summary aggregation} nor \emph{rank aggregation} takes into consideration the original summarization algorithms or modifies them in any way. There have been a few attempts at modifying summarization algorithms to incorporate features from several candidates into a single algorithm, but such attempts are limited by the non-triviality of meaningfully combining unique aspects of the candidates. One popular approach of this kind is combining several sentence similarity scores to generate an aggregate similarity which can then be used by any of the existing algorithms. The \emph{MultSum} technique proposed by \cite{mogrenextractive} builds upon the submodular optimization technique \cite{lin2012learning}. They replace the cosine similarity used in the original approach with a multiplicative combination of several sentence similarity measures. This can be extended to several techniques which rely on a sentence similarity metric, but in general it is difficult to define an \emph{aggregate} for other components of a summarization algorithm. As an alternative, \cite{mehta2018effective} propose combining several aspects of candidate systems, like the \emph{sentence similarity}, \emph{ranking algorithm} and \emph{text representation scheme}, in a post-summarization setup. This approach falls within the rank aggregation techniques, except that the candidate systems are created by varying one of these three components of the original systems. The \emph{LexRank} algorithm can be used with Kullback-Leibler divergence or word overlap, instead of cosine similarity. Such variations are then combined using existing rank aggregation techniques.
\section{Proposed Approach}
In this paper we propose a new summary aggregation technique which takes into account the content of each candidate system, rather than only ranked lists as done in \cite{wang2012weighted}. In a multi-document summarization setup, especially in the case of newswire, it is quite common to have duplicate or near-duplicate content repeated across multiple documents. To put things into perspective, the DUC 2003 dataset has, on average, 34 sentences per cluster which have an exact match within the document cluster, and more than 50 sentences have an 80\% match with another sentence. Most existing summarization techniques do not handle this redundancy explicitly, and neither do most existing aggregation techniques. This has a huge impact on the class of aggregation techniques which rely only on ranked-list aggregation without taking into account the actual content of individual summaries. For instance, consider the example shown below, where $S_1$ and $S_2$ are individual sentence rankings and $S_A$ is the aggregate ranking. This would be fine if each sentence were different and equally important. But consider a case where $sim(s_{1}, s_4) = 1$. The fact that such sentences are repeated across documents makes them more important, but rank aggregation techniques fail to take into account their actual content and treat them separately, which may result in a lowering of their aggregate scores. $S_A^*$ indicates the ideal rank aggregate.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{aclproblem.png}
\caption{Issue with rank aggregation}
\end{figure}
Instead, the proposed approach takes into account the content of the summaries being aggregated to assign weights to candidate systems. It uses the overlap in content between summaries of the candidate systems to compute their reliability, and in turn uses this reliability to assign relative importance to individual sentences in those summaries. \\
The proposed method estimates the importance of a given sentence in a particular candidate summary, and then uses this information to generate a meta-summary. In contrast to the solution proposed by \cite{hongsystem}, where word- and summary-level features are used to estimate the relative importance of each candidate, we propose using similarity between candidates as a measure of their importance. We argue that given two good summaries there is bound to be a good amount of overlap in their content, simply because the relevant content in a document cluster is much smaller than the non-relevant content. Arguing the other way round, if two summaries have a good amount of overlap, it is likely because both have more \emph{informative} content. Due to the large number of \emph{non-informative} sentences, selecting the same set of \emph{non-informative} sentences in two distinct summaries is much less likely. The only possible exception, where two bad summaries might have higher overlap, is when both have very similar sentence rankings; as the number of candidate systems increases, this is automatically taken care of. We start with a simple method to estimate the informativeness of sentences in candidate summaries.
\subsection{SentRank}
This system takes into account the similarity of each sentence in a candidate summary with the other candidate summaries, and uses it to assign a relative importance to the sentence. The argument presented above in favor of using similarity as a measure of the reliability of a summary can easily be extended to the sentence level: a sentence that is \emph{summary worthy} will share more information with other \emph{summary worthy} sentences.
For the $i^{th}$ sentence in the $j^{th}$ candidate summary ($s_{ij}$), we find the best matching sentence in each of the remaining candidate summaries. The score of that sentence can then be computed as shown in equation \ref{eq1} below: it is the sum of the similarities with the best matching sentences from the remaining candidates. Here $j$ and $k$ index the candidate systems, and $i$ and $l$ index the sentences in candidates $j$ and $k$ respectively.
\begin{eqnarray}
\boldsymbol{R}(s_{ij}) = \sum_{k, k\neq j} \max_{l}(Sim(s_{ij}, s_{lk}))
\label{eq1}
\end{eqnarray}
Each sentence in a candidate summary is then ranked according to its score $\boldsymbol{R}$, and the top-$k$ sentences are selected for the summary. We experimented with n-gram overlap, cosine similarity and KL divergence for computing the similarity between sentences, and empirically selected cosine similarity, which is also used in the subsequent systems.
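A minimal sketch of the SentRank scoring of equation \ref{eq1}, assuming each candidate summary is given as a list of NumPy sentence vectors; all names are illustrative:
\begin{verbatim}
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom > 0 else 0.0

def sent_rank_scores(summaries):
    # summaries[j] is the list of sentence vectors of the j-th candidate;
    # each sentence is scored by its best-match similarity against every
    # other candidate summary, summed over candidates (Eq. 1).
    scores = []
    for j, cand in enumerate(summaries):
        scores.append([
            sum(max(cosine(s, t) for t in other)
                for k, other in enumerate(summaries) if k != j)
            for s in cand
        ])
    return scores
\end{verbatim}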
\subsection{GlobalRank}
One limitation of the SentRank approach proposed above is that it does not take the reliability of candidate systems into account and treats each candidate equally. A sentence that comes from a well performing candidate system is more likely to be \emph{informative} than a sentence from a poor summary. The proposed \emph{GlobalRank} system addresses exactly this issue. It builds on the SentRank system by incorporating a candidate's \emph{global reputation} score into the sentence ranking scheme. The new scoring mechanism is shown in equation \ref{eq2} below, where $G(k)$ refers to the global reputation of candidate system $k$.
\begin{eqnarray}
\boldsymbol{R}(s_{ij}) = \sum_{k, k\neq j} G(k)*\max_{l}(Sim(s_{ij}, s_{lk}))
\label{eq2}
\end{eqnarray}
$G(k)$ is estimated using the average ROUGE-1 recall of each candidate system as shown in the equations below, where $R1_k$ is the ROUGE-1 recall of the $k^{th}$ candidate and $R1^{'}$ is the normalized version of $R1$. Here we do not subtract the mean, to avoid negative scores, and instead subtract the minimum of $R1^{'}$. Additionally we scale it by a factor $a$, which depends on the total number of candidate systems; we empirically set $a$ to $0.1$. We used the results on the DUC 2002 dataset for estimating the ROUGE-1 recall, and in turn the GlobalRank, of a candidate.
\begin{eqnarray}
G(k) = aR1'(k) \\
R1^{'}_{k} = \frac{R1_k}{\sigma(R1_k)} - \min_{k}\Big[\frac{R1_k}{\sigma(R1_k)}\Big]
\label{eq3}
\end{eqnarray}
Compared to SentRank, which can be overwhelmed by too many poorly performing systems, GlobalRank provides a smoothing effect by giving more importance to the systems that are known to perform well in general.
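A minimal sketch of the global reputation computation, assuming NumPy and that \texttt{r1} holds the development-set ROUGE-1 recalls of the candidates:
\begin{verbatim}
def global_rank(r1, a=0.1):
    # scale by the standard deviation, shift by the minimum, scale by a
    r1_scaled = r1 / r1.std()
    return a * (r1_scaled - r1_scaled.min())
\end{verbatim}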
\subsection{LocalRank}
One major limitation of existing aggregation systems, which we highlighted in section 2, is their inability to predict which candidate system will perform better for a given document cluster. Neither of the systems suggested above, \emph{SentRank} and \emph{GlobalRank}, addresses this problem; the next system, \emph{LocalRank}, tries to mitigate it. We do not rely on any lexical or corpus-specific features, simply because the training data is not sufficient to estimate such features reliably. Instead we continue our line of argument, using the similarity between summaries as a measure of reliability. For a given document cluster, we estimate the reliability of a candidate $k$ from the content it shares with other candidates, and also from the reliability of those candidates. We first create a graph with the candidate summaries as nodes and the similarities between them as edge weights. Each candidate starts with the same reputation score, or LocalRank ($L$). The LocalRank is then updated iteratively in the manner of the PageRank algorithm \cite{page1999pagerank}, as shown in equation \ref{eq4} below:
\begin{equation}
L(k) = \sum_{j}L(j)*Sim(S_j, S_k)
\label{eq4}
\end{equation}
$L(k)$ indicates the local rank of the $k^{th}$ candidate, and $S_k$ indicates the summary generated by it. We use cosine similarity as the similarity score. The overall sentence scores are then computed just as in the GlobalRank algorithm:
\begin{eqnarray}
\boldsymbol{R}(s_{ij}) = \sum_{k, k\neq j} L(k)*\max_{l}(Sim(s_{ij}, s_{lk}))
\label{eq5}
\end{eqnarray}
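A minimal sketch of the LocalRank iteration, assuming one aggregate NumPy vector per candidate summary and reusing \texttt{cosine} from the SentRank sketch; the zeroed diagonal, the renormalization step, and the iteration count are our assumptions, since equation \ref{eq4} states the update without them:
\begin{verbatim}
def local_rank(summary_vecs, iters=50):
    n = len(summary_vecs)
    S = np.array([[cosine(u, v) for v in summary_vecs]
                  for u in summary_vecs])
    np.fill_diagonal(S, 0.0)  # assumption: a candidate does not vote for itself
    L = np.ones(n) / n        # every candidate starts with equal reputation
    for _ in range(iters):
        L = S @ L             # L(k) = sum_j L(j) * Sim(S_j, S_k)
        L = L / L.sum()       # assumption: renormalize to keep scores bounded
    return L
\end{verbatim}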
\subsection{HybridRank}
While LocalRank is useful for estimating how well a given candidate might perform on a given document cluster, it does not make use of actual system performance. HybridRank overcomes that limitation: as the name suggests, it combines the strengths of both GlobalRank and LocalRank by taking a weighted combination of the two, as defined in equation \ref{eq6} below:
\begin{equation}
H(k) = \alpha L(k) + (1-\alpha)G(k)
\label{eq6}
\end{equation}
Here the value of $\alpha$ determines the balance between LocalRank and GlobalRank. A higher value of $\alpha$ gives more importance to the estimate of how well a system will perform on the particular cluster, at the expense of the overall aggregate performance of the candidate; $\alpha = 0$ recovers the original GlobalRank, without any local information. We empirically set the value of $\alpha$ to 0.3. Once the systems are ranked, the sentence rankings are computed in the same manner as in LocalRank or GlobalRank (equations \ref{eq2} and \ref{eq5}).
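Combining the two reputations is then a one-liner; the resulting weights $H(k)$ replace $G(k)$ or $L(k)$ in the sentence scoring of equations \ref{eq2} and \ref{eq5} (a sketch under the same assumptions as above):
\begin{verbatim}
def hybrid_rank(L, G, alpha=0.3):
    # weighted combination of local and global reputations (Eq. 6)
    return alpha * L + (1.0 - alpha) * G
\end{verbatim}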
\section{Experimental Setup and Results}
We report experimental results on the DUC 2003 and DUC 2004 datasets, using the standard ROUGE scores \cite{lin2004rouge} that are well accepted across the community: ROUGE-1, ROUGE-2 and ROUGE-4 recall. To be consistent and to make our work reproducible, we use the same set of ROUGE parameters as \cite{hong2014repository}\footnote{ROUGE-1.5.5.pl -n 4 -m -a -x -l 100 -c 95 -r 1000 -f A -p 0.5 -t 0}. We use eleven candidate systems, which are a mix of several state-of-the-art extractive techniques and other well-known baseline systems; the complete list is shown below, with a brief description of each system. For the DUC 2004 dataset, \cite{hong2014repository} provide summaries for all these systems, and we directly use these pre-generated summaries to ensure a fair comparison. We generated the results for the DUC 2003 dataset ourselves. We post-process the generated summaries by dropping sentences that have a cosine similarity of more than 0.5 with the already selected sentences; apart from that, no other pre- or post-processing was done.
We use the following systems as candidate systems for our experiments:
\begin{itemize}
\item[] \textbf{LexRank} This method proposed by \cite{erkan2004lexrank} treats each document as a undirected graph and each sentence in the document constitutes a node. The edges represent cosine similarity between nodes. The importance of each node is then iteratively determined by the number of other nodes to which it is connected and the importance of those nodes.
\item[] \textbf{FreqSum} A simple approach \cite{nenkova2006compositional} that ranks each sentence based on the frequency of its constituent words: the higher the frequency, the more informative the word.
\item[] \textbf{TsSum} This method defines \emph{topic signatures} as the words which are more frequent in a given document compared to a background corpus \cite{lin2000automated}. Sentences with more \emph{topic signatures} are considered more informative.
\item[] \textbf{Greedy-KL} This method follows a greedy algorithm \cite{haghighi2009exploring} to minimize the KL divergence between the original document and the resultant summary. Sentences are sequentially added to the summary so as to minimize the KL divergence between the word distributions of the set of sentences selected so far and the overall document.
\item[] \textbf{CLASSY04} Judged best among the submissions at DUC 2004 \cite{conroy2004left}, this method uses a hidden Markov model with topic signatures as features. It links the usefulness of a sentence to that of its neighboring sentences.
\item[] \textbf{CLASSY11} This method builds on the CLASSY04 technique, and uses topic signatures as features in estimating the probability that a bigram will occur in a human generated summary. It employs non-negative matrix factorization to select a subset of non-redundant sentences with the highest scores.
\item[] \textbf{Submodular} \cite{lin2012learning} treat summarization as a submodular maximization problem. The method incrementally computes the \emph{informativeness} of a summary and also provides a confidence score on how close the approximation is to the globally optimal summary.
\item[] \textbf{DPP} Determinantal point processes \cite{kulesza2012determinantal} yield the best performing state-of-the-art system amongst all the candidate systems. DPP scores each sentence individually, while at the same time maintaining global diversity to reduce redundancy in the selected content.
\item[] \textbf{RegSum} uses diverse features like part-of-speech tags, named entity tags, locations and categories for supervised prediction of word importance \cite{hong2014improving}. The sentences with the largest number of \emph{important} words are then included in the summary.
\item[] \textbf{OCCAMS\_V} The system by \cite{davis2012occams} employs LSA to estimate word importance and then uses the budgeted maximal coverage and knapsack problems to generate sentence rankings.
\item[] \textbf{ICSISumm} treats summarization as a global linear optimization problem \cite{gillick2009icsi}, finding the globally best summary instead of selecting sentences greedily. The final summary includes the most important concepts in the documents.
\end{itemize}
As benchmark ensemble techniques, we use two existing methods: \emph{Borda count} and \emph{weighted consensus summarization} (WCS) \cite{wang2012weighted}, described in section 2. For the GlobalRank system, we used the DUC 2002 dataset as a development dataset to estimate the overall performance of candidate systems, ranking the systems by their ROUGE-1 recall scores. The results are shown in Table 1 below:
\vspace{3mm}
\begin{table*}[t]
\centering
\begin{tabular}{|c|c|c|c| c |c | c|}
\hline
& \multicolumn{3}{c|}{DUC2003} & \multicolumn{3}{c|}{DUC2004} \\ \hline
System & R-1 & R-2 & R-4 & R-1 & R-2 & R-4 \\ \hline
LexRank & 0.3572 & 0.0742 & 0.0079 & 0.3595 & 0.0747 & 0.0082 \\
FreqSum & 0.3542 & 0.0815 & 0.0101 & 0.3530 & 0.0811 & 0.0099 \\
TsSum & 0.3589 & 0.0863 & 0.0103 & 0.3588 & 0.0815 & 0.0103 \\
Greedy-KL & 0.3692 & 0.0880 & 0.0129 & 0.3780 & 0.0853 & 0.0126 \\
CLASSY04 & 0.3744 & 0.0902 & 0.0148 & 0.3762 & 0.0895 & 0.0150 \\
CLASSY11 & 0.3730 & 0.0925 & 0.0142 & 0.3722 & 0.0920 & 0.0148 \\
Submodular & 0.3888 & 0.0930 & 0.0141 & 0.3918 & 0.0935 & 0.0139 \\
DPP & 0.3992 & 0.0958 & 0.0159 & 0.3979 & 0.0962 & 0.0157 \\
RegSum & 0.3840 & 0.0980 & 0.0165 & 0.3857 & 0.0975 & 0.0160 \\
OCCAMS\_V & 0.3852 & 0.0976 & 0.0142 & 0.3850 & 0.0976 & 0.0133 \\
ICSISumm & 0.3855 & 0.0977 & 0.0185 & 0.3840 & 0.0978 & 0.0173 \\ \hline
Borda Count & 0.3700 & 0.0738 & 0.0115 & 0.3772 & 0.0734 & 0.0110 \\
WCS & 0.3815 & 0.0907 & 0.0120 & 0.3800 & 0.0923 & 0.0125 \\
SentRank & 0.3880 & 0.1010 & 0.0163 & 0.3870 & 0.1008 & 0.0159 \\
GlobalRank & 0.3562 & 0.1045 & 0.0185 & 0.3955 & 0.1039 & \textbf{0.0191}$^\dagger$ \\
LocalRank & 0.3992 & 0.1058 & 0.0192 & 0.3998 & 0.1050 & 0.0187 \\
HybridRank & \textbf{0.4082}$^\dagger$ & \textbf{0.1102}$^\dagger$ & \textbf{0.0195}$^\dagger$ & \textbf{0.4127}$^\dagger$ & \textbf{0.1098}$^\dagger$ & 0.0180 \\ \hline
\end{tabular}
\caption{Results on the DUC 2003 and DUC 2004 datasets}
\end{table*}
\vspace{3mm}
As shown in the table, the proposed systems outperform most existing systems on all three ROUGE scores, while both Borda count and WCS fail to outperform the best state-of-the-art results. Even the simplistic SentRank algorithm outperforms most candidate systems and achieves performance at par with the state-of-the-art systems in terms of ROUGE-2. While HybridRank achieves the best performance for ROUGE-1 and ROUGE-2 on both the DUC 2003 and DUC 2004 datasets, GlobalRank performs best in terms of ROUGE-4 on DUC 2004. We performed a two-sided sign test to determine whether the results are significantly different. The results clearly show that a rank aggregation technique that takes into account the content of the summaries achieves a much higher ROUGE score vis-a-vis systems that use only the ranked lists of sentences.
\section{Conclusion}
In this work we propose a new technique to estimate the \emph{reliability} of a summarization system for a given document cluster. We define three systems, \emph{SentRank}, \emph{LocalRank} and \emph{GlobalRank}, which take into account the \emph{informativeness} of individual sentences, the performance of candidates on the given document cluster, and the overall performance of candidates on a held-out development set, respectively. In the case of LocalRank and SentRank, we use the content overlap between summaries generated by several systems to estimate the relative importance of each system. We combine the information from all three systems to generate the final hybrid ranking (HybridRank) system. Summaries generated from this aggregate system outperform all the baseline and state-of-the-art systems, as well as the baseline aggregation techniques, by a good margin.