We close this section with the following estimate.
\begin{lemma}
\label{lem:M3vsM1}
Under the assumptions and notation of Theorem \ref{thm:genus_2constr},
\[
M_{\alpha,3} \leq 2^{-(p-1)/2} M_{\alpha,1}.
\]
\end{lemma}
\begin{proof}
Let $A\in \Sigma^{(3)}_{\alpha}$, according to Definition \ref{def:genus}. Notice
that the map
\[
A\ni u \mapsto \left(\int_\Omega |u|u , \int_\Omega |u|^p u\right)\in{\mathbb{R}}^2
\]
is continuous and equivariant. By the Borsuk-Ulam Theorem, we deduce the existence of
$u_a\in A$ such that
\[
\int_\Omega |u^+_a|^2 = \int_\Omega |u^-_a|^2 = \frac12,\qquad
\int_\Omega |u^+_a|^{p+1} = \int_\Omega |u^-_a|^{p+1} =
\frac12 \int_\Omega |u_a|^{p+1},
\]
while
\[
\text{either }\int_\Omega |\nabla u^+_a|^2 \leq \frac{\alpha}{2}
\qquad
\text{or }\int_\Omega |\nabla u^-_a|^2 \leq \frac{\alpha}{2}.
\]
For concreteness let us assume that the first alternative holds;
as a consequence, we obtain that $v:=\sqrt2 u_a^+$ belongs to
$\overline{\mathcal{B}}_\alpha$. This yields
\[
M_{\alpha,1}\geq \int_\Omega |v|^{p+1} = 2^{(p+1)/2} \int_\Omega |u_a^+|^{p+1}
= \frac{2^{(p+1)/2}}{2} \int_\Omega |u_a|^{p+1} \geq 2^{(p-1)/2}
\inf_{u\in A} \int_\Omega |u|^{p+1},
\]
and since $A\in \Sigma^{(3)}_{\alpha}$ is arbitrary the lemma follows.
\end{proof}
\section{Min-max principles on the unit sphere in
\texorpdfstring{$L^2$}{L\texttwosuperior}}\label{sec:1const}
According to equation \eqref{Emu}, let ${\mathcal{M}}\subset H^1_0(\Omega)$ denote the unit
sphere with respect to the $L^2$ norm and $\mathcal{E}_{\mu}$ the
energy functional
associated to \eqref{eq:main_prob_u}. In this section we are concerned with critical
points of $\mathcal{E}_{\mu}$ on ${\mathcal{M}}$ (which, in turn, correspond to solutions of our
starting problem \eqref{eq:main_prob_U}).
By the Gagliardo-Nirenberg inequality \eqref{sobest},
setting $\|\nabla u\|^2_{L^2}=\alpha$, one obtains
\begin{equation}\label{eq:boundonboundEmu}
\frac12\,\alpha- \mu\frac{C_{N,p}}{p+1}\,\alpha^{N(p-1)/4}
\leq \mathcal{E}_{\mu}(u)\le \frac12\alpha.
\end{equation}
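For the reader's convenience, we sketch the elementary computation behind \eqref{eq:boundonboundEmu}; here we assume, as suggested by \eqref{eq:main_prob_u} and by the estimates used below, that the energy has the standard form
\[
\mathcal{E}_{\mu}(u)=\frac12\int_\Omega |\nabla u|^2\,dx-\frac{\mu}{p+1}\int_\Omega |u|^{p+1}\,dx.
\]
The upper bound follows by dropping the nonpositive term, while the lower bound follows since, on $\mathcal{M}$, \eqref{sobest} yields $\int_\Omega |u|^{p+1}\le C_{N,p}\,\|\nabla u\|_{L^2}^{N(p-1)/2} = C_{N,p}\,\alpha^{N(p-1)/4}$.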
In
particular, $\mathcal{E}_{\mu}$ is bounded on any bounded subset of ${\mathcal{M}}$,
and it is bounded from below (and coercive) on the entire ${\mathcal{M}}$ for {subcritical} $p<1+4/N$
and for {critical} $p=1+4/N$ whenever $\mu< \frac{p+1}{2}C_{N,p}^{-1}$.
In these cases, one can easily show that $\mathcal{E}_{\mu}$ satisfies the Palais-Smale (P.S.) condition
and apply the classical {minimax principle for even functionals} on a closed symmetric
submanifold (see e.g. \cite[Thm. II.5.7]{St_2008}).
In the complementary case, when $p$ is either supercritical, i.e. $p>1+4/N$, or
critical and $\mu$ is large, then $\mathcal{E}_{\mu}$ is not bounded from below (see e.g.
\eqref{minusinfty} below). In order to provide a minimax principle suitable for this case,
we recall Definition \ref{def:genus} of the genus and
that of $\mathcal{B}_\alpha$ (see equation \eqref{eq:defBU}). Furthermore, we denote with $K_{c}$ the (closed and symmetric) set of critical points of
${\mathcal{E}}_\mu$ at level $c$ contained in $\mathcal{B}_\alpha$. The following theorem is an adaptation of well-known arguments
of previous critical point theorems relying on index theory.
\begin{theorem}
\label{infsupteo}
Let $k\ge1$, $\alpha>\lambda_k(\Omega)$, $\mu>0$ and $\tau>0$ be fixed, and let $c_k$ be defined as
in Theorem \ref{thm:genus_1constr}, equation \eqref{infsuplev}. If
\begin{equation}
\label{ass2}
c_k < \hat c_k:= \inf_{\substack{A\in\Sigma^{(k)}_{\alpha}\\
A\setminus\mathcal{B}_{\alpha-\tau}\neq\emptyset }}
\sup_{A\setminus\mathcal{B}_{\alpha-\tau}}{\mathcal{E}}_\mu,
\end{equation}
then $K_{c_k}\neq\emptyset$, and it contains a critical point of Morse index less than or equal to $k$.
\end{theorem}
\begin{remark}
In case assumption \eqref{ass2} holds for $k,k+1,\dots,k+r$, and $c=c_k=\dots=c_{k+r}$,
then it is standard to extend Theorem \ref{infsupteo} to obtain
\begin{equation}
\label{indexK}
\gamma(K_c)\ge r+1,
\end{equation}
so that $K_c$ contains infinitely many critical points.
\end{remark}
\begin{proof}[Proof of Theorem \ref{infsupteo}]
For any $a\in{\mathbb{R}}$ we denote by ${\mathcal{M}}_a$ the sublevel set $\{\mathcal{E}_\mu<a\}$.
First of all we notice that both $c_k$ and $\hat c_k$ are well defined and finite,
by Lemma \ref{lemma:genusbigger} and equation \eqref{eq:boundonboundEmu}.
Suppose now by contradiction that $K_{c_k}=\emptyset$. By a suitably modified version of
the Deformation Lemma (recall that ${\mathcal{E}}_\mu$ satisfies the P.S. condition on ${\mathcal{M}}$),
there exist $\delta>0$ and an equivariant homeomorphism
$\eta$ such that
$\eta(u)=u$ outside
$\mathcal{B}_{\alpha}\cap {\mathcal{M}}_{c_k+2\delta}$ and
\begin{equation}
\label{lowlev}
\eta({{\mathcal{M}}_{c_k+\delta}\cap \mathcal{B}_{\alpha-\tau}})\subset {\mathcal{M}}_{c_k-\delta}\cap
\mathcal{B}_{\alpha}.
\end{equation}
By definition of $c_{k}$ there exists $A\in \Sigma^{(k)}_{\alpha}$ such that
$A\subset {\mathcal{M}}_{c_k+\delta}$; it follows by assumption \eqref{ass2} (and by decreasing $
\delta$ if necessary) that $A\subset {\mathcal{M}}_{c_k+\delta}\cap \mathcal{B}_{\alpha-\tau}$.
Then, since $\eta$ is an odd homeomorphism, $\eta(A)\in \Sigma^{(k)}_{\alpha}$
and, by definition, $\sup_{\eta(A)}{\mathcal{E}}_\mu \ge c_k$, in contradiction with
\eqref{lowlev}. Finally, the estimate of the Morse index is a direct consequence of the
definition of genus we deal with: see \cite{MR968487}, Proposition on page 1030, or the discussion
at the end of Section 2 in \cite{MR991264}.
\end{proof}
We now provide a sufficient condition to guarantee the validity of assumption
\eqref{ass2}.
\begin{lemma}\label{lem:ckMak}
Let $k\ge1$, $\alpha>\lambda_k(\Omega)$ and $\mu>0$ satisfy
\begin{equation}
\label{muboundef}
0<\mu<\frac{p+1}{2}\,\frac{\alpha-\lambda_{k}(\Omega)}{M_{\alpha,k}-|\Omega|^{-\frac{p-1}{2}}}
\end{equation}
where $M_{\alpha,k}$ is defined in Theorem \ref{thm:genus_2constr}.
Then, for $\tau>0$ sufficiently small, \eqref{ass2} holds true.
\end{lemma}
\begin{proof}
We first estimate $c_k$ from above. To this aim, we construct a subset
$\tilde A\in \Sigma^{(k)}_{\alpha-\tau}$ (for any $\tau$
sufficiently small)
as
\begin{equation}
\label{Atilde}
\tilde A= \left\{\sum_{i=1}^k x_i\varphi_i : x=(x_1,\dots,x_k)\in{\mathbb{S}}^{k-1}\right\},
\end{equation}
where, as usual, $\varphi_i$ denotes the Dirichlet eigenfunction associated to
$\lambda_i(\Omega)$. Indeed $\gamma(\tilde A)=k$ (it is homeomorphic to
${\mathbb{S}}^{k-1}$), and $\max_{u\in\tilde A}\|u\|^2_{H^1_0}=\lambda_{k}(\Omega)<\alpha-\tau$ for
$\tau$ small. Hence the H\"older inequality yields
\begin{equation}
\label{boundabove1}
c_k \leq \sup_{\tilde A}\mathcal{E}_{\mu}\le \frac{1}{2}\lambda_{k}(\Omega)
- \frac{\mu}{p+1}\,|\Omega|^{-\frac{p-1}{2}}.
\end{equation}
On the other hand, let $A\in\Sigma^{(k)}_{\alpha}$. Theorem \ref{thm:genus_2constr}
implies
\[
\inf_{u\in A} \int_\Omega |u|^{p+1} \leq M_{\alpha,k}.
\]
If moreover $A\setminus\mathcal{B}_{\alpha-\tau}\neq\emptyset$ we infer
\[
\sup_{A\setminus\mathcal{B}_{\alpha-\tau}}\mathcal{E}_{\mu}\ge
\frac{1}{2}(\alpha - \tau) - \frac{\mu}{p+1} M_{\alpha,k},
\]
and taking the infimum an analogous inequality holds true for $\hat c_k$. Comparing
with \eqref{boundabove1} the lemma follows.
\end{proof}
Exploiting the results above, we are ready to prove our main existence results.
\begin{proof}[End of the proof of Theorem \ref{thm:genus_1constr}]
By Theorem \ref{infsupteo} and Lemma \ref{lem:ckMak} the proof is completed by choosing
\[
\hat\mu_k:=\sup_{\alpha>\lambda_k(\Omega)} \frac{p+1}{2}\,\frac{\alpha-\lambda_{k}(\Omega)}{M_{\alpha,k}-|\Omega|^{-\frac{p-1}{2}}}.
\qedhere
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:intro_GS}]
We write the proof in terms of ${\mathcal{E}}_\mu$, the theorem following by the relations in
\eqref{eq:main_prob_u}. Recall that, for every $u\in \overline{\mathcal{B}}_\alpha$, $\gamma
\left(\{u,-u\}\right)=1$. We deduce that $c_1$ is actually a local
minimum for ${\mathcal{E}}_\mu$, achieved by some $u$ which solves \eqref{eq:main_prob_u}
(for a suitable $\lambda$), and it can be chosen positive by symmetry.
Since
\[
\int_\Omega |\nabla u|^2 + \lambda u^2 - p\mu|u|^{p+1}\,dx=-(p-1)\int_\Omega \mu|u|^{p+1}\,dx<0,
\]
and $H^1_0(\Omega) = \spann\{u\}\oplus T_{u}\mathcal{M}$, we have that $u$
has Morse index $1$. In a standard way, the minimality
property of $u$ also implies orbital stability of the associated solitary wave (see e.g.
\cite{MR677997}). Turning to the estimates for $\hat\mu_1 = \hat\rho_1^{(p-1)/2}$,
we can deduce them using Lemma \ref{lem:ckMak} and Remark \ref{rem:MvsCNp}, which yield
\[
\hat\mu_1\left(\Omega,p\right):=\sup_{\alpha>\lambda_1(\Omega)} \frac{p+1}{2}\,\frac{\alpha-\lambda_{1}(\Omega)}{C_{N,p} \alpha^{\frac{N(p-1)}{4}}-|\Omega|^{-\frac{p-1}{2}}}
\geq \frac{p+1}{2C_{N,p}}\,\sup_{\alpha>\lambda_1(\Omega)}\frac{\alpha-\lambda_{1}(\Omega)}{\alpha^{\beta}},
\]
where $\beta:=N(p-1)/4$. Now, if $\beta\leq1$ we obtain the desired bound for the
subcritical and critical cases. On the other hand, when $\beta>1$, maximizing the map
$\alpha\mapsto(\alpha-\lambda_1(\Omega))/\alpha^{\beta}$, whose maximum is attained at
$\alpha=\beta\lambda_1(\Omega)/(\beta-1)$, shows that
\[
\hat\mu_1\left(\Omega,p\right)\geq \frac{p+1}{2C_{N,p}}
\,\frac{(\beta-1)^{(\beta-1)}}{\beta^\beta}\, \lambda_1(\Omega)^{-(\beta-1)},
\]
and finally
\[
\hat\rho_1\left(\Omega,p\right)\geq \underbrace{\left[\frac{p+1}{2C_{N,p}}
\,\frac{(\beta-1)^{(\beta-1)}}{\beta^\beta}\right]^{\frac{2}{p-1}}}_{D_{N,p}}\,
\lambda_1(\Omega)^{\frac{2}{p-1}-\frac{N}{2}}.\qedhere
\]
\end{proof}
\begin{proof}[Proof of Proposition \ref{thm:intro_3>1}]
As usual, by \eqref{eq:main_prob_u}, we have to prove that
\[
\hat\mu_3\left(\Omega,p\right)\geq
2^{(p-1)/2} D_{N,p}\lambda_3(\Omega)^{\frac{2}{p-1}-\frac{N}{2}}.
\]
By Lemmas \ref{lem:ckMak}, \ref{lem:M3vsM1}, and Remark \ref{rem:MvsCNp} we obtain
\[
\begin{split}
\hat\mu_3 &= \sup_{\alpha>\lambda_3(\Omega)} \frac{p+1}{2}\,\frac{\alpha-\lambda_{3}(\Omega)}{M_{\alpha,3}-|\Omega|^{-\frac{p-1}{2}}} \geq
\sup_{\alpha>\lambda_3(\Omega)} \frac{p+1}{2}\,\frac{\alpha-\lambda_{3}(\Omega)}{2^{-(p-1)/2}M_{\alpha,1}-|\Omega|^{-\frac{p-1}{2}}}\\
&\geq 2^{(p-1)/2}\sup_{\alpha>\lambda_3(\Omega)} \frac{p+1}{2}\,\frac{\alpha-\lambda_{3}(\Omega)}{C_{N,p}\alpha^\beta-2^{(p-1)/2}|\Omega|^{-\frac{p-1}{2}}},
\end{split}
\]
where $\beta:=N(p-1)/4$, and the desired result follows by arguing as in the proof of
Theorem \ref{thm:intro_GS}.
\end{proof}
To conclude this section we prove that in the supercritical case, if $\mu$ is not too
large, in addition to $(c_k)_k$ there is a further sequence of critical levels
$(\bar c_k)_k$ of $\mathcal{E}_{\mu}$ constrained to $\mathcal{M}$. For concreteness,
let us first consider the case $k=1$: since in such case $c_1$ is a local minimum of
${\mathcal{E}}_\mu$ in $\mathcal{M}$, and ${\mathcal{E}}_\mu$ is unbounded from below in $\mathcal{M}$,
the critical level $\bar c_1$ is of mountain pass type.
\begin{proposition}
\label{mpcritlev}
Let $p>1+4/N$, $\mu<\hat\mu_1$, and $u_1$ denote the local minimum point of ${\mathcal{E}}_\mu$
in $\mathcal{M}$, according to Theorems \ref{infsupteo} and \ref{thm:intro_GS}. The value
\[
\bar c_1 : =\inf_{\gamma\in \Gamma}\sup_{s\in[0,1]}\mathcal{E}_{\mu}(\gamma(s)),
\quad\text{where }\Gamma:=\left\{\gamma\in C([0,1];{\mathcal{M}}) : \gamma(0)=u_1,\ \mathcal{E}_{\mu}(\gamma(1))<c_1-1\right\},
\]
is a critical level for ${\mathcal{E}}_\mu$ in $\mathcal{M}$.
\end{proposition}
\begin{proof}
Notice that, if $p>1+4/N$, then $\mathcal{E}_{\mu}\to -\infty$ along some sequence in
${\mathcal{M}}$. Indeed, by defining
\begin{equation}
\label{concfun}
w_n(x): = \eta(x)Z_{N,p}\big ((x-x_0)/a_n\big )\quad \text{and}\quad
\tilde{w}_n:=\frac{w_n}{\| w_n\|_{L^2(\Omega)}}\,\in\, {\mathcal{M}},
\end{equation}
where $a_n\to 0^+$, $x_0\in \Omega$ and $\eta\in \mathcal{C}_0^{\infty}(\Omega)$, $\eta(x_0)=1$, we obtain
\begin{equation}\label{minusinfty}
\alpha_n :=\|\nabla \tilde w_n\|^2_{L^2(\Omega)}\to +\infty,
\qquad
\frac{\int_{\Omega} |\tilde w_n|^{p+1} \,dx}{\alpha_n^{N(p-1)/4}}\to C_{N,p},
\qquad
\mathcal{E}_{\mu}(\tilde w_n)\to -\infty
\end{equation}
for $n\to +\infty$. Since $u_1$ is a local minimum,
the functional $\mathcal{E}_{\mu}$ has a mountain pass structure on ${\mathcal{M}}$;
by recalling that $\mathcal{E}_{\mu}$ satisfies the P.S. condition the
proposition follows.
\end{proof}
\begin{remark}\label{rem:further_crit_lev}
One can generalize Proposition \ref{mpcritlev} by constructing critical points via a saddle-point theorem in the following way: let us pick $k$ points $x_1,\dots,x_k$ in $\Omega$ and consider the corresponding functions $\tilde w_i$; we may assume that
$\mathrm{supp}\,\tilde w_i\cap \mathrm{supp}\,\tilde w_j=\emptyset$ for $i\neq j$, so that these functions are orthogonal. Let us now define the subspace
$V_k=\mathrm{span}\{\varphi_1,\dots,\varphi_k;\tilde w_1,\dots,\tilde w_k\}$; note that $\dim V_k=2k$. Let $R$ be an operator (in $L^2(\Omega)$) such that $R=I$ on $V_k^{\perp}$ and $Ru_i=\tilde w_i$, $i=1,\dots,k$. Possibly after permutations, we can choose $R$ such that $R\big|_{V_k}\in SO(2k)$ (actually, there are infinitely many different choices of $R$). Now, since $SO(2k)$ is (arcwise) connected, there is a continuous path
$\tilde{\gamma}\colon[0,1]\rightarrow SO(2k)$ such that $\tilde\gamma(0)=I$, $\tilde\gamma(1)=R\big|_{V_k}$.
Then, we can define the following map
\[
\gamma\colon [0,1] \times S^{k-1}\rightarrow {\mathcal{M}},
\quad\quad \gamma(s;t_1,\dots,t_k)=\sum_{i=1}^{k}t_i\tilde{\gamma}(s)u_i,
\]
where $\sum_{i=1}^{k}t_i^2=1$. It is clear that $\gamma$ is continuous; moreover,
\[
\gamma(0;t_1,\dots,t_k)\in \mathrm{span}\{\varphi_1,\dots,\varphi_k\}\cap {\mathcal{M}}
\quad \mathrm{and}\quad
\gamma(1;t_1,\dots,t_k)\in \mathrm{span}\{\tilde w_1,\dots,\tilde w_k\}\cap {\mathcal{M}}.
\]
Then, by denoting with $\Gamma_k$ the set of the above paths, if $\mu$ is sufficiently small
we obtain the critical levels
\[
\bar c_k : =\inf_{\gamma\in \Gamma_k}\sup_{[0,1]\times S^{k-1}}\mathcal{E}_{\mu}(\gamma(s;t_1,\dots,t_k)).
\]
\end{remark}
\section{Results in symmetric domains}\label{sec:symm}
This section is devoted to the proof of Theorem \ref{pro:symm}; we therefore assume $1+4/N \leq p < 2^*-1$.
We perform the proof in the case of $\Omega=B$, but it will be clear that the main assumption on $\Omega$ is the following:
\begin{itemize}
\item[\textbf{(T)}] there is a tiling of $\Omega$, made by $h$ copies of a subdomain $D$, in such a way that from
any solution $U_D$ of \eqref{eq:main_prob_U} on $D$ one can construct, using reflections, a solution $U_\Omega$ of \eqref{eq:main_prob_U} on $\Omega$.
\end{itemize}
Then $U_\Omega$ has $h$ times the mass of $U_D$, and recalling Theorem \ref{thm:intro_GS} we deduce that \eqref{eq:main_prob_U} on $\Omega$ is solvable for any
$\rho< h \cdot D_{N,p} \lambda_1(D)^{\frac{2}{p-1}-\frac{N}{2}}$. At this point, for a sequence $(D_k,h_k)_k$ of tilings satisfying \textbf{(T)}, we
obtain the solvability of \eqref{eq:main_prob_U} on $\Omega$ whenever
\[
\rho< h_k \cdot D_{N,p} \lambda_1(D_k)^{\frac{2}{p-1}-\frac{N}{2}},
\]
and if we can show that
\begin{equation}\label{eq:finaltarget}
\frac{ h_k }{ \lambda_1(D_k)^{\frac{N}{2}-\frac{2}{p-1}}} \to +\infty\qquad\text{as }k\to+\infty,
\end{equation}
we deduce the solvability of \eqref{eq:main_prob_U} on $\Omega$ for every mass. Having this scheme in mind, it is easy to prove analogous results
on rectangles and also in other kinds of domains.
Then let $B\subset{\mathbb{R}}^N$ be the ball (w.l.o.g. of radius one), and let
\[
D_k:=\left\{(r\cos\theta, r\sin\theta,x_3,\dots,x_N)\in B: - \frac{\pi}{k} < \theta < \frac{\pi}{k}\right\}.
\]
Then $D_k$ satisfies \textbf{(T)}, with $h_k=k$. In order to estimate $\lambda_1(D_k)$ we observe that, by elementary trigonometry,
\[
B'_k = B_{\frac{\sin(\pi/k)}{\sin(\pi/k)+1}}\left(\frac{1}{\sin(\pi/k)+1},0,0,\dots,0\right) \subset D_k,
\]
and therefore
\[
\lambda_1(D_k) \le \lambda_1(B'_k) \le C k^2,
\]
for some dimensional constant $C=C(N)$ and $k$ large (by scaling, $\lambda_1(B_r)=r^{-2}\lambda_1(B_1)$, while $\sin(\pi/k)\sim \pi/k$ as $k\to+\infty$). Then
\[
\frac{ h_k }{ \lambda_1(D_k)^{\frac{N}{2}-\frac{2}{p-1}}}\ge C \frac{k}{k^{{N}-\frac{4}{p-1}}} = C k^{1-{N}+\frac{4}{p-1}} = C k^{\frac{N-1}{p-1}\left[1+\frac{4}{N-1} - p\right]},
\]
and finally \eqref{eq:finaltarget} holds true whenever $p< 1+\frac{4}{N-1}$, thus completing the proof of Theorem \ref{pro:symm}.
\small
\subsection*{Acknowledgments}
We would like to thank Jacopo Bellazzini, who pointed out that the
results in
\cite{MR3318740}, in the supercritical case, can be read in terms of a local minimization. We
would also like to thank Benedetta Noris, who read a preliminary version of this manuscript.
This work is partially supported by the PRIN-2012-74FYK7 Grant:
``Variational and perturbative aspects of nonlinear differential problems'',
by the ERC Advanced Grant 2013 n. 339958:
``Complex Patterns for Strongly Interacting Dynamical Systems - COMPAT'',
and by the INDAM-GNAMPA group.
\section{Introduction}
\label{sec:intro}
Despite the immense popularity and availability of online video content via outlets such as YouTube and Facebook,
most work on object detection
focuses on static images.
Given the breakthroughs of deep convolutional neural networks
for detecting objects in static images,
the application of these methods to video
might seem straightforward.
However, motion blur and compression artifacts
cause substantial frame-to-frame variability,
even in videos that appear smooth to the eye.
These attributes complicate prediction tasks
like classification and localization.
Object-detection models trained on images
tend not to perform competitively
on videos owing to domain shift factors \cite{KalogeitonFS15}.
Moreover, object-level annotations
in popular video data-sets
can be extremely sparse,
impeding the development
of better video-based object detection models.
Girshick \emph{et al}\bmvaOneDot \cite{RCNN_girshick14CVPR} demonstrate that even given scarce labeled training data,
high-capacity convolutional neural networks
can achieve state-of-the-art detection performance
if first pre-trained on a related task with abundant training data,
such as 1000-way ImageNet classification.
Following the pretraining,
the networks can be fine-tuned to a related but distinct domain.
Also relevant to our work,
the recently introduced models Faster R-CNN \cite{Faster_RCNN_RenHG015} and You Only Look Once (YOLO) \cite{YOLO_RedmonDGF15}
unify the tasks of classification and localization. These methods, which are accurate and efficient,
propose to solve both tasks through a single model,
bypassing the separate object proposal methods
used by R-CNN \cite{RCNN_girshick14CVPR}.
In this paper, we introduce a method
to extend unified object recognition and localization
to the video domain.
Our approach applies transfer learning
from the image domain to video frames.
Additionally, we present a novel recurrent neural network (RNN) method
that refines predictions by exploiting contextual
information in neighboring frames.
In summary, we contribute the following:
\begin{itemize}
\item A new method for refining video-based object detections, consisting of two parts: (i) a \emph{pseudo-labeler}, which assigns provisional labels
to all available video frames;
(ii) a recurrent neural network,
which reads in a sequence of provisionally labeled frames, using the contextual information to output refined predictions.
\item An effective training strategy utilizing (i) category-level weak-supervision at every time-step, (ii) localization-level strong supervision at the final time-step, (iii) a penalty encouraging prediction smoothness at consecutive time-steps, and (iv) similarity constraints between \emph{pseudo-labels} and prediction outputs at every time-step.
\item An extensive empirical investigation demonstrating that on the YouTube Objects \cite{youtube-Objects} dataset,
our framework achieves mean average precision (mAP) of $68.73$ on test data,
compared to a best published result of $37.41$ \cite{Tripathi_WACV16} and $61.66$
for a domain adapted YOLO network \cite{YOLO_RedmonDGF15}.
\end{itemize}
\section{Methods}
\label{sec:method}
In this work,
we aim to refine object detection in video
by utilizing contextual information
from neighboring video frames.
We accomplish this
through a two-stage process.
First, we train a \emph{pseudo-labeler},
that is, a domain-adapted convolutional neural network for object detection,
trained individually on the labeled video frames.
Specifically,
we fine-tune the YOLO object detection network \cite{YOLO_RedmonDGF15},
which was originally trained for the 20-class PASCAL VOC \cite{PASCAL_VOC} dataset,
to the Youtube-Video \cite{youtube-Objects} dataset.
When fine-tuning to the 10 sub-categories
present in the video dataset,
our objective is to minimize
the weighted squared detection loss
(equation \ref{eqn:obj_det_loss})
as specified in YOLO \cite{YOLO_RedmonDGF15}.
While fine-tuning, we learn only the parameters
of the top-most fully-connected layers,
keeping the $24$ convolutional layers and $4$ max-pooling layers unchanged.
The training takes roughly 50 epochs to converge, using the RMSProp \cite{RMSProp} optimizer
with momentum of $0.9$ and a mini-batch size of $128$.
As with YOLO \cite{YOLO_RedmonDGF15},
our fine-tuned \emph{pseudo-labeler}
takes $448 \times 448$ frames as input
and regresses on category types and locations of possible objects
at each one of $S \times S$ non-overlapping grid cells.
For each grid cell,
the model outputs class conditional probabilities
as well as $B$ bounding boxes
and their associated confidence scores.
As in YOLO, we consider a \emph{responsible} bounding box for a grid cell
to be the one among the $B$ boxes for which the predicted area and the ground truth area
share the maximum Intersection Over Union.
During training, we simultaneously optimize classification and localization error
(equation \ref{eqn:obj_det_loss}).
For each grid cell,
we minimize the localization error
for the \emph{responsible} bounding box
with respect to the ground truth
only when an object appears in that cell.
Next, we train a Recurrent Neural Network (RNN),
with Gated Recurrent Units (GRUs) \cite{Cho14_GRU}.
This net takes as input
sequences of \emph{pseudo-labels},
optimizing an objective
that encourages both accuracy on the target frame
and consistency across consecutive frames.
Given a series of \emph{pseudo-labels} $\mathbf{x}^{(1)}, ..., \mathbf{x}^{(T)}$,
we train the RNN to generate improved predictions
$\hat{\mathbf{y}}^{(1)}, ..., \hat{\mathbf{y}}^{(T)}$
with respect to the ground truth $\mathbf{y}^{(T)}$
available only at the final step in each sequence.
Here, $t$ indexes sequence steps and $T$ denotes the length of the sequence.
As output, we use a fully-connected layer
with a linear activation function,
as our problem is regression.
In our final experiments,
we use a $2$-layer GRU with $150$ nodes per layer, hyper-parameters determined on validation data.
The following equations
define the forward pass through a GRU layer,
where $\mathbf{h}^{(t)}_l$ denotes the layer's output at the current time step, and $\mathbf{h}^{(t)}_{l-1}$ denotes the previous layer's output at the same sequence step:
\begin{equation} \label{eqn:GRU}
\begin{aligned}
\mathbf{r}^{(t)}_l &= \sigma(\mathbf{h}^{(t)}_{l-1}W^{xr}_l + \mathbf{h}^{(t-1)}_lW^{hr}_l + \mathbf{b}^r_l)\\
\mathbf{u}^{(t)}_l &= \sigma(\mathbf{h}^{(t)}_{l-1}W^{xu}_l + \mathbf{h}^{(t-1)}_lW^{hu}_l + \mathbf{b}^u_l)\\
\mathbf{c}^{(t)}_l &= \sigma(\mathbf{h}^{(t)}_{l-1}W^{xc}_l + \mathbf{r}^{(t)}_l \odot(\mathbf{h}^{(t-1)}_lW^{hc}_l) + \mathbf{b}^c_l)\\
\mathbf{h}^{(t)}_l &= (1-\mathbf{u}^{(t)}_l)\odot \mathbf{h}^{(t-1)}_l + \mathbf{u}^{(t)}_l\odot \mathbf{c}^{(t)}_l
\end{aligned}
\end{equation}
Here, $\sigma$ denotes an element-wise logistic function and $\odot$ is the (element-wise) Hadamard product.
The reset gate, update gate, and candidate hidden state are denoted by $\textbf{r}$, $\textbf{u}$, and $\textbf{c}$ respectively.
For $S = 7$ and $B=2$,
the pseudo-labels $\mathbf{x}^{(t)}$ and predictions $\hat{\mathbf{y}}^{(t)}$ both lie in $\mathbb{R}^{1470}$.
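To make the forward pass concrete, the following NumPy sketch implements one GRU step (our illustration, not the original Theano implementation; parameter names mirror the weight matrices above). Note that, following the equations as written, the logistic function is applied to the candidate state as well, where a standard GRU would use $\tanh$:

```python
import numpy as np

def sigmoid(z):
    # element-wise logistic function, as in the text
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, params):
    """One forward step of a GRU layer; x is the previous layer's output
    at this time step, h_prev is this layer's state from the last step."""
    Wxr, Whr, br, Wxu, Whu, bu, Wxc, Whc, bc = params
    r = sigmoid(x @ Wxr + h_prev @ Whr + br)   # reset gate
    u = sigmoid(x @ Wxu + h_prev @ Whu + bu)   # update gate
    # candidate state: logistic here per the paper's equations
    c = sigmoid(x @ Wxc + r * (h_prev @ Whc) + bc)
    return (1.0 - u) * h_prev + u * c          # new hidden state
```

Stacking two such layers with 150 units each, and unrolling over the 30-step sequences described below, reproduces the architecture used in our experiments.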
\vspace{-2.5mm}
\subsection{Training}
We design an objective function (Equation \ref{eqn:objective}) that accounts
for both accuracy at the target frame
and consistency of predictions across
adjacent time steps in the following ways:
\begin{equation} \label{eqn:objective}
\mbox{loss} = \mbox{d\_loss} + \alpha \cdot \mbox{s\_loss} + \beta \cdot \mbox{c\_loss} + \gamma \cdot \mbox{pc\_loss}
\end{equation}
Here, d\_loss, s\_loss, c\_loss and pc\_loss stand for detection\_loss, similarity\_loss, category\_loss and prediction\_consistency\_loss described in the following sections.
The values of the hyper-parameters $\alpha=0.2$, $\beta=0.2$ and $\gamma=0.1$
are chosen based on the detection performance on the validation set.
The training converges in 80 epochs
for parameter updates using RMSProp \cite{RMSProp} and momentum $0.9$.
During training we use a mini-batch size of $128$
and sequences of length $30$.
\subsubsection{Strong Supervision at Target Frame}
On the final output,
for which the ground truth classification and localization is available,
we apply a multi-part object detection loss as described in YOLO \cite{YOLO_RedmonDGF15}.
\vspace{-2.5mm}
\begin{equation} \label{eqn:obj_det_loss}
\begin{aligned}
\mbox{detection\_loss} &= \lambda_{coord}\sum^{S^2}_{i=0}\sum^{B}_{j=0}\mathbbm{1}^{obj}_{ij}\Big[\big(\mathit{x}^{(T)}_i - \hat{\mathit{x}}^{(T)}_i\big)^2 + \big(\mathit{y}^{(T)}_i - \hat{\mathit{y}}^{(T)}_i\big)^2\Big] \\
& + \lambda_{coord}\sum^{S^2}_{i=0}\sum^{B}_{j=0}\mathbbm{1}^{obj}_{ij}\Big[\big(\sqrt{w^{(T)}_i} - \sqrt{\hat{w}^{(T)}_i}\big)^2 + \big(\sqrt{h^{(T)}_i} - \sqrt{\hat{h}^{(T)}_i}\big)^2\Big] \\
& + \sum^{S^2}_{i=0}\sum^{B}_{j=0}\mathbbm{1}^{obj}_{ij}\big(\mathit{C}^{(T)}_i - \hat{\mathit{C}}^{(T)}_i\big)^2 \\
& + \lambda_{noobj}\sum^{S^2}_{i=0}\sum^{B}_{j=0}\mathbbm{1}^{noobj}_{ij}\big(\mathit{C}^{(T)}_i - \hat{\mathit{C}}^{(T)}_i\big)^2 \\
& + \sum^{S^2}_{i=0}\mathbbm{1}^{obj}_{i}\sum_{c \in classes}\big(p_i^{(T)}(c) - \hat{p}_i^{(T)}(c)\big)^2
\end{aligned}
\end{equation}
where $\mathbbm{1}^{obj}_{i}$ indicates
whether an object appears in cell $i$,
and $\mathbbm{1}^{obj}_{ij}$
denotes that the $j$th bounding box predictor
in cell $i$
is \emph{responsible} for that prediction.
The loss function penalizes classification
and localization error differently
based on presence or absence of an object
in that grid cell.
$x_i, y_i, w_i, h_i$ correspond
to the ground truth bounding box center coordinates, width, and height
for the object in grid cell $i$ (if one exists), and $\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{h}_i$ stand for the corresponding predictions.
$C_i$ and $\hat{C}_i$ denote the confidence scores of \emph{objectness} at grid cell $i$ for the ground truth and the prediction.
$p_i(c)$ and $\hat{p}_i(c)$
stand for conditional probability
for object class $c$ at cell index $i$
for ground truth and prediction respectively.
We use settings similar to YOLO's object detection loss minimization, with $\lambda_{coord} = 5$
and $\lambda_{noobj} = 0.5$.
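As one concrete (unofficial) reading of equation \ref{eqn:obj_det_loss}, the following NumPy sketch computes the loss for a single target frame. The array shapes and the pre-computed responsibility masks `obj_ij` (responsible box $j$ in object cell $i$) and `obj_i` (object present in cell $i$) are our assumptions, not part of the original implementation:

```python
import numpy as np

def detection_loss(gt_xywh, pr_xywh, gt_conf, pr_conf, gt_prob, pr_prob,
                   obj_ij, obj_i, lambda_coord=5.0, lambda_noobj=0.5):
    """Multi-part YOLO-style detection loss for one frame.
    gt_xywh, pr_xywh: (S2, B, 4); gt_conf, pr_conf: (S2, B);
    gt_prob, pr_prob: (S2, C); obj_ij: (S2, B); obj_i: (S2,)."""
    x, y, w, h = (gt_xywh[..., k] for k in range(4))
    px, py, pw, ph = (pr_xywh[..., k] for k in range(4))
    # center and size errors, only for responsible boxes in object cells
    center = np.sum(obj_ij * ((x - px) ** 2 + (y - py) ** 2))
    size = np.sum(obj_ij * ((np.sqrt(w) - np.sqrt(pw)) ** 2
                            + (np.sqrt(h) - np.sqrt(ph)) ** 2))
    # objectness confidence errors, split by object presence
    conf_obj = np.sum(obj_ij * (gt_conf - pr_conf) ** 2)
    conf_noobj = np.sum((1.0 - obj_ij) * (gt_conf - pr_conf) ** 2)
    # per-class probability errors for object cells
    cls = np.sum(obj_i[:, None] * (gt_prob - pr_prob) ** 2)
    return (lambda_coord * (center + size)
            + conf_obj + lambda_noobj * conf_noobj + cls)
```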
\vspace{-2.5mm}
\subsubsection{Similarity between \emph{Pseudo-labels} and Predictions}
Our objective function also includes
a regularizer that penalizes the dissimilarity between \emph{pseudo-labels} and the prediction at each time frame $t$.
\vspace{-2.5mm}
\begin{equation} \label{auto_enc_loss}
\mbox{similarity\_loss} =
\sum^T_{t=0}\sum^{S^2}_{i=0}\hat{C}^{(t)}_i\Big(\mathbf{x}^{(t)}_i - \hat{\mathbf{y}}_i^{(t)} \Big)^2
\end{equation}
Here, $\mathbf{x}^{(t)}_i$ and $\hat{\mathbf{y}}_i^{(t)}$ denote the \emph{pseudo-labels} and predictions corresponding to the $i$-th grid cell at the $t$-th time step, respectively. We minimize this square loss weighted by the predicted confidence score at the corresponding cell.
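A NumPy sketch of equation \ref{auto_enc_loss} follows; the array shapes are our assumption (in the actual model, the per-cell signals are slices of the $1470$-dimensional YOLO output vectors):

```python
import numpy as np

def similarity_loss(pseudo_labels, predictions, confidences):
    """Confidence-weighted squared distance between pseudo-labels and
    predictions, summed over time steps t and grid cells i.
    Assumed shapes: (T, S2, D) for the two signals, (T, S2) for the
    predicted objectness confidences."""
    sq = np.sum((pseudo_labels - predictions) ** 2, axis=-1)  # (T, S2)
    return float(np.sum(confidences * sq))
```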
\subsubsection{Object Category-level Weak-Supervision}
Replication of the static target at each sequential step has been shown to be effective in \cite{LiptonKEW15, yue2015beyond, dai2015semi}.
Of course, with video data,
different objects may move
in different directions and at different speeds.
Yet, within a short time duration,
we could expect all objects to be present.
Thus we employ target replication for classification but not localization objectives.
We minimize the square loss
between the categories
aggregated over all grid cells
in the ground truth $\mathbf{y}^{(T)}$
at final time step $T$ and predictions $\hat{\mathbf{y}}^{(t)}$
at all time steps $t$.
Aggregated category from the ground truth
considers only the cell indices
where an object is present.
For predictions, contribution of cell $i$
is weighted by its predicted confidence score $\hat{C}^{(t)}_i$.
Note that cell indices with positive detection
are sparse.
Thus, we consider the confidence score of each cell while minimizing the aggregated category loss.
\vspace{-2.5mm}
\begin{equation} \label{category_supervision}
\mbox{category\_loss} =
\sum^T_{t=0}\sum_{c \in classes} \Big(\sum^{S^2}_{i=0} \hat{C}^{(t)}_i\,\hat{p}^{(t)}_i(c) - \sum^{S^2}_{i=0}\mathbbm{1}^{obj,(T)}_i\, p_i^{(T)}(c)\Big)^2
\end{equation}
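A NumPy sketch of equation \ref{category_supervision}; we square the per-class differences (one reading of the bracketing), and the array shapes are our assumption:

```python
import numpy as np

def category_loss(pred_probs, pred_confs, gt_probs, gt_obj_mask):
    """Weak category-level supervision: compare confidence-weighted
    class aggregates at every step t with the ground-truth aggregate
    at the final step. pred_probs: (T, S2, C); pred_confs: (T, S2);
    gt_probs: (S2, C); gt_obj_mask: (S2,) booleans for frame T."""
    pred_agg = np.sum(pred_confs[..., None] * pred_probs, axis=1)  # (T, C)
    gt_agg = np.sum(gt_probs[gt_obj_mask], axis=0)                 # (C,)
    return float(np.sum((pred_agg - gt_agg) ** 2))
```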
\subsubsection{Consecutive Prediction Smoothness}
Additionally, we regularize the model
by encouraging smoothness of predictions
across consecutive time-steps.
This makes sense intuitively
because we assume that objects
rarely move rapidly from one frame to another.
\vspace{-2.5mm}
\begin{equation} \label{prediction_smoothness}
\mbox{prediction\_consistency\_loss} =
\sum^{T-1}_{t=0}\Big(\hat{\mathbf{y}}^{(t)} - \hat{\mathbf{y}}^{(t+1)} \Big)^2
\end{equation}
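This penalty amounts to summing the squared change of the full output vector between consecutive steps; a minimal NumPy sketch (the per-frame shape is our assumption):

```python
import numpy as np

def prediction_consistency_loss(predictions):
    """Penalize change between consecutive predictions.
    predictions: (T, D) array of per-frame output vectors."""
    diffs = predictions[1:] - predictions[:-1]  # (T-1, D)
    return float(np.sum(diffs ** 2))
```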
\vspace{-2.5mm}
\subsection{Inference}
The recurrent neural network predicts output at every time-step.
The network predicts $98$ bounding boxes
per video frame
and class probabilities
for each of the $49$ grid cells.
We note that for every cell,
the net predicts class conditional probabilities
for each one of the $C$ categories
and $B$ bounding boxes.
Each one of the $B$ predicted bounding boxes
per cell
has an associated \emph{objectness} confidence score.
The predicted confidence score
at that grid cell is the maximum among the boxes.
The bounding box with the highest score
becomes the \emph{responsible} prediction
for that grid cell $i$.
The product of the class conditional probability $\hat{p}^{(t)}_i(c)$ for category type $c$ and the \emph{objectness} confidence score $\hat{C}^{(t)}_i$ at grid cell $i$,
if above a threshold, infers a detection.
In order for an object of category type $c$
to be detected for $i$-th cell
at time-step $t$,
both the class conditional probability $\hat{p}^{(t)}_i(c)$ and the \emph{objectness} score $\hat{C}^{(t)}_i$ must be reasonably high.
Additionally, we employ Non-Maximum Suppression (NMS) to winnow multiple high-scoring bounding boxes around an object instance
and produce a single detection per instance.
By virtue of YOLO-style prediction, NMS is not critical.
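The per-cell decoding rule above can be sketched as follows; the threshold value here is illustrative, not the one used in our experiments:

```python
import numpy as np

def decode_cell(class_probs, box_confs, boxes, threshold=0.2):
    """Decode one grid cell of a YOLO-style output.
    class_probs: (C,), box_confs: (B,), boxes: (B, 4)."""
    j = int(np.argmax(box_confs))          # responsible bounding box
    conf = box_confs[j]                    # cell confidence = max over boxes
    scores = class_probs * conf            # p(c) * objectness confidence
    return [(c, float(scores[c]), boxes[j])
            for c in range(len(class_probs))
            if scores[c] > threshold]      # detections above threshold
```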
\section{Experimental Results}
\label{sec:results}
In this section,
we empirically evaluate our model on the popular
\textbf{Youtube-Objects} dataset,
providing both quantitative results
(as measured by mean Average Precision)
and subjective evaluations of the model's performance, considering both successful predictions and failure cases.
The \textbf{Youtube-Objects} dataset \cite{youtube-Objects}
is composed of videos collected from YouTube
by querying for the names of 10 object classes
of the PASCAL VOC Challenge.
It contains 155 videos in total
and between 9 and 24 videos for each class.
The duration of each video varies
between 30 seconds and 3 minutes.
However,
only $6087$ frames
are annotated with $6975$ bounding-box instances. The training and test split is provided.
\subsection{Experimental Setup}
We implement the domain-adaptation of YOLO and
the proposed RNN model using Theano \cite{Theano2016arXiv160502688short}.
Our best performing RNN model
uses two GRU layers of $150$ hidden units each and dropout of probability $0.5$ between layers,
significantly outperforming domain-adapted YOLO alone.
While we can only objectively evaluate prediction quality on
the labeled frames,
we present subjective evaluations on sequences.
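The two-layer GRU configuration above can be sketched framework-neutrally. The following NumPy illustration shows one forward pass over a short sequence; apart from the $150$ hidden units, the dimensions, weight initialization, and names are illustrative assumptions of ours, and dropout is only indicated in a comment:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update (Cho et al., 2014): update gate z, reset gate r."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 300, 150  # input size is illustrative; 150 hidden units per layer

def make_params(nin, nh):
    # [Wz, Uz, Wr, Ur, Wh, Uh]: input-to-hidden and hidden-to-hidden weights
    return [rng.standard_normal((nin if i % 2 == 0 else nh, nh)) * 0.01
            for i in range(6)]

layer1, layer2 = make_params(d_in, d_h), make_params(d_h, d_h)
h1, h2 = np.zeros(d_h), np.zeros(d_h)
for t in range(8):  # an 8-frame input sequence
    x_t = rng.standard_normal(d_in)
    h1 = gru_step(x_t, h1, layer1)
    # During training, dropout with probability 0.5 would be applied to h1 here.
    h2 = gru_step(h1, h2, layer2)
```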
\subsection{Objective Evaluation}
We compare our approach with other methods evaluated on the Youtube-Objects dataset.
As shown in Table \ref{table:per_category_results} and Table \ref{table:final_mAP},
the Deformable Parts Model (DPM) \cite{FelzenszwalbMR_CVPR_2008}-based detector reports \cite{KalogeitonFS15} a mean Average Precision below $30$,
with especially poor performance
in some categories such as \emph{cat}.
The method of Tripathi \emph{et al}\bmvaOneDot (VOP) \cite{Tripathi_WACV16}
uses consistent video object proposals
followed by a domain-adapted AlexNet classifier ($5$ convolutional layers, $3$ fully connected) \cite{AlexNet12} in an R-CNN \cite{RCNN_girshick14CVPR}-like framework,
achieving mAP of $37.41$.
We also compare against YOLO ($24$ convolutional layers, 2 fully connected layers),
which unifies the classification and localization tasks,
and achieves mean Average Precision over $55$.
In our method, we adapt YOLO to generate \emph{pseudo-labels} for all video frames,
feeding them as inputs to the refinement RNN.
We choose YOLO as the \emph{pseudo-labeler}
because it is the most accurate
among feasibly fast image-level detectors.
The domain-adaptation improves YOLO's performance,
achieving mAP of $61.66$.
Our model with RNN-based prediction refinement achieves superior aggregate mAP to all baselines.
The RNN refinement model using input-output similarity, category-level weak-supervision, and prediction smoothness performs best,
achieving $\mbox{68.73}$ mAP.
This amounts to a relative improvement of $\mbox{11.5\%}$ over the best baseline.
Additionally, the RNN improves
detection accuracy on most individual categories (Table \ref{table:per_category_results}).
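The per-category numbers below are summarized by Average Precision, and mAP averages AP over the 10 categories. As a reminder of how AP summarizes a ranked list of detections, here is a minimal all-points sketch; this is our own illustration with hypothetical names, and the benchmark follows the PASCAL VOC protocol, whose IoU-based matching of detections to ground truth is not shown:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """All-points interpolated AP from a ranked list of detections.

    scores: detection confidence scores; is_tp: 1 if the detection matched
    a ground-truth box, else 0; num_gt: number of ground-truth boxes (> 0).
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / num_gt
    precision = tp_cum / (tp_cum + fp_cum)
    ap, prev_r = 0.0, 0.0
    for r in recall:
        # Interpolated precision: best precision at any recall >= r.
        ap += (r - prev_r) * precision[recall >= r].max()
        prev_r = r
    return ap
```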
\begin{table} \label{table:per_category_results}
\centering
\footnotesize
\begin{tabular}{lllllllllll}
\multicolumn{11}{c}{\textbf{Average Precision on 10-categories}} \\ \midrule
Methods & airplane & bird & boat & car & cat & cow & dog & horse & mbike & train \\ \midrule
DPM\cite{FelzenszwalbMR_CVPR_2008} & 28.42 & 48.14 & 25.50 & 48.99 & 1.69 & 19.24 & 15.84 & 35.10 & 31.61 & 39.58 \\
VOP\cite{Tripathi_WACV16} & 29.77 & 28.82 & 35.34 & 41.00 & 33.70 & 57.56 & 34.42 & 54.52 & 29.77 & 29.23 \\
YOLO\cite{YOLO_RedmonDGF15} & 76.67 & 89.51 & 57.66 & 65.52 & 43.03 & 53.48 & 55.81 & 36.96 & 24.62 & 62.03 \\
DA YOLO & \textbf{83.89} & \textbf{91.98} & 59.91 & 81.95 & 46.67 & 56.78 & 53.49 & 42.53 & 32.31 & 67.09 \\
\midrule
RNN-IOS & 82.78 & 89.51 & 68.02 & \textbf{82.67} & 47.88 & 70.33 & 52.33 & 61.52 & 27.69 & \textbf{67.72} \\
RNN-WS & 77.78 & 89.51 & \textbf{69.40} & 78.16 & 51.52 & \textbf{78.39} & 47.09 & 81.52 & 36.92 & 62.03 \\
RNN-PS & 76.11 & 87.65 & 62.16 & 80.69 & \textbf{62.42} & 78.02 & \textbf{58.72} & \textbf{81.77} & \textbf{41.54} & 58.23 \\ \bottomrule
\end{tabular}
\caption{Per-category object detection results for the Deformable Parts Model (DPM),
Video Object Proposal based AlexNet (VOP),
image-trained YOLO (YOLO),
and domain-adapted YOLO (DA-YOLO).
RNN-IOS regularizes on input-output similarity,
to which RNN-WS adds category-level weak-supervision,
to which RNN-PS adds a regularizer encouraging prediction smoothness.}
\end{table}
\begin{table}[h] \label{table:final_mAP}
\centering
\begin{tabular}{llllllll}
\multicolumn{8}{c}{\textbf{mean Average Precision on all categories}} \\
\midrule
Methods & DPM & VOP & YOLO & DA YOLO & RNN-IOS & RNN-WS & RNN-PS\\ \midrule
mAP & 29.41 & 37.41 & 56.53 & \textbf{61.66} & 65.04 & 67.23 & \textcolor{blue}{\textbf{68.73}}\\ \bottomrule
\end{tabular}
\caption{Overall detection results on the Youtube-Objects dataset. Our best model (RNN-PS) provides a $7$-point mAP improvement over the DA-YOLO baseline.}
\end{table}
\vspace{-2.5mm}
\vspace{-2.5mm}
\subsection{Subjective Evaluation}
We provide a subjective evaluation
of the proposed RNN model in Figure \ref{fig:subjective1}.
Top and bottom rows in every pair of sequences correspond to \emph{pseudo-labels} and results from our approach respectively.
While only the last frame
in each sequence has associated ground truth,
we can observe that the RNN produces more accurate and more consistent predictions across time frames.
The predictions are consistent
with respect to classification,
localization and confidence scores.
In the first example, the RNN
consistently detects the \emph{dog}
throughout the sequence,
even though the \emph{pseudo-labels}
for the first two frames were wrong (\emph{bird}). In the second example, \emph{pseudo-labels}
were \emph{motorbike}, \emph{person}, \emph{bicycle}
and even \emph{none} at different time-steps. However, our approach consistently predicted \emph{motorbike}.
The third example shows that the RNN
consistently predicts both of the cars
while the \emph{pseudo-labeler}
detects only the smaller car
in two frames within the sequence.
The last two examples show how the RNN increases its confidence scores,
bringing out the positive detection
for \emph{cat} and \emph{car} respectively
both of which fell below the detection threshold of the \emph{pseudo-labeler}.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.75]{result2_category_consistency-eps-converted-to.pdf}
\includegraphics[scale=0.75]{result3_category_consistency-eps-converted-to.pdf}
\includegraphics[scale=0.75]{result1_localizations-eps-converted-to.pdf}
\includegraphics[scale=0.75]{result8_detection_through_consistency-eps-converted-to.pdf}
\includegraphics[scale=0.75]{result16_detection_through_consistency-eps-converted-to.pdf}
\end{center}
\caption{
Object detection results from the final eight frames of five different test-set sequences.
In each pair of rows,
the top row shows the \emph{pseudo-labeler}
and the bottom row shows the RNN.
In the first two examples,
the RNN consistently predicts correct categories
\emph{dog} and \emph{motorbike},
in contrast to the inconsistent baseline.
In the third sequence, the RNN
correctly predicts multiple instances
while the \emph{pseudo-labeler} misses one.
For the last two sequences, the RNN
increases the confidence score,
detecting objects missed by the baseline.
}
\label{fig:subjective1}
\end{figure*}
\subsection{Areas For Improvement}
The YOLO scheme for unifying classification and localization
\cite{YOLO_RedmonDGF15}
imposes strong spatial constraints
on bounding box predictions
since each grid cell can have only one class.
This restricts the set of possible predictions,
which may be undesirable
in the case where many objects
are in close proximity.
Additionally, the rigidity of the YOLO model
may present problems for the refinement RNN,
which encourages smoothness of predictions
across the sequence of frames.
Consider, for example, an object which moves
slightly but transits from one grid cell to another.
Here smoothness of predictions seems undesirable.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.75]{failure_cases-eps-converted-to.pdf}
\end{center}
\caption{Failure cases for the proposed model. Left: the RNN cannot recover from incorrect \emph{pseudo-labels}.
Right: RNN localization performs worse than \emph{pseudo-labels} possibly owing to multiple instances of the same object. }
\label{fig:failure_cases}
\vspace{-2.5mm}
\end{figure*}
Figure \ref{fig:failure_cases} shows some failure cases.
In the first case,
the \emph{pseudo-labeler} classifies the instances as \emph{dogs} and even as \emph{birds} in two frames
whereas the ground truth instances are \emph{horses}.
The RNN cannot recover from the incorrect pseudo-labels.
Strangely, the model marginally increases the confidence score for a different incorrect category, \emph{cow}.
In the second case,
possibly owing to motion and close proximity of multiple instances of the same object category,
the RNN predicts the correct category but fails
on localization.
These cases point to future work on making the framework robust to motion.
The category-level weak supervision in the current scheme
assumes the presence of all objects in nearby frames.
While for short snippets of video
this assumption generally holds,
it may be violated in case of occlusions, or sudden arrival or departure of objects.
In addition,
our assumptions regarding the desirability of
prediction smoothness
can be violated in the case of rapidly moving objects.
\vspace{-2.5mm}
\section{Related Work}
\label{sec:prior-art}
Our work builds upon a rich literature in image-level object detection, video analysis, and recurrent neural networks.
Several papers propose ways of using deep convolutional networks for detecting objects \cite{RCNN_girshick14CVPR,fast_RCNN_15,Faster_RCNN_RenHG015, YOLO_RedmonDGF15, SzegedyREA14, Inside_Outside_Net_BellZBG15, DeepID-Net_2015_CVPR, Overfeat_SermanetEZMFL13, CRAFTCVPR16, Gidaris_2015_ICCV}.
Some approaches classify proposal regions \cite{RCNN_girshick14CVPR,fast_RCNN_15} into object categories, while other recent methods \cite{Faster_RCNN_RenHG015, YOLO_RedmonDGF15} unify the localization and classification stages.
Kalogeiton \emph{et al}\bmvaOneDot \cite{KalogeitonFS15} identify domain shift factors between still images and videos that necessitate video-specific object detectors.
To deal with shift factors and sparse object-level annotations in video,
researchers have proposed several strategies.
Recently, \cite{Tripathi_WACV16} proposed
both transfer learning from the image domain to video frames and optimizing for temporally consistent object proposals.
Their approach is capable of detecting
both
moving and static objects.
However, the object proposal generation step
that precedes classification is slow.
Prest \emph{et al}\bmvaOneDot
\cite{Weak_obj_from_videoPrestLCSF12}
utilize weak supervision for object detection
in videos
via category-level annotations of frames,
absent localization ground truth.
This method assumes that the target object is moving, outputting a spatio-temporal tube that captures the most salient moving object.
That work, however, does not consider
context within video
for detecting multiple objects.
A few recent papers \cite{DeepID-Net_2015_CVPR, Inside_Outside_Net_BellZBG15} identify the important role of context in visual recognition.
For object detection in images, Bell \emph{et al}\bmvaOneDot \cite{Inside_Outside_Net_BellZBG15}
use spatial RNNs
to harness contextual information,
showing large improvements on PASCAL VOC \cite{PASCAL_VOC} and Microsoft COCO \cite{COCOLinMBHPRDZ14}
object detection datasets.
Their approach adopts a proposal generation followed by classification framework, and exploits spatial, but not temporal, context.
Recently, Kang \emph{et al}\bmvaOneDot \cite{KangCVPR16} introduced tubelets with convolutional neural networks (T-CNN) for detecting objects in video.
T-CNN uses spatio-temporal tubelet proposal generation
followed by classification and re-scoring,
incorporating temporal and contextual information from tubelets obtained in videos.
T-CNN won the recently introduced ImageNet object-detection-from-video (VID) task, which provides densely annotated video clips.
Although the method is effective for densely annotated training data,
its behavior on sparsely labeled data is not evaluated.
By modeling video as a time series,
especially via GRU \cite{Cho14_GRU} or LSTM RNNs \cite{LSTM_Hochreiter_97},
several papers
demonstrate improvement
on visual tasks including video classification \cite{yue2015beyond},
activity recognition \cite{LongTermRecurrentDonahueHGRVSD14},
and human dynamics \cite{Fragkiadaki_2015_ICCV}.
These models generally aggregate CNN features over tens of seconds to form the input to an RNN.
They perform well for global description tasks
such as classification \cite{yue2015beyond,LongTermRecurrentDonahueHGRVSD14} but require large annotated datasets.
Yet, detecting multiple generic objects
by explicitly modeling video
as an ordered sequence
remains less explored.
Our work differs from the prior art in a few distinct ways.
First, this work is the first, to our knowledge, to demonstrate the capacity of RNNs to improve localized object detection in videos.
The approach may also be the first
to refine the object predictions of frame-level models.
Notably, our model produces significant improvements even on a small dataset with sparse annotations.
\vspace{-2.5mm}
\vspace{-2.5mm}
\section{Conclusion}
We introduce a framework
for refining object detection in video.
Our approach extracts contextual information from neighboring frames,
generating predictions
with state-of-the-art accuracy
that are also temporally consistent.
Importantly, our model
benefits from context frames
even when they lack ground truth annotations.
For the recurrent model,
we demonstrate an efficient
and effective training strategy
that simultaneously employs
localization-level strong supervision,
category-level weak-supervision,
and a penalty
encouraging smoothness of predictions
across adjacent frames.
On a video dataset with sparse object-level annotation,
our framework proves effective,
as validated by
extensive experiments.
A subjective analysis of failure cases
suggests that the current approach
may struggle most on cases
when multiple rapidly moving objects
are in close proximity.
Likely, the sequential smoothness penalty is not optimal for such complex dynamics.
Our results point to several promising directions for future work. First, recent state-of-the-art results for video classification show that longer sequences help in global inference.
However, the use of longer sequences for localization remains unexplored.
We also plan to explore methods
to better model local motion information
with the goal of improving localization of
multiple objects in close proximity.
As another promising direction, we would like to experiment with loss functions that incorporate specialized handling of the classification and localization objectives.
\section{Introduction}
The wave-particle duality is an alternative statement of the complementarity principle, and it establishes the relation between the corpuscular and the undulatory nature of quantum entities \cite{Bohr1928}. It can be illustrated in a two-way interferometer, where the apparatus can be set to observe particle behavior, when a single path is taken, or wave-like behavior, when the impossibility of defining a path is shown by the interference. A modern approach to the wave-particle duality includes quantitative relations between quantities that represent the possible \textit{a priori} knowledge of the which-way information (predictability) and the ``quality'' of the interference fringes (visibility). Several publications \cite{Bohr1928, Wootters1979, Summhammer1987, Greenberger1988, Mandel1991} contributed to the formulation of this quantitative analysis of the wave-particle duality. For a bipartite system, entanglement can give extra which-way (path) information about the interferometric possibilities. The quantitative relations for systems composed of two particles were extensively studied in \cite{Jaeger1993, Jaeger1995, Englert1996, Englert2000, Scully1989, Scully1991, Mandel1995, Tessier2005, Jakob2010, Miatto2015, Bagan2016, Coles2016}. Therefore, understanding the behavior of such quantities, in various regimes and situations, is essential to answer fundamental and/or technological questions of quantum theory \cite{Greenberger1999}.

Complementarity quantities can also present interesting dynamical behavior; an example is the so-called \textit{quantum eraser}, where the visibility in an interferometric scheme is increased or preserved when the which-way information is erased. Since its proposal \cite{Scully1982}, it has been investigated carefully, both theoretically and experimentally (see for example Refs. \cite{Englert2000, Scully1991, Mandel1995, Storey1994, Wiseman1995, Mir2007, Luis1998, Busch2006, Rossi2013, Walborn2002, Teklemariam2001, Teklemariam2002, Kim2000, Salles2008, Heuer2015}). In Ref.~\cite{Rossi2013}, the authors explore the quantum eraser problem in a multipartite model: a bipartite qubit system, initially prepared in a maximally entangled state (and therefore with zero visibility), couples through a Jaynes-Cummings Hamiltonian to $N$ other qubits. This model can be implemented by taking the qubits of interest to be the modes of two cavities and the $N$ qubits to be two-level atoms. In that work, an increase of visibility is achieved by performing appropriate projective measurements. An intrinsic relation between the complementarity quantities and the performed measurements is outlined: since the measurements were made in order to obtain an increase of the visibility, the remaining quantities (the entanglement, as measured by the concurrence, and the predictability) must obey a ``complementary'' behavior. In that case, the visibility and the predictability increase, and the entanglement decreases, since the measurements are made in order to establish the quantum eraser. In Ref.~\cite{Rossi2013} only the maximization of the visibility was considered; in the present work we extend the analysis and consider maximization of the predictability, the visibility, and the concurrence. Also, in the previous work \cite{Rossi2013} only one value of the coupling constant was considered. In this contribution we