\section{Introduction}
\label{sec:intro}
Edge detection is a recurrent task required by several classical computer vision processes (e.g., segmentation \protect \cite{zhang2016segmentation}, image recognition \cite{yang2002detectFace,shotton2008objectRec}), as well as by modern tasks such as image-to-image translation \cite{zhu2017cyclegan}, photo sketching \cite{lips2019photo-sketch} and so on. Moreover, in fields such as medical image analysis \cite{pourreza2017medImg} or remote sensing \cite{isikdogan2017remotesens}, many of the core activities rely on edge detectors. In spite of the large amount of work on edge detection, it remains an open problem with space for new contributions.
Since the Sobel operator \cite{sobel1972sobelmethod}, many edge detectors have been proposed \cite{oskoei2010surveyedge}, and techniques like Canny \cite{canny1987cannymethod} are still in use nowadays. Recently, in the era of Deep Learning (DL), Convolutional Neural Network (CNN) based edge detectors such as DeepEdge \cite{bertasius2015deepedge}, HED \cite{xie2017hed}, RCF \cite{liu2017rcf} and BDCN \cite{he2019edgeBDCN}, among others, have been proposed. These models are capable of predicting an edge-map from a given image, just like the low-level based methods \cite{ziou1998edgeOverview}, but with better performance. The success of these methods is mainly due to CNNs applied at different scales to a large set of images, together with training regularization techniques.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/banner3.png}
\end{center}
\caption{Edge-map predictions from the proposed model on images acquired from the Internet.}
\label{fig:illustration}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/dexi_qualcompv22.pdf}
\end{center}
\caption{Edge-maps predicted by the state-of-the-art models and DexiNed on three BSDS500 \protect \cite{arbelaez2011bsds500} images. Note that DexiNed was trained just on BIPED, while all the others were trained on BSDS500.}
\label{fig:qual_comp}
\end{figure*}
Most of the aforementioned DL based approaches are trained on already existing boundary detection or object segmentation datasets \cite{martin2001bsds300,silberman2012NYUD, mottaghi2014PASCALcontext} to detect edges.
Even though most of the images in those datasets are well annotated, a few of them contain missing edges, which makes the training difficult; as a result, the predicted edge-maps miss some edges in the images (see Fig. \ref{fig:illustration}). In the current work, those datasets are used just for qualitative comparisons, since the objective of this work is edge detection (not objects' boundary/contour detection). The boundary/contour detection tasks, although related and sometimes assumed to be synonymous, are different, since only objects' boundaries/contours need to be detected, and not all the edges present in the given image.
This manuscript aims to demonstrate the edge detection generalization of a DL model; in other words, the model is capable of being evaluated on other edge detection datasets without being trained on them. To the best of our knowledge, the only edge detection dataset shared with the community is the Multicue Dataset for Boundary Detection (MDBD---2016) \cite{mely2016multicue}, which, although mainly generated for the study of boundary detection, contains a subset of images devoted to edge detection. Therefore, a new dataset has been collected to train the proposed edge detector. The main contributions of the paper are summarized as follows:
\begin{itemize}
\item A dataset with carefully annotated edges has been generated and released to the community---BIPED: Barcelona Images for Perceptual Edge Detection.\footnote{Code $+$ dataset: \url{https://github.com/xavysp/DexiNed}}
\item A robust CNN architecture for edge detection is proposed, referred to as DexiNed: Dense Extreme Inception Network for Edge Detection. The model has been trained from scratch, without pre-trained weights.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{sec:rw} summarizes the most relevant and recent work on edge detection. Then, the proposed approach is described in Section \ref{sec:pa}. The experimental setup is presented in Section \ref{sec:exp}. Experimental results are then summarized in Section \ref{sec:res}; finally, conclusions and future work are given in Section \ref{sec:con}.
\section{Related Work}
\label{sec:rw}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.99\textwidth]{figs/dexint_model.pdf}
\caption{Proposed architecture: Dense Extreme Inception Network. It consists of an encoder composed of six main blocks (shown in light gray). The main blocks are connected to each other through 1$\times$1 convolutional blocks. Each of the main blocks is composed of sub-blocks that are densely interconnected by the output of the previous main block. The output of each of the main blocks is fed to an upsampling block that produces an intermediate edge-map in order to build a Scale Space Volume, which is used to compose a final fused edge-map. More details are given in Sec. \ref{sec:pa}.}
\label{fig:arch}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figs/up_block.pdf}
\caption{Detail of the upsampling block, which receives as input the learned features extracted from each of the main blocks. The features are fed into a stack of learned convolutional and transposed convolutional filters in order to extract an intermediate edge-map.}
\label{fig:upsampling-block}
\end{figure}
There is a large body of work in the edge detection literature; for a detailed review see \cite{ziou1998edgeOverview,Gong2018contourOverview}. According to the technique with which the given image is processed, the proposed approaches can be categorized as: $i)$ low-level features; $ii)$ brain-biologically inspired; $iii)$ classical learning algorithms; $iv)$ deep learning algorithms.
\textit{Low-level feature:} Most of the algorithms in this category generally follow a smoothing process, which can be performed by convolving the image with a Gaussian filter or with hand-crafted kernels. Samples of such methods are \cite{canny1987cannymethod,schunck1987mMultiScaleF,perona1991mComEdges}. Since Canny \cite{canny1987cannymethod}, most of the current methods use non-maximum suppression \cite{canny1983non-maximum} as the last step of edge detection.
\textit{Brain-biologically inspired:} Research on this kind of method started in the 1960s, analyzing the edge and contour formation in the vision systems of monkeys and cats \cite{daugman1985gaborFilter}. Inspired by such work, in \cite{grigorescu2003cid} the authors proposed a method based on simple cells and Gabor filters. Another study focused on boundary detection is presented in \cite{mely2016multicue}; this work proposes the use of Gabor and derivative-of-Gaussian filters, considering three different filter sizes and machine learning classifiers. More recently, in \cite{yang2015SCO}, an orientation-selective neuron is presented, based on the first derivative of a Gaussian function. This work has been recently extended in \cite{Akbarinia2018SEDext} by modeling the retina, simple cells and even the cells of V2.
\textit{Classical learning algorithms:} These techniques are usually based on sparse representation learning \cite{mairal2008sparceModel}, dictionary learning \cite{xiaofeng2012Diclearn}, gPb (globalized probability of boundary) \cite{arbelaez2011bsds500} and structured forests \cite{dollar2015forests} (decision trees). When these approaches were proposed, they outperformed state-of-the-art techniques based on low-level processes, reaching the best F-measure values on the BSDS segmentation dataset \cite{arbelaez2011bsds500}. Although the obtained results were acceptable in most of the cases, these techniques still have limitations in challenging scenarios.
\textit{Deep learning algorithms:} With the success of CNNs, principally after the results in \cite{krizhevsky2012alexnet}, many methods have been proposed \cite{ganin2014firstDLedge, bertasius2015deepedge, xie2017hed, liu2017rcf, wang2017ced}. In HED \cite{xie2017hed}, for example, an architecture based on VGG16 \cite{simonyan2014vgg} and pre-trained on the ImageNet dataset is proposed. The network generates edges from each convolutional block, constructing a multi-scale learning architecture. The training process uses a modified cross-entropy loss function for each predicted edge-map. Using the same architecture as their backbone, \cite{liu2017rcf} and \cite{wang2017ced} have proposed improvements: while in \cite{liu2017rcf} every output is fed from each convolution of every block, in \cite{wang2017ced} a set of backward fusion processes, using the data of each output, is performed. In general, most of the current DL based models use the convolutional blocks of the VGG16 architecture as their backbone.
\section{Dense Extreme Inception Network for Edge Detection}
\label{sec:pa}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{figs/pres.png}
\caption{Edge-maps from DexiNed on the BIPED test dataset. The six outputs are delivered by the upsampling blocks; the \textit{fused} output is the concatenation and fusion of those outputs and the \textit{averaged} output is the average of all previous predictions.}
\label{fig:multiscaleresult}
\end{figure*}
This section presents the architecture proposed for edge detection, termed DexiNed, which consists of a stack of learned filters that receive an image as input and predict an edge-map with the same resolution. DexiNed can be seen as two sub-networks (see Figs. \ref{fig:arch} and \ref{fig:upsampling-block}): the Dense extreme inception network (Dexi) and the upsampling block (UB). While Dexi is fed with the RGB image, UB is fed with the feature maps of each block of Dexi. The resulting network (DexiNed) generates thin edge-maps, avoiding missed edges in the deep layers. Note that, even without pre-trained weights, the edges predicted by DexiNed are in most cases better than state-of-the-art results, see Fig. \ref{fig:illustration}.
\subsection{DexiNed Architecture}
\label{sec:pa-dexi}
The architecture is depicted in Fig. \ref{fig:arch}; it consists of an encoder with 6 main blocks inspired by the Xception network \cite{chollet2017xception}. The network outputs feature maps at each of the main blocks to produce intermediate edge-maps using the upsampling block defined in Section \ref{sec:pa-upsampling}. All the edge-maps resulting from the upsampling blocks are concatenated to feed the stack of learned filters at the very end of the network, which produces a fused edge-map. The six upsampling blocks do not share weights.
The blocks in blue consist of a stack of two convolutional layers with kernel size $3\times3$, followed by batch normalization and ReLU as the activation function (only the last conv in the last sub-blocks does not have such activation). Max-pooling is set with a $3\times3$ kernel and stride $2$. As the architecture follows multi-scale learning, like in HED, an upsampling process (horizontal blocks in gray, Fig. \ref{fig:arch}) follows (see details in Section \ref{sec:pa-upsampling}).
Even though DexiNed is inspired by Xception, the similarity lies just in the structure of the main blocks and connections. The major differences are detailed below:
\begin{itemize}
\item While Xception uses separable convolutions, DexiNed uses standard convolutions.
\item As the output is a 2D edge-map, there is no ``exit flow''; instead, another block has been added at the end of block five. This block has 256 filters and, as in block 5, it has no max-pooling operator.
\item In block 4 and block 5, 512 filters have been set instead of 728. The separations of the main blocks are done with the block connections (rectangles in green) drawn on the top side of Fig. \ref{fig:arch}.
\item Concerning skip connections, in Xception there is one kind of connection, while in DexiNed there are two types of connections, see the rectangles in green on the top and bottom of Fig. \ref{fig:arch}.
\end{itemize}
Since many convolutions are performed, every deep block loses important edge features and just one main-connection is not sufficient; as highlighted in DeepEdge \cite{bertasius2015deepedge}, from the fourth convolutional layer onward the edge feature loss is more chaotic. Therefore, from block $3$ on, the output of each sub-block is averaged with an \textit{edge-connection} (orange squares in Fig. \ref{fig:arch}). These processes are inspired by ResNet \cite{he2016resnet} and RDN \cite{zhang2018densenet}, with the following notes: $i)$ as shown in Fig. \ref{fig:arch}, after the max-pooling operation and before the summation with the main-connection, the edge-connection is set to average each sub-block's output (see rectangles in green, bottom side); $ii)$ from the max-pool of block $2$, edge-connections feed the sub-blocks in blocks $3$, $4$ and $5$; however, the sub-blocks in block $6$ are fed just from the output of block 5.
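As a rough illustration of how the two connection types interact, the following NumPy sketch combines a sub-block output with its edge-connection and main-connection according to the description above (the function name is ours and the wiring is our reading of the figure, not the released code):

```python
import numpy as np

def merge_sub_block(sub_block_out, edge_connection, main_connection):
    """Hypothetical sketch of the wiring in blocks 3-6: the sub-block
    output is averaged with the edge-connection (the output of an
    earlier max-pool) and then summed with the 1x1-conv main-connection.
    All three feature maps are assumed to have the same shape."""
    averaged = (sub_block_out + edge_connection) / 2.0
    return averaged + main_connection
```

The averaging keeps the magnitude of the merged features comparable to a single feature map, while the residual-style summation preserves the main signal path.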
\subsection{Upsampling Block}
\label{sec:pa-upsampling}
DexiNed has been designed to produce thin edges in order to enhance the visualization of the predicted edge-maps. One of the key components of DexiNed for edge thinning is the upsampling block: as can be appreciated in Fig. \ref{fig:arch}, each output of the Dexi blocks feeds the UB. The UB consists of conditionally stacked sub-blocks. Each sub-block has 2 layers, one convolutional and the other deconvolutional, and there are two types of sub-blocks. The first sub-block (sub-block1) is fed from Dexi or from sub-block2; it is only used when the scale difference between the feature map and the ground truth is equal to 2. The other sub-block (sub-block2) is considered when the difference is greater than 2; this sub-block is iterated until the feature map scale reaches 2 with respect to the GT. Sub-block1 is set as follows: kernel size of the conv layer $1\times1$, followed by a ReLU activation function; kernel size of the deconv layer (transposed convolution) $s \times s$, where $s$ is the input feature map scale level. Both layers return one filter, and the last one gives a feature map with the same size as the GT; this last layer has no activation function. Sub-block2 is set similarly to sub-block1, with just one difference: the number of filters, which is 16 instead of 1. For example, as the output feature maps of block 6 in Dexi have a scale of $16$, there will be three iterations of sub-block2 before feeding sub-block1. The upsampling performed by the second layer of the sub-blocks can be carried out by bi-linear interpolation, sub-pixel convolution or transposed convolution, see Sec. \ref{sec:res} for details.
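The iteration rule described above can be sketched as a small helper (a hedged illustration; `upsampling_plan` is our name and not part of the released code):

```python
def upsampling_plan(scale):
    """Return the sequence of UB sub-blocks needed to bring a feature
    map at the given scale (w.r.t. the ground truth) to full resolution:
    sub-block2 (16 filters) is iterated while the scale difference is
    greater than 2, and sub-block1 (1 filter) performs the final x2 step."""
    assert scale >= 2 and scale & (scale - 1) == 0, "scale must be a power of two"
    plan = []
    while scale > 2:
        plan.append("sub-block2")  # upsamples x2, 16 filters
        scale //= 2
    plan.append("sub-block1")      # final x2 to GT resolution, 1 filter
    return plan
```

For block 6, `upsampling_plan(16)` yields three `sub-block2` iterations followed by one `sub-block1`, matching the example in the text.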
\subsection{Loss Functions}
\label{sec:pa-loss}
DexiNed can be summarized as a regression function $\eth$, that is, $\hat{Y} = \eth(X,Y)$, where $X$ is an input image, $Y$ is its corresponding ground truth, and $\hat{Y}$ is a set of predicted edge-maps, $\hat{Y} = [\hat{y}_{1},\hat{y}_{2},...,\hat{y}_{N}]$, where each $\hat{y}_i$ has the same size as $Y$ and $N$ is the number of outputs, one from each upsampling block (horizontal rectangles in gray, Fig. \ref{fig:arch});
$\hat{y}_{N}$ is the result of the last fusion layer $f$ ($\hat{y}_{N}=\hat{y}_{f}$). Then, as the model is deeply supervised, it uses the same loss as \cite{xie2017hed} (weighted cross-entropy), which is defined as follows:
\begin{equation}
\centering
\begin{split}
\ell^{n}(W,w^{n}) &=- \beta \sum_{j\in{Y^+}} \log{\sigma(y_j =1|X;W,w^n)}\\
&-(1-\beta) \sum_{j \in{Y^-}} \log{\sigma(y_j =0|X;W,w^n)},
\end{split}
\label{eq:sin-loss}
\end{equation}
\noindent then,
\begin{equation}
\centering
\mathcal{L}(W,w)=\sum_{n=1}^{N}\delta^n\times\ell^{n}(W,w^{n}),
\label{eq:sum-loss}
\end{equation}
\noindent where $W$ is the collection of all network parameters, $w^{n}$ is the parameter corresponding to the $n$-th output, and $\delta^{n}$ is a weight for each scale level. $\beta = |Y^-|/(|Y^+| + |Y^-|)$ and $(1-\beta)=|Y^+|/(|Y^+| + |Y^-|)$, where $|Y^+|$ and $|Y^-|$ denote the number of edge and non-edge pixels in the ground truth, respectively. See Section \ref{sub:impl-notes} for the hyper-parameters and optimizer details used for regularization in the training process.
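A minimal NumPy sketch of the per-output loss in Eq. \ref{eq:sin-loss} follows; this is an illustration under our own naming, assuming logits as input and a binary ground truth, not the released TensorFlow implementation:

```python
import numpy as np

def weighted_bce(logits, y_true, eps=1e-8):
    """Class-balanced cross-entropy for one output: beta weights the
    edge pixels since edges are a small minority of the image."""
    sigma = 1.0 / (1.0 + np.exp(-logits))   # sigmoid of the prediction
    y = (y_true > 0.5).astype(np.float64)   # binarized ground truth
    n_pos, n_neg = y.sum(), (1.0 - y).sum()
    beta = n_neg / (n_pos + n_neg)          # |Y-| / (|Y+| + |Y-|)
    loss = -(beta * y * np.log(sigma + eps)
             + (1.0 - beta) * (1.0 - y) * np.log(1.0 - sigma + eps))
    return loss.sum()
```

The total loss of Eq. \ref{eq:sum-loss} would then be a $\delta^{n}$-weighted sum of this quantity over the $N$ outputs.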
\section{Experimental Setup}
\label{sec:exp}
This section presents details on the datasets used for evaluating the proposed model, in particular the dataset and annotations (BIPED) generated for an accurate training of the proposed DexiNed. Additionally, details on the evaluation metrics and network's parameters are provided.
\subsection{Barcelona Images for Perceptual Edge Detection (BIPED)}
\label{sub:BIPED}
Another contribution of this paper is a carefully annotated edge dataset. It contains 250 outdoor images of 1280$\times$720 pixels each. These images have been carefully annotated by experts in the computer vision field, hence no redundant annotations have been considered. In spite of that, all the results have been cross-checked in order to correct possible mistakes or wrong edges. This dataset is publicly available as a benchmark for evaluating edge detection algorithms. The generation of this dataset is motivated by the lack of edge detection datasets; actually, there is just one dataset publicly available for the edge detection task (MDBD \cite{mely2016multicue}). Edges in the MDBD dataset have been annotated by different subjects, but have not been validated; hence, in some cases, the edges correspond to wrong annotations. Some examples of these missed or wrong edges can be appreciated in the ground truths presented in Fig. \ref{fig:diferentedatasets}; as a consequence, edge detector algorithms that find these missed edges are penalized during the evaluation. The level of detail of the annotations of the dataset presented in the current work can be appreciated by looking at the GT, see Figs. \ref{fig:multiscaleresult} and \ref{fig:illustrationcomparisons}. In order to do a fair comparison between the different state-of-the-art approaches proposed in the literature, the BIPED dataset has been used for training those approaches, which have later been evaluated in terms of ODS, OIS, and AP. From the BIPED dataset, 50 images have been randomly selected for testing and the remaining 200 for training and validation.
In order to increase the number of training images, a \textbf{data augmentation process} has been performed as follows: i) as BIPED images are of high resolution, they are split in half along the image width; ii) similarly to HED, each of the resulting images is rotated by 15 different angles and cropped by the inner oriented rectangle; iii) the images are horizontally flipped; and finally, iv) two gamma corrections have been applied (0.3030, 0.6060). This augmentation process results in 288 images for each of the 200 training images.
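Steps iii) and iv) of the augmentation above can be sketched with NumPy as follows (a hedged illustration with our own function names; the rotations and croppings of step ii) are omitted, and images are assumed normalized to $[0,1]$):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Apply a gamma correction to an image with values in [0, 1];
    the two gamma values used in the paper are 0.3030 and 0.6060."""
    return np.clip(img, 0.0, 1.0) ** gamma

def flip_and_gamma(img):
    """Sketch of steps iii) and iv): the original and its horizontal
    flip, each kept as-is and under the two gamma corrections."""
    variants = [img, img[:, ::-1]]          # original + horizontal flip
    out = []
    for v in variants:
        out.extend([v, gamma_correct(v, 0.3030), gamma_correct(v, 0.6060)])
    return out
```

Gamma values below 1 brighten the mid-tones, giving the network exposure-like variations of each scene.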
\subsection{Test Datasets}
\label{sub:test-data}
The datasets used to evaluate the performance of DexiNed are summarized below. There is just one dataset intended for edge detection, MDBD \cite{mely2016multicue}, while the remaining ones target objects' contour/boundary extraction/segmentation: CID \cite{grigorescu2003cid}, BSDS \cite{martin2001bsds300, arbelaez2011bsds500}, NYUD \cite{silberman2012NYUD} and PASCAL \cite{mottaghi2014PASCALcontext}.
\textit{MDBD:} The Multicue Dataset for Boundary Detection was created for psychophysical studies of object boundary detection in natural scenes by the early vision system. The dataset is composed of short binocular video sequences of natural scenes \cite{mely2016multicue}, containing 100 scenes in high definition ($1280\times720$). Each scene has 5 boundary annotations and 6 edge annotations. From the given dataset, 80 images are used for training and the remaining 20 for testing \cite{mely2016multicue}. In the current work, DexiNed has been evaluated using the first 20 images (the subset for edge detection).
\textit{CID:} This dataset has been presented in \cite{grigorescu2003cid}, together with a brain-biologically inspired edge detection technique. Its main limitation is that it just contains a set of 40 images with their respective ground truth edges. It should be highlighted that, in addition to the edges, the ground truth maps contain the contours of objects. In this case, DexiNed has been evaluated on the whole CID data.
\textit{BSDS:} The Berkeley Segmentation Dataset consists of 200 new test images \cite{arbelaez2011bsds500} in addition to the 300 images contained in BSDS300 \cite{martin2001bsds300}. In previous publications, BSDS300 was split into 200 images for training and 100 images for testing. Currently, the 300 images from BSDS300 are used for training and validation, while the remaining 200 images are used for testing. Every image in BSDS is annotated by at least $6$ annotators; this dataset is mainly intended for image segmentation and boundary detection. In the current work, both sets are evaluated: BSDS500 (200 test images) and BSDS300 (100 test images).
\textit{NYUD:} The New York University Dataset is a set of 1449 RGBD images covering 464 indoor scenes, intended for segmentation purposes. This dataset is split by \cite{Gupta_2013NYUDsplit} into three subsets---i.e., training, validation and testing sets. The testing set contains 654 images, while the remaining images are used for training and validation purposes. In the current work, although the proposed model was not trained with this dataset, the testing set has been used for evaluating the proposed DexiNed.
\textit{PASCAL:} Pascal-Context \cite{mottaghi2014PASCALcontext} is a popular dataset for segmentation; currently, most major DL methods for edge detection use this dataset for training and testing, both for edge and boundary detection purposes. This dataset contains 11530 annotated images; about $5\%$ of them (505 images) have been considered for testing DexiNed.
\subsection{Evaluation Metrics}
\label{sub:em}
The evaluation of an edge detector has been well defined since the pioneering work presented in \cite{ziou1998edgeOverview}.
Since BIPED has annotated edge-maps as GT, three evaluation metrics widely used in the community have been considered: fixed contour threshold (ODS), per-image best threshold (OIS), and average precision (AP). The F-measure (F) \cite{arbelaez2011bsds500} of ODS and OIS is considered, where $F=\frac{2\times Precision \times Recall}{Precision + Recall}$.
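The F-measure above is simply the harmonic mean of precision and recall, which can be computed as:

```python
def f_measure(precision, recall):
    """F-measure used for ODS/OIS: the harmonic mean of precision
    and recall, in [0, 1]; returns 0 when both inputs are 0."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

Being a harmonic mean, the score is dominated by the weaker of the two quantities, so a detector cannot compensate poor recall with high precision or vice versa.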
\subsection{Implementation Notes}
\label{sub:impl-notes}
The implementation is performed in TensorFlow \cite{abadi2016tensorflow}. The model converges after 150k iterations with a batch size of 8, using the Adam optimizer and a learning rate of $10^{-4}$. The training process takes around 2 days on a TITAN X GPU with color images of size $400\times400$ as input. The weights of the fusion layer are initialized as $\frac{1}{N-1}$ (see Sec. \ref{sec:pa-loss} for $N$). After a hyper-parameter search to reduce the number of parameters, the best performance was obtained using kernel sizes of $3\times3$, $1\times1$ and $s\times s$ on the different convolutional layers of Dexi and UB.
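The fusion-layer initialization above can be made concrete with a one-line helper (our own sketch, not the released code): with $N$ outputs, the fusion layer combines the $N-1$ intermediate edge-maps, each starting with an equal weight.

```python
def fusion_weight_init(n_outputs):
    """Initial weights of the 1x1 fusion layer: each of the N-1
    intermediate edge-maps contributes equally, i.e. 1/(N-1)."""
    n_inputs = n_outputs - 1  # the fused map itself is excluded
    return [1.0 / n_inputs] * n_inputs
```

With this initialization the fused map starts as the plain average of the intermediate predictions, and training then learns how much each scale should contribute.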
\section{Experimental Results}
\label{sec:res}
\begin{figure*}
\begin{tabular}{ccc}
\includegraphics[width=0.30\textwidth]{figs/dexinedvs.png} &
\includegraphics[width=0.30\textwidth]{figs/outputs.png} &
\includegraphics[width=0.30\textwidth]{figs/biedv2comp.png} \\
(a) & (b) & (c)\\
\end{tabular}
\caption{Precision/recall curves on the BIPED dataset. (a) DexiNed upsampling versions. (b) The outputs of DexiNed in the testing stage; the 8 outputs are considered. (c) Comparison of DexiNed with other DL based edge detectors.}
\label{fig:curves}
\end{figure*}
\begin{table}\small
\centering
\begin{tabular}{ll}
\setlength\tabcolsep{0.7pt}
\begin{tabular}{c|c|c|c}
\hline
Outputs& ODS& OIS& AP\\
\hline\hline
Output 1 ($\hat{y}_{1}$) & .741&.760&.162 \\
Output 2 ($\hat{y}_{2}$)& .766&.803&.817 \\
Output 3 ($\hat{y}_{3}$)& .828&.846&.838 \\
Output 4 ($\hat{y}_{4}$)& .844&.858&.843\\
Output 5 ($\hat{y}_{5}$)& .841&.853&.776\\
Output 6 ($\hat{y}_{6}$)& .842&.852&.805\\
Fused ($\hat{y}_{f}$)& .857&.861&.805\\
Averaged &\textbf{.859}&\textbf{.865}&\textbf{.905}\\
\hline
\end{tabular}
&\hspace{-0.2cm}
\setlength\tabcolsep{1.7pt}
\begin{tabular}{c|c|c|c}
\hline
Methods & ODS& OIS& AP\\
\hline\hline
SED\cite{Akbarinia2018SEDext} &.717&.731&.756 \\
HED\cite{xie2017hed} & .829&.847&.869 \\
CED\cite{wang2017ced} & .795&.815&.830\\
RCF\cite{liu2019RCFext} & .843&.859&.882\\
BDCN\cite{he2019edgeBDCN} & .839&.854&.887\\
DexiNed-f & .857&.861&.805\\
DexiNed-a &\textbf{.859}&\textbf{.867}&\textbf{.905}\\
\hline
\end{tabular}
\end{tabular}\\
\vspace{0.1cm}
\hspace{1cm} (a) \hspace{3.5cm} (b)\\
\caption{(a) Quantitative evaluation of the 8 predictions of DexiNed on the BIPED test dataset. (b) Comparison between the state-of-the-art methods trained and evaluated with BIPED.}
\label{tab:BIPED}
\end{table}
This section presents quantitative and qualitative evaluations conducted with the metrics presented in Sec. \ref{sec:exp}. Since the proposed DL architecture demands several experiments to be validated, DexiNed has been carefully tuned until reaching its final version.
\begin{table}
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline
Dataset& Methods& ODS & OIS & AP\\
\hline\hline
\multicolumn{5}{c}{Edge detection dataset}\\
\hline
MDBD\cite{mely2016multicue} &HED\cite{xie2017hed}&.851&\textbf{.864}&.890 \\
&RCF\cite{liu2017rcf}&.857&.862&-\\
&DexiNed-f&.837&.837&.751\\
&DexiNed-a&\textbf{.859}&\textbf{.864}&\textbf{.917}\\
\hline
\multicolumn{5}{c}{Contour/boundary detection/segmentation datasets}\\
\hline
\hline
CID\cite{grigorescu2003cid} &SCO\cite{yang2015SCO}&.58&.64&.61\\
&SED\cite{Akbarinia2018SEDext}&\textbf{.65}&\textbf{.69}&.68 \\
&DexiNed-f&.65&.67&.59\\
&DexiNed-a&\textbf{.65}&\textbf{.69}&\textbf{.71}\\
\hline
BSDS300\cite{martin2001bsds300}&gPb\cite{arbelaez2011bsds500}&.700&.720&.660\\
&SED\cite{Akbarinia2018SEDext}&.69&.71&.71\\
&DexiNed-f&.707&.723&.52\\
&DexiNed-a&\textbf{.709}&\textbf{.726}&\textbf{.738}\\
\hline
BSDS500\cite{arbelaez2011bsds500}&HED\cite{xie2017hed}&.790&.808&.811\\
&RCF\cite{liu2017rcf}&\textbf{.806}&\textbf{.823}&-\\
&CED\cite{wang2017ced}&.803&.820&\textbf{.871}\\
&SED\cite{Akbarinia2018SEDext}&.710&.740&.740\\
&DexiNed-f&.729&.745&.583\\
&DexiNed-a&.728&.745&.689\\
\hline
NYUD\cite{silberman2012NYUD}&gPb\cite{arbelaez2011bsds500}&.632&.661&.562\\
&HED\cite{xie2017hed}&.720&\textbf{.761}&\textbf{.786} \\
&RCF\cite{liu2017rcf}&\textbf{.743}&.757&-\\
&DexiNed-f&.658&.674&.556\\
&DexiNed-a&.602&.615&.490\\
\hline
PASCAL\cite{mottaghi2014PASCALcontext}&CED\cite{wang2017ced}&\textbf{.726}&\textbf{.750}&\textbf{.778} \\
&HED\cite{xie2017hed}&.584&.592&.443\\
&DexiNed-f&.431&.458&.274\\
&DexiNed-a&.475&.497&.329\\
\hline
\end{tabular}
\end{center}
\caption{Quantitative results of \textbf{DexiNed trained on BIPED} and \textbf{the state-of-the-art methods trained with the corresponding datasets} (values from other approaches come from the corresponding publications).}
\label{tab:alldata}
\end{table}
\subsection{Quantitative Results}
Firstly, in order to select the upsampling process that achieves the best results, an empirical evaluation has been performed, see Fig. \ref{fig:curves}(a). The evaluation consists in conducting the same experiment with the three upsampling methods: \textbf{DexiNed-bdc} refers to upsampling performed by a transposed convolution initialized with a bi-linear kernel; \textbf{DexiNed-dc} uses a transposed convolution with trainable kernels; and \textbf{DexiNed-sp} uses sub-pixel convolution. According to the F-measure, the three versions of DexiNed reach similar results; however, when analyzing the curves in Fig. \ref{fig:curves}(a), a small advantage in the performance of DexiNed-dc appears. As a conclusion, the DexiNed-dc upsampling strategy is selected; from now on, all the evaluations performed in this section are obtained using DexiNed-dc upsampling, and for simplicity of notation just the term DexiNed is used instead of DexiNed-dc.
Figure \ref{fig:curves}(b) and Table \ref{tab:BIPED}(a) present the quantitative results reached by each DexiNed edge-map prediction. The results of the eight predicted edge-maps are depicted; the best quantitative results, corresponding to the fused (DexiNed-f) and averaged (DexiNed-a) edge-maps, are selected for the comparisons. Similarly to \cite{xie2017hed}, the average of all predictions (DexiNed-a) gets the best results in the three evaluation metrics, followed by the prediction generated in the fusion layer. Note that the edge-maps predicted from block 2 to block 6 reach results similar to DexiNed-f; this is due to the proposed skip-connections. For a qualitative illustration, Fig. \ref{fig:multiscaleresult} presents all the edge-maps predicted by the proposed architecture. Qualitatively, the result of DexiNed-f is considerably better than the one of DexiNed-a (see the illustration in Fig. \ref{fig:multiscaleresult}); however, according to Table \ref{tab:BIPED}(a), DexiNed-a produces slightly better quantitative results than DexiNed-f. As a conclusion, both approaches (fused and averaged) reach similar results; throughout this manuscript, whenever the term DexiNed is used it corresponds to DexiNed-f.
\begin{figure*}
\centering
\includegraphics[width=0.93\textwidth]{figs/quantitative.pdf}\\
\hspace{0.3cm}\textit{Image} \hspace{1.3cm} \textit{GT} \hspace{1.3cm} \textit{CED \cite{wang2017ced}} \hspace{1.2cm} \textit{HED \cite{xie2017hed}} \hspace{0.9cm} \textit{RCF \cite{liu2017rcf}} \hspace{0.7cm}
\textit{BDCN \cite{he2019edgeBDCN}} \hspace{0.6cm} \textit{DexiNed}\\
\caption{Results from different edge detection algorithms trained and evaluated in BIPED dataset.}
\label{fig:illustrationcomparisons}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.93\textwidth]{figs/allimgs.pdf}\\
\hspace{1cm}\textit{Image} \hspace{1.6cm} \textit{GT} \hspace{1.6cm} \textit{DexiNed} \hspace{1.6cm} \textit{Image} \hspace{1.6cm} \textit{GT} \hspace{1.6cm} \textit{DexiNed}\\
\caption{Results from the proposed approach using different datasets (note that DexiNed has been trained just with BIPED).}
\label{fig:diferentedatasets}
\end{figure*}
Table \ref{tab:BIPED}(b) presents a comparison between DexiNed and state-of-the-art techniques on edge and boundary detection. In all cases the BIPED dataset has been considered, both for training and evaluating the DL based models (i.e., HED \cite{xie2017hed}, RCF \cite{liu2017rcf}, CED \cite{wang2017ced}, and BDCN \cite{he2019edgeBDCN}); the training process for each model took about two days. As can be appreciated from Table \ref{tab:BIPED}(b), DexiNed-a reaches the best results in all evaluation metrics; actually, both DexiNed-a and DexiNed-f obtain the best results in almost all evaluation metrics. The F-measure curves comparing these approaches are presented in Fig. \ref{fig:curves}(c); it can be appreciated how, for Recall above 75\%, DexiNed gets the best results. Illustrations of the edges obtained with DexiNed and the state-of-the-art techniques are depicted in Figure \ref{fig:illustrationcomparisons} for four images from the BIPED dataset. Although RCF and BDCN, the second best ranked algorithms in Table \ref{tab:BIPED}(b), obtain quantitative results similar to DexiNed, DexiNed predicts qualitatively better results. Note that the proposed approach was trained from scratch, without pre-trained weights.
The main objective of DexiNed is to obtain a precise edge-map from any dataset (RGB or grayscale). Therefore, all the datasets presented in Sec. \ref{sub:test-data} have been considered, split into two categories for a fair analysis: one for \textbf{edge detection} and the other for \textbf{contour/boundary detection/segmentation}. Edge-maps obtained with state-of-the-art methods are compared in Table \ref{tab:alldata}. It should be noted that for each dataset the methods compared with DexiNed have been trained using images from that dataset, while DexiNed is trained just once, with BIPED. It can be appreciated that DexiNed obtains the best performance on the MDBD dataset. DexiNed is also evaluated on CID and BSDS300, even though these datasets contain only a few images, which are not enough for training other approaches (e.g., HED, RCF, CED). Regarding BSDS500, NYUD and PASCAL, DexiNed does not reach the best results, since these datasets were not intended for edge detection; hence the evaluation metrics penalize edges detected by DexiNed. To highlight this situation, Fig. \ref{fig:diferentedatasets} depicts results from Table \ref{tab:alldata}: two samples from each dataset are considered, selected according to the best and worst F-measure. As shown in Fig. \ref{fig:diferentedatasets}, when the image is fully annotated the score reaches around 100\%; otherwise it reaches less than 50\%.
\subsection{Qualitative Results}
\label{sub:qualr}
As highlighted in the previous section, when deep learning based edge detection approaches are evaluated on datasets intended for object boundary detection or object segmentation, the results are penalized. To support this claim, Fig. \ref{fig:diferentedatasets} presents two predictions (the best and the worst results according to F-measure) from each dataset used for evaluating the proposed approach (except BIPED, which has been used for training). The F-measure obtained on the three most used datasets (i.e., BSDS500, BSDS300 and NYUD) reaches over 80$\%$ in those cases where images are fully annotated; otherwise, the F-measure reaches about 30$\%$. However, when the edge dataset (MDBD \cite{mely2016multicue}) is considered, the worst F-measure still reaches over 75$\%$. As a conclusion, edge detection and contour/boundary detection are different problems that need to be tackled separately when a DL based model is considered.
\section{Conclusions}
\label{sec:con}
A deep structured model (DexiNed) for image edge detection has been proposed. To the best of our knowledge, it is the first DL based approach able to generate thin edge-maps. An extensive set of experimental results and comparisons with state-of-the-art approaches is provided, showing the validity of DexiNed. Even though DexiNed is trained just once (with BIPED), it outperforms the state-of-the-art approaches when evaluated on other edge oriented datasets. A carefully annotated dataset for edge detection has been generated and is shared with the community. Future work will focus on tackling the contour and boundary detection problems with the proposed architecture and approach.
\section*{Acknowledgment}
This work has been partially supported by: the Spanish Government under Project TIN2017-89723-P; the ``CERCA Programme / Generalitat de Catalunya" and the ESPOL project PRAIM (FIEC-09-2015). The authors gratefully acknowledge the support of the CYTED Network: ``Ibero-American Thematic Network on ICT Applications for Smart Cities'' (REF-518RT0559) and the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. Xavier Soria has been supported by Ecuador government institution SENESCYT under a scholarship contract 2015-AR3R7694.
\balance
{\small
\bibliographystyle{ieee}
\section{Background and Motivation}
\label{sec_background}
\subsection{DNNs for Streaming Video Analytics}
DNNs have become a core element of various video processing tasks such as frame classification, human action recognition, object detection, and face recognition. Though accurate, DNNs are computationally expensive, requiring significant CPU and memory resources. As a result, these DNNs are often too slow when running on mobile devices and become the latency bottleneck in video analytics systems. Huynh \textit{et al.}\xspace~\cite{deepmon} experimented with the 16-layer VGG~\cite{simonyan2014very} on the Samsung Galaxy S7 and noted that classification of a single image takes as long as 644 ms, leading to less than 2 fps for continuous classification. Motivated by this observation, we explore in {ApproxNet}\xspace how we can make DNN-based video analytics pipelines more efficient through content-aware approximate computation {\em within the neural network}.
{\noindent\bf ResNet}:
Deep DNNs are typically hard to train due to the vanishing gradient problem~\cite{resnet}. ResNet solved this problem by introducing a shortcut identity connection between layers, which helps achieve at least the same accuracy when further increasing the number of network layers. The unit of such connected layers is called a ResNet block. We leverage this key idea, that a deeper model produces no higher error than its shallower counterpart, to construct an architecture that provides more accurate execution as it proceeds deeper into the DNN.
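The shortcut property can be sketched in a few lines: if the residual branch contributes nothing, the block degenerates to the identity, so stacking more blocks cannot increase error. This is a simplified numerical sketch, not the full conv/batch-norm block:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    """Sketch of a ResNet-style block: output = activation(f(x)) + x.

    If `weight` is zero the block reduces to the identity, so a
    deeper stack can never do worse than its shallower counterpart.
    """
    fx = activation(x @ weight)   # stand-in for conv + batch-norm
    return fx + x                 # shortcut identity connection

x = np.ones((1, 4))
# Zero weights: the block passes the input through unchanged.
out = residual_block(x, np.zeros((4, 4)))
print(np.allclose(out, x))  # True
```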
{\noindent{\bf Spatial Pyramid Pooling (SPP)}~\cite{spp}}:
Popular DNN models, including ResNet, consist of convolutional and max-pooling (CONV) layers and fully-connected (FC) layers, and the shape of the input image is fixed. Changing the input shape in a CNN typically requires re-designing the architecture of the CNN. SPP is a special layer that eliminates this constraint. The SPP layer is added at the end of the CONV layers and provides the following FC layers with a fixed-dimensional feature representation by pooling the CONV layer output with bins whose shapes are proportional to the input shape. We use an SPP layer to turn the input shape into an approximation knob.
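A minimal sketch of spatial pyramid pooling, assuming max-pooling and an illustrative pyramid of levels $(1, 2, 4)$: the bin boundaries scale with the input, so the output length is fixed regardless of the feature-map size:

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling sketch: max-pool a CxHxW feature map
    into bins whose sizes scale with the input, so the output length
    (C * sum(n*n for n in levels)) is fixed for any H, W."""
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Bin boundaries proportional to the input shape.
        hs = np.linspace(0, h, n + 1, dtype=int)
        ws = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                patch = feature_map[:, hs[i]:hs[i+1], ws[j]:ws[j+1]]
                pooled.append(patch.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Two different input shapes yield the same output dimensionality.
a = spp(np.random.rand(8, 13, 13))
b = spp(np.random.rand(8, 7, 9))
print(a.shape == b.shape == (8 * (1 + 4 + 16),))  # True
```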
{\noindent\bf Trading-off accuracy for inference latency}:
DNNs can have several variants due to different configurations, and these variants yield different accuracies and latencies. But these variants are trained and run independently and cannot be switched efficiently at inference time to meet differing accuracy or latency requirements. For example, MCDNN~\cite{mcdnn} sets up an ensemble of (up to 68) model variants to satisfy different latency/accuracy/cost requirements. MSDNet~\cite{huang2017multi} enables five early exits in a {\em single} model but does not evaluate on streaming video with any variable content or contention situations. Hence, we set out to design a single-model DNN that is capable of handling the accuracy-latency trade-off at inference time and guarantees our video analytics system's performance under variable content and runtime conditions.
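The trade-off selection described above amounts to picking, at inference time, the most accurate variant (or branch) that fits the latency budget. The following sketch uses invented names and illustrative numbers, not measured values:

```python
def pick_variant(variants, latency_budget_ms):
    """Sketch of inference-time branch selection: among variants that
    fit the latency budget, return the most accurate one.

    `variants` is a list of (name, latency_ms, accuracy) tuples."""
    feasible = [v for v in variants if v[1] <= latency_budget_ms]
    if not feasible:
        return min(variants, key=lambda v: v[1])  # fall back to fastest
    return max(feasible, key=lambda v: v[2])

variants = [("small", 15.0, 0.66), ("medium", 28.0, 0.79), ("large", 64.0, 0.86)]
print(pick_variant(variants, 33.3)[0])  # "medium"
```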
\subsection{Content-aware Approximate Computing}
IRA~\cite{laurenzano2016input} and VideoChef~\cite{xu2018videochef} introduced the notion of content-aware approximation and applied the idea to image and video processing pipelines, respectively. These works showed for the first time how to tune approximation knobs as content characteristics change, {e.g.,}\xspace as the video scene becomes more complex. In particular, IRA performs approximation targeting individual images, while VideoChef exploits temporal similarity among frames in a video to further optimize the accuracy-latency trade-off. However, these works do not perform approximation for ML-based inference, which comprises the dominant form of video analytics. In contrast, we apply approximation to the DNN model itself, with the intuition that, depending on the complexity of the video frame, we want to feed an input of a different shape and produce output at a different layer depth to achieve the target accuracy.
\subsection{Contention-aware Scheduling}
Managing the resource contention of multiple jobs on high-performance clusters is a very active area of work. Bubble-Up~\cite{mars2011bubble}, Bubble-Flux~\cite{yang2013bubble}, and Pythia~\cite{xu2018pythia} develop characterization methodologies to predict the performance degradation of latency-sensitive applications due to shared resources in the memory subsystem. SMiTe~\cite{zhang2014smite} and Paragon~\cite{delimitrou2013paragon} further extend such concurrent resource contention scenarios to SMT processors and to thousands of different unknown applications, respectively. In contrast, we apply contention-aware approximation to the DNN model on embedded and mobile devices, and consider the three major sources of contention: CPU, GPU, and memory bandwidth.
\section{Conclusion}
\label{sec_conclusion}
There is a push to support streaming video analytics close to the source of the video, such as IoT devices, surveillance cameras, or AR/VR gadgets. However, state-of-the-art heavy DNNs cannot run on such resource-constrained devices. Further, the runtime conditions for the DNN's execution may change due to changes in the resource availability on the device, the content characteristics, or the user's requirements. Although several works create lightweight DNNs for resource-constrained clients, none of these can adapt to changing runtime conditions. We introduced {ApproxNet}\xspace, a video analytics system for embedded or mobile clients. It enables novel dynamic approximation techniques to achieve the desired inference latency and accuracy trade-off under changing runtime conditions. It achieves this by enabling two approximation knobs within a single DNN model, rather than creating and maintaining an ensemble of models. It then estimates the effect on latency and accuracy of changing content characteristics and changing levels of contention. We show that {ApproxNet}\xspace can adapt seamlessly at runtime to such changes to provide low and stable latency for object classification on a video stream. We quantitatively compare its performance to ResNet, MCDNN, MobileNets, NestDNN, and MSDNet, five state-of-the-art object classification DNNs.
\section{Discussion}
\label{sec_discussion}
\noindent \textbf{Training the approximation-enabled DNN} of {ApproxNet}\xspace may take longer than training conventional DNNs, since at each iteration of training, different outports and input shapes try to minimize their own softmax loss and may therefore adjust the internal weights of the DNN in conflicting ways. In our experiments with the VID dataset, we observe a training time of around 3 days on our evaluation edge server, described in Section~\ref{sec_platform}, compared to 1 day for a baseline ResNet-34 model. However, training being an offline process, this time is of less concern; moreover, training can be sped up by using one of various actively researched techniques for optimizing training, such as~\cite{le2011optimization}.
\noindent \textbf{Generalizing the approximation-enabled DNN to other architectures}. The shape and depth knobs are general to all CNN-based architectures. In principle, we can attach an outport (composed of an SPP layer to adjust the shape and a fully-connected layer to generate the classification) to any layer. The set of legitimate input shapes can be tricky to determine and depends on the specific architecture. Considering the training cost and the exploration space, we select 7 input shapes in multiples of 16 and 6 outports, at mostly equally spaced positions among the layers. More input shapes and outports enable finer-grained accuracy-latency trade-offs; the granularity of such a trade-off space depends on the design goal.
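The resulting knob space can be enumerated directly; the particular shape values below are illustrative assumptions (the text only fixes "7 shapes in multiples of 16" and "6 outports"):

```python
# Illustrative enumeration of the two approximation knobs; the exact
# shape values and outport positions are assumptions, not from the text.
shapes = [16 * k for k in range(5, 12)]      # 7 input shapes, multiples of 16
outports = list(range(1, 7))                 # 6 outports along the layers
branches = [(s, o) for s in shapes for o in outports]
print(len(branches))  # 42 candidate approximation branches
```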
\section{Evaluation}
\label{sec_evaluation}
\subsection{Evaluation Platforms} \label{sec_platform}
We evaluate {ApproxNet}\xspace by running it on an NVIDIA Jetson TX2~\cite{tx2}, which includes 256 NVIDIA Pascal CUDA cores, a dual-core Denver CPU, and a quad-core ARM CPU, with 8 GB of unified memory~\cite{unifor-mem} shared between CPU and GPU. The specification of this board is close to what is available in today's high-end smartphones such as the Samsung Galaxy S20 and Apple iPhone 12. We train the approximation-enabled DNN on a server with an NVIDIA Tesla K40c GPU with 12 GB of dedicated memory and an Intel i7-2600 CPU with 24 GB of RAM. On both the embedded device and the training server, we install Ubuntu 16.04 and TensorFlow v1.14.
\subsection{Datasets, Task, and Metrics}
\subsubsection{ImageNet VID dataset}
We evaluate {ApproxNet}\xspace on the video object classification task using the ILSVRC 2015 VID dataset~\cite{ILSVRC2015_VID}. Although the dataset was originally intended for object detection, we convert it so that the task is to classify each frame into one of the ground truth object categories. If multiple objects exist, the classification is considered correct if it matches any one of the ground truth classes; this rule applies to both {ApproxNet}\xspace and the baselines. According to our analysis, 89\% of the video frames are single-object-class frames, so the accuracy remains meaningful under this conversion.
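The match-any rule above is simple to state as code; this is our own sketch with invented labels, not the evaluation harness used in the paper:

```python
def frame_correct(top_preds, ground_truth_classes):
    """A frame counts as correct if any predicted label (e.g. top-5)
    matches any of the frame's ground-truth object classes."""
    return bool(set(top_preds) & set(ground_truth_classes))

print(frame_correct(["car", "bus", "dog", "cat", "bike"], ["bus", "person"]))  # True
print(frame_correct(["dog", "cat", "bike", "boat", "cow"], ["bus"]))           # False
```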
For training, the ILSVRC 2015 VID training set contains many redundant video frames, which leads to over-fitting. To alleviate this problem, we follow the best practice in~\cite{kang2017t}: the VID training set is sub-sampled every 180 frames and the resulting subset is mixed with the ILSVRC 2014 detection (DET) training set to construct a new dataset with DET:VID=2:1. We use 90\% of this video dataset to train {ApproxNet}\xspace's DNN model and keep aside the other 10\% as a validation set to fine-tune {ApproxNet}\xspace (offline profiling). To evaluate {ApproxNet}\xspace's system performance, we use the ILSVRC 2015 VID validation set; we refer to this as the ``test set'' throughout the paper.
\subsubsection{ImageNet IMG dataset}
We also use the ILSVRC 2012 image classification dataset~\cite{deng2009imagenet} to evaluate the accuracy-latency trade-off of our single DNN. We use 10\% of the ILSVRC training set as our training set, the first 50\% of the validation set as our validation set to fine-tune {ApproxNet}\xspace, and the remaining 50\% of the validation set as our test set. The training-validation-test choices in both datasets follow common practice and there is no overlap between the three. \textbf{Throughout the evaluation, we use the ImageNet VID dataset by default, unless we explicitly mention the use of the ImageNet IMG dataset.}
\subsubsection{Metrics}
We use latency and top-5 accuracy as the two metrics. The latency includes the overheads of the respective solutions, including the switching overhead and the execution time of the FCE, the RCE, and the scheduler.
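For reference, top-5 accuracy can be computed as follows; this is a generic sketch of the standard metric (the toy logits are ours), not the paper's evaluation code:

```python
import numpy as np

def top5_accuracy(logits, labels):
    """Top-5 accuracy sketch: a sample is correct if its true label is
    among the 5 highest-scoring classes."""
    top5 = np.argsort(logits, axis=1)[:, -5:]
    hits = [label in row for row, label in zip(top5, labels)]
    return float(np.mean(hits))

logits = np.array([
    [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 9.0, 0.8, 0.7, 0.0],  # true class 6: in top-5
    [9.0, 8.0, 7.0, 6.0, 5.0, 0.1, 0.2, 0.3, 0.4, 0.5],  # true class 9: not in top-5
])
labels = [6, 9]
print(top5_accuracy(logits, labels))  # 0.5
```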
\subsection{Baselines}\label{sec:baseline}
We start with the evaluation of the static models, which lack the ability to adapt, because we want to reveal the relative accuracy-latency trade-offs in the traditional setting, compared to the single approximation branches in {ApproxNet}\xspace. The baselines for this static experiment are model variants designed for different accuracy and latency goals, {i.e.,}\xspace ResNet~\cite{resnet}, MobileNets~\cite{howard2017mobilenets}, and MSDNets~\cite{huang2017multi}, for which we use 5 execution branches within a single model. We use the ILSVRC IMG dataset to evaluate these static models, since it is larger and has more classes.
We then proceed with the evaluation on streaming videos under varying resource contention. This brings two additional aspects into the evaluation: (1) how the video frames are processed in a timely and streaming manner, as frames in this case cannot be batched like images, and (2) how the technique can meet the latency budget in the presence of resource contention from other applications, which can raise the processing latency.
The baselines we use are: MCDNN~\cite{mcdnn} as a representative of the multi-model approach, and MSDNets, a representative of the single-model approach (with multiple execution branches).
We also compare the switching overhead of our single-model design with the multi-capacity models in NestDNN~\cite{fang2018nestdnn}.
Unfortunately, we were not able to use BranchyNet~\cite{teerapittayanon2016branchynet}, because its DNN is not designed for the large images in the ImageNet dataset. BranchyNet was evaluated on the MNIST and CIFAR datasets in its paper, and it provides no guidance on the parameter settings for training, making it impractical to use on different datasets.
The details of each baseline are as follows.
\noindent \textbf {ResNet}: ResNet is the base DNN architecture of many state-of-the-art image and video object classification models, with superior accuracy to other architectures. While it was originally meant for server-class platforms, as resources on mobile devices increase, ResNet is also being used on such devices~\cite{lu2017modeling, zhang2018shufflenet, wang2018pelee}. We use ResNet with 18 layers (ResNet-18) and with 34 layers (ResNet-34) as base models. We modify the last FC layer to classify into the 30 labels of the VID dataset and fine-tune the whole model. ResNet-34 serves as the reference providing the upper bound of the target accuracy. ResNet architectures with more than 34 layers (\cite{resnet} has considered up to 152 layers) become impractical, as they are too slow to run on resource-constrained mobile devices and their memory consumption exceeds the memory available on the board.
\noindent \textbf {MobileNets}: This refers to 20 model variants (trained by the original authors) specifically designed for mobile devices ($\alpha = 1, 0.75, 0.5, 0.35$; input shape $= 224, 192, 160, 128, 96$).
\noindent \textbf {MSDNets}: This refers to the 5 static execution branches that meet different latency budgets in MSDNet's anytime evaluation scenario. We have enhanced MSDNets with a scheduler that chooses among the static branches under changing runtime conditions. The former is compared with the static models on the IMG dataset and the latter with the adaptive systems on the VID dataset. For simplicity, we reuse the same term (MSDNets) to refer to both.
\noindent \textbf {NestDNN}: This solution provides multi-capacity models with the ResNet-34 architecture by varying the number of filters in each convolutional layer. We compose 9 descendant models, where (1) the seed model, i.e., the smallest model, reduces the number of filters of all convolutional layers uniformly by 50\%, (2) the largest descendant model is exactly the 34-layer ResNet, and (3) the remaining descendant models reduce the number of filters of all convolutional layers uniformly by ratios equally spaced between 50\% and 100\%. We only compare the switching overhead of NestDNN among its descendant models with {ApproxNet}\xspace, because NestDNN is not open-sourced and the paper does not provide enough details about the training process or the architecture.
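The 9 capacity ratios described above follow directly from equal spacing between 50\% and 100\%; this short sketch just makes the arithmetic explicit:

```python
# Capacity ratios for the 9 NestDNN-style descendant models described
# above: uniformly spaced between 50% (seed) and 100% (full ResNet-34).
ratios = [0.5 + i * (1.0 - 0.5) / 8 for i in range(9)]
print([round(r, 3) for r in ratios])
# [0.5, 0.562, 0.625, 0.688, 0.75, 0.812, 0.875, 0.938, 1.0]

# Filters per layer scale with the ratio, e.g. for a 64-filter layer:
print([int(64 * r) for r in (0.5, 0.75, 1.0)])  # [32, 48, 64]
```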
\noindent \textbf {MCDNN}: We change the base model in MCDNN from VGG to the more recent ResNet for a fairer comparison. This system chooses between MCDNN-18 and MCDNN-34 depending on the accuracy requirement. MCDNN-18 uses two models: a specialized ResNet-18 followed by the generic ResNet-18. The specialized ResNet-18 is the same as ResNet-18 except for the last layer, which is modified to classify only the most frequent $N$ classes. MCDNN's key insight is that most inputs belong to the top $N$ classes, which can be handled by a reduced-complexity DNN. If the top-1 prediction label of the specialized model is not among the top $N$ frequent classes, then the generic model processes the input again and outputs the final predictions; otherwise, MCDNN uses the top-5 prediction labels of the specialized model as its final predictions. We set $N=20$, which covers 80\% of the training video frames in the VID dataset.
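The cascade logic just described can be sketched as follows; the callables and labels are stand-ins we invented for illustration, not MCDNN's actual implementation:

```python
def mcdnn_classify(frame, specialized, generic, frequent_classes):
    """Sketch of the MCDNN-style cascade: run the cheap specialized
    model first; fall back to the generic model only when the top-1
    prediction is not one of the N most frequent classes.

    `specialized` and `generic` are stand-in callables returning a
    ranked list of labels (top-5)."""
    top5 = specialized(frame)
    if top5[0] in frequent_classes:
        return top5                      # cheap path covers this frame
    return generic(frame)                # rare class: run the full model

frequent = {"car", "dog", "cat"}
specialized = lambda f: ["car", "dog", "bus", "cat", "bike"]
generic = lambda f: ["train", "bus", "car", "dog", "cat"]
print(mcdnn_classify(None, specialized, generic, frequent)[0])  # "car"
```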
\subsection{Typical Usage Scenarios}
\label{sec:scenarios}
We use a few usage scenarios to compare the protocols, although {ApproxNet}\xspace can support finer-grained user requirements in latency or accuracy.
\begin{itemize}
\item \textbf{High accuracy, High latency (HH)} refers to the scenario where {ApproxNet}\xspace has less than 10\% (relative) accuracy loss from ResNet-34, our most accurate single model baseline. Accordingly, the runtime latency is also high to achieve such accuracy.
\item \textbf{Medium accuracy, Medium latency (MM)} has an accuracy loss less than 20\% from our base model ResNet-34.
\item \textbf{Low accuracy, Low latency (LL)} can tolerate an accuracy loss of up to 30\% with a speed up in its inferencing.
\item \textbf{Real time (RT)} scenario, by default, means the processing pipeline should keep up with a 30 fps rate, {i.e.,}\xspace a maximum latency of 33.33 ms. This scenario is selected if no requirement is specified.
\end{itemize}
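The scenario tolerances above translate into accuracy floors relative to the ResNet-34 baseline; a minimal sketch (the function name is ours; the baseline accuracy is taken from Table~(d)):

```python
# Relative accuracy-loss tolerances for the usage scenarios above.
tolerance = {"HH": 0.10, "MM": 0.20, "LL": 0.30}

def min_required_accuracy(scenario, baseline_accuracy):
    """Lowest acceptable accuracy for a scenario, given the baseline."""
    return baseline_accuracy * (1.0 - tolerance[scenario])

base = 0.8586  # ResNet-34 top-5 accuracy from the reference table
print(round(min_required_accuracy("MM", base), 4))  # 0.6869
```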
\begin{figure*}[b]
\centering
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=0.99\textwidth]{Figures/TradeOff_IMGMN_allshape_img_20180723_val.png}
\caption{Pareto frontier for test accuracy and inference latency on the ImageNet IMG dataset for ApproxNet compared to ResNet and MobileNets, the latter being specialized for mobile devices.}
\label{trade_off_img_dataset}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{Figures/SystemComparison_VID_Caw_20181107.png}
\caption{Comparison of system performance in typical usage scenarios. ApproxNet is able to meet the accuracy requirement for all three scenarios. User requirements are shown in dashed lines.}
\label{trade_off_system}
\end{minipage}
\end{figure*}
\begin{table}[t]
\centering
\caption{Averaged accuracy and latency of the ABs on the Pareto frontier in ApproxNet and of the baselines, on the validation set of the VID dataset. Note that the accuracy on the validation set can be higher due to its similarity with the training set; thus, the validation accuracy is only used to construct the lookup tables in {ApproxNet}\xspace and the baselines and does not reflect the true performance.}
\label{table:LUT_VID}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(a) Averaged accuracy and latency performance in ApproxNet.}
\begin{tabular}{lcccc}
\hline
Usage Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (32 fps) & 128x128x3 & 24 & 31.42 ms & 82.12\% \\
& 160x160x3 & 20 & 31.33 ms & 80.81\% \\
& 128x128x3 & 20 & 27.95 ms & 79.35\% \\
& 112x112x3 & 20 & 26.84 ms & 78.28\% \\
{\bf MM} (56 fps) & 128x128x3 & 12 & 17.97 ms & 70.23\% \\
& 112x112x3 & 12 & 17.70 ms & 68.53\% \\
& 96x96x3 & 12 & 16.78 ms & 67.98\% \\
{\bf LL} (62 fps) & 80x80x3 & 12 & 16.14 ms & 66.39\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(b) Lookup table in MCDNN's scheduler.}
\begin{tabular}{lcccc}
\hline
Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (11 fps) & 224x224x3 & 34 & 88.11 ms & 77.71\% \\
{\bf MM/LL} (17 fps) & 224x224x3 & 18 & 57.83 ms & 71.40\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(c) Lookup table in MSDNet's scheduler.}
\begin{tabular}{lcccc}
\hline
Scenario (rate) & Shape & Layers & Latency & Accuracy \\
\hline
{\bf HH} (5.2 fps) & 224x224x3 & 191 & 153 ms & 96.79\% \\
{\bf MM/LL} (16 fps) & 224x224x3 & 63 & 62 ms & 95.98\% \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[]{0.99\linewidth}
\centering
\centerline{(d) Reference performance of single model variants or execution branches.}
\begin{tabular}{ccccl}
\hline
Model name (rate) & Shape & Layers & Latency & Accuracy \\
\hline
ResNet-34 (16 fps) & 224x224x3 & 34 & 64.44 ms & 85.86\% \\
ResNet-18 (22 fps) & 224x224x3 & 18 & 45.22 ms & 84.59\% \\
MSDNet-branch5 (5.2 fps) & 224x224x3 & 191 & 153 ms & 96.79\% \\
MSDNet-branch4 (5.6 fps) & 224x224x3 & 180 & 146 ms & 96.55\% \\
MSDNet-branch3 (7.8 fps) & 224x224x3 & 154 & 129 ms & 96.70\% \\
MSDNet-branch2 (10 fps) & 224x224x3 & 115 & 100 ms & 96.89\% \\
MSDNet-branch1 (16 fps) & 224x224x3 & 63 & 62 ms & 95.98\% \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{Figures/per_category_trade_off.png}
\caption{Content-specific accuracy of Pareto frontier branches. Branches that fulfill real-time processing (30 fps) requirement are labeled in green. Note that both ResNet-18 and ResNet-34 models, though with the higher accuracy, cannot meet the 30 fps latency requirement.}
\label{per_category_trade_off}
\end{figure*}
\subsection{Accuracy-latency Trade-off of the Static Models}
We first evaluate {ApproxNet}\xspace on the ILSVRC IMG dataset, examining the accuracy-latency trade-off of each AB in our single DNN, as shown in Figure~\ref{trade_off_img_dataset}. Our ABs with higher latency (though still satisfying the 30 fps requirement) have accuracy close to ResNet-18 and ResNet-34 but much lower latency than ResNet-34. Meanwhile, our ABs with reduced latency (25 ms to 30 ms) have accuracy close to MobileNets. Finally, our ABs are superior to all baselines in achieving extremely low latency (< 20 ms). In contrast, the single execution branches of MSDNet are much slower than {ApproxNet}\xspace and the other baselines: their latency ranges from 62 ms to 153 ms, which cannot meet the real-time processing requirement and will be even worse in the face of resource contention. MobileNets can keep up with the frame rate, but it lacks the configurability in the latency dimension that {ApproxNet}\xspace has. Although MobileNets does win on the IMG dataset at higher accuracy, it needs an ensemble of models (like MCDNN) when applied to video, where content characteristics, user requirements, and runtime resource contention change.
\subsection{Adaptability to Changing User Requirements} \label{sec:adp_user_req}
From now on, we switch to the ILSVRC VID dataset and show how {ApproxNet}\xspace can meet different user requirements for accuracy and latency. We list the averaged accuracy and latency of the Pareto frontier branches in Table~\ref{table:LUT_VID}(a), which can serve as a lookup table in the simplest scenario, {i.e.,}\xspace without considering frame complexity categories and resource contention. {ApproxNet}\xspace provides content-aware approximation and thus keeps a lookup table for each frame complexity category; to be responsive to resource contention, it updates the latencies in the lookup table based on observed contention.
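One simple way such a latency update could work is an exponential moving average over observed latencies; this is a hypothetical sketch, not the paper's actual estimator (the function name and `alpha` are our assumptions):

```python
def update_latency(table, branch, observed_ms, alpha=0.3):
    """Sketch of contention-responsive latency tracking: blend the
    stored latency estimate for a branch with the latency just
    observed (exponential moving average; `alpha` is an assumed
    smoothing factor, not from the paper)."""
    table[branch] = (1 - alpha) * table[branch] + alpha * observed_ms
    return table[branch]

table = {("128x128", 24): 31.42}               # entry from Table (a)
update_latency(table, ("128x128", 24), 60.0)   # contention observed
print(round(table[("128x128", 24)], 2))        # 39.99
```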
We perform our evaluation on the entire test set, without the baseline protocols incurring any switching penalty. Figure~\ref{trade_off_system} compares the accuracy and latency of {ApproxNet}\xspace and the baselines in the three typical usage scenarios ``HH'', ``MM'', and ``LL'' (AN denotes {ApproxNet}\xspace). In this experiment, {ApproxNet}\xspace uses the content-aware lookup table for each frame complexity category and chooses the best AB at runtime to meet the user accuracy requirement. MCDNN and MSDNet use similar lookup tables (Table~\ref{table:LUT_VID}(b) and (c)) to select among model variants or execution branches to satisfy the user requirement. We can observe that ``AN-HH'' achieves an accuracy of 67.7\% at a latency of 35.0 ms, compared to ``MCDNN-HH'' with an accuracy of 68.5\% at a latency of 87.4 ms. Thus, MCDNN-HH is 2.5X slower while achieving a 1.1\% accuracy gain over {ApproxNet}\xspace. On the other hand, MSDNet is more accurate and slower than all of {ApproxNet}\xspace's branches: its lightest and heaviest branches achieve 4.3\% and 7.3\% higher accuracy, respectively, while incurring 1.8X and 4.4X higher latency. In the ``LL'' and ``MM'' usage scenarios, MCDNN-LL/MM is 2.8-3.3X slower than {ApproxNet}\xspace while gaining 3\% or less in accuracy. MSDNets, again, runs with much higher latency (62 ms to 146 ms) and higher accuracy (72.0\% to 76.2\%).
Thus, compared to these baseline models, {ApproxNet}\xspace wins by providing lower latency, satisfying the real-time requirement, and flexibility in achieving various points in the (accuracy, latency) space.
\subsection{Adaptability to Changing Content Characteristics \& User Requirements}\label{system_latency}
We now show how {ApproxNet}\xspace can adapt to changing content characteristics and user requirements within the same video stream. The video stream, typically at 30 fps, may contain content of various complexities, and this can change quickly and arbitrarily. Our study with the FCC on the VID dataset has shown that in 97.3\% of cases the frame complexity category of the video changes within every 100 frames. Thus, dynamically adjusting the AB to the frame complexity category benefits the end-to-end system. We see in Figure~\ref{per_category_trade_off} that {ApproxNet}\xspace with its various ABs can satisfy different (accuracy, latency) requirements for each frame complexity category. According to the user's accuracy or latency requirement, {ApproxNet}\xspace's scheduler picks the appropriate AB. The majority of the branches satisfy the real-time processing requirement of 30 fps and can also support high accuracy, quite close to ResNet-34.
In Figure~\ref{temporal_system}, we show how {ApproxNet}\xspace adapts for a representative video from the test dataset. Here, we assume the user requirement changes every 100 frames between ``HH'', ``MM'', and ``LL''. This is a synthetic setting designed to observe how the models perform at the time of switching. We assume a uniformly distributed model selection among 20 model variants for MCDNN's scheduler (in~\cite{mcdnn}, the MCDNN catalog uses 68 model variants), while the embedded device can only cache two models in RAM (more detailed memory results in Section~\ref{subsec:overhead}). In this case, MCDNN has a high probability of loading a new model variant into RAM from Flash whenever the user requirement changes. This results in a huge latency spike, typically from 5 to 20 seconds, at each switch. Notably, in some cases there are also small spikes in MCDNN following the larger spikes, because the generic model is invoked when the specialized model predicts an ``infrequent'' class. On the other hand, {ApproxNet}\xspace and MSDNets incur little overhead when switching between any two branches, because all branches are available within the same single-model DNN. Similar to the earlier results, {ApproxNet}\xspace wins over MSDNets by offering lower latency that meets the real-time processing requirement, even though its accuracy is slightly lower.
\begin{figure*}[t]
\centering
\begin{minipage}[thb]{0.4\linewidth}
\includegraphics[width=1\textwidth]{Figures/Runtime_VID_Caw.png}
\caption{Latency performance comparison with changing user requirements throughout video stream.}
\label{temporal_system}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.58\linewidth}
\begin{minipage}[thb]{0.47\linewidth}
\includegraphics[width=1\textwidth]{Figures/MeanTransitionOverhead.png}
\centerline{(a) {ApproxNet}\xspace}
\end{minipage}
\begin{minipage}[thb]{0.52\linewidth}
\includegraphics[width=1\textwidth]{Figures/MeanTransitionOverheadNestDNN.png}
\centerline{(b) NestDNN}
\end{minipage}
\caption{Transition latency overhead across (a) ABs in ApproxNet and (b) descendant models in NestDNN. ``from'' branch on Y-axis and ``to'' branch on X-axis. Inside brackets: (input shape, outport depth). Latency unit is millisecond.}
\label{fig:transition}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.49\linewidth}
\includegraphics[width=1\textwidth]{Figures/Contention_Latency_WholeSet.png}
\centerline{(a) Inference latency (w/ CPU contention)}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/Contention_Accuracy_WholeSet.png}
\centerline{(b) Accuracy (w/ CPU contention)}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/GPUContention_Latency.png}
\centerline{(c) Inference latency (w/ GPU contention)}
\end{minipage}
\caption{Comparison of ApproxNet vs MCDNN under resource contention. (a) and (b) inference latency and accuracy on the whole test dataset. (c) inference latency on a test video.}
\label{fig:system_contention}
\end{figure*}
To examine the behavior of {ApproxNet}\xspace in further detail, we profile the mean transition time of all Pareto frontier branches under no contention, as shown in Figure~\ref{fig:transition}(a). Most of the transition overheads are extremely low; only a few transitions exceed 30 ms. Our optimization algorithm (Equations~\ref{eq:opt-latency-constraint} and~\ref{eq:opt-latency}) filters out such expensive transitions if they happen too frequently. In Figure~\ref{fig:transition}(b), we further show the transition time between the descendant models in NestDNN, which uses a multi-capacity model with a varying number of filters in the convolutional layers. The notation c50 stands for the descendant model with 50\% of the capacity of the largest variant, and so on. We observe that the lowest switching cost of NestDNN is still more than an order of magnitude higher than the highest switching cost of {ApproxNet}\xspace, and its highest switching cost is more than three orders of magnitude higher than ours---the highest cost matters when trying to guarantee latencies under worst-case latency spikes. The transition overhead can be up to 25 seconds, from the smallest model to the largest, and is generally proportional to the amount of data loaded into memory. This is because NestDNN keeps only a single model---the one in use---in memory and loads/unloads all others when switching. In summary, the benefit of {ApproxNet}\xspace comes from the facts that (1) it can accommodate multiple (accuracy, latency) points within one model through its two approximation knobs, while MCDNN has to switch between model variants, and (2) switching between ABs does not load a large amount of data or a new computational graph.
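The amortization argument behind filtering expensive transitions can be made concrete with a one-line model: a one-time transition cost is spread over the frames processed before the next switch. The numbers below are illustrative, not profiled values:

```python
# Illustrative amortization behind the transition filter: a one-time
# transition cost is spread over the frames processed before the next switch.
def amortized_latency(steady_ms, transition_ms, frames_between_switches):
    return steady_ms + transition_ms / frames_between_switches

rare = amortized_latency(25.0, 30.0, 100)    # a 30 ms switch every ~100 frames
frequent = amortized_latency(25.0, 30.0, 5)  # the same switch every 5 frames
```

A 30 ms transition amortized over 100 frames adds only 0.3 ms per frame, but the same transition every 5 frames adds 6 ms per frame and would be filtered out by the optimization.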
\subsection{Adaptability to Resource Contention}
\label{resource_contention}
We evaluate in Figure~\ref{fig:system_contention} the ability of {ApproxNet}\xspace to adapt to resource contention on the device, both CPU and GPU contention. First, we evaluate this ability by running a \textit{bubble application}~\cite{mars2011bubble, xu2018pythia} on the CPU that creates stress of different magnitudes on the (shared) memory subsystem while the video analytics DNN is running on the GPU. We generate bubbles of two different memory sizes: 10 KB (low contention) and 300 MB (high contention). The bubbles can be ``unpinned'', meaning they can run on any of the cores, or ``pinned'', in which case they run on 5 of the CPU cores, leaving the 6th one for dedicated use by the video analytics application. The unpinned configuration causes higher contention. We introduce contention in phases---low pinned, low unpinned, high pinned, high unpinned.
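A minimal sketch of such a memory bubble (not the actual tool from the cited works) walks a buffer touching one byte per cache line; pinning uses the Linux-only \texttt{os.sched\_setaffinity}:

```python
import os

# Minimal sketch of a memory "bubble" (not the actual tool from the cited
# works): repeatedly walk a buffer, touching one byte per 64-byte cache line.
# A tiny buffer (e.g., 10 KB) stays cache-resident (low contention); a large
# one (e.g., 300 MB) continually misses and pressures memory bandwidth.
def memory_bubble(size_bytes, n_passes=1, pinned_cores=None):
    if pinned_cores is not None and hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, pinned_cores)  # the "pinned" configuration
    buf = bytearray(size_bytes)
    stride = 64  # one touch per cache line
    for _ in range(n_passes):
        for i in range(0, size_bytes, stride):
            buf[i] = (buf[i] + 1) & 0xFF
    return sum(buf)  # checksum so the loop is not optimized away
```

In the pinned configuration one would pass, e.g., `pinned_cores={0, 1, 2, 3, 4}` on a 6-core device; leaving `pinned_cores=None` corresponds to the unpinned (higher-contention) configuration.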
As shown in Figure~\ref{fig:system_contention}(a), MCDNN, with its fastest model variant MCDNN-18, runs between 40 ms and 100 ms depending on the contention level and has no adaptation. {ApproxNet}\xspace, on the other hand, has a mean latency of 25.66 ms under low contention (10 KB, pinned), increasing only slightly to 34.23 ms when the contention becomes high (300 MB, unpinned). We also show the accuracy comparison in Figure~\ref{fig:system_contention}(b): we are slightly better than MCDNN under low and high contention (by 2\% to 4\%) but slightly worse (within 4\%) under intermediate contention (300 MB, pinned).
To further evaluate {ApproxNet}\xspace under GPU contention, we run a synthetic matrix manipulation application concurrently with {ApproxNet}\xspace. The contention level is varied in a controlled manner through the synthetic application, from 0\% to 100\% in steps of 10\%, where the control is the size of the matrix and, equivalently, the number of GPU threads dedicated to the synthetic application. The contention value is the GPU utilization when the synthetic application runs alone, as measured through \texttt{tegrastats}. For the baseline, we use the MCDNN-18 model again since, among the MCDNN ensemble, it comes closest to the video frame rate (33.3 ms latency). As shown in Figure~\ref{fig:system_contention}(c), without the ability to sense GPU contention and react to it, MCDNN's latency increases by 85.6\% and goes far beyond the real-time latency threshold. The latency of {ApproxNet}\xspace also increases with gradually increasing contention, from 20.3 ms at no contention to 30.77 ms at 30\% contention. However, when we further raise the contention level to 50\% or above, {ApproxNet}\xspace's scheduler senses the contention and switches to a lighter-weight approximation branch so that the latency remains within 33.3 ms. The accuracies of MCDNN and {ApproxNet}\xspace were identical for this sample execution. Thus, this experiment bears out the claim that {ApproxNet}\xspace can respond to contention gracefully by recreating the Pareto curve for the current contention level and picking the appropriate AB.
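As a rough illustration, contention sensing might parse the GPU-load field from \texttt{tegrastats} output. The exact line format varies across Jetson releases, so the regular expression below is an assumption, and the 50\% threshold mirrors the observation above:

```python
import re

# Hypothetical parser for the GPU-load field in tegrastats output; the exact
# line format varies across Jetson releases, so this pattern is an assumption.
def gpu_utilization(tegrastats_line):
    m = re.search(r"GR3D(?:_FREQ)?\s+(\d+)%", tegrastats_line)
    return int(m.group(1)) if m else None

def needs_lighter_branch(utilization_pct, threshold=50):
    # Mirrors the behavior above: at roughly 50% external GPU load or more,
    # switch to a lighter AB so inference stays within the 33.3 ms budget.
    return utilization_pct is not None and utilization_pct >= threshold
```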
\begin{figure*}[t]
\begin{minipage}[thb]{0.44\linewidth}
\begin{minipage}[thb]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SystemOverhead_Caw.png}
\caption{System overhead in ApproxNet and MCDNN.}
\label{fig:timing_share}
\end{minipage}
\begin{minipage}[thb]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/bar_plot.png}
\caption{Memory consumption of solutions in different usage scenarios (unit of GB).}
\label{fig:memory-result}
\end{minipage}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.53\linewidth}
\centering
\includegraphics[width=1\textwidth]{Figures/Latency_Contention.png}
\centerline{(a) Inference latency}
\includegraphics[width=1\textwidth]{Figures/Accuracy_Contention.png}
\centerline{(b) Accuracy}
\caption{Case study: performance comparison of ApproxNet vs MCDNN under resource contention for a Youtube video.}
\label{fig:system_contention_case_study}
\end{minipage}
\end{figure*}
\subsection{Solution Overheads} \label{subsec:overhead}
With the same experiment as in Section~\ref{system_latency}, we compare the overheads of {ApproxNet}\xspace, MCDNN, and MSDNets in Figure~\ref{fig:timing_share}. For {ApproxNet}\xspace, we measure the overhead of all steps outside the core DNN, {i.e.,}\xspace frame resizing, FCE, RCE, and the scheduler. For MCDNN, the dominant overhead is model switching and loading; we measure it at each switching point and average it across all frames in each scenario. We see that {ApproxNet}\xspace, including overheads, is $7.0X$ to $8.2X$ faster than MCDNN and $2.4X$ to $4.1X$ faster than MSDNets. Further, in the ``MM'' and ``LL'' scenarios, {ApproxNet}\xspace's average latency is below 30 ms, and thus {ApproxNet}\xspace achieves real-time processing of 30 fps videos. As mentioned before, MCDNN may be forced to reload the appropriate models whenever the user requirement changes. Even in MCDNN's best case, where the requirement never changes or all of its models are cached in RAM, {ApproxNet}\xspace is still $5.1X$ to $6.3X$ faster.
Figure~\ref{fig:memory-result} compares the peak memory consumption of {ApproxNet}\xspace and MCDNN in typical usage scenarios. {ApproxNet}\xspace-mixed, MCDNN-mixed, and MSDNet-mixed are the cases where the experiment cycles through the three usage scenarios. We test MCDNN-mixed with two model caching strategies: (1) the model variants are loaded from Flash when they get triggered (named ``re-load''), simulating the minimum RAM usage; (2) the model variants are all loaded into RAM at the beginning (named ``load-all''), assuming the RAM is large enough. We see that {ApproxNet}\xspace, in going from the ``LL'' to the ``HH'' requirement, consumes 1.6 GB to 1.7 GB of memory, lower than MCDNN (1.9 GB and 2.4 GB). MCDNN's cascaded DNN design (specialized model followed by generic model) is the reason it consumes about 15\% more memory than our model even when it keeps only one model variant in RAM, and 32\% more if it loads two. For the mixed scenario, we can set an upper bound on {ApproxNet}\xspace's memory consumption---it never exceeds 2.1 GB no matter how we switch among ABs at runtime, an important property for proving operational correctness in mobile or embedded environments. Further, {ApproxNet}\xspace, with tens of ABs available, offers more choices than MCDNN and MSDNet, while MCDNN cannot accommodate more than two models in the available RAM.
Storage is a lesser concern, but it does affect pushing updated models from the server to the mobile device, a common use case. {ApproxNet}\xspace's storage cost is only 88.8 MB, while MCDNN with 2 models takes 260 MB and MSDNet with 5 execution branches takes 177 MB. A primary reason is MCDNN's duplication of the specialized and generic models, which have identical architectures except for the last FC layer. Thus, {ApproxNet}\xspace is well suited to the mobile usage scenario due to its low RAM and storage usage.
\subsection{Ablation Study with FCE} \label{sec_ablation_study_FCE}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{Figures/SystemComparison_VID_Ablation_FCE.png}
\caption{Comparison of system performance in typical usage scenarios between {ApproxNet}\xspace with FCE and {ApproxNet}\xspace without FCE.}
\label{fig:ablation_FCE}
\end{figure}
To study the necessity of the FCE, we conduct an ablation study of {ApproxNet}\xspace with a content-agnostic scheduler. Unlike the content-aware scheduler, this scheduler picks the same AB for all video frames regardless of the content (the contention level is held unchanged), even though the test dataset naturally has variations in content characteristics. With the same experiment as in Section~\ref{sec:adp_user_req}, we compare the accuracy and latency of {ApproxNet}\xspace with the FCE and without it in the three typical usage scenarios ``HH'', ``MM'', and ``LL''. Figure~\ref{fig:ablation_FCE} shows that {ApproxNet}\xspace with the FCE improves the accuracy by 2.0\% in the ``HH'' scenario at an additional 3.57 ms latency cost. In the ``MM'' and ``LL'' scenarios, {ApproxNet}\xspace with the FCE is either slightly slower or slightly more accurate than without it. To summarize, {ApproxNet}\xspace's FCE is most beneficial when the accuracy goal is higher and the latency budget correspondingly larger.
\subsection{Case Study with YouTube Video} \label{sec_case_study}
As a case study, we evaluate {ApproxNet}\xspace on a randomly picked YouTube video~\cite{YoutubeVideoLink} to see how it adapts to different resource contention scenarios at runtime (Figure~\ref{fig:system_contention_case_study}). The video is a car racing match with changing scenes and objects, and thus we evaluate object classification performance. The interested reader may see a demo of {ApproxNet}\xspace and MCDNN on this and other videos at \url{https://approxnet.github.io/}. Similar to the control setup in Section~\ref{resource_contention}, we test {ApproxNet}\xspace and MCDNN under four different contention levels. Each phase is 300-400 frames, and the latency requirement is 33 ms to keep up with the 30 fps video. We see that {ApproxNet}\xspace adapts to the resource contention well---it switches to a lightweight AB while still keeping high accuracy, comparable to MCDNN (seen on the demo site). Further, {ApproxNet}\xspace is always faster than MCDNN, while MCDNN, with a latency of 40-80 ms even without switching overhead, degrades under resource contention and has to drop approximately two out of every three frames. As for accuracy, there are only occasional misclassifications in {ApproxNet}\xspace (51 out of 3,000 frames in total, or 1.7\%). MCDNN, in this case, has slightly better accuracy (24 misclassifications in 3,000 frames, or 0.8\%). We believe commonly used post-processing algorithms~\cite{kang2017t, han2016seq} can easily remove these occasional classification errors, and both approaches can achieve very low inaccuracy.
\section{Introduction}
\label{sec_introduction}
There is an increasing number of scenarios where various kinds of analytics must be run on live video streams on resource-constrained mobile and embedded devices. For example, in a smart city traffic system, vehicles are redirected by detecting congestion from the live video feeds of traffic cameras, while in Augmented Reality (AR)/Virtual Reality (VR) systems, scenes are rendered based on the recognition of objects, faces, or actions in the video. These applications require low latency for event classification or identification based on the content of the video frames. Most of these videos are captured at end-client devices such as IoT devices, surveillance cameras, or head-mounted AR/VR systems. Video transport over wireless networks is slow, and these applications often must operate under intermittent network connectivity. Hence such systems must be able to run video analytics in-place on these resource-constrained client devices\footnote{For end-client devices, we use the terms ``mobile devices'' and ``embedded devices'' interchangeably. The common characteristic is that they are computationally constrained. While the exact specifications vary across these classes of devices, both are constrained enough that they cannot run streaming video analytics without approximation techniques.} to meet the applications' low-latency requirements.
\noindent\textbf{State-of-the-art is too heavy for embedded devices:}
Most video analytics queries involve performing inference over DNNs (mostly convolutional neural networks, {\em aka} CNNs) with a variety of architectures for tasks like classification~\cite{simonyan2014very, szegedy2015going, resnet, huang2017densely}, object detection~\cite{liu2016ssd, redmon2016you, ren2015faster, shankar2020janus}, face recognition~\cite{parkhi2015deep, schroff2015facenet, wen2016discriminative, taigman2014deepface}, or action recognition~\cite{ji20133d, poppe2010survey, liu2016spatio, simonyan2014two}. With advancements in deep learning and the emergence of complex architectures, DNN-based models have become \textit{deeper} and {\em wider}, and correspondingly their memory footprints and inference latencies have grown significantly. For example, DeepMon~\cite{deepmon} runs the VGG-16 model at approximately 1-2 frames per second (fps) on a Samsung Galaxy S7. ResNet~\cite{resnet}, in its 101-layer version, has a memory footprint of 2.8 GB and takes 101 ms to perform inference on a single video frame on the NVIDIA Jetson TX2. MCDNN, Mainstream, VideoStorm, and Liu \textit{et al.}\xspace~\cite{mcdnn, jiang2018mainstream, liu2019edge, videostorm} require either the cloud or edge servers to achieve satisfactory performance. Thus, on-device inference with low and stable latency ({i.e.,}\xspace 30 fps) remains a challenging task.
\begin{figure}[b]
\centering
\begin{minipage}{0.49\columnwidth}
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SimpleImg224Shape.JPEG}
\end{minipage}
\centering\hfill
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/SimpleImg112Shape.JPEG}
\end{minipage}
\centerline{(a) Simple video frame}
\end{minipage}
\begin{minipage}{0.49\columnwidth}
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/ComplexImg224Shape.JPEG}
\end{minipage}
\centering\hfill
\begin{minipage}{0.49\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/ComplexImg112Shape.JPEG}
\end{minipage}
\centerline{(b) Complex video frame}
\end{minipage}
\caption{Examples of using a heavy DNN (on the left) and a light DNN (on the right) for simple and complex video frames in a video frame classification task. The light DNN downsamples an input video frame to half the default input shape and gets prediction labels at an earlier layer. The classification is correct for the simple video frame (red label denotes the correct answer) but not for the complex video frame.}
\label{fig:easy_hard}
\end{figure}
\noindent\textbf{Content and contention aware systems: } The content characteristics of the video stream are one of the key runtime conditions of the system with respect to approximate video analytics, and they can be leveraged to achieve the desired latency-accuracy tradeoff. For example, as shown in Figure~\ref{fig:easy_hard}, if the frame is very simple, we can downsample it to half of its original dimensions and use a shallow or small DNN model to make an accurate prediction. If the frame is complex, the same shallow model might produce a wrong prediction, and a larger DNN model would be needed.
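The input-shape knob mentioned above can be illustrated with a naive strided 2x downsampler; a real pipeline would use proper interpolation (e.g., cv2.resize), so this is only a toy sketch:

```python
# Toy illustration of the input-shape knob: a strided 2x downsample
# (e.g., 224x224 -> 112x112). A real pipeline would use proper
# interpolation (e.g., cv2.resize); striding just shows the idea.
def downsample_half(frame):
    return [row[::2] for row in frame[::2]]

frame = [[r * 16 + c for c in range(4)] for r in range(4)]  # 4x4 "image"
small = downsample_half(frame)  # 2x2
```

Halving the spatial dimensions quarters the pixel count, which is why the lighter DNN on the downsampled input can be so much cheaper for simple frames.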
Resource contention is another important runtime condition that the video analytics system should be aware of. In several scenarios, the mobile devices support multiple different applications, executing concurrently. For example, while an AR application is running, a voice assistant might kick in if it detects background conversation, or a spam filter might become active, if emails are received. All these applications share common resources on the device, such as, CPU, GPU, memory, and memory bandwidth, and thus lead to \textit{resource contention}~\cite{kayiranMICRO2014, ausavarungnirunASPLOS2018, bagchi2020new} as these devices do not have advanced resource isolation mechanisms. It is currently an unsolved problem how video analytics systems running on the mobile devices can maintain a low inference latency under such variable resource availability and changing content characteristics, so as to deliver satisfactory user experience.
\noindent\textbf{Single-model vs. multi-model adaptive systems}: How do we architect a system to operate under such varied runtime conditions? Multi-model designs came first in the evolution of systems in this space. These systems use an ensemble of multiple models, satisfying varying latency-accuracy conditions, together with a scheduling policy to choose among the models. MCDNN~\cite{mcdnn}, one of the most representative works, and well-known DNNs like ResNet and MobileNets~\cite{resnet, howard2017mobilenets} all fall into this category. On the other hand, single-model designs, which emerged after the multi-model designs, feature one model with internal tuning knobs so as to achieve different latency-accuracy goals. These typically have lower overheads for switching from one execution path to another, compared to the multi-model designs. MSDNet~\cite{huang2017multi}, BranchyNet~\cite{teerapittayanon2016branchynet}, and NestDNN~\cite{fang2018nestdnn} are representative works in this single-model category. However, none of these systems can adapt to runtime conditions, primarily changes in content characteristics and contention levels on the device.
\begin{table}[b]
\centering
\caption{ApproxNet's main features and comparison to existing systems.}
\label{table:features}
\scalebox{0.85}{
\begin{tabular}{|p{1.4in}|p{0.4in}|p{0.8in}|p{0.5in}|p{0.6in}|p{0.5in}|p{0.6in}|}
\hline
Solution & Single model & Considers switching overhead & Focused on video & Handles runtime conditions & Open-sourced & Replicable in our datasets\\
\hline
MCDNN [MobiSys'16] & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
MobileNets [ArXiv'17] & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
MSDNet [ICLR'18] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
BranchyNet [ICPR'16] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} \\
\hline
NestDNN [MobiCom'18] & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png}& \includegraphics[width=0.17in]{Figures/icon_part_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} & \includegraphics[width=0.17in]{Figures/icon_not_support.png} \\
\hline
ApproxNet & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} & \includegraphics[width=0.17in]{Figures/icon_support.png} \\
\hline
\multicolumn{7}{|c|}{\includegraphics[width=0.17in]{Figures/icon_support.png} Supported \includegraphics[width=0.17in]{Figures/icon_part_support.png} Partially Supported \includegraphics[width=0.17in]{Figures/icon_not_support.png} Not Supported} \\
\hline
\multicolumn{7}{|l|}{Notes on partial support:} \\
\multicolumn{7}{|l|}{1. MCDNN and NestDNN consider the switching overhead only in memory size, not in execution latency.} \\
\multicolumn{7}{|l|}{2. NestDNN handles multiple concurrent DNN applications with joint optimization goals.} \\
\multicolumn{7}{|l|}{3. The core models in MCDNN are open-sourced while the scheduling components are not.} \\
\hline
\end{tabular}
}
\end{table}
\noindent {\bf Our solution: {ApproxNet}\xspace}.
In this paper, we present {ApproxNet}\xspace, our content- and contention-aware object classification system for streaming videos, geared toward GPU-enabled mobile/embedded devices. We introduce a novel workflow with a set of integrated techniques to address the three main challenges mentioned above: (1) on-device real-time video analytics, (2) content- and contention-aware runtime calibration, and (3) a single-model design. The fundamental idea behind {ApproxNet}\xspace is to perform approximate computing with tuning knobs that are changed automatically and seamlessly within the same video stream. These knobs trade off inference accuracy for reduced inference latency, so as to match the frame rate of the video stream or the user's requirement on either a latency or an accuracy target. The optimal configuration of the knobs is chosen considering, in particular, the resource contention and the complexity of the video frames, because these runtime conditions strongly affect the accuracy and latency of the model.
In Table~\ref{table:features}, we compare {ApproxNet}\xspace with representative prior works in this field. First, none of these systems~\cite{mcdnn, howard2017mobilenets, huang2017multi, teerapittayanon2016branchynet, fang2018nestdnn} can adapt to dynamic runtime conditions (changes in content characteristics and contention levels) as we can. Second, although most systems can run at multiple performance operating points, MCDNN~\cite{mcdnn} and MobileNets~\cite{howard2017mobilenets} use a multi-model approach and incur a high switching penalty. Those that work with a single model, namely MSDNet~\cite{huang2017multi}, BranchyNet~\cite{teerapittayanon2016branchynet}, and NestDNN~\cite{fang2018nestdnn}, do not consider switching overheads in their models (except partially NestDNN, which considers switching cost in memory size), do not focus on video content, and do not show how their models can adapt to changing runtime conditions (except partially NestDNN, which considers joint optimization of multiple DNN workloads). For evaluation, we mainly compare to MCDNN, ResNet, and MobileNets as representatives of multi-model approaches, and to MSDNet as the single-model approach. We cannot compare to BranchyNet because it is not designed or evaluated for video analytics and is thus not suitable for our datasets; the BranchyNet paper evaluates it only on small image datasets (MNIST and CIFAR). We cannot compare to NestDNN because neither its models nor its source code, architecture, and hyperparameter details are publicly available, and we need those to replicate the experiments.
To summarize, we make the following contributions in this paper:
\begin{enumerate}
\item We develop an end-to-end, approximate video object classification system, {ApproxNet}\xspace, that can handle dynamically changing workload contention and video content characteristics on resource-constrained embedded devices. It achieves this by performing system context-aware and content-aware approximations with offline profiling and lightweight online sensing and scheduling techniques.
\item We design a novel workflow with a set of integrated techniques including the adaptive DNN that allows runtime accuracy and latency tuning \textit{within a single model}. Our design is in contrast to ensemble systems like MCDNN that are composed of multiple independent model variants capable of satisfying different requirements. Our single-model design avoids high switching latency when conditions change and reduces RAM and storage usage.
\item We design {ApproxNet}\xspace to make use of video features, rather than treating video as a sequence of image frames. One such characteristic that we leverage is the temporal continuity in content characteristics between adjacent frames. We empirically show that on a large-scale video object classification dataset, popular in the vision community, {ApproxNet}\xspace achieves a superior accuracy-latency tradeoff compared to three state-of-the-art solutions on mobile devices, MobileNets, MCDNN, and MSDNet (Figures~\ref{trade_off_img_dataset} and \ref{trade_off_system}).
\end{enumerate}
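The temporal-continuity point in the last contribution can be sketched as reusing a frame-complexity estimate across adjacent frames. The re-estimation interval and the toy estimator below are illustrative assumptions, not ApproxNet's actual FCE:

```python
# Sketch of exploiting temporal continuity: run the (relatively expensive)
# frame-complexity estimate only every `interval` frames and reuse the last
# category in between. The interval and toy estimator are illustrative.
def categorize_stream(frames, estimate_category, interval=30):
    categories, last = [], None
    for i, frame in enumerate(frames):
        if last is None or i % interval == 0:
            last = estimate_category(frame)
        categories.append(last)
    return categories

calls = []
def toy_estimator(frame):
    calls.append(frame)
    return frame // 30  # pretend complexity category
cats = categorize_stream(list(range(70)), toy_estimator, interval=30)
```

With a 70-frame stream and a 30-frame interval, the estimator runs only three times instead of seventy, which is the kind of saving temporal continuity makes possible.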
The rest of the paper is organized as follows. Section~\ref{sec_background} provides the relevant background. Section~\ref{sec_overview} gives a high-level overview of our solution. Section~\ref{sec_technique} presents the detailed design. Section~\ref{sec_evaluation} evaluates our end-to-end system. Section~\ref{sec_discussion} discusses details of training the DNN. Section~\ref{sec_related_work} highlights related work. Finally, Section~\ref{sec_conclusion} offers concluding remarks.
\section{Overview}
\label{sec_overview}
Here we give a high-level overview of {ApproxNet}\xspace. In Section~\ref{sec_technique}, we provide details of each component.
\subsection{Design Principles and Motivation}
We set four design requirements for streaming video analytics on embedded devices, motivated by real-world scenarios and needs. {\em First}, the application should adapt to changing input characteristics, such as the complexity of the video frames, because the accuracy of the DNN may vary with the content characteristics. We find that such changes happen often enough within a single video stream, and without any clear predictive pattern. {\em Second}, the application should adapt to resource contention due to the CPU, GPU, memory, or memory bandwidth shared with other concurrent applications on the same device. Such contention can happen frequently with co-location due to limited resources and the lack of clean resource isolation on these hardware platforms. Again, we find that such changes can happen without a clear predictive pattern. {\em Third}, the application should support different target accuracies or latencies at runtime with little transition overhead. For example, the application may require low latency when a time-critical query, such as detection of a miscreant, needs to be executed, and have no such constraint for other queries on the stream. Thus, the aggregate model must be able to make efficient transitions in the tradeoff space of accuracy, latency, and (less obviously) throughput, optionally using edge or cloud servers. {\em Fourth}, the application must provide real-time processing speed (30 fps) while running on the mobile/embedded device. To see how these four requirements come together, consider three instances: mobile VR/AR games like Pokemon Go (some game consoles support multitasking, and accuracy requirements may change with the context of the game), autonomous vehicles (feeds from multiple cameras are processed on the same hardware platform, resulting in contention, and emergency situations require lower latency than benign conditions such as driving for fuel efficiency), and autonomous drones (same arguments as for autonomous vehicles).
A {\em non-requirement} in our work is that multiple concurrent applications consuming the same video stream be jointly optimized. MCDNN~\cite{mcdnn}, NestDNN~\cite{fang2018nestdnn}, and Mainstream~\cite{jiang2018mainstream} bring significant design sophistication to handle the concurrency aspect. However, we are only interested in optimizing a single video analytics application.
\subsection{Design Intuition and Workflow}
\label{subsec:workflow}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{Figures/SystemOverview.png}
\caption{Workflow of {ApproxNet}\xspace. The input is a video frame and an optional user requirement, and the outputs are prediction labels of the video frame. Note that the shaded profiles are collected offline to alleviate the online scheduler overhead.}
\label{fig:overview}
\end{figure}
To address these challenges, we propose a novel workflow with a set of integrated techniques to achieve a content- and contention-aware video object classification system. We show the overall structure, with three major functional units---\textit{executor}, \textit{profiler}, and \textit{scheduler}---in Figure~\ref{fig:overview}. {ApproxNet}\xspace takes a video frame and an optional user requirement for target accuracy or latency as input, and produces top-5 prediction labels of the object classes as output.
The executor (Section~\ref{sec:approx_dnn}) is an approximation-enabled, single-model DNN. The single-model design greatly reduces the switching overhead needed to support an adaptive system, while the multiple \textbf{approximation branches (ABs)}, each with its own latency and accuracy characteristics, are the key to supporting dynamic content and contention conditions. {ApproxNet}\xspace is designed to provide real-time processing speed (30 fps) on our target device (NVIDIA Jetson TX2). Compared to previous single-model designs like MSDNet and BranchyNet, {ApproxNet}\xspace is novel in enabling both depth and input shape as approximation knobs for run-time calibration.
The scheduler is the key component to react to the dynamic content characteristics and resource contention. Specifically, it selects an AB to execute by combining the precise accuracy estimation of each AB due to changing content characteristics via a {\bf Frame Complexity Estimator} ({\bf FCE}, Section~\ref{subsec_FCE}), the precise latency estimation of each AB due to resource contention via a {\bf Resource Contention Estimator} ({\bf RCE}, Section~\ref{subsec_RCE}), and the switching overhead among ABs (Section~\ref{sec:profiler}). It finally reaches a decision on which AB to use based on the user's latency or accuracy requirement and its internal accuracy, latency, and overhead estimation (Section~\ref{subsec:pareto}).
Finally, to achieve our goal of real-time processing, low switching overhead, and improved performance under dynamic conditions, we design an offline profiler (Section~\ref{sec:profiler}). We collect three profiles offline --- first, the accuracy profile for each AB on video frames of different complexity categories; second, the inference latency profile for each AB under variable resource contention; and third, the switching overhead between any two ABs.
{\bf Video-specific design}. We incorporate the following video-specific designs in {ApproxNet}\xspace, which are orthogonal to existing techniques presented in prior work on video analytics, e.g., frame sampling~\cite{jiang2018mainstream, videostorm} and edge device offloading~\cite{liu2019edge}.
\begin{enumerate}
\item
The FCE uses a Scene Change Detector (SCD) as a preprocessing module to further alleviate its online cost. This optimization is beneficial because it reduces the frequency with which the FCE is invoked (only when the SCD flags a scene change). This relies on the intuition that discontinuous jumps in frame complexity are uncommon in a video stream.
\item The scheduler decides whether to switch to a new AB or stay depending on how long it predicts the change in the video stream to last and the cost of switching.
\item We drive our decisions about the approximation knobs by the goal of keeping up with the streaming video rate (30 fps). We achieve this under most scenarios when evaluated with a comprehensive video dataset (the ILSVRC VID dataset).
\end{enumerate}
\section{Related Work}
\label{sec_related_work}
\noindent\textbf{System-wise optimization:}
There have been many optimization attempts to improve the efficiency of video analytics or other ML pipelines by building low power hardware and software accelerators for DNNs~\cite{deepx, reagen2016minerva, chen2017eyeriss, park2015big, han2016eie, parashar2017scnn, gao2017tetris, zhang2016cambricon} or improving application performance using database optimization, either on-premise~\cite{mahgoub2019sophia} or on cloud-hosted instances~\cite{mahgoub2020optimuscloud}. These are orthogonal and {ApproxNet}\xspace can also benefit from these optimizations. VideoStorm~\cite{videostorm}, Chameleon~\cite{jiang2018chameleon}, and Focus~\cite{hsieh2018focus} exploited various configurations and DNN models to handle video analytics queries in a situation-tailored manner.
ExCamera~\cite{excamera-nsdi-2017} and Sonic~\cite{mahgoub2021sonic} enabled low-latency video processing on the cloud using a serverless architecture (AWS Lambda~\cite{amazon_lambda}). Mainstream~\cite{jiang2018mainstream} proposed to share weights of DNNs across applications. These are all server-side solutions, requiring multiple models to be loaded simultaneously, which is challenging under resource constraints. NoScope~\cite{kang2017noscope} aimed to reduce the computation cost of video analytics queries on servers by leveraging a specialized model trained on a small subset of videos; its applicability is thus limited to the kinds of videos the model has seen.
Closing-the-loop~\cite{xu2020closing} uses genetic algorithms to efficiently search for video editing parameters with lower computation cost. VideoChef~\cite{xu2018videochef} attempted to reduce the processing cost of video pipelines by dynamically changing approximation knobs of preprocessing filters in a content-aware manner. In contrast, {ApproxNet}\xspace, and the concurrently developed ApproxDet~\cite{xu2020approxdet} (for video-specific object detection), approximate within the core DNN, which has a significantly different and computationally heavier program structure than such filters. Due to this larger overhead of approximation in the core DNN, {ApproxNet}\xspace's adaptive tuning is challenging. Thus, we plan on using either distributed learning~\cite{ghoshal2015ensemble} or a reinforcement learning-based scheduler~\cite{thomas2018minerva} for refining this adaptive feature.
\noindent\textbf{DNN optimizations:}
Many solutions have been proposed to reduce the computation cost of a DNN by controlling the precision of edge weights~\cite{hubara2017quantized, gupta2015deep, zhou2016dorefa, rastegari2016xnor} and restructuring or compressing a DNN model~\cite{denton2014exploiting, howard2017mobilenets, bhattacharya2016sparsification, iandola2016squeezenet, chen2015compressing, han2015learning, wen2016learning}. These are orthogonal to our work, and {ApproxNet}\xspace's one-model approximation-enabled DNN can be further optimized by adopting such methods. Several works present similar approximation knobs (input shape, outport depth). BranchyNet, CDL, and MSDNet~\cite{teerapittayanon2016branchynet, panda2016conditional, huang2017multi} propose early-exit branches in DNNs. However, BranchyNet and CDL only validate on small datasets like MNIST~\cite{MNIST} and CIFAR-10~\cite{CIFAR10} and have not shown practical techniques for selecting among these early-exit branches in an end-to-end pipeline. Such an adaptive system, in order to be useful (especially on embedded devices), needs to be responsive to resource contention, content characteristics, and user requirements, {e.g.,}\xspace an end-to-end latency SLA. MSDNet targets a very simple image classification task without a strong use case and does not show a data-driven manner of using the early exits; demonstrating its end-to-end latency on either a server-class or embedded device would have strengthened the case. BlockDrop~\cite{wu2018blockdrop} trains a policy network to determine whether to skip the execution of several residual blocks at inference time. However, its speedup is marginal and it cannot be applied directly to mobile devices for real-time classification.
\section{Design and Implementation}
\label{sec_technique}
\subsection{Approximation-enabled DNN}
\label{sec:approx_dnn}
{ApproxNet}\xspace's key enabler, an approximation-enabled DNN, is designed to support multiple accuracy and latency requirements at runtime \textit{using a single DNN model}. To enable this, we design a DNN that can be approximated using two approximation knobs. The DNN can take an input video frame in different shapes, which we call \textit{input shapes}, our first approximation knob, and it can produce a classification output at multiple positions in the intervening layers, which we call \textit{outports}, our second approximation knob. There are doubtless other approximation knobs, {e.g.,}\xspace model quantization, frame sampling, and others depending on specific DNN models. These can be incorporated into {ApproxNet}\xspace and would all fit within our one-model DNN design to achieve real-time, on-device, adaptive inference. The appropriate setting of the tuning knobs can be determined on the device (as is done in our considered usage scenario) or, in case this computation is too heavyweight, it can be determined remotely and sent to the device through a reprogramming mechanism such as~\cite{panta2011efficient}.
Combining these two approximation knobs, {ApproxNet}\xspace creates various {\bf approximation branches (ABs)}, which trade off between accuracy and latency and can be used to meet a particular user requirement. This tradeoff space defines a set of Pareto frontiers, as shown in Figure~\ref{fig:pareto}. Here, the scatter points represent the accuracy and latency achieved by all ABs. The Pareto frontier comprises the ABs that are not outperformed in both accuracy and latency by any other branch.
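As a concrete illustration, the Pareto frontier over ABs can be computed with a simple dominance check. The branch names and accuracy/latency numbers below are hypothetical, not measurements from the paper:

```python
def pareto_frontier(branches):
    """Return the ABs not dominated in both accuracy and latency.

    `branches` maps an AB name to an (accuracy, latency) pair; higher
    accuracy and lower latency are better.
    """
    frontier = {}
    for name, (acc, lat) in branches.items():
        dominated = any(
            (a >= acc and l <= lat) and (a > acc or l < lat)
            for other, (a, l) in branches.items() if other != name
        )
        if not dominated:
            frontier[name] = (acc, lat)
    return frontier

abs_profile = {          # hypothetical (accuracy, latency ms) per AB
    "224x224-out6": (0.80, 48.0),
    "160x160-out4": (0.74, 30.0),
    "112x112-out3": (0.70, 34.0),  # dominated by 160x160-out4
    "80x80-out1":   (0.55, 12.0),
}
frontier = pareto_frontier(abs_profile)
```

Given the current frame category and contention level, the scheduler only ever needs to consider branches on this frontier.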
\begin{figure*}[t]
\begin{minipage}[t!]{0.35\linewidth}
\centering
\includegraphics[width=1\columnwidth]{Figures/pareto.png}
\caption{A Pareto frontier for trading-off accuracy and latency in a particular frame complexity category and at a particular contention level.}
\label{fig:pareto}
\end{minipage}
\hfill
\begin{minipage}[t!]{0.60\linewidth}
\centering
\includegraphics[width=1\columnwidth]{Figures/outport.png}
\caption{The outport of the approximation-enabled DNN.}
\label{fig:detailed_DNN}
\end{minipage}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/DNN_arch2.png}
\caption{The architecture of the approximation-enabled DNN in {ApproxNet}\xspace.}
\label{fig:dnn_arch}
\end{figure*}
\begin{table}[t]
\centering
\caption{The list of the total 30 ABs supported for a baseline DNN of ResNet-34, given by the combination of the input shape and the outport from which the result is taken. ``--'' denotes the undefined settings.}
\label{table:a_value}
\scalebox{0.8}{
\begin{tabular}{p{0.75in}|cccccc}
Input shape & Outport 1 & Outport 2 & Outport 3 & Outport 4 & Outport 5 & Outport 6 \\
\hline
224x224x3 & 28x28x64 & 28x28x64 & 14x14x64 & 14x14x64 & 14x14x64 & 7x7x64 \\
192x192x3 & 24x24x64 & 24x24x64 & 12x12x64 & 12x12x64 & 12x12x64 & -- \\
160x160x3 & 20x20x64 & 20x20x64 & 10x10x64 & 10x10x64 & 10x10x64 & -- \\
128x128x3 & 16x16x64 & 16x16x64 & 8x8x64 & 8x8x64 & 8x8x64 & -- \\
112x112x3 & 14x14x64 & 14x14x64 & 7x7x64 & 7x7x64 & 7x7x64 & -- \\
96x96x3 & 12x12x64 & 12x12x64 & -- & -- & -- & -- \\
80x80x3 & 10x10x64 & 10x10x64 & -- & -- & -- & -- \\
\end{tabular}
}
\end{table}
We describe our design using ResNet as the base DNN, though our design is applicable to any other mainstream CNN consisting of convolutional (CONV) layers and fully-connected (FC) layers, such as VGG~\cite{simonyan2014very}, DenseNet~\cite{huang2017densely}, and so on. Figure~\ref{fig:dnn_arch} shows the design of our DNN using ResNet-34 as the base model. This enables 7 input shapes (${s\times s\times 3}$ for $s=224,192,160,$ $128,112,96,80$) and 6 outports (after 11, 15, 19, 23, 27, and 33 layers). We adapt the design of ResNet in terms of the stride, shape, number of channels, use of convolutional layer or maxpool, and connection of the layers. In addition, we create {\em stacks}, numbered 0 through 6, each having 4 or 6 ResNet layers and a variable number of blocks from the original ResNet design (Table~\ref{table:a_value}). We then design an outport (Figure~\ref{fig:detailed_DNN}) and connect it to stacks 1 to 6, whereby we can obtain prediction labels by executing only the stacks (i.e., the constituent layers) up to that stack. The use of 6 outports is a pragmatic system choice---too few outports do not provide enough granularity to approximate in a content- and contention-aware manner, while too many lead to a high training burden. Further, to enable the approximation knob of downsampling the input frame to the DNN, we use a spatial pyramid pooling (SPP) layer at each outport to pool the feature maps of different shapes (due to different input shapes) into one unified shape, and then connect to an FC layer. The SPP layer performs max-pooling on its input at three levels $l=1,2,3$ with window size $\lceil a/l\rceil$ and stride $\lfloor a/l\rfloor$, where $a$ is the spatial side of the input to the SPP layer. Our choice of 3-level pyramid pooling is typical practice for the SPP layer~\cite{spp}. In general, a higher value of $l$ requires a larger value of $a$ at the input of each outport, thereby reducing the number of possible ABs.
On the other hand, a smaller value of $l$ results in coarser representations of spatial features and thus reduces accuracy. To support the case $l=3$ in the SPP layer, we require that the input to an outport be no less than 7 pixels in width and height, \textit{i.e.}, $a\ge7$. This rules out some input shapes, as shown in Table~\ref{table:a_value}. Our model therefore has 30 ABs in total, rather than 7 $\times$ 6 $=$ 42 (number of input shapes $\times$ number of outports), because the smallest input shapes cannot be used with the deeper outports.
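The windowing arithmetic above can be sanity-checked with a short sketch: for any outport whose feature-map side satisfies $a \ge 7$, the pyramid levels $l=1,2,3$ produce $1\times1$, $2\times2$, and $3\times3$ pooled grids, i.e., a fixed 14 bins per channel regardless of the input shape (the function name is ours):

```python
import math

def spp_bins(a, levels=(1, 2, 3)):
    """Number of pooling windows per side for each pyramid level,
    using window size ceil(a/l) and stride floor(a/l) as in the SPP
    layer; `a` is the spatial side of the outport's feature map."""
    sides = []
    for l in levels:
        win, stride = math.ceil(a / l), math.floor(a / l)
        sides.append((a - win) // stride + 1)
    return sides

# Any outport with a >= 7 yields the same 1 + 4 + 9 = 14 bins per
# channel, which is what lets different input shapes share one FC layer.
for a in (7, 10, 14, 28):
    assert spp_bins(a) == [1, 2, 3]
```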
To train {ApproxNet}\xspace towards finding the optimal parameter set $\theta$, we consider the softmax loss $L_{s,i}(\theta)$ defined for the input shape $s\times s\times 3$ and the outport $i$. The total loss function $L(\theta)$ that we minimize to train {ApproxNet}\xspace is a weighted average of $L_{s,i}(\theta)$ for all $s$ and $i$, defined as
\begin{equation}
L(\theta)=\sum_{\forall i} \frac{1}{n_i} \sum_{\forall s} L_{s,i}(\theta)
\end{equation}
where $n_i$ is a normalization factor: the loss at outport $i$ is divided by the number of input shapes supported at that outport. This makes each outport equally important in the total loss function. For each mini-batch, we use 64 frames for each of the 7 input shapes.
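A minimal sketch of this weighted average, with $n_i$ read off Table~\ref{table:a_value} (outports 1--2 support all 7 input shapes, outports 3--5 support 5, and outport 6 supports only 1, giving the 30 ABs); the dictionary-based representation is ours:

```python
# Number of supported input shapes per outport, read off Table 1.
SHAPES_PER_OUTPORT = {1: 7, 2: 7, 3: 5, 4: 5, 5: 5, 6: 1}

def total_loss(softmax_losses):
    """Weighted average of the per-(shape, outport) softmax losses.

    `softmax_losses` maps (s, i) to L_{s,i}; each outport's losses are
    normalized by n_i, the number of shapes that outport supports, so
    every outport contributes equally to the total loss.
    """
    loss = 0.0
    for (s, i), l in softmax_losses.items():
        loss += l / SHAPES_PER_OUTPORT[i]
    return loss
```

With every supported $(s, i)$ pair contributing a unit loss, the total is exactly 6.0 (one per outport), confirming the equal weighting.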
We discuss training on particular datasets and generalizing to other architectures in Section~\ref{sec_discussion}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Figures/FrameComplexityCategorizer.png}
\caption{Workflow of the Frame Complexity Estimator.}
\label{fig:fce}
\end{figure*}
\begin{figure*}[t]
\begin{minipage}[thb]{0.54\linewidth}
\includegraphics[width=1\textwidth]{Figures/ConCat_EdgeMap.png}
\caption{Sample frames (first row) and edge maps (second row), going from left to right as simple to complex. Normalized mean edge values from left to right: 0.03, 0.24, 0.50, and 0.99, with corresponding frame complexity categories: 1, 3, 6, and 7.}
\label{fig:sample_edge_maps}
\end{minipage}
\hfill
\begin{minipage}[thb]{0.44\linewidth}
\includegraphics[width=1\textwidth]{Figures/sensitivity_curve.png}
\caption{Latency increase of several ABs in ApproxNet under resource contention with respect to those under no contention. The input shape and outport depth of the branches are labeled.}
\label{fig:sensitivity}
\end{minipage}
\end{figure*}
\subsection{Frame Complexity Estimator (FCE)}
\label{subsec_FCE}
The design goal of the Frame Complexity Estimator (FCE), which executes online, is to estimate the expected accuracy of each AB in a content-aware manner. It is composed of a Frame Complexity Categorizer (FCC) and a Scene Change Detector (SCD) and it makes use of information collected by the offline profiler (described in Section~\ref{sec:profiler}). The workflow of the FCE is shown in Figure~\ref{fig:fce}.
\noindent \textbf{Frame Complexity Categorizer (FCC)}. The FCC determines how hard it is for {ApproxNet}\xspace to classify a frame of the video. Various methods have been used in the literature to estimate frame complexity, such as edge information-based methods~\cite{edge-compression-complexity,edge-complexity-2}, compression information-based methods~\cite{edge-compression-complexity}, and entropy-based methods~\cite{cardaci2009fuzzy}. In this paper, we use the mean edge value as the feature for the frame complexity category, since it can be computed with very low overhead (3.9 ms per frame on average in our implementation). Although one can construct counterexamples in which the edge value does not track classification difficulty, we show empirically that with this feature the FCE predicts the accuracy of each AB well over a large dataset.
To expand, we extract an edge map by converting a color frame to a gray-scale frame, applying the Scharr operator~\cite{jahne1999handbook} in both horizontal and vertical directions, and then calculating the L2 norm across the two directions. We then compute the mean edge value of the edge map and use a pre-trained set of boundaries to quantize it into one of several frame complexity categories. The number and boundaries of the categories are discussed in Section~\ref{sec:profiler}. Figure~\ref{fig:sample_edge_maps} shows examples of frames and their edge maps from a few different complexity categories.
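The mean edge value computation can be sketched in plain numpy (we hand-roll a 3x3 valid convolution to stay self-contained; a real implementation would use an optimized library routine, and the function names are ours):

```python
import numpy as np

# Scharr kernels for horizontal and vertical gradients.
SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=float)
SCHARR_Y = SCHARR_X.T

def _filter2d_valid(img, k):
    """3x3 'valid' cross-correlation written with plain numpy slicing."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def mean_edge_value(gray):
    """Mean of the L2 norm of the horizontal/vertical Scharr responses."""
    gx = _filter2d_valid(gray, SCHARR_X)
    gy = _filter2d_valid(gray, SCHARR_Y)
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())
```

The resulting scalar would then be quantized into a complexity category with the pre-trained boundaries, e.g. via `np.digitize`.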
\noindent \textbf{Scene Change Detector (SCD)}. The Scene Change Detector further reduces the online overhead of the FCC by determining whether the content in a frame is significantly different from that in a prior frame, in which case the FCC is invoked. The SCD tracks a histogram of pixel values and declares a scene change when the mean of the absolute difference across all bins of the histograms of two consecutive frames is greater than a certain threshold (45\% of the total pixels in our design). To bound the execution time of the SCD, we use only the R-channel and downsample the frame to $112 \times 112$. We empirically find that these optimizations do not reduce the accuracy of detecting new scenes but do reduce the SCD overhead to only 1.3 ms per frame.
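The SCD test can be sketched as a histogram difference on the downsampled R-channel. The bin count below is our choice, and we interpret the 45\% threshold as a bound on the summed absolute bin difference relative to the pixel count:

```python
import numpy as np

def scene_changed(prev_r, curr_r, bins=32, thresh=0.45):
    """Flag a scene change from R-channel histograms of two frames.

    `prev_r` and `curr_r` are the downsampled (e.g. 112x112) R-channels
    of consecutive frames. A change is declared when the summed absolute
    histogram difference exceeds `thresh` of the total pixel count.
    """
    h1, _ = np.histogram(prev_r, bins=bins, range=(0, 256))
    h2, _ = np.histogram(curr_r, bins=bins, range=(0, 256))
    return np.abs(h1 - h2).sum() > thresh * prev_r.size
```

The FCC would be invoked only on frames for which this predicate fires, which is what keeps the amortized FCE cost low.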
\subsection{Resource Contention Estimator (RCE)}
\label{subsec_RCE}
The design goal of the Resource Contention Estimator (RCE), which also executes online, is to estimate the expected latency of each AB in a contention-aware manner. Under resource contention, each AB is affected differently; we call the latency increase pattern of a branch its \textit{latency sensitivity}. As shown in Figure~\ref{fig:sensitivity}, five approximation branches exhibit different ratios of latency increase under a given amount of CPU and memory bandwidth contention.
Ideally, we would use a sample classification task to probe the system and observe its latency under the current contention level $C$. The use of such micro-benchmarks is common in datacenter environments~\cite{lo2015heracles, xu2018pythia}. However, we do not need additional probing, since the inference latencies of the latest video frames form a natural observation of the contention level of the system. Thus, we use the average inference latency $\overline{L_B}$ of the current AB $B$ across the latest $N$ frames. We then consult the latency sensitivity $L_{B, C}$ of branch $B$ (an offline profile, discussed in Section~\ref{sec:profiler}) and obtain an estimated contention level $\hat{C}$ by the nearest-neighbor principle,
\begin{equation}
\hat{C} = \argmin_C \left| L_{B, C} - \overline{L_B} \right|
\end{equation}
By default, we use $N=30$, which corresponds to averaging over the last second at 30 fps. A smaller $N$ makes {ApproxNet}\xspace adapt faster to resource contention, while a larger $N$ makes it more robust to noise. Due to the limited observation data (one data point per frame), we cannot adapt to resource contention that changes faster than the frame rate.
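The nearest-neighbor lookup above amounts to an argmin over the offline latency profile; a sketch with hypothetical latencies and contention levels (the function name and numbers are ours):

```python
def estimate_contention(latency_profile, recent_latencies):
    """Nearest-neighbor contention estimate for the current AB.

    `latency_profile` maps a quantized contention level C to the
    offline latency L_{B,C} of the running branch B, and
    `recent_latencies` holds the last N per-frame latencies
    (N = 30 by default in the paper).
    """
    mean_lat = sum(recent_latencies) / len(recent_latencies)
    return min(latency_profile,
               key=lambda c: abs(latency_profile[c] - mean_lat))

# Hypothetical profile for one AB: latency (ms) at contention levels 0-3.
profile = {0: 33.0, 1: 41.0, 2: 55.0, 3: 80.0}
level = estimate_contention(profile, [52.0, 56.0, 57.0])  # mean 55.0
```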
Specifically in this work, we consider CPU, GPU and memory contention among tasks executing on the device (our SoC board shares the memory between the CPU and the GPU), but our design is agnostic to what causes the contention.
Our methodology considers the resource contention as a black-box model because we position ourselves as an application-level design instead of knowing the execution details of all other applications. We want to deal with the effect of contention, rather than mitigating it by modifying the source of the contention.
\subsection{Offline Profiler}
\label{sec:profiler}
\noindent \textbf{Per-AB Content-Aware Accuracy Profile}.
The boundaries of the frame complexity categories are determined by the criterion that all frames within a category should have an identical Pareto frontier curve (Figure~\ref{fig:pareto}) while frames in different categories should have distinct curves. We start by considering the whole set of frames as a single category and iteratively split the range of mean edge values in a binary manner until the above condition is satisfied. In our video datasets, we derive 7 frame complexity categories, with 1 being the simplest and 7 the most complex. To speed up the online accuracy estimation for any candidate approximation branch, once the 7 categories are derived, we create the offline accuracy profile $A_{B, F}$ for every AB $B$ and frame complexity category $F$.
\noindent \textbf{Per-AB Contention-Aware Latency Profile}.
{ApproxNet}\xspace is able to select ABs at runtime in the face of resource contention. Therefore, we perform offline profiling of the inference latency of each AB under different levels of contention. To study resource contention, we develop a synthetic contention generator (CG) with tunable ``contention levels'' to simulate resource contention and help {ApproxNet}\xspace profile and learn to react under such scenarios in real life. Specifically, we run each AB in {ApproxNet}\xspace against the CG at varying contention levels to collect its contention-aware latency profile. To reduce the profiling cost, we quantize the contention into 10 levels for GPU and 20 levels for CPU and memory, and then create the offline latency profile $L_{B, C}$ for each AB $B$ under each contention level $C$. Note that contention increases the latency of the DNN but does not affect its accuracy. Thus, offline profiling for accuracy and latency can be done independently and in parallel, reducing the profiling overhead.
\noindent \textbf{Switching Overhead Profile}.
Since we find that the overhead of switching between some pairs of ABs is non-negligible, we profile the switching latency between every pair of ABs offline. This cost is used in our optimization calculation to select the best AB.
\subsection{Scheduler}
\label{subsec:pareto}
The main job of the scheduler in {ApproxNet}\xspace is to select an AB to execute. The scheduler accepts a user requirement on either the minimum accuracy or the maximum latency per frame. It requests from the FCE a runtime accuracy profile $A_{B, \hat{F}} \forall B$ ($B$ is the variable for the AB and $\hat{F}$ is the frame category of the input video frame), and from the RCE a runtime latency profile $L_{B, \hat{C}} \forall B$ ($\hat{C}$ is the current contention level). Given a target accuracy or latency requirement, we can then select the AB to use by drawing the Pareto frontier for the current $(\hat{F}, \hat{C})$. If no point on the Pareto frontier satisfies the user requirement, {ApproxNet}\xspace picks the AB that achieves the metric value closest to the requirement. If the user does not set any requirement, {ApproxNet}\xspace sets a latency requirement equal to the frame interval of the incoming video stream. One subtlety arises from the cost of switching from one AB to another: the scheduler must account for this cost to avoid frequent switches whose benefit does not outweigh their cost.
To rigorously formulate the problem, we denote the set of ABs as $\mathcal{B}=\{B_1, ... B_N\}$ and the optimal AB the scheduler has to determine as $B_{opt}$. We denote the accuracy of branch $B$ on a video frame with frame complexity $F$ as $A_{B,F}$, the estimated latency of branch $B$ under contention level $C$ as $L_{B,C}$, the one-time switching latency from branch $B_p$ to $B_{opt}$ as $L_{B_p \rightarrow B_{opt}}$, and the expected time window over which this AB will be used as $W$ (in units of frames). For $W$, we use the average number of frames for which recently selected ABs have stayed unchanged; this term introduces hysteresis so that the system does not switch ABs back and forth too frequently. The constant system overhead per frame (due to the SCD, the FCC, and resizing the frame) is $L_0$. Thus, the optimal branch $B_{opt}$, given the latency requirement $L_\tau$, is:
\begin{equation}
B_{opt} = \argmax_{B \in \mathcal{B}} A_{B,F},~s.t.~L_{B,C}+\frac{1}{W}L_{B_p \rightarrow B}+L_0 \leq L_\tau
\label{eq:opt-latency-constraint}
\end{equation}
When the accuracy requirement $A_\tau$ is given,
\begin{equation}
B_{opt} = \argmin_{B \in \mathcal{B}} [L_{B,C}+\frac{1}{W}L_{B_p \rightarrow B}+L_0], s.t.~A_{B,F} \geq A_\tau
\label{eq:opt-latency}
\end{equation}
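A sketch of how the scheduler could evaluate these two selection rules, with the switching cost amortized over the expected window $W$; the branch names, latencies, and accuracies below are hypothetical:

```python
def select_ab(branches, acc, lat, switch, prev, W, L0,
              lat_req=None, acc_req=None):
    """Pick the optimal AB under a latency or accuracy requirement.

    acc[B] is A_{B,F} for the current frame category, lat[B] is L_{B,C}
    under the current contention level, switch[(Bp, B)] is the one-time
    switching latency (amortized over W frames), and L0 is the constant
    per-frame overhead.
    """
    def cost(b):  # L_{B,C} + (1/W) L_{Bp->B} + L0
        return lat[b] + switch.get((prev, b), 0.0) / W + L0

    if lat_req is not None:  # maximize accuracy under a latency SLA
        feasible = [b for b in branches if cost(b) <= lat_req]
        return max(feasible, key=lambda b: acc[b]) if feasible else None
    # otherwise: minimize cost under an accuracy SLA
    feasible = [b for b in branches if acc[b] >= acc_req]
    return min(feasible, key=cost) if feasible else None

branches = ["80x80-out1", "160x160-out4", "224x224-out6"]
acc = {"80x80-out1": 0.55, "160x160-out4": 0.72, "224x224-out6": 0.80}
lat = {"80x80-out1": 10.0, "160x160-out4": 25.0, "224x224-out6": 45.0}
switch = {("80x80-out1", "160x160-out4"): 60.0,
          ("80x80-out1", "224x224-out6"): 90.0}
best = select_ab(branches, acc, lat, switch, prev="80x80-out1",
                 W=30, L0=5.0, lat_req=33.3)  # a 30 fps frame budget
```

Note how the amortized switch term can exclude an otherwise attractive branch when the frame budget is tight, which is exactly the hysteresis effect $W$ is meant to provide.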
\section{Introduction} \label{sec:intro}
Novae are a sub-type of cataclysmic variables that present at least one recorded high-amplitude eruption. Nova systems are composed of a white dwarf (primary star) and a less evolved secondary star (usually main sequence or red giant). The secondary transfers mass to the primary through an accretion disk or column. When the accreted surface layer achieves the pressure necessary to start hydrogen burning, the gas is ignited in an unstable CNO-cycle runaway \citep{Warner}. In quiescence, nova spectra are dominated by the H and He recombination lines of the disk, which contribute to the unresolved nova spectra as soon as accretion is reestablished. To obtain information about the host white dwarf and the physical properties of the eruption and of the hot remnant, we need to observe the emission and kinematics of the ejected gas itself.
Among the few hundred detected novae, there is a select group of $\sim 20$ objects classified as neon novae. These are believed to host O-Ne-Mg white dwarfs, which would explain the overabundance of neon, since this element is not produced in the eruption nucleosynthesis \citep{Williams1991}. From this group, only 5 objects have had their shells resolved, namely: V1974 Cyg, V351 Pup and QU Vul with HST observations \citep{Paresce,Downes,Krautter}, V1494 Aql with the Russian BTA Telescope \citep{Barsukova}, and V382 Vel with SOAR-SAMI (this paper) and SALT \citep{Tomov}, independently. In contrast, $26$ resolved nova shells, in total, are available in the literature \citep{Camargo}.
Neon novae occur in high-gravity regimes, with massive O-Ne-Mg white dwarfs that may be close to or approaching the Chandrasekhar limit. O-Ne-Mg cores may be formed by different processes: in single stars of initial mass between 8 and 12 M$_{\odot}$ \citep{Nomoto1984}, in close binary systems with mass transfer \citep{Isern}, and by the merging of two CO cores \citep{Nomoto1985}. The study of O-Ne-Mg white dwarfs in binary systems is important to constrain the physical conditions in which they occur and the possibility of such systems becoming supernovae \citep{Nomoto1987}. One key piece of information needed to understand neon nova outbursts and the evolution of O-Ne-Mg white dwarfs is the accurate neon and oxygen abundance of each object. Nevertheless, these abundances have often been derived from simplistic models, usually assuming a uniform or symmetric mass distribution in the shell. For reliable abundance estimates, high spatial resolution 2-D spectroscopy is essential to build complete 3-D photoionization models \citep{Moraes}.
It is quite difficult to obtain spatially resolved spectra of nova shells due to small angular diameters and surface brightness constraints, combined with the presence of a bright central source. It may take some years after the eruption for the gas to expand far enough to be properly observed. On the other hand, the central source becomes fainter with time, and its ionization decreases with time and distance within the shell, making it more difficult to observe the shell line emission. In this scenario, one technique that facilitates the resolution of structures in nova shells is narrow band imaging aided by adaptive optics (AO).
In this paper we use the SOAR adaptive optics module imager (SAMI) to map the shell of nova V382 Vel. This nova was discovered on May 22nd, 1999 at $V = 3.1$ \citep{V382VelDisc}, making it one of the brightest novae ever detected. V382 Vel is a fast nova, with $t_{3} = 17.5$ days \citep{Liller}. \cite{Woodward} classified V382 Vel as a neon nova after the detection of [Ne II] $12.81$ $\mu$m in IR spectra taken in July 1999. X-ray data taken in the early eruption phases suggest an internal shock, caused by two consecutive ejections or by the interaction between the accelerated gas and the material from the expanded pseudo-photosphere. Six months after the eruption, X-ray observations showed a supersoft spectrum \citep{Mukai, Orio}.
\section{SOAR-SAMI observations} \label{sec:sami}
Continuum, H$\alpha$ and [O III] $\lambda 5007$ narrow band images of V382 Vel were obtained using the SOAR imager with the adaptive optics module (SAM) at $\Delta t = 5781$ days after eruption. SAM delivered an image quality (FWHM) of $\sim 0.54"$ for the H$\alpha$ and continuum filters and $\sim 0.60"$ for the [O III] one, making it possible to acquire a detailed emission map of the shell. We obtained 2 exposures of $1200$ s for the H$\alpha$ and [O III] $5007$\AA\ filters and 2 exposures of $600$ s for the y-Str\"{o}mgren (continuum) filter. The photometric standards LTT 1788 and LTT 3864 were used for flux calibration.
Basic reduction was performed, applying bias and sky-flat corrections. LTT 3864 images were used for the flux calibration, while LTT 1788 images were only used to crosscheck the sensitivity ratios between filters, since they were taken through thin cirrus clouds. We combined the images for each filter and performed aperture photometry and Richardson-Lucy deconvolution with IRAF tools \citep{Iraf}. As the PSFs in the H$\alpha$ and y-Str\"{o}mgren images were narrower than in the [O III] image, we degraded them to match the [O III] image PSF. The emission of the central source in the [O III] $5007$\AA\ band corresponds only to continuum emission, as neither the disk nor the nova pseudo-photosphere is expected to produce forbidden lines. Therefore, we fitted a Gaussian function to the PSF of the central source and subtracted it from the image; the result is the isolated [O III] $5007$\AA\ contribution from the shell. Disentangling the central source continuum and shell contributions in H$\alpha$ is more complex, because all structures emit strong hydrogen lines. Using the flux of the y-Str\"{o}mgren image, we isolated the H$\alpha$ line emission in our image. Then, we fitted a Gaussian function to the central source PSF and subtracted it from the image, leaving only the H$\alpha$ contribution from the shell. The resulting images of the H$\alpha$ and [O III] $5007$\AA\ emission from the shell are displayed in Figure \ref{sami}.
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{v382_shell.eps}
\caption{Left: H$\alpha$ (top) and [O III] (bottom) calibrated images obtained by SOAR-SAMI of nova V382 Vel. Right: Reduced, deconvolved H$\alpha$ (top) and [O III] (bottom) shell images of V382 Vel with both continuum contribution and central source emission subtracted.}
\label{sami}
\end{center}
\end{figure}
\section{V382 Vel shell}
\subsection{Angular diameter}
The H$\alpha$ image reveals a thin and roughly spherical shell, with a few resolved clumps. The major and minor axes are $9.9$ arcsec and $8.9$ arcsec, respectively, with a mean value of $9.4$ arcsec (see Figure \ref{overplot}). \citet{Tomov} measured an angular diameter of $12$ arcsec for the shell from their H$\alpha+$[N II] narrow band imaging with the Fabry-Perot Robert Stobie Spectrograph at SALT.
The [O III] $5007$\AA\ image shows a bipolar structure instead of a complete shell. This structure has a slightly larger angular diameter, of $10.9$ arcsec, and its clumps are not completely consistent with the H$\alpha$ ones. The difference in size, and perhaps the clump mismatch, may be due to the density distribution associated with expansion velocity gradients: the H$\alpha$ emission occurs in denser regions than the [O III] emission, and the density is lower in the outer part of the shell. Figure \ref{overplot} also shows the H$\alpha$ emission map with the [O III] emission as a contour plot, stressing the difference in shell size.
Most novae with high expansion velocities present spherically symmetric shells, while those with low expansion velocities may present bipolar structures \citep{Camargo}. The shell morphology of V382 Vel may also be related to the occurrence of two successive eruption peaks, indicated by the X-ray detection of an internal shock soon after outburst maximum \citep{Mukai, Orio}. If a low-velocity eruption was followed by a high-velocity one, the first would have produced the bipolar structure seen in the [O III] profile, while the second may have produced the almost spherical shell seen in the H$\alpha$ band.
We do not have information about the inclination of the system, so we do not know whether the bipolar structure lies in the orbital plane or perpendicular to the binary orbit. If the [O III] structure is perpendicular to the orbital plane, it may indicate the presence of the disk, generating an anisotropic ionizing field, as in the cases of HR Del \citep{MoraesHRDel} and V723 Cas \citep{TakedaV723Cas}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{shell_overplot.eps}
\caption{Deconvolved shell image in H$\alpha$ filter with overlapping [O III] contours. The shell major and minor axes are displayed.}
\label{overplot}
\end{center}
\end{figure}
\subsection{Expansion velocity and parallax distance}
Our analysis and models consider a spherical shell with a radius of $4.95$ arcsec, taking the [O III] emission as the outer boundary. Different expansion-parallax distances were derived from the expansion velocities reported by several authors. The expansion velocities for V382 Vel were measured as the HWZI of the emission lines, so the values found in the literature vary broadly, possibly because they are highly dependent on the continuum S/N. In some cases, other broadening effects unrelated to the gas group velocity may be present. The velocity of $\sim 1200$ km/s found by \citet{Augusto} during the first 800 days after eruption yields a distance of $d = 810$ pc. \citet{Tomov} obtained a distance of $800$ pc, using their velocity measurement of $1800$ km/s and their angular diameter of $12.0$ arcsec. Using the same velocity of $1800$ km/s and the shell angular diameter obtained by SOAR-SAMI, we obtain a distance of $1214$ pc.
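The expansion-parallax distances quoted above all follow from $d = v_{exp}\,t/\theta$, with $\theta$ the angular radius. A minimal sketch of this scaling is given below; the elapsed time of $\sim 16$ years between the May 1999 eruption and the SOAR-SAMI epoch is an assumption for illustration, not a value stated in the text.

```python
# Expansion-parallax distance sketch: d = v_exp * t / theta.
# The elapsed time t (~16 yr from eruption to the SOAR-SAMI epoch) is an
# assumed value for illustration.
CM_PER_PC = 3.086e18
ARCSEC_TO_RAD = 4.8481e-6
YR_TO_S = 3.156e7

def expansion_parallax_pc(v_exp_kms, t_yr, theta_arcsec):
    """Distance from an angular radius theta and a constant expansion velocity."""
    radius_cm = v_exp_kms * 1e5 * t_yr * YR_TO_S      # linear shell radius
    return radius_cm / (theta_arcsec * ARCSEC_TO_RAD) / CM_PER_PC

# Angular radius of 4.95 arcsec (this work) with v ~ 1200 km/s (Augusto et al.)
d_low = expansion_parallax_pc(1200.0, 16.0, 4.95)
# Velocity ~2600 km/s needed to approach the Gaia DR2 distance of ~1.79 kpc
d_gaia = expansion_parallax_pc(2600.0, 16.0, 4.95)
```

Under these assumptions the two velocities give roughly $0.8$ kpc and $1.8$ kpc, reproducing the scaling between the literature estimates.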
Gaia DR2 \citep{Gaia2016,Gaia2018} contains a parallax value of $0.56$ mas, with an error of $\sim 10\%$, for V382 Vel, leading to a distance of $1.79$ kpc. To match such a distance with our angular radius, the average expansion velocity should be around $2600$ km/s. Although higher than the velocities found by \citet{Augusto} and \citet{Tomov}, this value is still compatible with the speed class of V382 Vel (fast nova). The distance of $1.79$ kpc is also consistent with the values derived by \citet{DellaValle} and \citet{HachisuNeon}, of $1.7$ kpc (from MMRD) and $1.6$ kpc (from a distance-reddening relation), respectively. \citet{ShoreV382Vel} gave a higher estimate of $2.5$ kpc using observational constraints such as the presence of interstellar lines in the spectra and the maximum luminosity. In the following simulations, a distance $d=1.79$ kpc and an outer shell radius $r = 1.4 \times 10^{17}$ cm were assumed.
\subsection{Line fluxes and dereddening}
As noted above, we calibrated the V382 Vel data using the spectrophotometric standard LTT 3864. We also corrected the fluxes for interstellar extinction using $E(B-V) = 0.38$, taken from the IRSA Galactic Dust Reddening and Extinction calculator, which uses dust reddening from \citet{SandF}. \citet{DellaValle} derived $E(B-V)=0.1$ using the equivalent widths of interstellar Na lines in their optical spectra; as these EWs may be close to saturation, we decided to use the value from the galactic dust map. The final fluxes obtained for the shell emission lines are $f_{H\alpha}=9.9\times 10^{-16}$ erg/s/cm$^2$ and $f_{[O III]}=7.7\times 10^{-16}$ erg/s/cm$^2$. The flux ratio between [O III] and H$\alpha$ of $0.78$ is relatively high among evolved neon nova shells. In the optical spectrum of \citet{Tomov}, the ratio between the [O III] and H$\alpha$ fluxes is roughly $0.1$, indicating that their spectrum is already dominated by the bright hydrogen lines from the accretion disk.
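The dereddening step amounts to multiplying each observed flux by $10^{0.4 A_\lambda}$ with $A_\lambda$ set by $E(B-V)$. A minimal sketch follows; the band-to-visual extinction ratios used ($A_\lambda/A_V \approx 0.81$ at H$\alpha$ and $\approx 1.13$ at $5007$ \AA) are approximate values for a CCM-like curve with $R_V = 3.1$, assumed here for illustration only.

```python
# Interstellar dereddening sketch: F_0 = F_obs * 10**(0.4 * A_lambda).
# The A_lambda/A_V ratios below are approximate (CCM-like curve, R_V = 3.1)
# and are assumptions for illustration.
E_BV = 0.38
R_V = 3.1
A_V = R_V * E_BV

def deredden_factor(a_ratio, a_v=A_V):
    """Multiplicative flux correction for a band with A_lambda = a_ratio * A_V."""
    return 10.0 ** (0.4 * a_ratio * a_v)

f_halpha = deredden_factor(0.81)   # correction factor at H-alpha
f_oiii = deredden_factor(1.13)     # correction factor at [O III] 5007 A
```

With these assumptions the H$\alpha$ and [O III] fluxes are corrected upward by factors of roughly 2.4 and 3.4, respectively.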
We have estimated that the flux loss due to the broad line wings and the limited width of the narrow band filters is less than $10\%$ for both bands, using previous spectroscopic observations \citep{Tomov}. Such loss is not relevant when compared to other uncertainties in our models.
\subsection{Deprojection of the shell H density}
Even though our images suggest that the shell of V382 Vel is not completely spherical, we have no information about its eccentricity, due to projection effects. Thus, in the following analysis we assume a clumpy structure added to a diffuse spherical component. Under this simplifying spherical assumption, it was possible to use the projected recombination line emissivity to estimate a 3D mass distribution. First, we isolated the clumps from the diffuse gas by measuring the median flux value as a function of radius. To obtain a volume emissivity from the surface brightness, we used an inverse Abel transformation \citep{Park} with a polynomial fit. The volume emissivity distribution was then rescaled into a density distribution, assuming a fully ionized gas at a trial temperature of $10,000$ K \citep{Osterbrock}. The density gradient (figure \ref{abel}) was smoothed at the borders to guarantee the convergence of the photoionization calculations and then used to produce a 3D density map.
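For a spherically symmetric source, the inverse Abel transform recovers the volume emissivity $\epsilon(r)$ from the projected surface brightness $I(y)$. A simple discrete version is "onion peeling": the shell is sliced into concentric layers of constant emissivity and the resulting triangular system is solved from the outside in. This is a sketch of the technique, not the polynomial-fit implementation used in the paper.

```python
import numpy as np

# Onion-peeling sketch of the inverse Abel transform: discretize the sphere
# into concentric layers of constant emissivity eps_j and solve the
# triangular system I_i = sum_j L_ij * eps_j from the outermost layer inward,
# where L_ij is the chord length of sight line i through layer j.
def onion_peel(I, dr=1.0):
    """I[i]: surface brightness at impact parameter y_i = i * dr."""
    n = len(I)
    r = np.arange(n + 1) * dr                  # layer boundaries
    eps = np.zeros(n)
    for i in range(n - 1, -1, -1):             # outermost layer first
        y = i * dr
        # chord lengths through each layer j >= i along this line of sight
        L = 2.0 * (np.sqrt(np.maximum(r[i + 1:]**2 - y**2, 0.0))
                   - np.sqrt(np.maximum(r[i:-1]**2 - y**2, 0.0)))
        residual = I[i] - np.dot(L[1:], eps[i + 1:])
        eps[i] = residual / L[0]
    return eps
```

As a consistency check, a uniform sphere of radius $R$ and unit emissivity projects to $I(y) = 2\sqrt{R^2 - y^2}$, and the inversion recovers a flat profile of 1.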
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{novo_hden.eps}
\caption{Diffuse hydrogen density radial profile derived from Abel inverse transform.}
\label{abel}
\end{center}
\end{figure}
The features in the residual image (obtained by subtracting the radial median flux from the H$\alpha$ image) were treated as clumps. We applied a 2D Fourier transform to the residual image to obtain a characteristic scale of the structures. The power spectrum indicated a predominance of components with diameter $d=9\times 10^{15}$ cm, a value chosen to define the clump size. The coordinates of the three brightest peaks in the residual image were used as the projected clump centers, while the positions along the line of sight were randomly chosen within the shell. For each clump, assumed to have a Gaussian radial gas density distribution, the integrated flux was scaled to the observed value. These clumps were added to the 3D hydrogen density map.
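The characteristic-scale measurement works because the 2D power spectrum of the residual image peaks at the wavenumber of the dominant structures. The sketch below illustrates the idea on a synthetic sinusoidal pattern of known wavelength; the actual analysis applies the same operation to the H$\alpha$ residual image.

```python
import numpy as np

# Sketch of the clump-scale estimate: locate the strongest non-DC component
# of the 2D power spectrum and convert its wavenumber to a spatial scale.
def dominant_scale(img):
    """Return the wavelength (in pixels) of the strongest Fourier component."""
    p = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    k = np.hypot(fy, fx)                # radial wavenumber [cycles/pixel]
    p[k == 0] = 0.0                     # discard the mean (k = 0) term
    return 1.0 / k.flat[np.argmax(p)]

# Synthetic test pattern: vertical stripes with a 16-pixel period
x = np.arange(128)
img = np.sin(2 * np.pi * x[None, :] / 16.0) * np.ones((128, 1))
scale = dominant_scale(img)             # recovers ~16 pixels
```

In the real image the recovered scale in pixels is then converted to a physical diameter using the pixel scale and the adopted distance.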
\subsection{Hydrogen mass}
Using the 3D shell H density distribution, combined with the estimated distance and shell radius, we derived a total hydrogen mass of $M_{shell}=6.2\times 10^{-5}$ M$_{\odot}$. This value leads to an estimate of the mean hydrogen density of $n_{H} \sim 15$ cm$^{-3}$. For a hydrogen mass fraction $X=0.46$, calculated from the abundances of table \ref{tab:inputpar}, the total ejected mass is $M_{shell}=1.4\times 10^{-4}$ M$_{\odot}$. This value should be considered an upper limit for the ejected mass, since the observed H$\alpha$ flux may include a contribution from [N II] lines due to the width of the filter. Our estimate is compatible with the hydrogen mass of $M_{shell} = 4\times 10^{-4}$ M$_{\odot}$ derived by \citet{ShoreV382Vel} from UV He II observed fluxes compared to photoionization predictions; their models assumed $d=2.5$ kpc, $E(B-V)=0.2$, two symmetric shell components with different densities, and a covering factor of $0.8$. \citet{DellaValle} found a lower value of $M_{shell}=6.5\times 10^{-6}$ M$_{\odot}$, using the H$\alpha$ emission from year 2000 observations ($\Delta t \sim 17$ months), although it is not clear how those authors separated the shell emission from the central source. \citet{HachisuNeon} derived $M_{shell}=4.8\times 10^{-6}$ M$_{\odot}$ at optical maximum in their light-curve models.
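The mass derivation reduces to integrating the hydrogen density over the shell volume, $M = \int n_H(r)\,m_H\,dV$. A minimal sketch with a flat trial density profile is shown below; the flat $10$ cm$^{-3}$ value is purely illustrative, as the actual calculation uses the deprojected 3D density map.

```python
import numpy as np

# Sketch of the shell-mass integral M = integral of n_H(r) * m_H dV,
# evaluated by the trapezoidal rule over spherical shells. The flat trial
# profile of 10 cm^-3 is an illustrative assumption.
M_H = 1.67e-24            # hydrogen atom mass [g]
M_SUN = 1.989e33          # solar mass [g]

def shell_mass_msun(r_cm, n_h):
    """Integrate a radial H density profile n_h(r) [cm^-3] over 4*pi*r^2 dr."""
    integrand = n_h * 4.0 * np.pi * r_cm**2
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_cm)
    return steps.sum() * M_H / M_SUN

r = np.linspace(6.0e16, 1.4e17, 400)              # r_in to r_out (table 1)
mass = shell_mass_msun(r, np.full_like(r, 10.0))  # flat 10 cm^-3 trial profile
```

With this trial profile the integral gives a mass of order $10^{-4}$ M$_{\odot}$, the same order as the value derived from the actual density map.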
\section{Photoionization models}
We used RAINY3D photoionization models \citep{Moraes}, which are based on Cloudy \citep{Ferland}, to explore the properties of the V382 Vel nova system. The input parameters listed in table \ref{tab:inputpar} were used together with the geometry constraint from the 3D hydrogen density map described above.
\begin{deluxetable}{cc}
\tablecaption{Input parameters \label{tab:inputpar}}
\tablehead{
\colhead{Parameter} & \colhead{Value}}
\startdata
dist & $d= 1.79$ $kpc^{[1]}$ \\
E(B-V) & $0.38^{[2]}$\\
$T_{CS}$ & $5.0-8.0$ $(\times10^4)$ $K^{[3]}$ \\
$log(L_{CS})$ & $35-36$ $erg/s^{[3]}$ \\
$r_{in}$ & $6.0\times10^{16}$ $cm^{[3]}$ \\
$r_{out}$ & $1.4\times 10^{17}$ $cm^{[3]}$ \\
$M_{H}$ & $1.4\times 10^{-4}$ $M_{\odot}^{[3]} $ \\
$log(n_{He}/n_{H})$ & $-0.6^{[4]}$ \\
$log(n_{C}/n_{H})$ & $-3.7^{[5]}$ \\
$log(n_{N}/n_{H})$ & $-2.8^{[5]}$ \\
$log(n_{O}/n_{H})$ & $-3.8^{[4]}$ to $-2.6^{[5]}$ \\
$log(n_{Ne}/n_{H})$ & $-3.0^{[4]}$ to $-2.7^{[5]}$ \\
$log(n_{Mg}/n_{H})$ & $-4.0^{[5]}$ \\
$log(n_{Al}/n_{H})$ & $-4.2^{[5]}$ \\
$log(n_{Si}/n_{H})$ & $-4.8^{[5]}$ \\
\enddata
\tablenotetext{1}{\citet{Gaia2016,Gaia2018}}
\tablenotetext{2}{\citet{SandF}}
\tablenotetext{3}{This paper.}
\tablenotetext{4}{\citet{Augusto}}
\tablenotetext{5}{\citet{ShoreV382Vel}}
\end{deluxetable}
The central source luminosity and temperature inputs were based on average values for novae $\sim 10$ years after eruption, since no X-ray data were taken at the same epoch as our observations. In 2000, \citet{Ness2005} found an integrated blackbody luminosity of $2\times 10^{36}$ erg/s, by which time the hydrogen burning had already turned off -- a value used as an upper limit in our models. These authors also found a central source temperature of $3\times10^{5}$ K, but it is expected to have declined greatly by the epoch of the SOAR-SAMI observations.
The first RAINY3D models were run without the condensations in order to limit the input parameter range, because the clumpy models are much more time-consuming. The results, displayed in figure \ref{rainy_models}, suggest an ionizing source of $T=60,000$ K, but the H$\alpha$ fluxes are lower than expected while the [O III] fluxes are higher. As in the case of nova V723 Cas \citep{TakedaV723Cas}, one possible solution that could better model V382 Vel is anisotropy of the ionizing radiation field caused by an accretion disk. It is reasonable to assume that the ionizing source is dominated by the accretion disk more than $15$ years after the V382 Vel eruption. The luminosities and temperatures used in our models are in fact compatible with accretion disks of $ 5\times 10^{-8} \leq \dot{M} \leq 1\times 10^{-7}$ M$_{\odot}/year$.
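The compatibility between the model luminosities and the quoted accretion rates can be checked with the standard disk luminosity $L \sim G M_{WD} \dot{M} / (2 R_{WD})$. A sketch follows; the white dwarf radius of $4.5\times 10^{8}$ cm for a $1.1$ M$_{\odot}$ primary is an assumed typical value.

```python
# Order-of-magnitude check of the accretion-disk luminosity,
# L_disk ~ G * M_WD * Mdot / (2 * R_WD), in cgs units.
# The white-dwarf radius (~4.5e8 cm for 1.1 M_sun) is an assumed value.
G = 6.674e-8              # gravitational constant [cgs]
M_SUN = 1.989e33          # solar mass [g]
YR_S = 3.156e7            # seconds per year
M_WD = 1.1 * M_SUN
R_WD = 4.5e8              # cm (assumed)

def disk_luminosity(mdot_msun_yr):
    """Disk luminosity [erg/s] for an accretion rate in M_sun/yr."""
    mdot = mdot_msun_yr * M_SUN / YR_S      # g/s
    return G * M_WD * mdot / (2.0 * R_WD)

L_lo = disk_luminosity(5e-8)    # lower end of the quoted Mdot range
L_hi = disk_luminosity(1e-7)    # upper end
```

The quoted $\dot{M}$ range indeed maps onto luminosities of a few $\times 10^{35}$ to $\sim 10^{36}$ erg/s, matching the model inputs.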
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{models_v382vel_gaia_upd_2.eps}
\caption{Rainy3D model integrated line fluxes as a function of the ionizing source temperature. The circles correspond to an ionizing source luminosity of $10^{35}$ $erg/s$, the plus signs to $10^{35.5}$ $erg/s$ and the `x' marks to $10^{36}$ $erg/s$, respectively. The dotted lines indicate the observed line fluxes.}
\label{rainy_models}
\end{center}
\end{figure}
Therefore, we ran the clumpy RAINY3D photoionization models including a standard-model accretion disk with $T_d=60,000$ K as the ionizing source. We assumed a typical value of M$_{WD}=1.1$ M$_{\odot}$ for an O-Ne-Mg white dwarf and a disk radius of $80\%$ of the primary Roche-lobe radius. The total luminosity of the disk was fixed at $10^{36}$ erg/s, because the anisotropic models usually require higher luminosities than isotropic ones in order to reproduce the total fluxes. The inclination of the disk is unknown; however, bipolar structures such as the one presented by the [O III] emission tend to be perpendicular to the disk \citep{TakedaV723Cas,HarveyGKPer}. Accretion disks parallel and perpendicular to the bipolar structure were therefore modeled and compared. In the first case, the ionizing flux towards the bipolar structure is the lowest and, in the second case, the highest.
Our best-fit model for a parallel disk is displayed in figure \ref{emmap}, as a projection of the resulting 3D emissivity maps for H$\alpha$ and [O III]. A logarithmic arbitrary intensity scale is used to emphasize faint structures. The total modeled fluxes of H$\alpha$ and [O III] are f$_{H\alpha}$=$4.6\times 10^{-15}$ erg/s/cm$^2$ and $1.3\times 10^{-15}$ erg/s/cm$^2$, which are 4.6 and 6.0 times higher than the observed values, respectively. The best-fit model for a perpendicular disk is also displayed in figure \ref{emmap}, with total modeled fluxes of f$_{H\alpha}$=$2.3\times 10^{-15}$ erg/s/cm$^2$ and $6.5\times 10^{-16}$ erg/s/cm$^2$, factors of 2.3 and 0.7 relative to the observations. The clump positions, sizes and shapes do not match exactly, but the model emissivities reproduce the observations in the sense that the [O III] clumps extend farther from the central source than the H$\alpha$ ones. Spatially resolved spectroscopy including transitions over a wide ionization range is needed to improve the models. For these best-fit models, with the disk either parallel or perpendicular to the bipolar structure, the oxygen abundance is revised to $log(n_{O}/n_{H}) = -3.8$, a value consistent with that obtained by \citet{Augusto}. \citet{ShoreV382Vel} found a higher O abundance, using photoionization models based on UV spectra, with a larger distance of $2.5$ kpc and a two-component spherical shell with different filling factors. A precise derivation of the O and Ne abundances can elucidate the stellar evolution in the binary system and the dredge-up and mixing processes at the surface of the white dwarf during eruption.
\begin{figure}
\begin{center}
\includegraphics[scale=0.60]{3d_emissivitymap_units.eps}
\includegraphics[scale=0.60]{3d_emissivitymap_perp_units.eps}
\caption{RAINY3D projected model emissivity maps for H$\alpha$ and [O III], for bipolar structure parallel (top) and perpendicular (bottom) to accretion disk, which is aligned horizontally. The surface brightness values are displayed in arbitrary units.}
\label{emmap}
\end{center}
\end{figure}
Given that we only have data for two transitions, our models are too poorly constrained to provide a unique solution for the shell emission. The differences between the [O III] and Balmer lines are likely due to a combination of local conditions that produce different collisional excitation and ionization of O$^{+}$, including the proposed anisotropic radiation. We are able to rule out collisional de-excitation effects on [O III], given the low densities involved.
\section{Discussion} \label{sec:discussion}
The use of adaptive optics made the detection of clumps in the V382 Vel shell possible, and the 2D Fourier transform gave a rough estimate of their scale. However, it is plausible that the structures we detect as clumps are actually large groups of smaller clumps, below the resolution of our AO-corrected images. ALMA observations of nova V5668 Sgr revealed dispersed structures of $\sim 10^{15}$ cm that had been interpreted as single large structures when observed with lower-resolution instruments \citep{ALMA}. Misinterpreting the clump size and filling factor can affect the H densities, since we considered the radial average emission as the diffuse component.
Asymmetries in nova shells have been extensively detected in the past decades, but they are usually attributed to anisotropic mass distributions, driven by ejection processes, post-maximum winds and the interaction of the expanding gas with the companion star and pre-existing circumstellar material \citep{Walder}. In our models we inserted symmetric clumps, yet owing to the anisotropic ionizing field they appear asymmetric in the emissivity maps. This may be taken as an indication that clump and mass distributions are not necessarily asymmetric, even if the projected observed fluxes are. The actual contribution of each factor to the observed asymmetries could eventually be studied by combining spatially resolved spectroscopy, 3D hydrodynamic simulations of clump evolution and 3D photoionization models.
In our analysis, we derived a total shell mass of $\sim 1.4 \times 10^{-4}$ M$_{\odot}$, which lies at the upper end of nova shell masses, especially for fast and/or neon novae. If the X-ray detection of a shock is due to a second eruption with larger expansion velocities, the ejected mass could be higher than expected for a single eruption. The high ejected mass favors the scenario in which O-Ne-Mg novae are not progenitors of neutron stars via accretion-induced collapse \citep{Nomoto}, as their mass loss overcomes the accretion.
V382 Vel belongs to the small group of confirmed neon novae. Among the objects of this group, only four other novae have had their shells resolved: V1494 Aql, V1974 Cyg, V351 Pup and QU Vul. As for V382 Vel, V1974 Cyg imaging also presented an inhomogeneous ring, or thin shell, and a bipolar structure \citep{Paresce,Paresce1995}. These structures appear differently in distinct ionization lines, a possible consequence of the accretion disk acting as the ionizing source. \citet{Krautter} derived a high ejected mass for V1974 Cyg, of $2.0\times 10^{-4}$ M$_{\odot}$, based on data from \citet{Woodward1995}, while \citet{HachisuNeon} found a value of $1.3\times 10^{-5}$ M$_{\odot}$ through light-curve models. \textit{HST} images of QU Vul and V351 Pup also show clumpy rings \citep{HST,Krautter}. \citet{SaizarQUVul} restricted the total ejected mass of QU Vul to the interval from $1.0\times 10^{-4}$ M$_{\odot}$ to $3.5\times 10^{-3}$ M$_{\odot}$, while \citet{HachisuNeon} derived an ejected mass of $2.5\times 10^{-5}$ M$_{\odot}$ by modeling the QU Vul light-curve. For V351 Pup, \citet{Wendeln} found a total ejected mass of $6.3\times 10^{-6}$ M$_{\odot}$, \citet{SaizarV351Pup} found $2.0\times 10^{-7}$ M$_{\odot}$ and \citet{HachisuNeon} found $2.0\times 10^{-5}$ M$_{\odot}$. \citet{Barsukova} describe the V1494 Aql shell as a thin spherical shell or a ring.
The observed shells of neon novae seem to present common morphological features. However, the ejected masses span a wide range, and in some cases the masses derived by different authors diverge broadly. The major problems in estimating shell masses are the actual gas distributions, or proper filling factors, and the errors in distances. The Gaia mission may solve the latter, but the former requires spatially resolved spectroscopy or multi-wavelength high-resolution imaging. Only then will we be able to characterize neon novae as a group and connect the observations to theoretical predictions.
\section{Conclusions}
The SOAR-SAMI observations presented a detailed view of a thin, roughly spherical shell in the H$\alpha$ filter and of a bipolar structure in the [O III] filter. Both images present clumps, although they are not aligned between the filters. We derived a shell radius of $1.4\times 10^{17}$ cm, directly from our images and the Gaia parallax distance, and an upper limit for the total shell mass of $\sim 1.4 \times 10^{-4}$ M$_{\odot}$. The H density distribution, the ionizing source temperature of $T_d=60,000$ K, a total central source luminosity of L=$10^{36}$ erg/s and the oxygen abundance in the ejecta were constrained from our photoionization models. These models also suggest that the gas is ionized by the re-established accretion disk.
\acknowledgments
We thank FAPESP for the support under grant 2014/10326-3 and CNPq funding under grant \#305657. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\vspace{5mm}
\facilities{SOAR(SAMI)}
\software{IRAF, Cloudy, RAINY3D}
\bibliographystyle{aasjournal}