FigAgent / 2003.13951 /paper_text /intro_method.md

Introduction

**Overall Architecture** The image encoding process is highlighted in part *a)*. The input monocular image is encoded using a ResNet encoder and then passed through the Self-Attention Context Module. The computed attention maps are then convolved with a 2D convolution whose number of output channels equals the number of dimensions of the Discrete Disparity Volume (DDV). The DDV is then projected into a 2D depth map by performing a *softargmax* across the disparity dimension, resulting in the lowest resolution disparity estimation (Eq. [[eq:DDV_low_res]](#eq:DDV_low_res){reference-type="ref" reference="eq:DDV_low_res"}). In part *b)* the pose estimator is shown, and part *c)* shows more details of the Multi-Scale decoder. The low resolution disparity map is passed through successive blocks of UpConv (nearest upsample + convolution). The DDV projection is performed at each scale, in the same way as in the initial encoding stage. Finally, each of the outputs is upsampled to input resolution to compute the photometric reprojection loss.{#fig:arch width="100%"}

Perception of the 3D world is one of the main tasks in computer/robotic vision. Accurate perception, localisation, mapping and planning capabilities are predicated on having access to correct depth information. Range finding sensors such as LiDAR or stereo/multi-camera rigs are often deployed to estimate depth for use in robotics and autonomous systems, due to their accuracy and robustness. However, in many cases it may be infeasible to deploy, or to rely solely on, such expensive or complex sensors. This has led to the development of learning-based methods [@saxena2006learning; @saxena2009make3d; @karsch2014depth], where the most successful approaches rely on fully supervised convolutional neural networks (CNNs) [@eigen2014depth; @eigen2015predicting; @fu2018deep; @guo2018learning; @mayer2018makes]. While supervised learning methods have produced outstanding monocular depth estimation results, ground truth RGB-D data is still limited in variety and abundance when compared with the RGB image and video data sets available in the field. Furthermore, collecting accurate and large ground truth data sets is a difficult task due to sensor noise and limited operating capabilities (due to weather conditions, lighting, etc.).

Recent studies have shown that it is instead possible to train a depth estimator in a self-supervised manner using synchronised stereo image pairs [@garg2016unsupervised; @godard2017unsupervised] or monocular video [@zhou2017unsupervised]. While monocular video offers an attractive alternative to stereo-based learning due to the widespread availability of training sequences, it poses many challenges. Unlike stereo-based methods, which have a known camera pose that can be computed offline, self-supervised monocular trained depth estimators need to jointly estimate depth and ego-motion to minimise the photometric reprojection loss function [@garg2016unsupervised; @godard2017unsupervised]. Any noise introduced by the pose estimator model can degrade the performance of a model trained on monocular sequences, resulting in large depth estimation errors. Furthermore, self-supervised monocular training makes the assumption of a moving camera in a static (i.e., rigid) scene, which causes monocular models to estimate 'holes' for pixels associated with moving visual objects, such as cars and people (i.e., non-rigid motion). To deal with these issues, many works focus on the development of new specialised architectures [@zhou2017unsupervised], masking strategies [@zhou2017unsupervised; @monodepth2; @vijayanarasimhan2017sfm; @luo2018every], and loss functions [@godard2017unsupervised; @monodepth2]. Even with all of these developments, self-supervised monocular trained depth estimators are less accurate than their stereo trained counterparts and significantly less accurate than fully supervised methods.

In this paper, we propose two new ideas to improve self-supervised monocular trained depth estimation: 1) self-attention [@non-local; @vaswani2017attention], and 2) discrete disparity volume [@kendall2017end]. Our proposed self-attention module explores non-contiguous (i.e., global) image regions as a context for estimating similar depth at those regions. Such an approach contrasts with the currently used local 2D and 3D convolutions, which are unable to explore such global context. The proposed discrete disparity volume enables the estimation of more robust and sharper depth estimates, as previously demonstrated by fully supervised depth estimation approaches [@kendall2017end; @liu2019neural]. Sharper depth estimates are important for improving accuracy, and increased robustness is desirable to allow self-supervised monocular trained depth estimation to address common mistakes made by the method, such as incorrect pose estimation and matching failures caused by uniform textural details. We also show that our method can estimate pixel-wise depth uncertainties with the proposed discrete disparity volume [@kendall2017end]. Depth uncertainty estimation is important for refining depth estimation [@fu2018deep], and in safety critical systems [@kendall2017uncertainties], allowing an agent to identify unknowns in an environment in order to reach optimal decisions. As a secondary contribution of this paper, we leverage recent advances in semantic segmentation network architectures that allow us to train larger models on a single GPU machine. Experimental results show that our novel approach produces the best self-supervised monocular depth estimation results for KITTI 2015 and Make3D. We also show in the experiments that our method is able to close the gap with self-supervised stereo trained and fully supervised depth estimators.

Method

In the presentation of our proposed model for self-supervised monocular trained depth estimation, we focus on showing the importance of the main contributions of this paper, namely self-attention and discrete disparity volume. We use as our baseline the Monodepth2 model [@monodepth2], which is based on a UNet architecture [@ronneberger2015u].

We represent the RGB image with $\mathbf{I}:\Omega \rightarrow \mathbb R^3$, where $\Omega$ denotes the image lattice of height $H$ and width $W$. The first stage of the model, depicted in Fig. [2](#fig:arch){reference-type="ref" reference="fig:arch"}, is the ResNet-101 encoder, which forms $\mathbf{X} = resnet_{\theta}(\mathbf{I}_t)$, with $\mathbf{X}:\Omega_{1/8} \rightarrow \mathbb R^{M}$, $M$ denoting the number of channels at the output of the ResNet, and $\Omega_{1/8}$ representing the low-resolution lattice at $(1/8)^{th}$ of its initial size in $\Omega$. The ResNet output is then used by the self-attention module [@non-local], which first forms the query, key and value results, represented by: $$\begin{equation} \begin{split} f(\mathbf{X}(\omega)) & = \mathbf{W}_f\mathbf{X}(\omega), \\ g(\mathbf{X}(\omega)) & = \mathbf{W}_g\mathbf{X}(\omega), \\ h(\mathbf{X}(\omega)) & = \mathbf{W}_h\mathbf{X}(\omega), \label{eq:self_attention_defintion_key_query_value} \end{split} \end{equation}$$ respectively, with $\mathbf{W}_f,\mathbf{W}_g,\mathbf{W}_h \in \mathbb R^{N \times M}$. The query and key values are then combined with $$\begin{equation} \mathbf{S}_{\omega} = softmax(f(\mathbf{X}(\omega))^T g(\mathbf{X}) ), \label{eq:self_attention_query_times_key} \end{equation}$$ where $\mathbf{S}_{\omega}: \Omega_{1/8} \rightarrow [0,1]$, and we abuse the notation by representing $g(\mathbf{X})$ as a tensor of size $N \times H/8 \times W/8$. The self-attention map is then built by the multiplication of the value and $\mathbf{S}_{\omega}$ from [eq:self_attention_query_times_key]{reference-type="eqref" reference="eq:self_attention_query_times_key"}, with: $$\begin{equation} \mathbf{A}(\omega) = \sum_{\tilde{\omega} \in \Omega_{1/8}} h(\mathbf{X}(\tilde{\omega})) \times \mathbf{S}_{\omega}(\tilde{\omega}), \label{eq:self_attention} \end{equation}$$ with $\mathbf{A}:\Omega_{1/8} \rightarrow \mathbb R^N$.
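The self-attention computation above can be sketched in NumPy. This is a minimal sketch, not the trained module: the channel counts $M$, $N$ and the flattened spatial size are illustrative, and the learned projections $\mathbf{W}_f$, $\mathbf{W}_g$, $\mathbf{W}_h$ are replaced by random placeholder matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wf, Wg, Wh):
    """X: (M, P) features flattened over the P = H/8 * W/8 positions.
    Returns A: (N, P), one attended feature vector per position."""
    f = Wf @ X                       # query, (N, P)
    g = Wg @ X                       # key,   (N, P)
    h = Wh @ X                       # value, (N, P)
    S = softmax(f.T @ g, axis=-1)    # (P, P): S[i, j] = weight of position j for query i
    A = h @ S.T                      # A[:, i] = sum_j h[:, j] * S[i, j]
    return A

rng = np.random.default_rng(0)
M, N, P = 8, 4, 6                    # illustrative channel counts and spatial size
X = rng.standard_normal((M, P))
Wf, Wg, Wh = (rng.standard_normal((N, M)) for _ in range(3))
A = self_attention(X, Wf, Wg, Wh)
assert A.shape == (N, P)
```

Because every query attends over every spatial position, each output vector aggregates context from the whole low-resolution lattice, which is the global-context property the convolutional baseline lacks.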

The low-resolution discrete disparity volume (DDV) is denoted by $\mathbf{D}_{1/8}(\omega) = conv_{3 \times 3}(\mathbf{A}(\omega))$, with $\mathbf{D}_{1/8}:\Omega_{1/8} \rightarrow \mathbb R^K$ ($K$ denotes the number of discretized disparity values), and $conv_{3 \times 3}(.)$ denoting a convolutional layer with filters of size $3 \times 3$. The low resolution disparity map is then computed with $$\begin{equation} \sigma(\mathbf{D}_{1/8}(\omega)) = \sum_{k=1}^{K} softmax(\mathbf{D}_{1/8}(\omega)[k]) \times disparity(k), \label{eq:DDV_low_res} \end{equation}$$ where $softmax(\mathbf{D}_{1/8}(\omega)[k])$ is the softmax result of the $k^{th}$ output from $\mathbf{D}_{1/8}$, and $disparity(k)$ holds the disparity value for $k$. Given the ambiguous results produced by these low-resolution disparity maps, we follow the multi-scale strategy proposed by Godard et al. [@monodepth2]. The low resolution map from [eq:DDV_low_res]{reference-type="eqref" reference="eq:DDV_low_res"} is the first step of the multi-scale decoder, which consists of three additional stages of upconv operators (i.e., nearest upsample + convolution) that receive skip connections from the ResNet encoder at the respective resolutions, as shown in Fig. [2](#fig:arch){reference-type="ref" reference="fig:arch"}. These skip connections between encoding layers and associated decoding layers are known to retain high-level information in the final depth output. At each resolution, we form a new DDV, which is used to compute the disparity map at that particular resolution. The resolutions considered are (1/8), (1/4), (1/2), and (1/1) of the original resolution, respectively represented by $\sigma(\mathbf{D}_{1/8})$, $\sigma(\mathbf{D}_{1/4})$, $\sigma(\mathbf{D}_{1/2})$, and $\sigma(\mathbf{D}_{1/1})$.
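The *softargmax* projection of Eq. [eq:DDV_low_res]{reference-type="ref" reference="eq:DDV_low_res"} can be sketched as follows. This is a minimal NumPy sketch: the bin values in `disp_values` are assumed to be linearly spaced (the paper does not fix the spacing here), and the logits are random placeholders for the convolution output.

```python
import numpy as np

def softargmax_disparity(D, disp_values):
    """D: (K, H, W) discrete disparity volume logits.
    disp_values: (K,) disparity value of each bin, i.e. disparity(k).
    Returns an (H, W) map of expected (sub-bin) disparities per pixel."""
    e = np.exp(D - D.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)          # softmax over the K bins
    return np.tensordot(disp_values, p, axes=1)   # sum_k disparity(k) * p[k]

K, H, W = 16, 4, 5
disp_values = np.linspace(0.01, 0.3, K)           # assumed linear bin spacing
rng = np.random.default_rng(1)
D = rng.standard_normal((K, H, W))
disp = softargmax_disparity(D, disp_values)
assert disp.shape == (H, W)
```

Because the output is a convex combination of the bin values, the estimate varies smoothly between bins, and the softmax weights themselves can be read as a per-pixel distribution over disparities, which is what enables the uncertainty estimates discussed in the introduction.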

Another essential part of our model is the pose estimator [@zhou2017unsupervised], which takes two images recorded at two different time steps, and returns the relative transformation, as in $$\begin{equation} \mathbf{T}_{t \rightarrow t'} = p_{\phi}(\mathbf{I}_t,\mathbf{I}_{t'}), \label{eq:pose} \end{equation}$$ where $\mathbf{T}_{t \rightarrow t'}$ denotes the transformation matrix between images recorded at time steps $t$ and $t'$, and $p_{\phi}(.)$ is the pose estimator, consisting of a deep learning model parameterised by $\phi$.
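Pose networks of this kind typically regress a 6-DoF vector that is converted into the $4 \times 4$ matrix $\mathbf{T}_{t \rightarrow t'}$. The sketch below shows one common conversion (axis-angle rotation via the Rodrigues formula plus a translation); the exact parameterisation used by the model is an assumption, as the paper does not specify it here.

```python
import numpy as np

def pose_vec_to_matrix(v):
    """v: (6,) = (rx, ry, rz, tx, ty, tz), axis-angle rotation + translation.
    Returns a 4x4 homogeneous transformation matrix T."""
    r, t = v[:3], v[3:]
    theta = np.linalg.norm(r)
    if theta < 1e-8:
        R = np.eye(3)                      # no rotation
    else:
        k = r / theta                      # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' rotation formula
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# quarter turn about z, unit translation along x
T = pose_vec_to_matrix(np.array([0.0, 0.0, np.pi / 2, 1.0, 0.0, 0.0]))
assert np.allclose(T[:3, :3] @ T[:3, :3].T, np.eye(3), atol=1e-8)  # valid rotation
```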

The training is based on the minimum per-pixel photometric re-projection error [@monodepth2] between the source image $\mathbf{I}_{t'}$ and the target image $\mathbf{I}_{t}$, using the relative pose $\mathbf{T}_{t \rightarrow t'}$ defined in [eq:pose]{reference-type="eqref" reference="eq:pose"}. The pixel-wise error is defined by $$\begin{equation} \ell_p = \frac{1}{|\mathcal{S}|}\sum_{s \in \mathcal{S}} \left ( \min_{t'} \mu^{(s)} \times pe(\mathbf{I}_t,\mathbf{I}^{(s)}_{t' \rightarrow t}) \right ), \label{eq:photo_rep_loss} \end{equation}$$ where $pe(.)$ denotes the photometric reconstruction error, $\mathcal{S}=\{ \frac{1}{8},\frac{1}{4},\frac{1}{2},\frac{1}{1} \}$ is the set of the resolutions available for the disparity map, defined in [eq:DDV_low_res]{reference-type="eqref" reference="eq:DDV_low_res"}, $t' \in \{ t-1, t+1 \}$, indicating that we use the two frames that are temporally adjacent to $\mathbf{I}_t$ as its source frames [@monodepth2], and $\mu^{(s)}$ is a binary mask that filters out stationary points (see Eq. [eq:automasking]{reference-type="ref" reference="eq:automasking"} below) [@monodepth2]. The re-projected image in [eq:photo_rep_loss]{reference-type="eqref" reference="eq:photo_rep_loss"} is defined by $$\begin{equation} \mathbf{I}^{(s)}_{t' \rightarrow t} = \mathbf{I}_{t'} \big < proj(\sigma(\mathbf{D}^{(s)}_t), \mathbf{T}_{t \rightarrow t'} , \mathbf{K}) \big >, \end{equation}$$ where $proj(.)$ represents the 2D coordinates of the projected depths $\mathbf{D}_t$ in $\mathbf{I}_{t'}$, $\big < . \big >$ is the sampling operator, and $\sigma(\mathbf{D}^{(s)}_t)$ is defined in [eq:DDV_low_res]{reference-type="eqref" reference="eq:DDV_low_res"}. 
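The per-pixel minimum over source frames can be sketched as below. This is a minimal single-scale NumPy sketch: it uses an L1-only `pe` for brevity (the full photometric error also includes an SSIM term), assumes the warping has already produced the re-projected source frames, and takes the auto-mask as a given binary array.

```python
import numpy as np

def min_reprojection_loss(target, warped_sources, mask):
    """target: (H, W, 3) frame I_t; warped_sources: list of (H, W, 3)
    source frames (t-1 and t+1) re-projected into frame t;
    mask: (H, W) binary auto-mask. L1-only pe, single scale, for brevity."""
    pe = np.stack([np.abs(target - w).mean(axis=-1) for w in warped_sources])
    per_pixel_min = pe.min(axis=0)        # min over the source frames t'
    return (mask * per_pixel_min).mean()

H, W = 4, 5
rng = np.random.default_rng(2)
target = rng.random((H, W, 3))
warped = [target + 0.1 * rng.random((H, W, 3)),   # good re-projection
          rng.random((H, W, 3))]                  # poor re-projection
mask = np.ones((H, W))
loss = min_reprojection_loss(target, warped, mask)
assert loss >= 0.0
```

Taking the per-pixel minimum rather than the average lets each pixel be supervised by whichever adjacent frame re-projects it best, which reduces artefacts at occlusions and image borders.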
Similarly to [@monodepth2], the pre-computed intrinsics $\mathbf{K}$ of all images are identical, and we use bi-linear sampling to sample the source images, with $$\begin{equation} pe(\mathbf{I}_t, \mathbf{I}^{(s)}_{t' \rightarrow t}) = \frac{\alpha}{2} (1 - \mathrm{SSIM}(\mathbf{I}_t, \mathbf{I}^{(s)}_{t' \rightarrow t})) + (1 - \alpha) \left | \mathbf{I}_t - \mathbf{I}^{(s)}_{t' \rightarrow t} \right |_1, \end{equation}$$ where $\alpha = 0.85$. Following [@godard2017unsupervised], we use an edge-aware smoothness regularisation term to improve the predictions around object boundaries: $$\begin{eqnarray} \ell_s &=& \left | \partial_x d^*_t \right | e^{-\left | \partial_x \mathbf{I}_t \right |} + \left | \partial_y d^*_t \right | e^{-\left | \partial_y \mathbf{I}_t \right |}, \label{eq:smoothness} \end{eqnarray}$$ where $d^*_t = d_t / \overline{d_t}$ is the mean-normalized inverse depth from [@wang2017learning], used to discourage shrinking of the estimated depth. The auto-masking of stationary points [@monodepth2] in [eq:photo_rep_loss]{reference-type="eqref" reference="eq:photo_rep_loss"} is necessary because the assumptions of a moving camera and a static scene are not always met in self-supervised monocular trained depth estimation methods [@monodepth2]. This masking filters out pixels whose appearance does not change between two frames in a sequence, and is achieved with a binary mask defined as $$\begin{equation} \mu^{(s)} = \big [ \min_{t'} pe(\mathbf{I}_t,\mathbf{I}^{(s)}_{t' \rightarrow t}) < \min_{t'} pe(\mathbf{I}_t,\mathbf{I}_{t'}) \big ], \label{eq:automasking} \end{equation}$$ where $[.]$ represents the Iverson bracket. 
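The edge-aware smoothness term of Eq. [eq:smoothness]{reference-type="ref" reference="eq:smoothness"} can be sketched as follows. This is a minimal NumPy sketch: gradients are taken as forward differences, and image gradients are averaged over colour channels, both common but assumed implementation choices.

```python
import numpy as np

def smoothness_loss(disp, img):
    """Edge-aware smoothness: disp (H, W), img (H, W, 3).
    Disparity gradients are down-weighted where image gradients are
    large, so depth is allowed to change sharply at image edges."""
    d = disp / (disp.mean() + 1e-7)                    # mean-normalised disparity d*_t
    dx_d = np.abs(d[:, 1:] - d[:, :-1])                # forward differences
    dy_d = np.abs(d[1:, :] - d[:-1, :])
    dx_i = np.abs(img[:, 1:] - img[:, :-1]).mean(axis=-1)
    dy_i = np.abs(img[1:, :] - img[:-1, :]).mean(axis=-1)
    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()

rng = np.random.default_rng(3)
img = rng.random((4, 5, 3))
flat_loss = smoothness_loss(np.ones((4, 5)), img)   # constant disparity -> zero penalty
noisy_loss = smoothness_loss(rng.random((4, 5)), img)
assert flat_loss == 0.0 and noisy_loss > 0.0
```

The exponential weighting is what makes the term edge-aware: where the image gradient is large, the penalty on the disparity gradient shrinks, so the regulariser smooths textureless regions without blurring object boundaries.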
The binary mask $\mu^{(s)}$ in [eq:automasking]{reference-type="eqref" reference="eq:automasking"} restricts the loss in [eq:photo_rep_loss]{reference-type="eqref" reference="eq:photo_rep_loss"} to the pixels where the re-projection error of the warped image $\mathbf{I}^{(s)}_{t' \rightarrow t}$ is lower than the error of the un-warped image $\mathbf{I}_{t'}$, indicating that the visual object is moving relative to the camera. The final loss is computed as the weighted sum of the per-pixel minimum reprojection loss in [eq:photo_rep_loss]{reference-type="eqref" reference="eq:photo_rep_loss"} and the smoothness term in [eq:smoothness]{reference-type="eqref" reference="eq:smoothness"}: $$\begin{equation} \ell = \ell_{p} + \lambda \ell_s, \label{eq:final-loss} \end{equation}$$ where $\lambda$ is the weighting for the smoothness regularisation term. Both the pose model and the depth model are trained jointly using this photometric reprojection error. Inference is achieved by taking a test image at the input of the model and producing the high-resolution disparity map $\sigma(\mathbf{D}_{1/1})$.

Qualitative results on the KITTI Eigen split  test set. Our models perform better on thinner objects such as trees, signs and bollards, as well as being better at delineating difficult object boundaries.