| # Introduction |
|
|
The development of robust models for visual understanding is tightly coupled with data annotation. For instance, a single self-driving car can produce about 1 TB of data every day. Since the environment changes constantly, new data must be annotated regularly.
|
|
Object segmentation provides a fine-grained scene representation and is useful in many applications, *e.g.* autonomous driving, robotics, and medical image analysis. However, the practical use of object segmentation is currently limited by extremely high annotation costs. Several large segmentation benchmarks [\[3,](#page-8-1) [12\]](#page-8-2) with millions of annotated object instances have come out recently. Annotation of these datasets became feasible only with the use of automated interactive segmentation methods [\[1,](#page-8-3) [3\]](#page-8-1).
|
|
Interactive segmentation has been a topic of research for a long time [\[27,](#page-9-0) [10,](#page-8-4) [11,](#page-8-5) [13,](#page-8-6) [2,](#page-8-7) [33,](#page-9-1) [18,](#page-8-8) [23,](#page-8-9) [15\]](#page-8-0). The main scenario considered in these papers is click-based segmentation, where the user provides input in the form of positive and negative clicks. Classical approaches formulate this task as an optimization problem [\[4,](#page-8-10) [10,](#page-8-4) [11,](#page-8-5) [13,](#page-8-6) [2\]](#page-8-7). These methods rely on many built-in heuristics and do not use semantic priors to their full extent, thus requiring a large amount of input from the user. On the other hand, deep learning-based methods [\[33,](#page-9-1) [18,](#page-8-8) [23\]](#page-8-9) tend to overuse image semantics. While showing great results on objects that were present in the training set, they tend to perform poorly on unseen object classes. Recent works propose different solutions to these problems [\[19,](#page-8-11) [18,](#page-8-8) [22\]](#page-8-12). Still, state-of-the-art networks for interactive segmentation either segment the object of interest accurately after a few clicks, or fail to provide a satisfactory result after any reasonable number of clicks (see Section [5.1](#page-5-0) for experiments).
|
|
The recently proposed backpropagating refinement scheme (BRS) [\[15\]](#page-8-0) brings together optimization-based and deep learning-based approaches to interactive segmentation. BRS enforces the consistency of the resulting object mask with user-provided clicks. The effect of BRS relies on the fact that small perturbations of the inputs of a deep network can cause massive changes in its output [\[31\]](#page-9-2). However, BRS requires running forward and backward passes multiple times through the whole model, which substantially increases the computational budget per click compared to other methods and is not practical for many end-user scenarios.
|
|
In this work we propose *f-BRS (feature backpropagating refinement scheme)*, which reparameterizes the optimization problem and thus requires running forward and backward passes only through a small part of the network (*i.e.* the last several layers). Straightforward optimization over the activations of a small sub-network would not lead to the desired effect, because the receptive field of the convolutions in the last layers relative to the output is too small. Thus we introduce a set of auxiliary parameters for optimization that are invariant to the position in the image. We show that optimization with respect to these parameters leads to a similar effect as the original BRS, without the need to compute a backward pass through the whole network.


<span id="page-1-0"></span>


Figure 1. Results of interactive segmentation on an image from the DAVIS dataset. First row: using the proposed f-BRS-B (Section [3\)](#page-2-0); second row: without BRS. Green dots denote positive clicks, red dots denote negative clicks.
|
|
We perform experiments on standard datasets: GrabCut [\[27\]](#page-9-0), Berkeley [\[24\]](#page-8-13), DAVIS [\[26\]](#page-8-14) and SBD [\[13\]](#page-8-6), and show state-of-the-art results, improving over existing approaches in both speed and accuracy.
|
|
| # Method |
|
|
Adversarial examples generation. Szegedy *et al*. [\[31\]](#page-9-2) formulate an optimization problem for generating adversarial examples for the image classification task. They find images that are visually indistinguishable from natural ones, yet are incorrectly classified by the network. Let $\mathcal{L}$ denote a continuous loss function that penalizes incorrect classification of an image. For a given image $x \in \mathbb{R}^m$ and target label $l \in \{1, \dots, k\}$, they aim to find $x + \Delta x$, the closest image to $x$ that is classified as $l$ by $f$. For that they solve the following optimization problem:
|
|
| <span id="page-2-1"></span> |
$$||\Delta x||_2 \to \min_{\Delta x} \quad \text{subject to} \quad f(x + \Delta x) = l, \quad x + \Delta x \in [0, 1]^m \quad (1)$$
| |
The problem in [\(1\)](#page-2-1) is reduced to the minimisation of the following energy function:
| |
| <span id="page-2-2"></span> |
$$\lambda ||\Delta x||_2 + \mathcal{L}(f(x + \Delta x), l) \to \min_{\Delta x} \quad (2)$$
| |
In later works, $\lambda$ is usually assumed to be a constant and serves as a trade-off between the two energy terms.
| |
Backpropagating refinement scheme for interactive segmentation. Jang *et al*. [\[15\]](#page-8-0) propose a backpropagating refinement scheme that applies a similar optimization technique to the problem of interactive image segmentation. In their work, the network takes as input an image stacked together with distance maps for the user-provided clicks. They find minimal edits to the distance maps that make the resulting object mask consistent with the user-provided annotation. For that, they minimise the sum of two energy functions: corrective energy and inertial energy. The corrective energy enforces consistency of the resulting mask with the user-provided annotation, while the inertial energy prevents excessive perturbations in the network inputs.
| |
<span id="page-3-2"></span>Let us denote the coordinates of a user-provided click by $(u, v)$ and its label (positive or negative) by $l \in \{0, 1\}$. Let us further denote the output of a network $f$ for an image $x$ at position $(u, v)$ as $f(x)_{u,v}$ and the set of all user-provided clicks as $\{(u_i, v_i, l_i)\}_{i=1}^n$. The optimization problem in [\[15\]](#page-8-0) is formulated as follows:
|
|
| <span id="page-3-0"></span> |
| $$\lambda ||\Delta x||_2 + \sum_{i=1}^n \left( f(x + \Delta x)_{u_i, v_i} - l_i \right)^2 \to \min_{\Delta x}, \quad (3)$$ |
| |
where the first term represents the inertial energy, the second term represents the corrective energy, and $\lambda$ is a constant that regulates the trade-off between them. This optimization problem resembles (2), with the classification loss for one particular label replaced by a sum of losses over the labels of all user-provided clicks. Here we do not need to ensure that the result of the optimization is a valid image, so the energy (3) can be minimised by unconstrained L-BFGS.
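A minimal sketch of minimising (3) with L-BFGS, assuming a toy linear "network" (a Gaussian blur) in place of a real segmentation model; the squared-norm inertial term and all shapes are illustrative. Because a blur with a symmetric kernel and circular padding is self-adjoint, the "backward pass" for the corrective term is just another filtering.

```python
# BRS-style refinement on a toy model: find a perturbation dx of the
# input so that the output matches the click labels at the click points.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

H = W = 16
sigma = 2.0
rng = np.random.default_rng(0)
x = rng.random((H, W))                              # toy input map
f = lambda img: gaussian_filter(img, sigma, mode="wrap")  # toy "network"
clicks = [(4, 4, 1.0), (12, 12, 0.0)]               # (u, v, label)
lam = 1e-4

def energy_and_grad(dx_flat):
    dx = dx_flat.reshape(H, W)
    out = f(x + dx)
    resid = np.zeros((H, W))
    e = lam * dx_flat @ dx_flat                     # inertial term
    for u, v, l in clicks:
        e += (out[u, v] - l) ** 2                   # corrective term
        resid[u, v] = 2.0 * (out[u, v] - l)
    # the blur is its own adjoint here, so backprop is another filtering
    grad = 2.0 * lam * dx + gaussian_filter(resid, sigma, mode="wrap")
    return e, grad.ravel()

res = minimize(energy_and_grad, np.zeros(H * W), jac=True,
               method="L-BFGS-B")
out = f(x + res.x.reshape(H, W))  # output now matches the clicks
```

With a real network the same loop requires a full backward pass per L-BFGS evaluation, which is exactly the cost that f-BRS is designed to avoid.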
| |
The main drawback of this approach is that L-BFGS requires gradients with respect to the network inputs, *i.e.* backpropagation through the whole network. This is computationally expensive and results in significant overhead per click.
| |
We also note that since the first layer of a network f is a convolution, *i.e.* a linear combination of its inputs, one can minimise the energy (3) with respect to the input image instead of the distance maps and obtain an equivalent solution. Moreover, if we minimise it with respect to the RGB image, which does not change with new interactive input, we can reuse the result as an initialisation for the optimization of (3) when new clicks arrive. Thus, we set BRS with respect to the input image as a baseline in our experiments and denote it RGB-BRS. For a fair comparison, we also implement the optimization with respect to the input distance maps (DistMap-BRS) that was originally introduced in [15].
| |
In order to speed up the optimization process, we want to compute backpropagation not through the whole network, but only through part of it. This can be achieved by optimizing some intermediate parameters of the network instead of its inputs. A naive approach would be to optimize the outputs of one of the last layers, thus computing backpropagation only through the head of the network. However, such an approach would not lead to the desired result: the convolutions in the last layers have a very small receptive field with respect to the network outputs, so the optimization target can be achieved by changing just a few components of the feature tensor, which causes only minor localized changes around the clicked points in the resulting object mask.
| |
Let us reparameterize the function $f$ and introduce auxiliary variables for optimization. Let $\hat{f}(x, z)$ denote a function that depends both on the input $x$ and on the introduced variables $z$. With the auxiliary parameters fixed at $z = p$, the reparameterized function is equivalent to the original one: $\hat{f}(x, p) \equiv f(x)$. Thus, we aim to find a small $\Delta p$ that brings the values of $\hat{f}(x, p + \Delta p)$ at the clicked points close to the user-provided labels. We formulate the optimization problem as follows:
| |
| $$\lambda ||\Delta p||_2 + \sum_{i=1}^n \left( \hat{f}(x, p + \Delta p)_{u_i, v_i} - l_i \right)^2 \to \min_{\Delta p}. \quad (4)$$ |
|
|
We call this optimization task *f-BRS* (feature backpropagating refinement scheme) and use the unconstrained L-BFGS optimizer for minimization. For f-BRS to be efficient, the chosen reparameterization must a) not have a localized effect on the outputs, and b) not require a backward pass through the whole network for optimization.
|
|
One option for such a reparameterization is channel-wise scale and bias for the activations of the last layers in the network. Scale and bias are invariant to the position in the image, so changes in these parameters affect the results globally. In contrast to optimization with respect to activations, optimization with respect to scale and bias cannot result in degenerate solutions (*i.e.* minor localized changes around the clicked points).
|
|
Let us denote the output of some intermediate layer of the network for an image $x$ by $F(x)$, the number of its channels by $h$, and the function implemented by the network head by $g$, so that $f(x) \equiv g(F(x))$. Then the reparameterized function $\hat{f}$ looks as follows:
|
|
| $$\hat{f}(x,s,b) = g(s \cdot F(x) + b), \tag{5}$$ |
|
|
where $b \in \mathbb{R}^h$ is a vector of biases, $s \in \mathbb{R}^h$ is a vector of scaling coefficients and $\cdot$ denotes channel-wise multiplication. For $s = \mathbf{1}$ and $b = \mathbf{0}$ we have $\hat{f}(x, s, b) \equiv f(x)$, so we take these values as the initial point for optimization.
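The reparameterization (5) and the optimization (4) can be sketched together on a toy model. Here the "backbone" features and the 1×1-conv "head" are our own illustrative assumptions; the point is that only the $2h$ scale/bias offsets are optimized, so no backward pass through the backbone is needed.

```python
# f-BRS sketch: optimize channel-wise scale and bias (Eq. 5) so that the
# head output matches the click labels (Eq. 4), leaving features fixed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
feats = rng.random((C, H, W))      # F(x): features from a frozen backbone
w_head = rng.standard_normal(C)    # toy 1x1-conv head: g(F) = sum_c w_c F_c

def f_hat(s, b):
    # Eq. (5): channel-wise scale and bias, followed by the head g
    scaled = s[:, None, None] * feats + b[:, None, None]
    return np.tensordot(w_head, scaled, axes=1)   # (H, W) score map

baseline = f_hat(np.ones(C), np.zeros(C))  # s=1, b=0 recovers f(x)

clicks = [(2, 2, 1.0), (6, 6, 0.0)]        # (u, v, label)
lam = 1e-4

def energy(p):
    # Eq. (4): p holds the 2*C offsets (delta-s, delta-b)
    s, b = 1.0 + p[:C], p[C:]
    out = f_hat(s, b)
    return lam * p @ p + sum((out[u, v] - l) ** 2 for u, v, l in clicks)

res = minimize(energy, np.zeros(2 * C), method="L-BFGS-B")
refined = f_hat(1.0 + res.x[:C], res.x[C:])
```

Only `2 * C` variables are optimized, and because scale and bias act on every spatial position, the refinement changes the score map globally rather than only at the clicked pixels.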
|
|
By varying the part of the network to which the auxiliary scale and bias are applied, we obtain a natural trade-off between accuracy and speed. Figure 2 shows the architecture of the network used in this work and illustrates the different options for optimization. Surprisingly, we found that applying f-BRS to the last several layers causes only a small drop in accuracy compared to full-network BRS, while leading to a significant speed-up.
|
|
Previous works on interactive segmentation often use inference on image crops to achieve a speed-up and preserve fine details in the segmentation mask. Cropping helps to infer the masks of small objects, but it may degrade results when the object of interest is too large to fit into one crop.
|
|
In this work, we use an alternative technique (which we call *Zoom-In*); it is quite simple but improves both the quality and the speed of interactive segmentation. It is based on ideas from object detection [21, 7]. We have not found any mention of this exact technique in the literature in the context of interactive segmentation, so we describe it below.


<span id="page-4-2"></span>


<span id="page-4-0"></span>Figure 3. Example of applying the zoom-in technique described in Section [4.](#page-3-1) See how cropping an image allows recovering fine details in the segmentation mask.
|
|
We noticed that in most cases the first 1-3 clicks are enough for the network to achieve around 80% IoU with the ground-truth mask. This allows us to obtain a rough crop around the region of interest. Therefore, starting from the third click, we crop the image according to the bounding box of the inferred object mask and apply interactive segmentation only to this zoom-in area. We extend the bounding box by 40% along each side in order to preserve context and not miss fine details at the object boundary. If the user provides a click outside the bounding box, we expand or narrow the zoom-in area accordingly. We then resize the crop so that its longest side is 400 pixels. Figure [3](#page-4-0) shows an example of Zoom-In.
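The crop computation above can be sketched as follows. The helper `zoom_in_roi` is hypothetical, and its exact rounding and clamping are our own assumptions; the text only specifies the 40% per-side expansion and the 400-pixel target for the longest side.

```python
# Hypothetical Zoom-In helper: take the bounding box of the predicted
# mask, expand it by 40% per side, clamp to the image, and compute the
# scale factor that maps the longest side to 400 pixels.
import numpy as np

def zoom_in_roi(mask, expand=0.4, target_size=400):
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    dy, dx = (y1 - y0) * expand, (x1 - x0) * expand
    h, w = mask.shape
    # expand by 40% per side, then clamp to the image bounds
    y0, y1 = max(0, int(y0 - dy)), min(h - 1, int(y1 + dy))
    x0, x1 = max(0, int(x0 - dx)), min(w - 1, int(x1 + dx))
    # scale so that the longest side of the crop becomes target_size
    scale = target_size / max(y1 - y0 + 1, x1 - x0 + 1)
    return (y0, x0, y1, x1), scale

mask = np.zeros((600, 800), dtype=bool)
mask[200:300, 300:500] = True   # a 100x200 predicted object mask
roi, scale = zoom_in_roi(mask)  # expanded crop and resize factor
```

Interactive segmentation is then run on the resized crop, and the predicted mask is pasted back into the full-resolution image.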
|
|
This technique helps the network predict more accurate masks for small objects. In our experiments, Zoom-In consistently improved the results, so we use it by default in all experiments in this work. Table [1](#page-5-1) shows a quantitative comparison of results with and without Zoom-In on the GrabCut and Berkeley datasets.
|
|