diff --git a/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_content_list.json b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7ae4a3737e1187cb69cea5ec9209983a3dd27cee
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d4449d20c8d0b8d58767f389381c5d40d36ea18b3ffa1f3e02a4ddae2968245
+size 77825
diff --git a/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_model.json b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..54cd04cc687b6cd13871e8f0bf774cb166898285
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8e62af56fe10dc54f2fb41376de6bb2e56648c02d2b87d71a8b7face1c9e00d
+size 96791
diff --git a/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_origin.pdf b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6073460f01c39f7f08ea487015ef0260bc60f123
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/3bbe7823-fe23-4bbf-8d5b-8f7efaef12cc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c950b41d0e264138b6debf6384f4c2a0ea8d655921c182eb00dd819df8173e11
+size 10704015
diff --git a/actionlocalizationthroughcontinualpredictivelearning/full.md b/actionlocalizationthroughcontinualpredictivelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c3ca6cb0e559d80d32822f0a96c9c33f2e311a98
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/full.md
@@ -0,0 +1,289 @@
+# Action Localization through Continual Predictive Learning
+
+Sathyanarayanan Aakur $^{1}$ [0000-0003-1062-8929] and Sudeep Sarkar $^{2}$ [0000-0001-7332-4207]
+
+1 Oklahoma State University, Stillwater, OK 74074 saakur@okstate.edu
+2 University of South Florida, Tampa, FL, 33620 sarkar@usf.edu
+
+Abstract. The problem of action localization involves locating the action in the video, both over time and spatially in the image. The current dominant approaches use supervised learning to solve this problem and require large amounts of annotated training data, in the form of frame-level bounding box annotations around the region of interest. In this paper, we present a new approach based on continual learning that uses feature-level predictions for self-supervision. It does not require any training annotations in terms of frame-level bounding boxes. The approach is inspired by cognitive models of visual event perception that propose a prediction-based approach to event understanding. We use a stack of LSTMs coupled with a CNN encoder, along with novel attention mechanisms, to model the events in the video and use this model to predict high-level features for future frames. The prediction errors are used to learn the parameters of the model continuously. This self-supervised framework is not as complicated as other approaches but is very effective in learning robust visual representations for both labeling and localization. It should be noted that the approach operates in a streaming fashion, requiring only a single pass through the video, making it amenable to real-time processing. We demonstrate this on three datasets - UCF Sports, JHMDB, and THUMOS'13 - and show that the proposed approach outperforms weakly-supervised and unsupervised baselines and obtains competitive performance compared to fully supervised baselines. Finally, we show that the proposed framework can generalize to egocentric videos and achieve state-of-the-art results on the unsupervised gaze prediction task. Code is available on the project page.
+
+Keywords: Action localization, continuous learning, self-supervision
+
+# 1 Introduction
+
+We develop a framework for jointly learning spatial and temporal localization through continual, self-supervised learning, in a streaming fashion, requiring only a single pass through the video. Visual understanding tasks in computer vision have focused on the problems of recognition [1, 3, 23, 25] and captioning [1, 9, 46, 47], with the underlying assumption that each input video is already localized both spatially and temporally. While there has been tremendous progress in action localization, it has primarily been driven by dependence on large amounts of tedious spatial-temporal annotations. In this work, we aim to tackle the problem of spatial-temporal segmentation of streaming videos in a continual, self-supervised manner, without any training annotations.
+
+
+Fig. 1: The Proposed Approach has four core components: (i) feature extraction and spatial region proposal, (ii) a future prediction framework, (iii) a spatial-temporal error detection module and (iv) the error-based action localization process.
+
+Drawing inspiration from psychology [13, 14, 52], we consider the underlying mechanism for both event understanding and attention selection in humans to be the idea of predictability. Under the surprise-attention hypothesis [13], unpredictable factors such as large changes in motion, appearance, or goals of the actor have a substantial effect on event perception and human attention. Human event perception studies [2, 52] have shown that longer-term, temporal surprise has a strong correlation with event boundary detection. In contrast, short-term spatial surprise (such as that caused by motion) has a more substantial effect on human attention and localization [14]. Our approach combines both spatial and temporal surprise to formulate a computational framework that tackles the problem of self-supervised action localization in streaming videos in a continual manner.
+
+We formulate our computational framework on the idea of spatial-temporal feature anticipation to model the predictability of perceptual features. The main assumption in our framework is that unexpected, unpredictable features require attention and often point to the actor performing the action of interest. In contrast, predictable features can belong to background clutter and are not relevant to the action of interest. It is to be noted that unpredictability or surprise is not the same as rarity. It refers to short-term changes that aid in the completion of an overall task, which can be recurring [13]. We model the perceptual features using a hierarchical, cyclical, and recurrent framework, whose predictions are influenced by current and prior observations as well as current perceptual predictions. Hence, the predictive model's output can influence the perception of the current frame being observed. The predictions are constantly compared with the incoming observations to provide self-supervision that guides future predictions.
+
+We leverage these characteristics to derive and quantify spatial-temporal predictability. Our framework performs continuous learning to generate "attention maps" that overlap with the action being performed. Using these attention maps, we leverage advances in region proposals [29, 31, 44, 54] to localize actions in streaming videos without any supervision. Contrary to other attention-based approaches [5, 28, 33], we do not use the object-level characteristics such as label, role, and affordance in the proposal generation process.
+
+Contributions: The contributions of our approach are three-fold: (i) we are among the first to tackle the problem of self-supervised action localization in streaming videos without any training data such as labels or bounding boxes, (ii) we show that modeling spatial-temporal prediction error can yield consistent localization performance across action classes and (iii) we show that the approach generalizes to egocentric videos and achieves competitive performance on the unsupervised gaze prediction task.
+
+# 2 Related Work
+
+Supervised action localization approaches tackle action localization through the simultaneous generation of bounding box proposals and labeling each bounding box with the predicted action class. Both bounding box generation and labeling are fully supervised, i.e., they require ground truth annotations of both bounding boxes and labels. Typical approaches leverage advances in object detection to include temporal information [7, 16, 18, 36, 37, 40, 43, 50] for proposal generation. The final step typically involves the use of the Viterbi algorithm [7] to link the generated bounding boxes across time.
+
+Weakly-supervised action localization approaches have been explored to reduce the need for extensive annotations [5, 26, 28, 33]. They typically only require video-level labels and rely on object detection-based approaches to generate bounding box proposals. It is to be noted that weakly supervised approaches also use object-level labels and characteristics to guide the bounding box selection process. Some approaches [5] use a similarity-based tracker to connect bounding boxes across time to incorporate temporal consistency.
+
+Unsupervised action localization approaches have not been explored to the same extent as supervised and weakly-supervised approaches. These approaches do not require any supervision - neither labels nor bounding boxes. The two most common approaches generate action proposals using (i) supervoxels [18, 38] and (ii) clustering of motion trajectories [45]. It should be noted that [38] also uses object characteristics to evaluate the "humanness" of each supervoxel to select bounding box proposals. Our approach falls into the class of unsupervised action localization approaches. The most closely related approaches to ours (with respect to architecture and theme) are VideoLSTM [28] and Actor Supervision [5], which use attention in the selection process for generating bounding box proposals, but require video-level labels. We, on the other hand, do not require any labels or bounding box annotations for training.
+
+While fully supervised approaches have more precise localization and achieve better recognition, the required number of annotations is rather large, and the annotation effort does not scale well as the number of classes grows or the number of training videos per class shrinks. While not requiring frame-level annotations, weakly supervised approaches have the underlying assumption that there exists a large, annotated training set that allows for effective detection of all possible actors (both human and non-human) in the set of action classes. Unsupervised approaches, such as ours, do not make any such assumptions but can result in poorer localization performance. We alleviate this to an extent by leveraging advances in region proposal mechanisms and self-learning robust representations for obtaining video-level labels.
+
+# 3 Self-Supervised Action Localization
+
+In this section, we introduce our self-supervised action localization framework, as illustrated in Figure 1. Our approach has four core components: (i) feature extraction and spatial region proposal, (ii) a self-supervised future prediction framework, (iii) a spatial-temporal error detection module, and (iv) the error-based action localization process.
+
+# 3.1 Feature Extraction and Spatial Region Proposal
+
+The first step in our approach is feature extraction and the subsequent per-frame region proposal generation for identifying possible areas of actions and associated objects. Considering the tremendous advances in deep learning architectures for learning robust spatial representations, we use pre-trained convolutional neural networks to extract the spatial features for each frame in the video. We use a region proposal module, based on these spatial features, to predict possible action-agnostic spatial locations. We use class-agnostic proposals (i.e., the object category is ignored, and only feature-based localizations are taken into account) for two primary reasons. First, we do not want to make any assumptions on the actor's characteristics, such as label, role, and affordance. Second, despite significant progress in object detection, there can be many missed detections, especially when the object (or actor) performs actions that can transform their physical appearance. It is to be noted that these considerations can result in a large number of region proposals that require careful and robust selection but can yield higher chances of correct localization.
+
+# 3.2 Self-supervised Future Prediction
+
+The second stage in our proposed framework is the self-supervised future prediction framework. We consider the future prediction module to be a generative model whose output is conditioned on two factors - the current observation and
+
+an internal event model. The current observation $f_{t}^{S}$ is the feature-level encoding of the presently observed frame, $I_{t}$ . We use the same feature encoder as the region proposal module to reduce the approach's memory footprint and complexity. The internal event model is a set of parameters that can effectively capture the spatial-temporal dynamics of the observed event. Formally, we define the predictor model as $P(\hat{f}_{t + 1}^{S}|W_{e},f_{t}^{S})$ , where $W_{e}$ represents the internal event model and $\hat{f}_{t + 1}^{S}$ is the predicted features at time $t + 1$ . Note that features $f_{t}^{S}$ is not a one-dimensional vector, but a tensor (of dimension $w_{f}\times h_{f}\times d_{f}$ ) representing the features at each spatial location.
+
+We model temporal dynamics of the observed event using Long Short Term Memory Networks (LSTMs)[12]. While other approaches [21, 48, 49] can be used for prediction, we consider LSTMs to be more suited for the following reasons. First, we want to model the temporal dynamics across all frames of the observed action (or event). Second, LSTMs can allow for multiple possible futures and hence will not tend to average the outcomes of these possible futures, as can be the case with other prediction models. Third, since we work with error-based localization, using LSTMs can ensure that the learning process propagates the spatial-temporal error across time and can yield progressively better predictions, especially for actions of longer duration. Formally, we can express LSTMs as
+
+$$
+i _ {t} = \sigma \left(W _ {i} x _ {t} + W _ {h i} h _ {t - 1} + b _ {i}\right); \quad f _ {t} = \sigma \left(W _ {f} x _ {t} + W _ {h f} h _ {t - 1} + b _ {f}\right) \tag {1}
+$$
+
+$$
+o _ {t} = \sigma \left(W _ {o} x _ {t} + W _ {h o} h _ {t - 1} + b _ {o}\right); \quad g _ {t} = \phi \left(W _ {g} x _ {t} + W _ {h g} h _ {t - 1} + b _ {g}\right) \tag {2}
+$$
+
+$$
+m _ {t} = f _ {t} \cdot m _ {t - 1} + i _ {t} \cdot g _ {t}; \quad h _ {t} = o _ {t} \cdot \phi (m _ {t}) \tag {3}
+$$
+
+where $x_{t}$ is the input at time $t$ , $\sigma$ is the sigmoid activation function, $(\cdot)$ represents element-wise multiplication, $\phi$ is the hyperbolic tangent function (tanh), and $W_{k}$ and $b_{k}$ represent the trained weights and biases for each of the gates.
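One step of Equations 1-3 can be written out directly; the following is a minimal NumPy sketch (the stacked weight layout and sizes are illustrative choices, not taken from the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, m_prev, W, b):
    """One vanilla LSTM step (Eqs. 1-3). W maps [x_t; h_prev] to the
    four stacked gate pre-activations; b is the stacked bias."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i_t = sigmoid(z[0:d])           # input gate (Eq. 1)
    f_t = sigmoid(z[d:2 * d])       # forget gate (Eq. 1)
    o_t = sigmoid(z[2 * d:3 * d])   # output gate (Eq. 2)
    g_t = np.tanh(z[3 * d:4 * d])   # candidate memory (Eq. 2)
    m_t = f_t * m_prev + i_t * g_t  # memory update (Eq. 3)
    h_t = o_t * np.tanh(m_t)        # hidden state (Eq. 3)
    return h_t, m_t

# Toy usage: input size 8, hidden size 4.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 12)) * 0.1
b = np.zeros(16)
h, m = lstm_step(rng.standard_normal(8), np.zeros(4), np.zeros(4), W, b)
```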
+
+As opposed to [2], who also use an LSTM-based predictor and a decoder network, we use a hierarchical LSTM model (with three LSTM layers) as our event model. This modification allows us to model both spatial and temporal dependencies, since each higher-level LSTM acts as a progressive decoder that refines the temporal dependencies captured by the lower-level LSTMs. The first LSTM captures the spatial dependency that is propagated up the prediction stack. The updated hidden state of the first (bottom) LSTM layer $(h_t^1)$ depends on the current observation $f_t^S$ , the previous hidden state $(h_{t - 1}^{1})$ and memory state $(m_{t - 1}^{1})$ . Each higher-level LSTM at level $l$ takes the output $h_t^{l - 1}$ and memory state $m_t^{l - 1}$ of the LSTM below and can be defined as $(h_t^l,m_t^l) = LSTM(h_{t - 1}^l,h_t^{l - 1},m_t^{l - 1})$ . Note that this differs from a typical hierarchical LSTM model [35] in that the higher LSTMs are driven by the output of the lower-level LSTMs at the current time step, as opposed to that from the previous time step. Collectively, the event model $W_{e}$ is described by the learnable parameters and their respective biases from the hierarchical LSTM stack.
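The key wiring detail - level $l$ consumes the *current-step* output of level $l-1$ - can be sketched as follows. To keep the example short, each level here is a simplified recurrent cell rather than a full LSTM, and all names and sizes are illustrative:

```python
import numpy as np

def cell(x, h_prev, m_prev, Wx, Wh):
    """Simplified stand-in for an LSTM cell: returns (h_t, m_t)."""
    m_t = 0.5 * m_prev + 0.5 * np.tanh(Wx @ x + Wh @ h_prev)
    h_t = np.tanh(m_t)
    return h_t, m_t

def stack_step(f_t, h_prev, m_prev, Wx, Wh):
    """One time step of an L-level stack. Level l consumes the *current*
    output h_t^{l-1} of the level below, not the previous step's."""
    h_t, m_t = [], []
    x = f_t  # the bottom level sees the observed feature f_t^S
    for l in range(len(h_prev)):
        h_l, m_l = cell(x, h_prev[l], m_prev[l], Wx[l], Wh[l])
        h_t.append(h_l)
        m_t.append(m_l)
        x = h_l  # current-step output feeds the next level up
    return h_t, m_t

rng = np.random.default_rng(1)
L, d = 3, 4
Wx = [rng.standard_normal((d, d)) * 0.1 for _ in range(L)]
Wh = [rng.standard_normal((d, d)) * 0.1 for _ in range(L)]
h = [np.zeros(d) for _ in range(L)]
m = [np.zeros(d) for _ in range(L)]
h, m = stack_step(rng.standard_normal(d), h, m, Wx, Wh)
```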
+
+Hence, the top layer of the prediction stack acts as the decoder whose goal is to predict the next feature $f_{t + 1}^{S}$ given all previous predictions $\hat{f}_1^S, \hat{f}_2^S, \dots, \hat{f}_t^S$ , an event model $W_{e}$ and the current observation $f_{t}^{S}$ . We model this prediction function as a log-linear model characterized by
+
+$$
+\log p \left(\hat {f} _ {t + 1} ^ {S} \mid h _ {t} ^ {l}\right) = \sum_ {n = 1} ^ {t} f \left(W _ {e}, f _ {n} ^ {S}\right) + \log Z \left(h _ {t}\right) \tag {4}
+$$
+
+where $h_t^l$ is the hidden state of the $l^{th}$ level LSTM at time $t$ and $Z(h_{t})$ is a normalization constant. The LSTM prediction stack acts as a generative process for anticipating future features.
+
+The objective function for training the predictive stack is the squared error between the predicted and the actually observed features, weighted by the zero-order-hold difference between consecutive observations. The prediction error at time $t$ is given by $E(t) = \frac{1}{n_f} \sum_{i=1}^{w_f} \sum_{j=1}^{h_f} e_{ij}$ , where $n_f = w_f \times h_f$ is the number of spatial locations and
+
+$$
+e _ {i j} = \hat {m} _ {t} (i, j) \odot \| f _ {t + 1} ^ {S} (i, j) - \hat {f} _ {t + 1} ^ {S} (i, j) \| _ {\ell_ {2}} ^ {2} \tag {5}
+$$
+
+Each feature $f_{t}^{S}$ has dimensions $w_{f} \times h_{f} \times d_{f}$ and $\hat{m}_{t}(i,j)$ is a function that returns the zero order difference between the observed features at times $t$ and $t + 1$ at location $(i,j)$ . Note that the prediction is done at the feature level and not at the pixel level, so the spatial quantization is coarser than pixels.
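Equation 5 and the frame-level error $E(t)$ can be sketched in a few lines of NumPy. Here we assume the zero-order-hold weight $\hat{m}_{t}(i,j)$ is the squared $\ell_2$ norm of the frame-to-frame feature difference at $(i,j)$ ; the exact form of the weighting is our assumption, not spelled out in the text:

```python
import numpy as np

def error_map(f_next, f_next_pred, f_curr):
    """Per-location weighted error e_ij (Eq. 5) and frame error E(t).
    All inputs have shape (w_f, h_f, d_f)."""
    # Zero-order-hold weight: how much the feature actually changed.
    m_hat = np.sum((f_next - f_curr) ** 2, axis=-1)
    # Squared l2 prediction error at each spatial location.
    err = np.sum((f_next - f_next_pred) ** 2, axis=-1)
    e = m_hat * err   # weighted error e_ij, shape (w_f, h_f)
    E_t = e.mean()    # average spatial error E(t)
    return e, E_t

rng = np.random.default_rng(2)
f_curr = rng.standard_normal((14, 14, 32))
f_next = f_curr + 0.1 * rng.standard_normal((14, 14, 32))
e, E_t = error_map(f_next, f_next, f_curr)  # a perfect prediction
```

With a perfect prediction the error map is identically zero; any mismatch in regions where the feature actually changed produces a positive, localized error.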
+
+# 3.3 Prediction Error-based Attention Map
+
+At the core of our approach is the idea of spatial-temporal prediction error for localizing the actions of interest in the video. It takes into account the quality of the predictions made and the relative spatial alignment of the prediction errors. The input to the error detection module is the quantity from Equation 5. We compute a weight $\alpha_{ij}$ associated with each spatial location $(i,j)$ in the predicted feature $\hat{f}_{t + 1}^{S}$ as
+
+$$
+\alpha_ {i j} = \frac {\exp \left(e _ {i j}\right)}{\sum_ {m = 1} ^ {w _ {f}} \sum_ {n = 1} ^ {h _ {f}} \exp \left(e _ {m n}\right)} \tag {6}
+$$
+
+where $e_{ij}$ represents the weighted prediction error at location $(i,j)$ (Equation 5). It can be considered a function $a(f_t^S, h_{t-1}^l)$ of the state of the top-most LSTM and the input feature $f_t^S$ at time $t$ . The resulting matrix is an error-based attention map that allows us to localize the prediction error at a specific spatial location. The average spatial error over time, $E(t)$ , is used for temporal localization.
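Equation 6 is simply a (non-parametric) softmax over the spatial error grid; a minimal sketch:

```python
import numpy as np

def attention_map(e):
    """Softmax over all spatial locations of the error map (Eq. 6).
    Subtracting the max is for numerical stability only; the softmax
    is invariant to it."""
    z = np.exp(e - e.max())
    return z / z.sum()

e = np.zeros((14, 14))
e[3, 7] = 5.0               # a burst of prediction error
alpha = attention_map(e)    # attention concentrates at (3, 7)
```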
+
+One may remark that the formulation of $\alpha_{ij}$ is very similar to Bahdanau attention [4]. However, there are two key differences. First, our formulation is not parametrized and does not add to the number of learnable parameters in the framework. Second, our attention map is a characterization of the difficulty in anticipating unpredictable motion. In contrast, Bahdanau attention is an effort to increase the decoder's encoding ability and does not characterize the unpredictability of the future feature. We compare the use of both types of attention in Section 5.4, where we see that error-based localization is more suitable for our application.
+
+# 3.4 Extraction of Action Tubes
+
+The action localization module receives a stream of bounding box proposals and an error-based attention map to select an output tube. The action localization is a selection algorithm that filters all region proposals from Section 3.1 and returns the collection of proposals that have a higher probability of containing the action. We do so by assigning an energy term to each of the bounding box proposals $(\mathcal{B}_{it})$ at time $t$ and choosing the top $k$ bounding boxes with the least energy as our final proposals. The energy of a bounding box $\mathcal{B}_{it}$ is defined as
+
+$$
+E \left(\mathcal {B} _ {i t}\right) = w _ {\alpha} \phi \left(\alpha_ {i j}, \mathcal {B} _ {i t}\right) + w _ {t} \delta \left(\mathcal {B} _ {i t}, \left\{\mathcal {B} _ {j, t - 1} \right\}\right) \tag {7}
+$$
+
+where $\phi(\cdot)$ is a function that returns a value characteristic of the distance between the bounding box center and location of maximum error, $\delta(\cdot)$ is a function that returns the minimum spatial distance between the current bounding box and the closest bounding box from the previous time step. The constants $w_{\alpha}$ and $w_{t}$ are scaling factors. Note that $\delta(\cdot)$ is introduced to enforce temporal consistency in predictions, but we find that it is optional since the LSTM prediction stack implicitly enforces the temporal consistency through its memory states. In our experiments we set $k = 10$ , $w_{\alpha} = 0.75$ .
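The selection step of Equation 7 can be sketched as below. We assume $\phi$ is the Euclidean distance from the box center to the location of maximum error and $\delta$ is the minimum center distance to the previous step's boxes; these are our reading of the text, not the authors' code, and $w_t = 0.25$ is an illustrative value (the paper specifies only $w_{\alpha} = 0.75$ ):

```python
import numpy as np

def select_boxes(boxes, prev_boxes, alpha, k=10, w_a=0.75, w_t=0.25):
    """Rank boxes [(x1, y1, x2, y2), ...] by the energy of Eq. 7 and
    return the k lowest-energy ones."""
    # Location of maximum attention (error) in grid coordinates.
    peak = np.array(np.unravel_index(alpha.argmax(), alpha.shape), float)
    centers = np.array([[(b[0] + b[2]) / 2, (b[1] + b[3]) / 2] for b in boxes])
    phi = np.linalg.norm(centers - peak, axis=1)   # distance to error peak
    if prev_boxes:
        prev_c = np.array([[(b[0] + b[2]) / 2, (b[1] + b[3]) / 2]
                           for b in prev_boxes])
        delta = np.array([np.linalg.norm(prev_c - c, axis=1).min()
                          for c in centers])       # temporal consistency
    else:
        delta = np.zeros(len(boxes))               # delta(.) is optional
    energy = w_a * phi + w_t * delta
    order = np.argsort(energy)
    return [boxes[i] for i in order[:k]]

alpha = np.zeros((14, 14))
alpha[4, 4] = 1.0
boxes = [(3, 3, 5, 5), (10, 10, 13, 13)]
best = select_boxes(boxes, [], alpha, k=1)  # box nearest the error peak
```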
+
+# 3.5 Implementation Details
+
+In our experiments, we use a VGG-16 [34] network pre-trained on ImageNet as our feature extraction network. We use the output of the last convolutional layer before the fully connected layers as our spatial features. Hence, the dimensions of the spatial features are $w_{f} = 14$ , $h_{f} = 14$ , $d_{f} = 512$ . These output features are then used by an SSD [29] to generate bounding box proposals. Note that we take the generated bounding box proposals without taking into account classes and associated probabilities. We use a three-layer hierarchical LSTM model with a hidden state size of 512 as our predictor module. We use the vanilla LSTM as proposed in [12]. Video-level features are obtained by max-pooling the element-wise dot-product of the hidden state of the top-most LSTM and the attention values across time. We train with the adaptive learning mechanism proposed in [2], with the initial learning rate set to $1 \times 10^{-8}$ and scaling factors $\Delta_{t}^{-}$ and $\Delta_{t}^{+}$ set to $1 \times 10^{-2}$ and $1 \times 10^{-3}$ , respectively. The network was trained for 1 epoch on a computer with one Titan X Pascal GPU.
+
+# 4 Experimental Setup
+
+# 4.1 Data
+
+We evaluate the proposed approach on the action localization task using three publicly available datasets.
+
+UCF Sports [32] is an action localization dataset consisting of 10 classes of sports actions, such as skating and lifting, collected from sports broadcasts. It is an interesting dataset since it has a high concentration of distinct scenes and motions that make it challenging for localization and recognition. We use the splits (103 training and 47 testing videos) defined in [26] for evaluation.
+
+JHMDB [19] is composed of 21 action classes and 928 trimmed videos. All videos are annotated with human-joints for every frame. The ground truth bounding box for the action localization task is chosen such that the box encompasses all the joints. This dataset offers several challenges, such as increasing amounts of background clutter, high inter-class similarity, complex motion (including camera motion), and occluded objects of interest. We report all results as the average across all three splits.
+
+THUMOS'13 [22] is a subset of the UCF-101 [39] dataset, consisting of 24 classes and 3,207 videos. Ground truth bounding boxes are provided for each of the classes for the action localization task. It is also known as the UCF-101-24 dataset. Following prior works [28,38], we perform our experiments and report results on the first split.
+
+We also evaluate the proposed approach's generalization ability on egocentric videos by evaluating it on the unsupervised gaze prediction task. There is evidence from cognitive psychology of a strong correlation between gaze points and action localization [41]. Hence, the gaze prediction task is a reasonable measure of generalization to action localization in egocentric videos. We evaluate the performance on the GTEA Gaze [6] dataset, which consists of 17 sequences of tasks performed by 14 subjects, with each sequence lasting about 4 minutes. We use the official splits for the GTEA Gaze dataset as defined in prior works [6].
+
+# 4.2 Metrics and Baselines
+
+For the action localization task, we follow prior works [28,38] and report the mean average precision (mAP) at various overlap thresholds, obtained by computing the Intersection Over Union (IoU) of the predicted and ground truth bounding boxes. We also evaluate the quality of bounding box proposals by measuring the average, per-frame IoU, and the bounding box recall at varying overlap ratios.
+
+Since ours is an unsupervised approach, we obtain class labels by clustering the learned representations using the $k$ -means algorithm. While more complicated clustering may yield better recognition results [38], the $k$ -means approach allows us to evaluate the robustness of learned features. We evaluate our approach in two settings $K_{gt}$ and $K_{opt}$ , where the number of clusters is set to the number of ground truth action classes and an optimal number obtained through the elbow method [24], respectively. From our experiments, we observe that $K_{opt}$ is three times the number of ground truth classes, which is not unreasonable and has been a working assumption in other deep learning-based clustering approaches [11]. Clusters are mapped to the ground truth clusters for evaluation using the Hungarian method, as done in prior unsupervised approaches [20, 51].
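The Hungarian mapping of clusters to ground-truth classes can be carried out on the cluster-class contingency matrix; a sketch using SciPy (this is a generic implementation of the technique, not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_clusters(y_pred, y_true):
    """Relabel cluster ids so that total overlap with the ground truth
    is maximized (Hungarian method on the contingency matrix).
    Assumes every cluster id gets an assignment (k_pred <= k_true);
    with over-segmentation, unassigned clusters would need a fallback
    such as a per-cluster majority vote."""
    k_pred, k_true = y_pred.max() + 1, y_true.max() + 1
    C = np.zeros((k_pred, k_true), int)
    for p, t in zip(y_pred, y_true):
        C[p, t] += 1
    rows, cols = linear_sum_assignment(-C)  # negate to maximize overlap
    mapping = dict(zip(rows, cols))
    return np.array([mapping[p] for p in y_pred])

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])  # same clusters, permuted ids
acc = (map_clusters(y_pred, y_true) == y_true).mean()
```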
+
+We also compare against other LSTM and attention-based approaches (Section 5.3) to the action localization problem for evaluating the effectiveness of the proposed training protocol.
+
+For the gaze prediction task, we evaluate the approaches using the Area Under the Curve (AUC), which measures the area under the ROC curve of the saliency map, computed from the true-positive versus false-positive rates under various threshold values. We also report the Average Angular Error (AAE), which measures the angular distance between the predicted and ground truth gaze positions. Since our model's output is a saliency map, AUC is a more appropriate metric than AAE, which requires specific predicted locations.
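One common way to compute a saliency AUC is to treat the saliency value as a score for each pixel and fixated pixels as positives; the ROC AUC then equals the probability that a fixated pixel outscores a non-fixated one (Mann-Whitney formulation). A sketch, noting that exact AUC conventions vary between papers:

```python
import numpy as np

def saliency_auc(saliency, fixation_mask):
    """AUC of a saliency map: probability that a fixated pixel scores
    higher than a non-fixated one (rank-based ROC AUC)."""
    s = saliency.ravel()
    pos = s[fixation_mask.ravel() > 0]
    neg = s[fixation_mask.ravel() == 0]
    # Mann-Whitney U statistic via ranks of the pooled scores.
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1
    r_pos = ranks[: len(pos)].sum()
    return (r_pos - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

sal = np.zeros((8, 8)); sal[2, 2] = 1.0   # saliency peaks at one pixel
fix = np.zeros((8, 8)); fix[2, 2] = 1     # and that pixel is fixated
auc = saliency_auc(sal, fix)              # perfect ranking
```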
+
+# 5 Quantitative Evaluation
+
+In this section, we present the quantitative evaluation of our approach on two different tasks, namely action localization and egocentric gaze prediction. For the action localization task, we evaluate our approach on two aspects - the quality of proposals and spatial-temporal localization.
+
+# 5.1 Quality of Localization Proposals
+
+We first evaluate the quality of our localization proposals by assuming perfect class prediction. This allows us to independently assess the quality of localization performed in a self-supervised manner. We present the results of the evaluation in Table 1 and compare against fully supervised, weakly supervised, and unsupervised baselines. As can be seen, we outperform many supervised and weakly supervised baselines. APT [45] achieves a higher localization score. However, it produces, on average, 1,500 proposals per video, whereas our approach returns approximately 10. A large number of localization proposals per video can lead to higher recall and IoU, but it makes the subsequent labeling of the action per video harder and can affect the ability to generalize across domains. It should also be noted that our approach produces proposals in a streaming fashion, as opposed to many of the other approaches, which produce action tubes based on motion computed across the entire video and are therefore less amenable to real-time action localization in streaming videos.
+
+# 5.2 Spatial-temporal Action Localization
+
+We also evaluate our approach on the spatial-temporal localization task. This evaluation allows us to analyze the robustness of the self-supervised features learned through prediction. We generate video-level class labels through clustering and use the standard evaluation metrics (Section 4.2) to quantify the performance. The AUC curves with respect to varying overlap thresholds are presented in Figure 2. We compare against a mix of supervised, weakly-supervised, and unsupervised baselines on all three datasets.
+
+
+| Supervision | Approach | Average IoU |
+| :--- | :--- | :---: |
+| Full | STPD [42] | 44.6 |
+| | Max Path Search [43] | 54.3 |
+| Weak | Ma et al. [30] | 44.6 |
+| | GBVS [8] | 42.1 |
+| | Soomro et al. [38] | 47.7 |
+| None | Jain et al. [18] | 51.5 |
+| | APT [45] | 63.7 |
+| | Proposed Approach | 55.7 |
+
+Table 1: Comparison with fully supervised, weakly supervised, and unsupervised baselines on class-agnostic action localization on the UCF Sports dataset. We report the average localization accuracy (i.e., average IoU) of each approach.
+
+On the UCF Sports dataset (Figure 2(a)), when we set the number of clusters $k$ to the number of ground truth classes, we outperform all baselines, including several supervised baselines, except for Gkioxari and Malik [7] at higher overlap thresholds ( $\sigma > 0.4$ ). When we allow for some over-segmentation and use the optimal number of clusters, we outperform all baselines up to $\sigma = 0.5$ .
+
+
+Fig. 2: AUC for the action localization task on (a) UCF Sports, (b) JHMDB, and (c) THUMOS'13. We compare against baselines with varying levels of supervision, such as Lan et al. [26], Tian et al. [40], Wang et al. [50], Gkioxari and Malik [7], Jain et al. [18], Soomro et al. [36-38], Hou et al. [16], and VideoLSTM [28].
+
+On the JHMDB dataset (Figure 2(b)), we find that while our approach attains high recall ( $77.8\%$ at $\sigma = 0.5$ ), the large camera motion and intra-class variations have a significant impact on the classification accuracy. Hence, the mAP suffers when we set $k$ to the number of ground truth classes. When we set the number of clusters to the optimal number, we outperform the other baselines at lower thresholds ( $\sigma < 0.5$ ). It should be noted that the other unsupervised baseline (Soomro et al. [38]) uses object detection proposals from a Faster R-CNN backbone to score the "humanness" of a proposal. This assumption biases the approach towards human-centered action localization and affects its ability to generalize to actions with non-human actors. We, on the other hand, do not make any assumptions about the characteristics of the actor, scene, or motion dynamics.
+
+On the THUMOS'13 dataset (Figure 2(c)), we achieve consistent improvements over unsupervised and weakly supervised baselines at $k = k_{gt}$ and achieve state-of-the-art mAP scores when $k = k_{opt}$ . It is interesting to note that we perform competitively (when $k = k_{gt}$ ) with the weakly-supervised, attention-based VideoLSTM [28], which uses a convLSTM for temporal modeling along with a CNN-based spatial attention mechanism. It should be noted that we have a higher recall rate ( $0.47$ at $\sigma = 0.4$ and $0.33$ at $\sigma = 0.5$ ) at higher thresholds than other state-of-the-art approaches on THUMOS'13, which shows the robustness of the error-based localization approach to intra-class variation and occlusion.
+
+Clustering quality. Since there is a significant difference in the mAP score when we set a different number of clusters in k-means, we measured the homogeneity (or purity) of the clustering. The homogeneity score measures the "quality" of the cluster by measuring how well a cluster models a given ground-truth class. Since we allow the over-segmentation of clusters when we set $k$ to the optimal number of clusters, this is an essential measure of feature robustness. Higher homogeneity indicates that intra-class variations are captured since all data points in a given cluster belong to the same ground truth class. We observe an average homogeneity score of $74.56\%$ when $k$ is set to the number of ground truth classes and $78.97\%$ when we use the optimal number of clusters. As can be seen, although we over-segment, each of the clusters typically models a single action class to a high degree of integrity.
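Cluster purity, a simple measure closely related to the homogeneity reported above, can be computed from the same cluster-class counts; a sketch (a generic implementation, not the authors' code):

```python
import numpy as np

def purity(y_pred, y_true):
    """Fraction of points whose cluster's majority ground-truth class
    matches their own class (higher is better)."""
    k_pred, k_true = y_pred.max() + 1, y_true.max() + 1
    C = np.zeros((k_pred, k_true), int)
    for p, t in zip(y_pred, y_true):
        C[p, t] += 1
    # For each cluster, count only its majority class, then normalize.
    return C.max(axis=1).sum() / len(y_true)

y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 2, 2])  # over-segmented clustering
p = purity(y_pred, y_true)
```

Note that over-segmentation is not penalized much by purity, which is exactly why a high score at $k = k_{opt}$ indicates that each cluster models a single action class.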
+
+| Approach | Labels | Boxes | # Proposals | Recall @0.1 | Recall @0.2 | Recall @0.3 | Recall @0.4 | Recall @0.5 | mAP @0.2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ALSTM [33] | ✓ | ✗ | 1 | 0.46 | 0.28 | 0.05 | 0.02 | - | 0.06 |
+| VideoLSTM [28] | ✓ | ✗ | 1 | 0.71 | 0.52 | 0.32 | 0.11 | - | 0.37 |
+| Actor Supervision [5] | ✓ | ✗ | ~1000 | 0.89 | - | - | - | 0.44 | 0.46 |
+| Proposed Approach | ✗ | ✗ | ~10 | 0.84 | 0.72 | 0.58 | 0.47 | 0.33 | 0.59 |
+
+Table 2: Comparison with other LSTM-based and attention-based approaches on the THUMOS'13 dataset. We report average recall at various overlap thresholds, mAP at 0.2 overlap threshold and the average number of proposals per frame.
+
+# 5.3 Comparison with other LSTM-based approaches
+
+We also compare our approach with other LSTM-based and attention-based models to highlight the importance of the proposed self-supervised learning paradigm. Since LSTM-based frameworks can have highly similar architectures, we consider different requirements and characteristics, such as the level of annotation required for training and the number of localization proposals returned per video. We compare with three approaches similar in spirit to ours - ALSTM [33], VideoLSTM [28], and Actor Supervision [5] - and summarize the results in Table 2. It can be seen that we significantly outperform VideoLSTM and ALSTM on the THUMOS'13 dataset in both recall and mAP @ $\sigma = 0.2$. Actor Supervision [5] outperforms our approach on recall, but it is to be noted that its region proposals depend on two factors - (i) object detection-based actor proposals and (ii) a filtering mechanism that limits proposals based on ground-truth action classes - which can increase the training requirements and limit generalizability. Also, note that returning a higher number of localization proposals can increase recall at the cost of generalization.
+
+# 5.4 Ablative Studies
+
+The proposed approach has three major modules that most affect its performance - (i) the region proposal module, (ii) the future prediction module, and (iii) the error-based action localization module. We consider and evaluate several alternatives to all three modules.
+
+We choose selective search [44] and EdgeBox [54] as alternative region proposal methods to SSD. To evaluate the effectiveness of the proposed error-based localization, we use an attention-based localization method as an approximation of ALSTM [33]. We also evaluate a 1-layer LSTM predictor with a fully connected decoder network to approximate [2] on the localization task. Finally, we evaluate the effect of attention-based prediction by introducing a Bahdanau attention layer [4] before the prediction step, as an alternative to the error-based action localization module.
+
+These ablative studies are conducted on the UCF Sports dataset, and the results are plotted in Figure 3(a). It can be seen that prediction error-based localization yields a significant improvement over a trained attention-based localization approach. We can also see that the choice of region proposal method does have some effect on the performance of the approach, with selective search and EdgeBox proposals doing slightly better at higher thresholds ($\sigma \in (0.4, 0.5)$) at the cost of inference time and additional bounding box proposals (50 compared to the 10 from the SSD-based region proposal). Using SSD for generating proposals allows us to share weights across the frame encoder and region proposal tasks and hence reduces the memory and computational footprint of the approach. We also find that using attention as part of the prediction module negatively impacts the architecture's performance. This could, arguably, be attributed to the objective function, which aims to minimize the prediction error; using attention to encode the input could interfere with the prediction function.
+
+# 5.5 Unsupervised Egocentric Gaze Prediction
+
+Finally, we evaluate the ability of the model to generalize to egocentric videos by quantifying its performance on the unsupervised gaze prediction task. Given that we do not need any annotations or other auxiliary data, we employ the same architecture and training strategy for this task. We evaluate on the GTEA gaze dataset and compare with other unsupervised models in Table 3. As can be seen, we obtain competitive results on the gaze prediction task, outperforming all baselines on the AUC score and all but the center-bias heuristic on the AAE score. It is to be noted that we outperform
+
+| Metric | Itti et al. [17] | GBVS [10] | AWS-D [27] | Center Bias | OBDL [15] | Ours |
+| --- | --- | --- | --- | --- | --- | --- |
+| AUC | 0.747 | 0.769 | 0.770 | 0.789 | 0.801 | 0.861 |
+| AAE | 18.4 | 15.3 | 18.2 | 10.2 | 15.6 | 13.6 |
+
+Table 3: Comparison with state-of-the-art on the unsupervised egocentric gaze prediction task on the GTEA dataset.
+
+the center bias method on the AUC metric. Center bias exploits the spatial bias in egocentric images and always predicts the center of the video frame as the gaze position. The significant improvement in the AUC metric indicates that our approach predicts gaze fixations that are more closely aligned with the ground truth than the center bias approach. Given that the model was not designed explicitly for this task, this is remarkable performance, especially compared with fully supervised baselines such as DFG [53], which achieves 88.3 and 10.6 for AUC and AAE, respectively.
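For reference, the AUC metric can be read as a ranking statistic: the probability that a fixated pixel receives a higher saliency score than a random non-fixated pixel. The following is a simplified sketch of that idea (gaze benchmarks use specific protocols such as AUC-Judd; this is only the underlying computation, not the paper's evaluation code):

```python
import numpy as np

def saliency_auc(saliency, fixation_mask):
    # ROC AUC via the Mann-Whitney U statistic: the fraction of
    # (fixated, non-fixated) pixel pairs ranked correctly (ties count 0.5).
    s = np.asarray(saliency, dtype=float).ravel()
    f = np.asarray(fixation_mask, dtype=bool).ravel()
    pos, neg = s[f], s[~f]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

An AUC of 1.0 means every fixated pixel out-scores every non-fixated one; 0.5 is chance level.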
+
+
+Fig. 3: Qualitative analysis of the proposed approach on the UCF Sports dataset: (a) AUC for the ablative variations, (b) class-wise AUC, and (c) class-wise bounding box recall at different overlap thresholds.
+
+# 5.6 Qualitative Evaluation
+
+We find that our approach has a consistently high recall for the localization task across datasets and domains. We consider an action to be correctly localized if the average IoU across all frames is higher than 0.5, which indicates that most, if not all, frames in a video are correctly localized. We illustrate the recall scores and the corresponding AUC scores for each class in the UCF Sports dataset in Figures 3(b) and (c). For many classes (7/10, to be specific), we have more than $80\%$ recall at an overlap threshold of 0.5. We find, through visual inspection, that the spatial-temporal error is often correlated with the actor but is usually not at the center of the region of interest, which reduces the quality of the chosen proposals. We illustrate this effect in Figure 4. The first row shows the input frame, the second shows the error-based attention, and the last row shows the final localization proposals. If more proposals are returned (as is the case with selective search and EdgeBox), we can obtain a higher recall (Figure 3(b)) and a higher mAP.
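The localization criterion above can be sketched as a small helper (illustrative only, not the paper's evaluation code; boxes are assumed to be `(x1, y1, x2, y2)` tuples):

```python
def box_iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def video_localized(pred_boxes, gt_boxes, thresh=0.5):
    # A video counts as correctly localized when the per-frame IoU,
    # averaged across all frames, clears the overlap threshold.
    ious = [box_iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(ious) / len(ious) >= thresh
```

Averaging the IoU over frames, rather than thresholding each frame independently, tolerates a few poorly localized frames as long as the tube as a whole tracks the actor.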
+
+
+Fig. 4: Qualitative Examples: We present the error-based attention location and the final prediction, for both successful and unsuccessful localizations. Green BB: Prediction, Blue BB: Ground truth
+
+# 6 Conclusion
+
+In this work, we introduce a self-supervised approach to action localization, driven by spatial-temporal error localization. We show that self-supervised prediction on video frames can help learn highly robust features and obtain state-of-the-art localization results without any training annotations. We also show that the proposed framework works with a variety of proposal generation methods without losing performance, and that it generalizes to egocentric videos, without any change to the training methodology or the framework, to obtain competitive performance on the unsupervised gaze prediction task.
+
+# Acknowledgement
+
+This research was supported in part by the US National Science Foundation grants CNS 1513126, IIS 1956050, and IIS 1955230.
+
+# References
+
+1. Aakur, S., de Souza, F.D., Sarkar, S.: Going deeper with semantics: Exploiting semantic contextualization for interpretation of human activity in videos. In: IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE (2019)
+2. Aakur, S.N., Sarkar, S.: A perceptual prediction framework for self supervised event segmentation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+3. Aakur, S.N., de Souza, F.D., Sarkar, S.: Towards a knowledge-based approach for generating video descriptions. In: Conference on Computer and Robot Vision (CRV). Springer (2017)
+4. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
+5. Escorcia, V., Dao, C.D., Jain, M., Ghanem, B., Snoek, C.: Guess where? actor-supervision for spatiotemporal action localization. Computer Vision and Image Understanding 192, 102886 (2020)
+6. Fathi, A., Li, Y., Rehg, J.M.: Learning to recognize daily actions using gaze. In: European Conference on Computer Vision. pp. 314-327. Springer (2012)
+7. Gkioxari, G., Malik, J.: Finding action tubes. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 759-768 (2015)
+8. Grundmann, M., Kwatra, V., Han, M., Essa, I.: Efficient hierarchical graph-based video segmentation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 2141-2148. IEEE (2010)
+9. Guo, Z., Gao, L., Song, J., Xu, X., Shao, J., Shen, H.T.: Attention-based LSTM with semantic consistency for videos captioning. In: ACM Conference on Multimedia (ACM MM). pp. 357-361. ACM (2016)
+10. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems. pp. 545-552 (2007)
+11. Hershey, J.R., Chen, Z., Le Roux, J., Watanabe, S.: Deep clustering: Discriminative embeddings for segmentation and separation. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 31-35. IEEE (2016)
+12. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735-1780 (1997)
+13. Horstmann, G., Herwig, A.: Surprise attracts the eyes and binds the gaze. Psychonomic Bulletin & Review 22(3), 743-749 (2015)
+14. Horstmann, G., Herwig, A.: Novelty biases attention and gaze in a surprise trial. Attention, Perception, & Psychophysics 78(1), 69-77 (2016)
+15. Hossein Khatoonabadi, S., Vasconcelos, N., Bajic, I.V., Shan, Y.: How many bits does it take for a stimulus to be salient? In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5501-5510 (2015)
+16. Hou, R., Chen, C., Shah, M.: Tube convolutional neural network (t-cnn) for action detection in videos. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). pp. 5822-5831 (2017)
+17. Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40(10-12), 1489-1506 (2000)
+18. Jain, M., Van Gemert, J., Jégou, H., Bouthemy, P., Snoek, C.G.: Action localization with tubelets from motion. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 740-747 (2014)
+
+19. Jhuang, H., Gall, J., Zuffi, S., Schmid, C., Black, M.J.: Towards understanding action recognition. In: Proceedings of the IEEE international conference on computer vision. pp. 3192-3199 (2013)
+20. Ji, X., Henriques, J.F., Vedaldi, A.: Invariant information clustering for unsupervised image classification and segmentation. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9865-9874 (2019)
+21. Jia, X., De Brabandere, B., Tuytelaars, T., Gool, L.V.: Dynamic filter networks. In: Neural Information Processing Systems. pp. 667-675 (2016)
+22. Jiang, Y.G., Liu, J., Zamir, A.R., Toderici, G., Laptev, I., Shah, M., Sukthankar, R.: Thumos challenge: Action recognition with a large number of classes (2014)
+23. Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1725-1732 (2014)
+24. Kodinariya, T.M., Makwana, P.R.: Review on determining number of cluster in k-means clustering. International Journal 1(6), 90-95 (2013)
+25. Kuehne, H., Arslan, A., Serre, T.: The language of actions: Recovering the syntax and semantics of goal-directed human activities. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 780-787 (2014)
+26. Lan, T., Wang, Y., Mori, G.: Discriminative figure-centric models for joint action localization and recognition. In: 2011 International conference on computer vision. pp. 2003-2010. IEEE (2011)
+27. Leboran, V., Garcia-Diaz, A., Fdez-Vidal, X.R., Pardo, X.M.: Dynamic whitening saliency. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(5), 893-907 (2016)
+28. Li, Z., Gavrilyuk, K., Gavves, E., Jain, M., Snoek, C.G.: Videolstm convolves, attends and flows for action recognition. Computer Vision and Image Understanding 166, 41-50 (2018)
+29. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision. pp. 21-37. Springer (2016)
+30. Ma, S., Zhang, J., Ikizler-Cinbis, N., Sclaroff, S.: Action recognition and localization by hierarchical space-time segments. In: Proceedings of the IEEE international conference on computer vision. pp. 2744-2751 (2013)
+31. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 779-788 (2016)
+32. Rodriguez, M.D., Ahmed, J., Shah, M.: Action mach a spatio-temporal maximum average correlation height filter for action recognition. In: 2008 IEEE conference on computer vision and pattern recognition. pp. 1-8. IEEE (2008)
+33. Sharma, S., Kiros, R., Salakhutdinov, R.: Action recognition using visual attention. In: Neural Information Processing Systems: Time Series Workshop (2015)
+34. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+35. Song, J., Gao, L., Guo, Z., Liu, W., Zhang, D., Shen, H.T.: Hierarchical LSTM with adjusted temporal attention for video captioning. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence. pp. 2737-2743. AAAI Press (2017)
+36. Soomro, K., Idrees, H., Shah, M.: Action localization in videos through context walk. In: Proceedings of the IEEE international conference on computer vision. pp. 3280-3288 (2015)
+
+37. Soomro, K., Idrees, H., Shah, M.: Predicting the where and what of actors and actions through online action localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2648-2657 (2016)
+38. Soomro, K., Shah, M.: Unsupervised action discovery and localization in videos. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 696-705 (2017)
+39. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
+40. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2642-2649 (2013)
+41. Tipper, S.P., Lortie, C., Baylis, G.C.: Selective reaching: Evidence for action-centered attention. Journal of Experimental Psychology: Human Perception and Performance 18(4), 891 (1992)
+42. Tran, D., Yuan, J.: Optimal spatio-temporal path discovery for video event detection. In: CVPR 2011. pp. 3321-3328. IEEE (2011)
+43. Tran, D., Yuan, J.: Max-margin structured output regression for spatio-temporal action localization. In: Advances in neural information processing systems. pp. 350-358 (2012)
+44. Uijlings, J.R., Van De Sande, K.E., Gevers, T., Smeulders, A.W.: Selective search for object recognition. International Journal of Computer Vision (IJCV) 104(2), 154-171 (2013)
+45. Van Gemert, J.C., Jain, M., Gati, E., Snoek, C.G., et al.: Apt: Action localization proposals from dense trajectories. In: BMVC. vol. 2, p. 4 (2015)
+46. Venugopalan, S., Rohrbach, M., Donahue, J., Mooney, R., Darrell, T., Saenko, K.: Sequence to sequence-video to text. In: IEEE International Conference on Computer Vision (ICCV). pp. 4534-4542 (2015)
+47. Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., Saenko, K.: Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014)
+48. Vondrick, C., Pirsiavash, H., Torralba, A.: Anticipating visual representations from unlabeled video. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 98-106 (2016)
+49. Vondrick, C., Torralba, A.: Generating the future with adversarial transformers. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1020-1028 (2017)
+50. Wang, L., Qiao, Y., Tang, X.: Video action detection with relational dynamicposelets. In: European conference on computer vision. pp. 565-580. Springer (2014)
+51. Xie, J., Girshick, R., Farhadi, A.: Unsupervised deep embedding for clustering analysis. In: International Conference on Machine Learning (ICML). pp. 478-487 (2016)
+52. Zacks, J.M., Tversky, B., Iyer, G.: Perceiving, remembering, and communicating structure in events. Journal of Experimental Psychology: General 130(1), 29 (2001)
+53. Zhang, M., Teck Ma, K., Hwee Lim, J., Zhao, Q., Feng, J.: Deep future gaze: Gaze anticipation on egocentric videos using adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4372-4381 (2017)
+54. Zhu, G., Porikli, F., Li, H.: Tracking randomly moving objects on edge box proposals. arXiv preprint arXiv:1507.08085 (2015)
\ No newline at end of file
diff --git a/actionlocalizationthroughcontinualpredictivelearning/images.zip b/actionlocalizationthroughcontinualpredictivelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..80d78275b36ccde873dde3aa3f672d60df84c2fd
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a1a330d9ef8b96642e0bc52353eeb590d452a943f09488886687358442dbc65
+size 422331
diff --git a/actionlocalizationthroughcontinualpredictivelearning/layout.json b/actionlocalizationthroughcontinualpredictivelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..45d6a8aeae21d350c926cd3573e392f317797fd1
--- /dev/null
+++ b/actionlocalizationthroughcontinualpredictivelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c1a69b2a15e5379414dfbd3db13791b4aa2baf9810f1126c9be651eeed4cbf0
+size 389725
diff --git a/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_content_list.json b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c74bb186b83e2983dc51541fc1ed549e620e8a5b
--- /dev/null
+++ b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1fd2ef48ec4fb828011e50338b749045ccd4a2a7d935afe12fe3b4a6c7a0903a
+size 77721
diff --git a/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_model.json b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c9fa7a2ada944cef711d732b8e2e4347826cae34
--- /dev/null
+++ b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10b6da23b26a7767ff184f377dfb4dce71562c41ef77e8293cd8aff1ec88c849
+size 94173
diff --git a/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_origin.pdf b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b38d5013145042b2b79f9461974a35f9465fe9b1
--- /dev/null
+++ b/actionsasmovingpoints/a3d29da9-cfad-485f-a46c-15611092570e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48c8caf7d8ee45e8de3c04df8b245ef2911d16f56fa8a2bfbc1e0076ac3c4c95
+size 1668003
diff --git a/actionsasmovingpoints/full.md b/actionsasmovingpoints/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8806a61a9d17a4c13b0e2a45901a350341c4bfc
--- /dev/null
+++ b/actionsasmovingpoints/full.md
@@ -0,0 +1,281 @@
+# Actions as Moving Points
+
+Yixuan Li\*, Zixu Wang\*, Limin Wang[0000-0002-3674-7718], and Gangshan Wu
+
+State Key Laboratory for Novel Software Technology, Nanjing University, China {liyixxxuan,zixuwang1997}@gmail.com, {lmwang,gswu}@nju.edu.cn
+
+Abstract. The existing action tubelet detectors often depend on heuristic anchor design and placement, which might be computationally expensive and sub-optimal for precise localization. In this paper, we present a conceptually simple, computationally efficient, and more precise action tubelet detection framework, termed as MovingCenter Detector (MOC-detector), by treating an action instance as a trajectory of moving points. Based on the insight that movement information could simplify and assist action tubelet detection, our MOC-detector is composed of three crucial head branches: (1) Center Branch for instance center detection and action recognition, (2) Movement Branch for movement estimation at adjacent frames to form trajectories of moving points, (3) Box Branch for spatial extent detection by directly regressing bounding box size at each estimated center. These three branches work together to generate the tubelet detection results, which could be further linked to yield video-level tubes with a matching strategy. Our MOC-detector outperforms the existing state-of-the-art methods for both metrics of frame-mAP and video-mAP on the JHMDB and UCF101-24 datasets. The performance gap is more evident for higher video IoU, demonstrating that our MOC-detector is particularly effective for more precise action detection. We provide the code at https://github.com/MCG-NJU/MOC-Detector.
+
+Keywords: Spatio-temporal action detection, anchor-free detection
+
+# 1 Introduction
+
+Spatio-temporal action detection is an important problem in video understanding, which aims to recognize all action instances present in a video and also localize them in both space and time. It has wide applications in many scenarios, such as video surveillance [20,12], video captioning [31,36] and event detection [5]. Some early approaches [8,21,25,32,33,26] apply an action detector at each frame independently and then generate action tubes by linking these frame-wise detection results [8,21,25,32,26] or tracking one detection result [33] across time. These methods fail to well capture temporal information when conducting frame-level detection, and thus are less effective for detecting action tubes in reality. To address this issue, some approaches [24,14,11,35,38,27] try
+
+(c) Move the 'Point' to each frame center. (d) Generate bbox from each center (tubelet detection result).
+
+Fig. 1. Motivation Illustration. We focus on devising an action tubelet detector from a short sequence. Movement information naturally describes human behavior, and each action instance can be viewed as a trajectory of moving points. In this view, the action tubelet detector can be decomposed into three simple steps: (1) localizing the center point (red dots) at the key frame (i.e., the center frame), (2) estimating the movement at each frame with respect to the center point (yellow arrows), and (3) regressing the bounding box size at the calculated center point (green dots) for all frames. Best viewed in color and zoomed in.
+
+to perform action detection at the clip-level by exploiting short-term temporal information. In this sense, these methods input a sequence of frames and directly output detected tubelets (i.e., a short sequence of bounding boxes). This tubelet detection scheme yields a more principled and effective solution for video-based action detection and has shown promising results on standard benchmarks.
+
+The existing tubelet detection methods [24,14,11,35,38,27] are closely related with the current mainstream object detectors such as Faster R-CNN [23] or SSD [19], which operate on a huge number of pre-defined anchor boxes. Although these anchor-based object detectors have achieved success in image domains, they still suffer from critical issues such as being sensitive to hyper-parameters (e.g., box size, aspect ratio, and box number) and less efficient due to densely placed bounding boxes. These issues are more serious when adapting the anchor-based detection framework from images to videos. First, the number of possible tubelet anchors would grow dramatically when increasing clip duration, which imposes a great challenge for both training and inference. Second, it is generally required to devise more sophisticated anchor box placement and adjustment to consider the variation along the temporal dimension. In addition, these anchor-based methods directly extend 2D anchors along the temporal dimension which predefine each action instance as a cuboid across space and time. This assumption lacks the flexibility to well capture temporal coherence and correlation of adjacent frame-level bounding boxes.
+
+Inspired by the recent advances in anchor-free object detection [22,15,4,40,30], we present a conceptually simple, computationally efficient, and more precise action tubelet detector in videos, termed as MovingCenter detector (MOC-detector). As shown in Figure 1, our detector presents a new tubelet detection scheme by treating each instance as a trajectory of moving points. In this sense, an action tubelet is represented by its center point in the key frame and the offsets of the other frames with respect to this center point. To determine the tubelet shape, we directly regress the bounding box size along the moving-point trajectory on each frame. Our MOC-detector yields a fully convolutional one-stage tubelet detection scheme, which not only allows for more efficient training and inference but also produces more precise detection results (as demonstrated in our experiments).
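To make the moving-point view concrete, the following hypothetical NumPy sketch decodes one tubelet from the three regressed quantities: a key-frame center, per-frame offsets, and per-frame box sizes (names and shapes are illustrative, not taken from the released code):

```python
import numpy as np

def decode_tubelet(center, offsets, sizes):
    # center: (x, y) at the key frame; offsets: (K, 2) per-frame movement
    # relative to that center; sizes: (K, 2) per-frame box (w, h).
    center = np.asarray(center, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    centers = center[None, :] + offsets      # trajectory of moving points
    half = sizes / 2.0
    # (K, 4) boxes as (x1, y1, x2, y2) around each per-frame center.
    return np.concatenate([centers - half, centers + half], axis=1)
```

Because the per-frame centers are free to drift, the resulting tubelet is not constrained to a space-time cuboid, unlike anchor-cuboid methods.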
+
+Specifically, our MOC-detector decouples the task of tubelet detection into three sub-tasks: center detection, offset estimation, and box regression. First, frames are fed into an efficient 2D backbone network for feature extraction. Then, we devise three separate branches: (1) Center Branch: detecting the action instance center and category; (2) Movement Branch: estimating the offsets of the current frame with respect to its center; (3) Box Branch: predicting the bounding box size at the detected center point of each frame. This design enables the three branches to cooperate with each other to generate the tubelet detection results. Finally, we link these detected action tubelets across frames to yield long-range detection results, following the common practice [14]. We perform experiments on two challenging action tube detection benchmarks, UCF101-24 [28] and JHMDB [13]. Our MOC-detector outperforms the existing state-of-the-art approaches on both frame-mAP and video-mAP on these two datasets, in particular for higher IoU criteria. Moreover, the fully convolutional nature of the MOC-detector yields a high detection efficiency of around 25 FPS.
+
+# 2 Related Work
+
+# 2.1 Object Detection
+
+Anchor-based Object Detectors. Traditional one-stage [19,22,17] and two-stage object detectors [7,10,6,23] heavily relied on predefined anchor boxes. Two-stage object detectors like Faster-RCNN [23] and Cascade-RCNN [1] devised RPN to generate RoIs from a set of anchors in the first stage and handled classification and regression of each RoI in the second stage. By contrast, typical one-stage detectors utilized class-aware anchors and jointly predicted the categories and relative spatial offsets of objects, such as SSD [19], YOLO [22] and RetinaNet [17].
+
+Anchor-free Object Detectors. However, some recent works [30,40,15,4,41] have shown that the performance of anchor-free methods could be competitive with anchor-based detectors and such detectors also get rid of computation-intensive anchors and region-based CNN. CornerNet [15] detected object bounding box as a pair of corners, and grouped them to form the final detection. CenterNet [40] modeled an object as the center point of its bounding box and regressed its width and height to build the final result.
+
+# 2.2 Spatio-temporal Action Detection
+
+Frame-level Detector. Many efforts have been made to extend image object detectors to the task of action detection as frame-level action detectors [8,32,21,25,26,33]. After obtaining the frame-level detections, a linking algorithm is applied to generate the final tubes [8,32,21,25,26], while Weinzaepfel et al. [33] utilized a tracking-by-detection method instead. Although optical flow is used to capture motion information, frame-level detection fails to fully utilize the video's temporal information.
+
+Clip-level Detector. In order to model temporal information for detection, several clip-level approaches, or action tubelet detectors [14,11,35,16,38,27], have been proposed. ACT [14] took a short sequence of frames and output tubelets that were regressed from anchor cuboids. STEP [35] proposed a progressive method that refines the proposals over a few steps to solve the large displacement problem and utilize longer temporal information. Some methods [11,16] first linked frame or tubelet proposals to generate tube proposals and then performed classification.
+
+These approaches are all based on anchor-based object detectors, which might be sensitive to anchor design and computationally costly due to the large number of anchor boxes. Instead, we design an anchor-free action tubelet detector by treating each action instance as a trajectory of moving points. Experimental results demonstrate that our proposed action tubelet detector is effective for spatio-temporal action detection, in particular at high video IoU thresholds.
+
+# 3 Approach
+
+Overview. Action tubelet detection aims at localizing a short sequence of bounding boxes from an input clip and recognizing its action category as well. We present a new tubelet detector, coined as MovingCenter detector (MOC-detector), by viewing an action instance as a trajectory of moving points. As shown in Figure 2, our MOC-detector takes a set of consecutive frames as input and separately feeds them into an efficient 2D backbone to extract frame-level features. Then, we design three head branches to perform tubelet detection in an anchor-free manner. The first branch is the Center Branch, which is defined on the center (key) frame; it localizes the tubelet center and recognizes its action category. The second branch is the Movement Branch, which is defined over all frames; it relates adjacent frames to predict the center movement along the temporal dimension. The estimated movement propagates the center point from the key frame to the other frames to generate a trajectory. The third branch is the Box Branch, which operates on the detected center points of all frames; it determines the spatial extent of the detected action instance at each frame by directly regressing the height and width of the bounding box. These three branches collaborate to yield tubelet detection from a short clip, which is further linked to form action tube detection in a long untrimmed video by following a common linking strategy [14]. We first give a short description of the backbone design and then provide technical details of the three branches and the linking algorithm in the following subsections.
+
+
+Fig. 2. Pipeline of MOC-detector. On the left, we present the overall MOC-detector framework. The red cuboids represent the extracted features, the blue boxes denote the backbone or detection heads, and the gray cuboids are the detection results produced by the Center Branch, the Movement Branch, and the Box Branch. On the right, we show the detailed design of each branch. Each branch consists of a sequence of one $3\times 3$ conv layer, one ReLU layer, and one $1\times 1$ conv layer, presented as yellow cuboids. The convolution parameters listed are input channels, output channels, kernel height, and kernel width.
+
+
+
+Backbone. In our MOC-detector, we input $K$ frames, each with resolution $W \times H$. First, the $K$ frames are fed into a 2D backbone network sequentially to generate a feature volume $\mathbf{f} \in \mathbb{R}^{K \times \frac{W}{R} \times \frac{H}{R} \times B}$, where $R$ is the spatial downsampling ratio and $B$ is the number of channels. To keep the full temporal information for subsequent detection, we do not perform any downsampling over the temporal dimension. Specifically, we choose the DLA-34 [37] architecture as the MOC-detector feature backbone, following CenterNet [40]. This architecture employs an encoder-decoder design to extract features for each frame. The spatial downsampling ratio $R$ is 4 and the channel number $B$ is 64. The extracted features are shared by the three head branches, whose technical details we present next.
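To make the tensor bookkeeping concrete, here is a minimal numpy sketch of the shapes involved, with random arrays standing in for the real DLA-34 features (the names `features` and `stacked` are ours, not from the paper):

```python
import numpy as np

# Shapes from the paper: K input frames, spatial stride R = 4, B = 64 channels.
K, W, H, R, B = 5, 288, 288, 4, 64

# Stand-in for the 2D backbone output: one feature map per frame.
features = np.random.randn(K, W // R, H // R, B).astype(np.float32)

# The Center and Movement branches (Sections 3.1 and 3.2) consume the K
# per-frame maps concatenated along the channel dimension.
stacked = features.transpose(1, 2, 0, 3).reshape(W // R, H // R, K * B)
```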
+
+# 3.1 Center Branch: Detect Center at Key Frame
+
+The Center Branch aims at detecting the action instance center in the key frame (i.e., the center frame) and recognizing its category based on the extracted video features. Temporal information is important for action recognition, so we design a temporal module that estimates the action center and recognizes its class by concatenating multi-frame feature maps along the channel dimension. Specifically, based on the video feature representation $\mathbf{f} \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times (K \times B)}$, we estimate a center heatmap $\hat{L} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times C}$ for the key frame, where $C$ is the number of action classes. The value of $\hat{L}_{x,y,c}$ represents the likelihood of detecting an action instance of class $c$ at location $(x, y)$, with higher values indicating stronger confidence. We employ a standard convolution operation to estimate the center heatmap in a fully convolutional manner.
+
+Training. We train the Center Branch following the common dense prediction setting [15,40]. For the $i^{th}$ action instance, we take the key frame's bounding box center $(x_{c_i}, y_{c_i})$ as its ground truth center in the channel of its action category. We generate the ground truth heatmap $L \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times C}$ using a Gaussian kernel, which produces the soft heatmap ground truth $L_{x,y,c_i} = \exp(-\frac{(x - x_{c_i})^2 + (y - y_{c_i})^2}{2\sigma_p^2})$. For other classes (i.e., $c \neq c_i$), we set $L_{x,y,c} = 0$. The $\sigma_p$ is adaptive to the instance size, and we take the element-wise maximum when two Gaussians of the same category overlap. The training objective is a variant of the focal loss [17]:
+
+$$
+\ell_{\text{center}} = -\frac{1}{n} \sum_{x, y, c} \begin{cases} (1 - \hat{L}_{xyc})^{\alpha} \log(\hat{L}_{xyc}) & \text{if } L_{xyc} = 1 \\ (1 - L_{xyc})^{\beta} (\hat{L}_{xyc})^{\alpha} \log(1 - \hat{L}_{xyc}) & \text{otherwise} \end{cases} \tag{1}
+$$
+
+where $n$ is the number of ground truth instances and $\alpha$ and $\beta$ are hyperparameters of the focal loss [17]. We set $\alpha = 2$ and $\beta = 4$ following [15,40] in our experiments. This focal loss effectively handles the imbalance between the few positive center locations and the many negative ones [17].
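The Gaussian ground truth and Eq. (1) amount to a few lines of numpy. The following is an illustrative re-implementation under our own naming (`gaussian_heatmap`, `center_focal_loss`), not the authors' code, simplified to a single class and a single instance:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma):
    """Soft ground truth L_{x,y} = exp(-((x-xc)^2 + (y-yc)^2) / (2 sigma^2))."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    xc, yc = center
    return np.exp(-((xs - xc) ** 2 + (ys - yc) ** 2) / (2.0 * sigma ** 2))

def center_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """Variant of the focal loss in Eq. (1), for one class and n = 1 instance."""
    pos = gt == 1.0  # only the exact center is a positive location
    pos_loss = ((1 - pred[pos]) ** alpha * np.log(pred[pos] + eps)).sum()
    neg_loss = ((1 - gt[~pos]) ** beta * pred[~pos] ** alpha
                * np.log(1 - pred[~pos] + eps)).sum()
    return -(pos_loss + neg_loss)  # divide by n when there are n instances

gt = gaussian_heatmap((72, 72), center=(36, 36), sigma=3.0)
pred = np.clip(gt * 0.9 + 0.05, 0.0, 1.0)  # a stand-in prediction
loss = center_focal_loss(pred, gt)
```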
+
+Inference. After training, the Center Branch is deployed in tubelet detection to localize action instance centers and recognize their categories. Specifically, for each class independently, we detect all local peaks in the estimated heatmap $\hat{L}$ that are equal to or greater than their 8-connected neighbors, and then keep the top $N$ peaks across all categories as candidate centers with tubelet scores. Following [40], we set $N$ to 100; detailed ablation studies are provided in the supplementary material.
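The peak-extraction step can be emulated with a padded 3×3 maximum instead of the max-pooling trick used in CenterNet-style code; a minimal single-image numpy sketch (function name ours):

```python
import numpy as np

def topk_peaks(heatmap, n=100):
    """Keep local maxima w.r.t. the 8-connected neighborhood, then the top-n scores.

    heatmap: (H, W, C) per-class center likelihoods in [0, 1].
    Returns a list of (score, x, y, class) tuples sorted by descending score.
    """
    h, w, c = heatmap.shape
    padded = np.pad(heatmap, ((1, 1), (1, 1), (0, 0)), constant_values=-np.inf)
    # 3x3 spatial max (including the center itself), computed per class.
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)]).max(axis=0)
    is_peak = heatmap >= neigh  # ">=" keeps plateaus, matching "equal to or greater"
    ys, xs, cs = np.nonzero(is_peak)
    scores = heatmap[ys, xs, cs]
    order = np.argsort(-scores)[:n]
    return [(scores[i], xs[i], ys[i], cs[i]) for i in order]

# Example: a 5x5 single-class heatmap with two peaks.
hm = np.zeros((5, 5, 1), dtype=np.float32)
hm[2, 2, 0], hm[0, 0, 0] = 0.9, 0.5
peaks = topk_peaks(hm, n=2)
```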
+
+# 3.2 Movement Branch: Move Center Temporally
+
+The Movement Branch tries to relate adjacent frames to predict the movement of the action instance center along the temporal dimension. Similar to the Center Branch, the Movement Branch also employs temporal information, regressing the center offsets of each frame with respect to the key frame. Specifically, the Movement Branch takes the stacked feature representation as input and outputs a movement prediction map $\hat{M} \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times (K \times 2)}$, whose $2K$ channels represent center movements from the key frame to each frame in the $X$ and $Y$ directions. Given the key frame center $(\hat{x}_{key}, \hat{y}_{key})$, $\hat{M}_{\hat{x}_{key}, \hat{y}_{key}, 2j:2j+2}$ encodes the center movement at the $j^{th}$ frame.
+
+Training. The ground truth tubelet of the $i^{th}$ action instance is $[(x_{tl}^1, y_{tl}^1, x_{br}^1, y_{br}^1), \ldots, (x_{tl}^j, y_{tl}^j, x_{br}^j, y_{br}^j), \ldots, (x_{tl}^K, y_{tl}^K, x_{br}^K, y_{br}^K)]$, where the subscripts $tl$ and $br$ denote the top-left and bottom-right points of the bounding boxes, respectively. Let $k$ be the key frame index; the $i^{th}$ action instance center at the key frame is then defined as:
+
+$$
+(x_i^{key}, y_i^{key}) = (\lfloor (x_{tl}^k + x_{br}^k) / 2 \rfloor, \lfloor (y_{tl}^k + y_{br}^k) / 2 \rfloor). \tag{2}
+$$
+
+The bounding box center $(x_i^j, y_i^j)$ of the $i^{th}$ instance at the $j^{th}$ frame is computed as:
+
+$$
+\left(x_i^j, y_i^j\right) = \left(\left(x_{tl}^j + x_{br}^j\right) / 2, \left(y_{tl}^j + y_{br}^j\right) / 2\right). \tag{3}
+$$
+
+Then, the ground truth movement of the $i^{th}$ action instance is calculated as follows:
+
+$$
+m_i = (x_i^1 - x_i^{key}, y_i^1 - y_i^{key}, \dots, x_i^K - x_i^{key}, y_i^K - y_i^{key}). \tag{4}
+$$
+
+To train the Movement Branch, we optimize the movement map $\hat{M}$ only at the key frame center location, using the $\ell_1$ loss:
+
+$$
+\ell_{\text{movement}} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{M}_{x_i^{key}, y_i^{key}} - m_i \right|. \tag{5}
+$$
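Equations (2)-(4) amount to a few lines of array arithmetic. A numpy sketch for a single instance (the helper name `gt_movement` is ours):

```python
import numpy as np

def gt_movement(tubelet, key):
    """Ground truth movement vector m_i of Eq. (4) for one tubelet.

    tubelet: (K, 4) array of (x_tl, y_tl, x_br, y_br) per frame.
    key: index of the key frame.
    """
    # Eq. (3): per-frame bounding box centers.
    centers = np.stack([(tubelet[:, 0] + tubelet[:, 2]) / 2.0,
                        (tubelet[:, 1] + tubelet[:, 3]) / 2.0], axis=1)
    # Eq. (2): the key frame center is floored to an integer location.
    key_center = np.floor(centers[key])
    # Eq. (4): interleaved (dx, dy) offsets from the key frame center.
    return (centers - key_center).reshape(-1)

# A 3-frame tubelet whose box slides 2 px right per frame.
tub = np.array([[0, 0, 4, 4], [2, 0, 6, 4], [4, 0, 8, 4]], dtype=np.float32)
m = gt_movement(tub, key=1)
```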
+
+Inference. After the Movement Branch is trained, given the $N$ detected action centers $\{(\hat{x}_i, \hat{y}_i) \mid i \in \{1, 2, \dots, N\}\}$ from the Center Branch, we obtain a set of movement vectors $\{\hat{M}_{\hat{x}_i, \hat{y}_i} \mid i \in \{1, 2, \dots, N\}\}$ for all detected action instances. From the outputs of the Movement Branch and Center Branch, we generate a trajectory set $T = \{T_i \mid i \in \{1, 2, \dots, N\}\}$: for the detected action center $(\hat{x}_i, \hat{y}_i)$, its trajectory of moving points is calculated as:
+
+$$
+T_i = \left(\hat{x}_i, \hat{y}_i\right) + \left[ \hat{M}_{\hat{x}_i, \hat{y}_i, 0:2}, \hat{M}_{\hat{x}_i, \hat{y}_i, 2:4}, \dots, \hat{M}_{\hat{x}_i, \hat{y}_i, 2K-2:2K} \right]. \tag{6}
+$$
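Eq. (6) is a simple gather-and-add at the detected center; a minimal numpy sketch (names ours):

```python
import numpy as np

def trajectory(center, movement_map):
    """Eq. (6): propagate a detected key frame center to all K frames.

    center: (x_hat, y_hat) from the Center Branch.
    movement_map: (H, W, 2K) Movement Branch output M_hat.
    Returns a (K, 2) array of per-frame (x, y) centers.
    """
    x, y = center
    offsets = movement_map[int(y), int(x)].reshape(-1, 2)  # K rows of (dx, dy)
    return np.array([x, y], dtype=np.float32) + offsets

mm = np.zeros((8, 8, 6), dtype=np.float32)  # K = 3 frames
mm[3, 4] = [0, 0, 1, 1, 2, 2]               # offsets stored at the key center
traj = trajectory((4, 3), mm)
```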
+
+# 3.3 Box Branch: Determine Spatial Extent
+
+The Box Branch is the last step of tubelet detection and focuses on determining the spatial extent of the action instance. Unlike the Center Branch and Movement Branch, we assume box detection depends only on the current frame and that temporal information does not benefit class-agnostic bounding box generation (an ablation study is provided in the supplementary material). In this sense, this branch can operate in a frame-wise manner. Specifically, the Box Branch takes a single frame's feature $\mathbf{f}^j \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times B}$ as input and generates a size prediction map $\hat{S}^j \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$ for the $j^{th}$ frame, directly estimating the bounding box size (i.e., width and height). Note that the Box Branch is shared across the $K$ frames.
+
+Training. The ground truth bounding box size of the $i^{th}$ action instance at the $j^{th}$ frame is:
+
+$$
+s_i^j = \left(x_{br}^j - x_{tl}^j, y_{br}^j - y_{tl}^j\right). \tag{7}
+$$
+
+With this ground truth bounding box size, we optimize the Box Branch at the center points of all frames for each tubelet with an $\ell_1$ loss:
+
+$$
+\ell_{\text{box}} = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{K} \left| \hat{S}_{p_i^j}^j - s_i^j \right|. \tag{8}
+$$
+
+Note that $p_i^j$ is the ground truth center of the $i^{th}$ instance at the $j^{th}$ frame. The overall training objective of our MOC-detector is
+
+$$
+\ell = \ell_{\text{center}} + a\,\ell_{\text{movement}} + b\,\ell_{\text{box}}, \tag{9}
+$$
+
+where we set $a = 1$ and $b = 0.1$ in all our experiments. Detailed ablation studies will be provided in the supplementary material.
+
+Inference. We are now ready to generate the tubelet detection results, based on the center trajectories $T$ from the Movement Branch and the size prediction map $\hat{S}$ produced by this branch. For the $j^{th}$ point in trajectory $T_i$, we use $(T_x, T_y)$ to denote its coordinates and $(w, h)$ to denote the Box Branch size output $\hat{S}$ at that location. The bounding box for this point is then:
+
+$$
+\left(T_x - w / 2,\; T_y - h / 2,\; T_x + w / 2,\; T_y + h / 2\right). \tag{10}
+$$
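Putting Eqs. (7) and (10) together, box decoding for one trajectory can be sketched as follows (a simplified illustration with nearest-neighbor sampling of $\hat{S}$; names ours):

```python
import numpy as np

def decode_boxes(traj, size_maps):
    """Eq. (10): turn a center trajectory and per-frame size maps into boxes.

    traj: (K, 2) trajectory of (x, y) centers.
    size_maps: (K, H, W, 2) Box Branch outputs S_hat, channels = (w, h).
    Returns (K, 4) boxes as (x1, y1, x2, y2).
    """
    boxes = []
    for j, (x, y) in enumerate(traj):
        # Sample the predicted size at the nearest grid location.
        w, h = size_maps[j, int(round(float(y))), int(round(float(x)))]
        boxes.append((x - w / 2, y - h / 2, x + w / 2, y + h / 2))
    return np.array(boxes, dtype=np.float32)

sm = np.zeros((2, 8, 8, 2), dtype=np.float32)
sm[0, 3, 4] = sm[1, 3, 4] = [4, 2]  # (w, h) predicted at x = 4, y = 3
boxes = decode_boxes(np.array([[4, 3], [4, 3]], dtype=np.float32), sm)
```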
+
+# 3.4 Tubelet Linking
+
+After obtaining the clip-level detection results, we link these tubelets into final tubes across time. As our main goal is to propose a new tubelet detector, we use the same linking algorithm as [14] for fair comparison. Given a video, MOC extracts tubelets for each sequence of $K$ frames with stride 1 across time and keeps the top 10 as candidates, which are linked into final tubes tubelet by tubelet. Initialization: in the first frame, every candidate starts a new link; at any later frame, candidates not assigned to any existing link start new links. Linking: a candidate can be assigned to an existing link only when it meets three conditions: (1) the candidate has not been selected by another link, (2) the candidate has the highest score, and (3) the overlap between the link and the candidate is greater than a threshold $\tau$. Termination: an existing link stops if it has not been extended for $K$ consecutive frames. We build an action tube for each link, whose score is the average score of the tubelets in the link. For each frame in the link, we average the bounding box coordinates of the tubelets containing that frame. Initialization and termination determine the tubes' temporal extents, and tubes with low confidence or short duration are discarded. As this linking algorithm is online, MOC can be applied to online video streams.
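A heavily simplified sketch of one online linking step, under the assumptions that each link is represented by its last tubelet's box and that overlap is plain IoU (the real algorithm of [14] scores overlap between whole tubelets and handles termination after $K$ unextended frames; `iou` and `link_step` are our names):

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_step(links, candidates, tau=0.5):
    """Greedily extend each link with its best unclaimed candidate,
    then start new links from the leftovers.

    links: list of links, each a list of (score, box) tubelet entries.
    candidates: list of (score, box) tubelet candidates at this step.
    """
    taken = set()
    for link in links:
        best, best_score = None, -1.0
        for i, (score, box) in enumerate(candidates):
            if i in taken:
                continue
            if iou(link[-1][1], box) > tau and score > best_score:
                best, best_score = i, score
        if best is not None:
            link.append(candidates[best])
            taken.add(best)
    for i, cand in enumerate(candidates):
        if i not in taken:
            links.append([cand])  # unmatched candidates start new links
    return links
```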
+
+# 4 Experiments
+
+# 4.1 Experimental Setup
+
+Datasets and Metrics. We perform experiments on the UCF101-24 [28] and JHMDB [13] datasets. UCF101-24 [28] consists of 3207 temporally untrimmed videos from 24 sports classes. Following the common setting [21,14], we report action detection performance on the first split only. JHMDB [13] consists of 928 temporally trimmed videos from 21 action classes. We report results averaged over three splits, following the common setting [21,14]. AVA [9] is a larger action detection dataset, but it only annotates a single frame per 3-second clip and thus concentrates on detecting actions on a single key frame, making it unsuitable for verifying the effectiveness of tubelet action detectors. Following [33,8,14], we use frame mAP and video mAP to evaluate detection accuracy.
+
+Implementation Details. We choose DLA-34 [37] as our backbone, with either COCO [18] or ImageNet [3] pretraining. We report MOC results with COCO pretraining unless otherwise stated. For a fair comparison, we provide two-stream results on both datasets with both COCO and ImageNet pretraining in Section 4.3. Each frame is resized to $288 \times 288$. The spatial downsampling ratio $R$ is set to 4, giving a feature map size of $72 \times 72$. During training, we apply the same data augmentation as [14] to the whole video: photometric transformation, scale jittering, and location jittering. We use Adam with a learning rate of 5e-4 to optimize the overall objective. The learning rate is tuned for convergence on the validation set and decreased by a factor of 10 when performance saturates. The maximum number of epochs is set to 12 on UCF101-24 [28] and 20 on JHMDB [13].
+
+# 4.2 Ablation Studies
+
+For efficient exploration, experiments in this subsection use only the RGB input modality, COCO pretraining, and $K = 5$, with exactly the same training strategy, unless otherwise specified.
+
+Effectiveness of Movement Branch. In MOC, the Movement Branch affects both the location and the size of each bounding box. The Movement Branch moves the key frame center to the other frames to locate each bbox center, which we call the Move Center strategy. The Box Branch estimates the bbox size at the current frame's center, which is located by the Movement Branch rather than fixed at the key frame center; we call this the Bbox Align strategy. To explore the effectiveness of the Movement Branch, we compare MOC with two other detector designs, called No Movement and Semi Movement. We set the tubelet length $K = 5$ in all designs with the same training strategy. As shown in Figure 3, No Movement directly removes the Movement Branch and generates the bounding box for each frame at the same location as the key frame center. Semi Movement first generates the bounding box for each frame at the key frame center, and then moves the generated box in each frame according to the Movement Branch prediction. Full Movement (MOC) first moves the key frame center to the current frame center according to the Movement Branch prediction, and the Box Branch then generates the bounding box for each frame at its own center. The difference between Full Movement and Semi Movement is where they generate the bounding box: one at the real center, the other at the fixed key frame center. The results are summarized in Table 1.
+
+First, we observe that the performance gap between No Movement and Semi Movement is $1.56\%$ for frame mAP@0.5 and $11.05\%$ for video mAP@0.5. The Movement Branch thus has a relatively small influence on frame mAP but contributes much to video mAP. Frame mAP measures detection quality in a single frame without tubelet linking, while video mAP measures tube-level detection quality involving tubelet linking. Small movements within a short tubelet do not harm frame mAP dramatically, but accumulating these subtle
+
+
+(a) Generate bbox at key frame center, without any movement
+
+
+(b) First generate bbox at key frame center, then move the bbox
+
+
+(c) First move key frame center, then generate bbox at current frame center
+Fig. 3. Illustration of Three Movement Strategies. Note that the arrow represents moving according to Movement Branch prediction, the red dot represents the key frame center and the green dot represents the current frame center, which is localized by moving key frame center according to Movement Branch prediction.
+
+Table 1. Exploration study on MOC detector design with various combinations of movement strategies on UCF101-24.
+
+| Method | Move Center | Bbox Align | F-mAP@0.5 (%) | V-mAP@0.2 (%) | V-mAP@0.5 (%) | V-mAP@0.75 (%) | V-mAP@0.5:0.95 (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| No Movement | | | 68.22 | 68.91 | 37.77 | 19.94 | 19.27 |
+| Semi Movement | ✓ | | 69.78 | 76.63 | 48.82 | 27.05 | 26.09 |
+| Full Movement (MOC) | ✓ | ✓ | 71.63 | 77.74 | 49.55 | 27.04 | 26.09 |
+
+errors in the linking process will seriously harm video-level detection. This demonstrates that movement information is important for improving video mAP. Second, we see that Full Movement performs slightly better than Semi Movement for both video mAP and frame mAP. Without Bbox Align, the Box Branch estimates bbox size at the key frame center for all frames, which causes a small performance drop compared with MOC. This small gap implies that the Box Branch is relatively robust to the box center, and estimating bbox size at a slightly shifted location brings only a very slight performance difference.
+
+Study on Movement Branch Design. In practice, to find an efficient way to capture center movements, we implement the Movement Branch in several different ways. The first is the Flow Guided Movement strategy, which utilizes optical flow between adjacent frames to move the action instance center. The second, Cost Volume Movement, directly computes the movement offset by constructing a cost volume between the key frame and the current frame following [39], but this explicit computation fails to yield better results and is slower due to the construction of the cost volume. The third is the Accumulated Movement strategy, which predicts center movement between consecutive frames instead of with respect to the key frame. The fourth, Center Movement, employs a 3D convolution to directly regress the offsets of each frame with respect to the key frame, as described in Section 3.2. The results are reported in Table 2.
+
+We notice that the simple Center Movement performs best and choose it as the Movement Branch design in our MOC-detector: it directly employs a 3D convolution to regress the key frame center movement for all frames as a whole. We analyze the failure reasons for the other three designs below.
+
+Table 2. Exploration study on the Movement Branch design on UCF101-24 [28]. Note that our MOC-detector adopts the Center Movement.
+
+| Method | F-mAP@0.5 (%) | V-mAP@0.2 (%) | V-mAP@0.5 (%) | V-mAP@0.75 (%) | V-mAP@0.5:0.95 (%) |
+| --- | --- | --- | --- | --- |
+| Flow Guided Movement | 69.38 | 75.17 | 42.28 | 22.26 | 21.16 |
+| Cost Volume Movement | 69.63 | 72.56 | 43.67 | 21.68 | 22.46 |
+| Accumulated Movement | 69.40 | 75.03 | 46.19 | 24.67 | 23.80 |
+| Center Movement | 71.63 | 77.74 | 49.55 | 27.04 | 26.09 |
+
+Table 3. Exploration study on the tubelet duration $K$ on UCF101-24.
+
+| Tubelet Duration | F-mAP@0.5 (%) | V-mAP@0.2 (%) | V-mAP@0.5 (%) | V-mAP@0.75 (%) | V-mAP@0.5:0.95 (%) |
+| --- | --- | --- | --- | --- |
+| K = 1 | 68.33 | 65.47 | 31.50 | 15.12 | 15.54 |
+| K = 3 | 69.94 | 75.83 | 45.94 | 24.94 | 23.84 |
+| K = 5 | 71.63 | 77.74 | 49.55 | 27.04 | 26.09 |
+| K = 7 | 73.14 | 78.81 | 51.02 | 27.05 | 26.51 |
+| K = 9 | 72.17 | 77.94 | 50.16 | 26.26 | 26.07 |
+
+For Flow Guided Movement: (i) flow is not accurate and only represents pixel movement, while Center Movement is supervised by box movement; (ii) accumulating adjacent flow to generate a trajectory enlarges the error. For Cost Volume Movement: (i) we explicitly calculate the correlation of the current frame with respect to the key frame, so regressing the movement of the current frame depends only on the current correlation map, whereas directly regressing movement with 3D convolutions lets the movement of each frame depend on all frames, which can contribute to a more accurate estimation; (ii) as cost volume calculation and offset aggregation involve a correlation without extra parameters, convergence is much harder than for Center Movement. Accumulated Movement also suffers from error accumulation and is more sensitive to training/inference consistency: the ground truth movement is calculated at the real bounding box center during training, while at inference the current frame center is estimated by the Movement Branch and may be imprecise, so Accumulated Movement can drift far from the ground truth.
+
+Study on Input Sequence Duration. The temporal length $K$ of the input clip is an important parameter of our MOC-detector. In this study, we report the RGB-stream performance of MOC on UCF101-24 [28], varying $K$ from 1 to 9; the experimental results are summarized in Table 3. We reduce the training batch size for $K = 7$ and $K = 9$ due to GPU memory limitations.
+
+First, we notice that when $K = 1$, our MOC-detector reduces to a frame-level detector, which obtains the worst performance, in particular for video mAP. This confirms the common assumption that a frame-level action detector lacks
+
+Table 4. Comparison with the state of the art on JHMDB (trimmed) and UCF101-24 (untrimmed). Ours $(\mathrm{MOC})^{\dagger}$ is pretrained on ImageNet [3] and Ours (MOC) is pretrained on COCO [18].
+
+| Method | JHMDB F-mAP@0.5 (%) | JHMDB V-mAP@0.2 | @0.5 | @0.75 | @0.5:0.95 | UCF101-24 F-mAP@0.5 (%) | UCF101-24 V-mAP@0.2 | @0.5 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 2D Backbone | | | | | | | | |
+| Saha et al. 2016 [25] | - | 72.6 | 71.5 | 43.3 | 40.0 | - | 66.7 | 35.9 |
+| Peng et al. 2016 [21] | 58.5 | 74.3 | 73.1 | - | - | 39.9 | 42.3 | - |
+| Singh et al. 2017 [26] | - | 73.8 | 72.0 | 44.5 | 41.6 | - | 73.5 | 46.3 |
+| Kalogeiton et al. 2017 [14] | 65.7 | 74.2 | 73.7 | 52.1 | 44.8 | 69.5 | 76.5 | 49.2 |
+| Yang et al. 2019 [35] | - | - | - | - | - | 75.0 | 76.6 | - |
+| Song et al. 2019 [27] | 65.5 | 74.1 | 73.4 | 52.5 | 44.8 | 72.1 | 77.5 | 52.9 |
+| Zhao et al. 2019 [38] | - | - | 74.7 | 53.3 | 45.0 | - | 78.5 | 50.3 |
+| Ours (MOC)† | 68.0 | 76.2 | 75.4 | 68.5 | 54.0 | 76.9 | 81.3 | 54.4 |
+| Ours (MOC) | 70.8 | 77.3 | 77.2 | 71.7 | 59.1 | 78.0 | 82.8 | 53.8 |
+| 3D Backbone | | | | | | | | |
+| Hou et al. 2017 [11] (C3D) | 61.3 | 78.4 | 76.9 | - | - | 41.4 | 47.1 | - |
+| Gu et al. 2018 [9] (I3D) | 73.3 | - | 78.6 | - | - | 76.3 | - | 59.9 |
+| Sun et al. 2018 [29] (S3D-G) | 77.9 | - | 80.1 | - | - | - | - | - |
+consideration of temporal information for action recognition and is thus worse than the tubelet detectors, which agrees with our basic motivation for designing an action tubelet detector. Second, detection performance increases as we vary $K$ from 1 to 7, with the gap narrowing between $K = 5$ and $K = 7$. From $K = 7$ to $K = 9$, detection performance drops because predicting movement is harder for a longer input length. According to these results, we set $K = 7$ in our MOC.
+
+# 4.3 Comparison with the State of the Art
+
+Finally, we compare our MOC with the existing state-of-the-art methods on the trimmed JHMDB dataset and the untrimmed UCF101-24 dataset in Table 4. For a fair comparison, we also report two-stream results with ImageNet pretrain.
+
+Our MOC achieves similar performance on UCF101-24 with ImageNet and COCO pretraining, while COCO pretraining clearly improves MOC's performance on JHMDB, as JHMDB is quite small and sensitive to the pretrained model. Our method significantly outperforms the frame-level action detectors [25,21,26] in both frame-mAP and video-mAP; these detectors perform action detection at each frame independently without capturing temporal information. [14,35,38,27] are all tubelet detectors; our MOC outperforms them on all metrics on both datasets, and the improvement is more evident for high-IoU video mAP. This result confirms that our anchor-free MOC-detector localizes precise tubelets from clips more effectively than those anchor-based detectors, which we ascribe to the flexibility and continuity gained by directly regressing the tubelet shape. Our method achieves performance comparable to the 3D-backbone-based methods [11,9,29]. These methods usually divide action detection into two steps: person detection (a ResNet50-based Faster R-CNN [23] pretrained on ImageNet) and action classification (I3D [2]/S3D-G [34] pretrained
+
+
+Fig. 4. Runtime Comparison and Analysis. (a) Comparison with other methods; two-stream results follow ACT [14]'s setting. (b) Detection accuracy (green bars) and speed (red dots) of MOC's online setting.
+
+on Kinetics [2] + ROI pooling), and fail to provide a simple unified action detection framework.
+
+# 4.4 Runtime Analysis
+
+Following ACT [14], we evaluate MOC's two-stream offline speed on a single GPU, excluding flow extraction time; MOC reaches 25 fps. In Figure 4(a), we compare MOC with existing methods that report their speed in the original papers. [35,38,14] are all action tubelet detectors, and our MOC obtains more accurate detection results at a comparable speed. Our MOC can also process online real-time video streams. To simulate an online video stream, we set the batch size to 1. Since each backbone feature needs to be extracted only once, we keep the previous $K-1$ frames' features in a buffer. When a new frame arrives, MOC's backbone first extracts its feature and combines it with the buffered features of the previous $K-1$ frames; MOC's three branches then generate tubelet detections from these features. After that, the buffer is updated with the current frame's feature for subsequent detection. For online testing, we only input RGB, as optical flow extraction is quite expensive; the results are reported in Figure 4(b). Our MOC is quite efficient in online testing, reaching 53 FPS for $K = 7$.
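The feature buffer described above is essentially a fixed-length FIFO; a minimal sketch (class name ours):

```python
from collections import deque

class OnlineFeatureBuffer:
    """Rolling buffer of the last K per-frame backbone features for online MOC."""

    def __init__(self, k):
        self.k = k
        self.buf = deque(maxlen=k)  # oldest feature is evicted automatically

    def push(self, feature):
        """Add the newest frame's feature; return the K-frame window once full."""
        self.buf.append(feature)
        if len(self.buf) < self.k:
            return None  # not enough frames yet to form a tubelet
        return list(self.buf)
```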
+
+# 4.5 Visualization
+
+In Figure 5, we give qualitative examples comparing tubelet durations $K = 1$ and $K = 7$. The comparison between the second and third rows shows that our tubelet detector produces fewer missed detections and localizes actions more accurately, owing to the offset constraint within the same tubelet. Moreover, the comparison between the fifth and sixth rows shows that our tubelet detector can reduce classification errors, since some actions cannot be discriminated by looking at just one frame.
+
+
+Fig. 5. Examples of Per-frame $(\mathbf{K} = \mathbf{1})$ and Tubelet $(\mathbf{K} = \mathbf{7})$ Detection. The yellow boxes show detection results, with categories and scores beside them. Yellow category labels indicate correct predictions and red ones wrong predictions. Red dashed boxes mark missed actors. Green boxes and categories are the ground truth. MOC generates one score and category per tubelet, marked in the tubelet's first frame. Note that we set the visualization threshold to 0.4.
+
+# 5 Conclusion and Future Work
+
+In this paper, we have presented an action tubelet detector, termed MOC, that treats each action instance as a trajectory of moving points and directly regresses the bounding box size at the estimated center points of all frames. As demonstrated on two challenging datasets, the MOC-detector sets a new state of the art in both frame mAP and video mAP while maintaining a reasonable computational cost. The superior performance is largely ascribed to the unique design of the three branches and their cooperative modeling of tubelet detection. In the future, we plan to extend this framework to longer-term modeling and to modeling action boundaries in the temporal dimension, thus contributing to spatio-temporal action detection in longer continuous video streams.
+
+Acknowledgements. This work is supported by Tencent AI Lab Rhino-Bird Focused Research Program (No. JR202025), the National Science Foundation of China (No. 61921006), Program for Innovative Talents and Entrepreneur in Jiangsu Province, and Collaborative Innovation Center of Novel Software Technology and Industrialization.
+
+# References
+
+1. Cai, Z., Vasconcelos, N.: Cascade r-cnn: Delving into high quality object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 6154-6162 (2018)
+2. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6299-6308 (2017)
+3. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)
+4. Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., Tian, Q.: Centernet: Keypoint triplets for object detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6569-6578 (2019)
+5. Gan, C., Wang, N., Yang, Y., Yeung, D.Y., Hauptmann, A.G.: Devnet: A deep event network for multimedia event detection and evidence recounting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2568-2577 (2015)
+6. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440-1448 (2015)
+7. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 580-587 (2014)
+8. Gkioxari, G., Malik, J.: Finding action tubes. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 759-768 (2015)
+9. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6047-6056 (2018)
+10. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence 37(9), 1904-1916 (2015)
+11. Hou, R., Chen, C., Shah, M.: Tube convolutional neural network (t-cnn) for action detection in videos. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5822-5831 (2017)
+12. Hu, W., Tan, T., Wang, L., Maybank, S.: A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 34(3), 334-352 (2004)
+13. Jhuang, H., Gall, J., Zuffi, S., Schmid, C., Black, M.J.: Towards understanding action recognition. In: Proceedings of the IEEE international conference on computer vision. pp. 3192-3199 (2013)
+14. Kalogeiton, V., Weinzaepfel, P., Ferrari, V., Schmid, C.: Action tubelet detector for spatio-temporal action localization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4405-4413 (2017)
+15. Law, H., Deng, J.: Cornernet: Detecting objects as paired keypoints. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 734-750 (2018)
+16. Li, D., Qiu, Z., Dai, Q., Yao, T., Mei, T.: Recurrent tubelet proposal and recognition networks for action detection. In: Proceedings of the European conference on computer vision (ECCV). pp. 303-318 (2018)
+
+17. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980-2988 (2017)
+18. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+19. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision. pp. 21-37. Springer (2016)
+20. Oh, S., Hoogs, A., Perera, A., Cuntoor, N., Chen, C.C., Lee, J.T., Mukherjee, S., Aggarwal, J., Lee, H., Davis, L., et al.: A large-scale benchmark dataset for event recognition in surveillance video. In: CVPR 2011. pp. 3153-3160. IEEE (2011)
+21. Peng, X., Schmid, C.: Multi-region two-stream r-cnn for action detection. In: European conference on computer vision. pp. 744-759. Springer (2016)
+22. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 779-788 (2016)
+23. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+24. Saha, S., Singh, G., Cuzzolin, F.: Amtnet: Action-micro-tube regression by end-to-end trainable deep architecture. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4414-4423 (2017)
+25. Saha, S., Singh, G., Sapienza, M., Torr, P.H., Cuzzolin, F.: Deep learning for detecting multiple space-time action tubes in videos. arXiv preprint arXiv:1608.01529 (2016)
+26. Singh, G., Saha, S., Sapienza, M., Torr, P.H., Cuzzolin, F.: Online real-time multiple spatiotemporal action localisation and prediction. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3637-3646 (2017)
+27. Song, L., Zhang, S., Yu, G., Sun, H.: Tacnet: Transition-aware context network for spatio-temporal action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 11987-11995 (2019)
+28. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
+29. Sun, C., Shrivastava, A., Vondrick, C., Murphy, K., Sukthankar, R., Schmid, C.: Actor-centric relation network. In: ECCV. pp. 335-351 (2018)
+30. Tian, Z., Shen, C., Chen, H., He, T.: Fcos: Fully convolutional one-stage object detection. In: The IEEE International Conference on Computer Vision (ICCV) (October 2019)
+31. Venugopalan, S., Rohrbach, M., Donahue, J., Mooney, R., Darrell, T., Saenko, K.: Sequence to sequence-video to text. In: Proceedings of the IEEE international conference on computer vision. pp. 4534-4542 (2015)
+32. Wang, L., Qiao, Y., Tang, X., Gool, L.V.: Actionness estimation using hybrid fully convolutional networks. In: CVPR. pp. 2708-2717 (2016)
+33. Weinzaepfel, P., Harchaoui, Z., Schmid, C.: Learning to track for spatio-temporal action localization. In: Proceedings of the IEEE international conference on computer vision. pp. 3164-3172 (2015)
+34. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 305-321 (2018)
+
+35. Yang, X., Yang, X., Liu, M.Y., Xiao, F., Davis, L.S., Kautz, J.: Step: Spatiotemporal progressive learning for video action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 264-272 (2019)
+36. Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H., Courville, A.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE international conference on computer vision. pp. 4507-4515 (2015)
+37. Yu, F., Wang, D., Shelhamer, E., Darrell, T.: Deep layer aggregation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2403-2412 (2018)
+38. Zhao, J., Snoek, C.G.: Dance with flow: Two-in-one stream action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9935-9944 (2019)
+39. Zhao, Y., Xiong, Y., Lin, D.: Recognize actions by disentangling components of dynamics. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6566-6575 (2018)
+40. Zhou, X., Wang, D., Krähenbühl, P.: Objects as points. arXiv preprint arXiv:1904.07850 (2019)
+41. Zhou, X., Zhuo, J., Krähenbühl, P.: Bottom-up object detection by grouping extreme and center points. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 850-859 (2019)
\ No newline at end of file
diff --git a/actionsasmovingpoints/images.zip b/actionsasmovingpoints/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d9e6db6d96016e6f63fb1d89929d65dd189627fe
--- /dev/null
+++ b/actionsasmovingpoints/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:841436324c926f4f86a8a86507a5d75914867c4b39a420a72d061728ca330b02
+size 481630
diff --git a/actionsasmovingpoints/layout.json b/actionsasmovingpoints/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..263d36b8add0d40f1d68442b52d1420bb9054ee3
--- /dev/null
+++ b/actionsasmovingpoints/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f82b48e025e97c411335a36581594af9b0ba2deee4ff0ecf59b386239cf82ec7
+size 382724
diff --git a/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_content_list.json b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..65d90f9e9dc52c132aaaaae043051e4710b7ebd9
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fef285be96ea73873cf146e616776f48ea5c9d48cc47d710330b7700a967ae0
+size 79197
diff --git a/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_model.json b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..228c4f523c790ec201d98574fa040d4432c5e344
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f49e48b997e8def96687864af31b7c4b95ebf2c538d3b20a79cf8509432f9abe
+size 99882
diff --git a/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_origin.pdf b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f8da7cf381b7c3e6262c261e16f85abf86caf539
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/78314569-4d9a-4d8a-8006-ffa9e1ce7f64_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:787cce74f66b80b3647c0d3d23240558ec8d7bc46f956c0c837ba67e6cc87036
+size 1566718
diff --git a/activecrowdcountingwithlimitedsupervision/full.md b/activecrowdcountingwithlimitedsupervision/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..93fbaa95462ddf94a7ebbc38b74f3d428d468315
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/full.md
@@ -0,0 +1,290 @@
+# Active Crowd Counting with Limited Supervision
+
+Zhen Zhao $^{1\star}$ , Miaojing Shi $^{2\star}$ , Xiaoxiao Zhao $^{1}$ , and Li Li $^{1,3}$
+
+1 College of Electronic and Information Engineering, Tongji University
+2 King's College London
+
+3 Institute of Intelligent Science and Technology, Tongji University
+
+zhenzhao0917@gmail.com; miaojing.shi@kcl.ac.uk; lili@tongji.edu.cn
+
+Abstract. To learn a reliable people counter from crowd images, head center annotations are normally required. Annotating head centers is however a laborious and tedious process in dense crowds. In this paper, we present an active learning framework which enables accurate crowd counting with limited supervision: given a small labeling budget, instead of randomly selecting images to annotate, we first introduce an active labeling strategy to annotate the most informative images in the dataset and learn the counting model upon them. The process is repeated such that in every cycle we select samples that are diverse in crowd density and dissimilar to previous selections. In the last cycle, when the labeling budget is met, the large amount of unlabeled data is also utilized: a distribution classifier is introduced to align the labeled data with unlabeled data; furthermore, we propose to mix up the distribution labels and latent representations of data in the network to particularly improve the distribution alignment in-between training samples. We follow the popular density estimation pipeline for crowd counting. Extensive experiments are conducted on standard benchmarks, i.e. ShanghaiTech, UCF_CC_50, Mall, TRANCOS, and DCC. By annotating a limited number of images (e.g. $10\%$ of the dataset), our method reaches levels of performance not far from those of state-of-the-art methods that utilize full annotations of the dataset.
+
+# 1 Introduction
+
+The task of crowd counting in computer vision is to automatically count the number of people in images/videos. With the rapid growth of the world's population, crowd gatherings become more frequent than ever. To help with crowd control and public safety, accurate crowd counting is in demand.
+
+Early methods count crowds via the detection of individuals [49, 2, 34]. They suffer from heavy occlusions in dense crowds. More importantly, learning such people detectors normally requires bounding box or instance mask annotations for individuals, which often makes them undesirable in large-scale applications. Modern methods mainly conduct crowd counting via density estimation [32, 60,
+
+
+Fig. 1: Given a crowd counting dataset, we propose an active learning framework (AL-AC) which actively labels only a small proportion of the dataset and learns an accurate density estimation network using both labeled and unlabeled data.
+
+44, 37, 26, 21, 20, 54]. Counting is realized by estimating a density map of an image, whose integral over the image gives the total people count. Given a training image, its density map is obtained via Gaussian blurring at every head center. Head centers are the required annotations for training. Thanks to powerful deep neural networks (DNNs) [17], density estimation based methods have shown great success in recent progress [60, 39, 20, 35, 42, 54, 43, 25].
+
+Despite the above, annotating head centers in dense crowds is still a laborious and tedious process. For instance, it can take up to 10 minutes for our annotators to annotate a single image with 500 persons; while the popular counting dataset ShanghaiTech PartA [60] has 300 training images with an average of 501 persons per image! To substantially reduce the annotation cost, we study crowd density estimation in a semi-supervised setting where only a handful of images are labeled while the rest are unlabeled. This setting has not been widely explored in crowd counting: [4, 61] propose to actively annotate the most informative video frames for semi-supervised crowd counting, yet the algorithms are not deep learning based and rely on frame consecutiveness. Recently, some deep learning works propose to leverage additional web data [24, 23] or synthetic data [51] for crowd counting; images in the existing dataset are still assumed annotated, or at least many of them. Model transferability is also evaluated in some works [12, 54], where a network is trained on a source dataset with full annotations and tested on a target dataset with no/few annotations.
+
+Given an existing dataset and a powerful DNN, we find that 1) when learning from only a small subset, performance can vary a lot depending on the subset selection; 2) for a specific subset that covers diverse crowd densities, performance can be quite good (see results in Sec. 4.2). This motivates us to study crowd counting with very limited annotations yet producing very competitive precision. To achieve this goal, we propose an Active Learning framework for Accurate crowd Counting (AL-AC) as illustrated in Fig. 1: given a labeling budget, instead of randomly selecting images to annotate, we first introduce an active labeling strategy to iteratively annotate the most informative images in the
+
+dataset and learn the counting model on them. In each cycle we select samples that cover different crowd densities and are also dissimilar to previous selections. Eventually, the large amount of unlabeled data is also included in the network training: we design a classifier with a gradient reversal layer [7] to align the intrinsic distributions of labeled and unlabeled data. Since all training samples contain the same object class, e.g. person, we propose to further align the distributions in-between training samples by mixing up the latent representations and distribution labels of labeled and unlabeled data in the network. With very limited labeled data, our model produces very competitive counting results.
+
+To summarize, several new elements are offered:
+
+- We introduce an active learning framework for accurate crowd counting with limited supervision.
+- We propose a partition-based sample selection with weights (PSSW) strategy to actively select and annotate both diverse and dissimilar samples for network training.
+- We design a distribution alignment branch with latent MixUp to align the distributions of the labeled data and the large amount of unlabeled data in the network.
+
+Extensive experiments are conducted on standard counting benchmarks, i.e. ShanghaiTech [60], UCF_CC_50 [13], Mall [5], TRANCOS [9], and DCC [28]. Results demonstrate that, with a small number of labeled images, our AL-AC reaches levels of performance not far from those of state-of-the-art fully-supervised methods.
+
+# 2 Related works
+
+In this section, we mainly survey deep learning based crowd counting methods and discuss semi-supervised learning and active learning in crowd counting.
+
+# 2.1 Crowd counting
+
+The prevailing crowd counting solution is to estimate a density map of a crowd image, whose integral gives the total person count of that image [60]. A density map encodes the spatial information of an image; regressing it in a DNN is demonstrated to be more robust than simply regressing a global crowd count [58, 26]. Due to the commonly occurring heavy occlusions and perspective distortions in crowd images, multi-scale or multi-resolution architectures are often exploited in DNNs: Ranjan et al. [35] propose an iterative crowd counting network which produces a low-resolution density map and uses it to generate the high-resolution density map. Cao et al. [3] propose a novel encoder-decoder network, where the encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps using a set of transposed convolutions. Furthermore, Jiang et al. [15] develop a trellis encoder-decoder network that incorporates multiple decoding paths to hierarchically aggregate features at different encoding stages. In order to better utilize
+
+multi-scale features in the network, attention [21, 43], context [44, 22], or perspective [42, 55] information in crowd images is often incorporated into the network. Our work is a density estimation based approach.
+
+# 2.2 Semi-supervised learning
+
+Semi-supervised learning [29] refers to learning with a small amount of labeled data and a large amount of unlabeled data, and has been a popular paradigm in deep learning [52, 36, 18, 57]. It is traditionally studied for classification, where a label represents a class per image [19, 10, 36, 18]. In this work, we focus on semi-supervised learning in crowd counting, where the label of an image is the people count, with individual head points available in most cases. The common semi-supervised crowd counting solution is to leverage both labeled and unlabeled data in the learning procedure: Tan et al. [46] propose a semi-supervised elastic net regression method utilizing the sequential information between unlabeled samples and their temporally neighboring samples as a regularization term; Loy et al. [4] further improve it by utilizing both spatial and temporal regularization in a semi-supervised kernel ridge regression problem; finally, in [61], graph Laplacian regularization and spatiotemporal constraints are incorporated into the semi-supervised regression. None of these are deep learning works, and they all rely on temporal information among video frames.
+
+Recently, Olmschenk et al. [30, 31] employ a generative adversarial network (GAN) to allow the usage of unlabeled data in crowd counting. Sam et al. [38] introduce an almost unsupervised learning method in which only a tiny proportion of the model parameters is trained with labeled data while the vast majority is trained with unlabeled data. Liu et al. [24, 23] propose to learn from unlabeled crowd data via a self-supervised ranking loss in the network. In [24, 23], they mainly assume the existence of a labeled dataset and add extra data from the web; in contrast, our AL-AC seeks a solution for accurate crowd counting with limited labeled data. Our method is also similar to [30, 31] in the spirit of distribution alignment between labeled and unlabeled data. However, in [30, 31] fake images need to be generated to learn the discriminator of the GAN, which makes it hard to learn and converge. Our AL-AC instead mixes representations of labeled and unlabeled data in the network and learns the discriminator against them.
+
+# 2.3 Active learning
+
+Active learning defines a strategy for determining the data samples that, when added to the training set, improve a previously trained model most effectively [40]. Although it is not possible to obtain a universally good active learning strategy [6], there exist many heuristics [41] which have been proved effective in practice. Active learning has been explored in many applications such as image classification [45, 16] and object detection [8], while in this paper we focus on crowd counting. Methods in this context normally assume the availability of the whole counting set and choose samples from it, which is the so-called pool-based
+
+
+Fig. 2: Overview of our active learning framework for accurate crowd counting (AL-AC). GRL: gradient reversal layer; GAP: global average pooling. PSSW: Partition-based sample selection with weights; Conv $1 \times 1$ : output channel is 1.
+
+active learning [56]. [4] and [61] employ a graph-based approach to build an adjacency matrix of all crowd images in the pool; sample selection is then cast as a matrix partitioning problem. Our work is also pool-based active learning.
+
+Lately, Liu et al. [23] apply active learning in DNNs, where they measure the informativeness of unlabeled samples via the mistakes made by the network on a self-supervised proxy task. The method is conducted iteratively, and in each cycle it selects a group of images based on their uncertainties to the model. The diversity of the selected images is however not carefully taken care of in their uncertainty measure, which might result in a biased selection within some specific count range. Our work instead interprets uncertainty from two perspectives: selected samples are diverse in crowd density and dissimilar to previous selections in each learning cycle. It should also be noted that [23] mainly focuses on adding extra unlabeled data to an existing labeled dataset, while our AL-AC seeks the limited data to be labeled within a given dataset.
+
+# 3 Method
+
+# 3.1 Problem
+
+We follow crowd density estimation in the deep learning context, where density maps are pixel-wise regressed in a DNN [60, 20]. A ground truth density map is generated by convolving Gaussian kernels at the head centers in an image [60]. The network is optimized through a loss function minimizing the prediction error over the ground truth. In this paper, we place our problem in a semi-supervised setting where we label only a few dozen images or fewer, while the large remainder stays unlabeled. Both the labeled and unlabeled data will be exploited in model learning. Below, we introduce our active learning framework for accurate crowd counting (AL-AC).
+
+# 3.2 Overview
+
+Our algorithm follows an active learning pipeline in general. It is an iterative process where a model is learnt in each cycle and a set of samples is chosen to be labeled from a pool of unlabeled samples [41]. In the classic setting, only one single sample is chosen in each cycle. This is however not feasible for DNNs, since it would require training as many models as there are samples, which is intractable for the large-scale problems of practical interest [40]. Hence, the commonly used strategy is batch-mode selection [50, 23], where a subset is selected and labeled in each cycle. This subset is added to the labeled set to update the model, and the selection is repeated in the next cycle. The procedure continues until a predefined criterion is met, e.g. a fixed budget.
+
+Our method is illustrated in Fig. 2: given a dataset $\mathcal{A}$ with labeling budget $M$ (number of images as in [38, 23]), we start by labeling $m$ samples uniformly at random from $\mathcal{A}$. For each labeled sample $v_{i}$, we generate its count label $c_{i}$ and density map $d_{i}$ based on the annotated head points in $v_{i}$. We denote $\mathcal{V}^{1} = \{v_{i}, c_{i}, d_{i}\}$ and $\mathcal{U}^{1} = \{u_{j}\}$ as the labeled and unlabeled set in cycle 1, respectively. A DNN regressor $R^{1}$ is trained on $\mathcal{V}^{1}$ for crowd density estimation. Based on $R^{1}$'s estimated density maps on $\mathcal{U}^{1}$, we propose a partition-based sample selection with weights strategy to select and annotate $m$ samples from $\mathcal{U}^{1}$. These samples are added to $\mathcal{V}^{1}$, giving the updated labeled and unlabeled sets $\mathcal{V}^{2}$ and $\mathcal{U}^{2}$ in the $2^{\mathrm{nd}}$ cycle. Model $R^{1}$ is further trained on $\mathcal{V}^{2}$ and updated as $R^{2}$. The prediction of $R^{2}$ is better than that of $R^{1}$ as it uses more labeled data; we use the new prediction on $\mathcal{U}^{2}$ to again select $m$ samples and add them to $\mathcal{V}^{2}$. The process continues until the labeling budget $M$ is met. The unlabeled set $\mathcal{U}$ is also employed in network training through our proposed distribution alignment with latent MixUp. We only use $\mathcal{U}$ ($\mathcal{U}^{T}$) in the last learning cycle $T$, as we observe that adding it in every cycle does not bring accumulative benefits but rather additional training cost.
+
+The backbone network is not specified in Fig. 2 as it can be any standard backbone. We will detail our selection of the backbone, $M$, $m$ and $R$ in Sec. 4. Below we introduce our partition-based sample selection with weights and distribution alignment with latent MixUp. The overall loss function is given at the end.
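The cycle structure above can be sketched as a batch-mode pool-based loop. This is a minimal sketch, not the paper's code: `train`, `select` and `label` are hypothetical callables standing in for regressor training, the PSSW selection of Sec. 3.3, and human annotation, respectively.

```python
import random

def active_learning_loop(pool, budget_M, batch_m, train, select, label):
    """Batch-mode pool-based active learning loop (Sec. 3.2 sketch).

    `train` fits a density regressor R^t on the labeled set, `select`
    stands in for the PSSW strategy, and `label` stands in for human
    annotation (returning a (sample, annotation) pair).
    """
    # cycle 1: label m samples uniformly at random
    labeled = [label(x) for x in random.sample(pool, batch_m)]
    chosen = {v[0] for v in labeled}
    unlabeled = [x for x in pool if x not in chosen]
    model = train(labeled)
    while len(labeled) < budget_M:
        # pick the m most informative unlabeled samples and annotate them
        picked = select(model, unlabeled, labeled, batch_m)
        labeled += [label(x) for x in picked]
        unlabeled = [x for x in unlabeled if x not in picked]
        model = train(labeled)  # update R^t on the enlarged labeled set
    return model, labeled, unlabeled
```

Each iteration grows the labeled pool by $m$ until the budget $M$ is met, mirroring cycles $1, \ldots, T$.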
+
+# 3.3 Partition-based sample selection with weights (PSSW)
+
+In each learning cycle, we want to annotate the most informative/uncertain samples and add them to the network. The informativeness/uncertainty of samples is evaluated from two perspectives: diverse in density and dissimilar to previous selections. It is observed that crowd data often forms a well structured manifold where different crowd densities normally distribute smoothly within the manifold space [4]; the diversity is to select crowd samples that cover different crowd densities in the manifold. This is realized by separating the unlabeled set into different density partitions for diverse selection. Within each partition, we want to select those samples that are dissimilar to previous labeled samples, such that the model has not seen them. The dissimilarity is measured considering both local
+
+crowd density and global crowd count: we introduce a grid-based dissimilarity measure (GDSIM) for this purpose. Below, we formulate our partition-based sample selection with weights.
+
+Formally, given the model $R^t$ , unlabeled set $\mathcal{U}^t$ and labeled set $\mathcal{V}^t$ in $t^{th}$ cycle, we denote by $\widetilde{c}_j$ the predicted crowd count by $R^t$ for an unlabeled image $u_j$ . The histogram of all $\widetilde{c}_j$ on $\mathcal{U}^t$ discloses the overall density distribution. For the sake of diversity, we want to partition the histogram into $m$ parts and select one sample from each. Since the crowd counts are not evenly distributed (see Fig. 3: Left), sampling images evenly from the histogram can end up with a biased view of the original distribution. We therefore employ the Jenks natural breaks optimization [14] to partition the histogram. Jenks minimizes the variation within each range, so the partitions between ranges reflect the natural breaks of the histogram (Fig. 3).
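As a sketch of the partitioning step, the dynamic program below computes the objective Jenks optimizes, the minimum within-range sum of squared deviations over contiguous ranges of the sorted predicted counts. The function name `natural_breaks` is illustrative, not from the paper's code.

```python
import numpy as np

def natural_breaks(values, k):
    """Split sorted 1-D data into k contiguous ranges minimizing the
    within-range sum of squared deviations (the Jenks objective)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    # prefix sums give O(1) within-range cost evaluation
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def cost(i, j):  # sum of squared deviations of x[i:j]
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1, i] + cost(i, j)
                if c < dp[p, j]:
                    dp[p, j], cut[p, j] = c, i
    # recover the partition boundaries by backtracking the cuts
    bounds, j = [n], n
    for p in range(k, 0, -1):
        j = cut[p, j]
        bounds.append(j)
    bounds = bounds[::-1]
    return [x[bounds[t]:bounds[t + 1]] for t in range(k)]
```

With $k = m$, each returned range corresponds to one density partition $\mathcal{P}_k$ from which a sample is drawn.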
+
+Within each partition $P_{k}$ , inspired by grid average mean absolute error (GAME) [9], we propose a grid-based dissimilarity from an unlabeled sample to labeled samples. Given an image $i$ , GAME is originally introduced as an evaluation measure for density estimation,
+
+$$
+\operatorname{GAME}(L) = \sum_{l=1}^{4^{L}} \left| \widetilde{c_{i}^{l}} - c_{i}^{l} \right|, \tag{1}
+$$
+
+where $\widetilde{c_i^l}$ is the estimated count in region $l$ of image $i$. It can be obtained via integration of the estimated density $\widetilde{d}_i^l$ over that region; $c_{i}^{l}$ is the corresponding ground truth count. Given a specific level $L$, GAME$(L)$ subdivides the image using a grid of $4^{L}$ non-overlapping regions which cover the full image (Fig. 3); the difference between the prediction and ground truth is the sum of the mean absolute errors (MAE) in each of these regions. With different $L$, GAME offers graded ways to compute the dissimilarity between two density maps, taking care of both global counts and local details. Building on GAME, we introduce the grid-based dissimilarity measure GDSIM as,
+
+$$
+\operatorname{GDSIM}(u_{j}, L_{A}) = \min_{i,\, v_{i} \in \mathcal{P}_{k}} \left( \sum_{L=0}^{L_{A}} \sum_{l=1}^{4^{L}} \left| \widetilde{c_{j}^{l}} - c_{i}^{l} \right| \right), \tag{2}
+$$
+
+where $u_{j}$ and $v_{i}$ are from the unlabeled set $\mathcal{U}^t$ and labeled set $\mathcal{V}^t$, respectively; they both fall into the partition $\mathcal{P}_k$. $\widetilde{c_j^l}$ and $c_i^l$ are crowd counts in region $l$ as in formula (1), but for the different images $u_{j}$ and $v_{i}$ (see Fig. 3: Right). Given the level $L_{A}$, unlike GAME, we compute the dissimilarity between $u_{j}$ and $v_{i}$ by traversing all levels from 0 to $L_{A}$ (Fig. 3). In this way, the dissimilarity is computed based on both global count $(L = 0)$ and local density $(L = L_{A})$ differences. Afterwards, instead of averaging the dissimilarity scores from $u_{j}$ to all the $v_{i}$ in $\mathcal{P}_k$, we take the min: if $u_{j}$ is close to any one of the labeled images, it is regarded as a familiar sample to the model. Ideally, we should choose the most dissimilar sample from each partition; nevertheless, the crowd count $\widetilde{c_j^l}$ in formula (2) is not
+
+
+Fig. 3: Illustration of Jenks natural breaks (Left) and grid-based dissimilarity measure (GDSIM, Right). We take the histogram of crowd count on SHB.
+
+
+
+ground truth. We convert the GDSIM scores to probabilities and adopt weighted random selection to label one sample from each partition.
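Eq. (2) and the subsequent weighted random selection can be sketched as follows, assuming density maps are 2-D NumPy arrays; all function names are illustrative.

```python
import numpy as np

def grid_counts(density, level):
    """Sum a density map over the 4**level non-overlapping grid regions."""
    h, w = density.shape
    g = 2 ** level
    return [density[h * r // g:h * (r + 1) // g,
                    w * c // g:w * (c + 1) // g].sum()
            for r in range(g) for c in range(g)]

def gdsim(pred_density_u, gt_densities_v, L_A=3):
    """Grid-based dissimilarity (Eq. 2) of one unlabeled prediction to the
    labeled ground-truth maps of the same partition: accumulate absolute
    grid-count differences over levels 0..L_A, then take the minimum over
    the labeled set (distance to the closest labeled neighbour)."""
    def pair(du, dv):
        return sum(abs(a - b)
                   for L in range(L_A + 1)
                   for a, b in zip(grid_counts(du, L), grid_counts(dv, L)))
    return min(pair(pred_density_u, dv) for dv in gt_densities_v)

def weighted_pick(rng, candidates, scores):
    """Convert GDSIM scores to probabilities and sample one candidate,
    favouring the most dissimilar (weighted random selection)."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum() if p.sum() > 0 else np.full(len(p), 1.0 / len(p))
    return candidates[rng.choice(len(candidates), p=p)]
```

At $L = 0$ the whole map is one region (global count); at $L = L_A$ the comparison is over fine local cells, matching the level traversal in Eq. (2).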
+
+# 3.4 Distribution alignment with latent MixUp
+
+Since the labeled data only represents a partial crowd manifold, particularly when it is limited, distribution alignment with the large amount of unlabeled data becomes necessary even within the same domain. In order for the model to learn a proper subspace representation of the entire set, we introduce distribution alignment with latent MixUp.
+
+We assign labeled data the distribution label 0 and unlabeled data the label 1. A distribution classifier branched off from the deep feature extractor ($\phi$ in Fig. 2) is designed: it is composed of a gradient reversal layer (GRL) [7], a $1 \times 1$ convolution layer and a global average pooling (GAP) layer. The GRL multiplies the gradient by a certain negative constant (-1 in this paper) during back propagation; it enforces that the feature distributions of the labeled and unlabeled data become as indistinguishable as possible to the distribution classifier, thus aligning them together.
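A framework-agnostic sketch of the GRL's behavior (function names are ours; in a framework such as PyTorch this would typically be written as a custom autograd function):

```python
def grl_forward(x):
    # forward pass: the GRL is the identity
    return x

def grl_backward(grad, lam=1.0):
    # backward pass: multiply the incoming gradient by a negative
    # constant (-lam; the paper uses lam = 1), so the feature extractor
    # is updated to *confuse* the distribution classifier
    return -lam * grad
```

Because the gradient of the classification loss is negated before reaching $\phi$, the extractor maximizes the very loss the classifier minimizes, which is what drives the two distributions together.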
+
+The hard distribution labels create hard boundaries between labeled and unlabeled data. To further merge the distributions and particularly align in between training samples, we adapt the idea from MixUp [59]. MixUp normally trains a model on random convex combinations of raw inputs and their corresponding labels. It encourages the model to behave linearly "between" training samples, as this linear behavior reduces the amount of undesirable oscillations when predicting outside the training samples. It has been popularly employed in several semi-supervised classification works [1, 47, 48, 59]. In this work, we integrate it into our distribution alignment branch for semi-supervised crowd counting. We find that mixing raw input images does not work for our problem. Instead we propose to mix their latent representations in the network: suppose we have two images, $x_{1}$, $x_{2}$, with distribution labels $y_{1}$, $y_{2}$, respectively. The latent representations of $x_{1}$ and $x_{2}$ are produced by the deep extractor $\phi$ as two tensors ($\phi(x_{1})$ and $\phi(x_{2})$) from the last convolutional layer of the backbone.
+
+We mix up $(\phi(x_1), y_1)$ , $(\phi(x_2), y_2)$ with a weight $\lambda'$ as
+
+$$
+z' = \lambda' \phi(x_{1}) + (1 - \lambda') \phi(x_{2}),
+$$
+
+$$
+y' = \lambda' y_{1} + (1 - \lambda') y_{2}, \tag{3}
+$$
+
+where $(z', y')$ denotes the mixed latent representation and label. $\lambda'$ is generated in the same way as in [1]: $\lambda' = \max(\lambda, 1 - \lambda)$, $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$; $\alpha$ is a hyperparameter set to 0.5. Both labeled and unlabeled data can be mixed. For two samples with the same label, the mixed label remains that label. We balance the numbers of labeled and unlabeled data with data augmentation (see Sec. 4.1), so a mixed pair can be composed of labeled or unlabeled data with (almost) the same probability. MixUp enriches the distribution in-between training samples. Together with the GRL, it allows the network to elaborately knit the distributions of labeled and unlabeled data. The alignment is only carried out in the last active learning cycle, as an efficient practice. The network training proceeds with a multi-task optimization that minimizes the density regression loss on labeled data and the distribution classification loss on all data including mixed ones, as specified below.
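Eq. (3) can be sketched in NumPy as follows, with plain arrays standing in for the feature tensors $\phi(x)$; `mixup_latent` is an illustrative name.

```python
import numpy as np

def mixup_latent(z1, y1, z2, y2, alpha=0.5, rng=None):
    """Latent MixUp (Eq. 3): convex combination of two feature tensors
    and their distribution labels, with lambda' = max(lambda, 1 - lambda)
    and lambda ~ Beta(alpha, alpha), alpha = 0.5 as in the paper."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # keep the first sample dominant
    z = lam * z1 + (1.0 - lam) * z2
    y = lam * y1 + (1.0 - lam) * y2
    return z, y, lam
```

Note that when $y_1 = y_2$ the mixed label reduces to that same label, so same-distribution pairs are handled without a special case.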
+
+# 3.5 Loss function
+
+For density regression, we adopt the commonly used pixel-wise MSE loss $\mathcal{L}_{reg}$ :
+
+$$
+\mathcal{L}_{reg} = \frac{1}{2K} \sum_{k=1}^{K} \| d_{k}^{e} - d_{k}^{g} \|_{2}^{2}, \tag{4}
+$$
+
+$d_k^e$ and $d_k^g$ denote the density map prediction and ground truth of image $k$ , respectively. $K$ is the number of labeled images. For the distribution classification, since distribution labels for mixed samples can be non-integers, we adopt the binary cross entropy with logits loss $\mathcal{L}_{dc}$ , which combines a Sigmoid layer with the binary cross entropy loss. Given an image pair, $\mathcal{L}_{dc}$ is computed on each individual as well as their mixed representations (see Fig. 2). The overall multi-task loss function is given by
+
+$$
+\mathcal{L} = \mathcal{L}_{reg} + \beta \mathcal{L}_{dc}. \tag{5}
+$$
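A NumPy sketch of Eqs. (4)-(5), using the numerically stable logits form of binary cross entropy so that fractional mixed labels are handled directly; the function names are ours.

```python
import numpy as np

def reg_loss(d_pred, d_gt):
    """Pixel-wise MSE density loss (Eq. 4) averaged over K labeled maps."""
    d_pred, d_gt = np.asarray(d_pred, float), np.asarray(d_gt, float)
    K = d_pred.shape[0]
    return ((d_pred - d_gt) ** 2).sum() / (2 * K)

def bce_with_logits(logits, targets):
    """Binary cross entropy on raw logits (Sigmoid folded in, stable
    form); targets may be fractional, as for mixed distribution labels."""
    z = np.asarray(logits, float)
    t = np.asarray(targets, float)
    return np.mean(np.maximum(z, 0) - z * t + np.log1p(np.exp(-np.abs(z))))

def total_loss(d_pred, d_gt, logits, dist_labels, beta=3.0):
    """Multi-task objective (Eq. 5): L = L_reg + beta * L_dc."""
    return reg_loss(d_pred, d_gt) + beta * bce_with_logits(logits, dist_labels)
```

The stable form `max(z, 0) - z*t + log(1 + exp(-|z|))` is algebraically equal to `-t*log(sigmoid(z)) - (1-t)*log(1-sigmoid(z))` but avoids overflow for large `|z|`.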
+
+# 4 Experiments
+
+We conduct our experiments on three counting datasets: ShanghaiTech [60], UCF_CC_50 [13], Mall [5]. In the supplementary material, we offer more results not only in the three datasets for people counting, but also in the TRANCOS [9] and DCC [28] datasets for vehicle and cell counting, respectively.
+
+# 4.1 Experimental Setup
+
+Datasets. ShanghaiTech [60] consists of 1,198 annotated images with a total of 330,165 people with head center annotations. This dataset is split into SHA and
+
+SHB. The average crowd counts are 501.4 and 123.6, respectively. Following [60], we use 300 images for training and 182 images for testing in SHA; 400 images for training and 316 images for testing in SHB. UCF_CC_50 [13] has 50 images with 63,974 head center annotations in total. The head counts range between 94 and 4,543 per image. The small dataset size and large count variance make this a very challenging counting dataset. We call it UCF for short. Following [13], we perform 5-fold cross validation to report the average test performance. Mall [5] contains 2,000 frames collected in a shopping mall. Each frame on average has only 31 persons. The first 800 frames are used as the training set and the remaining 1,200 frames as the test set.
+
+Implementation details. The backbone $(\phi)$ follows [20]: a VGGnet with 10 convolutional and 6 dilated convolutional layers, pretrained on the ILSVRC classification task. We follow the setting in [20] to generate ground truth density maps. To obtain a strong baseline, the training set is augmented by randomly cropping patches of $1/4$ the size of each image. We set a reference number of 1200; both labeled and unlabeled data in each dataset are augmented up to this number to obtain a balanced distribution. For instance, given 30 labeled images, we crop 40 patches from each image to reach 1200. We feed the network with a minibatch of two image patches each time; to give the two patches the same size, we further crop both to the smaller width and height of the two. We set the learning rate to 1e-7, momentum to 0.95, and weight decay to 5e-4. We train 100 epochs with the SGD optimizer in each active learning cycle; before the last cycle, the network is trained with labeled data only, while in the last cycle it is trained with both labeled and unlabeled data. In all experiments, $L_{A}$ is 3 for GDSIM (2) and $\beta$ is 3 for the loss weight (5).
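
The augmentation bookkeeping can be written as a small helper (the ceiling rule is our assumption; the paper only gives the 30-image example, which it matches exactly):

```python
import math

def patches_per_image(num_images, reference=1200):
    """Number of random crops per image so the augmented set reaches
    the reference size, e.g. 30 labeled images -> 40 patches each."""
    return math.ceil(reference / num_images)
```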
+
+Evaluation protocol. We evaluate the counting performance via the commonly used mean absolute error (MAE) and mean square error (MSE) [39, 44, 21], which measure the difference between the ground truth and estimated counts. For active learning, we label around $10\%$ of the images in the entire set, in line with our setting of limited supervision. $m$ is chosen not too small, so that the labeling budget is normally reached in about 2-4 active learning cycles; Sec. 5 discusses the time complexity. By default, $M$ and $m$ are 30/40 and 10 on SHA and SHB, 10 and 3 on UCF (with an initial number of 4), and 80 and 20 on Mall, respectively. We also evaluate different $M$ and $m$ to show the effectiveness of our method. The baseline randomly labels $M$ images and trains a regression model using the same backbone as our AL-AC but without distribution alignment. As in [4, 61], to take randomness into account, we repeat each experiment over 10 trials and report both mean and standard deviation to show the improvement of our method over the baseline.
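
On image-level counts, the two metrics can be computed as follows (note: following the common convention in the counting literature, "MSE" is computed here as the root of the mean squared count error; treating the paper's metric this way is our assumption):

```python
import numpy as np

def mae(gt_counts, est_counts):
    """Mean absolute error between ground-truth and estimated counts."""
    gt, est = np.asarray(gt_counts, float), np.asarray(est_counts, float)
    return np.mean(np.abs(gt - est))

def mse(gt_counts, est_counts):
    """'MSE' as conventionally reported in counting papers:
    the root of the mean squared count error."""
    gt, est = np.asarray(gt_counts, float), np.asarray(est_counts, float)
    return np.sqrt(np.mean((gt - est) ** 2))
```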
+
+# 4.2 ShanghaiTech
+
+Ablation study. We ablate the proposed partition-based sample selection with weights and the distribution alignment with latent MixUp.
+
+Labeling budget $M$ and $m$ . As mentioned in Sec. 4.1, we set $M = 30/40$ and $m = 10$ by default. Comparative experiments are conducted in two ways. First,
+
+| Budget | SHA: PSSW | SHA: RS | SHB: PSSW | SHB: RS |
+| --- | --- | --- | --- | --- |
+| M=10, m=10 | 121.2±9.3 | 121.2±9.3 | 20.5±4.8 | 20.5±4.8 |
+| M=20, m=10 | 96.7±7.3 | 111.5±7.4 | 17.0±1.9 | 19.3±2.2 |
+| M=30, m=10 | 93.5±2.9 | 102.1±7.0 | 15.7±1.5 | 19.9±3.1 |
+| M=40, m=10 | 85.4±2.5 | 93.8±5.6 | 14.6±1.3 | 17.9±1.9 |
+| M=30, m=5 | 92.6±3.1 | 102.1±7.0 | 15.1±1.5 | 19.9±3.1 |
+| M=40, m=5 | 84.4±2.6 | 93.8±5.6 | 14.4±1.2 | 17.9±1.9 |
+
+Table 1: Ablation study of the proposed partition-based sample selection with weights (PSSW) strategy. Left: comparison against random selection (RS). Right: comparison to variants of PSSW; Even Partition evenly splits the histogram of crowd counts; Global Diff uses the global count difference for dissimilarity. MAE is reported on SHA and SHB.
+
+| M=40, m=10 | SHA | SHB |
+| --- | --- | --- |
+| RS (Baseline) | 93.8 | 17.9 |
+| Even Partition | 89.6 | 16.2 |
+| Global Diff | 86.6 | 15.3 |
+| PSSW | 84.4 | 14.4 |
+
+keeping $m = 10$ , we vary $M$ from 10 to 40. The results are shown in Table 1, where we compare our partition-based sample selection with weights (PSSW) against random selection (RS); distribution alignment is not added in this experiment. The MAE of PSSW on SHA decreases gradually from 121.2 with $M = 10$ to 85.4 with $M = 40$ , and the standard deviation also decreases from 9.3 to 2.5. The MAE is in general 10 points lower than that of RS. PSSW also produces lower MAE than RS on SHB across different $M$ : for example, with $M = 40$ , PSSW yields an MAE of 14.6 vs. 17.9 for RS.
+
+Second, keeping $M = 30/40$ , we decrease $m$ from 10 to 5 and repeat the experiment. The results show that a smaller $m$ indeed works slightly better: for instance, PSSW with $M = 30$ and $m = 5$ reduces MAE by 1.0 on SHA compared to PSSW with $M = 30$ and $m = 10$ . On the other hand, $m$ cannot be too small, as discussed in Sec. 3.2 and Sec. 5. In practice, we keep $m = 10$ for both efficiency and effectiveness.
+
+Variants of PSSW. PSSW has two components: the Jenks-based partition for diversity, and GDSIM for dissimilarity (Sec. 3). To show the effectiveness of each, we present two variants of PSSW: Even Partition, where the Jenks-based partition is replaced by evenly splitting the ranges of the crowd count histogram while GDSIM remains; and Global Diff, where GDSIM is replaced by the global count difference as the dissimilarity measure while the Jenks-based partition remains. We report MAE on SHA and SHB in Table 1 (right). Even Partition produces an MAE of 89.6 on SHA and 16.2 on SHB, while Global Diff produces 86.6 and 15.3. Both are clearly inferior to PSSW (84.4 and 14.4), which confirms the importance of the proposed diversity and dissimilarity measures.
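
The Jenks natural-breaks idea behind the partition can be illustrated with a minimal sketch. This is an exhaustive variant, not the paper's implementation: it searches all break positions for the partition of sorted 1-D values (e.g. predicted crowd counts) into k classes that minimizes the total within-class sum of squared deviations, which is tractable for small candidate pools:

```python
import itertools

def jenks_breaks(values, k):
    """Partition sorted 1-D values into k classes minimizing the total
    within-class sum of squared deviations (Jenks' criterion)."""
    vals = sorted(values)
    n = len(vals)

    def ssd(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best, best_parts = float("inf"), None
    # choose k-1 break positions between consecutive sorted elements
    for cuts in itertools.combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        parts = [vals[bounds[i]:bounds[i + 1]] for i in range(k)]
        cost = sum(ssd(p) for p in parts)
        if cost < best:
            best, best_parts = cost, parts
    return best_parts
```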
+
+Distribution alignment with latent MixUp. Our proposed distribution alignment with latent MixUp is composed of two elements: the distribution classifier with GRL, and latent MixUp (Sec. 3.4). To demonstrate their effectiveness, Table 2 reports the results of PSSW plus the GRL classifier (denoted PSSW + GRL) and plus latent MixUp (denoted PSSW + GRL + MX). Taking $M = 40$ as an example, adding GRL and MX to PSSW contributes a 5.0-point
+
+| M=30, m=10 | SHA MAE | SHA MSE | SHB MAE | SHB MSE |
+| --- | --- | --- | --- | --- |
+| PSSW | 93.5±2.9 | 151.0±15.1 | 15.7±1.5 | 28.3±3.4 |
+| PSSW+GRL | 90.8±2.7 | 144.9±14.5 | 14.7±1.3 | 27.8±2.9 |
+| PSSW+GRL+MX | 87.9±2.3 | 139.5±12.7 | 13.9±1.2 | 26.2±2.5 |
+
+| M=40, m=10 | SHA MAE | SHA MSE | SHB MAE | SHB MSE |
+| --- | --- | --- | --- | --- |
+| PSSW | 85.4±2.5 | 144.7±10.7 | 14.6±1.3 | 24.6±3.0 |
+| PSSW+GRL | 82.7±2.4 | 140.9±11.3 | 13.7±1.3 | 23.5±2.2 |
+| PSSW+GRL+MX | 80.4±2.4 | 138.8±10.1 | 12.7±1.1 | 20.4±2.1 |
+
+| M=40, m=10 | SHA | SHB |
+| --- | --- | --- |
+| RS (Baseline) | 93.8 | 17.9 |
+| RS+GRL+MX | 87.3 | 15.1 |
+| PSSW | 84.4 | 14.4 |
+| PSSW+GRL+MX | 80.4 | 12.7 |
+
+Table 2: Ablation study of the proposed distribution alignment with latent MixUp. Left: analysis on latent MixUp (MX) and gradient reversal layer (GRL). Right: comparison against RS plus GRL and MX. MAE is reported in the right table.
+
+MAE decrease on SHA and 1.9 points on SHB. Specifically, MX alone contributes 2.3 and 1.0 points of the decrease on SHA and SHB, respectively. The same observation holds for MSE: adding GRL and MX decreases it from 144.7 to 138.8 on SHA and from 24.6 to 20.4 on SHB.
+
+To make a further comparison, we also add the proposed distribution alignment with latent MixUp to RS in Table 2 (right), achieving an MAE of 87.3 on SHA and 15.1 on SHB. Adding GRL + MX to RS thus also improves the baseline, and the performance gap between PSSW and RS becomes smaller; yet the remaining gap is still substantial, which justifies our PSSW. Note that PSSW + GRL + MX is the final version of our AL-AC hereafter.
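
The two elements can be sketched conceptually as follows; the function names and the explicit `lam` argument are ours for illustration, and in a real framework the GRL is implemented inside autodiff rather than as separate functions:

```python
import numpy as np

def grl_forward(features):
    """Gradient reversal layer: identity in the forward pass."""
    return features

def grl_backward(grad, lam=1.0):
    """Backward pass flips the gradient's sign (scaled by lam), so the
    backbone learns to *confuse* the distribution classifier."""
    return -lam * grad

def latent_mixup(h_a, h_b, y_a, y_b, lam=None, alpha=1.0, rng=None):
    """Mix two latent representations and their (possibly non-integer)
    distribution labels with lambda ~ Beta(alpha, alpha)."""
    if lam is None:
        if rng is None:
            rng = np.random.default_rng()
        lam = rng.beta(alpha, alpha)
    mixed_h = lam * np.asarray(h_a, float) + (1 - lam) * np.asarray(h_b, float)
    mixed_y = lam * np.asarray(y_a, float) + (1 - lam) * np.asarray(y_b, float)
    return mixed_h, mixed_y
```

The soft mixed labels are exactly why the BCE-with-logits loss of Eq. (5) must accept non-integer targets.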
+
+Comparison with fully-supervised methods. We compare our work with prior art [60, 39, 20, 35, 42, 43, 27]. All these approaches are fully-supervised methods that utilize annotations of the entire dataset (300 images in SHA and 400 in SHB), while in our setting we label only 30/40 images, about $10\%$ of the entire set. Our method outperforms the earlier representative methods [60, 39] and is not far from recent art [20, 35, 42, 43, 27]. A direct comparison is CSRNet [20], which shares our backbone: with about $10\%$ labeled data, our AL-AC retains $85\%$ of its accuracy on SHA (68.2 / 80.4) and $83\%$ on SHB (10.6 / 12.7). Compared to our baseline (denoted RS in Table 1), AL-AC in general produces significantly lower MAE, e.g. 87.9 vs. 102.1 on SHA with $M = 30$ , and 12.7 vs. 17.9 on SHB with $M = 40$ .
+
+Although we label only $10\%$ of the data, our distribution alignment with latent MixUp enables us to make use of additional unlabeled data across datasets: for instance, in a simple experiment with $M = 40$ on SHA, adding SHB as unlabeled data to AL-AC for distribution alignment yields an even lower MAE of 78.6 vs. 80.4 in Table 3.
+
+Comparison with semi-supervised methods. There are also semi-supervised crowd counting methods [23, 38, 31] $^{1}$ . For instance, with $M = 50$ , [38] and [31] report MAE 170.0 and 136.9 on SHA, respectively, much higher than ours. Since [38, 31] use different architectures from AL-
+
+| SHA / SHB | SHA MAE | SHA MSE | SHB MAE | SHB MSE |
+| --- | --- | --- | --- | --- |
+| MCNN [60] | 110.2 | 173.2 | 26.4 | 41.3 |
+| Switching CNN [39] | 90.4 | 135.0 | 21.6 | 33.4 |
+| CSRNet [20] | 68.2 | 115.0 | 10.6 | 16.0 |
+| ic-CNN [35] | 68.5 | 116.2 | 10.7 | 16.0 |
+| PACNN [42] | 62.4 | 102.0 | 7.6 | 11.8 |
+| CFF [43] | 65.2 | 109.4 | 7.2 | 11.2 |
+| BAYESIAN+ [27] | 62.8 | 101.8 | 7.7 | 12.7 |
+| Baseline (M=30) | 102.1 | 164.0 | 19.9 | 30.6 |
+| AL-AC (M=30) | 87.9 | 139.5 | 13.9 | 26.2 |
+| Baseline (M=40) | 93.8 | 150.9 | 17.9 | 27.3 |
+| AL-AC (M=40) | 80.4 | 138.8 | 12.7 | 20.4 |
+
+Table 3: Comparison of AL-AC to the state of the art on SHA and SHB.
+
+| UCF | MAE | MSE |
+| --- | --- | --- |
+| MCNN [60] | 377.6 | 509.1 |
+| Switching CNN [39] | 318.1 | 439.2 |
+| CP-CNN [44] | 295.8 | 320.9 |
+| CSRNet [20] | 266.1 | 397.5 |
+| ic-CNN [35] | 260.0 | 365.5 |
+| PACNN [42] | 241.7 | 320.7 |
+| BAYESIAN+ [27] | 229.3 | 308.2 |
+| Baseline (M=10, m=3) | 444.7±25.9 | 600.3±32.7 |
+| AL-AC (M=10, m=3) | 351.4±19.2 | 448.1±24.5 |
+| Baseline (M=20, m=10) | 417.2±29.8 | 550.1±25.5 |
+| AL-AC (M=20, m=10) | 318.7±23.0 | 421.6±24.1 |
+
+Table 4: Comparison of AL-AC with the state of the art on UCF.
+
+
+Fig. 4: Examples of AL-AC on SHA, SHB, UCF, TRANCOS, and DCC. Ground truth counts are shown in the original images, and predicted counts in the estimated density maps.
+
+AC, these are not straightforward comparisons. [23] uses about $50\%$ labeled data on SHA (Fig. 7 in [23]) to reach performance similar to our AL-AC with $10\%$ labeled data. Both works adopt the VGGnet, yet [23] utilizes extra web data for its ranking loss while we use only unlabeled data within SHA, and we use dilated convolutions while [23] does not. To make the comparison fairer, we instead use the same backbone as [23] and repeat AL-AC on SHA (implementation details still follow Sec. 4.1); the mean MAE with $M = 30$ , $m = 10$ on SHA becomes 91.4 (vs. 87.9 in Table 3), which is still much better than that of [23].
+
+In the supplementary material, we also provide results obtained by gradually increasing $M$ up to 280 on SHA, showing that with about 80-100 labeled images (nearly $30\%$ of the dataset), AL-AC already comes close to the performance of the fully-supervised method [20] (Table 3).
+
+# 4.3 UCF_CC_50
+
+UCF has 40 training images in each cross-validation fold. We show in Table 4 that labeling only ten of them $(M = 10, m = 3)$ already produces a very competitive result: an MAE of 351.4 and an MSE of 448.1, significantly lower (by 93.3 and
+
+| Mall | Baseline | AL-AC* | Count Forest [33] | ConvLSTM [53] | DecideNet [21] | E3D [62] | SAAN [11] |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MAE | 5.9±0.9 | 3.8±0.5 | 4.4 | 2.1 | 1.5 | 1.6 | 1.3 |
+| MSE | 6.3±1.1 | 5.4±0.8 | 2.4 | 7.6 | 1.9 | 2.1 | 1.7 |
+
+Table 5: Comparison of AL-AC with state of the art on Mall (M=80, m=20).
+
+152.2 points) than the baseline. Analyzing the results, we found that our AL-AC is able to select hard samples containing thousands of persons and label them for training, which is not guaranteed with random selection. Compared to a fully-supervised method such as [20], our MAE is not far off. We also present results for $M = 20$ , $m = 10$ , where the MAE/MSE are further reduced.
+
+# 4.4 Mall
+
+Different from the ShanghaiTech and UCF datasets, Mall contains images with much sparser crowds, 31 persons per image on average. Following our setup, we label 80 out of 800 images and compare AL-AC with both the baseline and fully-supervised methods [33, 53, 21, 62, 11] in Table 5. With $10\%$ labeled data, we achieve an MAE of 3.8, superior to both the baseline and [33], and an MSE of 5.4, superior to both the baseline and [53]. This shows the effectiveness of our method on sparse crowds.
+
+# 5 Discussion
+
+We present an active learning framework for accurate crowd counting with limited supervision. Given a counting dataset, instead of annotating every image, we introduce a partition-based sample selection with weights to label only a few most informative images and learn a crowd regression network upon them. This process is iterated until the labeling budget is reached. Next, rather than learning from labeled data alone, the abundant unlabeled data are also exploited: we introduce a distribution alignment branch with latent MixUp into the network. Experiments on standard benchmarks show that, labeling only $10\%$ of the entire set, our method already performs close to the recent state of the art.
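
The iterative procedure can be sketched as a skeleton loop; `train` and `select` below are placeholders of our own for the regression-network training and the PSSW selection, and the seed set stands in for the initial random labeling:

```python
def active_learning(images, M, m, budget, train, select):
    """Skeleton of the active learning loop: label M seed images, then
    in each cycle train on the labeled pool and let the selection
    strategy pick up to m more images, until the budget is reached."""
    labeled = list(images[:M])      # stand-in for the initial labeling
    unlabeled = list(images[M:])
    model = train(labeled)
    while len(labeled) < budget and unlabeled:
        picked = select(model, unlabeled, min(m, budget - len(labeled)))
        for p in picked:
            unlabeled.remove(p)
        labeled.extend(picked)
        model = train(labeled)      # retrain on the enlarged pool
    return model, labeled
```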
+
+By choosing an appropriate $m$ , we normally reach the labeling budget in three active learning cycles. In our setting, the training data in each dataset are augmented to a fixed number. We run our experiments on a GTX1080 GPU; each active learning cycle takes around three hours to complete. The total training time is more or less the same as that of fully-supervised training, since in each learning cycle we train far fewer epochs with a limited number of labeled images. More importantly, compared to the annotation cost for an entire dataset (see Sec. 1 for an estimate on SHA), ours is substantially reduced.
+
+Acknowledgement: This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61828602 and 51475334; as well as National Key Research and Development Program of Science and Technology of China under Grant No. 2018YFB1305304, Shanghai Science and Technology Pilot Project under Grant No. 19511132100.
+
+# References
+
+1. Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., Raffel, C.: Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249 (2019)
+2. Brostow, G.J., Cipolla, R.: Unsupervised bayesian detection of independent motion in crowds. In: CVPR (2006)
+3. Cao, X., Wang, Z., Zhao, Y., Su, F.: Scale aggregation network for accurate and efficient crowd counting. In: ECCV (2018)
+4. Change Loy, C., Gong, S., Xiang, T.: From semi-supervised to transfer counting of crowds. In: CVPR (2013)
+5. Chen, K., Loy, C.C., Gong, S., Xiang, T.: Feature mining for localised crowd counting. In: BMVC (2012)
+6. Dasgupta, S.: Analysis of a greedy active learning strategy. In: NIPS (2005)
+7. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: ICML (2015)
+8. Gonzalez-Garcia, A., Vezhnevets, A., Ferrari, V.: An active search strategy for efficient object class detection. In: CVPR (2015)
+9. Guerrero-Gomez-Olmedo, R., Torre-Jimenez, B., López-Sastre, R., Maldonado-Bascon, S., Onoro-Rubio, D.: Extremely overlapping vehicle counting. In: Iberian Conference on Pattern Recognition and Image Analysis (2015)
+10. Hoffer, E., Ailon, N.: Semi-supervised deep learning by metric embedding. arXiv preprint arXiv:1611.01449 (2016)
+11. Hossain, M., Hosseinzadeh, M., Chanda, O., Wang, Y.: Crowd counting using scale-aware attention networks. In: WACV (2019)
+12. Hossain, M.A., Kumar, M., Hosseinzadeh, M., Chanda, O., Wang, Y.: One-shot scene-specific crowd counting. In: BMVC (2019)
+13. Idrees, H., Saleemi, I., Seibert, C., Shah, M.: Multi-source multi-scale counting in extremely dense crowd images. In: CVPR (2013)
+14. Jenks, G.F.: The data model concept in statistical mapping. International yearbook of cartography 7, 186-190 (1967)
+15. Jiang, X., Xiao, Z., Zhang, B., Zhen, X., Cao, X., Doermann, D., Shao, L.: Crowd counting and density estimation by trellis encoder-decoder networks. In: CVPR (2019)
+16. Joshi, A.J., Porikli, F., Papanikolopoulos, N.: Multi-class active learning for image classification. In: CVPR (2009)
+17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS (2012)
+18. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. In: ICLR (2016)
+19. Lee, D.H.: Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In: ICMLW (2013)
+20. Li, Y., Zhang, X., Chen, D.: Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes. In: CVPR (2018)
+21. Liu, J., Gao, C., Meng, D., G. Hauptmann, A.: Decidenet: Counting varying density crowds through attention guided detection and density estimation. In: CVPR (2018)
+22. Liu, W., Salzmann, M., Fua, P.: Context-aware crowd counting. In: CVPR (2019)
+23. Liu, X., Van De Weijer, J., Bagdanov, A.D.: Exploiting unlabeled data in cnns by self-supervised learning to rank. IEEE transactions on pattern analysis and machine intelligence (2019)
+
+24. Liu, X., Weijer, J., Bagdanov, A.D.: Leveraging unlabeled data for crowd counting by learning to rank. In: CVPR (2018)
+25. Liu, Y., Shi, M., Zhao, Q., Wang, X.: Point in, box out: Beyond counting persons in crowds. In: CVPR (2019)
+26. Lu, Z., Shi, M., Chen, Q.: Crowd counting via scale-adaptive convolutional neural network. In: WACV (2018)
+27. Ma, Z., Wei, X., Hong, X., Gong, Y.: Bayesian loss for crowd count estimation with point supervision. In: ICCV (2019)
+28. Marsden, M., McGuinness, K., Little, S., Keogh, C.E., O'Connor, N.E.: People, penguins and petri dishes: adapting object counting models to new visual domains and object types without forgetting. In: CVPR (2018)
+29. Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press (2006)
+30. Olmschenk, G., Tang, H., Zhu, Z.: Crowd counting with minimal data using generative adversarial networks for multiple target regression. In: WACV (2018)
+31. Olmschenk, G., Zhu, Z., Tang, H.: Generalizing semi-supervised generative adversarial networks to regression using feature contrasting. Computer Vision and Image Understanding (2019)
+32. Onoro-Rubio, D., López-Sastre, R.J.: Towards perspective-free object counting with deep learning. In: ECCV (2016)
+33. Pham, V.Q., Kozakaya, T., Yamaguchi, O., Okada, R.: Count forest: Co-voting uncertain number of targets using random forest for crowd density estimation. In: ICCV (2015)
+34. Rabaud, V., Belongie, S.: Counting crowded moving objects. In: CVPR (2006)
+35. Ranjan, V., Le, H., Hoai, M.: Iterative crowd counting. In: ECCV (2018)
+36. Rasmus, A., Berglund, M., Honkala, M., Valpola, H., Raiko, T.: Semi-supervised learning with ladder networks. In: NIPS (2015)
+37. Sam, D.B., Babu, R.V.: Top-down feedback for crowd counting convolutional neural network. In: AAAI (2018)
+38. Sam, D.B., Sajjan, N.N., Maurya, H., Babu, R.V.: Almost unsupervised learning for dense crowd counting. In: AAAI (2019)
+39. Sam, D.B., Surya, S., Babu, R.V.: Switching convolutional neural network for crowd counting. In: CVPR (2017)
+40. Sener, O., Savarese, S.: Active learning for convolutional neural networks: A core-set approach. In: ICLR (2018)
+41. Settles, B.: Active learning literature survey. Tech. rep., University of Wisconsin-Madison Department of Computer Sciences (2009)
+42. Shi, M., Yang, Z., Xu, C., Chen, Q.: Revisiting perspective information for efficient crowd counting. In: CVPR (2019)
+43. Shi, Z., Mettes, P., Snoek, C.G.: Counting with focus for free. In: ICCV (2019)
+44. Sindagi, V.A., Patel, V.M.: Generating high-quality crowd density maps using contextual pyramid cnns. In: ICCV (2017)
+45. Sinha, S., Ebrahimi, S., Darrell, T.: Variational adversarial active learning. In: ICCV (2019)
+46. Tan, B., Zhang, J., Wang, L.: Semi-supervised elastic net for pedestrian counting. Pattern Recognition 44(10-11), 2297-2304 (2011)
+47. Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Courville, A., Lopez-Paz, D., Bengio, Y.: Manifold mixup: Better representations by interpolating hidden states. In: ICML (2019)
+48. Verma, V., Lamb, A., Kannala, J., Bengio, Y., Lopez-Paz, D.: Interpolation consistency training for semi-supervised learning. arXiv preprint arXiv:1903.03825 (2019)
+
+49. Viola, P., Jones, M.J., Snow, D.: Detecting pedestrians using patterns of motion and appearance. IJCV 63(2), 153-161 (2005)
+50. Wang, K., Zhang, D., Li, Y., Zhang, R., Lin, L.: Cost-effective active learning for deep image classification. IEEE Transactions on Circuits and Systems for Video Technology 27(12), 2591-2600 (2016)
+51. Wang, Q., Gao, J., Lin, W., Yuan, Y.: Learning from synthetic data for crowd counting in the wild. In: CVPR (2019)
+52. Weston, J., Ratle, F., Mobahi, H., Collobert, R.: Deep learning via semi-supervised embedding. In: Neural Networks: Tricks of the Trade, pp. 639-655. Springer (2012)
+53. Xiong, F., Shi, X., Yeung, D.Y.: Spatiotemporal modeling for crowd counting in videos. In: ICCV (2017)
+54. Xu, C., Qiu, K., Fu, J., Bai, S., Xu, Y., Bai, X.: Learn to scale: Generating multipolar normalized density map for crowd counting. In: ICCV (2019)
+55. Yan, Z., Yuan, Y., Zuo, W., Tan, X., Wang, Y., Wen, S., Ding, E.: Perspective-guided convolution networks for crowd counting. In: ICCV (2019)
+56. Yang, Y., Ma, Z., Nie, F., Chang, X., Hauptmann, A.G.: Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision 113(2), 113-127 (2015)
+57. Yang, Z., Shi, M., Avrithis, Y., Xu, C., Ferrari, V.: Training object detectors from few weakly-labeled and many unlabeled images. arXiv preprint arXiv:1912.00384 (2019)
+58. Zhang, C., Li, H., Wang, X., Yang, X.: Cross-scene crowd counting via deep convolutional neural networks. In: CVPR (2015)
+59. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: Mixup: Beyond empirical risk minimization. In: ICLR (2018)
+60. Zhang, Y., Zhou, D., Chen, S., Gao, S., Ma, Y.: Single-image crowd counting via multi-column convolutional neural network. In: CVPR (2016)
+61. Zhou, Q., Zhang, J., Che, L., Shan, H., Wang, J.Z.: Crowd counting with limited labeling through submodular frame selection. IEEE Transactions on Intelligent Transportation Systems 20(5), 1728-1738 (2018)
+62. Zou, Z., Shao, H., Qu, X., Wei, W., Zhou, P.: Enhanced 3d convolutional networks for crowd counting. In: BMVC (2019)
\ No newline at end of file
diff --git a/activecrowdcountingwithlimitedsupervision/images.zip b/activecrowdcountingwithlimitedsupervision/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b9bcaed60dee14556c52e19e00ac2842d17289ee
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b47b92a4bdb4298c04490df681f227e2ae18a985be664dbb46cc37f4369f059
+size 378383
diff --git a/activecrowdcountingwithlimitedsupervision/layout.json b/activecrowdcountingwithlimitedsupervision/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b75581723f1b6944b70b487dc51aaba4db9989ae
--- /dev/null
+++ b/activecrowdcountingwithlimitedsupervision/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a80b50942d8f4e1e3b885f81a22f5ec52acc529fd77eb735cf44c955a63d13e7
+size 461078
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_content_list.json b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1a10062212bed94fba2410cd3c2cfaa0e5525dd2
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc4053f9ebc6f91203f97dab326c1a0a9952e16eb2a0631102aef416802a5926
+size 73939
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_model.json b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3355643fadcc02b8a17733212f8cb7b604306605
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11357509de8ec1586a558766d88b4ed4cc8c1dd5f2b5241f577558998da36f9d
+size 88947
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_origin.pdf b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..106d6d36d9e2bb2e26f57ba2857f7efdda104cdb
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/64e5a70a-c7d5-4804-aaf5-56c8f5cd421b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f06b4d819ec06fb1b091925158b6eb18b390354232b86cbc0ef453707b80629
+size 5604248
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/full.md b/activeperceptionusinglightcurtainsforautonomousdriving/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..61e267c093fcfee5eb06b5f3c99733396e1bcb70
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/full.md
@@ -0,0 +1,274 @@
+# Active Perception using Light Curtains for Autonomous Driving
+
+Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, and David Held
+
+Carnegie Mellon University, Pittsburgh PA 15213, USA {sancha, ryaadhav, peiyunh, srinivas, dheld} @andrew.cmu.edu
+
+Website: http://siddancha.github.io/projects/active-perception-light-curtains
+
+Abstract. Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using prediction uncertainty of a deep learning based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we develop a novel optimization algorithm to optimally place light curtains to maximize coverage of uncertain regions. Efficient optimization is achieved by encoding the physical constraints of the device into a constraint graph, which is optimized with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy. Links to code can be found on the project webpage.
+
+Keywords: Active Vision, Robotics, Autonomous Driving, 3D Vision
+
+# 1 Introduction
+
+3D sensors, such as LiDAR, have become ubiquitous for perception in autonomous systems operating in the real world, such as self-driving vehicles and field robots. Combined with recent advances in deep-learning based visual recognition systems, they have led to significant breakthroughs in perception for autonomous driving, enabling the recent surge of commercial interest in self-driving technology.
+
+However, most 3D sensors in use today perform passive perception, i.e. they continuously sense the entire environment while being completely decoupled from the recognition system that will eventually process the sensor data. In such a case, sensing the entire scene can be potentially inefficient. For example, consider an object detector running on a self-driving car that is trying to recognize objects in its environment. Suppose that it is confident that a tree-like structure on the side of the street is not a vehicle, but it is unsure whether an object turning around the curb is a vehicle or a pedestrian. In such a scenario, it might be
+
+
+
+
+
+
+Fig. 1: Object detection using light curtains. (a) Scene with 4 cars; ground-truth boxes shown in green. (b) Sparse green points are from a single-beam LiDAR; it can detect only two cars (red boxes). Numbers above detection boxes are confidence scores. The uncertainty map in greyscale is displayed underneath: whiter means higher uncertainty. (c) The first light curtain (blue) is placed to optimally cover the most uncertain regions. The dense points (green) from the light curtain result in detecting 2 more cars. (d) The second light curtain senses even more points and fixes the misalignment error in the leftmost detection.
+
+
+
+beneficial if the 3D sensor focuses on collecting more data from the latter object, rather than distributing its sensing capacity uniformly throughout the scene.
+
+In this work, we propose a method for 3D object detection using active perception, i.e. using sensors that can be purposefully controlled to sense specific regions in the environment. Programmable light curtains [22,2] were recently proposed as controllable, light-weight, and resource efficient sensors that measure the presence of objects intersecting any vertical ruled surface whose shape can be specified by the user (see Fig. 2). There are two main advantages of using programmable light curtains over LiDARs. First, they can be cheaply constructed, since light curtains use ordinary CMOS sensors (a current lab-built prototype costs $1000, and the price is expected to go down significantly in production). In contrast, a 64-beam Velodyne LiDAR that is commonly used in 3D self-driving datasets like KITTI [10] costs upwards of $80,000. Second, light curtains generate data with much higher resolution in regions where they actively focus [2] while LiDARs sense the entire environment and have low spatial and angular resolution.
+
+One weakness of light curtains is that they are able to sense only a subset of the environment - a vertical ruled surface (see Fig. 1(c,d), Fig 2). In contrast, a LiDAR senses the entire scene. To mitigate this weakness, we can take advantage of the fact that the light curtain is a controllable sensor - we can choose where to place the light curtains. Thus, we must intelligently place light curtains in the appropriate locations, so that they sense the most important parts of the scene. In this work, we develop an algorithm for determining how to best place the light curtains for maximal detection performance.
+
+We propose to use a deep neural network's prediction uncertainty as a guide for determining how to actively sense an environment. Our insight is that if an active sensor images the regions which the network is most uncertain about, the data obtained from those regions can help resolve the network's uncertainty and improve recognition. Conveniently, most deep learning based recognition systems output confidence maps, which can be used for this purpose when converted to an appropriate notion of uncertainty.
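
One natural confidence-to-uncertainty conversion (our assumption for illustration; the text above only requires "an appropriate notion of uncertainty") is the binary entropy of the confidence map, which peaks where the detector is least decided:

```python
import numpy as np

def confidence_to_uncertainty(conf):
    """Binary entropy of a detector confidence map: zero near
    confidence 0 or 1, maximal (ln 2) at confidence 0.5."""
    p = np.clip(np.asarray(conf, float), 1e-7, 1 - 1e-7)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))
```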
+
+Given neural network uncertainty estimates, we show how a light curtain can be placed to optimally cover the regions of maximum uncertainty. First, we use an information-gain based framework to propose placing light curtains that maximize the sum of uncertainties of the covered region (Sec. 4.3, Appendix A). However, the structure of the light curtain and physical constraints of the device impose restrictions on how the light curtain can be placed. Our novel solution is to precompute a "constraint graph", which describes all possible light curtain placements that respect these physical constraints. We then use an optimization approach based on dynamic programming to efficiently search over all possible feasible paths in the constraint graph and maximize this objective (Sec. 4.4). This is a novel approach to constrained optimization of a controllable sensor's trajectory which takes advantage of the properties of the problem we are solving.
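
The constrained search can be sketched as a small dynamic program. The single `reach` parameter below is a simplified stand-in of our own for the device's physical constraints encoded in the real constraint graph: the curtain's position on consecutive camera rays may differ by at most `reach` discrete steps, and we maximize the total covered uncertainty:

```python
def best_curtain(uncertainty, reach):
    """uncertainty[t][i]: uncertainty covered by placing the curtain at
    position i on camera ray t. Returns (max total uncertainty, one
    optimal placement), via dynamic programming over rays."""
    T, N = len(uncertainty), len(uncertainty[0])
    score = [row[:] for row in uncertainty]   # score[0] is the base case
    back = [[0] * N for _ in range(T)]
    for t in range(1, T):
        for i in range(N):
            lo, hi = max(0, i - reach), min(N, i + reach + 1)
            j = max(range(lo, hi), key=lambda j: score[t - 1][j])
            back[t][i] = j
            score[t][i] = uncertainty[t][i] + score[t - 1][j]
    end = max(range(N), key=lambda i: score[T - 1][i])
    path = [end]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return score[T - 1][end], path[::-1]
```

Each ray is visited once with a bounded neighborhood scan, so the search over all feasible paths is polynomial rather than exponential in the number of rays.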
+
+Our proposed active perception pipeline for 3D detection proceeds as follows. We initially record sparse data with an inexpensive single-beam LiDAR sensor that performs fixed 3D scans. This data is input to a 3D point cloud object detector, which outputs an initial set of detections and confidence estimates. These confidence estimates are converted into uncertainty estimates, which are used by our dynamic programming algorithm to determine where to place the first light curtain. The light curtain readings are again input to the 3D object detector to obtain refined detections and an updated uncertainty map. This process of estimating detections and placing new light curtains can be repeated multiple times (Fig. 3). Hence, we are able to sense the environment progressively, intelligently, and efficiently.
+
+We evaluate our algorithm using two synthetic datasets of urban driving scenes [9,29]. Our experiments demonstrate that our algorithm leads to a monotonic improvement in performance with successive light curtain placement. We compare our proposed optimal light curtain placement strategy to multiple baseline strategies and find that they are significantly outperformed by our method. To summarize, our contributions are the following:
+
+- We propose a method for using a deep learning based 3D object detector's prediction uncertainty as a guide for active sensing (Sec. 4.2).
+- Given a network's uncertainty, we show how to compute a feasible light curtain that maximizes the coverage of uncertainty. Our novel contribution is to encode the physical constraints of the device into a graph and use dynamic-programming based graph optimization to efficiently maximize the objective while satisfying the physical constraints (Sec. 4.3, 4.4).
+- We show how to train such an active detector using online light curtain data generation (Sec. 4.5).
+
+- We empirically demonstrate that our approach leads to significantly improved detection performance compared to a number of baseline approaches (Sec. 5).
+
+# 2 Related Work
+
+# 2.1 Active Perception and Next-Best View Planning
+
+Active Perception encompasses a variety of problems and techniques that involve actively controlling the sensor for improved perception [1,23]. Examples include actively modifying camera parameters [1], moving a camera to look around occluding objects [4], and next-best view (NBV) planning [5]. NBV refers to a broad set of problems in which the objective is to select the next best sensing action in order to solve a specific task. Typical problems include object instance classification [24,8,7,18] and 3D reconstruction [12,13,21,6,11]. Many works on next-best view formulate the objective as maximizing information gain (also known as mutual information) [24,7,12,13,21,6], using models such as probabilistic occupancy grids for beliefs over states [24,12,13,21,6]. Our method is similar in spirit to next-best view. One could consider each light curtain placement as obtaining a new "view" of the environment; we try to find the next best light curtain that aids object detection. In Sec. 4.3 and Appendix A, we derive an information-gain based objective to find the next best light curtain placement.
+
+# 2.2 Object Detection from Point Clouds
+
+There have been many recent advances in deep learning for 3D object detection. Approaches include representing LiDAR data as range images in LaserNet [16], using raw point clouds [19], and using point clouds in the bird's eye view, such as AVOD [14], HDNet [26] and Complex-YOLO [20]. Most state-of-the-art approaches use voxelized point clouds, such as VoxelNet [27], PointPillars [15], SECOND [25], and CBGS [28]. These methods process an input point cloud by dividing the space into 3D regions (voxels or pillars) and extracting features from each region using a PointNet [17] based architecture. Then, the volumetric feature map is converted to 2D features via convolutions, followed by a detection head that produces bounding boxes. We demonstrate that we can use such detectors, along with our novel light curtain placement algorithm, to process data from a single-beam LiDAR combined with light curtains.
+
+# 3 Background on Light Curtains
+
+Programmable light curtains [22,2] are a sensor for adaptive depth sensing. "Light curtains" can be thought of as virtual surfaces placed in the environment. They can detect points on objects that intersect this surface. Before explaining how the curtain is created, we briefly describe our coordinate system and the basics of a rolling shutter camera.
+
+Coordinate system: Throughout the paper, we will use the standard camera
+
+
+(a) Working principle
+
+
+(b) Optical schematic (top view)
+Fig. 2: Illustration of programmable light curtains adapted from [2,22]. a) The light curtain is placed at the intersection of the illumination plane (from the projector) and the imaging plane (from the camera). b) A programmable galvanometer and a rolling shutter camera create multiple points of intersection, $\mathbf{X}_t$ .
+
+coordinate system centered at the sensor. We assume that the $z$ axis corresponds to depth from the sensor pointing forward, and that the $y$ vector points vertically downwards. Hence the $xz$ -plane is parallel to the ground and corresponds to a top-down view, also referred to as the bird's eye view.
+
+Rolling shutter camera: A rolling shutter camera contains pixels arranged in $T$ vertical columns. Each pixel column corresponds to a vertical imaging plane. Readings from only those visible 3D points that lie on the imaging plane get recorded onto its pixel column. We will denote the $xz$ -projection of the imaging plane corresponding to the $t$ -th pixel column by ray $\mathbf{R}_t$ , shown in the top-down view in Fig. 2(b). We will refer to these as "camera rays". The camera has a rolling shutter that successively activates each pixel column and its imaging plane one at a time from left to right. The time interval between the activation of two adjacent pixel columns is determined by the pixel clock.
+
+Working principle of light curtains: The latest version of light curtains [2] works by rapidly rotating a light sheet laser in synchrony with the motion of a camera's rolling shutter. A laser beam is collimated and shaped into a line sheet using appropriate lenses and is reflected at a desired angle using a controllable galvanometer mirror (see Fig. 2(b)). The illumination plane created by the laser intersects the active imaging plane of the camera in a vertical line along the curtain profile (Fig. 2(a)). The $xz$ -projection of this vertical line intersecting the $t$ -th imaging plane lies on $\mathbf{R}_t$ , and we call this the $t$ -th "control point", denoted by $\mathbf{X}_t$ (Fig. 2(b)).
+
+Light curtain input: The shape of a light curtain is uniquely defined by where it intersects each camera ray in the $xz$ -plane, i.e. the control points $\{\mathbf{X}_1,\dots ,\mathbf{X}_T\}$ . These will act as inputs to the light curtain device. In order to produce the light curtain defined by $\{\mathbf{X}_t\}_{t = 1}^T$ , the galvanometer is programmed to compute and rotate at, for each camera ray $\mathbf{R}_t$ , the reflection angle $\theta_t(\mathbf{X}_t)$ of the laser beam
+
+
+Fig.3: Our method for detecting objects using light curtains. An inexpensive single-beam lidar input is used by a 3D detection network to obtain rough initial estimates of object locations. The uncertainty of the detector is used to optimally place a light curtain that covers the most uncertain regions. The points detected by the light curtain (shown in green in the bottom figure) are input back into the detector so that it can update its predictions as well as uncertainty. The new uncertainty maps can again be used to place successive light curtains in an iterative manner, closing the loop.
+
+such that the laser sheet intersects $\mathbf{R}_t$ at $\mathbf{X}_t$ . By selecting a control point on each camera ray, the light curtain device can be made to image any vertical ruled surface [2,22].
+
+Light curtain output: The light curtain outputs a point cloud of all 3D visible points in the scene that intersect the light curtain surface. The density of light curtain points on the surface is usually much higher than LiDAR points.
+
+Light curtain constraints: The rotating galvanometer can only operate at a maximum angular velocity $\omega_{\mathrm{max}}$ . Let $\mathbf{X}_t$ and $\mathbf{X}_{t + 1}$ be the control points on two consecutive camera rays $\mathbf{R}_t$ and $\mathbf{R}_{t + 1}$ . These induce laser angles $\theta (\mathbf{X}_t)$ and $\theta (\mathbf{X}_{t + 1})$ respectively. If $\varDelta t$ is the time difference between when the $t$ -th and $(t + 1)$ -th pixel columns are active, the galvanometer needs to rotate by an angle of $\varDelta\theta(\mathbf{X}_t)=\theta(\mathbf{X}_{t+1})-\theta(\mathbf{X}_t)$ within $\varDelta t$ time. Denote $\varDelta\theta_{\mathrm{max}}=\omega_{\mathrm{max}}\cdot\varDelta t$ . Then the light curtain can only image control points subject to $|\theta (\mathbf{X}_{t + 1}) - \theta (\mathbf{X}_t)|\leq \varDelta\theta_{\mathrm{max}},\forall 1\leq t < T$ .
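The constraint above is easy to check numerically. Below is a minimal sketch, assuming (purely for illustration) a laser placed at a hypothetical offset `laser_pos` in the $xz$-plane and a uniform pixel clock; the real device's angle calibration would replace `laser_angle`.

```python
import math

def laser_angle(x, z, laser_pos=(0.2, 0.0)):
    """Galvanometer angle (radians) needed to hit control point (x, z).
    laser_pos is a hypothetical laser position in the x-z plane."""
    lx, lz = laser_pos
    return math.atan2(z - lz, x - lx)

def is_feasible(control_points, omega_max, dt, laser_pos=(0.2, 0.0)):
    """Check |theta(X_{t+1}) - theta(X_t)| <= omega_max * dt for every
    pair of consecutive control points."""
    dtheta_max = omega_max * dt
    angles = [laser_angle(x, z, laser_pos) for (x, z) in control_points]
    return all(abs(b - a) <= dtheta_max for a, b in zip(angles, angles[1:]))
```

Intuitively, a frontoparallel curtain at constant depth changes the laser angle slowly between adjacent rays and is feasible even for a modest $\omega_{\mathrm{max}}$, while a curtain that jumps between near and far depths on adjacent rays may violate the constraint.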
+
+# 4 Approach
+
+# 4.1 Overview
+
+Our aim is to use light curtains for detecting objects in a 3D scene. The overall approach is illustrated in Fig. 3. We use a voxel-based point cloud detector [25] and train it to use light curtain data without any architectural changes. The pipeline illustrated in Fig. 3 proceeds as follows.
+
+To obtain an initial set of object detections, we use data from an inexpensive single-beam LiDAR as input to the detector. This produces rough estimates of object locations in the scene. Single-beam LiDAR is inexpensive because it
+
+consists of only one laser beam as opposed to 64 or 128 beams that are common in autonomous driving. The downside is that the data from the single beam contains very few points; this results in inaccurate detections and high uncertainty about object locations in the scene (see Fig. 1b).
+
+Alongside bounding box detections, we can also extract from the detector an "uncertainty map" (explained in Sec. 4.2). We then use light curtains, placed in regions guided by the detector's uncertainty, to collect more data and iteratively refine the object detections. In order to get more data from the regions the detector is most uncertain about, we derive an information-gain based objective function that sums the uncertainties along the light curtain control points (Sec. 4.3 and Appendix A), and we develop a constrained optimization algorithm that places the light curtain to maximize this objective (Sec. 4.4).
+
+Once the light curtain is placed, it returns a dense set of points where the curtain intersects with visible objects in the scene. We maintain a unified point cloud, which we define as the union of all points observed so far. The unified point cloud is initialized with the points from the single-beam LiDAR. Points from the light curtain are added to the unified point cloud and this data is input back into the detector. Note that the input representation for the detector remains the same (point clouds), enabling the use of existing state-of-the-art point cloud detection methods without any architectural modifications.
+
+As new data from the light curtains are added to the unified point cloud and input to the detector, the detector refines its predictions and improves its accuracy. Furthermore, the additional inputs cause the network to update its uncertainty map; the network may no longer be uncertain about the areas that were sensed by the light curtain. Our algorithm uses the new uncertainty map to generate a new light curtain placement. We can iteratively place light curtains to cover the current uncertain regions and input the sensed points back into the network, closing the loop and iteratively improving detection performance.
+
+# 4.2 Extracting uncertainty from the detector
+
+The standard pipeline for 3D object detection [27,25,15] proceeds as follows. First, the ground plane (parallel to the $xz$ -plane) is uniformly tiled with "anchor boxes"; these are reference boxes used by a 3D detector to produce detections. They are located on points in a uniformly discretized grid $G = [x_{\mathrm{min}}, x_{\mathrm{max}}] \times [z_{\mathrm{min}}, z_{\mathrm{max}}]$ . For example, a $[-40\mathrm{m}, 40\mathrm{m}] \times [0\mathrm{m}, 70.4\mathrm{m}]$ grid is used for detecting cars in KITTI [10]. A 3D detector, which is usually a binary detector, takes a point cloud as input, and produces a binary classification score $p \in [0,1]$ and bounding box regression offsets for every anchor box. The score $p$ is the estimated probability that the anchor box contains an object of a specific class (such as car/pedestrian). The detector produces a detection for that anchor box if $p$ exceeds a certain threshold. If so, the detector combines the fixed dimensions of the anchor box with its predicted regression offsets to output a detection box.
+
+We can convert the confidence score to binary entropy $H(p) \in [0,1]$ where $H(p) = -p\log_2p - (1 - p)\log_2(1 - p)$ . Entropy is a measure of the detector's uncertainty about the presence of an object at the anchor location. Since we
+
+have an uncertainty score at uniformly spaced anchor locations parallel to the $xz$ -plane, they form an "uncertainty map" in the top-down view. We use this uncertainty map to place light curtains.
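The confidence-to-entropy conversion is a one-liner over the anchor grid. A minimal sketch (the grid shape and confidence values here are made up for illustration):

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Binary entropy H(p) in bits: 0 at p=0 or p=1, maximal (1.0) at p=0.5."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

# Hypothetical detector confidences on a small anchor grid in the x-z plane.
conf = np.array([[0.01, 0.50, 0.99],
                 [0.10, 0.90, 0.30]])
uncertainty_map = binary_entropy(conf)  # same shape as the anchor grid
```

Confidences near 0 or 1 produce near-zero entropy (the detector is sure either way), while $p \approx 0.5$ is maximally ambiguous and is exactly where the curtain should be placed.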
+
+# 4.3 Information gain objective
+
+Based on the uncertainty estimates given by Sec. 4.2, our method determines how to place the light curtain to sense the most uncertain/ambiguous regions. It seems intuitive that sensing the locations of highest detector uncertainty can provide the largest amount of information from a single light curtain placement, towards improving detector accuracy. As discussed in Sec. 3, a single light curtain placement is defined by a set of $T$ control points $\{\mathbf{X}_t\}_{t=1}^T$ . The light curtain will be placed to lie vertically on top of these control points. To define an optimization objective, we use the framework of information gain (commonly used in next-best view methods; see Sec. 2.1) along with some simplifying assumptions (see Appendix A). We show that under these assumptions, placing a light curtain to maximize information gain (a mathematically defined information-theoretic quantity) is equivalent to maximizing the objective $J(\mathbf{X}_1, \ldots, \mathbf{X}_T) = \sum_{t=1}^{T} H(\mathbf{X}_t)$ , where $H(\mathbf{X})$ is the binary entropy of the detector's confidence at the anchor location of $\mathbf{X}$ . When the control point $\mathbf{X}$ does not exactly correspond to an anchor location, we impute $H(\mathbf{X})$ by nearest-neighbor interpolation from the uncertainty map. Please see Appendix A for a detailed derivation.
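The nearest-neighbor imputation of $H(\mathbf{X})$ can be sketched as follows; the grid coordinates and map values are illustrative stand-ins for the detector's anchor grid:

```python
import numpy as np

def entropy_at(point, grid_x, grid_z, H_map):
    """Impute H(X) for an arbitrary control point X = (x, z) by snapping to
    the nearest anchor location of the uncertainty map H_map[z_idx, x_idx]."""
    x, z = point
    j = int(np.abs(np.asarray(grid_x) - x).argmin())
    i = int(np.abs(np.asarray(grid_z) - z).argmin())
    return H_map[i, j]

grid_x = np.array([-1.0, 0.0, 1.0])   # anchor x positions (illustrative)
grid_z = np.array([5.0, 10.0])        # anchor z positions (illustrative)
H_map = np.array([[0.1, 0.9, 0.2],
                  [0.3, 0.4, 0.8]])
```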
+
+# 4.4 Optimal light curtain placement
+
+In this section, we will describe an exact optimization algorithm to maximize the objective function $J(\mathbf{X}_1, \ldots, \mathbf{X}_T) = \sum_{t=1}^T H(\mathbf{X}_t)$ .
+
+Constrained optimization: The control points $\{\mathbf{X}_t\}_{t=1}^T$ , where each $\mathbf{X}_t$ lies on the camera ray $\mathbf{R}_t$ , must be chosen to satisfy the physical constraints of the light curtain device: $|\theta(\mathbf{X}_{t+1}) - \theta(\mathbf{X}_t)| \leq \Delta \theta_{\max}$ (see Sec. 3: light curtain constraints). Hence, this is a constrained optimization problem. We discretize the problem by considering a dense set of $N$ discrete, equally spaced points $\mathcal{D}_t = \{\mathbf{X}_t^{(n)}\}_{n=1}^N$ on each ray $\mathbf{R}_t$ . We will assume that $\mathbf{X}_t \in \mathcal{D}_t$ for all $1 \leq t \leq T$ henceforth unless stated otherwise. We use $N = 80$ in all our experiments, which we found to be sufficiently large. Overall, the optimization problem can be formulated as:
+
+$$
+\arg\max_{\{\mathbf{X}_t\}_{t=1}^{T}} \sum_{t=1}^{T} H(\mathbf{X}_t) \tag{1}
+$$
+
+$$
+\text{where } \mathbf{X}_t \in \mathcal{D}_t, \quad \forall\, 1 \leq t \leq T \tag{2}
+$$
+
+$$
+\text{subject to } \left|\theta(\mathbf{X}_{t+1}) - \theta(\mathbf{X}_t)\right| \leq \Delta\theta_{\max}, \quad \forall\, 1 \leq t < T \tag{3}
+$$
+
+Light Curtain Constraint Graph: we encode the light curtain constraints into a graph, as illustrated in Figure 4. Each black ray corresponds to a camera ray. Each black dot on the ray is a vertex in the constraint graph. It represents a
+
+
+(a)
+
+
+(b)
+Fig. 4: (a) Light curtain constraint graph. Black dots are nodes and blue arrows are the edges of the graph. The optimized light curtain profile is depicted as red arrows. (b) Example uncertainty map from the detector, and optimized light curtain profile in red. Black is lowest uncertainty and white is highest uncertainty. The optimized light curtain covers the most uncertain regions.
+
+candidate control point and is associated with an uncertainty score. Exactly one control point must be chosen per camera ray. The optimization objective is to choose such points to maximize the total sum of uncertainties. An edge between two control points indicates that the light curtain is able to transition from one control point $\mathbf{X}_t$ to the next, $\mathbf{X}_{t + 1}$, without violating the galvanometer's maximum angular velocity constraint. Thus, the velocity constraint (Eqn. 3) can be encoded by restricting the set of edges (depicted using blue arrows). We note that the graph only needs to be constructed once and can be built offline.
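A minimal sketch of this offline graph construction; here `theta` stands in for the device's mapping from control points to laser angles, and `candidates[t]` plays the role of the discretized set $\mathcal{D}_t$ on ray $\mathbf{R}_t$:

```python
def build_constraint_graph(candidates, theta, dtheta_max):
    """candidates[t]: list of discrete control points on camera ray R_t.
    Returns edges[t][n]: indices of points on ray t+1 reachable from point n
    on ray t without exceeding the galvanometer's angular velocity limit."""
    edges = []
    for t in range(len(candidates) - 1):
        layer = []
        for X in candidates[t]:
            layer.append([m for m, Y in enumerate(candidates[t + 1])
                          if abs(theta(Y) - theta(X)) <= dtheta_max])
        edges.append(layer)
    return edges
```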
+
+Dynamic programming for constrained optimization: The number of possible light curtain placements, $|\mathcal{D}_1 \times \dots \times \mathcal{D}_T| = N^T$ , is exponentially large, which prevents us from searching for the optimal solution by brute force. However, we observe that the problem can be decomposed into simpler subproblems. In particular, let us define $J_{t}^{*}(\mathbf{X}_{t})$ as the optimal sum of uncertainties of the tail subproblem starting from $\mathbf{X}_t$ i.e.
+
+$$
+J_t^*(\mathbf{X}_t) = \max_{\mathbf{X}_{t+1}, \dots, \mathbf{X}_T} H(\mathbf{X}_t) + \sum_{k=t+1}^{T} H(\mathbf{X}_k); \tag{4}
+$$
+
+$$
+\text{subject to } \left|\theta(\mathbf{X}_{k+1}) - \theta(\mathbf{X}_k)\right| \leq \Delta\theta_{\max}, \quad \forall\, t \leq k < T \tag{5}
+$$
+
+If we were able to compute $J_{t}^{*}(\mathbf{X}_{t})$ , then this would help in solving a more complex subproblem using recursion: we observe that $J_{t}^{*}(\mathbf{X}_{t})$ has the property of optimal substructure, i.e. the optimal solution of $J_{t - 1}^{*}(\mathbf{X}_{t - 1})$ can be computed from the optimal solution of $J_{t}^{*}(\mathbf{X}_{t})$ via
+
+$$
+J _ {t - 1} ^ {*} (\mathbf {X} _ {t - 1}) = H \left(\mathbf {X} _ {t - 1}\right) + \max _ {\mathbf {X} _ {t} \in \mathcal {D} _ {t}} J _ {t} ^ {*} \left(\mathbf {X} _ {t}\right) \tag {6}
+$$
+
+subject to $|\theta (\mathbf{X}_t) - \theta (\mathbf{X}_{t - 1})|\leq \varDelta \theta_{\mathrm{max}}$
+
+Because of this optimal substructure property, we can solve for $J_{t - 1}^{*}(\mathbf{X}_{t - 1})$ via dynamic programming. We also note that the solution to $\max_{\mathbf{X}_1}J_1^* (\mathbf{X}_1)$ is the solution to our original constrained optimization problem (Eqn. 1-3).
+
+We thus perform the dynamic programming optimization as follows: the recursion from Eqn. 6 can be implemented by first performing a backwards pass, starting from $T$ and computing $J_{t}^{*}(\mathbf{X}_{t})$ for each $\mathbf{X}_t$ . Computing each $J_{t}^{*}(\mathbf{X}_{t})$ takes only $O(B_{\mathrm{avg}})$ time where $B_{\mathrm{avg}}$ is the average degree of a vertex (number of edges starting from a vertex) in the constraint graph, since we iterate once over all edges of $\mathbf{X}_t$ in Eqn. 6. Then, we do a forward pass, starting with $\arg \max_{\mathbf{X}_1\in \mathcal{D}_1}J_1^* (\mathbf{X}_1)$ and for a given $\mathbf{X}_{t - 1}^*$ , choosing $\mathbf{X}_t^*$ according to Eqn. 6. Since there are $N$ vertices per ray and $T$ rays in the graph, the overall algorithm takes $O(NTB_{\mathrm{avg}})$ time; this is a significant reduction from the $O(N^T)$ brute-force solution. We describe a simple extension of this objective that encourages smoothness in Appendix B.
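The two passes can be sketched in a few lines of pure Python. Here `H[t][n]` is the (hypothetical) uncertainty of the $n$-th candidate point on ray $t$, and `edges[t][n]` lists the feasible successor indices on ray $t+1$, i.e. the constraint graph's adjacency lists:

```python
def optimal_curtain(H, edges):
    """H[t][n]: uncertainty of candidate n on camera ray t.
    edges[t][n]: feasible successor indices on ray t+1 (constraint graph).
    Returns (maximum total uncertainty, chosen candidate index per ray)."""
    T = len(H)
    # Backward pass: J[t][n] = H[t][n] + best achievable tail from ray t+1.
    J = [list(H[-1])]
    for t in range(T - 2, -1, -1):
        nxt = J[0]
        J.insert(0, [H[t][n] + max((nxt[m] for m in edges[t][n]),
                                   default=float("-inf"))
                     for n in range(len(H[t]))])
    # Forward pass: start at the best root, then follow best feasible successors.
    n = max(range(len(J[0])), key=J[0].__getitem__)
    path = [n]
    for t in range(T - 1):
        n = max(edges[t][n], key=J[t + 1].__getitem__)
        path.append(n)
    return J[0][path[0]], path
```

Each backward step touches every outgoing edge of a vertex exactly once, which is where the $O(NTB_{\mathrm{avg}})$ cost comes from.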
+
+# 4.5 Training active detector with online training data generation
+
+The same detector is used to process data from the single-beam LiDAR and all light curtain placements. Since the light curtains are placed based on the output (uncertainty maps) of the detector, the input point cloud for the next iteration depends on the current weights of the detector. As the weights change during training, so does the input data distribution. We account for this non-stationarity of the training data by generating it online during the training process. This prevents the input distribution from diverging from the network weights during training. See Appendix C for algorithmic details and ablation experiments.
+
+# 5 Experiments
+
+To evaluate our algorithm, we need dense ground truth depth maps to simulate an arbitrary placement of a light curtain. However, standard autonomous driving datasets, such as KITTI [10] and nuScenes [3], contain only sparse LiDAR data, and hence the data is not suitable to accurately simulate a dense light curtain to evaluate our method. To circumvent this problem, we demonstrate our method on two synthetic datasets that provide dense ground truth depth maps, namely the Virtual KITTI [9] and SYNTHIA [29] datasets. Please find more details of the datasets and the evaluation metrics in Appendix D.
+
+Our experiments demonstrate the following: First, we show that our method for successive placement of light curtains improves detection performance; particularly, there is a significant increase between the performance of single-beam LiDAR and the performance after placing the first light curtain. We also compare our method to multiple ablations and alternative placement strategies that demonstrate that each component of our method is crucial to achieve good performance. Finally, we show that our method can generalize to many more light curtain placements at test time than the method was trained on. In the appendix, we perform further experiments that include evaluating the generalization of our method to noise in the light curtain data, an ablation experiment for training with online data generation (Sec. 4.5), and efficiency analysis.
+
+# 5.1 Comparison with varying number of light curtains
+
+We train our method using online training data generation simultaneously on data from single-beam LiDAR and one, two, and three light curtain placements. We perform this experiment for both the Virtual KITTI and SYNTHIA datasets. The accuracies on their test sets are reported in Table 1.
+
+| Method | Virtual KITTI 3D mAP (0.5 / 0.7 IoU) | Virtual KITTI BEV mAP (0.5 / 0.7 IoU) | SYNTHIA 3D mAP (0.5 / 0.7 IoU) | SYNTHIA BEV mAP (0.5 / 0.7 IoU) |
+| --- | --- | --- | --- | --- |
+| Single-Beam LiDAR | 39.91 / 15.49 | 40.77 / 36.54 | 60.49 / 47.73 | 60.69 / 51.22 |
+| Single-Beam LiDAR (separate model) | 42.35 / 23.66 | 47.77 / 40.15 | 60.69 / 48.23 | 60.84 / 57.98 |
+| 1 Light Curtain | 58.01 / 35.29 | 58.51 / 47.05 | 68.79 / 55.99 | 68.97 / 59.63 |
+| 2 Light Curtains | 60.86 / 37.91 | 61.10 / 49.84 | 69.02 / 57.08 | 69.17 / 67.14 |
+| 3 Light Curtains | 68.52 / 38.47 | 68.82 / 50.53 | 69.16 / 57.30 | 69.25 / 67.25 |
+
+Table 1: Performance of the detector trained with single-beam LiDAR and up to three light curtains. Performance improves with more light curtain placements, with a significant jump at the first light curtain placement.
+
+Note that there is a significant and consistent increase in the accuracy between single-beam LiDAR performance and the first light curtain placement (row 1 and row 3). This shows that actively placing light curtains on the most uncertain regions can improve performance over a single-beam LiDAR that performs fixed scans. Furthermore, placing more light curtains consistently improves detection accuracy.
+
+As an ablation experiment, we train a separate model only on single-beam LiDAR data (row 2), for the same number of training iterations. This differs from row 1, which was trained with both single-beam LiDAR and light curtain data but evaluated using only single-beam LiDAR data. Although training a model with only single-beam LiDAR data (row 2) improves performance over row 1, it is still significantly outperformed by our method, which uses data from light curtain placements.
+
+Noise simulations: In order to simulate noise in the real-world sensor, we perform experiments with added noise in the light curtain input. We demonstrate that the results are comparable to the noiseless case, indicating that our method is robust to noise and is likely to transfer well to the real world. Please see Appendix E for more details.
+
+# 5.2 Comparison with alternative light curtain placement strategies
+
+In our approach, light curtains are placed by maximizing the coverage of uncertain regions using a dynamic programming optimization. How does this compare to other strategies for light curtain placement? We experiment with several baselines:
+
+1. Random: we place frontoparallel light curtains at a random $z$ -distance from the sensor, ignoring the detector's uncertainty map.
+2. Fixed depth: we place a frontoparallel light curtain at a fixed $z$ -distance (15m, 30m, 45m) from the sensor, ignoring the detector's uncertainty map.
+3. Greedy optimization: this baseline tries to evaluate the benefits of using a dynamic programming optimization. Here, we use the same light curtain constraints described in Section 4.4 (Figure 4(a)). We greedily select the next control point based on local uncertainty instead of optimizing for the future sum of uncertainties. Ties are broken by (a) choosing smaller laser angle changes, and (b) randomly.
+4. Frontoparallel + Uncertainty: Our optimization process finds light curtains with flexible shapes. What if the shapes were constrained to make the optimization problem easier? If we restrict ourselves to frontoparallel curtains, we can place them at the $z$ -distance of maximum uncertainty by simply summing the uncertainties for every fixed value of $z$ .
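The last baseline reduces to a row-sum and an argmax over the top-down uncertainty map. A minimal sketch (map and depth grid are illustrative):

```python
import numpy as np

def best_frontoparallel_z(H_map, grid_z):
    """Place a frontoparallel curtain at the z-distance whose row of the
    top-down uncertainty map has the largest total uncertainty."""
    row_sums = H_map.sum(axis=1)          # one sum per discrete z value
    return float(grid_z[int(row_sums.argmax())])
```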
+
+The results on the Virtual KITTI and SYNTHIA datasets are shown in Table 2. Our method significantly and consistently outperforms all baselines. This empirically demonstrates the value of using dynamic programming for light curtain placement to improve object detection performance.
+
+# 5.3 Generalization to successive light curtain placements
+
+If we train a detector using our online light curtain data generation approach for $k$ light curtains, can the performance generalize to more than $k$ light curtains? Specifically, if we continue to place light curtains beyond the number trained for,
+
+| Method | Virtual KITTI 3D mAP (0.5 / 0.7 IoU) | Virtual KITTI BEV mAP (0.5 / 0.7 IoU) | SYNTHIA 3D mAP (0.5 / 0.7 IoU) | SYNTHIA BEV mAP (0.5 / 0.7 IoU) |
+| --- | --- | --- | --- | --- |
+| Random | 41.29 / 17.49 | 46.65 / 38.09 | 60.43 / 47.09 | 60.66 / 58.14 |
+| Fixed depth - 15m | 44.99 / 22.20 | 46.07 / 38.05 | 60.74 / 48.16 | 60.89 / 58.48 |
+| Fixed depth - 30m | 39.72 / 19.05 | 45.21 / 35.83 | 60.02 / 47.88 | 60.23 / 57.89 |
+| Fixed depth - 45m | 39.86 / 20.02 | 40.61 / 36.87 | 60.23 / 48.12 | 60.43 / 57.77 |
+| Greedy Optimization (randomly break ties) | 37.40 / 19.93 | 42.80 / 35.33 | 60.62 / 47.46 | 60.83 / 58.22 |
+| Greedy Optimization (min laser angle change) | 39.20 / 20.19 | 44.80 / 36.94 | 60.61 / 47.05 | 60.76 / 58.07 |
+| Frontoparallel + Uncertainty | 39.41 / 21.25 | 45.10 / 37.80 | 60.36 / 47.20 | 60.52 / 58.00 |
+| Ours | 58.01 / 35.29 | 58.51 / 47.05 | 68.79 / 55.99 | 68.97 / 59.63 |
+
+Table 2: Baselines for alternate light curtain placement strategies, trained and tested on (a) Virtual KITTI and (b) SYNTHIA datasets. Our dynamic programming optimization approach significantly outperforms all other strategies.
+
+
+(a) Generalization in Virtual KITTI
+
+
+(b) Generalization in SYNTHIA
+Fig. 5: Generalization to many more light curtains than what the detector was trained for. We train using online data generation on single-beam lidar and only 3 light curtains. We then test with placing 10 curtains, on (a) Virtual KITTI, and (b) SYNTHIA. Performance continues to increase monotonically according to multiple metrics. Takeaway: one can safely place more light curtains at test time and expect to see sustained improvement in accuracy.
+
+will the accuracy continue improving? We test this hypothesis by evaluating on 10 light curtains, many more than the model was trained for (3 light curtains). Figure 5 shows the performance as a function of the number of light curtains. We find that in both Virtual KITTI and SYNTHIA, the accuracy monotonically improves with the number of curtains.
+
+This result implies that a priori one need not worry about how many light curtains will be placed at test time. If we train on only 3 light curtains, we can place many more light curtains at test time; our results indicate that the performance will keep improving.
+
+# 5.4 Qualitative analysis
+
+We visualized a successful case of our method in Fig. 1. This is an example where our method detects false negatives missed by the single-beam LiDAR. We also show two other types of successful cases where light curtains remove false positive detections and fix misalignment errors in Figure 6. In Figure 7, we show the predominant failure case of our method. See captions for more details.
+
+# 6 Conclusions
+
+In this work, we develop a method to use light curtains, an actively controllable resource-efficient sensor, for object recognition in static scenes. We propose to use a 3D object detector's prediction uncertainty as a guide for deciding where to sense. By encoding the constraints of the light curtain into a graph, we show how to optimally and feasibly place a light curtain that maximizes the coverage of uncertain regions. We are able to train an active detector that interacts with light
+
+
+
+
+Fig. 6: Successful cases: Other type of successful cases than Fig. 1. In (A), the single-beam LiDAR incorrectly detects a bus and a piece of lawn as false positives. They get eliminated successively after placing the first and second light curtains. In (B), the first light curtain fixes misalignment in the bounding box predicted by the single beam LiDAR.
+
+curtains to iteratively and efficiently sense parts of a scene in an uncertainty-guided manner, successively improving detection accuracy. We hope this work pushes towards designing perception algorithms that integrate sensing and recognition, towards intelligent and adaptive perception.
+
+# Acknowledgements
+
+We thank Matthew O'Toole for feedback on the initial draft of this paper. This material is based upon work supported by the National Science Foundation under Grants No. IIS-1849154, IIS-1900821 and by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092.
+
+
+Fig. 7: Failure cases: The predominant failure mode is that the single beam LiDAR detects a false positive which is not removed by light curtains because the detector is overly confident in its prediction (so the estimated uncertainty is low). Middle: Falsely detecting a tree to be a car. Right: After three light curtains, the detection persists because light curtains do not get placed on this false positive. False positive gets removed eventually only after six light curtain placements.
+
+# References
+
+1. Bajcsy, R.: Active perception. Proceedings of the IEEE 76(8), 966-1005 (1988)
+2. Bartels, J.R., Wang, J., Whittaker, W.R., Narasimhan, S.G.: Agile depth sensing using triangulation light curtains. In: The IEEE International Conference on Computer Vision (ICCV) (October 2019)
+3. Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O.: nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027 (2019)
+4. Cheng, R., Agarwal, A., Fragkiadaki, K.: Reinforcement learning of active vision for manipulating objects under occlusions. arXiv preprint arXiv:1811.08067 (2018)
+5. Connolly, C.: The determination of next best views. In: Proceedings. 1985 IEEE international conference on robotics and automation. vol. 2, pp. 432-435. IEEE (1985)
+6. Daudelin, J., Campbell, M.: An adaptable, probabilistic, next-best view algorithm for reconstruction of unknown 3-d objects. IEEE Robotics and Automation Letters 2(3), 1540-1547 (2017)
+7. Denzler, J., Brown, C.M.: Information theoretic sensor data selection for active object recognition and state estimation. IEEE Transactions on pattern analysis and machine intelligence 24(2), 145-157 (2002)
+8. Doumanoglou, A., Kouskouridas, R., Malassiotis, S., Kim, T.K.: Recovering 6d object pose and predicting next-best-view in the crowd. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3583-3592 (2016)
+9. Gaidon, A., Wang, Q., Cabon, Y., Vig, E.: Virtual worlds as proxy for multi-object tracking analysis. In: CVPR (2016)
+10. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The kitti dataset. The International Journal of Robotics Research 32(11), 1231-1237 (2013)
+11. Haner, S., Heyden, A.: Covariance propagation and next best view planning for 3d reconstruction. In: European Conference on Computer Vision. pp. 545-556. Springer (2012)
+12. Isler, S., Sabzevari, R., Delmerico, J., Scaramuzza, D.: An information gain formulation for active volumetric 3d reconstruction. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). pp. 3477-3484. IEEE (2016)
+13. Kriegel, S., Rink, C., Bodenmüller, T., Suppa, M.: Efficient next-best-scan planning for autonomous 3d surface reconstruction of unknown objects. Journal of Real-Time Image Processing 10(4), 611-631 (2015)
+14. Ku, J., Mozifian, M., Lee, J., Harakeh, A., Waslander, S.L.: Joint 3d proposal generation and object detection from view aggregation. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 1-8. IEEE (2018)
+15. Lang, A.H., Vora, S., Caesar, H., Zhou, L., Yang, J., Beijbom, O.: Pointpillars: Fast encoders for object detection from point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12697-12705 (2019)
+16. Meyer, G.P., Laddha, A., Kee, E., Vallespi-Gonzalez, C., Wellington, C.K.: Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12677-12686 (2019)
+17. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 652-660 (2017)
+
+18. Scott, W.R., Roth, G., Rivest, J.F.: View planning for automated three-dimensional object reconstruction and inspection. ACM Computing Surveys (CSUR) 35(1), 64-96 (2003)
+19. Shi, S., Wang, X., Li, H.: Pointrcnn: 3d object proposal generation and detection from point cloud. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-779 (2019)
+20. Simon, M., Milz, S., Amende, K., Gross, H.M.: Complex-yolo: An euler-region-proposal for real-time 3d object detection on point clouds. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 0-0 (2018)
+21. Vasquez-Gomez, J.I., Sucar, L.E., Murrieta-Cid, R., Lopez-Damian, E.: Volumetric next-best-view planning for 3d object reconstruction with positioning error. International Journal of Advanced Robotic Systems 11(10), 159 (2014)
+22. Wang, J., Bartels, J., Whittaker, W., Sankaranarayanan, A.C., Narasimhan, S.G.: Programmable triangulation light curtains. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 19-34 (2018)
+23. Wilkes, D.: Active object recognition (1994)
+24. Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., Xiao, J.: 3d shapenets: A deep representation for volumetric shapes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2015)
+25. Yan, Y., Mao, Y., Li, B.: Second: Sparsely embedded convolutional detection. Sensors 18(10), 3337 (2018)
+26. Yang, B., Liang, M., Urtasun, R.: Hdnet: Exploiting hd maps for 3d object detection. In: Conference on Robot Learning. pp. 146-155 (2018)
+27. Zhou, Y., Tuzel, O.: Voxelnet: End-to-end learning for point cloud based 3d object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4490-4499 (2018)
+28. Zhu, B., Jiang, Z., Zhou, X., Li, Z., Yu, G.: Class-balanced grouping and sampling for point cloud 3d object detection. arXiv preprint arXiv:1908.09492 (2019)
+29. Zolfaghari Bengar, J., Gonzalez-Garcia, A., Villalonga, G., Raducanu, B., Aghdam, H.H., Mozerov, M., Lopez, A.M., van de Weijer, J.: Temporal coherence for active learning in videos. arXiv preprint arXiv:1908.11757 (2019)
\ No newline at end of file
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/images.zip b/activeperceptionusinglightcurtainsforautonomousdriving/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..43ce6cd15890aecbc198606fa0554295036e1ba7
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21351e6385daeaf18199e4d7101ad0e970a2252802927e4759758425cd75fd66
+size 444360
diff --git a/activeperceptionusinglightcurtainsforautonomousdriving/layout.json b/activeperceptionusinglightcurtainsforautonomousdriving/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3b32ca5fa6b3c1797df5087d900471ecccc5ac46
--- /dev/null
+++ b/activeperceptionusinglightcurtainsforautonomousdriving/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41d8938e0a1e724c72e4b7b9a74f8ec2875023cd759ebaf4d0dc0b98c4665e23
+size 365183
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_content_list.json b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..398c9a94dea74d3d8cf2397e9f500822a13d453f
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2b6e218db01c2e961357983b067f7108dfc04ecc7b81944f74a51aff06183c1
+size 88483
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_model.json b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..df9899f847c593ca70fd5fa8edbac31408b46b8a
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efd40f4db325920c8c960df5916cba1431b33cf4157e8ea353a3b2b07955ce70
+size 101642
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_origin.pdf b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..59a1f46d3a8dceb3a72ef4488850880ead9f4b5d
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/a84150dc-dd46-413d-8323-ec629cc4b60a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26fcc921ec056fbdbc1faeaf2a26b8b9fac0f948e559828df1f5cacad6b03a86
+size 7855553
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/full.md b/activevisualinformationgatheringforvisionlanguagenavigation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fcbec0c6486a3e3c2a02a089ca8e786ed5fd3e8d
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/full.md
@@ -0,0 +1,319 @@
+# Active Visual Information Gathering for Vision-Language Navigation
+
+Hanqing Wang $^{1}$ , $\boxtimes$ Wenguan Wang $^{2}$ , Tianmin Shu $^{3}$ , Wei Liang $^{1}$ , and Jianbing Shen $^{4}$
+
+$^{1}$ School of Computer Science, Beijing Institute of Technology $^{2}$ ETH Zurich $^{3}$ Massachusetts Institute of Technology $^{4}$ Inception Institute of Artificial Intelligence https://github.com/HanqingWangAI/Active_VLN
+
+Abstract. Vision-language navigation (VLN) requires an agent to carry out navigational instructions inside photo-realistic environments. One of the key challenges in VLN is how to conduct robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment. Agents trained by current approaches typically suffer from this uncertainty and consequently struggle to avoid random and inefficient actions at every step. In contrast, when humans face such a challenge, they can still maintain robust navigation by actively exploring the surroundings to gather more information and thus make more confident navigation decisions. This work draws inspiration from human navigation behavior and endows an agent with an active information-gathering ability for a more intelligent vision-language navigation policy. To achieve this, we propose an end-to-end framework for learning an exploration policy that decides i) when and where to explore, ii) what information is worth gathering during exploration, and iii) how to adjust the navigation decision after the exploration. The experimental results show that promising exploration strategies emerge from training, which leads to a significant boost in navigation performance. On the R2R challenge leaderboard, our agent achieves promising results in all three VLN settings, i.e., single run, pre-exploration, and beam search.
+
+Keywords: Vision-Language Navigation $\cdot$ Active Exploration
+
+# 1 Introduction
+
+Vision-language navigation (VLN) [1] aims to build an agent that can navigate a complex environment following human instructions. Existing methods have made impressive progress via i) efficient learning paradigms (e.g., using an ensemble of imitation learning and reinforcement learning [23,24], auxiliary task learning [10,12,23,28], or instruction-augmentation-based semi-supervised learning [7,20]), ii) multi-modal information association [9], and iii) self-correction [11,13]. However, these approaches have not addressed one of the core challenges in VLN: the uncertainty caused by ambiguous instructions and partial observability.
+
+
+Fig. 1. (a) A top-down view of the environment with the groundtruth navigation path, based on the instructions. The start and end points are noted as red and blue circles, respectively. The navigation paths are labeled in white. (b) A side view of the bathroom in (a). (c) Previous agents face difficulties as there are two doors in the bathroom, causing the navigation to fail. (d) Our agent is able to actively explore the environment for more efficient information collection. The exploration paths are labeled in yellow. (e) After exploring the two doors, our agent executes the instructions successfully.
+
+Consider the example in Fig. 1, where an agent is required to navigate across rooms following human instructions: "Leave the bathroom and walk forward along the pool. . . .". The agent might be confused because the bathroom has two doors, and it consequently fails to navigate to the correct location (Fig. 1(c)). In contrast, when faced with the same situation, we humans may perform better, as we would first explore the two doors instead of directly making a risky navigation decision. After collecting enough information, i.e., confirming which one allows us to "walk forward along the pool", we can take a more confident navigation action. This insight from human navigation behavior motivates us to develop an agent with a similar active exploration and information gathering capability. When facing ambiguous instructions or low confidence in his navigation choices, our agent can actively explore his surroundings and gather information to better support navigation decision-making (Fig. 1(d-e)). Previous agents, in contrast, are expected to conduct navigation at all times and only collect information from a limited scope. Compared with these agents, which perceive a scene passively, ours gains a larger visual field and improved robustness against complex environments and ambiguous instructions by actively exploring the surroundings.
+
+To achieve this, we develop an active exploration module, which learns to 1) decide when the exploration is necessary, 2) identify which part of the surroundings is worth exploring, and 3) gather useful knowledge from the environment to support more robust navigation. During training, we encourage the agent to collect relevant information to help itself make better decisions. We empirically show that our exploration module successfully learns a good information gathering policy and, as a result, the navigation performance is significantly improved.
+
+With the above designs, our agent achieves promising results on the R2R [1] benchmark leaderboard, over all three VLN settings, i.e., single run, pre-exploration, and beam search. In addition, the experiments show that our agent performs well in both seen and unseen environments.
+
+# 2 Related Work
+
+Vision and Language. Over the last few years, unprecedented advances in the design and optimization of deep neural network architectures have led to tremendous progress in computer vision and natural language processing. This progress, in turn, has enabled a multitude of multi-modal applications spanning both disciplines, including image captioning [25], visual question answering [3], visual grounding [26], visual dialog [6, 27], and vision-language navigation [1]. The formulation of these tasks requires a comprehensive understanding of both visual and linguistic content. A typical solution is to learn a joint multi-modal embedding space, i.e., CNN-based visual features and RNN-based linguistic representations are mapped to a common space by several non-linear operations. Recently, neural attention [25], which is good at mining cross-modal knowledge, has been shown to be a pivotal technique for multi-modal representation learning.
+
+Vision-Language Navigation (VLN). In contrast to previous vision-language tasks (e.g., image captioning, visual dialog) only involving static visual content, VLN requires an agent to actively interact with the environment to fulfill navigational instructions. Although VLN is relatively new in computer vision (dating back to [1]), many of its core units/technologies (such as instruction following [2] and instruction-action mapping [15]) were introduced much earlier. Specifically, these were originally studied in the natural language processing and robotics communities, focusing on either language-based navigation in a controlled environmental context [2, 5, 14, 15, 17, 21], or vision-based navigation in visually-rich real-world scenes [16, 29]. The VLN simulator described in [1] unites these two lines of research, providing photo-realistic environments and human-annotated instructions (as opposed to many prior efforts using virtual scenes or formulaic instructions). Since its release, increased research has been conducted in this direction. Sequence-to-sequence [1] and reinforcement learning [24] based solutions were first adopted. Then, [7, 20] strengthened the navigator by synthesizing new instructions. Later, combining imitation learning and reinforcement learning became a popular choice [23]. Some recent studies explored auxiliary tasks as self-supervised signals [10, 12, 23, 28], while some others addressed self-correction for intelligent path planning [11, 13]. In addition, Thomason et al. [22] identified unimodal biases in VLN, and Hu et al. [9] then achieved multi-modal grounding using a mixture-of-experts framework.
+
+# 3 Methodology
+
+Problem Description. Navigation in the Room-to-Room task [1] requires an agent to perform a sequence of navigation actions in real indoor environments and reach a target location by following natural language instructions.
+
+Problem Formulation and Basic Agent. Formally, a language instruction is represented via textual embeddings as $\mathbf{X}$ . At each navigation step $t$ , the agent has a panoramic view [7], which is discretized into 36 single views (i.e., RGB images). The agent makes a navigation decision in the panoramic action
+
+space, which consists of $K$ navigable views (reachable and visible), represented as $\pmb{V}_t = \{\pmb{v}_{t,1},\pmb{v}_{t,2},\dots,\pmb{v}_{t,K}\}$ . The agent needs to make a decision on which navigable view to go to (i.e., choose an action $a_{t}^{\mathrm{nv}}\in \{1,\dots ,K\}$ with the embedding $\pmb{a}_{t}^{\mathrm{nv}} = \pmb{v}_{t,a_{t}^{\mathrm{nv}}}$ ), according to the given instruction $\pmb{X}$ , history panoramic views $\{\pmb{V}_1,\pmb{V}_2,\dots,\pmb{V}_{t - 1}\}$ and previous actions $\{\pmb{a}_1^{\mathrm{nv}},\pmb{a}_2^{\mathrm{nv}},\dots,\pmb{a}_{t - 1}^{\mathrm{nv}}\}$ . Conventionally, this dynamic navigation process is formulated in a recurrent form [1,20]:
+
+$$
+\boldsymbol{h}_{t}^{\mathrm{nv}} = \operatorname{LSTM}\left(\left[\boldsymbol{X}, \boldsymbol{V}_{t-1}, \boldsymbol{a}_{t-1}^{\mathrm{nv}}\right], \boldsymbol{h}_{t-1}^{\mathrm{nv}}\right). \tag{1}
+$$
+
+With current navigation state $\pmb{h}_t^{\mathrm{nv}}$ , the probability of $k^{th}$ navigation action is:
+
+$$
+p_{t,k}^{\mathrm{nv}} = \operatorname{softmax}_{k}\left(\boldsymbol{v}_{t,k}^{\top}\boldsymbol{W}^{\mathrm{nv}}\boldsymbol{h}_{t}^{\mathrm{nv}}\right). \tag{2}
+$$
+
+Here, $\mathbf{W}^{\mathrm{nv}}$ indicates a learnable parameter matrix. The navigation action $a_{t}^{\mathrm{nv}}$ is chosen according to the probability distribution $\{p_{t,k}^{\mathrm{nv}}\}_{k=1}^{K}$ .
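The bilinear action scoring of Eq. 2 can be sketched in a few lines of NumPy. Everything below (the feature dimension, the random weights, the `navigation_step` helper, and treating `h_nv` as a precomputed LSTM state from Eq. 1) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def navigation_step(h_nv, views, W_nv):
    """Eq. 2: score each navigable view v against the navigation
    state h_nv with a bilinear form, then softmax over the K views."""
    logits = views @ W_nv @ h_nv        # shape (K,)
    p_nv = softmax(logits)
    return int(np.argmax(p_nv)), p_nv   # greedy action and distribution

rng = np.random.default_rng(0)
D, K = 8, 4                     # hypothetical feature dim and action count
views = rng.normal(size=(K, D)) # embeddings v_{t,1..K} of navigable views
h_nv = rng.normal(size=D)       # stands in for the LSTM state of Eq. 1
W_nv = rng.normal(size=(D, D))
a_nv, p_nv = navigation_step(h_nv, views, W_nv)
```

During training the action would typically be sampled from `p_nv` rather than taken greedily; the greedy `argmax` here only illustrates inference.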
+
+Basic Agent Implementation. So far, we have given a brief, high-level description of our basic navigation agent, which is commonly shared with prior art. In practice, we implement our agent following [20], though our approach is not tied to this particular choice.
+
+Core Idea. When following instructions, humans do not expect every step to be a "perfect" navigation decision, due to limited visual perception, the inevitable ambiguity in instructions, and the complexity of environments. Instead, when we are uncertain about future steps, we tend to explore the surroundings first and gather more information to mitigate the ambiguity, and then make a more informed decision. Our core idea is thus to equip an agent with such an active exploration ability. To ease understanding, we start with a naive model equipped with the simplest exploration function (§3.1). We then complete the naive model in §3.2 and §3.3 and showcase how a learned active exploration policy can greatly improve navigation performance.
+
+# 3.1 A Naïve Model with A Simple Exploration Ability
+
+Here, we consider the most straightforward way of realizing our idea. At each navigation step, the agent simply explores all the navigable views, and only one exploration step is allowed for each. This means the agent explores the first direction, gathers surrounding information, and then returns to the original navigation position. Next, it goes one step towards the second navigable direction and turns back. This one-step exploration process is repeated until all possible directions have been visited. The information gathered during exploration is then used to support the current navigation decision.
+
+Formally, at the $t^{th}$ navigation step, the agent has $K$ navigable views, i.e., $V_{t} = \{v_{t,1}, v_{t,2}, \dots, v_{t,K}\}$ . For the $k^{th}$ view, we further denote its $K'$ navigable views as $O_{t,k} = \{o_{t,k,1}, o_{t,k,2}, \dots, o_{t,k,K'}\}$ (see Fig. 2(a)); the subscript $(t,k)$ is omitted for notation simplicity. If the agent makes a one-step exploration in the $k^{th}$ direction, he is expected to collect surrounding information from $O$ . Specifically, keeping the current navigation state $h_{t}^{\mathrm{nv}}$ in mind, the agent assembles the visual information from $O$ by an attention operation (Fig. 2(b)):
+
+$$
+\hat{\boldsymbol{o}}_{t,k} = \operatorname{att}(\boldsymbol{O}, \boldsymbol{h}_{t}^{\mathrm{nv}}) = \sum_{k'=1}^{K'} \alpha_{k'}\boldsymbol{o}_{k'}, \quad \text{where } \alpha_{k'} = \operatorname{softmax}_{k'}\left(\boldsymbol{o}_{k'}^{\top}\boldsymbol{W}^{\mathrm{att}}\boldsymbol{h}_{t}^{\mathrm{nv}}\right). \tag{3}
+$$
+
+
+Fig. 2. Illustration of our naïve model (§3.1). (a) At $t^{th}$ navigation step, the agent has a panoramic view $V_{t}$ . For $k^{th}$ subview, we further denote its panoramic view as $O_{t,k}$ . (b) After making a one-step exploration in the first direction $\boldsymbol{v}_{t,1}$ , the agent collects information $\hat{o}_{t,1}$ from $O_{t,1}$ via Eq. 3. (c) After exploring all the directions, the agent updates his knowledge, i.e., $\{\tilde{\boldsymbol{v}}_{t,1},\tilde{\boldsymbol{v}}_{t,2}\}$ , via Eq. 4. (d) With the updated knowledge, the agent computes the navigation probability distribution $\{p_{t,k}^{\mathrm{nv}}\}_{k}$ (Eq. 5) and makes a more reliable navigation decision (i.e., $a_{t}^{\mathrm{nv}} = 2$ ). (e) Visualization of navigation route, where yellow lines are the exploration routes and green circles are navigation landmarks.
+
+Then, the collected information $\hat{\pmb{o}}_{t,k}$ is used to update the current visual knowledge $\pmb{v}_{t,k}$ about $k^{th}$ view, computed in a residual form (Fig. 2(c)):
+
+$$
+\tilde{\boldsymbol{v}}_{t,k} = \boldsymbol{v}_{t,k} + \boldsymbol{W}^{\mathrm{o}}\hat{\boldsymbol{o}}_{t,k}. \tag{4}
+$$
+
+In this way, the agent successively makes one-step explorations of all $K$ navigable views and enriches his corresponding knowledge. Later, with the updated knowledge $\{\tilde{\pmb{v}}_{t,1},\tilde{\pmb{v}}_{t,2},\dots ,\tilde{\pmb{v}}_{t,K}\}$ , the probability of making $k^{th}$ navigable action (originated in Eq. 2) can be formulated as (Fig. 2(d)):
+
+$$
+p_{t,k}^{\mathrm{nv}} = \operatorname{softmax}_{k}\left(\tilde{\boldsymbol{v}}_{t,k}^{\top}\boldsymbol{W}^{\mathrm{nv}}\boldsymbol{h}_{t}^{\mathrm{nv}}\right). \tag{5}
+$$
+
+Through this exploration, the agent is able to gather more information from its surroundings and then make a more reasonable navigation decision. In §4.3, we empirically demonstrate that, by equipping the basic agent with such a naive exploration module, we achieve a $4\sim 6\%$ improvement in terms of Success Rate (SR). This is impressive, as we only allow the agent to make one-step explorations. A notable issue, however, is that the agent simply explores all possible directions, resulting in a long Trajectory Length $(\mathrm{TL})^{1}$ . Next, we improve the naive model by tackling two key issues: "how to decide where to explore" (§3.2) and "how to make deeper exploration" (§3.3).
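The naive one-step exploration of all directions (Eqs. 3-5) can be sketched as follows. The dimensions, the random weights, and the `naive_explore` helper are illustrative assumptions made for this sketch, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def att(O, h, W_att):
    # Eq. 3: attention-pool the K' neighbour views O against state h
    alpha = softmax(O @ W_att @ h)
    return alpha @ O                    # weighted sum of rows, shape (D,)

def naive_explore(views, neighbours, h_nv, W_att, W_o, W_nv):
    """One-step exploration of every direction (naive model): gather
    o_hat via Eq. 3, update knowledge residually (Eq. 4), then rescore
    the navigation actions with the updated knowledge (Eq. 5)."""
    updated = np.stack([v + W_o @ att(O, h_nv, W_att)
                        for v, O in zip(views, neighbours)])
    return softmax(updated @ W_nv @ h_nv)

rng = np.random.default_rng(1)
D, K, Kp = 6, 3, 4                  # hypothetical dims: K views, K' neighbours
views = rng.normal(size=(K, D))
neighbours = rng.normal(size=(K, Kp, D))  # O_{t,k} for each direction k
h_nv = rng.normal(size=D)
W_att, W_o, W_nv = (rng.normal(size=(D, D)) for _ in range(3))
p_nv = naive_explore(views, neighbours, h_nv, W_att, W_o, W_nv)
```

Note how the residual form of Eq. 4 lets the agent fall back on the original view features when the gathered information is uninformative.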
+
+# 3.2 Where to Explore
+
+In the naïve model (§3.1), the agent explores all navigable views at every navigation step. Such a strategy is wasteful, produces longer trajectories, and goes against the intuition that exploration is only needed at a few navigation steps, in a few directions. To address this, the agent should learn an exploration decision-making strategy, i.e., more actively deciding which direction to explore.
+
+
+Fig. 3. Equip our agent with an exploration-decision making ability (§3.2). (a) The agent predicts a probability distribution $\{p_{t,k}^{\mathrm{ep}}\}_{k=1}^{K+1}$ over exploration action candidates (i.e., Eq. 6). (b) According to $\{p_{t,k}^{\mathrm{ep}}\}_{k=1}^{K+1}$ , the most "valuable" view is selected to make a one-step exploration. (c) The agent updates his knowledge $\tilde{v}_{t,2}$ and makes a second-round exploration decision (Eq. 7). If STOP action is selected, the agent will make a navigation decision (Eq. 5) and start $(t+1)^{th}$ navigation step.
+
+To achieve this, at each navigation step $t$ , we let the agent make an exploration decision $a_{t}^{\mathrm{ep}} \in \{1, \dots, K + 1\}$ from current $K$ navigable views as well as a STOP action. Thus, the exploration action embedding $\pmb{a}_{t}^{\mathrm{ep}}$ is a vector selected from the visual features of the $K$ navigable views (i.e., $V_{t} = \{v_{t,1}, v_{t,2}, \dots, v_{t,K}\}$ ), and the STOP action embedding (i.e., $v_{t,K+1} = \mathbf{0}$ ). To learn the exploration-decision making strategy, with current navigation state $h_{t}^{\mathrm{nv}}$ and current visual surrounding knowledge $V_{t}$ , the agent predicts a probability distribution $\{p_{t,k}^{\mathrm{ep}}\}_{k=1}^{K+1}$ for the $K+1$ exploration action candidates (Fig. 3(a)):
+
+$$
+\hat{\boldsymbol{v}}_{t} = \operatorname{att}(\boldsymbol{V}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}), \qquad p_{t,k}^{\mathrm{ep}} = \operatorname{softmax}_{k}\left(\boldsymbol{v}_{t,k}^{\top}\boldsymbol{W}^{\mathrm{ep}}\left[\hat{\boldsymbol{v}}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}\right]\right). \tag{6}
+$$
+
+Then, an exploration action $k^{*}$ is chosen according to $\arg \max_{k} p_{t,k}^{\mathrm{ep}}$ . If the STOP action is selected (i.e., $k^{*} = K + 1$ ), the agent directly proceeds to making a navigation decision by Eq. 2, without exploration. Otherwise, the agent makes a one-step exploration in the most "valuable" direction $k^{*} \in \{1, \dots, K\}$ (Fig. 3(b)). Then, the agent uses the collected information $\hat{\pmb{o}}_{t,k^{*}}$ (Eq. 3) to enrich his knowledge $\pmb{v}_{t,k^{*}}$ about the $k^{*th}$ viewpoint (Eq. 4). With the updated knowledge, the agent makes a second-round exploration decision (Fig. 3(c)):
+
+$$
+\begin{aligned} \tilde{\boldsymbol{V}}_{t} &\leftarrow \{\boldsymbol{v}_{t,1}, \dots, \tilde{\boldsymbol{v}}_{t,k^{*}}, \dots, \boldsymbol{v}_{t,K}\}, \quad \hat{\boldsymbol{v}}_{t} \leftarrow \operatorname{att}(\tilde{\boldsymbol{V}}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}), \\ p_{t,k^{u}}^{\mathrm{ep}} &\leftarrow \operatorname{softmax}_{k^{u}}\left(\boldsymbol{v}_{t,k^{u}}^{\top}\boldsymbol{W}^{\mathrm{ep}}\left[\hat{\boldsymbol{v}}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}\right]\right). \end{aligned} \tag{7}
+$$
+
+Note that the views that have been already explored are removed from the exploration action candidate set, and $k^u$ indicates an exploration action that has not been selected yet. Based on the new exploration probability distribution $\{p_{t,k^u}^{\mathrm{ep}}\}_{k^u}$ , if the STOP action is still not selected, the agent makes a second-round exploration in a new direction. This multi-round exploration process is repeated until either the agent is satisfied with his current knowledge about the surroundings (i.e., the STOP action is chosen), or all $K$ navigable directions have been explored. Finally, with the newest knowledge about the surroundings $\tilde{\boldsymbol{V}}_t$ , the agent makes a more reasonable navigation decision (Eq. 5, Fig. 3(d)). Our experiments in §4.3 show that, when allowing the agent to actively select navigation directions, TL is greatly decreased compared with the naive model, and SR is even improved (as the agent focuses on the most valuable directions).
+
+
+Fig. 4. Our full model can actively make multi-direction, multi-step explorations. (a) The agent is at the $1^{st}$ exploration step $(s = 1)$ , starting from the $k^{th}$ view at the $t^{th}$ navigation step. According to the exploration probability $\{p_{s,k'}^{\mathrm{ep}}\}_{k'}$ (Eq. 11), the agent decides to make a further exploration step. (b) At the $2^{nd}$ exploration step, the agent decides to finish the exploration of the $k^{th}$ view. (c) The agent thinks no other direction is worth exploring, and then makes a navigation decision based on the updated knowledge.
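The multi-round "where to explore" loop of §3.2 (Eqs. 6-7) can be sketched as below. Here the STOP action is modelled as an extra zero embedding, already-explored directions are dropped from the candidate set, and `explore_fn` stands in for the knowledge gained by a one-step exploration (the $\boldsymbol{W}^{\mathrm{o}}\hat{\boldsymbol{o}}$ term of Eq. 4); all names and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def where_to_explore(views, h_nv, W_att, W_ep, explore_fn):
    """Repeatedly pick the most 'valuable' unexplored direction or STOP
    (Eqs. 6-7); update the chosen view's knowledge residually (Eq. 4)."""
    K, D = views.shape
    V = views.copy()
    candidates = list(range(K))                  # unexplored directions
    while candidates:
        # context vector [v_hat, h_nv]: attention over the updated views
        v_hat = softmax(V @ W_att @ h_nv) @ V
        ctx = np.concatenate([v_hat, h_nv])
        # score remaining candidates plus a zero STOP embedding
        emb = np.vstack([V[candidates], np.zeros((1, D))])
        p = softmax(emb @ W_ep @ ctx)
        choice = int(np.argmax(p))
        if choice == len(candidates):            # STOP row selected
            break
        k = candidates.pop(choice)
        V[k] = V[k] + explore_fn(k)              # residual update, Eq. 4
    return V

rng = np.random.default_rng(2)
D, K = 6, 4
views = rng.normal(size=(K, D))
h_nv = rng.normal(size=D)
W_att = rng.normal(size=(D, D))
W_ep = rng.normal(size=(D, 2 * D))
# dummy explore_fn gathering no new information, for illustration only
V = where_to_explore(views, h_nv, W_att, W_ep, lambda k: np.zeros(D))
```

The loop terminates after at most $K$ rounds, since each round either selects STOP or removes one candidate direction.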
+
+# 3.3 Deeper Exploration
+
+So far, our agent is able to make explorations only when necessary. Now we focus on letting him conduct multi-step exploration, instead of constraining the maximum exploration length to one. Ideally, during the exploration of a certain direction, the agent should be able to go ahead a few steps until sufficient information is collected. To model such a sequential exploration decision-making process, we design a recurrent-network-based exploration module, which also generalizes well to the cases discussed in §3.1 and §3.2. Specifically, assume the agent starts an exploration episode from the $k^{th}$ view $\boldsymbol{v}_{t,k}$ at the $t^{th}$ navigation step (Fig. 4(a)). At an exploration step $s$ , the agent perceives the surroundings with a panoramic view and collects information from $K'$ navigable views $\boldsymbol{Y}_{t,k,s} = \{\boldsymbol{y}_{t,k,s,1}, \boldsymbol{y}_{t,k,s,2}, \dots, \boldsymbol{y}_{t,k,s,K'}\}$ . With this definition, we have $\boldsymbol{Y}_{t,k,0} = \boldsymbol{V}_t$ ; in §3.1 and §3.2, for the $k^{th}$ view at the $t^{th}$ navigation step, its panoramic view $\boldsymbol{O}_{t,k}$ is also $\boldsymbol{Y}_{t,k,1}$ . The subscript $(t,k)$ is omitted for notation simplicity.
+
+Knowledge Collection During Exploration: As the exploration module is in a recurrent form, the agent has a specific state $\pmb{h}_s^{\mathrm{ep}}$ at $s^{th}$ exploration step. With $\pmb{h}_s^{\mathrm{ep}}$ , the agent actively collects knowledge by assembling the surrounding information $\pmb{Y}_s$ using an attention operation (similar to Eq. 3):
+
+$$
+\hat{\boldsymbol{y}}_{s} = \operatorname{att}(\boldsymbol{Y}_{s}, \boldsymbol{h}_{s}^{\mathrm{ep}}). \tag{8}
+$$
+
+Knowledge Storage During Exploration: As the agent performs multi-step exploration, the learned knowledge $\hat{\pmb{y}}_s$ is stored in a memory network:
+
+$$
+\boldsymbol{h}_{s}^{\mathrm{kw}} = \operatorname{LSTM}^{\mathrm{kw}}\left(\hat{\boldsymbol{y}}_{s}, \boldsymbol{h}_{s-1}^{\mathrm{kw}}\right), \tag{9}
+$$
+
+which will eventually be used for supporting navigation-decision making.
+
+Sequential Exploration-Decision Making for Multi-Step Exploration: Next, the agent needs to decide whether or not to choose a new direction for further exploration. In the exploration action space, the agent either selects one direction from the current $K'$ reachable views to explore or stops the current exploration episode and returns to the original position at $t^{th}$ navigation step. The exploration action $a_{s}^{\mathrm{ep}}$ is represented as a vector $\pmb{a}_{s}^{\mathrm{ep}}$ from the visual features of the $K'$ navigable views (i.e., $\pmb{Y}_{s} = \{\pmb{y}_{s,1},\pmb{y}_{s,2},\dots,\pmb{y}_{s,K'}\}$ ), as well as the STOP action embedding (i.e., $\pmb{y}_{s,K'+1} = \mathbf{0}$ ). $a_{s}^{\mathrm{ep}}$ is predicted according to the current exploration state $h_{s}^{\mathrm{ep}}$ and collected information $h_{s}^{\mathrm{kw}}$ . Hence, the computation of $h_{s}^{\mathrm{ep}}$ is conditioned on the current navigation state $h_{t}^{\mathrm{nv}}$ , history exploration views $\{Y_{1},Y_{2},\dots,Y_{s-1}\}$ , and previous exploration actions $\{a_{1}^{\mathrm{ep}},a_{2}^{\mathrm{ep}},\dots,a_{s-1}^{\mathrm{ep}}\}$ :
+
+$$
+\boldsymbol{h}_{s}^{\mathrm{ep}} = \operatorname{LSTM}^{\mathrm{ep}}\left(\left[\boldsymbol{h}_{t}^{\mathrm{nv}}, \boldsymbol{Y}_{s-1}, \boldsymbol{a}_{s-1}^{\mathrm{ep}}\right], \boldsymbol{h}_{s-1}^{\mathrm{ep}}\right), \quad \text{where } \boldsymbol{h}_{0}^{\mathrm{ep}} = \boldsymbol{h}_{t}^{\mathrm{nv}}. \tag{10}
+$$
+
+For the $k'^{th}$ exploration action candidate (reachable view), its selection probability is:
+
+$$
+p_{s,k'}^{\mathrm{ep}} = \operatorname{softmax}_{k'}\left(\boldsymbol{y}_{s,k'}^{\top} \boldsymbol{W}^{\mathrm{ep}}\left[\boldsymbol{h}_{s}^{\mathrm{kw}}, \boldsymbol{h}_{s}^{\mathrm{ep}}\right]\right). \tag{11}
+$$
+
+The exploration action $a_{s}^{\mathrm{ep}}$ is chosen according to $\{p_{s,k'}^{\mathrm{ep}}\}_{k'=1}^{K'+1}$ .
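Because the STOP embedding is the zero vector, its logit in Eq. 11 is always 0, so stopping dominates whenever every reachable view scores negatively. A toy sketch, with a context vector `q` standing in for $\boldsymbol{W}^{\mathrm{ep}}[\boldsymbol{h}_s^{\mathrm{kw}}, \boldsymbol{h}_s^{\mathrm{ep}}]$ (the values are illustrative, not learned):

```python
import math

def action_probs(Y, q):
    """Eq. 11 sketch: p_k proportional to exp(y_k . q)."""
    logits = [sum(a * b for a, b in zip(y, q)) for y in Y]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# K' = 2 reachable views plus the STOP embedding (zero vector -> logit 0).
views = [[-1.0, 0.2], [-0.5, -0.3], [0.0, 0.0]]
q = [2.0, 1.0]
p = action_probs(views, q)
stop_prob = p[-1]  # both view logits are negative, so STOP wins
```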
+
+Multi-Round Exploration-Decision Making for Multi-Direction Exploration: After an $S$-step exploration, the agent chooses the STOP action when he judges that sufficient information along a certain direction $k$ has been gathered (Fig. 4 (b)). He goes back to the starting point of the $t^{th}$ navigation step and updates his knowledge about the $k^{th}$ direction, i.e., $\boldsymbol{v}_{t,k}$, with the gathered information $h_S^{\mathrm{kw}}$. Thus, Eq. 4 is improved as:
+
+$$
+\tilde{\boldsymbol{v}}_{t,k} = \boldsymbol{v}_{t,k} + \boldsymbol{W}^{\mathrm{o}} \boldsymbol{h}_{S}^{\mathrm{kw}}. \tag{12}
+$$
+
+With the updated knowledge regarding the surroundings, the agent makes a second-round exploration decision:
+
+$$
+\begin{array}{l} \tilde{\boldsymbol{V}}_{t} \leftarrow \left\{\boldsymbol{v}_{t,1}, \dots, \tilde{\boldsymbol{v}}_{t,k}, \dots, \boldsymbol{v}_{t,K}\right\}, \quad \hat{\boldsymbol{v}}_{t} \leftarrow \operatorname{att}\left(\tilde{\boldsymbol{V}}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}\right), \\ p_{t,k^{u}}^{\mathrm{ep}} \leftarrow \operatorname{softmax}_{k^{u}}\left(\boldsymbol{v}_{t,k^{u}}^{\top} \boldsymbol{W}^{\mathrm{ep}}\left[\hat{\boldsymbol{v}}_{t}, \boldsymbol{h}_{t}^{\mathrm{nv}}\right]\right). \end{array} \tag{13}
+$$
+
+Again, $k^u$ indicates an exploration action that has not been selected yet. Then the agent can make another round of exploration in a new direction, until he chooses the STOP action (i.e., the collected information is enough to help make a confident navigation decision), or has explored all $K$ directions (Fig. 4(c)).
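The multi-round loop above can be sketched in a few lines of Python. This is a toy illustration only: the scalar "confidence" values, the stopping threshold, and the fixed `+0.3` "knowledge update" stand in for the learned quantities of Eqs. 12-13, and directions are visited in index order rather than by the predicted probabilities:

```python
def multi_round_explore(direction_values, confidence_threshold=0.8):
    """Keep exploring unvisited directions until the best direction's (toy)
    confidence passes a threshold, or all K directions have been explored."""
    explored = []
    K = len(direction_values)
    values = list(direction_values)
    while len(explored) < K:
        best = max(range(K), key=lambda k: values[k])
        if values[best] >= confidence_threshold:
            break  # STOP: confident enough to make a navigation decision
        k = next(k for k in range(K) if k not in explored)
        values[k] += 0.3        # toy knowledge update (Eq. 12 analogue)
        explored.append(k)
    return explored, values

explored, values = multi_round_explore([0.4, 0.6, 0.2])
```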
+
+Exploration-Assisted Navigation-Decision Making: After multi-round, multi-step exploration, with the newest knowledge $\tilde{\pmb{V}}_t$ about the surroundings, the agent makes a more reliable navigation decision (Eq. 5): $p_{t,k}^{\mathrm{nv}} = \mathrm{softmax}_k(\tilde{v}_{t,k}^\top \pmb{W}^{\mathrm{nv}}\pmb{h}_t^{\mathrm{nv}})$ . Then, at the $(t + 1)^{th}$ navigation step, the agent again performs multi-step exploration in several directions (or even omits exploration) and then chooses a new navigation action. In §4.3, we will empirically demonstrate that our full model gains the highest SR score with only a slightly increased TL.
+
+Memory-based Late Action-Taking Strategy: After finishing exploration in a certain direction, directly "going back" to the start position before the next round of exploration/observation may cause many revisits. To alleviate this, we let the agent store the views visited during exploration in an outside memory. The agent then follows a late action-taking strategy, i.e., moving only when necessary. When the agent decides to stop his exploration in a direction, he stays at his current position and "imagines" the execution of his following actions without really going back. When he needs to visit a new point that is not stored in the memory, he goes to that point directly and updates the memory accordingly. He then again holds his position until he needs to visit a new point that has not been met before. Please refer to the supplementary material for more details.
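The late action-taking strategy can be sketched as a visited-set check: a physical move happens only when the target point is absent from the memory. The point labels and counter below are hypothetical; the real agent stores visited views on the navigation graph:

```python
def navigate_with_memory(target_points):
    """Late action-taking sketch: physically move only to points not yet in
    the outside memory of visited views; otherwise stay put and reuse the
    memorized observation ('imagined' visit)."""
    memory = set()
    physical_moves = 0
    for point in target_points:
        if point not in memory:
            physical_moves += 1   # actually travel to the new point
            memory.add(point)
        # else: already stored -> hold position, no physical move
    return physical_moves, memory

# "A" and "B" are revisited, so only 3 of the 5 visits require movement.
moves, mem = navigate_with_memory(["A", "B", "A", "C", "B"])
```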
+
+# 3.4 Training
+
+Our entire agent model is trained with two distinct learning paradigms, i.e., 1) imitation learning, and 2) reinforcement learning.
+
+Imitation Learning (IL). In IL, an agent is forced to mimic the behavior of its teacher. Such a strategy has proven effective in VLN [1, 7, 13, 15, 20, 23, 24]. Specifically, at navigation step $t$, the teacher provides the teacher action $a_{t}^{*} \in \{1, \dots, K\}$, which selects the next navigable viewpoint on the shortest route from the current viewpoint to the target viewpoint. The negative log-likelihood of the demonstrated action is computed as the IL loss:
+
+$$
+\mathcal{L}_{\mathrm{IL}}^{\mathrm{nv}} = \sum_{t} -\log p_{t, a_{t}^{*}}^{\mathrm{nv}}. \tag{14}
+$$
+
+The IL loss for the exploration is defined as:
+
+$$
+\mathcal{L}_{\mathrm{IL}}^{\mathrm{ep}} = \sum_{t} \sum_{s=0}^{S} -\log p_{s, a_{t+s}^{*}}^{\mathrm{ep}}, \tag{15}
+$$
+
+where $S$ is the maximum number of exploration steps allowed. At the $t^{th}$ navigation step, the agent performs an $S$-step exploration, simply imitating the teacher's navigation actions from step $t$ to step $t + S$. Though the goals of navigation and exploration differ, we simply use the teacher's navigation actions to guide the learning of exploration; this helps the exploration module learn better representations and quickly obtain an initial exploration policy.
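The IL losses of Eqs. 14-15 are plain negative log-likelihoods of the teacher's actions. A minimal sketch of Eq. 14 over toy per-step action distributions (the probabilities below are illustrative, not model outputs):

```python
import math

def il_loss(step_probs, teacher_actions):
    """Eq. 14 sketch: sum over steps of -log p_{t, a_t*}."""
    return sum(-math.log(p[a]) for p, a in zip(step_probs, teacher_actions))

# Two navigation steps over K = 3 candidate directions; the teacher picks
# direction 0 at step 1 and direction 1 at step 2.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
teacher = [0, 1]
loss = il_loss(probs, teacher)  # -log(0.7) - log(0.8)
```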
+
+Reinforcement Learning (RL). Through IL, the agent can learn a policy that works relatively well on seen scenes, but it is biased towards copying the routes demonstrated by the teacher, rather than learning how to recover from its erroneous behavior in an unseen environment [24]. Recent methods [20, 23, 24, 29] demonstrate that the on-policy RL method Advantage Actor-Critic (A2C) [18] can help the agent explore the state-action space outside the demonstration path.
+
+For RL-based navigation learning, our agent samples a navigation action from the distribution $\{p_{t,k}^{\mathrm{nv}}\}_{k = 1}^{K}$ (see Eq. 2) and learns from rewards. Let us denote the reward after taking a navigation action $a_{t}^{\mathrm{nv}}$ at the current view $v_{t}$ as $r^{\mathrm{nv}}(v_{t},a_{t}^{\mathrm{nv}})$. As in [20, 23], at each non-stop step $t$, $r^{\mathrm{nv}}(v_t,a_t^{\mathrm{nv}})$ is the change in the distance to the target navigation location. At the final step $T$, if the agent stops within 3 meters of the target location, we set $r^{\mathrm{nv}}(v_T,a_T^{\mathrm{nv}}) = +3$; otherwise $r^{\mathrm{nv}}(v_T,a_T^{\mathrm{nv}}) = -3$. Then, to incorporate the influence of the action $a_{t}^{\mathrm{nv}}$ on the future and to discourage local greedy search, the total accumulated return with a discount factor is adopted: $R_{t}^{\mathrm{nv}} = \sum_{t' = t}^{T}\gamma^{t' - t}r^{\mathrm{nv}}(v_{t'},a_{t'}^{\mathrm{nv}})$, where the discount factor $\gamma$ is set to 0.9. In A2C, our agent is viewed as the actor, and a state-value function $b^{\mathrm{nv}}(\pmb{h}_t^{\mathrm{nv}})$ serves as the critic. For training, the actor aims to minimize the negative log-probability of action $a_{t}^{\mathrm{nv}}$ scaled by $R_{t}^{\mathrm{nv}} - b^{\mathrm{nv}}(\pmb{h}_{t}^{\mathrm{nv}})$ (known as the advantage of action $a_{t}^{\mathrm{nv}}$), and the critic aims to minimize the mean squared error between $R_{t}^{\mathrm{nv}}$ and its estimated value:
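The discounted return $R_t^{\mathrm{nv}}$ is computed efficiently by a single backward pass over the reward sequence. A minimal sketch with the paper's $\gamma = 0.9$ (the reward values below are illustrative):

```python
def discounted_returns(rewards, gamma=0.9):
    """R_t = sum_{t' >= t} gamma^{t'-t} r_{t'}, accumulated backwards."""
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return out[::-1]

# Toy episode: two distance-change rewards, then +3 for stopping within 3 m.
rewards = [1.0, -0.5, 3.0]
returns = discounted_returns(rewards)
# returns[2] = 3.0; returns[1] = -0.5 + 0.9*3.0; returns[0] = 1.0 + 0.9*returns[1]
```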
+
+$$
+\mathcal{L}_{\mathrm{RL}}^{\mathrm{nv}} = -\sum_{t}\left(R_{t}^{\mathrm{nv}} - b^{\mathrm{nv}}\left(\boldsymbol{h}_{t}^{\mathrm{nv}}\right)\right)\log p_{t, a_{t}^{\mathrm{nv}}}^{\mathrm{nv}} + \sum_{t}\left(R_{t}^{\mathrm{nv}} - b^{\mathrm{nv}}\left(\boldsymbol{h}_{t}^{\mathrm{nv}}\right)\right)^{2}. \tag{16}
+$$
+
+For RL-based exploration learning, we also adopt on-policy A2C for training. Specifically, let us assume a set of exploration actions $\{a_{t,k,s}^{\mathrm{ep}}\}_{s=1}^{S_{t,k}}$ is made in a certain direction $k$ at navigation step $t$, that the original navigation action (before exploration) is $a_t^{\prime\mathrm{nv}}$, and that the exploration-assisted navigation action (after exploration) is $a_t^{\mathrm{nv}}$. The basic reward $r^{\mathrm{ep}}(v_t,\{a_{t,k,s}^{\mathrm{ep}}\}_s)$ for the exploration actions $\{a_{t,k,s}^{\mathrm{ep}}\}_s$ is defined as:
+
+$$
+r^{\mathrm{ep}}\left(v_{t}, \left\{a_{t,k,s}^{\mathrm{ep}}\right\}_{s}\right) = r^{\mathrm{nv}}\left(v_{t}, a_{t}^{\mathrm{nv}}\right) - r^{\mathrm{nv}}\left(v_{t}, a_{t}^{\prime\mathrm{nv}}\right). \tag{17}
+$$
+
+This means that if, after making the explorations $\{a_{t,k,s}^{\mathrm{ep}}\}_{s}$ at the $t^{th}$ navigation step in the $k^{th}$ direction, the new navigation decision $a_{t}^{\mathrm{nv}}$ yields a higher reward than the original one $a_{t}^{\prime \mathrm{nv}}$, i.e., the exploration helps the agent make a better navigation decision, a positive exploration reward is assigned. Intuitively, this exploration reward represents the benefit that the set of explorations $\{a_{t,k,s}^{\mathrm{ep}}\}_{s}$ brings to the navigation. We distribute $r^{\mathrm{ep}}(v_t,\{a_{t,k,s}^{\mathrm{ep}}\}_s)$ evenly over the exploration actions as the immediate reward, i.e., $r^{\mathrm{ep}}(v_t,a_{t,k,s}^{\mathrm{ep}}) = \frac{1}{S_{t,k}} r^{\mathrm{ep}}(v_t,\{a_{t,k,s}^{\mathrm{ep}}\}_s)$. In addition, to limit the length of exploration, we add a negative term $\beta$ $(= -0.1)$ to the reward of each exploration step. Then, the total accumulated discounted return for an exploration action $a_{t,k,s}^{\mathrm{ep}}$ is defined as: $R_{t,k,s}^{\mathrm{ep}} = \sum_{s' = s}^{S_{t,k}}\gamma^{s' - s}(r^{\mathrm{ep}}(v_t,a_{t,k,s'}^{\mathrm{ep}}) + \beta)$. The RL loss for the exploration action $a_{t,k,s}^{\mathrm{ep}}$ is defined as:
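Combining Eq. 17 with the even split and the per-step penalty $\beta$, the exploration returns $R^{\mathrm{ep}}_{t,k,s}$ can be sketched as follows (the two navigation-reward values are illustrative inputs, not computed by a model):

```python
def exploration_returns(r_nav_new, r_nav_old, n_steps, gamma=0.9, beta=-0.1):
    """Eq. 17 sketch: the episode-level reward is the navigation-reward gain,
    split evenly over the n_steps exploration actions; beta penalizes each
    step; returns are then accumulated backwards with discount gamma."""
    per_step = (r_nav_new - r_nav_old) / n_steps
    R, out = 0.0, []
    for _ in range(n_steps):
        R = (per_step + beta) + gamma * R
        out.append(R)
    return out[::-1]

# Exploration improved the navigation reward from 0.4 to 1.0 over 2 steps:
# per-step reward is 0.6/2 + beta = 0.2.
rets = exploration_returns(r_nav_new=1.0, r_nav_old=0.4, n_steps=2)
```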
+
+$$
+\mathcal{L}\left(a_{t,k,s}^{\mathrm{ep}}\right) = -\left(R_{t,k,s}^{\mathrm{ep}} - b^{\mathrm{ep}}\left(\boldsymbol{h}_{t,k,s}^{\mathrm{ep}}\right)\right)\log p_{t,k,a_{t,k,s}^{\mathrm{ep}}}^{\mathrm{ep}} + \left(R_{t,k,s}^{\mathrm{ep}} - b^{\mathrm{ep}}\left(\boldsymbol{h}_{t,k,s}^{\mathrm{ep}}\right)\right)^{2}, \tag{18}
+$$
+
+where $b^{\mathrm{ep}}$ is the critic. Then, similar to Eq. 16, the RL loss for all the exploration actions is defined as:
+
+$$
+\mathcal{L}_{\mathrm{RL}}^{\mathrm{ep}} = \sum_{t}\sum_{k}\sum_{s} \mathcal{L}\left(a_{t,k,s}^{\mathrm{ep}}\right). \tag{19}
+$$
+
+Curriculum Learning for Multi-Step Exploration. During training, we find that once the exploration policy is updated, the model easily suffers from extreme variation in the gathered information, particularly for long-term exploration, which causes the training to jitter. To avoid this, we adopt curriculum learning [4] to train our agent with an incrementally increasing exploration length. Specifically, in the beginning, the maximum exploration length is set to 1. After the training loss converges, we use the current parameters to initialize the training of the agent with at most 2-step exploration. In this way, we train an agent with at most 6-step exploration (limited by GPU memory and time). This strategy greatly improves the convergence speed (about $\times 8$ faster) with no noticeable diminishment in performance. Experiments related to the influence of the maximum exploration length can be found in §4.3.
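The staged schedule can be sketched as a warm-start loop; the string "parameter handle" below is a hypothetical stand-in for an actual train-to-convergence call:

```python
def curriculum_schedule(final_max_len=6):
    """Curriculum sketch: train to convergence with max exploration length 1,
    then warm-start each subsequent stage from the previous parameters."""
    params = "random_init"                    # hypothetical parameter handle
    stages = []
    for max_len in range(1, final_max_len + 1):
        # stands in for: train(agent, max_exploration_len=max_len, init=params)
        params = f"stage{max_len}({params})"
        stages.append(max_len)
    return stages, params

stages, final_params = curriculum_schedule()
```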
+
+Back Translation Based Training Data Augmentation. Following [7, 20], we use back translation to augment training data. The basic idea is that, in addition to training a navigator that finds the correct route in an environment according to the given instructions, an auxiliary speaker is trained for generating an instruction given a route inside an environment. In this way, we generate extra instructions for 176k unlabeled routes in Room-to-Room[1] training environments. After training the agent on the labeled samples from the Room-to-Room training set, we use the back translation augmented data for fine-tuning.
+
+# 4 Experiment
+
+# 4.1 Experimental Setup
+
+Dataset. We conduct experiments on the Room-to-Room (R2R) dataset [1], which has 10,800 panoramic views in 90 housing environments, and 7,189 paths sampled from its navigation graphs. Each path is associated with three ground-truth navigation instructions. R2R is split into four sets: training, validation seen, validation unseen, and test unseen. There are no overlapping environments between the unseen and training sets.
+
+Evaluation Metric. Following conventions [1, 7], five metrics are used for evaluation: Success Rate (SR), Navigation Error (NE), Trajectory Length (TL), Oracle success Rate (OR), and Success rate weighted by Path Length (SPL).
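Of these, SPL is the only compound metric; assuming the standard definition from Anderson et al. (the paper does not restate the formula), it weights each success by the ratio of shortest-path length to executed-path length. A sketch with illustrative episode data:

```python
def spl(successes, shortest, taken):
    """Standard SPL: mean over episodes of S_i * l_i / max(p_i, l_i), where
    S_i in {0, 1} marks success, l_i is the shortest-path length, and p_i is
    the length of the path the agent actually took."""
    terms = [s * l / max(p, l) for s, l, p in zip(successes, shortest, taken)]
    return sum(terms) / len(terms)

# Three toy episodes: an optimal success, a success with a 2x-long path, and
# a failure; SPL = (1.0 + 0.5 + 0.0) / 3.
score = spl(successes=[1, 1, 0],
            shortest=[10.0, 8.0, 12.0],
            taken=[10.0, 16.0, 9.0])
```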
+
+Implementation Detail. As in [1, 7, 23, 24], the viewpoint embedding $\pmb{v}_{t,k}$ is a concatenation of image features (from an ImageNet [19] pre-trained ResNet-152 [8]) and a 4-d orientation descriptor. A bottleneck layer is applied to reduce the dimension of $\pmb{v}_{t,k}$ to 512. Instruction embeddings $\pmb{X}$ are obtained from an LSTM with a hidden size of 512. Each LSTM in our exploration module also has a hidden size of 512. For back translation, the speaker is implemented as described in [7].
+
+# 4.2 Comparison Results
+
+Performance Comparisons Under Different VLN Settings. We extensively evaluate our performance under three different VLN setups in R2R.
+
+(1) Single Run Setting: This is the basic setup in R2R, where the agent conducts navigation by selecting actions in a step-by-step, greedy manner. The agent is not allowed to: 1) run multiple trials, or 2) explore or map the test environments before starting. Table 1 reports the comparison results under this setting. Several essential observations follow. i) Our agent outperforms other competitors on the main metric SR, as well as on some other criteria, e.g., NE and OR. For example, in terms of SR, our model improves over AuxRN [28] by $3\%$ and $5\%$ on the validation unseen and test unseen sets, respectively, demonstrating strong generalizability. ii) Our agent without data augmentation already outperforms many existing methods on SR and NE. iii) Our TL and SPL scores
+
+Table 1. Comparison results on validation seen, validation unseen, and test unseen sets of R2R [1] under Single Run setting (§4.2). For compliance with the evaluation server, we report SR as fractions. *: back translation augmentation.
+
+| Models | validation seen: SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ | validation unseen: SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ | test unseen: SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 0.16 | 9.45 | 9.58 | 0.21 | - | 0.16 | 9.23 | 9.77 | 0.22 | - | 0.13 | 9.77 | 9.93 | 0.18 | 0.12 |
+| Student-Forcing [1] | 0.39 | 6.01 | 11.3 | 0.53 | - | 0.22 | 7.81 | 8.39 | 0.28 | - | 0.20 | 7.85 | 8.13 | 0.27 | 0.18 |
+| RPA [24] | 0.43 | 5.56 | 8.46 | 0.53 | - | 0.25 | 7.65 | 7.22 | 0.32 | - | 0.25 | 7.53 | 9.15 | 0.33 | 0.23 |
+| E-Dropout [20] | 0.55 | 4.71 | 10.1 | - | 0.53 | 0.47 | 5.49 | 9.37 | - | 0.43 | - | - | - | - | - |
+| Regretful [13] | 0.65 | 3.69 | - | 0.72 | 0.59 | 0.48 | 5.36 | - | 0.61 | 0.37 | - | - | - | - | - |
+| Ours | 0.66 | 3.35 | 19.8 | 0.79 | 0.49 | 0.55 | 4.40 | 19.9 | 0.70 | 0.38 | - | - | - | - | - |
+| Speaker-Follower [7]* | 0.66 | 3.36 | - | 0.74 | - | 0.36 | 6.62 | - | 0.45 | - | 0.35 | 6.62 | 14.8 | 0.44 | 0.28 |
+| RCM [23]* | 0.67 | 3.53 | 10.7 | 0.75 | - | 0.43 | 6.09 | 11.5 | 0.50 | - | 0.43 | 6.12 | 12.0 | 0.50 | 0.38 |
+| Self-Monitoring [12]* | 0.67 | 3.22 | - | 0.78 | 0.58 | 0.45 | 5.52 | - | 0.56 | 0.32 | 0.43 | 5.99 | 18.0 | 0.55 | 0.32 |
+| Regretful [13]* | 0.69 | 3.23 | - | 0.77 | 0.63 | 0.50 | 5.32 | - | 0.59 | 0.41 | 0.48 | 5.69 | 13.7 | 0.56 | 0.40 |
+| E-Dropout [20]* | 0.62 | 3.99 | 11.0 | - | 0.59 | 0.52 | 5.22 | 10.7 | - | 0.48 | 0.51 | 5.23 | 11.7 | 0.59 | 0.47 |
+| Tactical Rewind [11]* | - | - | - | - | - | 0.56 | 4.97 | 21.2 | - | 0.43 | 0.54 | 5.14 | 22.1 | 0.64 | 0.41 |
+| AuxRN [28]* | 0.70 | 3.33 | - | 0.78 | 0.67 | 0.55 | 5.28 | - | 0.62 | 0.50 | 0.55 | 5.15 | - | 0.62 | 0.51 |
+| Ours* | 0.70 | 3.20 | 19.7 | 0.80 | 0.52 | 0.58 | 4.36 | 20.6 | 0.70 | 0.40 | 0.60 | 4.33 | 21.6 | 0.71 | 0.41 |
+
+Table 2. Comparison results on test unseen set of R2R [1], under Pre-Explore and Beam Search settings (§4.2). To comply with the evaluation server, we report SR as fractions. *: back translation augmentation. -: unavailable statistics. †: a different beam search strategy is used, making the scores incomparable.
+
+| Models (test unseen) | Pre-Explore: SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ | Beam Search: SR↑ | TL↓ | SPL↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Speaker-Follower [7]* | - | - | - | - | - | 0.53 | 1257.4 | 0.01 |
+| RCM [23]* | 0.60 | 4.21 | 9.48 | 0.67 | 0.59 | 0.63 | 357.6 | 0.02 |
+| Self-Monitoring [12]* | - | - | - | - | - | 0.61 | 373.1 | 0.02 |
+| E-Dropout [20]* | 0.64 | 3.97 | 9.79 | 0.70 | 0.61 | 0.69 | 686.8 | 0.01 |
+| AuxRN [28]* | 0.68 | 3.69 | - | 0.75 | 0.65 | 0.70 | † | † |
+| Ours* | 0.67 | 3.66 | 9.78 | 0.73 | 0.64 | 0.70 | 204.4 | 0.05 |
+
+are on par with the current art, considering that exploration routes are counted in the metric computation. iv) If only the route for pure navigation is considered, our TL on the validation unseen set is only about 9.4.
+
+(2) Pre-Explore Setting: This setup, first introduced by [23], allows the agent to pre-explore the unseen environment before conducting navigation. In [23], the agent learns to adapt to the unseen environment through semi-supervised methods, using only the pre-given instructions, without paired routes. Here, we follow a stricter setting, as in [20], where only the unseen environments can be accessed. Specifically, we use back translation to synthesize instructions for routes sampled from the unseen environments and fine-tune the agent on the synthetic data. As can be seen from Table 2, the performance of our method is significantly better than that of the existing methods [20, 23], improving the SR score from 0.64 to 0.67, and is on par with AuxRN [28].
+
+Table 3. Ablation study on the validation seen and validation unseen sets of R2R [1] under the Single Run setting. See §4.3 for details.
+
+| Aspect | Model | validation seen: SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ | validation unseen: SR↑ | NE↓ | TL↓ | OR↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Basic agent | w/o any exploration | 0.62 | 3.99 | 11.0 | 0.71 | 0.59 | 0.52 | 5.22 | 10.7 | 0.58 |
+| Component | Naïve model (§3.1): 1-step exploration, all directions | 0.66 | 3.55 | 40.9 | 0.81 | 0.19 | 0.54 | 4.76 | 35.7 | 0.71 |
+| | w. exploration decision (§3.2): 1-step exploration, parts of directions | 0.66 | 3.72 | 12.2 | 0.76 | 0.53 | 0.55 | 4.82 | 13.7 | 0.66 |
+| | w. further exploration: at most 4-step exploration, all directions | 0.70 | 3.15 | 69.6 | 0.95 | 0.13 | 0.60 | 4.27 | 58.8 | 0.89 |
+| Full model (§3.3), parts of directions | at most 1-step exploration | 0.66 | 3.72 | 12.2 | 0.76 | 0.53 | 0.55 | 4.82 | 13.7 | 0.66 |
+| | at most 3-step exploration | 0.68 | 3.21 | 17.3 | 0.79 | 0.52 | 0.57 | 4.50 | 18.6 | 0.69 |
+| | at most 4-step exploration | 0.70 | 3.20 | 19.7 | 0.80 | 0.52 | 0.58 | 4.36 | 20.6 | 0.70 |
+| | at most 6-step exploration | 0.70 | 3.13 | 22.7 | 0.83 | 0.49 | 0.58 | 4.21 | 23.6 | 0.73 |
+
+(3) Beam Search Setting: Beam search was originally used in [7] to optimize the SR metric. Given an instruction, the agent is allowed to collect multiple candidate routes, score them, and pick the best one [11]. Following [7, 20], we use the speaker to score the candidate routes and pick the best one as the final result. As shown in Table 2, our performance is on par with or better than that of previous methods.
+
+# 4.3 Diagnostic Experiments
+
+Effectiveness of Our Basic Idea. We first examine the performance of the naïve model (§3.1). As shown in Table 3, even with a simple exploration ability, the agent gains significant improvements in SR, NE and OR. It is no surprise to see drops in TL and SPL, as this agent simply explores all directions.
+
+Exploration Decision Making. In §3.2, the agent learns to select some valuable directions to explore. As seen, the improved agent is indeed able to collect useful surrounding information by only conducting necessary exploration, as TL and SPL are improved without sacrificing improvements in SR, NE and OR.
+
+Allowing Multi-Step Exploration. In §3.3, instead of allowing only one-step exploration, the agent learns to conduct multi-step exploration. To investigate the efficacy of this strategy in isolation, we allow our naïve model to make at most 4-step exploration (w/o exploration-decision making). In Table 3, we observe further improvements in SR, NE and OR scores, at the cost of a larger TL.
+
+Importance of All Components. Next we study the efficacy of our full model from §3.3, which is able to make multi-direction, multi-step exploration. We find that, by integrating all the components together, our agent with at most 4-step exploration achieves the best performance in most metrics.
+
+Influence of Maximum Allowable Exploration Step. From Table 3, we find that, as the maximum allowable number of exploration steps increases $(1\rightarrow 4)$, the agent attains better performance. However, allowing even more exploration steps $(4\rightarrow 6)$ hurts the performance. For at most 4-step exploration, the average exploration rate is $15.3\%$. During exploration, the percentage of wrong navigation actions being
+
+
+Fig. 5. Left: The basic agent is confused by the ambiguous instruction "Travel to the end of the hallway...", causing failed navigation. Our agent can actively collect information (the yellow part) and then make a better navigation decision. Middle Bottom: First view during exploration. Right: First view during navigation. We can find that, before exploration, the wrong direction gains a high navigation probability (i.e., 0.6). However, after exploration, the score for the correct direction is improved.
+
+corrected is $\sim 65.2\%$, while the percentage of correct navigation actions being wrongly changed is $\sim 10.7\%$. The percentages of maximum exploration steps, from 1 to 4, are $53.6\%$, $12.5\%$, $8.7\%$, and $25.3\%$, respectively. We find that, in most cases, one-step exploration is enough. Sometimes the agent may choose a long exploration, which may be because he needs to collect more information for hard examples.
+
+Qualitative Results. Fig. 5 depicts a challenging example with the ambiguous instruction "Travel to the end of the hallway...". The basic agent chooses the wrong direction and ultimately fails. In contrast, our agent is able to actively explore the environment and collect useful information to support navigation-decision making. We observe that, after exploration, the correct direction gains a significantly higher score and our agent reaches the goal location successfully.
+
+# 5 Conclusion
+
+This work proposes an end-to-end trainable agent for the VLN task with an active exploration ability. The agent is able to intelligently interact with the environment and actively gather information when faced with ambiguous instructions or unconfident navigation decisions. The elaborately designed exploration module successfully learns its own policy for supporting better navigation-decision making. Our agent shows promising results on the R2R dataset.
+
+Acknowledgements. This work was partially supported by a Natural Science Foundation of China (NSFC) grant (No. 61472038), Zhejiang Lab's Open Fund (No. 2020AA3AB14), Zhejiang Lab's International Talent Fund for Young Professionals, and the Key Laboratory of Electronic Information Technology in Satellite Navigation (Beijing Institute of Technology), Ministry of Education, China.
+
+# References
+
+1. Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., van den Hengel, A.: Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In: CVPR (2018) 1, 2, 3, 4, 9, 11, 12, 13
+2. Andreas, J., Klein, D.: Alignment-based compositional semantics for instruction following. In: EMNLP (2015) 3
+3. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Lawrence Zitnick, C., Parikh, D.: VQA: Visual question answering. In: ICCV (2015) 3
+4. Bengio, Y., Louradour, J., Collobert, R., Weston, J.: Curriculum learning. In: ICML (2009) 10
+5. Chen, D.L., Mooney, R.J.: Learning to interpret natural language navigation instructions from observations. In: AAAI (2011) 3
+6. Das, A., Kottur, S., Gupta, K., Singh, A., Yadav, D., Moura, J.M., Parikh, D., Batra, D.: Visual dialog. In: CVPR (2017) 3
+7. Fried, D., Hu, R., Cirik, V., Rohrbach, A., Andreas, J., Morency, L.P., Berg-Kirkpatrick, T., Saenko, K., Klein, D., Darrell, T.: Speaker-follower models for vision-and-language navigation. In: NeurIPS (2018) 1, 3, 9, 11, 12, 13
+8. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) 11
+9. Hu, R., Fried, D., Rohrbach, A., Klein, D., Darrell, T., Saenko, K.: Are you looking? grounding to multiple modalities in vision-and-language navigation. In: ACL (2019) 1, 3
+10. Huang, H., Jain, V., Mehta, H., Ku, A., Magalhaes, G., Baldridge, J., Ie, E.: Transferable representation learning in vision-and-language navigation. In: ICCV (2019) 1, 3
+11. Ke, L., Li, X., Bisk, Y., Holtzman, A., Gan, Z., Liu, J., Gao, J., Choi, Y., Srinivasa, S.: Tactical rewind: Self-correction via backtracking in vision-and-language navigation. In: CVPR (2019) 1, 3, 12
+12. Ma, C.Y., Lu, J., Wu, Z., AlRegib, G., Kira, Z., Socher, R., Xiong, C.: Self-monitoring navigation agent via auxiliary progress estimation. In: ICLR (2019) 1, 3, 12, 13
+13. Ma, C.Y., Wu, Z., AlRegib, G., Xiong, C., Kira, Z.: The regretful agent: Heuristic-aided navigation through progress estimation. In: CVPR (2019) 1, 3, 9, 12
+14. MacMahon, M., Stankiewicz, B., Kuipers, B.: Walk the talk: connecting language, knowledge, and action in route instructions. In: AAAI (2006) 3
+15. Mei, H., Bansal, M., Walter, M.R.: Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In: AAAI (2016) 3, 9
+16. Mirowski, P., Pascanu, R., Viola, F., Soyer, H., Ballard, A.J., Banino, A., Denil, M., Goroshin, R., Sifre, L., Kavukcuoglu, K., et al.: Learning to navigate in complex environments. In: ICLR (2017) 3
+17. Misra, D., Bennett, A., Blukis, V., Niklasson, E., Shatkhin, M., Artzi, Y.: Mapping instructions to actions in 3d environments with visual goal prediction. In: EMNLP (2018) 3
+18. Mnih, V., Badia, A.P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., Kavukcuoglu, K.: Asynchronous methods for deep reinforcement learning. In: ICML (2016) 9
+19. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. IJCV 115(3), 211-252 (2015) 11
+
+20. Tan, H., Yu, L., Bansal, M.: Learning to navigate unseen environments: Back translation with environmental dropout. In: NAACL (2019) 1, 3, 4, 9, 11, 12, 13
+21. Tellex, S., Kollar, T., Dickerson, S., Walter, M.R., Banerjee, A.G., Teller, S., Roy, N.: Understanding natural language commands for robotic navigation and mobile manipulation. In: AAAI (2011) 3
+22. Thomason, J., Gordon, D., Bisk, Y.: Shifting the baseline: Single modality performance on visual navigation & qa. In: NAACL (2019) 3
+23. Wang, X., Huang, Q., Celikyilmaz, A., Gao, J., Shen, D., Wang, Y.F., Wang, W.Y., Zhang, L.: Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In: CVPR (2019) 1, 3, 9, 11, 12, 13
+24. Wang, X., Xiong, W., Wang, H., Yang Wang, W.: Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead vision-and-language navigation. In: ECCV (2018) 1, 3, 9, 11, 12
+25. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: ICML (2015) 3
+26. Yu, L., Poirson, P., Yang, S., Berg, A.C., Berg, T.L.: Modeling context in referring expressions. In: ECCV (2016) 3
+27. Zheng, Z., Wang, W., Qi, S., Zhu, S.C.: Reasoning visual dialogs with structural and partial observations. In: CVPR (2019) 3
+28. Zhu, F., Zhu, Y., Chang, X., Liang, X.: Vision-language navigation with self-supervised auxiliary reasoning tasks. In: CVPR (2020) 1, 3, 11, 12, 13
+29. Zhu, Y., Mottaghi, R., Kolve, E., Lim, J.J., Gupta, A., Fei-Fei, L., Farhadi, A.: Target-driven visual navigation in indoor scenes using deep reinforcement learning. In: ICRA (2017) 3, 9
\ No newline at end of file
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/images.zip b/activevisualinformationgatheringforvisionlanguagenavigation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..434968af3d8de443370f68a894eba63de7bcbbe0
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39283ed57ec4e212461c45cfcf0e8156df1929ba6268c4513c144d0e9756b9fa
+size 617640
diff --git a/activevisualinformationgatheringforvisionlanguagenavigation/layout.json b/activevisualinformationgatheringforvisionlanguagenavigation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee08e009507256552712f86cc4ca72009e2947d1
--- /dev/null
+++ b/activevisualinformationgatheringforvisionlanguagenavigation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e563facc55b3eec4dadc9797f5caab2a47af684cfedca4412f27bcf93749da1c
+size 471693
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_content_list.json b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4567b1917307a171060fce72549ca85e7baccf51
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b6be935421ec5da9c4867d3fd65ce55065cef1c13d2a0c123f6c9b06cef22c7
+size 81867
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_model.json b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a59717b2b687c9c1d8936fbb9c7ce2a6ce370a2c
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73c84eaffa4765307b0d41653528fa0243edf9397a8c2a5fc8ef0464c00d7e9e
+size 99106
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_origin.pdf b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c6beb9142aef703a7ab54b5eadcc5dc607f64a78
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/05885c2c-b740-4e53-9dfe-2678a4cac04a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3ae3d8a734ca459ab3a11943a1891994f4612af359636a4a79e255a8c10f463
+size 28762076
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/full.md b/adaptingobjectdetectorswithconditionaldomainnormalization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5c0f98a4b514396aed8c3c805a8d5ca78c820bd
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/full.md
@@ -0,0 +1,357 @@
+# Adapting Object Detectors with Conditional Domain Normalization
+
+Peng Su $^{1,2}$ , Kun Wang $^{2}$ , Xingyu Zeng $^{2}$ , Shixiang Tang $^{2}$ , Dapeng Chen $^{2}$ , Di Qiu $^{2}$ , and Xiaogang Wang $^{1}$
+
+1 The Chinese University of Hong Kong
+2 SenseTime Research
+{psu, xgwang}@ee.cuhk.edu.hk
+
+Abstract. Real-world object detectors are often challenged by the domain gaps between different datasets. In this work, we present Conditional Domain Normalization (CDN) to bridge the domain distribution gap. CDN is designed to encode different domain inputs into a shared latent space, where the features from different domains carry the same domain attribute. To achieve this, we first disentangle the domain-specific attribute from the semantic features of the source domain via a domain embedding module, which learns a domain-vector to characterize the domain attribute information. This domain-vector is then used to encode the features of the target domain through a conditional normalization, so that features from different domains carry the same domain attribute. We incorporate CDN into various convolution stages of an object detector to adaptively address the domain shifts at different representation levels. In contrast to existing adaptation works that conduct domain confusion learning on semantic features to remove domain-specific factors, CDN aligns different domain distributions by modulating the semantic features of the target domain conditioned on the learned domain-vector of the source domain. Extensive experiments show that CDN substantially outperforms existing methods on both real-to-real and synthetic-to-real adaptation benchmarks, including 2D image detection and 3D point cloud detection.
+
+# 1 Introduction
+
+Deep neural networks have achieved remarkable success on visual recognition tasks. However, it is still very challenging for deep networks to generalize to a different domain whose data distribution is not identical to the original training data. This problem is known as dataset bias or domain shift [31]. For example, to guarantee safety in autonomous driving, the perception model is required to perform well under all conditions, such as sunny, night, rainy, and snowy weather. However, even top-grade object detectors still face significant challenges when deployed in such varying real-world settings. Although collecting and annotating more data from unseen domains can help, it is prohibitively expensive, laborious and time-consuming. Another appealing application is to adapt from synthetic data to real data, as it can save substantial cost and time. However, current object detectors trained on synthetic data can rarely generalize to real data due to a significant domain distribution gap [36, 38].
+
+Adversarial domain adaptation has emerged as a promising approach to learn transferable representations across domains. It has achieved noticeable progress in various machine learning tasks, from image classification [24, 27], semantic segmentation [39, 36, 47], and object detection [33, 46] to reinforcement learning [38, 28, 20]. According to Ben-David's theory [1], the empirical risk on the target domain is bounded by the source domain risk and the $\mathcal{H}$ domain divergence. Adversarial adaptation aims to learn domain-invariant representations that reduce the $\mathcal{H}$ divergence, which in turn decreases the upper bound of the empirical error on the target domain.
+
+However, existing adversarial adaptation methods still suffer from several problems. First, previous methods [8, 4, 38] directly feed semantic features into a domain discriminator to conduct domain confusion learning. But the semantic features contain both image content and domain attribute information, so it is difficult to make the discriminator focus only on removing domain-specific information without unduly influencing the image content. Second, existing adversarial adaptation methods [8, 4, 38] use domain confusion learning at one or a few convolution stages to handle the distribution mismatch, which ignores the differences in domain shift across representation levels. For example, the first few convolution layers' features mainly convey low-level information about local patterns, while the higher convolution layers' features include more abstract global patterns with semantics [43]. Such differences, inherent to deep convolutional neural networks, naturally give rise to different types of domain shift at various convolution stages.
+
+Motivated by this, we propose Conditional Domain Normalization (CDN) to embed different domain inputs into a shared latent space, where the features of all domain inputs carry the same domain attribute information. Specifically, CDN utilizes a domain embedding module to learn a domain-vector that characterizes the domain attribute information, by disentangling the domain attribute from the semantic features of the domain inputs. We use this domain-vector to encode the semantic features of another domain's inputs via a conditional normalization, so that different domain features carry the same domain attribute information. Experiments on both real-to-real and synthetic-to-real adaptation benchmarks demonstrate that our method outperforms state-of-the-art adaptation methods. To summarize, our contributions are threefold: (1) We propose Conditional Domain Normalization (CDN) to bridge the domain distribution gap by embedding different domain inputs into a shared latent space, where the features from different domains carry the same domain attribute. (2) CDN achieves state-of-the-art unsupervised domain adaptation performance on both real-to-real and synthetic-to-real benchmarks, including 2D image and 3D point cloud detection tasks, and we conduct both quantitative and qualitative comparisons to analyze the features learned by CDN. (3) We construct a large-scale synthetic-to-real driving benchmark for 2D object detection, covering a variety of public datasets.
+
+# 2 Related work
+
+Object Detection is a central topic in computer vision and is crucial for many real-world applications, such as autonomous driving. In 2D detection, following the pioneering work of RCNN [11], a number of object detection frameworks based on convolutional networks have been developed, like Fast R-CNN [10], Faster R-CNN [32], and Mask R-CNN [12], which significantly push forward the state of the art. In 3D detection, spanning from detecting 3D objects from 2D images [3] to directly generating 3D boxes from point clouds [29, 37], abundant works have been successfully explored. All these 2D and 3D object detectors have achieved remarkable success on one or a few specific public datasets. However, even top-grade object detectors still face significant challenges when deployed in real-world settings. The difficulties usually arise from changes in environmental conditions.
+
+Domain Adaptation generalizes a model across different domains, and it has been extensively explored in various tasks, spanning from image classification [2, 40, 24, 27, 23] and semantic segmentation [15, 39, 36] to reinforcement learning [38, 28, 20]. For 2D detection, domain confusion learning via a domain discriminator has achieved noticeable progress in cross-domain detection. [4] incorporated a gradient reversal layer [8] into a Faster R-CNN model. [33, 46] adopt domain confusion learning at both global and local levels to align source and target distributions. In contrast to existing methods that conduct domain confusion learning directly on semantic features, we explicitly disentangle the domain attribute from the semantic features, and this domain attribute is used to encode other domains' features, so that different domain inputs share the same domain attribute in the feature space. For 3D detection, only a few works [45, 17] have explored adapting object detectors across different point cloud datasets. Different from existing works [45, 17], which are specifically designed for point cloud data, our proposed CDN is a general adaptation framework that adapts both 2D image and 3D point cloud object detectors through conditional domain normalization.
+
+Conditional Normalization is a technique to modulate neural activations using a transformation that depends on external data. It has been successfully used in generative models and style transfer, like conditional batch normalization [6], adaptive instance normalization (AdaIN) [16] and spatially adaptive batch normalization [25]. [16] proposes AdaIN to control the global style of the synthesized image. [41] modulates the features conditioned on semantic masks for image super-resolution. [25] adopts a spatially-varying transformation, making it suitable for image synthesis from semantic masks. Inspired by these works, we propose Conditional Domain Normalization (CDN) to modulate one domain's inputs conditioned on another domain's attribute information. But our method exhibits a significant difference from style transfer works: style transfer modifies a content image conditioned on another style image, which is a conditional instance normalization by nature; CDN instead modulates one domain's features conditioned on the domain embedding learned from another domain's inputs (a group of images), which is more like a domain-to-domain translation. Hence we use different types of conditional normalization to achieve different goals.
+
+# 3 Method
+
+We first introduce the general unsupervised domain adaptation approach in Section 3.1. Then we present the proposed Conditional Domain Normalization (CDN) in Section 3.2. Finally, we adapt object detectors with CDN in Section 3.3.
+
+# 3.1 General Adversarial Adaptation Framework
+
+Given source images and labels $\{(x_i^S, y_i^S)\}_{i=1}^{N_S}$ drawn from $P_s$ , and target images $\{x_i^T\}_{i=1}^{N_T}$ from target domain $P_t$ , the goal of unsupervised domain adaptation is to find a function $f: x \to y$ that minimizes the empirical error on target data. For object detection task, the $f$ can be decomposed as $f = G(\cdot; \theta_g) \circ H(\cdot; \theta_h)$ , where $G(\cdot; \theta_g)$ represents a feature extractor network and $H(\cdot; \theta_h)$ denotes a bounding box head network. The adversarial domain adaptation introduces a discriminator network $D(\cdot; \theta_d)$ that tries to determine the domain labels of feature maps generated by $G(\cdot; \theta_g)$ .
+
+$$
+\min_{\theta_{g}, \theta_{h}} \mathcal{L}_{det} = \mathcal{L}_{cls}(G(x; \theta_{g}), H(x; \theta_{h})) + \mathcal{L}_{reg}(G(x; \theta_{g}), H(x; \theta_{h})),
+$$
+
+$$
+\min_{\theta_{d}} \max_{\theta_{g}} \mathcal{L}_{adv} = \mathbb{E}_{x \sim P_{s}} [\log(D(G(x; \theta_{g}); \theta_{d}))] + \mathbb{E}_{x \sim P_{t}} [\log(1 - D(G(x; \theta_{g}); \theta_{d}))]. \tag{1}
+$$
+
+As illustrated in Eq. 1, $G(\cdot; \theta_g)$ and $H(\cdot; \theta_h)$ are jointly trained to minimize the detection loss $\mathcal{L}_{det}$ by supervised training on the labeled source domain. At the same time, the backbone $G(\cdot; \theta_g)$ is optimized to maximize the probability that $D(\cdot; \theta_d)$ makes mistakes. Through this two-player min-max game, the final $G(\cdot; \theta_g)$ converges to extract features that are indistinguishable to $D(\cdot; \theta_d)$ , and domain-invariant representations are thus learned.
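
The adversarial term of Eq. 1 can be illustrated with a minimal NumPy sketch of the discriminator objective (the discriminator outputs below are hypothetical numbers, not the paper's implementation):

```python
import numpy as np

def adv_loss(d_src, d_tgt):
    # Discriminator objective from Eq. 1: E[log D] on source features
    # plus E[log(1 - D)] on target features; d_src / d_tgt are the
    # discriminator's probability outputs in (0, 1).
    eps = 1e-8  # numerical safety for the logarithm
    return np.mean(np.log(d_src + eps)) + np.mean(np.log(1.0 - d_tgt + eps))

# At the min-max equilibrium the features are indistinguishable, so a
# perfectly confused discriminator outputs 0.5 everywhere:
d_src = np.full(8, 0.5)
d_tgt = np.full(8, 0.5)
print(adv_loss(d_src, d_tgt))  # ~ 2 * log(0.5) ≈ -1.386
```

A confident discriminator (outputs near 1 on source, near 0 on target) would instead score near 0; the backbone $G$ is trained to push it back toward the confused equilibrium.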
+
+# 3.2 Conditional Domain Normalization
+
+Conditional Domain Normalization (CDN) is designed to embed source and target domain inputs into a shared latent space, where the semantic features from different domains carry the same domain attribute information. Formally, let $v^{s} \in \mathbb{R}^{N \times C \times H \times W}$ and $v^{t} \in \mathbb{R}^{N \times C \times H \times W}$ represent feature maps of source and target inputs, respectively. $C$ is the channel dimension and $N$ denotes the mini-batch size. We first learn a domain embedding vector $e_{domain}^{s} \in \mathbb{R}^{1 \times C \times 1}$ to characterize the domain attribute of the source inputs. It is accomplished by a domain embedding network $\mathbf{F}_{d}(\cdot;W)$ parameterized by two fully-connected layers with ReLU non-linearity $\delta$ as
+
+$$
+e _ {d o m a i n} ^ {s} = \mathbf {F} _ {d} \left(v _ {a v g} ^ {s}; W\right) = \delta \left(W _ {2} \delta \left(W _ {1} v _ {a v g} ^ {s}\right)\right). \tag {2}
+$$
+
+
+Fig. 1: (Left) Traditional domain adversarial approach. (Right) Conditional Domain Normalization (CDN). The green and blue cubes represent the feature maps of domain A and domain B respectively.
+
+And $v_{avg}^{s} \in \mathbb{R}^{N \times C \times 1}$ represents the channel-wise statistics of source feature $v^{s}$ generated by global average pooling
+
+$$
+v _ {a v g} ^ {s} = \frac {1}{H W} \sum_ {h = 1} ^ {H} \sum_ {w = 1} ^ {W} v ^ {s} (h, w). \tag {3}
+$$
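
Eqs. 2 and 3 can be sketched in a few lines of NumPy; `W1` and `W2` below are hypothetical stand-ins for the learned parameters of $\mathbf{F}_d$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, H, W = 4, 8, 16, 16                  # a mini-batch of source feature maps
v_s = rng.standard_normal((N, C, H, W))

# Eq. 3: channel-wise statistics via global average pooling -> (N, C)
v_avg = v_s.mean(axis=(2, 3))

# Eq. 2: two fully-connected layers with ReLU non-linearity (delta)
relu = lambda x: np.maximum(x, 0.0)
W1 = rng.standard_normal((C, C)) * 0.1     # hypothetical learned weights
W2 = rng.standard_normal((C, C)) * 0.1
e_per_image = relu(relu(v_avg @ W1) @ W2)  # per-image embedding, (N, C)

# average over the mini-batch to obtain a single C-dimensional domain-vector
e_domain = e_per_image.mean(axis=0)
print(e_domain.shape)  # (8,)
```

Averaging over the batch is one plausible way to reduce the per-image embeddings to the single domain-vector described in the text; the paper does not spell out this reduction step.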
+
+To embed both source and target domain inputs into a shared latent space, where source and target features carry the same domain attribute while preserving their individual image contents, we encode the target features $v^{t}$ with the source domain embedding via an affine transformation:
+
+$$
+\hat {v} ^ {t} = \mathbf {F} \left(e _ {\text {d o m a i n}} ^ {s}; W _ {\gamma}, b _ {\gamma}\right) \cdot \left(\frac {v ^ {t} - \mu^ {t}}{\sigma^ {t}}\right) + \mathbf {F} \left(e _ {\text {d o m a i n}} ^ {s}; W _ {\beta}, b _ {\beta}\right), \tag {4}
+$$
+
+where $\mu^t$ and $\sigma^t$ denote the mean and variance of target feature $v^t$ . The affine parameters are learned by function $F(\cdot; W_{\gamma}, b_{\gamma})$ and $F(\cdot; W_{\beta}, b_{\beta})$ conditioned on the source domain embedding vector $e_{domain}^s$ ,
+
+$$
+F \left(e _ {\text {d o m a i n}} ^ {s}; W _ {\gamma}, b _ {\gamma}\right) = W _ {\gamma} e _ {\text {d o m a i n}} ^ {s} + b _ {\gamma}, \quad F \left(e _ {\text {d o m a i n}} ^ {s}; W _ {\beta}, b _ {\beta}\right) = W _ {\beta} e _ {\text {d o m a i n}} ^ {s} + b _ {\beta}. \tag {5}
+$$
+
+For the target feature mean $\mu^t\in \mathbb{R}^{1\times C\times 1}$ and variance $\sigma^t\in \mathbb{R}^{1\times C\times 1}$ , we calculate them with a standard batch normalization [19]
+
+$$
+\mu_ {c} ^ {t} = \frac {1}{N H W} \sum_ {n = 1} ^ {N} \sum_ {h = 1} ^ {H} \sum_ {w = 1} ^ {W} v _ {n c h w} ^ {t}, \quad \sigma_ {c} ^ {t} = \sqrt {\frac {1}{N H W} \sum_ {n = 1} ^ {N} \sum_ {h = 1} ^ {H} \sum_ {w = 1} ^ {W} \left(v _ {n c h w} ^ {t} - \mu_ {c} ^ {t}\right) ^ {2} + \epsilon}, \tag {6}
+$$
+
+where $\mu_c^t$ and $\sigma_c^t$ denote the $c$ -th channel of $\mu^t$ and $\sigma^t$ . Finally, we have a discriminator to supervise the encoding of the domain attribute as
+
+$$
+\min _ {\theta_ {d}} \max _ {\theta_ {g}} \mathcal {L} _ {a d v} = \mathbb {E} [ \log (D (\mathbf {F} _ {d} (v ^ {s}); \theta_ {d}) ] + \mathbb {E} [ \log (1 - D (\mathbf {F} _ {d} (\hat {v} ^ {t}); \theta_ {d})) ], \tag {7}
+$$
+
+where $v^s$ and $v^t$ are generated by $G(\cdot ;\theta_g)$ .
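
Putting Eqs. 4–6 together, the conditional normalization itself can be sketched in NumPy as follows; the affine weights `W_gamma`, `b_gamma`, `W_beta`, `b_beta` and the domain-vector are hypothetical placeholders for learned quantities:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, H, W = 4, 8, 16, 16
v_t = rng.standard_normal((N, C, H, W)) * 3.0 + 5.0  # target feature maps
e_domain = rng.standard_normal(C)                    # source domain-vector

# Eq. 6: per-channel batch-norm statistics of the target features
mu_t = v_t.mean(axis=(0, 2, 3))
sigma_t = np.sqrt(v_t.var(axis=(0, 2, 3)) + 1e-5)

# Eq. 5: scale and shift predicted from the source domain embedding
W_gamma = rng.standard_normal((C, C)) * 0.1
b_gamma = np.ones(C)
W_beta = rng.standard_normal((C, C)) * 0.1
b_beta = np.zeros(C)
gamma = W_gamma @ e_domain + b_gamma
beta = W_beta @ e_domain + b_beta

# Eq. 4: normalize the target features, then modulate them with the
# source-conditioned affine parameters (broadcast over N, H, W)
v_norm = (v_t - mu_t[None, :, None, None]) / sigma_t[None, :, None, None]
v_hat = gamma[None, :, None, None] * v_norm + beta[None, :, None, None]
print(v_hat.shape)  # (4, 8, 16, 16)
```

After this step, the whitened target features are re-scaled and re-shifted by parameters predicted from the source domain-vector, which is how both domains' features come to carry the same domain attribute.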
+
+
+Fig. 2: Faster R-CNN network incorporates with CDN. The CDN is adopted in both backbone network and bounding box head network to adaptively address the domain shift at different representation levels.
+
+Discussion. CDN exhibits a significant difference compared with existing adversarial adaptation works. As shown in Fig. 1, previous methods conduct domain confusion learning directly on semantic features to remove domain-specific factors. However, the semantic features contain both domain attribute and image contents, and it is not easy to enforce that the domain discriminator regularizes only the domain-specific factors without inducing any undesirable influence on image contents. In contrast, we disentangle the domain attribute from the semantic features via conditional domain normalization, and this domain attribute is used to encode other domains' features, so that different domain features carry the same domain attribute information.
+
+# 3.3 Adapting Detector with Conditional Domain Normalization
+
+The success of convolutional neural networks (CNNs) in pattern recognition has been largely attributed to their great capability of learning hierarchical representations [43]. More specifically, the first few layers of a CNN focus on low-level features of local patterns, while higher layers capture semantic representations. Given this observation, CNN-based object detectors naturally exhibit different types of domain shift at various representation levels. Hence we incorporate CDN into different convolution stages of the object detector to address the domain mismatch adaptively, as shown in Fig. 2.
+
+Consistent with our analysis, some recent works [33, 46] empirically demonstrate that global and local region alignments have different influences on detection performance. For easy comparison, we refer to the CDN located in the backbone network as global alignment, and CDN in the bounding box head network as local or instance alignment.
+
+As shown in Fig. 2, taking a Faster R-CNN model [32] with a ResNet [13] backbone as an example, we incorporate CDN in the last residual block of each stage. Thus the global alignment loss can be computed as
+
+$$
+L_{adv}^{global} = \sum_{l=1}^{L} \mathbb{E}\left[\log\left(D_{l}\left(\mathbf{F}_{d}^{l}\left(v_{l}^{s}\right); \theta_{d}^{l}\right)\right)\right] + \mathbb{E}\left[\log\left(1 - D_{l}\left(\mathbf{F}_{d}^{l}\left(\hat{v}_{l}^{t}\right); \theta_{d}^{l}\right)\right)\right], \tag{8}
+$$
+
+where $v_{l}^{s}$ and $\hat{v}_{l}^{t}$ denote the $l$ -th layer's source feature and encoded target feature, and $D_{l}$ represents the corresponding domain discriminator parameterized by $\theta_{d}^{l}$ .
+
+As for the bounding box head network, we adopt CDN on the fixed-size region of interest (ROI) features generated by ROI pooling [32]. Because the original ROIs are often noisy and the quantities of source and target ROIs are not equal, we randomly select $\min(N_{roi}^S, N_{roi}^T)$ ROIs from each domain, where $N_{roi}^S$ and $N_{roi}^T$ represent the quantities of source and target ROIs after non-maximum suppression (NMS). Hence we have the instance alignment regularization for ROI features as
+
+$$
+L_{adv}^{instance} = \mathbb{E}\left[\log\left(D_{roi}\left(\mathbf{F}_{d}^{roi}\left(v_{roi}^{s}\right); \theta_{d}^{roi}\right)\right)\right] + \mathbb{E}\left[\log\left(1 - D_{roi}\left(\mathbf{F}_{d}^{roi}\left(\hat{v}_{roi}^{t}\right); \theta_{d}^{roi}\right)\right)\right]. \tag{9}
+$$
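
The ROI-balancing step described above can be sketched as follows (a minimal NumPy illustration with hypothetical ROI feature arrays; in the real model this operates on pooled ROI features inside the detector):

```python
import numpy as np

def balance_rois(rois_s, rois_t, rng):
    # Randomly keep min(N_s, N_t) ROIs from each domain (after NMS),
    # so the instance-level discriminator sees equally sized sets.
    k = min(len(rois_s), len(rois_t))
    idx_s = rng.choice(len(rois_s), size=k, replace=False)
    idx_t = rng.choice(len(rois_t), size=k, replace=False)
    return rois_s[idx_s], rois_t[idx_t]

rng = np.random.default_rng(2)
rois_s = rng.standard_normal((12, 256))  # 12 source ROI features
rois_t = rng.standard_normal((7, 256))   # 7 target ROI features
sel_s, sel_t = balance_rois(rois_s, rois_t, rng)
print(sel_s.shape, sel_t.shape)  # (7, 256) (7, 256)
```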
+
+The overall training objective is to minimize the detection loss $\mathcal{L}_{\text{det}}$ (on the labeled source domain), which consists of a classification loss $\mathcal{L}_{\text{cls}}$ and a regression loss $\mathcal{L}_{\text{reg}}$ , and to min-max the adversarial loss $\mathcal{L}_{\text{adv}}$ of the discriminator networks:
+
+$$
+\begin{array}{l} \min _ {\theta_ {d}} \max _ {\theta_ {g}} \mathcal {L} _ {a d v} = \lambda L _ {a d v} ^ {g l o b a l} + L _ {a d v} ^ {i n s t a n c e} \tag {10} \\ \min _ {\theta_ {g}, \theta_ {h}} \mathcal {L} _ {d e t} = \mathcal {L} _ {c l s} (G (x; \theta_ {g}), H (x; \theta_ {h})) + \mathcal {L} _ {r e g} (G (x; \theta_ {g}), H (x; \theta_ {h})), \\ \end{array}
+$$
+
+where $\lambda$ is a weight to balance the global and local alignment regularization.
+
+# 4 Experiments
+
+We evaluate CDN on various real-to-real (KITTI to Cityscapes) and synthetic-to-real (Virtual KITTI/Synscapes/SIM10K to BDD100K, PreSIL to KITTI) adaptation benchmarks. We also report results on cross-weather adaptation, Cityscapes to Foggy Cityscapes. Mean average precision (mAP) with an intersection-over-union (IoU) threshold of 0.5 is reported for 2D detection experiments. We use Source and Target to represent the results of supervised training on the source and target domain, respectively. For 3D point cloud object detection, PointRCNN [37] with a PointNet++ [29] backbone is adopted as our baseline model. Following the standard metric of the KITTI benchmark [37], we use Average Precision (AP) with an IoU threshold of 0.7 for cars and 0.5 for pedestrians/cyclists.
+
+# 4.1 Dataset
+
+Cityscapes [5] is a European traffic scene dataset, which contains 2,975 images for training and 500 images for testing.
+
+Foggy Cityscapes derives from Cityscapes with a fog simulation proposed by [34]. It also includes 2,975 images for training, 500 images for testing.
+
+KITTI [9] contains 21,260 images collected from different urban scenes, which includes 2D RGB images and 3D point cloud data.
+
+Virtual KITTI is derived from KITTI with a real-to-virtual cloning technique proposed by [7]. It has the same number of images and categories as KITTI.
+
+Synscapes [42] is a synthetic dataset of street scene, which consists of 25,000 images created with a photo-realistic rendering technique.
+
+SIM10K [21] is a street view dataset generated from the realistic computer game GTA-V. It has 10,000 training images and the same categories as in Cityscapes.
+
+PreSIL [17] is a synthetic point cloud dataset derived from GTA-V, which consists of 50,000 frames of high-definition images and point clouds.
+
+BDD100K [44] is a large-scale dataset (contains 100k images) that covers diverse driving scenes. It is a good representative of real data in the wild.
+
+# 4.2 Implementation Details
+
+We train the Faster R-CNN [32] model for 12 epochs in all experiments. The model is optimized by SGD with multi-step learning rate decay, using a learning rate of 0.00625 multiplied by the batch size, and a momentum of 0.9. All experiments use sync BN [26] with a batch size of 32. $\lambda$ is set to 0.4 by default in all experiments. On synthetic-to-real adaptation, for a fair comparison, we randomly select 7,000 images for training and 3,000 for testing from each synthetic dataset and from BDD100K. For 3D point cloud detection, we use the PointRCNN [37] model with the same settings as [37]. We incorporate the CDN layer in the point-wise feature generation stage (global alignment) and the 3D ROI proposal stage (instance alignment).
+
+# 5 Experimental Results and Analysis
+
+# 5.1 Results on Cityscapes to Foggy Cityscapes
+
+We compare CDN with the state-of-the-art methods in Table 1. Following [33, 46], we also report results using a Faster R-CNN model with a VGG16 backbone. As shown in Table 1, CDN outperforms the previous state of the art by a large margin of $1.8\%$ mAP. The results demonstrate the effectiveness of CDN in reducing domain gaps. A detailed comparison of different CDN settings can be found in the ablation study (Section 7). As shown in Fig. 3, our method exhibits good generalization capability under foggy weather conditions.
+
+# 5.2 Results on KITTI to Cityscapes
+
+Different camera settings may influence detector performance in real-world applications. We conduct cross-camera adaptation from KITTI to Cityscapes. Table 2 shows the adaptation results on the car category produced by Faster R-CNN with VGG16. Global and Instance represent global and local alignment, respectively. The results demonstrate that CDN achieves a $1.7\%$ mAP improvement
+
+| Method | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mAP |
+|---|---|---|---|---|---|---|---|---|---|
+| Source | 29.3 | 31.9 | 43.5 | 15.8 | 27.4 | 9.0 | 20.3 | 29.9 | 26.1 |
+| DA-Faster [4] | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.2 | 20.0 | 27.1 | 27.9 |
+| DT [18] | 25.4 | 39.3 | 42.4 | 24.9 | 40.4 | 23.1 | 25.9 | 30.4 | 31.5 |
+| SCDA [46] | 33.5 | 38.0 | 48.5 | 26.5 | 39.0 | 23.3 | 28.0 | 33.6 | 33.8 |
+| DDMRL [22] | 30.8 | 40.5 | 44.3 | 27.2 | 38.4 | 34.5 | 28.4 | 32.2 | 34.6 |
+| SWDA [33] | 30.3 | 42.5 | 44.6 | 24.5 | 36.7 | 31.6 | 30.2 | 35.8 | 34.8 |
+| CDN (ours) | 35.8 | 45.7 | 50.9 | 30.1 | 42.5 | 29.8 | 30.8 | 36.5 | 36.6 |
+
+Table 1: Cityscapes to Foggy Cityscapes adaptation.
+
+over the state-of-the-art methods. We can also see that instance feature alignment contributes a larger performance boost than its global counterpart, which is consistent with previous findings [33, 46].
+
+# 5.3 Results on SIM10K to Cityscapes
+
+Following the setting of [33], we evaluate the detection performance on cars on the SIM10K-to-Cityscapes benchmark. The results in Table 3 demonstrate that CDN consistently performs better than the baseline methods. CDN with both global and instance alignment achieves $49.3\%$ mAP on the validation set of Cityscapes, which outperforms the previous state-of-the-art method by $1.6\%$ mAP.
+
+# 5.4 Results on Synthetic to Real Data
+
+To thoroughly evaluate the performance of state-of-the-art methods on synthetic-to-real adaptation, we construct a large-scale synthetic-to-real adaptation benchmark from various public synthetic datasets, including Virtual KITTI, Synscapes and SIM10K. "All" represents using the combination of the 3 synthetic datasets. Compared with SIM10K-to-Cityscapes, the proposed benchmark is more challenging in terms of much larger image diversity in both the real and synthetic domains. We compare CDN with the state-of-the-art method SWDA [33] in Table 4. CDN consistently outperforms SWDA under different backbones, achieving average improvements of $2.2\%$ mAP and $2.1\%$ mAP on Faster-R18 and Faster-R50, respectively. Using the same adaptation method, the detection performance strongly depends on the quality of the synthetic data. For instance, the adaptation performance from SIM10K is much better than from Virtual KITTI. Some example predictions produced by our method are visualized in Fig. 3.
+
+# 5.5 Adaptation on 3D Point Cloud Detection
+
+We evaluate CDN on adapting a 3D object detector from synthetic point clouds (PreSIL) to real point cloud data (KITTI). Table 5 shows that CDN consistently outperforms the state-of-the-art method PointDAN [30] across all categories, with an average improvement of $1.9\%$ AP. We notice that instance alignment
+
+| Method | Global | Instance | mAP (%) |
+|---|---|---|---|
+| Source only | | | 37.1 |
+| DA-Faster [4] | ✓ | ✓ | 38.3 |
+| SWDA [33] | ✓ | ✓ | 43.2 |
+| SCDA [46] | ✓ | ✓ | 42.9 |
+| CDN | ✓ | | 40.2 |
+| CDN | | ✓ | 43.1 |
+| CDN | ✓ | ✓ | 44.9 |
+
+Table 2: KITTI to Cityscapes.
+
+| Method | Global | Instance | mAP (%) |
+|---|---|---|---|
+| Source only | | | 34.3 |
+| DA-Faster [4] | ✓ | ✓ | 38.3 |
+| SWDA [33] | ✓ | ✓ | 47.7 |
+| SCDA [46] | ✓ | ✓ | 44.1 |
+| CDN | ✓ | | 41.2 |
+| CDN | | ✓ | 45.8 |
+| CDN | ✓ | ✓ | 49.3 |
+
+Table 3: SIM10K to Cityscapes.
+
+| Model | Method | Virtual KITTI | Synscapes | SIM10K | All |
+|---|---|---|---|---|---|
+| Faster-R18 | Source | 9.8 | 24.5 | 37.7 | 38.2 |
+| Faster-R18 | SWDA [33] | 15.6 | 27.0 | 40.2 | 41.3 |
+| Faster-R18 | CDN | 17.5 | 29.1 | 42.7 | 43.6 |
+| Faster-R18 | Target | 70.5 | 70.5 | 70.5 | 70.5 |
+| Faster-R50 | Source | 13.9 | 29.1 | 41.6 | 42.8 |
+| Faster-R50 | SWDA [33] | 19.7 | 31.5 | 42.9 | 44.3 |
+| Faster-R50 | CDN | 21.8 | 33.4 | 45.3 | 47.2 |
+| Faster-R50 | Target | 75.6 | 75.6 | 75.6 | 75.6 |
+
+Table 4: Adaptation from different synthetic data to real data. mAP on car is reported on BDD100K validation. The results of supervised training on BDD100K are highlighted in gray.
+
+
+Fig.3: Example results on Foggy Cityscapes/Synscapes/SIM10K/BDD100K (from top to bottom). The results are produced by a Faster R-CNN model incorporated with CDN. The class and score predictions are at the top left corner of the bounding box. Zoom in to visualize the details.
+
+contributes a larger performance boost than global alignment. This can be attributed to the fact that point cloud data spread over a huge 3D space while most of the information is stored in the local foreground points (see Fig. 4).
+
+
+Fig. 4: Top:PreSIL; Bottom:KITTI.
+
+| Method | Global | Instance | Car | Pedestrian | Cyclist |
+|---|---|---|---|---|---|
+| Source | | | 15.7 | 9.6 | 5.6 |
+| CycleGAN [35] | ✓ | | 16.5 | 10.3 | 5.9 |
+| PointDAN [30] | ✓ | | 17.1 | 10.9 | 7.5 |
+| CDN | ✓ | | 17.3 | – | 6.0 |
+| CDN | | ✓ | 18.5 | 12.8 | 8.7 |
+| CDN | ✓ | ✓ | 19.0 | 13.2 | 9.1 |
+| Target | | | 75.7 | 41.7 | 59.6 |
+
+Table 5: Adapting from synthetic (PreSIL) to real (KITTI) point clouds. AP at the moderate difficulty level on the KITTI test set is reported.
+
+# 6 Analysis
+
+# 6.1 Visualize and Analyze the Feature Maps
+
+Beyond its effectiveness on various benchmarks, we are also interested in the underlying principle of CDN. We interpret the learned domain embedding by appending a decoder network after the backbone to reconstruct RGB images from the feature maps. As shown in Fig. 5, the top row shows the original inputs from Foggy Cityscapes, SIM10K and Synscapes (left to right), and the bottom row shows the images reconstructed from the corresponding features encoded with the domain embedding of another domain. The reconstructed images carry the domain style of the other domain, suggesting the learned domain embedding captures the domain attribute information and CDN can effectively transform the domain style across domains.
+
+
+Fig. 5: Top row: Original inputs from Foggy Cityscapes, SIM10K and Synscapes (left to right); Bottom row: Reconstructed images from features encoded with the learned domain embedding of another domain.
+
+Furthermore, we compute the Fréchet Inception Distance (FID) [14] to quantitatively investigate the difference between source and target features. FID is a popular metric for evaluating the style similarity between two groups of images in GANs; a lower FID score indicates a smaller style difference. For easy comparison, we normalize the FID score to $[0, 1]$ by dividing by the maximum score. As shown in Table 6, features learned with CDN achieve a significantly smaller FID score than features learned on the source domain only, suggesting CDN effectively reduces the domain gap in feature space. As expected, supervised joint training on source and target data obtains the smallest FID score, which is consistent with the best detection performance being achieved by joint training. As shown in Fig. 6, synthetic-to-real benchmarks have larger FID scores than real-to-real ones, since the former have larger domain gaps.
+
+| Method | SIM to BDD FID | SIM to BDD mAP | City to Foggy FID | City to Foggy mAP |
+|---|---|---|---|---|
+| Source | 0.94 | 37.7 | 0.83 | 26.1 |
+| Joint training | 0.67 | 79.3 | 0.41 | 49.5 |
+| SWDA [33] | 0.83 | 40.2 | 0.76 | 34.8 |
+| CDN | 0.71 | 42.7 | 0.60 | 36.6 |
+
+Table 6: FID score and mAP.
+
+
+Fig. 6: FID scores on all datasets.
+
+# 6.2 Analysis on Domain Discrepancy
+
+We adopt the symmetric Kullback-Leibler (KL) divergence to investigate the discrepancy between the source and target domains in feature space. To simplify the analysis, we assume the source and target features are drawn from multivariate normal distributions. The divergence is calculated on the Res5-3 features and plotted in log scale. Fig. 7 (a) and (c) show that the domain divergence keeps decreasing during training, indicating that Conditional Domain Normalization continually reduces the domain shift in feature space. Benefiting from this reduction in domain divergence, the adaptation performance on the target domain keeps increasing. Compared with SWDA, CDN achieves lower domain discrepancy and higher adaptation performance.
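
Under the Gaussian assumption above, the symmetric KL divergence has a closed form; a small NumPy sketch, with random feature matrices standing in for the Res5-3 features:

```python
import numpy as np

def kl_gauss(mu_p, cov_p, mu_q, cov_q):
    # Closed-form KL(p || q) between two multivariate normal distributions.
    d = len(mu_p)
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff - d
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def sym_kl(feat_s, feat_t):
    # Symmetric KL between Gaussian fits of source/target feature
    # matrices, each of shape (n_samples, dim).
    mu_s, cov_s = feat_s.mean(0), np.cov(feat_s, rowvar=False)
    mu_t, cov_t = feat_t.mean(0), np.cov(feat_t, rowvar=False)
    return kl_gauss(mu_s, cov_s, mu_t, cov_t) + kl_gauss(mu_t, cov_t, mu_s, cov_s)

rng = np.random.default_rng(3)
feat_a = rng.standard_normal((500, 4))
feat_b = rng.standard_normal((500, 4)) + 2.0  # shifted "target" features
print(sym_kl(feat_a, feat_a) < sym_kl(feat_a, feat_b))  # True
```

A well-aligned feature space drives this quantity toward zero, which is the trend Fig. 7 (a) and (c) plot over training.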
+
+Fig. 7 (b) and (d) show the t-SNE plots of instance features extracted by a Faster R-CNN model incorporating CDN. The same-category features from the two domains group into tight clusters, suggesting the source and target distributions are well aligned in feature space. Besides, features of different categories have clear decision boundaries, indicating that discriminative features are learned by our method. These two factors together contribute to the performance boost on the target domain.
+
+# 7 Ablation Study
+
+For the ablation study, we use a Faster R-CNN model with ResNet-18 on SIM10K-to-BDD100K adaptation benchmark, and a Faster R-CNN model with VGG16
+
+
+Fig. 7: (a)(c) Domain divergence and adaptation performance on City-to-Foggy and CG-to-Real. (b)(d) t-SNE plots of instance features on the two benchmarks.
+
+on Cityscapes-to-Foggy Cityscapes adaptation benchmark. G and I denote adopting CDN in the backbone and bounding box head network, respectively.
+
+
+Fig. 8: (a) Adopting CDN at different convolution stages of ResNet; (b) Adopting CDN in existing adaptation frameworks; (c) Domain embedding vs. semantic features.
+
+Adopting CDN at different convolution stages. Fig. 8(a) compares Faster R-CNN models adopting CDN at different convolution stages. We follow [13] to divide ResNet into 5 stages; Bbox head denotes the bounding box head network. From left to right, adding more CDN layers keeps boosting the adaptation performance on both benchmarks, benefiting from adaptive distribution alignment across representations at different levels. This suggests that adopting CDN at every convolution stage is a better choice than aligning domain distributions at only one or two specific stages.
+
+Adopting CDN in existing domain adaptation frameworks. Fig. 8(b) shows the results of adopting the CDN layer in existing adaptation methods such as SWDA [33] and SCDA [46]. Directly adopting CDN in SWDA and SCDA brings an average $1.3\%$ mAP improvement on the two adaptation benchmarks, suggesting CDN is more effective at addressing domain shift than traditional domain-confusion learning. This can be attributed to the fact that CDN disentangles the domain-specific factors from the semantic features by learning a domain-vector; leveraging the domain-vector to align the different domain distributions is therefore more efficient.
+
+Comparing domain embedding with semantic features. In Eq. 7, we can use either the semantic features $(v^{s},\hat{v}^{t})$ or the domain embeddings $(\mathbf{F}_d(v^s),\mathbf{F}_d(\hat{v}^t))$ as inputs to the discriminator. Fig. 8(c) compares the adaptation performance of the two choices. Although semantic features improve performance over the baseline, domain embeddings consistently achieve better results. This suggests the learned domain embedding captures the domain attribute information well, while being free from undesirable regularization tied to specific image contents.
+
+Value of $\lambda$. In Eq. 10, $\lambda$ controls the balance between the global and local regularization. Fig. 9 (left) shows the influence of different $\lambda$ values on adaptation performance. Because object detectors naturally focus more on local regions, stronger instance regularization contributes substantially to detection performance. In our experiments, $\lambda$ between 0.4 and 0.5 gives the best performance.
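Eq. 10 itself is not reproduced in this excerpt; as a hedged sketch of the balance described here, assume $\lambda$ weights the global (image-level) term and $1-\lambda$ the local (instance-level) term, so $\lambda < 0.5$ emphasizes instances (names and the exact combination are assumptions for illustration):

```python
def balanced_regularization(loss_global, loss_local, lam=0.45):
    """Hypothetical weighted combination of global and local
    regularization terms, matching the trade-off described in the
    text: smaller lam puts more weight on the instance-level term."""
    return lam * loss_global + (1.0 - lam) * loss_local
```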
+
+
+Fig. 9: Left: mAP vs. value of $\lambda$; Middle: mAP vs. percentage (%) of synthetic image data; Right: AP vs. percentage (%) of synthetic point cloud data.
+
+
+
+
+
+Scale of target domain dataset. Fig. 9 (middle/right) quantitatively investigates the relation between detection performance on real data and the percentage of synthetic data used for training. "All" means using the combination of 3 different synthetic datasets. A larger synthetic dataset provides better adaptation performance, for both 2D image and 3D point cloud detection.
+
+# 8 Conclusion
+
+We present Conditional Domain Normalization (CDN) to adapt object detectors across different domains. CDN embeds inputs from different domains into a shared latent space, where features from different domains carry the same domain attribute. Extensive experiments demonstrate the effectiveness of CDN on adapting object detectors for both 2D image and 3D point cloud detection tasks, and both quantitative and qualitative comparisons are conducted to analyze the features learned by our method.
+
+# References
+
+1. Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., Vaughan, J.W.: A theory of learning from different domains. Machine learning (2010)
+2. Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., Krishnan, D.: Unsupervised pixel-level domain adaptation with generative adversarial networks. In: CVPR (2017)
+3. Chen, X., Kundu, K., Zhu, Y., Berneshawi, A.G., Ma, H., Fidler, S., Urtasun, R.: 3d object proposals for accurate object class detection. In: Advances in Neural Information Processing Systems (2015)
+4. Chen, Y., Li, W., Sakaridis, C., Dai, D., Van Gool, L.: Domain adaptive faster r-cnn for object detection in the wild. In: CVPR (2018)
+5. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: CVPR (2016)
+6. Dumoulin, V., Shlens, J., Kudlur, M.: A learned representation for artistic style. arXiv preprint arXiv:1610.07629 (2016)
+7. Gaidon, A., Wang, Q., Cabon, Y., Vig, E.: Virtual worlds as proxy for multi-object tracking analysis. In: CVPR (2016)
+8. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: ICML (2015)
+9. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: CVPR (2012)
+10. Girshick, R.: Fast r-cnn. In: ICCV (2015)
+11. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR (2014)
+12. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: ICCV (2017)
+13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+14. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Advances in neural information processing systems (2017)
+15. Hoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., Efros, A.A., Darrell, T.: Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213 (2017)
+16. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: ICCV (2017)
+17. Hurl, B., Czarnecki, K., Waslander, S.: Precise synthetic image and lidar (presil) dataset for autonomous vehicle perception. In: 2019 IEEE Intelligent Vehicles Symposium (IV) (2019)
+18. Inoue, N., Furuta, R., Yamasaki, T., Aizawa, K.: Cross-domain weakly-supervised object detection through progressive domain adaptation. In: CVPR (2018)
+19. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
+20. James, S., Wohlhart, P., Kalakrishnan, M., Kalashnikov, D., Irpan, A., Ibarz, J., Levine, S., Hadsell, R., Bousmalis, K.: Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks. In: CVPR (2019)
+21. Johnson-Roberson, M., Barto, C., Mehta, R., Sridhar, S.N., Rosaen, K., Vasudevan, R.: Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? arXiv preprint arXiv:1610.01983 (2016)
+
+22. Kim, T., Jeong, M., Kim, S., Choi, S., Kim, C.: Diversify and match: A domain adaptive representation learning paradigm for object detection. In: CVPR (2019)
+23. Liu, Z., Miao, Z., Pan, X., Zhan, X., Lin, D., Yu, S.X., Gong, B.: Open compound domain adaptation. In: CVPR (2020)
+24. Long, M., Cao, Z., Wang, J., Jordan, M.I.: Conditional adversarial domain adaptation. In: Advances in Neural Information Processing Systems (2018)
+25. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. arXiv preprint arXiv:1903.07291 (2019)
+26. Peng, C., Xiao, T., Li, Z., Jiang, Y., Zhang, X., Jia, K., Yu, G., Sun, J.: Megdet: A large mini-batch object detector. In: CVPR (2018)
+27. Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., Wang, B.: Moment matching for multi-source domain adaptation. In: ICCV (2019)
+28. Peng, X.B., Andrychowicz, M., Zaremba, W., Abbeel, P.: Sim-to-real transfer of robotic control with dynamics randomization. In: ICRA. IEEE (2018)
+29. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems (2017)
+30. Qin, C., You, H., Wang, L., Kuo, C.C.J., Fu, Y.: Pointdan: A multi-scale 3d domain adaption network for point cloud representation. In: Advances in Neural Information Processing Systems (2019)
+31. Quionero-Candela, J., Sugiyama, M., Schwaighofer, A., Lawrence, N.D.: Dataset shift in machine learning. The MIT Press (2009)
+32. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems (2015)
+33. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Strong-weak distribution alignment for adaptive object detection. In: CVPR (2019)
+34. Sakaridis, C., Dai, D., Van Gool, L.: Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision (2018)
+35. Saleh, K., Abobakr, A., Attia, M., Iskander, J., Nahavandi, D., Hossny, M., Nahvandi, S.: Domain adaptation for vehicle detection from bird's eye view lidar point cloud data. In: ICCV Workshops (2019)
+36. Sankaranarayanan, S., Balaji, Y., Jain, A., Nam Lim, S., Chellappa, R.: Learning from synthetic data: Addressing domain shift for semantic segmentation. In: CVPR (2018)
+37. Shi, S., Wang, X., Li, H.: Pointrcnn: 3d object proposal generation and detection from point cloud. In: CVPR (2019)
+38. Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., Abbeel, P.: Domain randomization for transferring deep neural networks from simulation to the real world. In: IROS. IEEE (2017)
+39. Tsai, Y.H., Sohn, K., Schulter, S., Chandraker, M.: Domain adaptation for structured output via discriminative representations. arXiv preprint arXiv:1901.05427 (2019)
+40. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: CVPR (2017)
+41. Wang, X., Yu, K., Dong, C., Change Loy, C.: Recovering realistic texture in image super-resolution by deep spatial feature transform. In: CVPR (2018)
+42. Wrenninge, M., Unger, J.: Synscapes: A photorealistic synthetic dataset for street scene parsing. arXiv preprint arXiv:1810.08705 (2018)
+43. Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., Lipson, H.: Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579 (2015)
+
+44. Yu, F., Xian, W., Chen, Y., Liu, F., Liao, M., Madhavan, V., Darrell, T.: Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687 (2018)
+45. Yue, X., Wu, B., Seshia, S.A., Keutzer, K., Sangiovanni-Vincentelli, A.L.: A lidar point cloud generator: from a virtual world to autonomous driving. In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (2018)
+46. Zhu, X., Pang, J., Yang, C., Shi, J., Lin, D.: Adapting object detectors via selective cross-domain alignment. In: CVPR (2019)
+47. Zou, Y., Yu, Z., Vijaya Kumar, B., Wang, J.: Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In: ECCV (2018)
\ No newline at end of file
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/images.zip b/adaptingobjectdetectorswithconditionaldomainnormalization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bf75ba5fbc0d61da750859dc000ff4bc5c6df1e6
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f2cec6cf1aa3a863c5b2fc077cba517c1b83fb2e738438c54fa259ae9e70b92
+size 630379
diff --git a/adaptingobjectdetectorswithconditionaldomainnormalization/layout.json b/adaptingobjectdetectorswithconditionaldomainnormalization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..136f330a870202eeb61b97862219c2270e46ffd7
--- /dev/null
+++ b/adaptingobjectdetectorswithconditionaldomainnormalization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c605d378b37c4502d13e62f74f1756ccba99475c74528bcf6a8ef727fabe2cb
+size 422700
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_content_list.json b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4d32ccebb95fd5cec07fa74c48b9bb7579125df6
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98152fcf51e40ad631ae0841bf55e6ef74590aac10ac426d9404bbba7576cdaa
+size 82931
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_model.json b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ec7d0784e4cb4f5a53421a42a193622d09fc9324
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d79c03ab037bfb62d6e687121b77d2bd4d418eaa267735e47f76c1b14c7b64f8
+size 99910
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_origin.pdf b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c941c584b1094a6c07272a4ac1e7e9fa0d658252
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/ce8cfb4f-aab6-4458-b05a-8376aede26a3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:870da82eb4d87f4e5483d16a42db26518305ae941dcef3341acf8edba485d876
+size 3706609
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/full.md b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..510afe7913389d63695a123fecdd6f1b5aeaf302
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/full.md
@@ -0,0 +1,325 @@
+# Adaptive Computationally Efficient Network for Monocular 3D Hand Pose Estimation
+
+Zhipeng Fan $^{1}$ , Jun Liu $^{2\star}$ , and Yao Wang $^{1}$
+
+$^{1}$ Tandon School of Engineering, New York University, Brooklyn NY, USA {zf606, yw523}@nyu.edu
+
+$^{2}$ Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore jun.liu@sutd.edu.sg
+
+Abstract. 3D hand pose estimation is an important task for a wide range of real-world applications. Existing works in this domain mainly focus on designing advanced algorithms to achieve high pose estimation accuracy. However, besides accuracy, computation efficiency, which affects computation speed and power consumption, is also crucial for real-world applications. In this paper, we investigate the problem of reducing the overall computation cost while maintaining high accuracy for 3D hand pose estimation from video sequences. A novel model, called the Adaptive Computationally Efficient (ACE) network, is proposed, which takes advantage of a Gaussian kernel based Gate Module to dynamically switch the computation between a light model and a heavy network for feature extraction. Our model employs the light model to compute efficient features for most of the frames and invokes the heavy model only when necessary. Combined with the temporal context, the proposed model accurately estimates the 3D hand pose. We evaluate our model on two publicly available datasets and achieve state-of-the-art performance at $22\%$ of the computation cost of traditional temporal models.
+
+Keywords: 3D Hand Pose Estimation, Computation Efficiency, Dynamic Adaption, Gaussian Gate
+
+# 1 Introduction
+
+Understanding human hand poses is a long-standing problem in the computer vision community, due to the great number of potential applications in action recognition, AR/VR [28], robotics, and human-computer interaction (HCI) [11]. The problem of inferring the 3D configuration of human hands from images and videos is inherently challenging because of frequent self-occlusion and the large variance of hand poses. A large body of existing work addresses the problem of hand pose estimation from depth data [7,37], as it reduces ambiguities in the
+
+
+Fig. 1: Illustration of our Adaptive Computationally Efficient (ACE) network. Most of the time, the LSTM takes features from the coarse pose encoder and refines the predicted pose. Occasionally, when the pose varies greatly or is severely occluded, the Gaussian Kernel Gate opts to compute fine features with the computationally heavy model to inject more accurate features into the LSTM.
+
+depth dimension and makes it easier to acquire the 3D poses of the corresponding hand. However, depth cameras, such as Kinect, are not always available and are prone to measurement errors if deployed in outdoor settings. Therefore, in this work we address the problem of 3D hand pose estimation with a monocular RGB commercial camera.
+
+Recent successes in 3D hand pose estimation [2,3,22,26,46] mainly rely on employing the same computation framework for all video frames, without considering the redundancy across adjacent frames or the variation of pose estimation difficulty over frames. The moving speed and occlusion status of human hands vary when performing different actions, which inspires us to design a new scheme that dynamically allocates computation resources based on the ambiguity determined by the current input frame and the temporal context. This kind of adaptation mechanism is useful for both online and offline applications. For offline pose estimation from videos, being able to use a simpler computation module for most of the frames reduces resource usage and the total inference time for the entire video. For online pose estimation applications (e.g., HCI and robots), multiple tasks often run concurrently under a total computation resource constraint, so the resources saved at most frames can be released for other important tasks at those time steps, which also reduces the energy consumed by the pose estimation task.
+
+Motivated by our idea of reducing computation consumption, and given the fact that the information among video frames can be redundant and the pose estimation difficulty varies over frames, we propose a novel Adaptive Computationally Efficient (ACE) network using a recurrent 3D hand pose estimator with adaptive input. In our method, we design two base pose encoders based on the hourglass (HG) [27] architecture with different computational costs. A Long Short-Term Memory (LSTM) [14] model is introduced to refine the predicted pose and features from the single-frame base pose encoder by considering temporal consistency. We propose a new Gaussian Gate Module to automatically determine whether the output of the low-complexity coarse encoder alone is sufficient for the LSTM, or whether the high-complexity fine encoder is needed. The fine encoder is only invoked when necessary, and its output is combined with the output of the coarse encoder to generate the input for the LSTM. The proposed network architecture is illustrated in Fig. 1. To facilitate the training of our switch module, which is naturally a discrete operation, an effective Gumbel-SoftMax strategy is introduced as an approximation of sampling from discrete distributions.
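The Gumbel-SoftMax relaxation mentioned above can be sketched generically in NumPy (an illustration of the technique only, not the authors' code; the straight-through "hard" variant often used for discrete gating during the forward pass is omitted):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximation to sampling from a categorical
    distribution: perturb the logits with Gumbel(0, 1) noise, then
    apply a temperature-scaled softmax. Lower tau -> closer to a
    one-hot (discrete) sample."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))  # u in (0, 1)
    g = -np.log(-np.log(u))                            # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())                            # stable softmax
    return y / y.sum()
```

With a two-way gate (coarse vs. fine), the two logits play the role of the switch decision, and the relaxed sample lets gradients flow through the otherwise discrete choice.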
+
+To summarize, a novel end-to-end ACE network is proposed for 3D hand pose estimation from monocular video. It dynamically switches between using coarse vs. fine features at each time step, which eliminates the computational cost of the fine encoder when the prediction from the coarse encoder is deemed sufficient. We evaluate our network on two widely used datasets, First-Person Hand Action (FPHA) and Stereo Tracking Benchmark (STB), and obtain state-of-the-art pose estimation accuracy while greatly reducing the overall computation cost (around $78\%$ on the STB dataset) compared to baseline models that constantly use the fine encoder at all time steps.
+
+# 2 Related Work
+
+Most of the existing works focus on the accuracy of 3D hand pose estimation without explicitly considering the important computation cost issue. We will briefly review the recent works in both the 3D hand pose estimation domain as well as the recent endeavor in designing computationally efficient architectures for image and video understanding.
+
+3D hand pose estimation. 3D hand pose estimation is a long-standing problem in the computer vision domain, and various methods have been proposed. We restrict ourselves to the more recent deep learning based approaches, since they are more closely related to our work.
+
+A large body of the work on hand pose estimation operates on depth input, which greatly reduces the depth ambiguity of the task. Deephand proposes a ConvNet model with an additional matrix completion algorithm to retrieve the actual poses [34]. Volumetric representations have recently been adopted to better encode the depth image [7,8]. In [7], the volumetric representation is projected to multiple views, processed by several 2D ConvNets, and then fused. Rather than tedious projections to multiple views, a 3D ConvNet is directly introduced to infer the 3D position from the volumetric representations [8]. This line of work is further developed in [9], in which the completeness of the 3D hand surface is leveraged as additional supervision. Rather than volumetric representations, the skeleton annotation can be represented as dense pixel-wise labels [37]; the predicted dense estimations are then converted back to 3D coordinates with a vote-casting mechanism. Recently, self-supervised methods have also been explored on a mixture of synthetic and unlabelled data, using approximate depth and kinematic feasibility as weak supervision [36].
+
+Rather than performing pose estimation on depth data, we focus on works with RGB inputs, which are often less restricted in real-world applications. Zimmermann and Brox proposed a multi-stage network that performs hand segmentation, localization, and 2D and 3D pose estimation one by one [46]. Similar to the depth-based methods, depth regularization has been employed to enable weakly supervised learning [2]. Instead of regressing the joint positions independently, a kinematic model can be naturally integrated to yield anatomically plausible results [26]. A latent 2.5D representation is introduced in [16], where the ConvNet also learns the implicit depth map of the entire palm. Numerous graphical models have also been proposed to better handle the joint relationships [3,22]. Spatial dependencies and temporal consistencies can be modeled explicitly with graph neural nets [3] and can further boost the quality of the features estimated [22] by hourglass models [27]. Another line of work reconstructs the shape and the pose of hands at the same time [1,10,25,42,45], in which either a hand mesh model [25,33] or a generative GNN [45] is leveraged to map the low-dimensional hand pose & shape manifold to full 3D meshes.
+
+Despite all the success in accurate hand pose estimation, we argue that the efficiency problem is also of vital importance, especially for AR/VR [28] and mobile devices [11], where resources are often limited. To harvest the redundancy present in consecutive frames, we propose an adaptive dynamic gate to efficiently switch between a light pose estimator and a computationally heavy one for 3D hand pose estimation from sequences of frames.
+
+Computationally efficient architectures. Recent progress has shown that the computation efficiency of neural network models can be improved in various ways. Neural network pruning was first realized using second-order derivatives [13,19] and later evolved into pruning weights with relatively small magnitudes [12]. Different from pruning techniques that operate on fully trained models [12,13,19], recent developments reveal that pruning while training often yields better performance, achieved by enforcing additional losses ($L_1$ norm [23], Group LASSO [39], or $L_0$ norm approximations [24]) during training. Other innovative ideas include specially designed architectures for high-efficiency computing [15,44] and network quantization [4,5,20,31].
+
+In videos, consecutive frames are often quite similar and strongly co-dependent, which leaves considerable room for efficiency optimization. Recently, various works have been developed to improve computation efficiency for video classification [18,29,38,40]. Leveraging the fact that most of the computationally expensive layers (without activation) are linear and that sparse feature updates are more efficient, a recurrent residual model was introduced [29] to incur a minimal amount of feature updates between consecutive frames. Hierarchical coarse-to-fine architectures have also been introduced for more efficient video inference [40]. Recently, RL frameworks have been adopted to learn an efficient sampling agent that filters out salient parts/frames from videos for fast recognition [18,41].
+
+In this work, we address the problem of dense hand pose estimation from video sequences, where we need to derive the corresponding pose for each individual frame. We take advantage of the fact that most of the time, when the motion of the hand is not extreme and the hand pose is not severely occluded, the 3D hand pose can be safely derived from the temporal context. We thus propose a novel Gaussian Kernel based Adaptive Dynamic Gate module that explicitly measures the necessity of computing fine features with a costly model, which significantly reduces the total amount of computation. Our scheme is also orthogonal to many of the aforementioned methods, such as pruning, leaving the potential to further boost efficiency.
+
+# 3 Method
+
+# 3.1 Overview
+
+Given a sequence of video frames $\{I^t\}_{t=1}^T$ , our task is to infer the 3D pose $\mathbf{P}^t = \{P_k^t\}_{k=1}^K$ of the hand at each frame $t$ , where $K$ denotes the number of hand joints, and $P_k^t$ denotes the 3D position of the joint $k$ at frame $t$ .
+
+The overall pipeline of our proposed ACE network is illustrated in Fig. 1. In our method, at each time step, either a less accurate yet computationally light model or an accurate but computationally heavy model can be selected as the pose encoder for the RGB input. The features from either model are fed into an LSTM to refine the inferred features and the estimated pose based on temporal coherence. To reduce the computation cost, inspired by the idea that temporal context can provide sufficient information when the motion of the target hand is slow or the pose is less challenging, we propose a novel Gaussian Kernel based gate module as the key component of our ACE network, which compares the temporal context information provided by the LSTM with the coarse features computed by the light encoder to assess the necessity of extracting fine features with the heavier encoder for the current time step. Below we introduce each component in more detail.
+
+# 3.2 Single Frame Hand Pose Estimator
+
+Fig. 2: Graphical illustration of (a) the architecture of the single-frame pose encoder and (b) the schema of the Gaussian Kernel based gate.
+
+We first introduce two base pose encoders: a coarse pose encoder and a fine pose encoder, which have significantly different computation profiles for a single frame. Both models are constructed with the state-of-the-art hourglass (HG) network [27]. Furthermore, as illustrated in Fig. 2a, we augment it to directly regress the hand joint coordinates $\mathbf{P}^t$ via a ConvNet from the heat map of joint probabilities $\mathbf{H}^t = \{H_k^t\}_{k=1}^K$, the output feature map of the HG, as well as feature maps from early downsampling layers. The complexity of the models is adjusted by changing the number of convolutional layers and the size of the inputs of the hourglass module. We denote the light-weight coarse pose encoder as $\mathbf{M}_{\mathrm{Coarse-Enc}}$ and the heavy model as $\mathbf{M}_{\mathrm{Fine-Enc}}$. These encoders extract pose-related features, $\mathbf{F}_{\mathrm{coarse}}^t$ and $\mathbf{F}_{\mathrm{fine}}^t$, from the input frame $I^t$, as follows:
+
+$$
+\mathbf{F}_{\text{coarse}}^{t} = \mathbf{M}_{\text{Coarse-Enc}}\left(I^{t}\right) \tag{1}
+$$
+
+$$
+\mathbf{F}_{\text{fine}}^{t} = \mathbf{M}_{\text{Fine-Enc}}\left(I^{t}\right) \tag{2}
+$$
+
+Note that in our final ACE network with the gate mechanism, we compute the coarse features $\left(\mathbf{F}_{\mathrm{coarse}}^{t}\right)$ for each frame, while the fine features $\left(\mathbf{F}_{\mathrm{fine}}^{t}\right)$ are computed for a fraction of time only, thus reducing the overall computation cost.
+
+# 3.3 Pose Refinement Recurrent Model
+
+In pose estimation from videos, a natural idea is to exploit the temporal context for smoother and more accurate estimations; i.e., instead of solely relying on the information of the current frame, historical context can also be incorporated to reduce the ambiguities in pose estimation [21,30,32,35]. We thus introduce an LSTM model to refine the estimations from the hourglass modules. The LSTM module, denoted as $\mathrm{M_{LSTM}}$, takes the sequential features from the pose encoder as inputs and refines these features using the temporal context.
+
+More formally, at the $t$-th time step, the LSTM takes the pose-related features of the current frame as input and infers the 3D pose $\mathbf{P}^t$ for the current step based on the hidden state, as follows:
+
+$$
+h^{t}, c^{t} = \mathrm{M}_{\text{LSTM}}\left(\mathbf{F}_{\text{frame}}^{t}, \left(h^{t-1}, c^{t-1}\right)\right) \tag{3}
+$$
+
+$$
+\mathbf{P}^{t} = W_{\text{pose}}^{\top} h^{t} + b_{\text{pose}} \tag{4}
+$$
+
+where $h^t$ and $c^t$ are the hidden state and cell state of the LSTM module, respectively, and $W_{\mathrm{pose}}$ and $b_{\mathrm{pose}}$ are the parameters of the output linear layer for regressing the final 3D hand joint coordinates. Here we denote the features from the single-frame pose estimator as $\mathbf{F}_{\mathrm{frame}}^t$, which are controlled by our adaptive dynamic gate model (introduced next) and can be either the coarse features $\mathbf{F}_{\mathrm{coarse}}^t$ or a weighted combination of the coarse features $\mathbf{F}_{\mathrm{coarse}}^t$ and the fine features $\mathbf{F}_{\mathrm{fine}}^t$.
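Eqs. 3-4 amount to a standard LSTM cell followed by a linear pose readout; the following is a minimal NumPy sketch of one refinement step (parameter names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_refine_step(F_frame, h_prev, c_prev, params):
    """One step of Eqs. 3-4: an LSTM cell over the frame features,
    then a linear readout of the flattened 3D joint coordinates.
    params = (W, U, b, W_pose, b_pose) with W: (4H, D), U: (4H, H),
    b: (4H,), W_pose: (H, P), b_pose: (P,)."""
    W, U, b, W_pose, b_pose = params
    H = h_prev.shape[0]
    z = W @ F_frame + U @ h_prev + b              # stacked gate pre-activations
    i = sigmoid(z[:H])                            # input gate
    f = sigmoid(z[H:2 * H])                       # forget gate
    o = sigmoid(z[2 * H:3 * H])                   # output gate
    g = np.tanh(z[3 * H:])                        # candidate cell update
    c = f * c_prev + i * g                        # new cell state
    h = o * np.tanh(c)                            # new hidden state
    pose = W_pose.T @ h + b_pose                  # Eq. 4: pose readout
    return h, c, pose
```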
+
+# 3.4 Adaptive Dynamic Gate Model
+
+Recall that when humans perform activities with their hands, the motion speed and the self-occlusion status of the hands vary across different activities and different frames. In some of the actions like "high five", the palm is often less occluded and the pose pattern is relatively static and simple, while in some other actions, like "open soda can" and "handshake", the human hand is often under severe occlusions and presents rich and delicate movements of the fingers.
+
+This inspires us to rely more on the temporal context information (and only take a brief glimpse at the current frame with the coarse pose encoder) when the pose pattern is simple and stable and can be safely derived from the temporal context. However, if the temporal context is not consistent with the current frame information, either the current frame is challenging for pose inference (i.e., the pose is inaccurately estimated by the coarse pose encoder while the temporal context is reliable) or it significantly differs from previous frames due to large motions (i.e., the temporal context becomes unstable); in either case, the network needs to examine the current frame more carefully using the fine pose encoder. Therefore, we propose an adaptive dynamic gate model in our ACE framework to dynamically determine the granularity of the features needed for pose estimation with our LSTM model.
+
+Assuming the motion of the hand is smooth, the first- and second-order statistics of the hand's status over different frames provide useful context for estimating the evolution of the hand pose over time. Accordingly, we compute the first-order difference $(h^{t'})$ and second-order difference $(h^{t''})$ over the historical hidden states of the LSTM to estimate the motion status of the hand pose as:
+
+$$
+h ^ {t ^ {\prime}} = h ^ {t} - h ^ {t - 1} \tag {5}
+$$
+
+$$
+h ^ {t ^ {\prime \prime}} = \left(h ^ {t} - h ^ {t - 1}\right) - \left(h ^ {t - 1} - h ^ {t - 2}\right) \tag {6}
+$$
+
+At time step $t$, we feed the hidden state of the previous frame $(h^{t-1})$, as well as its first- and second-order differences $(h^{t-1^{\prime}}$ and $h^{t-1^{\prime\prime}})$, as the historical context to our gate module, which then estimates the pose feature information of the current frame $(t)$ with a sub-network, as follows:
+
+$$
+\widetilde {\mathbf {F}} ^ {t} = W _ {g} ^ {\top} \left[ h ^ {t - 1}, h ^ {t - 1 ^ {\prime}}, h ^ {t - 1 ^ {\prime \prime}} \right] + b _ {g} \tag {7}
+$$
+
+We then measure the similarity of the predicted pose feature information $(\widetilde{\mathbf{F}}^t)$, which is estimated entirely from the temporal context of previous frames, with the pose features $(\mathbf{F}_{\mathrm{coarse}}^t)$ extracted by the coarse pose encoder solely from the current frame $I^t$, via a Gaussian kernel with a fixed spread $\omega$, as follows:
+
+$$
+\mathcal{G}_{\mathrm{coarse}}^{t} = \left[ \exp\left(- \frac{(\widetilde{\mathbf{F}}^{t} - \mathbf{F}_{\mathrm{coarse}}^{t})^{2}}{\omega^{2}}\right) \right]_{\mathrm{Mean}} \tag{8}
+$$
+
+This Gaussian-kernel-based gate outputs a mean value $(\mathcal{G}_{\mathrm{coarse}}^{t})$ between 0 and 1, which provides an explicit measurement of the consistency and similarity between $\widetilde{\mathbf{F}}^t$ and $\mathbf{F}_{\mathrm{coarse}}^t$, implying the pose estimation difficulty of the current frame, i.e., a higher $\mathcal{G}_{\mathrm{coarse}}^{t}$ value indicates a simple pose and stable movement of the hand.
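
The context prediction and consistency check of Eqs. (5)-(8) can be sketched as follows. This is a toy-sized illustration; the dimensions, weight initialization, and variable names are our assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy feature/hidden dimension (the paper uses 256)

def gaussian_gate(h1, h2, h3, F_coarse, W_g, b_g, omega=0.1):
    """h1, h2, h3 stand for h^{t-1}, h^{t-2}, h^{t-3}."""
    d1 = h1 - h2                    # first-order difference, Eq. (5)
    d2 = (h1 - h2) - (h2 - h3)      # second-order difference, Eq. (6)
    ctx = np.concatenate([h1, d1, d2])
    F_pred = W_g.T @ ctx + b_g      # predicted pose feature, Eq. (7)
    # Eq. (8): mean of an element-wise Gaussian kernel, value in (0, 1]
    return float(np.mean(np.exp(-((F_pred - F_coarse) ** 2) / omega ** 2)))

W_g = rng.standard_normal((3 * D, D)) * 0.1
b_g = np.zeros(D)
h1, h2, h3 = (rng.standard_normal(D) for _ in range(3))
g_coarse = gaussian_gate(h1, h2, h3, rng.standard_normal(D), W_g, b_g)
assert 0.0 < g_coarse <= 1.0
```

A frame whose coarse features exactly match the context prediction yields a score of 1, so the fine encoder can be skipped; a large discrepancy drives the score toward 0.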
+
+If the hand pose status changes substantially at this step and the pose feature becomes unpredictable from the temporal context, or if the pose at the current frame becomes challenging, rendering the pose features $\left(\mathbf{F}_{\mathrm{coarse}}^{t}\right)$ extracted by the coarse pose encoder unreliable and therefore inconsistent with the temporal context, then the discrepancy between $\widetilde{\mathbf{F}}^t$ and $\mathbf{F}_{\mathrm{coarse}}^t$ grows larger, and our Gaussian gate outputs a relatively small value close to 0.
+
+With an estimate of the difficulty of the current frame, we then decide whether to employ the more powerful fine pose encoder to carefully examine the input frame at the current time step. Specifically, we use $\mathcal{G}_{\mathrm{coarse}}^t$ from our Gaussian gate as the confidence score for staying with the coarse pose encoder at the current time step, and naturally $\mathcal{G}_{\mathrm{fine}}^t = 1 - \mathcal{G}_{\mathrm{coarse}}^t$ becomes the score for invoking the more powerful fine pose encoder.
+
+A straightforward switching mechanism would be to directly follow the option with the larger confidence score, i.e., if $\mathcal{G}_{\mathrm{fine}}^t > \mathcal{G}_{\mathrm{coarse}}^t$, we invoke the fine pose encoder for the current frame. However, this switching is a discrete operation and is not differentiable. To facilitate network training, following recent work on reparameterization of the categorical distribution [17], we reparameterize the Bernoulli distribution with the Gumbel-Softmax trick, which introduces a simple yet efficient way to draw samples $z$ from a categorical distribution parameterized by the unnormalized probabilities $\pi$. Specifically, we can approximately sample from $\pi_i$ as follows:
+
+$$
+z = \underset{i \in \mathcal{M}}{\operatorname{argmax}} \left[ g_{i} + \log \pi_{i} \right] \quad \mathcal{M} = \{\text{coarse}, \text{fine}\} \tag{9}
+$$
+
+where at each time step $t$, we set $\pi_i^t = -\log (1 - \mathcal{G}_i^t)$, which is the unnormalized version of the predicted probability $\mathcal{G}_i^t$ of the Bernoulli distribution, with $\mathcal{G}_i^t \in \{\mathcal{G}_{\mathrm{coarse}}^t, \mathcal{G}_{\mathrm{fine}}^t\}$. $g_i$ is the Gumbel noise. Here we draw samples from the Gumbel distribution following $g_i = -\log (-\log (u_i))$, where the $u_i$ are i.i.d. samples drawn from Uniform(0, 1). We further relax the non-differentiable argmax operation with softmax to facilitate back-propagation. The final sampled probability is obtained with:
+
+$$
+z_{i}^{t} = \frac{\exp \left(\left(g_{i} + \log \pi_{i}^{t}\right) / \tau\right)}{\sum_{j} \exp \left(\left(g_{j} + \log \pi_{j}^{t}\right) / \tau\right)} \quad \text{for } i, j \in \{\text{coarse}, \text{fine}\} \tag{10}
+$$
+
+where $\tau$ is the temperature hyper-parameter, which controls the discreteness of the sampling mechanism. When $\tau \to \infty$, the sampling approaches uniform sampling, and when $\tau \to 0$, it approaches the argmax operation while still allowing the gradient to be back-propagated.
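
A sketch of this relaxed sampling (Eqs. (9)-(10)), written with NumPy for illustration; the epsilon terms guarding the logarithms are our addition:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_gate(g_coarse, tau=1.0, eps=1e-8):
    """Relaxed Bernoulli sample over {coarse, fine} (Eqs. 9-10)."""
    G = np.array([g_coarse, 1.0 - g_coarse])       # [G_coarse, G_fine]
    log_pi = np.log(-np.log(1.0 - G + eps) + eps)  # log of pi_i = -log(1 - G_i)
    u = rng.uniform(size=2)                        # u_i ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(u))                   # g_i, Gumbel(0, 1) noise
    logits = (gumbel + log_pi) / tau
    z = np.exp(logits - logits.max())              # numerically stable softmax
    return z / z.sum()                             # [z_coarse, z_fine]

z = gumbel_softmax_gate(0.9, tau=0.5)
assert abs(z.sum() - 1.0) < 1e-6  # a valid two-way soft decision
```

As `tau` shrinks, `z` approaches a one-hot decision while gradients still flow through the softmax.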
+
+During training, we obtain the confidence scores of taking a rough glimpse of the input frame with the coarse pose encoder or using the carefully derived features from the fine encoder, via the Gumbel-Softmax trick following Eq. (10), and then combine the coarse features $\mathbf{F}_{\mathrm{coarse}}^{t}$ and fine features $\mathbf{F}_{\mathrm{fine}}^{t}$ as:
+
+$$
+\mathbf{F}_{\mathrm{weighted}}^{t} = z_{\mathrm{coarse}}^{t} \mathbf{F}_{\mathrm{coarse}}^{t} + z_{\mathrm{fine}}^{t} \mathbf{F}_{\mathrm{fine}}^{t} \tag{11}
+$$
+
+During evaluation, we omit the sampling process and directly use the coarse features when $\mathcal{G}_{\mathrm{fine}}^t \leq \lambda$, and use the weighted average of the features when $\mathcal{G}_{\mathrm{fine}}^t > \lambda$, with weights $\mathcal{G}_{\mathrm{fine}}^t$ and $\mathcal{G}_{\mathrm{coarse}}^t$. In general, $\lambda$ is set to 0.5, which essentially follows the larger probability. This threshold $\lambda$ can also be tweaked to balance accuracy and efficiency during inference.
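
The inference-time decision can be sketched as below. Function and variable names are ours; the point is that the fine encoder is only evaluated when the gate fires, which is where the computation savings come from:

```python
import numpy as np

def select_features(F_coarse, g_fine, fine_encoder, lam=0.5):
    """Use coarse features alone when G_fine <= lambda; otherwise run the
    fine encoder and blend the two features with the gate scores."""
    if g_fine <= lam:
        return F_coarse                 # fine encoder skipped entirely
    F_fine = fine_encoder()             # heavy path, computed lazily
    return (1.0 - g_fine) * F_coarse + g_fine * F_fine

F_c = np.zeros(4)
# Easy frame: gate score below threshold, fine encoder never called.
assert np.allclose(select_features(F_c, g_fine=0.2, fine_encoder=None), F_c)
# Hard frame: features blended with weights (G_coarse, G_fine) = (0.2, 0.8).
blended = select_features(F_c, g_fine=0.8, fine_encoder=lambda: np.ones(4))
assert np.allclose(blended, 0.8)
```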
+
+# 3.5 Training Strategy and Losses
+
+We employ a two-step training strategy: we first train the single-frame coarse pose encoder and fine pose encoder separately, and then fine-tune them while training the LSTM pose refinement module and the adaptive gate module. To train the single-frame pose encoders, we use a combination of a 2D heat-map regression loss and a 3D coordinate regression loss:
+
+$$
+\mathcal{L}_{\mathrm{single}} = \frac{1}{K} \sum_{k=1}^{K} \left(\widetilde{H}^{k} - H^{k}\right)^{2} + \beta \cdot \mathrm{SmoothL1}(\widetilde{\mathbf{P}}, \mathbf{P}) \tag{12}
+$$
+
+where $H^{k}$ is the 2D heat map of joint $k$ and $\mathbf{P}$ denotes the 3D joint coordinates. We use the mean squared error loss for the heat maps and the Smooth L1 loss for the 3D coordinates; the latter has a squared term when the absolute element-wise difference is below 1 (otherwise it is essentially an L1 term).
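
A compact sketch of Eq. (12). The 0.5 factors in Smooth L1 follow the common convention; the paper only specifies the squared/linear switch at a difference of 1:

```python
import numpy as np

def smooth_l1(pred, target):
    """Squared for |d| < 1, linear (L1-like) otherwise."""
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).mean()

def single_frame_loss(H_pred, H_gt, P_pred, P_gt, beta=1.0):
    """Eq. (12): mean heat-map MSE over the K joints + beta * SmoothL1."""
    heatmap_mse = np.mean((H_pred - H_gt) ** 2)
    return heatmap_mse + beta * smooth_l1(P_pred, P_gt)

H = np.zeros((21, 64, 64))                  # 21 joint heat maps (toy size)
P = np.zeros((21, 3))                       # 21 3D joint coordinates
loss = single_frame_loss(H, H, P, P + 0.5)  # 0.5 offset on every coordinate
print(round(float(loss), 4))  # 0.125: squared branch of Smooth L1
```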
+
+The single-frame pose estimators are then fine-tuned when training the pose refinement LSTM and the gate module. To prevent the gate module from constantly using the fine features, we set an expected activation frequency $(\gamma_{g})$ for the gate and minimize the mean squared error between the mean probability of using the fine encoder and this expected activation frequency. Specifically, given the expected activation rate $\gamma_{g}$, we define the loss as:
+
+$$
+\mathcal{L}_{\mathrm{whole}} = \sum_{d \in \mathcal{S}} \mathrm{SmoothL1}(\widetilde{\mathbf{P}}_{d}, \mathbf{P}_{d}) + \delta \cdot \mathbb{E}_{z^{t} \sim \mathrm{Bernoulli}\left(\mathcal{G}^{t} \mid \theta_{g}\right)} \left(\frac{1}{T} \sum_{t=1}^{T} z_{\mathrm{fine}}^{t} - \gamma_{g}\right)^{2} \tag{13}
+$$
+
+where $\mathcal{S} = \{\text{coarse, fine, LSTM}\}$ and $z_{\mathrm{fine}}^{t}$ is the sampled probability based on the prediction $\mathcal{G}^t$ given by the adaptive dynamic gate model. $\theta_g$ denotes the parameters of the gate, and $\delta$ balances accuracy against efficiency.
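
The activation-frequency penalty in Eq. (13) reduces to a scalar regularizer on the per-sequence usage of the fine encoder. A sketch, where the default $\gamma_g = 0.05$ matches the STB setting reported in the implementation details:

```python
import numpy as np

def gate_budget_penalty(z_fine, gamma_g=0.05):
    """Squared gap between the mean fine-encoder usage over T frames
    and the target activation rate gamma_g (second term of Eq. 13)."""
    return (np.mean(z_fine) - gamma_g) ** 2

# Toy sequence of T = 20 frames where the fine encoder fired twice (10%).
z_fine = np.array([1, 1] + [0] * 18, dtype=float)
print(round(float(gate_budget_penalty(z_fine)), 6))  # (0.10 - 0.05)^2 = 0.0025
```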
+
+# 4 Experiments
+
+# 4.1 Datasets and Metrics
+
+We evaluate our ACE network on two publicly available datasets, namely the Stereo Tracking Benchmark (STB) [43] dataset and the First-Person Hand Action (FPHA) [6] dataset.
+
+Stereo Tracking Benchmark (STB) provides 2D and 3D pose annotations of the 21 hand keypoints for 12 stereo video sequences. Each sequence consists of 1500 RGB frames for both the left and right cameras. In total, the dataset consists of 18000 frames with a resolution of $640 \times 480$. Six different backgrounds are captured, with each background appearing in two video sequences. Following the setting of [46], we separate the dataset into a training set of 10 videos (15000 frames) and an evaluation set of 2 video sequences (3000 frames).
+
+First Person Hand Action (FPHA) contains video sequences of 45 different daily actions performed by 6 subjects in egocentric views. In total, FPHA contains more than $100\mathrm{k}$ frames with a resolution of $1920 \times 1080$. The ground truth is provided via a mo-cap system and derived with inverse kinematics. As in the STB dataset, 21 keypoints on the human hand are annotated. Interactions with 26 different objects are involved, which introduces additional challenges to hand pose estimation. We follow the official split of the dataset.
+
+Metrics. We report the Percentage of Correct Keypoints (PCK) under $20~\mathrm{mm}$ and the Area Under Curve (AUC) of the PCK under error thresholds from $20~\mathrm{mm}$ to $50~\mathrm{mm}$ for the STB dataset following [46], and from $0~\mathrm{mm}$ to $50~\mathrm{mm}$ for the FPHA dataset. We report average GFLOPs per frame for speed comparison, which does not depend on the hardware configuration and thus provides a more objective evaluation.
+
+# 4.2 Implementation Details
+
+Although the proposed ACE module is theoretically compatible with different pose encoder architectures, we mainly evaluate it with the hourglass (HG) architecture [27], as it is widely used and works well in many existing works [22, 45]. Compared to the FPHA dataset, STB is less challenging as no hand-object interaction is involved. Therefore, different HG architectures are employed for the two datasets. For the STB dataset, the coarse pose encoder contains one hourglass module with 32 feature channels, while the fine pose encoder employs 64 channels. In addition to the different module configurations, the input images to the coarse and fine modules are set to $64 \times 64$ and $256 \times 256$ respectively, which greatly reduces the amount of computation. For the more challenging FPHA dataset, we keep the configuration of the fine pose encoder the same as for STB, while for the coarse pose encoder, we double the input size to $128 \times 128$. Please see the supplementary materials for more details of the pose encoders.
+
+For the LSTM refinement module, we use one LSTM layer with a hidden state dimension of 256. The hidden state and its first- and second-order statistics are first mapped to a fixed dimension of 256 and then concatenated as the input to our adaptive Gaussian gate. During training, we set $\gamma_{g} = 0.05$ for STB, $\gamma_{g} = 0.01$ for FPHA, and $\omega = 0.1$.
+
+Table 1: Results of various models (vanilla single frame coarse/fine models and their variants considering temporal dynamics) for 3D hand pose estimation. Our adaptive model uses much less computation with minor accuracy drops.
+
+| Method | 3D PCK20 (STB) | AUC(20-50) (STB) | GFLOPs (STB) | 3D PCK20 (FPHA) | AUC(0-50) (FPHA) | GFLOPs (FPHA) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Coarse-HG | 85.1% | 0.946 | 0.28 | 72.6% | 0.674 | 1.10 |
+| Fine-HG | 96.3% | 0.994 | 6.96 | 79.7% | 0.714 | 6.96 |
+| Vanilla-LSTM-Coarse-HG | 92.1% | 0.973 | 0.28 | 78.9% | 0.707 | 1.10 |
+| Vanilla-LSTM-Fine-HG | 98.7% | 0.997 | 6.96 | 83.9% | 0.740 | 6.96 |
+| Vanilla-LSTM-Mix-HG | 98.7% | 0.997 | 7.24 | 83.1% | 0.734 | 8.06 |
+| Adaptive-LSTM-Mix-HG | 97.9% | 0.996 | 1.56 | 82.9% | 0.731 | 1.37 |
+
+# 4.3 Main Results
+
+We conduct extensive experiments to show the advantages of our proposed ACE framework for hand pose estimation from videos. We compare the accuracy and computation efficiency among different models and further visualize the prediction results of our model. To facilitate the understanding of the gate behaviour, we also present the frames selected for fine feature computation.
+
+
+(a) STB dataset
+
+
+(b) FPHA dataset
+Fig. 3: Quantitative evaluations. We achieve state-of-the-art performance on STB, and outperform the existing methods on FPHA by a large margin.
+
+Quantitative comparison. We present the comparison between our adaptive dynamic gate model and various baselines in Table 1, where Coarse-HG/Fine-HG indicates that the baseline pose encoder (hourglass structure) is employed to predict 3D joint coordinates frame by frame. For the Vanilla-LSTM variants, we take features from the coarse pose encoder, the fine pose encoder, or the average of the two, and feed them into an ordinary LSTM model without the gate module.
+
+Table 2: Comparison of the computation cost with the state of the art on STB. Our method achieves higher AUC while consuming significantly less computation.
+
+| Method | 3D PCK20 | AUC | GFLOPs |
+| --- | --- | --- | --- |
+| Z&B [46] | 0.870 | 0.948 | 78.2 |
+| Liu et al. [22] | 0.895 | 0.964 | 16.0 |
+| HAMR [45] | 0.982 | 0.995 | 8.0 |
+| Cai et al. [3] | 0.973 | 0.995 | 6.2 |
+| Ours | 0.979 | 0.996 | 1.6 |
+
+Fig. 4: Visualization of pose estimation. The top row shows input frames and the bottom row visualizes the predicted poses (red) and ground-truth poses (green).
+
+As shown in Table 1, our adaptive model obtains performance comparable to our baseline "Vanilla-LSTM-Fine-HG", which constantly uses the fine features for pose estimation, at less than $1/4$ of its computation cost, by computing the fine features only on selected frames. Besides, our proposed method obtains state-of-the-art performance on both benchmarks, as presented in Fig. 3a and 3b, where we plot the area under the curve (AUC) of the percentage of correct keypoints (PCK) at various thresholds.
+
+In addition to the comparison in terms of accuracy, we further evaluate the speed of our model against the existing art. The detailed comparison is presented in Table 2. As the FPHA dataset is relatively new and fewer works report performance on it, we mainly conduct this evaluation on the STB dataset.
+
+Visualization. To verify that our model accurately derives poses from RGB images, we visualize a few predictions of our network in Fig. 4. Our model is capable of inferring precise poses from RGB input images even under severe occlusion and challenging lighting conditions.
+
+We further look into the mechanism of the Gaussian-kernel-based gate by visualizing a few test sequences in Fig. 5. In (a), the fine pose encoder activates less often for the straightforward poses while being used more densely for the challenging poses near the end of the sequence. In (b), the gate tends to invoke the fine pose encoder more often when occlusion is present (1st half vs. 2nd half), while in (c) and (d), when large motion occurs (see the rightmost blurry frames of both sequences), the gate chooses to examine the frame more closely with the fine pose encoder. These observations are in line with our motivation to invoke the computationally heavy pose encoder only when necessary.
+
+
+Fig. 5: Visualization of frames selected (marked with yellow boxes) to adopt the fine pose encoder. The fine encoder activates sparsely when the pose is straightforward but is frequently used when the pose becomes challenging (left vs. right part of (a)). When the hand pose status becomes less stable (see the rightmost part of (c) and (d)) or occlusions become more severe (see the rightmost part of (b)), our model tends to use the fine encoder more frequently, while the frequency of invoking the fine pose encoder is much lower when the poses are relatively stable.
+
+Table 3: Evaluation of different gate architectures on the STB dataset. $P_{\mathrm{fine}}$ denotes the frequency of using the fine pose encoder. Our Gaussian kernel gate achieves the highest accuracy at the lowest computation cost.
+
+| Gate | $\gamma_g$ | 3D PCK20 | AUC | GFLOPs | $P_{\mathrm{fine}}$ |
+| --- | --- | --- | --- | --- | --- |
+| Neural Gate | 0.1 | 0.981 | 0.995 | 2.54 | 0.32 |
+| Neural Temporal Gate | 0.1 | 0.977 | 0.996 | 2.20 | 0.43 |
+| Gaussian Kernel Gate | 0.1 | 0.983 | 0.997 | 2.09 | 0.26 |
+
+# 4.4 Ablation Study
+
+We first study the design choice of the Gaussian-kernel-based adaptive gate. Instead of explicitly parameterizing the difference with a Gaussian function, a straightforward alternative is to directly predict the probability via a linear module. This linear module takes the hidden state, the 1st- and 2nd-order statistics, and the coarse feature as input, and yields the probability of introducing the fine module; we refer to this model as the Neural Gate. Going one step further, although the coarse pose encoder is relatively light, we could still gain efficiency by avoiding it and deriving the probability solely from the temporal context. Therefore, we also evaluate a model that makes decisions based on the temporal context only, referred to as the Neural Temporal Gate. The detailed results are in Table 3.
+
+As shown in Table 3, the different gates offer similar performance, while the Gaussian kernel gate is slightly more accurate and more efficient. We further investigate the impact of a few hyper-parameters on the overall performance. Specifically, we study $\gamma_{g}$ in Table 4 and $\lambda$ in Table 5, which can be tweaked to adjust the rate of computing fine features before and after training, respectively.
+
+When varying $\gamma_{g}$ from 0.3 to 0.01, the accuracy of the models does not vary much while the frequency of using fine features drops from 0.43 to 0.15, which suggests that the large amount of redundancy across consecutive frames is exploited by
+
+Table 4: Evaluation of different $\gamma_{g}$ values for network training on the STB dataset. As the expected usage of the fine pose encoder drops, the computation cost falls significantly, while the accuracy decreases only marginally.
+
+| $\gamma_g$ | 3D PCK20 | AUC | GFLOPs | $P_{\mathrm{fine}}$ |
+| --- | --- | --- | --- | --- |
+| 1 | 0.987 | 0.9978 | 6.96 | 1 |
+| 0.3 | 0.984 | 0.9972 | 3.30 | 0.43 |
+| 0.2 | 0.985 | 0.9973 | 2.34 | 0.29 |
+| 0.1 | 0.983 | 0.9970 | 2.09 | 0.26 |
+| 0.05 | 0.979 | 0.9962 | 1.56 | 0.18 |
+| 0.01 | 0.977 | 0.9956 | 1.37 | 0.15 |
+| 0.001 | 0.955 | 0.9897 | 1.43 | 0.16 |
+
+Table 5: Evaluation of different $\lambda$ values ( $\gamma_{g} = 0.1$ ) during testing on the STB dataset. For the same trained model, a higher $\lambda$ means the fine encoder is used less often, i.e., $\lambda$ can be configured to balance the trade-off between efficiency and accuracy.
+
+| $\lambda$ | 3D PCK20 | AUC | GFLOPs | $P_{\mathrm{fine}}$ |
+| --- | --- | --- | --- | --- |
+| 0.1 | 0.987 | 0.9977 | 7.01 | 0.97 |
+| 0.3 | 0.986 | 0.9976 | 3.31 | 0.43 |
+| 0.5 | 0.983 | 0.9970 | 2.09 | 0.26 |
+| 0.7 | 0.943 | 0.9894 | 0.88 | 0.08 |
+| 0.9 | 0.505 | 0.8277 | 0.30 | 0 |
+
+the ACE model. For $\lambda$, a larger threshold greatly reduces the frequency of using the fine encoder at the cost of accuracy; $\lambda$ can be adjusted during inference to balance the trade-off between efficiency and accuracy.
+
+# 5 Conclusion
+
+We present the ACE framework, an adaptive dynamic model for efficient hand pose estimation from monocular videos. At the core of the ACE model is the Gaussian-kernel-based gate, which determines whether to carefully examine the current frame using a computationally heavy pose encoder, based on a quick glimpse of the current frame with a light pose encoder and the temporal context. We further introduce the Gumbel-SoftMax trick to enable the learning of the discrete decision gate. As a result, we obtain state-of-the-art performance on two widely used datasets, STB and FPHA, with less than $1/4$ of the computation of the baseline models. The proposed ACE model is general and can be built upon any single-frame pose encoder, which suggests that efficiency could be further improved by adopting more efficient structures as the single-frame pose encoder.
+
+Acknowledgements This work is partially supported by the National Institutes of Health under Grant R01CA214085, as well as SUTD Projects PIE-SGP-AI-2020-02 and SRG-ISTD-2020-153.
+
+# References
+
+1. Boukhayma, A., Bem, R.d., Torr, P.H.: 3d hand shape and pose from images in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10843-10852 (2019)
+2. Cai, Y., Ge, L., Cai, J., Yuan, J.: Weakly-supervised 3d hand pose estimation from monocular rgb images. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 666-682 (2018)
+3. Cai, Y., Ge, L., Liu, J., Cai, J., Cham, T.J., Yuan, J., Thalmann, N.M.: Exploiting spatial-temporal relationships for 3d pose estimation via graph convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2272-2281 (2019)
+4. Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural networks with binary weights during propagations. In: Advances in neural information processing systems. pp. 3123-3131 (2015)
+5. Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized neural networks: Training deep neural networks with weights and activations constrained to + 1 or -1. arXiv preprint arXiv:1602.02830 (2016)
+6. Garcia-Hernando, G., Yuan, S., Baek, S., Kim, T.K.: First-person hand action benchmark with rgb-d videos and 3d hand pose annotations. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 409-419 (2018)
+7. Ge, L., Liang, H., Yuan, J., Thalmann, D.: Robust 3d hand pose estimation in single depth images: from single-view cnn to multi-view cnns. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3593-3601 (2016)
+8. Ge, L., Liang, H., Yuan, J., Thalmann, D.: 3d convolutional neural networks for efficient and robust hand pose estimation from single depth images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1991-2000 (2017)
+9. Ge, L., Liang, H., Yuan, J., Thalmann, D.: Real-time 3d hand pose estimation with 3d convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence 41(4), 956-970 (2018)
+10. Ge, L., Ren, Z., Li, Y., Xue, Z., Wang, Y., Cai, J., Yuan, J.: 3d hand shape and pose estimation from a single rgb image. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 10833-10842 (2019)
+11. Gouidis, F., Panteleris, P., Oikonomidis, I., Argyros, A.: Accurate hand keypoint localization on mobile devices. In: 2019 16th International Conference on Machine Vision Applications (MVA). pp. 1-6. IEEE (2019)
+12. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Advances in neural information processing systems. pp. 1135-1143 (2015)
+13. Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain surgeon. In: Advances in neural information processing systems. pp. 164-171 (1993)
+14. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735-1780 (1997)
+15. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
+
+16. Iqbal, U., Molchanov, P., Breuel, T., Gall, J., Kautz, J.: Hand pose estimation via latent 2.5d heatmap regression. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 118-134 (2018)
+17. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 (2016)
+18. Korbar, B., Tran, D., Torresani, L.: Scsampler: Sampling salient clips from video for efficient action recognition. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6232-6242 (2019)
+19. LeCun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Advances in neural information processing systems. pp. 598-605 (1990)
+20. Li, Z., Ni, B., Zhang, W., Yang, X., Gao, W.: Performance guaranteed network acceleration via high-order residual quantization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2584-2592 (2017)
+21. Lin, M., Lin, L., Liang, X., Wang, K., Cheng, H.: Recurrent 3d pose sequence machines. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 810-819 (2017)
+22. Liu, J., Ding, H., Shahroudy, A., Duan, L.Y., Jiang, X., Wang, G., Kot, A.C.: Feature boosting network for 3d pose estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 42(2), 494-501 (2020)
+23. Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.: Learning efficient convolutional networks through network slimming. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2736-2744 (2017)
+24. Louizos, C., Welling, M., Kingma, D.P.: Learning sparse neural networks through $l_0$ regularization. arXiv preprint arXiv:1712.01312 (2017)
+25. Malik, J., Elhayek, A., Nunnari, F., Varanasi, K., Tamaddon, K., Heloir, A., Stricker, D.: Deephps: End-to-end estimation of 3d hand pose and shape by learning from synthetic depth. In: 2018 International Conference on 3D Vision (3DV). pp. 110-119. IEEE (2018)
+26. Mueller, F., Bernard, F., Sotnychenko, O., Mehta, D., Sridhar, S., Casas, D., Theobalt, C.: Generated hands for real-time 3d hand tracking from monocular rgb. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 49-59 (2018)
+27. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: European conference on computer vision. pp. 483-499. Springer (2016)
+28. Oculus: Hand tracking SDK for Oculus quest available with v12 release, https://developer.oculus.com/blog/hand-tracking-sdk-for-oculus-quest-available
+29. Pan, B., Lin, W., Fang, X., Huang, C., Zhou, B., Lu, C.: Recurrent residual module for fast inference in videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1536-1545 (2018)
+30. Pavllo, D., Feichtenhofer, C., Grangier, D., Auli, M.: 3d human pose estimation in video with temporal convolutions and semi-supervised training. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7753-7762 (2019)
+31. Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: Xnor-net: Imagenet classification using binary convolutional neural networks. In: European conference on computer vision. pp. 525-542. Springer (2016)
+32. Rayat Imtiaz Hossain, M., Little, J.J.: Exploiting temporal information for 3d human pose estimation. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 68-84 (2018)
+
+33. Romero, J., Tzionas, D., Black, M.J.: Embodied hands: Modeling and capturing hands and bodies together. ACM Transactions on Graphics (ToG) 36(6), 245 (2017)
+34. Sinha, A., Choi, C., Ramani, K.: Deephand: Robust hand pose estimation by completing a matrix imputed with deep features. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4150-4158 (2016)
+35. Tekin, B., Rozantsev, A., Lepetit, V., Fua, P.: Direct prediction of 3d body poses from motion compensated sequences. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 991-1000 (2016)
+36. Wan, C., Probst, T., Gool, L.V., Yao, A.: Self-supervised 3d hand pose estimation through training by fitting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10853-10862 (2019)
+37. Wan, C., Probst, T., Van Gool, L., Yao, A.: Dense 3d regression for hand pose estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5147-5156 (2018)
+38. Wang, F., Wang, G., Huang, Y., Chu, H.: Sast: Learning semantic action-aware spatial-temporal features for efficient action recognition. IEEE Access 7, 164876-164886 (2019)
+39. Wen, W., Wu, C., Wang, Y., Chen, Y., Li, H.: Learning structured sparsity in deep neural networks. In: Advances in neural information processing systems. pp. 2074-2082 (2016)
+40. Wu, Z., Xiong, C., Jiang, Y.G., Davis, L.S.: Liteeval: A coarse-to-fine framework for resource efficient video recognition. In: Advances in Neural Information Processing Systems. pp. 7778-7787 (2019)
+41. Wu, Z., Xiong, C., Ma, C.Y., Socher, R., Davis, L.S.: Adaframe: Adaptive frame selection for fast video recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1278-1287 (2019)
+42. Xiang, D., Joo, H., Sheikh, Y.: Monocular total capture: Posing face, body, and hands in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10965-10974 (2019)
+43. Zhang, J., Jiao, J., Chen, M., Qu, L., Xu, X., Yang, Q.: 3d hand pose tracking and estimation using stereo matching. arXiv preprint arXiv:1610.07214 (2016)
+44. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 6848-6856 (2018)
+45. Zhang, X., Li, Q., Mo, H., Zhang, W., Zheng, W.: End-to-end hand mesh recovery from a monocular rgb image. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2354-2364 (2019)
+46. Zimmermann, C., Brox, T.: Learning to estimate 3d hand pose from single rgb images. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4903-4911 (2017)
\ No newline at end of file
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/images.zip b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..aa9d898774846a284ec8b3b6cb059a2f2d8e6684
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46ec1318515331d19a3c3c1b8b1a5dd39dbec46c29c5903b0a3d3293e85b2aa8
+size 455591
diff --git a/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/layout.json b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..efaf53e438cfea7156f1e5f2371d27789af6cbfd
--- /dev/null
+++ b/adaptivecomputationallyefficientnetworkformonocular3dhandposeestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:178e9b1fc6df731764b94ecb2f7cc33d5a7181fb324ae9fcb03891e2c0986bf2
+size 428000
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_content_list.json b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f73ceb7eb425fb579009c27afbbf4df51c3c5e76
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5341e733a7e33f22f4b46ed22b336f7db6a5c0c4a0d9b8ab2aae62950a87652
+size 72059
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_model.json b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..918dfae3107dfd1d97489b941af4e9c858908f28
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33f262fb42745482652d39c0af26d6a739c35afbd95364ae429db42f3cc80ac5
+size 88683
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_origin.pdf b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a56d1ee206a4396de020718838246883bdbdd369
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/20cc9823-27e8-4b97-95f3-8c57433a4366_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8cc6a5e7935196d905634b487d0e9ebb1f4064a6e98d735f8b505d2f40f9f52
+size 1103622
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/full.md b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b52b25de29c0226f8b4590b663dc2fe9197a521
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/full.md
@@ -0,0 +1,260 @@
+# Adaptive Margin Diversity Regularizer for handling Data Imbalance in Zero-Shot SBIR
+
+Titir Dutta, Anurag Singh, and Soma Biswas
+
+Indian Institute of Science, Bangalore, India {titird, anuragsingh2, somabiswas}@iisc.ac.in
+
+Abstract. Data from new categories are continuously being discovered, which has sparked a significant amount of research into approaches that generalize to previously unseen categories, i.e., the zero-shot setting. Zero-shot sketch-based image retrieval (ZS-SBIR) is one such problem in the context of cross-domain retrieval, and it has received a lot of attention due to its various real-life applications. Since most real-world training data have a fair amount of imbalance, in this work, for the first time in the literature, we extensively study the effect of training data imbalance on generalization to unseen categories, with ZS-SBIR as the application area. We evaluate several state-of-the-art data-imbalance mitigating techniques and analyze their results. Furthermore, we propose a novel framework, AMDReg (Adaptive Margin Diversity Regularizer), which ensures that the embeddings of sketches and images in the latent space are not only semantically meaningful, but are also separated according to their class representations in the training set. The proposed approach is model-independent and can be incorporated seamlessly with several state-of-the-art ZS-SBIR methods to improve their performance under imbalanced conditions. Extensive experiments and analysis justify the effectiveness of the proposed AMDReg for mitigating the effect of data imbalance on generalization to unseen classes in ZS-SBIR.
+
+# 1 Introduction
+
+Sketch-based image retrieval (SBIR) [15][35], which deals with retrieving natural images given a hand-drawn sketch query, has gained significant traction because of its potential applications in e-commerce, forensics, etc. Since new categories of data are continuously being added to the system, it is important for algorithms to generalize well to unseen classes, a setting termed Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) [6][5][16][7]. The majority of ZS-SBIR approaches learn a shared latent-space representation for both sketch and image, where sketches and images from the same category come closer to each other, and also incorporate additional techniques to facilitate generalization to unseen classes.
+
+One important factor that has been largely overlooked in this task of generalization to unseen classes is the distribution of the training data. Real-world data used to train the model is not always class-wise or domain-wise well-balanced. When the training and test categories are the same, as expected, class imbalance in the training data results in severe degradation of test performance, especially for the minority classes. Many seminal approaches have been proposed to mitigate this effect for the task of image classification [14][11][2][4], but the effect of data imbalance on generalization to unseen classes is relatively unexplored, both for single- and cross-domain applications. In fact, both of the large-scale datasets widely used for SBIR / ZS-SBIR, namely Sketchy Extended [25] and TU-Berlin Extended [8], exhibit data imbalance. In cross-domain data, there can be two types of imbalance: 1) domain imbalance, where the number of data samples in one domain is significantly different from that in the other domain; 2) class imbalance, where there is a significant difference in the number of data samples per class. TU-Berlin Ext. exhibits imbalance of both types. Although a recent paper [5] has attributed the poor retrieval performance on TU-Berlin Ext. to data imbalance, no measures have been proposed to handle it.
+
+Here, we aim to study the effect of class imbalance in the training data on the retrieval performance for unseen classes in the context of ZS-SBIR; interestingly, we observe that the proposed framework works well even when both types of imbalance are present. We analyze several state-of-the-art approaches for mitigating the effect of training data imbalance on the final retrieval performance. To this end, we propose a novel regularizer termed AMDReg (Adaptive Margin Diversity Regularizer), which ensures that the embeddings of the data samples in the latent space account for the distribution of classes in the training set. To facilitate generalization to unseen classes, the majority of ZS-SBIR approaches impose a direct or indirect semantic constraint on the latent space, which ensures that sketch and image samples from unseen classes encountered during testing are embedded in the neighborhood of their related seen classes. But merely imposing a semantic constraint does not account for the training class imbalance. The proposed AMDReg, which is computed from the class-wise training data distribution in the sketch and image domains, helps to appropriately position the semantic embeddings. It enforces a broader margin / spread for the classes with fewer training samples, as compared to the classes with a larger number of samples. Extensive analysis and evaluation on two benchmark datasets validate the effectiveness of the proposed approach. The contributions of this paper are summarized below.
+
+1. We analyze the effect of class imbalance on generalization to unseen classes for the ZS-SBIR task. To the best of our knowledge, this is the first work in the literature that addresses the data-imbalance problem in the context of cross-domain retrieval.
+2. We analyze the performance of several state-of-the-art techniques for handling the data-imbalance problem for this task.
+3. We propose a novel regularizer termed AMDReg, which can be seamlessly used with several ZS-SBIR methods to improve their performance. We observe significant improvement in the performance of three state-of-the-art ZS-SBIR methods.
+4. We obtain state-of-the-art performance for ZS-SBIR and generalized ZS-SBIR for two large-scale benchmark datasets.
+
+# 2 Related Work
+
+Here, we discuss work in the literature relevant to this study. We include recent papers on sketch-based image retrieval (SBIR) and zero-shot sketch-based image retrieval (ZS-SBIR), as well as on class-imbalance problems in classification.
+
+Sketch-based Image Retrieval (SBIR): The primary goal of these approaches is to bridge the domain gap between natural images and hand-drawn sketches. Early methods for SBIR, such as HOG [12] and LKS [24], extract hand-crafted features from the sketches as well as from the edge maps obtained from natural images, which are then directly used for retrieval. The advent of deep networks has advanced the state of the art significantly. Siamese networks [22] with triplet or contrastive loss, GoogLeNet [25] with triplet loss, etc. are some of the initial architectures. Recently, a number of hashing-based methods, such as [15][35], have achieved significant success. [15] uses a heterogeneous network, which employs the edge maps from images, along with the sketch-image training data, to learn a shared representation space. In contrast, GDH [35] exploits a generative model to learn the equivalent image representation from a given sketch and performs the final retrieval in the image space.
+
+Zero-shot Sketch-based Image Retrieval (ZS-SBIR): The knowledge gap encountered by the retrieval model when a sketch query or database image comes from a previously unseen class makes ZS-SBIR extremely challenging. ZSIH [26] and generative-model-based ZS-SBIR [32] are some of the pioneering works in this direction. However, as identified by [6], ZSIH [26] requires a fusion layer for learning the model, which drives up the training cost, and [32] requires strictly paired sketch-image data for training. Some of the recent works [5][6][7][16] have reported improved ZS-SBIR performance over the early techniques. [6] introduces a further generalization of the evaluation protocol for ZS-SBIR, termed generalized ZS-SBIR, where the search set contains images from both the seen and unseen classes. This poses an even greater challenge to the algorithm, and performance degrades significantly under this evaluation protocol [6][7]. A few of the ZS-SBIR approaches are discussed in more detail later.
+
+Handling Data Imbalance for Classification: Since real-world training data are often imbalanced, a number of works [14][11][2][4] have recently been proposed to address this problem. [14] mitigates the foreground-background class-imbalance problem in the context of object detection and proposes a modification to the traditional cross-entropy classification loss. [4] introduces an additional cost-sensitive term, to be included with any classification loss, designed on the basis of the effective number of samples in a particular class. [2] and [11] both propose a modification of the margin of the class boundary, learned by minimizing intra-class variations and maximizing the inter-class margin. [17] discusses a dynamic meta-embedding technique to address the classification problem under a long-tailed training data scenario.
+
+Equipped with the knowledge of recent algorithms for both ZS-SBIR and single-domain class-imbalance mitigation, we now move on to discuss the problem of imbalanced training data for cross-domain retrieval.
+
+# 3 Does Imbalanced Training Data Affect ZS-SBIR?
+
+First, we analyze the effect of training data imbalance on generalization to unseen classes in the context of ZS-SBIR. Here, for ease of analysis, we consider only class imbalance, but our approach is effective for mixed imbalance too, as justified by the experimental results later. Since both standard datasets for this task, namely Sketchy Ext. [25] and TU-Berlin Ext. [8], are already imbalanced, to systematically study the effect of imbalance we create a smaller balanced dataset, which is a subset of the Sketchy Ext. dataset. Termed the mini-Sketchy dataset, it contains sketches and images from 60 classes, with 500 images and sketches per class. Among them, 10 randomly selected classes are used as unseen classes and the remaining 50 classes are used for training.
+
+To study the effect of imbalance, motivated by the class-imbalance literature in image classification [14][11], we introduce two different types of class imbalance: 1) Step imbalance, where a few of the classes in the training set contain fewer samples than the other classes; 2) Long-tailed imbalance, where the number of samples across the classes decreases gradually following the rule $n_k^{lt} = n_k \mu^{\frac{k}{C_{seen} - 1}}$, where $n_k^{lt}$ is the number of available samples for the $k^{th}$ class under the long-tailed distribution and $n_k$ is the number of original samples of that class (= 500 here). Here, $k \in \{1, 2, \dots, C_{seen}\}$, i.e., $C_{seen}$ is the number of training classes, and $\mu = \frac{1}{p}$. We define the imbalance factor $p$ for a particular data distribution as the ratio of the highest number of samples in any class to the lowest number of samples in any class in that data; a higher value of $p$ implies more severe training class imbalance. Since the analysis is with class imbalance, we assume that the data samples in the image and sketch domains are the same.
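+As a concrete illustration, the long-tailed rule can be sketched in a few lines of Python (the function name and defaults are ours, chosen to match the mini-Sketchy setup of $n_k = 500$ and $C_{seen} = 50$):

```python
def long_tailed_counts(n_k=500, c_seen=50, p=10):
    """Per-class sample counts under the long-tailed rule
    n_k^lt = n_k * mu^(k / (C_seen - 1)), with mu = 1 / p."""
    mu = 1.0 / p
    return [round(n_k * mu ** (k / (c_seen - 1))) for k in range(1, c_seen + 1)]

counts = long_tailed_counts()
# Counts decrease smoothly from head to tail; the head-to-tail ratio
# is roughly the imbalance factor p.
```

+A step-imbalanced split, by contrast, would simply cap a chosen subset of classes at $n_k / p$ samples while leaving the rest untouched.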
+
+As mentioned earlier, the proposed regularizer is generic and can be used with several baseline approaches to improve their performance in the presence of data imbalance. For this analysis, we choose a recent auto-encoder-based approach [7]. We term this the Baseline Model for this discussion, since the analysis is equally applicable to other approaches as well. We systematically introduce both step and long-tailed imbalance for two different values of $p$ and observe the performance for each. The results are reported in Table 1.
+
+Compared to the balanced setting, we observe significant degradation in the performance of the baseline whenever any kind of imbalance is present in the training data. This implies that training data imbalance not only affects test performance when the classes remain the same, it also significantly harms generalization performance. This is because unseen classes are recognized by embedding them close to their semantically relevant seen classes. Data imbalance results in (1) a latent embedding space that is not sufficiently discriminative and (2) improperly learnt embedding functions, both of which negatively affect the embeddings of the unseen classes. The goal of the proposed AMDReg is to mitigate these limitations, which in turn helps in better generalization to unseen classes (Table 1, bottom row). Thus we see that, if the imbalance is handled properly, it may reduce the need for collecting large-scale balanced training samples.
+
+Table 1. Evaluation (MAP@200) of the Baseline Model [7] for ZS-SBIR on the mini-Sketchy dataset. Results for long-tailed and step imbalance with different imbalance factors are reported. The final performance using the proposed AMDReg is also compared.
+
+| Experimental Protocol | Balanced data | Long-tailed, p = 10 | Long-tailed, p = 100 | Step, p = 10 | Step, p = 100 |
+| --- | --- | --- | --- | --- | --- |
+| Baseline [7] | 0.395 | 0.234 | 0.185 | 0.241 | 0.156 |
+| Baseline [7] + AMDReg | - | 0.332 | 0.240 | 0.315 | 0.218 |
+
+# 4 Proposed Approach
+
+Here, we describe the proposed Adaptive Margin Diversity Regularizer (AMDReg), which, when used with existing ZS-SBIR approaches, helps mitigate the adverse effect of training data imbalance. We observe that the majority of state-of-the-art ZS-SBIR approaches [6][16][7] have two objectives: (1) projecting the sketches and images to a common discriminative latent space, where retrieval can be performed; (2) ensuring that the latent space is semantically meaningful so that the approach generalizes to unseen classes. For the first objective, a classification loss is used while learning the shared latent space, which constrains the latent-space embeddings of both sketches and images from the same classes to cluster together, and samples from different classes to be well-separated. For the second objective, different direct or indirect techniques are utilized to make the embeddings semantically meaningful, to ensure better generalization.
+
+Semantically Meaningful Class Prototypes: Without loss of generality, we again choose the same baseline [7] to explain how to incorporate the proposed AMDReg into an existing ZS-SBIR approach. Let us consider that there are $C_{seen}$ classes in the dataset and that $d$ is the latent-space dimension. The baseline model has two parallel branches, $F_{im}(\theta_{im})$ and $F_{sk}(\theta_{sk})$, for extracting features $\{f^{(m)}\}$, $m \in \{im, sk\}$, from images and sketches, respectively. These features are then passed through the corresponding content-encoder networks to learn the shared latent-space embeddings, i.e. $z^{(m)} = E_m(f^{(m)})$. In [7], a distance-based cross-entropy loss is used to learn these latent embeddings such that the embeddings are close to the semantic information. As is widely done, the class-name embeddings $h(y)$ of the seen-class labels $y \in \{1, 2, \dots, C_{seen}\}$ are used as the semantic information. These embeddings are extracted from a pre-trained language model, such as word2vec [18] or GloVe [20]. Please refer to Fig. 1 for an illustration of the proposed AMDReg with respect to this baseline model.
+
+The last fully connected (fc) layer of the encoders is essentially the classification layer, and the weights of this layer, $\mathbf{P} = [\mathbf{p}_1,\mathbf{p}_2,\dots,\mathbf{p}_{C_{seen}}], \mathbf{p}_i\in \mathbb{R}^d$, can be considered as the shared class prototypes, or the representatives of the corresponding classes [21]. To ensure a semantically meaningful latent representation, one can learn the prototypes ($\mathbf{p}_i$'s) such that they are close to the class-name embeddings, or the prototypes can themselves be set equal to the semantic embeddings, i.e. $\mathbf{p}_i = h(y)$, and kept fixed. If the training data is imbalanced, just ensuring that the prototypes are semantically meaningful is not sufficient; we should also ensure that they take into account the label distribution of the training data. In our modification, to be able to adjust the prototypes properly, instead of fixing them as the class embeddings, we initialize them with these attributes. Since the output of this fc layer is given by $\mathbf{z}^{(m)} = [z_1^{(m)}, z_2^{(m)}, \dots, z_{C_{seen}}^{(m)}]$, the encoder with the prototypes is learnt using the standard cross-entropy loss:
+
+$$
+\mathcal{L}_{CE}\left(\mathbf{z}^{(m)}, y\right) = -\log \frac{\exp\left(z_y^{(m)}\right)}{\sum_{j=1}^{C_{\text{seen}}} \exp\left(z_j^{(m)}\right)} \tag{1}
+$$
+
+Now, with this background, we describe the proposed regularizer, AMDReg, which ensures that the prototypes are modified in such a way that they are spread out according to their class representation in the training set.
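+Eq. (1) above can be sketched as follows, assuming plain dot-product logits between a latent embedding and the prototype matrix (the baseline [7] actually uses a distance-based variant; the function and variable names here are illustrative):

```python
import numpy as np

def prototype_ce_loss(z, P, y):
    """z: (d,) latent embedding; P: (C_seen, d) prototype matrix, one row
    per class (initialized from class-name embeddings); y: true class index."""
    logits = P @ z                                   # one score per class
    logits = logits - logits.max()                   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[y]                             # Eq. (1)
```
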
+
+Adaptive Margin Diversity Regularizer: Our proposed AMDReg is inspired by the recently proposed Diversity Regularizer [11], which addresses data imbalance in image classification by adjusting the classifier weights (here, the prototypes) so that they are uniformly spread out in the feature space. In our context, this can be enforced by the following regularizer:
+
+$$
+\mathcal{R}(\mathbf{P}) = \frac{1}{C_{\text{seen}}} \sum_{i < j} \left[ \left\| \mathbf{p}_i - \mathbf{p}_j \right\|_2^2 - d_{\text{mean}} \right]^2, \quad \forall j \in \{1, 2, \dots, C_{\text{seen}}\} \tag{2}
+$$
+
+Here $d_{mean}$ is the mean distance between all the class prototypes and is computed as
+
+$$
+d_{\text{mean}} = \frac{2}{C_{\text{seen}}^2 - C_{\text{seen}}} \sum_{i < j} \left\| \mathbf{p}_i - \mathbf{p}_j \right\|_2^2, \quad \forall j \in \{1, 2, \dots, C_{\text{seen}}\} \tag{3}
+$$
+
+The above regularizer tries to spread out all the class prototypes, without considering the amount of imbalance present in the training data. As observed in many recent works [2], due to the insufficient number of samples in the minority classes, their test samples are more likely to have a wider spread instead of being clustered around the class prototype during testing. For our problem, this implies greater uncertainty for samples of unseen classes which are semantically similar to the minority classes in the training set.
+
+To this end, we propose to adjust the class prototypes adaptively, taking into account the data imbalance. Since there can be both class and domain imbalance in the cross-domain retrieval problem, we propose to use the total number of sketch and image samples per class in the training set; we refer to this combined number for the $k^{th}$ class as the effective number of samples, $n_k^{eff}$, in this work. We then define the imbalance-based margin for the $k^{th}$ class as
+
+$$
+\Delta_k = \frac{K}{n_k^{\text{eff}}} \tag{4}
+$$
+
+
+Fig. 1. Illustration of the proposed Adaptive Margin Diversity Regularizer (AMDReg). The AMDReg ensures that the embeddings of the shared prototypes of the images and sketches are not only placed away from each other, but also account for the increased uncertainty when the training class distribution is imbalanced. This results in better generalization to unseen classes.
+
+This is similar to the inverse frequency of occurrence, except for the experimental hyper-parameter $K$ . Thus the final AMDReg is given by
+
+$$
+\mathcal{R}_{\Delta}(\mathbf{P}) = \frac{1}{C_{\text{seen}}} \sum_{i < j} \left[ \left\| \mathbf{p}_i - \mathbf{p}_j \right\|_2^2 - \left(d_{\text{mean}} + \Delta_j\right) \right]^2, \quad \forall j \in \{1, 2, \dots, C_{\text{seen}}\} \tag{5}
+$$
+
+Thus, we adjust the relative distances between the $\mathbf{p}_i$'s such that they are separated by at least the mean distance plus the class-imbalance margin. This ensures that the prototypes of the minority classes have more margin around them, which reduces the chance of confusion with semantically similar unseen classes during testing. Finally, the encoder with the prototypes is learnt using the CE loss along with the AMDReg:
+
+$$
+\mathcal{L}_{CE}^{AMDReg} = \mathcal{L}_{CE} + \lambda \mathcal{R}_{\Delta} \tag{6}
+$$
+
+where $\lambda$ is an experimental hyper-parameter which controls the contribution of the regularizer to the learning.
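+Putting Eqs. (2)-(6) together, a NumPy sketch of the regularizer might look as follows (function and variable names are ours; each pair $i < j$ uses the margin $\Delta_j$ of its second class, following Eq. (5)):

```python
import numpy as np

def amd_reg(P, n_eff, K=1.0):
    """Adaptive Margin Diversity Regularizer, Eq. (5).
    P: (C_seen, d) prototype matrix; n_eff: effective number of samples
    per class (sketches + images); K: margin hyper-parameter of Eq. (4)."""
    C = P.shape[0]
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # pairwise ||p_i - p_j||^2
    i, j = np.triu_indices(C, k=1)                       # all pairs i < j
    d_mean = 2.0 / (C * C - C) * d2[i, j].sum()          # Eq. (3)
    delta = K / np.asarray(n_eff, dtype=float)           # Eq. (4)
    # Eq. (5): each pair distance is pushed towards d_mean + Delta_j
    return ((d2[i, j] - (d_mean + delta[j])) ** 2).sum() / C

# Total training objective, Eq. (6): L = L_CE + lambda * amd_reg(P, n_eff)
```

+With $K = 0$ the margins vanish and the expression reduces to the plain Diversity Regularizer of Eq. (2).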
+
+Difference from Related Works: Even though the proposed AMDReg is inspired by [11], there are significant differences: (1) [11] addresses the imbalanced classification task in a single domain, while our work addresses generalization to unseen classes in the context of cross-domain retrieval (ZS-SBIR); (2) while [11] ensures that the weight vectors are equally spread out, AMDReg accounts for the training data distribution when designing the relative distances between the semantic embeddings; (3) finally, [11] works with the max-margin loss, whereas AMDReg is used with the standard CE loss while learning the semantic class prototypes.
+
+The proposed approach also differs from another closely related work, LDAM [2]. The LDAM loss is a modification of the standard cross-entropy or hinge loss to incorporate a class-wise margin. In contrast, the proposed AMDReg is a margin-based regularizer with adaptive margins between class prototypes, based on the corresponding representation of the classes in the training set. Thus, while [2] is inspired by a margin-based generalization bound, the proposed AMDReg is inspired by the widely used inverse frequency of occurrence.
+
+# 4.1 Analysis with standard & SOTA imbalance-aware approaches
+
+Here, we analyze how the proposed AMDReg compares with several existing state-of-the-art techniques for addressing the problem of imbalance in the training data, developed mainly for the task of image classification. These techniques can be broadly classified into two categories: (1) re-sampling techniques to balance the existing imbalanced dataset, and (2) cost-sensitive learning or modification of the classifier. For this analysis also, we use the same retrieval backbone [7]. In this context, we first compute the average number of samples per class in the dataset. Any class which has fewer samples than the average is considered a minority class, and the remaining classes are considered majority classes.
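+The minority / majority split described above is just a thresholding on the mean class size; a small sketch (function name is ours):

```python
def split_minority_majority(class_counts):
    """class_counts: {class_name: n_samples}. Classes below the dataset-wide
    average count are minority classes; the rest are majority classes."""
    avg = sum(class_counts.values()) / len(class_counts)
    minority = sorted(c for c, n in class_counts.items() if n < avg)
    majority = sorted(c for c, n in class_counts.items() if n >= avg)
    return minority, majority
```
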
+
+1) Re-balancing the dataset: Re-sampling is a standard and effective technique used to counteract dataset distribution bias. The most common methods are under-sampling of the majority classes [1] or over-sampling of the minority classes [3]. We systematically apply such imbalance-aware data-sampling techniques to the training data to address the class imbalance for ZS-SBIR, as discussed below. The re-sampled / balanced dataset created by each re-sampling operation described below is used for training the baseline network and reporting the retrieval performance.
+
+1. Naive under-sampling: Here, we randomly select $1/p$-th of the total samples per class for the majority classes and discard the remaining samples. Naturally, we lose a significant number of informative samples with such a random-sampling technique.
+2. Selective Decontamination [1]: This technique intelligently under-samples the majority classes instead of randomly throwing away excess samples. Following [1], we modify the Euclidean distance function $d_{E}(\mathbf{x}_{i},\mathbf{x}_{j})$ between two samples $\mathbf{x}_i$ and $\mathbf{x}_j$ of the $c^{th}$ class as,
+
+$$
+d_{\text{modified}}\left(\mathbf{x}_i, \mathbf{x}_j\right) = \left(\frac{n_c}{N}\right)^{1/m} d_E\left(\mathbf{x}_i, \mathbf{x}_j\right) \tag{7}
+$$
+
+where $n_c$ and $N$ are the number of samples in the $c^{th}$ class and in all classes, respectively, and $m$ is the dimension of the feature space. We retain only those samples in the majority classes for which the class labels of the top-$K$ nearest neighbors agree completely.
+
+3. Naive over-sampling: Here, the minority classes are augmented by repeating instances (as in [35]) and by using standard image-augmentation techniques (such as rotation, translation, flipping, etc.).
+4. SMOTE [3]: In this intelligent over-sampling technique, instead of replicating samples, the minority classes are augmented by generating synthetic features along the line segments joining each minority-class sample with its $K$-nearest neighbors.
+5. GAN-based Augmentation [29]: Finally, we propose to augment the minority classes by generating features with the help of generative models, which have been very successful for zero-shot [29] / few-shot [19] / any-shot [30] image classification. Toward that goal, we use the f-GAN [29] model to generate synthetic features for the minority classes from their attributes and add those features to the available training data to reduce the imbalance.
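+Returning to item 2 above, the class-size-weighted distance of Eq. (7) can be sketched as (function name is ours):

```python
import math

def modified_distance(x_i, x_j, n_c, N):
    """Eq. (7): Euclidean distance scaled by (n_c / N)^(1/m), where m is the
    feature dimension. Distances within populous classes are inflated
    relative to rare classes, which influences the nearest-neighbor
    filtering used for selective under-sampling."""
    m = len(x_i)
    d_e = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_j)))
    return (n_c / N) ** (1.0 / m) * d_e
```
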
+2) Cost-sensitive Learning of the Classifier: The goal of cost-sensitive learning methods is to learn a better classifier from the original imbalanced training data, but with a more suitable loss function that can account for the data imbalance. To observe the effect of the different kinds of losses, we replace the distance-based CE loss in the baseline model with the following, keeping the rest of the network fixed.
+1. Focal loss: This loss [14] was proposed to address the foreground-background class-imbalance issue in the context of object detection. It is a simple yet effective modification of the standard cross-entropy loss, in which the easy or well-classified samples are given less weight than the difficult samples.
+2. Class-balanced Focal Loss: A variant of the focal loss, recently proposed in [4], which incorporates the effective number of samples of a class in the imbalanced dataset.
+3. Diversity Regularizer: This recently proposed regularizer [11] ensures that both the majority and minority classes are equidistant from each other in the latent space, and it reported significant performance improvement on imbalanced training data for image classification.
+4. LDAM: [2] proposes a margin-based modification of the standard cross-entropy or hinge loss, to ensure that the classes are well-separated from each other.
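+The focal loss of item 1 above, for the predicted probability $p_t$ of the true class, is $FL(p_t) = -(1 - p_t)^{\gamma}\log(p_t)$; a one-line sketch:

```python
import math

def focal_loss(p_t, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t): the (1 - p_t)^gamma factor
    down-weights easy, well-classified samples relative to plain CE."""
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```
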
+
+The retrieval performance obtained with these imbalance-handling methods is reported in Table 2. We observe that all the techniques result in varying degrees of improvement over the base model. Among the data-augmentation techniques, GAN-based augmentation outperforms the other approaches. In general, all the cost-sensitive learning techniques perform quite well, especially the recently proposed Diversity Regularizer and the LDAM cross-entropy loss. However, the proposed AMDReg outperforms both the data-balancing and cost-sensitive learning approaches, giving the best performance across all types and degrees of imbalance.
+
+Table 2. ZS-SBIR performance (MAP@200) of different imbalance-handling techniques applied to the Baseline Model [7] on the mini-Sketchy dataset. Results of the original Baseline Model are also reported for reference.
+
+| Imbalance Handler | Method | Long-tailed, p = 10 | Long-tailed, p = 100 | Step, p = 10 | Step, p = 100 |
+| --- | --- | --- | --- | --- | --- |
+| - | Baseline Model [7] | 0.234 | 0.185 | 0.241 | 0.156 |
+| Data-balancing methods | Naive under-sampling | 0.235 | 0.191 | 0.256 | 0.159 |
+| | Naive over-sampling | 0.269 | 0.219 | 0.258 | 0.155 |
+| | Selective decontamination [1] | 0.268 | 0.221 | 0.251 | 0.164 |
+| | SMOTE [3] | 0.269 | 0.217 | 0.269 | 0.183 |
+| | GAN-based Augmentation [29] | 0.305 | 0.229 | 0.274 | 0.188 |
+| Loss-modification techniques | Focal loss [14] | 0.273 | 0.228 | 0.289 | 0.195 |
+| | Class-balanced Focal Loss [4] | 0.299 | 0.236 | 0.296 | 0.210 |
+| | Diversity Regularizer [11] | 0.296 | 0.222 | 0.285 | 0.207 |
+| | LDAM-CE loss [2] | 0.329 | 0.234 | 0.310 | 0.213 |
+| Proposed | AMDReg | 0.332 | 0.240 | 0.315 | 0.218 |
+
+# 5 Experimental Evaluation on ZS-SBIR
+
+Here, we provide details of the extensive experiments performed to evaluate the effectiveness of the proposed AMDReg for handling data imbalance in ZS-SBIR.
+
+Datasets Used and Experimental Protocol: We have used two large-scale standard benchmarks for evaluating ZS-SBIR approaches, namely, Sketchy Ext. [25] and TU-Berlin Ext. [8].
+
+The Sketchy Ext. [25] dataset originally contained approximately 75,000 sketches and 12,500 images from 125 object categories. Later, [15] collected and added an additional 60,502 images to this dataset. Following the standard protocol [6][16], we randomly choose 25 classes as unseen classes (sketches as queries and images in the search set) and use the remaining 100 classes for training.
+
+TU-Berlin Ext. [8] originally contained 80 hand-drawn sketches per class from a total of 250 classes. To make it a better fit for large-scale experiments, [34] added 204,489 images. Following the literature [6][7], we randomly select 30 classes as unseen, while the remaining 220 classes are used for training.
+
+The dataset statistics are shown in Fig. 2, which depicts the data imbalance in both datasets. This is especially evident in TU-Berlin Ext., which has a huge domain-wise imbalance as well as class-wise imbalance. These real-world datasets reinforce the importance of handling data imbalance for the ZS-SBIR task.
+
+# 5.1 State-of-the-art ZS-SBIR approaches integrated with AMDReg
+
+As already mentioned, the proposed AMDReg is generic and can be seamlessly integrated with most state-of-the-art ZS-SBIR approaches for handling the training data imbalance. Here, we have integrated AMDReg with three state-of-the-art approaches, namely (1) Semantically-tied paired cycle-consistency based
+
+
+Fig. 2. Dataset statistics of sketches and images of Sketchy-extended and TU-Berlin-extended are shown in the first two and last two plots respectively in that order.
+
+
+
+
+
+
+
+network (SEM-PCYC) [6]; (2) Semantic-aware knowledge preservation for ZS-SBIR (SAKE) [16]; and (3) Style-guided network for ZS-SBIR [7]. We now briefly describe the three approaches along with the integration of AMDReg.
+
+SEM-PCYC [6] with AMDReg: SEM-PCYC is a generative model with two separate branches, one each for the image and sketch modalities, which learn visual-to-semantic mappings with a cycle-consistency loss. Further, to ensure that the semantic output of the generators is also class-discriminative, a classification loss is used. This classifier is pre-trained on seen-class training data and kept frozen while the whole retrieval model is trained. We modify the training methodology by enabling the classifier to train along with the rest of the model, by including the AMDReg with the CE-loss. Here, the semantic information is enforced through an autoencoder, which takes a hierarchical and a text-based model as input, and thus the weights are randomly initialized. Please refer to [6] for more details.
+
+SAKE [16] with AMDReg: This ZS-SBIR method extends the concept of domain adaptation for fine-tuning a model pre-trained on ImageNet [23] to the specific ZS-SBIR datasets. The network contains a shared branch to extract features from both sketches and images, which are later used for the categorical classification task using the soft-max CE-loss. Simultaneously, the semantic structure with respect to the ImageNet [23] classes is maintained. Here also, we modify the CE-loss using the proposed AMDReg to mitigate the adverse effect of training data imbalance. The rest of the branches and the proposed SAKE-loss remain unchanged. Please refer to [16] for more details of the base algorithm.
+
+Style-guide [7] with AMDReg: This is a two-step process, where the shared latent space is learnt first. Then, the latent-space content extracted from the sketch query is combined with the styles of the relevant images to obtain the final retrieval in the image space. While learning the latent space, a distance-based cross-entropy loss is used, which is modified as explained in detail earlier. Please refer to [7] for more details of the base algorithm.
+
+Implementation Details: The proposed regularizer is implemented using PyTorch. We use a single Nvidia GeForce GTX TITAN X for all our experiments.
+
+Table 3. Performance of several state-of-the-art approaches for ZS-SBIR and generalized ZS-SBIR.
+
+| Task | Algorithm | TU-Berlin Ext. MAP@all | TU-Berlin Ext. Prec@100 | Sketchy Ext. MAP@all | Sketchy Ext. Prec@100 |
+| --- | --- | --- | --- | --- | --- |
+| SBIR | Softmax Baseline | 0.089 | 0.143 | 0.114 | 0.172 |
+| | Siamese CNN [22] | 0.109 | 0.141 | 0.132 | 0.175 |
+| | SaN [33] | 0.089 | 0.108 | 0.115 | 0.125 |
+| | GN Triplet [25] | 0.175 | 0.253 | 0.204 | 0.296 |
+| | 3D shape [28] | 0.054 | 0.067 | 0.067 | 0.078 |
+| | DSH (binary) [15] | 0.129 | 0.189 | 0.171 | 0.231 |
+| | GDH (binary) [35] | 0.135 | 0.212 | 0.187 | 0.259 |
+| ZSL | CMT [27] | 0.062 | 0.078 | 0.087 | 0.102 |
+| | DeViSE [10] | 0.059 | 0.071 | 0.067 | 0.077 |
+| | SSE [36] | 0.089 | 0.121 | 0.116 | 0.161 |
+| | JLSE [37] | 0.109 | 0.155 | 0.131 | 0.185 |
+| | SAE [13] | 0.167 | 0.221 | 0.216 | 0.293 |
+| | FRWGAN [9] | 0.110 | 0.157 | 0.127 | 0.169 |
+| | ZSH [31] | 0.141 | 0.177 | 0.159 | 0.214 |
+| | ZSIH (binary) [26] | 0.223 | 0.294 | 0.258 | 0.342 |
+| | ZS-SBIR [32] | 0.005 | 0.001 | 0.196 | 0.284 |
+| Zero-Shot SBIR | SEM-PCYC [6] | 0.297 | 0.426 | 0.349 | 0.463 |
+| | SEM-PCYC + AMDReg | 0.330 | 0.473 | 0.397 | 0.494 |
+| | Style-guide [7] | 0.254 | 0.355 | 0.375 | 0.484 |
+| | Style-guide + AMDReg | 0.291 | 0.376 | 0.410 | 0.512 |
+| | SAKE [16] | 0.428* | 0.534* | 0.547 | 0.692 |
+| | SAKE + AMDReg | 0.447 | 0.574 | 0.551 | 0.715 |
+| Generalized Zero-Shot SBIR | Style-guide [7] | 0.149 | 0.226 | 0.330 | 0.381 |
+| | SEM-PCYC [6] | 0.192 | 0.298 | 0.307 | 0.364 |
+| | SEM-PCYC + AMDReg | 0.245 | 0.303 | 0.320 | 0.398 |
+
+For all experiments, we set $\lambda = 10^{3}$ and $K = 1$. We use the Adam optimizer with $\beta_{1} = 0.5$, $\beta_{2} = 0.999$ and a learning rate of $10^{-3}$. The different baselines are implemented, and their hyper-parameters chosen, as described in the corresponding papers.
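
As a concrete illustration, the optimizer settings above translate to the following PyTorch snippet. The `Linear` module is only a stand-in for the actual retrieval networks, and $\lambda$ and $K$ enter the AMDReg loss rather than the optimizer; treat this as a sketch of the configuration, not the authors' code.

```python
import torch

# Stand-in module; the real networks are the three base ZS-SBIR models.
model = torch.nn.Linear(512, 300)

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,             # lr = 10^-3
    betas=(0.5, 0.999),  # beta_1 = 0.5, beta_2 = 0.999
)
```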
+
+# 5.2 Evaluation for ZS-SBIR
+
+Here, we report the results of the modifications to the state-of-the-art ZS-SBIR approaches. We first train all three original models (as described above) to replicate the results reported in the respective papers. Using the code released by the authors, we are able to replicate all reported results for SEM-PCYC and Style-guide. For SAKE, however, the results we obtain in two cases differ slightly from those reported in the paper, so for a fair evaluation of the proposed improvement we report the numbers we obtained (marked with a star to indicate that they differ from the paper's reported numbers).
+
+We incorporate the proposed AMDReg modifications into all three approaches and retrain the models. The results are reported in Table 3. All the
+
+
+Fig. 3. Performance comparison of the base model (SEM-PCYC) and the base model modified with the proposed AMDReg: (a) examples of the top-5 retrieved images for unseen sketch queries from the TU-Berlin dataset; (b) P-R curve on the Sketchy dataset; (c) P-R curve on the TU-Berlin dataset.
+
+
+
+
+
+results of the other approaches are taken directly from [6]. We observe significant improvement in the performance of all the state-of-the-art approaches when they are trained using the proposed regularizer. This experiment shows that by handling the data imbalance inherently present in the collected data, it is possible to gain significant improvement in the final performance. Since AMDReg is generic, it can potentially be incorporated into other approaches developed for the ZS-SBIR task to handle the training data imbalance problem.
+
+Fig. 3 shows top-5 retrieved results for a few unseen queries (first column), using SEM-PCYC as the baseline model, without and with AMDReg, respectively. We observe significant improvement when AMDReg is used, justifying its effectiveness. We make similar observations from the P-R curves in Fig. 3.
+
+# 5.3 Evaluation for Generalized ZS-SBIR
+
+In real scenarios, the search set may contain both seen and unseen image samples, which makes the problem considerably more challenging. This setting is termed generalized ZS-SBIR. To evaluate the effectiveness of the proposed AMDReg in this scenario, we follow the experimental protocol of [6] and use SEM-PCYC [6] as the base model. From the results in Table 3, we observe that AMDReg significantly improves the performance of the base model and yields state-of-the-art results in three of the four cases. Only on Sketchy Ext. does it perform slightly below Style-guide, while still improving upon its own baseline.
+
+# 5.4 Evaluation for SBIR
+
+Though the main purpose of this work is to analyze the effect of training data imbalance on generalization to unseen classes, the approach should also benefit standard SBIR in the presence of imbalance. We observe from Table 4 that the
+
+Table 4. SBIR evaluation (MAP@200) of Baseline Model [7] on mini-Sketchy.
+
+| Balanced Data | Step Imb. (p=100) | GAN-based Aug. [29] | CB Focal Loss [4] | Diversity Regularizer [11] | Proposed AMDReg |
+| --- | --- | --- | --- | --- | --- |
+| 0.839 | 0.571 | 0.580 | 0.613 | 0.636 | 0.647 |
+
+performance of SBIR indeed decreases drastically with training data imbalance. The proposed AMDReg mitigates this by a significant margin compared with the other state-of-the-art imbalance-handling techniques. We further analyze the performance of SEM-PCYC [6] on the Sketchy Ext. dataset under the standard SBIR protocol with and without AMDReg. We observe significant improvement when the proposed AMDReg is used (MAP@all: 0.811; Prec@100: 0.897) compared with the baseline SEM-PCYC (MAP@all: 0.771; Prec@100: 0.871).
+
+# 6 Conclusion
+
+In this work, for the first time in the literature, we analyzed the effect of training data imbalance on generalization to unseen classes in the context of ZS-SBIR. We observe that most real-world SBIR datasets are in fact imbalanced, and that this imbalance adversely affects generalization. We systematically evaluate several state-of-the-art imbalance-mitigating approaches (developed for classification) on this problem. Additionally, we propose a novel adaptive margin diversity regularizer (AMDReg), which ensures that the shared latent-space embeddings of the images and sketches account for the data imbalance in the training set. The proposed regularizer is generic, and we show how it can be seamlessly incorporated into three existing state-of-the-art ZS-SBIR approaches with slight modifications. Finally, we show that the proposed AMDReg results in significant improvement under both the ZS-SBIR and generalized ZS-SBIR protocols, setting a new state of the art.
+
+# Acknowledgement
+
+This work is partly supported through a research grant from SERB, Department of Science and Technology, Government of India.
+
+# References
+
+1. Barandela, R., Rangel, E., Sanchez, J.S., Ferri, F.J.: Restricted decontamination for the imbalanced training sample problem. Iberoamerican Congress on Pattern Recognition, Springer (2003)
+2. Cao, K., Wei, C., Gaidon, A., Arechiga, N., Ma, T.: Learning imbalanced datasets with label-distribution-aware margin loss. NeurIPS (2019)
+3. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321-357 (2002)
+4. Cui, Y., Jia, M., Lin, T.Y., Song, Y.: Class-balanced loss based on effective number of samples. CVPR (2019)
+5. Dey, S., Riba, P., Dutta, A., Llados, J., Song, Y.Z.: Doodle to search: practical zero-shot sketch-based image retrieval. CVPR (2019)
+6. Dutta, A., Akata, Z.: Semantically tied paired cycle consistency for zero-shot sketch-based image retrieval. CVPR (2019)
+7. Dutta, T., Biswas, S.: Style-guided zero-shot sketch-based image retrieval. BMVC (2019)
+8. Eitz, M., Hays, J., Alexa, M.: How do humans sketch objects? ACM TOG 31(4), 44.1-44.10 (2012)
+9. Felix, R., Kumar, V.B., Reid, I., Carneiro, G.: Multi-modal cycle-consistent generalized zero-shot learning. In: ECCV (2018)
+10. Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Ranzato, M., Mikolov, T.: Devise: A deep visual-semantic embedding model. In: NeurIPS (2013)
+11. Hayat, M., Khan, S., Zamir, S.W., Shen, J., Shao, L.: Gaussian affinity for max-margin class imbalanced learning. ICCV (2019)
+12. Hu, R., Collomosse, J.: A performance evaluation of gradient field hog descriptor for sketch based image retrieval. CVIU 117(7), 790-806 (2013)
+13. Kodirov, E., Xiang, T., Gong, S.: Semantic autoencoder for zero-shot learning. In: CVPR (2017)
+14. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollar, P.: Focal loss for dense object detection. arXiv:1708.02002 [cs.CV] (2018)
+15. Liu, L., Shen, F., Shen, Y., Liu, X., Shao, L.: Deep sketch hashing: fast free-hand sketch-based image retrieval. CVPR (2017)
+16. Liu, Q., Xie, L., Wang, H., Yuille, A.: Semantic-aware knowledge preservation for zero-shot sketch-based image retrieval. ICCV (2019)
+17. Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., Yu, S.X.: Large-scale long-tailed recognition in an open world. In: CVPR (2019)
+18. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. NeurIPS (2013)
+19. Mishra, A., Reddy, S.K., Mittal, A., Murthy, H.A.: A generative model for zero-shot learning using conditional variational auto-encoders. CVPR-W (2018)
+20. Pennington, J., Socher, R., Manning, C.D.: Glove: global vectors for word representation. EMNLP (2014)
+21. Qi, H., Brown, M., Lowe, D.G.: Low-shot learning with imprinted weights. CVPR (2018)
+22. Qi, Y., Song, Y.Z., Zhang, H., Liu, J.: Sketch-based image retrieval via siamese convolutional neural network. ICIP (2016)
+23. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Li, F.F.: Imagenet: large-scale visual recognition challenge. IJCV 115(3), 211-252 (2015)
+
+24. Saavedra, J.M., Barrios, J.M.: Sketch-based image retrieval using learned keyshapes (lks). BMVC (2015)
+25. Sangkloy, P., Burnell, N., Ham, C., Hays, J.: The sketchy database: learning to retrieve badly drawn bunnies. ACM TOG (2016)
+26. Shen, Y., Liu, L., Shen, F., Shao, L.: Zero-shot sketch-image hashing. CVPR (2018)
+27. Socher, R., Ganjoo, M., Manning, C.D., Ng, A.: Zero-shot learning through cross-modal transfer. In: NeurIPS (2013)
+28. Wang, F., Kang, L., Li, Y.: Sketch-based 3d shape retrieval using convolutional neural networks. CVPR (2015)
+29. Xian, Y., Lorenz, T., Schiele, B., Akata, Z.: Feature generating networks for zero-shot learning. CVPR (2018)
+30. Xian, Y., Sharma, S., Schiele, B., Akata, Z.: f-vaegan-d2: A feature generating framework for any-shot learning. CVPR (2019)
+31. Yang, Z., Cohen, W.W., Salakhutdinov, R.: Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861 (2016)
+32. Yelamarthi, S.K., Reddy, S.K., Mishra, A., Mittal, A.: A zero-shot framework for sketch-based image retrieval. ECCV (2018)
+33. Yu, Q., Yang, Y., Liu, F., Song, Y.Z., Xiang, T., Hospedales, T.M.: Sketch-a-net that beats humans. BMVC (2015)
+34. Zhang, J., Liu, S., Zhang, C., Ren, W., Wang, R., Cao, X.: Sketchnet: sketch classification with web images. CVPR (2016)
+35. Zhang, J., Shen, F., Liu, L., Zhu, F., Yu, M., Shao, L., Shen, H.T., Van Gool, L.: Generative domain-migration hashing for sketch-to-image retrieval. ECCV (2018)
+36. Zhang, R., Lin, L., Zhang, R., Zuo, W., Zhang, L.: Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE Transactions on Image Processing 24(12), 4766-4779 (2015)
+37. Zhang, Z., Saligrama, V.: Zero-shot learning via joint latent similarity embedding. In: CVPR (2016)
\ No newline at end of file
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/images.zip b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f8ae1ab23bec0d0d408be781e286402443ee666a
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:298c8ebc5b4a4d605d45b3385ae13320b76fe8ba13bb7a85868d458e65c47adb
+size 472159
diff --git a/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/layout.json b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..859ef36f15f477f25305fac53c7dd4fe13de9e54
--- /dev/null
+++ b/adaptivemargindiversityregularizerforhandlingdataimbalanceinzeroshotsbir/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbc933e3def4928245ae4ef5b60f03e2e786855ec8859021cbb3e3c80cf80feb
+size 320170
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_content_list.json b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1fd3ccd6768688f930b894fadbf1d44b52f51e43
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb35f5cd811107daa302044d62a89c5637977539ddb59b5d0858d4d5903e4ff6
+size 79870
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_model.json b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6a6e65006d178366e2e52cb167582727259b66e
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79a992d287a9ebe42026d9dc14461f4812c9f65b5841bdd8b7b4768a73303062
+size 94407
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_origin.pdf b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d7c9ba2e2bc1b9f5971b115d01cdeeae7f8286e4
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/8104dbf5-7c8b-4e86-821c-1a6135e556ba_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ccd72af1ad4bf5b48724bc3b03dacb3c1c6bba023d1f211dbfb2fe1422f34c0
+size 7247028
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/full.md b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..553e4ce3d32c4d30b81440751293cff2b4f20a6a
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/full.md
@@ -0,0 +1,342 @@
+# Adaptive Mixture Regression Network with Local Counting Map for Crowd Counting
+
+Xiyang Liu $^{1\star}$ , Jie Yang $^{2}$ , Wenrui Ding $^{3\star\star}$ , Tieqiang Wang $^{4}$ , Zhijin Wang $^{2}$ , and Junjun Xiong $^{2}$
+
+$^{1}$ School of Electronic and Information Engineering, Beihang University
+
+$^{2}$ Shunfeng Technology (Beijing) Co., Ltd
+
+$^{3}$ Institute of Unmanned Systems, Beihang University
+
+$^{4}$ Institute of Automation, Chinese Academy of Sciences {xiyangliu, ding}@buaa.edu.cn, jieyang2@sfmail.sf-express.com
+
+Abstract. The crowd counting task aims at estimating the number of people in an image or a video frame. Existing methods widely adopt density maps as the training targets to optimize a point-to-point loss, while in the testing phase we only focus on the differences between the crowd numbers and the global summation of density maps, which indicates an inconsistency between the training targets and the evaluation criteria. To solve this problem, we introduce a new target, named local counting map (LCM), which obtains more accurate results than density map based approaches. Moreover, we also propose an adaptive mixture regression framework with three modules in a coarse-to-fine manner to further improve the precision of the crowd estimation: a scale-aware module (SAM), a mixture regression module (MRM) and an adaptive soft interval module (ASIM). Specifically, SAM fully utilizes the context and multi-scale information from different convolutional features, while MRM and ASIM perform more precise counting regression on local patches of images. Compared with current methods, the proposed method reports better performance on the typical datasets. The source code is available at https://github.com/xiyang1012/Local-Crowd-Counting.
+
+Keywords: Crowd Counting, Local Counting Map, Adaptive Mixture Regression Network
+
+# 1 Introduction
+
+The main purpose of visual crowd counting is to estimate the number of people in static images or video frames. Different from pedestrian detection [12,18,15], crowd counting datasets only provide the center points of heads instead of precise bounding boxes of bodies, so most existing methods predict a density map [11] to calculate the crowd number. For example, CSRNet [13] learned a powerful convolutional neural network (CNN) to produce a density map with the
+
+
+Fig. 1. Training loss curves (left) and testing loss curves (right) of two networks sharing a VGG16 backbone but trained with different regression targets, the density map and the local counting map, on the ShanghaiTech Part A dataset. The network trained with the local counting map shows lower error and more stable performance on the testing set than the one trained with the density map
+
+
+
+same size as the input image. Generally, for an input image, the ground truth of its density map is constructed via a Gaussian convolution with a fixed or adaptive kernel on the center points of heads. Finally, the counting result can be represented via the summation of the density map.
+
+In recent years, benefiting from the powerful representation learning ability of deep learning, crowd counting research has mainly focused on CNN based methods [36,25,3,20,1] that generate high-quality density maps. The mean absolute error (MAE) and mean squared error (MSE) are adopted as the evaluation metrics of the crowd counting task. However, we observe an inconsistency problem in density map based methods: the training process minimizes the $L_{1}/L_{2}$ error of the density map, which actually represents a point-to-point loss [6], while the evaluation metrics in the testing stage only consider the differences between the ground-truth crowd numbers and the overall summation of the density maps. Therefore, the model with the minimum training error on the density map does not ensure the optimal counting result at test time.
+
+To address this issue, we introduce a new learning target, named local counting map (LCM), in which each value represents the crowd number of a local patch, rather than a probability indicating the presence of a person as in the density map. In Sec. 3.1, we prove through a mathematical inequality deduction that LCM is closer to the evaluation metric than the density map. As shown in Fig. 1, LCM markedly alleviates the inconsistency problem brought by the density map. We also give an intuitive example to illustrate the prediction differences between LCM and the density map. As shown in Fig. 2, the red box marks a dense region and the yellow one a sparse region. The density map prediction is not reliable in dense areas, while LCM produces more accurate counts in these regions.
+
+To further improve the counting performance, we propose an adaptive mixture regression framework to give an accurate estimation of crowd numbers in
+
+
+
+(a) GT-DM (b) ES-DM (c) GT-LCM (d) ES-LCM
+
+Fig. 2. An intuitive comparison between the local counting map (LCM) and the density map (DM) on local areas. LCM gives more accurate estimated counts in both densely (red box) and sparsely (yellow box) populated areas. (GT-DM: ground-truth DM; ES-DM: estimated DM; GT-LCM: ground-truth LCM; ES-LCM: estimated LCM)
+
+a coarse-to-fine manner. Specifically, our approach mainly includes three modules: 1) a scale-aware module (SAM) to fully utilize the context and multi-scale information contained in feature maps from different layers; 2) a mixture regression module (MRM); and 3) an adaptive soft interval module (ASIM), which together perform precise counting regression on local patches of images.
+
+In summary, the main contributions in this work are in the followings:
+
+- We introduce a new learning target LCM, which alleviates the inconsistency problem between training targets and evaluation criteria, and reports better counting performance compared with the density map.
+- We propose an adaptive mixture regression framework in a coarse-to-fine manner, which fully utilizes the context and multi-scale information from different convolutional features and performs more accurate counting regression on local patches.
+
+The rest of the paper is described as follows: Sec. 2 reviews the previous work of crowd counting; Sec. 3 details our method; Sec. 4 presents the experimental results on typical datasets; Sec. 5 concludes the paper.
+
+# 2 Related Work
+
+Recently, CNN based approaches have become the focus of crowd counting research. According to their regression targets, they can be classified into two categories: density estimation based approaches and direct counting regression ones.
+
+# 2.1 Density Estimation based Approaches
+
+The early work [11] defined the concept of density map and transformed the counting task to estimate the density map of an image. The integral of density
+
+map in any image area is equal to the count of people in the area. Afterwards, Zhang et al. [35] used CNN to regress both the density map and the global count. It laid the foundation for subsequent works based on CNN methods. To improve performance, some methods aimed at improving network structures. MCNN [36] and Switch-CNN [2] adopted multi-column CNN structures for mapping an image to its density map. CSRNet [13] removed multi-column CNN and used dilated convolution to expand the receptive field. SANet [3] introduced a novel encoder-decoder network to generate high-resolution density maps. HACNN [26] employed attention mechanisms at various CNN layers to selectively enhance the features. PaDNet [29] proposed a novel end-to-end architecture for pan-density crowd counting. Other methods aimed at optimizing the loss function. ADMG [30] produced a learnable density map representation. SPANet [6] put forward MEP loss to find the pixel-level subregion with high discrepancy to the ground truth. Bayesian Loss [17] presented a Bayesian loss to adopt a more reliable supervision on the count expectation at each annotated point.
+
+# 2.2 Direct Counting Regression Approaches
+
+Counting regression approaches directly estimate the global or local count of an input image. This idea was first adopted in [5], which proposed a multi-output regressor to estimate the counts of people in spatially local regions. Afterwards, Shang et al. [21] made a global estimation for a whole image and adopted a counting-number constraint as a regularization term. Lu et al. [16] regressed local counts of sub-images densely sampled from the input, and merged the normalized local estimations into a global prediction. Paul et al. [19] proposed redundant counting, generated with a square kernel instead of the Gaussian kernel adopted by the density map. Chattopadhyay et al. [4] employed a divide-and-conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Stahl et al. [28] adopted a local image division method to predict global image-level counts without using any form of local annotation. S-DCNet [32] exploited a spatial divide-and-conquer network that learns from a closed set and generalizes to open-set scenarios.
+
+Though many approaches have been proposed to generate high-resolution density maps or to predict global and local counts, robust crowd counting across diverse scenes remains hard. Different from previous methods, we first introduce a novel regression target and then adopt an adaptive mixture regression network in a coarse-to-fine manner for better crowd counting.
+
+# 3 Proposed Method
+
+In this section, we first introduce LCM in detail and prove its superiority over the density map in Sec. 3.1. After that, we describe the SAM, MRM and ASIM components of the adaptive mixture regression framework in Secs. 3.2, 3.3 and 3.4, respectively. The overview of our framework is shown in Fig. 3.
+
+
+Fig. 3. The overview of our framework, which mainly includes three modules: 1) the scale-aware module (SAM), used to enhance multi-scale information of feature maps via multi-column dilated convolutions; 2) the mixture regression module (MRM) and 3) the adaptive soft interval module (ASIM), used to regress feature maps to the probability vector $\pmb{p}_k$, the scaling factor $\gamma_k$ and the shifting vector $\beta_k$ of the $k$-th mixture, respectively. We adopt the feature maps of layers 3, 4 and 5 as the inputs of SAM. The local counting map (LCM) is calculated from the parameters $\{\pmb{p}_k, \gamma_k, \beta_k\}$ via the point-wise operation in Eq. (8). For an input $M \times N$ image and a $w \times h$ patch size, the output of the entire framework is an $\frac{M}{w} \times \frac{N}{h}$ LCM
+
+# 3.1 Local Counting Map
+
+For a given image containing $n$ heads, the ground-truth annotation can be described as $GT(p) = \sum_{i=1}^{n} \delta(p - p_i)$ , where $p_i$ is the pixel position of the $i$ -th head's center point. Generally, the generation of the density map is based on a fixed or adaptive Gaussian kernel $G_\sigma$ , which is described as $D(p) = \sum_{i=1}^{n} \delta(p - p_i) * G_\sigma$ . In this work, we fix the spread parameter $\sigma$ of the Gaussian kernel as 15.
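
The ground-truth construction above can be sketched with a fixed-kernel implementation in NumPy. This is a hedged sketch: the kernel truncation radius and the border handling are implementation choices of this example, not specified by the paper.

```python
import numpy as np

def gaussian_kernel(sigma=15.0, radius=None):
    """Truncated, normalised 2-D Gaussian (truncated at 4*sigma)."""
    r = radius if radius is not None else int(4 * sigma)
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def density_map(shape, points, sigma=15.0):
    """D(p) = sum_i delta(p - p_i) * G_sigma, with sigma fixed to 15.

    `points` are (row, col) head-centre annotations. A padded canvas is
    used so each stamped kernel fits; mass near the border is cropped."""
    h, w = shape
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    canvas = np.zeros((h + 2 * r, w + 2 * r))
    for (y, x) in points:
        canvas[y:y + 2 * r + 1, x:x + 2 * r + 1] += k  # stamp one head
    return canvas[r:h + r, r:w + r]
```

Away from the image border the kernel mass is preserved, so the map sums to the number of annotated heads.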
+
+Each value in LCM represents the crowd number of a local patch, rather than a probability indicating the presence of a person as in the density map. Because heads may lie on the boundary between two patches when an image is divided into regions, it is unreasonable to assign people to patches directly. Therefore, we generate LCM by summing the density map patch by patch; the crowd number of a local patch in the ground-truth LCM is then not a discrete value, but a continuous value calculated from the density map. The LCM can be described as the result of a non-overlapping sliding convolution operation as follows:
+
+$$
+LCM = D * \mathbf{1}_{(w,h)}, \tag{1}
+$$
+
+where $D$ is the density map, $\mathbf{1}_{(w,h)}$ is a $w \times h$ matrix of ones, and $(w,h)$ is the local patch size.
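
Since the patches are non-overlapping, Eq. (1) reduces to sum pooling with a $w \times h$ window. A minimal sketch, assuming the map dimensions are divisible by the patch size (as in the framework's $\frac{M}{w} \times \frac{N}{h}$ output):

```python
import numpy as np

def local_counting_map(density, patch=(64, 64)):
    """Each LCM cell is the summed density (people count) of one
    non-overlapping patch of the density map (Eq. (1))."""
    ph, pw = patch                      # patch height and width
    H, W = density.shape
    assert H % ph == 0 and W % pw == 0  # assumes divisible map size
    return density.reshape(H // ph, ph, W // pw, pw).sum(axis=(1, 3))
```

By construction the LCM preserves the global count: its summation equals the summation of the density map.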
+
+Next, we explain mathematically why LCM alleviates the inconsistency problem of the density map. For a test image, let $g_{i}$ be the $i$-th pixel of the ground-truth density map and $e_{i}$ the $i$-th pixel of the estimated density map. The total number of pixels in the image is $m$ and the number of pixels in a local patch is $t = w \times h$. The evaluation criterion of mean absolute error (MAE), the error of LCM (LCME) and the error of the density map (DME) can be calculated as follows:
+
+$$
+MAE = \left| \left(e_{1} + e_{2} + \dots + e_{m}\right) - \left(g_{1} + g_{2} + \dots + g_{m}\right) \right|, \tag{2}
+$$
+
+$$
+LCME = \left| \left(e_{1} + \dots + e_{t}\right) - \left(g_{1} + \dots + g_{t}\right) \right| + \dots + \left| \left(e_{m-t} + \dots + e_{m}\right) - \left(g_{m-t} + \dots + g_{m}\right) \right|, \tag{3}
+$$
+
+$$
+DME = \left| e_{1} - g_{1} \right| + \left| e_{2} - g_{2} \right| + \dots + \left| e_{m} - g_{m} \right|. \tag{4}
+$$
+
+According to the triangle inequality for absolute values, we get the relationship among them:
+
+$$
+MAE \leq LCME \leq DME. \tag{5}
+$$
+
+When $t = 1$ , we have LCME = DME. When $t = m$ , we get LCME = MAE. LCME provides a general form of loss function adopted for crowding counting. No matter what value $t$ takes, LCME proves to be a closer bound of MAE than DME theoretically.
+
+On the other hand, we clarify the advantages of LCME for training, compared with DME and MAE. 1) DME mainly trains the model to generate probability responses pixel by pixel. However, pixel-level position labels generated by a Gaussian kernel may be low-quality and inaccurate for training, due to severe occlusions, large variations of head size, shape and density, etc. There is also a gap between the training loss DME and the evaluation criterion MAE, so the model with the minimum training DME does not ensure the optimal counting result when testing with MAE. 2) MAE corresponds to direct global counting over an entire image. But global counting is an open-set problem, and since the crowd number ranges from 0 to $\infty$, MAE optimization makes the regression range greatly uncertain. Meanwhile, global counting ignores all spatial annotation information and cannot provide a visual density presentation of the prediction results. 3) LCM provides a more reliable training label than the density map: it discards the inaccurate pixel-level position information of density maps and focuses on the count values of local patches, while LCME also lessens the gap between DME and MAE. Therefore, we adopt LCME as the training loss rather than MAE or DME.
+
+# 3.2 Scale-aware Module
+
+Due to the irregular placement of cameras, the scales of heads in an image usually vary greatly, which brings a great challenge to the crowd counting task. To deal with this problem, we propose the scale-aware module (SAM) to enhance the multi-scale feature extraction capability of the network. Previous works, such as L2SM [33] and S-DCNet [32], mainly focused on the fusion of feature maps from different CNN layers and acquired multi-scale information through a feature pyramid network structure. Different from them, the proposed SAM achieves multi-scale information enhancement on a single-layer feature map and performs this operation at different convolutional layers to bring rich information to the subsequent regression modules.
+
+For fair comparisons, we adopt VGG16 as the backbone network for CNN-based feature extraction. As shown in Fig. 3, we enhance the feature maps of layers 3, 4 and 5 of the backbone through SAM, respectively. SAM first compresses the channels of the feature map via $1 \times 1$ convolution. Afterwards, the compressed feature map is processed through dilated convolutions with different dilation rates of 1, 2, 3 and 4 to perceive multi-scale features of heads. The extracted multi-scale feature maps are fused via a channel-wise concatenation operation and a $3 \times 3$ convolution. The size of the final feature map is consistent with that of the input.
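
The SAM described above can be sketched in a few lines of PyTorch (a minimal sketch; the compressed channel width `mid_ch` is an assumption, not a value from the paper):

```python
import torch
import torch.nn as nn

class SAM(nn.Module):
    """Scale-aware module sketch: 1x1 channel compression, parallel 3x3
    dilated convolutions (rates 1-4), channel-wise concatenation, and a
    3x3 fusion convolution that restores the input channel count."""

    def __init__(self, in_ch: int, mid_ch: int = 64):
        super().__init__()
        self.compress = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        self.branches = nn.ModuleList(
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=r, dilation=r)
            for r in (1, 2, 3, 4)
        )
        self.fuse = nn.Conv2d(4 * mid_ch, in_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.compress(x)
        y = torch.cat([branch(y) for branch in self.branches], dim=1)
        return self.fuse(y)  # same spatial size and channels as the input

sam = SAM(in_ch=512)                         # e.g. a VGG16 layer-5 feature map
out = sam(torch.randn(1, 512, 16, 16))
```

With `padding=r` matching `dilation=r`, each branch preserves the spatial size, so the fused output matches the input shape, as the text requires.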
+
+# 3.3 Mixture Regression Module
+
+Given a testing image, the crowd numbers of different local patches vary a lot, which implies great uncertainty in the estimation of local counts. Instead of treating the problem as a hard regression as in TasselNet [16], we model the estimation as a probabilistic combination over several intervals. We propose the MRM module to make the local regression more accurate in a coarse-to-fine manner.
+
+First, we discuss the case of coarse regression. For a certain local patch, we assume that the upper limit of its crowd count is $C_m$ . Thus, the number of people in this patch is considered to lie in $[0, C_m]$ . We equally divide $[0, C_m]$ into $s$ intervals, so the length of each interval is $\frac{C_m}{s}$ . The vector $\pmb{p} = [p_1, p_2, \dots, p_s]^T$ represents the probabilities of the $s$ intervals, and the vector $\pmb{v} = [v_1, v_2, \dots, v_s]^T = [\frac{1 \cdot C_m}{s}, \frac{2 \cdot C_m}{s}, \dots, C_m]^T$ represents the values of the $s$ intervals. Then the counting number $C_p$ of a local patch in coarse regression can be obtained as follows:
+
+$$
+C _ {p} = \boldsymbol {p} ^ {T} \boldsymbol {v} = \sum_ {i = 1} ^ {s} p _ {i} \cdot v _ {i} = \sum_ {i = 1} ^ {s} p _ {i} \cdot \frac {i \cdot C _ {m}}{s} = C _ {m} \sum_ {i = 1} ^ {s} \frac {p _ {i} \cdot i}{s}. \tag {6}
+$$
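
Concretely, Eq. (6) is an expectation over the interval values (all numbers below are invented for illustration):

```python
import numpy as np

C_m, s = 100.0, 5                            # assumed patch upper limit and interval count
p = np.array([0.7, 0.2, 0.05, 0.03, 0.02])   # predicted interval probabilities
v = np.arange(1, s + 1) * C_m / s            # interval values [20, 40, 60, 80, 100]

C_p = p @ v                                  # Eq. (6): expected local count, here 29.4
```

A patch whose probability mass sits in the first interval thus regresses to a small count, without ever predicting a raw number directly.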
+
+Next, we discuss the situation of fine mixture regression. We assume that the fine regression consists of $K$ mixtures and that the interval number of the $k$ -th mixture is $s_k$ . The vector $\pmb{p}$ of the $k$ -th mixture is $\pmb{p_k} = [p_{k,1}, p_{k,2}, \dots, p_{k,s_k}]^T$ and the vector $\pmb{v}$ is $\pmb{v_k} = [v_{k,1}, v_{k,2}, \dots, v_{k,s_k}]^T = [\frac{1 \cdot C_m}{\prod_{j=1}^k s_j}, \frac{2 \cdot C_m}{\prod_{j=1}^k s_j}, \dots, \frac{s_k \cdot C_m}{\prod_{j=1}^k s_j}]^T$ . The counting number $C_p$ of a local patch in mixture regression can be calculated as follows:
+
+$$
+C _ {p} = \sum_ {k = 1} ^ {K} \boldsymbol {p} _ {k} ^ {T} \boldsymbol {v} _ {k} = \sum_ {k = 1} ^ {K} \left(\sum_ {i = 1} ^ {s _ {k}} p _ {k, i} \cdot \frac {i _ {k} \cdot C _ {m}}{\prod_ {j = 1} ^ {k} s _ {j}}\right) = C _ {m} \sum_ {k = 1} ^ {K} \sum_ {i = 1} ^ {s _ {k}} \frac {p _ {k , i} \cdot i _ {k}}{\prod_ {j = 1} ^ {k} s _ {j}}. \tag {7}
+$$
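
The nested product in Eq. (7) can be accumulated mixture by mixture (the interval counts and probabilities below are hypothetical):

```python
import numpy as np

C_m = 100.0
s = [5, 4, 4]                                  # intervals per mixture (assumed)
rng = np.random.default_rng(1)
p = [rng.dirichlet(np.ones(sk)) for sk in s]   # per-mixture probability vectors

C_p, denom = 0.0, 1
for k, sk in enumerate(s):
    denom *= sk                                # prod_{j=1}^{k} s_j
    idx = np.arange(1, sk + 1)                 # interval indices of mixture k
    C_p += (p[k] * idx).sum() * C_m / denom    # Eq. (7)
```

Each later mixture contributes on a finer scale ($C_m/5$, then $C_m/20$, then $C_m/80$ here), which is exactly the coarse-to-fine refinement described next.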
+
+To illustrate the operation of MRM clearly, we take the regression with three mixtures $(K = 3)$ as an example. For the first mixture, the length of each interval is $C_m / s_1$ . The intervals are roughly divided, and the network learns a preliminary estimation of the degree of density, such as sparse, medium or dense. As the deeper features in the network contain richer semantic information, we adopt the feature map of layer 5 for the first mixture. For the second and third mixtures, the lengths of each interval are $C_m / (s_1 \times s_2)$ and $C_m / (s_1 \times s_2 \times s_3)$ , respectively. Based on the finer interval divisions of the second and third mixtures, the network performs more accurate and detailed regression. Since the shallower features in the network contain detailed texture information, we exploit the feature maps of layer 4 and layer 3 for the second and third mixtures of counting regression, respectively.
+
+# 3.4 Adaptive Soft Interval Module
+
+In Sec. 3.3, directly dividing the regression range into several non-overlapping intervals is very inflexible: regressing a value at a hard-divided interval boundary causes a significant error. Therefore, we propose ASIM, which can shift and scale intervals adaptively to make the regression process smooth.
+
+For the shifting process, we add an extra interval shifting factor $\beta_{k} = [\beta_{k,1},\beta_{k,2},\dots,\beta_{k,s_k}]^{T}$ , where $\beta_{k,i}$ represents the shift of the $i$ -th interval of the $k$ -th mixture, and the index of the $k$ -th mixture $\bar{i}_k$ can be updated to $\bar{i}_k = i_k + \beta_{k,i}$ .
+
+For the scaling process, similar to the shifting process, we add an additional interval scaling factor $\gamma_k$ to represent the interval scaling of each mixture, and the interval number of the $k$ -th mixture $\overline{s}_k$ can be updated to $\overline{s}_k = s_k(1 + \gamma_k)$ .
+
+The network can get the output parameters $\{\pmb{p}_k, \gamma_k, \pmb{\beta}_k\}$ for an input image. Based on Eq. (7) and the given parameters $C_m$ and $s_k$ , we can update the mixture regression result $C_p$ to:
+
+$$
+C _ {p} = C _ {m} \sum_ {k = 1} ^ {K} \sum_ {i = 1} ^ {s _ {k}} \frac {p _ {k , i} \cdot \bar {i} _ {k}}{\prod_ {j = 1} ^ {k} \bar {s} _ {j}} = C _ {m} \sum_ {k = 1} ^ {K} \sum_ {i = 1} ^ {s _ {k}} \frac {p _ {k , i} \cdot \left(i _ {k} + \beta_ {k , i}\right)}{\prod_ {j = 1} ^ {k} \left[ s _ {j} (1 + \gamma_ {j}) \right]}. \tag {8}
+$$
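
Extending the mixture computation, the shift and scale factors enter Eq. (8) as follows (the factor magnitudes below are invented; in the network they come from Tanh/Sigmoid heads):

```python
import numpy as np

C_m, s = 100.0, [5, 4, 4]                            # assumed parameters
rng = np.random.default_rng(2)
p     = [rng.dirichlet(np.ones(sk)) for sk in s]     # interval probabilities
beta  = [0.1 * rng.uniform(-1, 1, sk) for sk in s]   # interval shifts beta_{k,i}
gamma = 0.05 * rng.uniform(-1, 1, len(s))            # interval scalings gamma_k

C_p, denom = 0.0, 1.0
for k, sk in enumerate(s):
    denom *= sk * (1.0 + gamma[k])                   # product of shifted bar{s}_j
    idx = np.arange(1, sk + 1) + beta[k]             # bar{i} = i + beta
    C_p += (p[k] * idx).sum() * C_m / denom          # Eq. (8)
```

Small shifts and scalings perturb the interval boundaries continuously, so counts near a hard boundary no longer snap to a fixed grid value.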
+
+Now, we detail the specific implementation of MRM and ASIM. As shown in Fig. 3, for the feature maps from SAM, we downsample them to size $\frac{M}{w} \times \frac{N}{h}$ through a two-stream model ( $1 \times 1$ convolution with average pooling, and $1 \times 1$ convolution with max pooling) followed by a channel-wise concatenation operation. In this way, we obtain a fused feature map from the two-stream model that avoids the excessive information loss caused by down-sampling. With linear mappings via $1 \times 1$ convolutions and different activation functions (ReLU, Tanh and Sigmoid), we get the regression factors $\{\pmb{p}_k,\gamma_k,\pmb{\beta}_k\}$ , respectively. Note that $\{\pmb{p}_k,\gamma_k,\pmb{\beta}_k\}$ are the outputs of the MRM and ASIM modules and depend only on the input image. The LCM is calculated from the parameters $\{\pmb{p}_k,\gamma_k,\pmb{\beta}_k\}$ via the point-wise operation in Eq. (8), and the crowd number is obtained via global summation over the LCM. The entire network can be trained end-to-end. The target of network optimization is the $L_{1}$ distance between the estimated LCM ( $LCM^{es}$ ) and the ground-truth LCM ( $LCM^{gt}$ ), which is defined as $Loss = \|LCM^{es} - LCM^{gt}\|_1$ .
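
The training objective and the count read-out can be sketched in a few lines (the $8 \times 8$ map shape is a toy assumption; the real LCM size depends on the image and the patch size):

```python
import numpy as np

rng = np.random.default_rng(3)
lcm_es = rng.random((8, 8))             # estimated LCM (toy 8x8 grid of patches)
lcm_gt = rng.random((8, 8))             # ground-truth LCM

loss  = np.abs(lcm_es - lcm_gt).sum()   # L1 distance used as the training loss
count = lcm_es.sum()                    # predicted crowd number: sum over the LCM
```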
+
+# 4 Experiments
+
+In this section, we first introduce four public challenging datasets and the essential implementation details in our experiments. After that, we compare our method with state-of-the-art methods. Finally, we conduct extensive ablation studies to prove the effectiveness of each component of our method.
+
+# 4.1 Datasets
+
+We evaluate our method on four publicly available crowd counting benchmark datasets: ShanghaiTech [36] Part A and Part B, UCF-QNRF [8] and UCF-CC-50 [7]. These datasets are introduced as follows.
+
+ShanghaiTech. The ShanghaiTech dataset [36] consists of two parts: Part A and Part B, with a total of 330,165 annotated heads. Part A is collected from the Internet and represents highly congested scenes, where 300 images are used for training and 182 images for testing. Part B is collected from shopping street surveillance cameras and represents relatively sparse scenes, where 400 images are used for training and 316 images for testing.
+
+UCF-QNRF. The UCF-QNRF dataset [8] is a large crowd counting dataset with 1535 high-resolution images and 1.25 million annotated heads, where 1201 images are used for training and 334 images for testing. It contains extremely dense scenes, where the maximum crowd count of an image can reach 12,865. Due to the large resolution of images in this dataset, we resize the long side of each image to within 1920 pixels to reduce memory occupancy.
+
+UCF-CC-50. The UCF-CC-50 dataset [7] is an extremely challenging dataset, containing 50 annotated images of complicated scenes collected from the Internet. In addition to different resolutions, aspect ratios and perspective distortions, this dataset also has large variation in crowd numbers, ranging from 94 to 4543.
+
+# 4.2 Implementation Details
+
+Evaluation Metrics. We adopt mean absolute error (MAE) and mean squared error (MSE) as metrics to evaluate the accuracy of crowd counting estimation, which are defined as:
+
+$$
+M A E = \frac {1}{N} \sum_ {i = 1} ^ {N} \left| C _ {i} ^ {e s} - C _ {i} ^ {g t} \right|, \quad M S E = \sqrt {\frac {1}{N} \sum_ {i = 1} ^ {N} \left(C _ {i} ^ {e s} - C _ {i} ^ {g t}\right) ^ {2}}, \tag {9}
+$$
+
+where $N$ is the total number of testing images, $C_i^{es}$ (resp. $C_i^{gt}$ ) is the estimated (resp. ground-truth) count of the $i$ -th image, which can be calculated by summing the estimated (resp. ground-truth) LCM of the $i$ -th image.
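
As a concrete illustration of Eq. (9) (the per-image counts below are invented):

```python
import numpy as np

c_es = np.array([105.0, 98.0, 512.0])   # estimated counts (sum of each LCM^es)
c_gt = np.array([100.0, 95.0, 530.0])   # ground-truth counts

mae = np.abs(c_es - c_gt).mean()                 # (5 + 3 + 18) / 3
mse = np.sqrt(((c_es - c_gt) ** 2).mean())       # sqrt((25 + 9 + 324) / 3)
```

MSE penalizes the single large error (18) much more than MAE does, which is why both metrics are reported together.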
+
+Data Augmentation. To ensure that our network can be sufficiently trained and generalizes well, we randomly crop an area of $m \times m$ pixels from the original image for training. For the ShanghaiTech Part B and UCF-QNRF datasets, $m$ is set to 512. For the ShanghaiTech Part A and UCF-CC-50 datasets, $m$ is set to 384. Random mirroring is also performed during training. In testing, we use the original image for inference without crop or resize operations. For a fair comparison with the previous typical works CSRNet [13] and SANet [3], we do not add scale augmentation during the training and testing stages.
+
+Training Details. Our method is implemented in PyTorch. All experiments are carried out on a server with an Intel Xeon 16-core CPU (3.5 GHz), 64 GB RAM and a single Titan Xp GPU. The backbone of the network is directly adopted from the convolutional layers of VGG16 [24] pretrained on ImageNet, and the other convolutional layers employ random Gaussian initialization with a standard deviation of 0.01. The learning rate is initially set to $10^{-5}$ . The number of training epochs is set to 400 and the batch size is set to 1. We train our networks with Adam optimization [10] by minimizing the loss function.
+
+# 4.3 Comparisons with State of the Art
+
+The proposed method exhibits outstanding performance on all the benchmarks. Quantitative comparisons with state-of-the-art methods on the four datasets are presented in Table 1 and Table 2. In addition, we also show visual comparisons in Fig. 6.
+
+ShanghaiTech. We compare the proposed method with multiple classic methods on the ShanghaiTech Part A & Part B datasets, where it achieves significant performance improvements. On Part A, our method improves MAE by $9.69\%$ and MSE by $14.47\%$ compared with CSRNet, and improves MAE by $8.07\%$ and MSE by $5.42\%$ compared with SANet. On Part B, our method improves MAE by $33.77\%$ and MSE by $31.25\%$ compared with CSRNet, and improves MAE by $16.43\%$ and MSE by $19.12\%$ compared with SANet.
+
+UCF-QNRF. We then compare the proposed method with other related methods on the UCF-QNRF dataset. To the best of our knowledge, UCF-QNRF is currently the largest and most widely distributed crowd counting dataset. Bayesian Loss [17] achieves 88.7 in MAE and 154.8 in MSE, which currently maintains the highest accuracy on this dataset, while our method improves MAE by $2.37\%$ and MSE by $1.68\%$ , respectively.
+
+UCF-CC-50. We also conduct experiments on the UCF-CC-50 dataset. The crowd numbers in its images vary from 94 to 4543, bringing a great challenge for crowd counting. We follow the 5-fold cross-validation protocol of [7] to evaluate our method. Even with a small number of training images, our network still converges well on this dataset. Compared with the latest method Bayesian Loss [17], our method improves MAE by $19.76\%$ and MSE by $18.18\%$ , achieving state-of-the-art performance.
+
+Table 1. Comparisons with state-of-the-art methods on ShanghaiTech Part A and Part B datasets
+
+| Method | Part A MAE | Part A MSE | Part B MAE | Part B MSE |
+| --- | --- | --- | --- | --- |
+| MCNN [36] | 110.2 | 173.2 | 26.4 | 41.3 |
+| Switch-CNN [2] | 90.4 | 135.0 | 21.6 | 33.4 |
+| CP-CNN [25] | 73.6 | 106.4 | 20.1 | 30.1 |
+| CSRNet [13] | 68.2 | 115.0 | 10.6 | 16.0 |
+| SANet [3] | 67.0 | 104.5 | 8.4 | 13.6 |
+| PACNN [22] | 66.3 | 106.4 | 8.9 | 13.5 |
+| SFCN [31] | 64.8 | 107.5 | 7.6 | 13.0 |
+| Encoder-Decoder [9] | 64.2 | 109.1 | 8.2 | 12.8 |
+| CFF [23] | 65.2 | 109.4 | 7.2 | 12.2 |
+| Bayesian Loss [17] | 62.8 | 101.8 | 7.7 | 12.7 |
+| SPANet+CSRNet [6] | 62.4 | 99.5 | 8.4 | 13.2 |
+| RANet [34] | 59.4 | 102.0 | 7.9 | 12.9 |
+| PaDNet [29] | 59.2 | 98.1 | 8.1 | 12.2 |
+| Ours | 61.59 | 98.36 | 7.02 | 11.00 |
+
+Table 2. Comparisons with state-of-the-art methods on UCF-QNRF and UCF-CC-50 datasets
+
+| Method | UCF-QNRF MAE | UCF-QNRF MSE | UCF-CC-50 MAE | UCF-CC-50 MSE |
+| --- | --- | --- | --- | --- |
+| MCNN [36] | 277 | 426 | 377.6 | 509.1 |
+| Switch-CNN [2] | 228 | 445 | 318.1 | 439.2 |
+| Composition Loss [8] | 132 | 191 | - | - |
+| Encoder-Decoder [9] | 113 | 188 | 249.4 | 354.5 |
+| RANet [34] | 111 | 190 | 239.8 | 319.4 |
+| S-DCNet [32] | 104.4 | 176.1 | 204.2 | 301.3 |
+| SFCN [31] | 102.0 | 171.4 | 214.2 | 318.2 |
+| DSSINet [14] | 99.1 | 159.2 | 216.9 | 302.4 |
+| MBTTBF [27] | 97.5 | 165.2 | 233.1 | 300.9 |
+| PaDNet [29] | 96.5 | 170.2 | 185.8 | 278.3 |
+| Bayesian Loss [17] | 88.7 | 154.8 | 229.3 | 308.2 |
+| Ours | 86.6 | 152.2 | 184.0 | 265.8 |
+
+# 4.4 Ablation Studies
+
+In this section, we perform ablation studies on ShanghaiTech dataset and demonstrate the roles of several modules in our approach.
+
+Effect of Regression Target. We first analyze the effects of different regression targets. As shown in Table 3, the LCM we introduce performs better than the density map, with a $4.74\%$ improvement in MAE and a $4.06\%$ improvement in MSE on Part A, and an $8.47\%$ improvement in MAE and a $6.18\%$ improvement in MSE on Part B. As shown in Fig. 4, LCM has more stable and lower MAE & MSE testing curves.
+
+Table 3. A quantitative comparison between two different regression targets, LCM and density map, on the testing datasets
+
+| Target | Part A MAE | Part A MSE | Part B MAE | Part B MSE |
+| --- | --- | --- | --- | --- |
+| density map | 72.98 | 114.89 | 9.79 | 14.40 |
+| local counting map | 69.52 | 110.23 | 8.96 | 13.51 |
+
+
+Fig.4. The curves of testing loss for the different regression targets, LCM and density map. LCM has lower errors and smoother convergence curves in both MAE and MSE than the density map
+
+
+
+It indicates that LCM alleviates the inconsistency between the training target and the evaluation criteria, which brings a performance improvement. Both targets adopt VGG16 as the backbone network without any other modules.
+
+Effect of Each Module. To validate the effectiveness of the individual modules, we train our model with four different combinations: 1) VGG16+LCM (Baseline); 2) MRM; 3) MRM+ASIM; 4) MRM+ASIM+SAM. As shown in Table 4, MRM improves the MAE from 69.52 to 65.24 on Part A and from 8.96 to 7.79 on Part B, compared with our baseline of direct LCM regression. With ASIM, the MAE improves from 65.24 to 63.85 on Part A and from 7.79 to 7.56 on Part B. With SAM, the MAE further improves from 63.85 to 61.59 on Part A and from 7.56 to 7.02 on Part B. The combination MRM+ASIM+SAM achieves the best performance: 61.59 MAE and 98.36 MSE on Part A, and 7.02 MAE and 11.00 MSE on Part B.
+
+Effect of Local Patch Size. We analyze the effects of different local patch sizes on the regression results with MRM. As shown in Table 5, the performance gradually improves as the local patch size increases, and then slightly drops at the $128 \times 128$ patch size. Our method achieves the best performance with a $64 \times 64$ patch size on both Part A and Part B. When the local patch size is too small, the head information that a local patch can represent is too limited, and it is difficult to map weak features to the counting value. When the local patch size is $1 \times 1$ , the regression target degenerates from LCM to the density map. When the local
+
+Table 4. Ablation study on different combinations of modules including MRM, ASIM and SAM in the regression framework
+
+| Module | Part A MAE | Part A MSE | Part B MAE | Part B MSE |
+| --- | --- | --- | --- | --- |
+| LCM | 69.52 | 110.23 | 8.96 | 13.51 |
+| MRM | 65.24 | 104.81 | 7.79 | 12.55 |
+| MRM+ASIM | 63.85 | 102.48 | 7.56 | 11.98 |
+| MRM+ASIM+SAM | 61.59 | 98.36 | 7.02 | 11.00 |
+
+Table 5. The effects of different local patch sizes with MRM module
+
+| Size | Part A MAE | Part A MSE | Part B MAE | Part B MSE |
+| --- | --- | --- | --- | --- |
+| 16 × 16 | 70.45 | 114.12 | 9.41 | 13.93 |
+| 32 × 32 | 69.28 | 109.24 | 8.68 | 13.44 |
+| 64 × 64 | 65.24 | 104.81 | 7.79 | 12.55 |
+| 128 × 128 | 67.73 | 105.15 | 7.93 | 12.78 |
+
+
+Fig.5. The left figure shows the MAE & MSE performance for different mixture numbers on ShanghaiTech Part A. The right one shows the relationship between the number of mixtures and the number of model parameters
+
+
+
+patch size is too large, the counting regression range will also expand, making it difficult to perform fine and accurate estimation.
+
+Effect of Mixture Number $K$ . We measure the performance of the adaptive mixture regression network with different mixture numbers $K$ . As shown in Fig. 5, the testing error first drops and then slightly rises as $K$ increases. On the one hand, a smaller $K$ (e.g., $K = 1$ ) means a single division and involves only a coarse regression on the local patch. On the other hand, a larger $K$ (e.g., $K = 5$ ) means multiple divisions; dividing each interval into overly small steps, such as 0.1 or 0.01, is clearly unreasonable. The relationship between the number of mixtures and the model parameters is shown in Fig. 5. To achieve a proper balance between accuracy and computational complexity, we take $K = 3$ as the mixture number in our experiments.
+
+Fig.6. From top to bottom, we exhibit several example images with different densities, from sparse and medium to dense. The columns show, from left to right: the input image, the ground-truth LCM, the LCM generated with the baseline model (VGG16+LCM), and the LCM generated with our proposed regression framework.
+
+# 5 Conclusion
+
+In this paper, we introduce a new learning target named the local counting map, and show its feasibility and advantages for local counting regression. Meanwhile, we propose an adaptive mixture regression framework in a coarse-to-fine manner. It brings marked improvements in counting accuracy and in the stability of the training phase, and achieves state-of-the-art performance on several authoritative datasets. In the future, we will explore better ways of extracting context and multi-scale information from different convolutional layers. Additionally, we will explore other forms of locally supervised learning approaches to further improve crowd counting performance.
+
+# References
+
+1. Babu Sam, D., Sajjan, N.N., Venkatesh Babu, R., Srinivasan, M.: Divide and Grow: Capturing huge diversity in crowd images with incrementally growing CNN. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2018)
+2. Babu Sam, D., Surya, S., Venkatesh Babu, R.: Switching convolutional neural network for crowd counting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
+3. Cao, X., Wang, Z., Zhao, Y., Su, F.: Scale aggregation network for accurate and efficient crowd counting. In: Proceedings of the European Conference on Computer Vision (2018)
+4. Chattopadhyay, P., Vedantam, R., Selvaraju, R.R., Batra, D., Parikh, D.: Counting everyday objects in everyday scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
+5. Chen, K., Loy, C.C., Gong, S., Xiang, T.: Feature mining for localised crowd counting. In: Proceedings of the British Machine Vision Conference (2012)
+6. Cheng, Z.Q., Li, J.X., Dai, Q., Wu, X., Hauptmann, A.G.: Learning spatial awareness to improve crowd counting. In: Proceedings of the International Conference on Computer Vision (2019)
+7. Idrees, H., Saleemi, I., Seibert, C., Shah, M.: Multi-source multi-scale counting in extremely dense crowd images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013)
+8. Idrees, H., Tayyab, M., Athrey, K., Zhang, D., Al-Maadeed, S., Rajpoot, N., Shah, M.: Composition loss for counting, density map estimation and localization in dense crowds. In: Proceedings of the European Conference on Computer Vision (2018)
+9. Jiang, X., Xiao, Z., Zhang, B., Zhen, X., Cao, X., Doermann, D., Shao, L.: Crowd counting and density estimation by trellis encoder-decoder networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
+10. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+11. Lempitsky, V., Zisserman, A.: Learning to count objects in images. In: Proceedings of the Conference and Workshop on Neural Information Processing Systems (2010)
+12. Li, J., Liang, X., Shen, S., Xu, T., Feng, J., Yan, S.: Scale-aware Fast R-CNN for pedestrian detection. IEEE Transactions on Multimedia 20(4), 985-996 (2017)
+13. Li, Y., Zhang, X., Chen, D.: CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
+14. Liu, L., Qiu, Z., Li, G., Liu, S., Ouyang, W., Lin, L.: Crowd counting with deep structured scale integration network. In: Proceedings of the IEEE International Conference on Computer Vision (2019)
+15. Liu, W., Liao, S., Ren, W., Hu, W., Yu, Y.: High-level semantic feature detection: A new perspective for pedestrian detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
+16. Lu, H., Cao, Z., Xiao, Y., Zhuang, B., Shen, C.: TasselNet: Counting maize tassels in the wild via local counts regression network. Plant Methods 13(1), 79 (2017)
+17. Ma, Z., Wei, X., Hong, X., Gong, Y.: Bayesian loss for crowd count estimation with point supervision. In: Proceedings of the International Conference on Computer Vision (2019)
+
+18. Mao, J., Xiao, T., Jiang, Y., Cao, Z.: What can help pedestrian detection? In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
+19. Paul Cohen, J., Boucher, G., Glastonbury, C.A., Lo, H.Z., Bengio, Y.: Count-ception: Counting by fully convolutional redundant counting. In: Proceedings of the International Conference on Computer Vision (2017)
+20. Sam, D.B., Babu, R.V.: Top-down feedback for crowd counting convolutional neural network. In: Thirty-second AAAI conference on artificial intelligence (2018)
+21. Shang, C., Ai, H., Bai, B.: End-to-end crowd counting via joint learning local and global count. In: Proceedings of the International Conference on Image Processing (2016)
+22. Shi, M., Yang, Z., Xu, C., Chen, Q.: Revisiting perspective information for efficient crowd counting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
+23. Shi, Z., Mettes, P., Snoek, C.G.M.: Counting with focus for free. In: Proceedings of the International Conference on Computer Vision (2019)
+24. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+25. Sindagi, V.A., Patel, V.M.: Generating high-quality crowd density maps using contextual pyramid CNNs. In: Proceedings of the International Conference on Computer Vision (2017)
+26. Sindagi, V.A., Patel, V.M.: HA-CNN: Hierarchical attention-based crowd counting network. IEEE Transactions on Image Processing 29, 323-335 (2019)
+27. Sindagi, V.A., Patel, V.M.: Multi-level bottom-top and top-bottom feature fusion for crowd counting. In: Proceedings of the IEEE International Conference on Computer Vision (2019)
+28. Stahl, T., Pintea, S.L., van Gemert, J.C.: Divide and count: Generic object counting by image divisions. IEEE Transactions on Image Processing 28(2), 1035-1044 (2018)
+29. Tian, Y., Lei, Y., Zhang, J., Wang, J.Z.: Padnet: Pan-density crowd counting. IEEE Transactions on Image Processing 29, 2714-2727 (2020)
+30. Wan, J., Chan, A.: Adaptive density map generation for crowd counting. In: Proceedings of the International Conference on Computer Vision (2019)
+31. Wang, Q., Gao, J., Lin, W., Yuan, Y.: Learning from synthetic data for crowd counting in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
+32. Xiong, H., Lu, H., Liu, C., Liu, L., Cao, Z., Shen, C.: From open set to closed set: Counting objects by spatial divide-and-conquer. In: Proceedings of the International Conference on Computer Vision (2019)
+33. Xu, C., Qiu, K., Fu, J., Bai, S., Xu, Y., Bai, X.: Learn to scale: Generating multipolar normalized density maps for crowd counting. In: Proceedings of the International Conference on Computer Vision (2019)
+34. Zhang, A., Shen, J., Xiao, Z., Zhu, F., Zhen, X., Cao, X., Shao, L.: Relational attention network for crowd counting. In: Proceedings of the IEEE International Conference on Computer Vision (2019)
+35. Zhang, C., Li, H., Wang, X., Yang, X.: Cross-scene crowd counting via deep convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
+36. Zhang, Y., Zhou, D., Chen, S., Gao, S., Ma, Y.: Single-image crowd counting via multi-column convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
\ No newline at end of file
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/images.zip b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..26b93766ff77b5fd2d66160a2456365579a49699
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d34ee53a48d685712715b223beeb05125e2200e736456d1a05fa4cbf0e9c7607
+size 593115
diff --git a/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/layout.json b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..04e346997a88d682b34e6ce3bb5d11c7f78b2674
--- /dev/null
+++ b/adaptivemixtureregressionnetworkwithlocalcountingmapforcrowdcounting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a3a90e4c696f00f5fe0b573a520be63bcb6dcc304d50313c4f0c08243ada9a2
+size 447447
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_content_list.json b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..abaa19334a208077c20f700d9f68db370fb469b8
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c7b7d2b44b1a4c8a585e8e8b01ea9eee39d8e36d2004b2f5f91546cd8918fde
+size 74458
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_model.json b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d5635126a6dc175b31a5c47484ef4d213661feb8
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30653345dd38a6f18146d17fc95d4452042c4d269f698fef94629a99a24df0b6
+size 90287
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_origin.pdf b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0ccb8021ad53bed41a605b405a3434fad3fe5ac8
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/50e61c4a-f944-4458-812b-c41d386a454c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d89ccd887dc65c90a43aa8547523ac62bd802149b6091383ba4d3ff10a8f3732
+size 1344078
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/full.md b/adaptiveobjectdetectionwithdualmultilabelprediction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f20badb42d0e948138da77356c6401eff6b4ddb
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/full.md
@@ -0,0 +1,267 @@
+# Adaptive Object Detection with Dual Multi-Label Prediction
+
+Zhen Zhao $^{1}$ , Yuhong Guo $^{1,2}$ , Haifeng Shen $^{1}$ , and Jieping Ye $^{1}$
+
+1 DiDi Chuxing
+
+2 Carleton University
+
+{alexzhaozhen,shenhaifeng,yejieping}@didiglobal.com, yuhong.guo@carleton.ca
+
+Abstract. In this paper, we propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection by exploiting multi-label object recognition as a dual auxiliary task. The model exploits multi-label prediction to reveal the object category information in each image and then uses the prediction results to perform conditional adversarial global feature alignment, such that the multimodal structure of image features can be tackled to bridge the domain divergence at the global feature level while preserving the discriminability of the features. Moreover, we introduce a prediction consistency regularization mechanism to assist object detection, which uses the multi-label prediction results as an auxiliary regularization information to ensure consistent object category discoveries between the object recognition task and the object detection task. Experiments are conducted on a few benchmark datasets and the results show the proposed model outperforms the state-of-the-art comparison methods.
+
+Keywords: cross-domain object detection, auxiliary task
+
+# 1 Introduction
+
+The success of deep learning models has led to great advances in many computer vision tasks, including image classification [35, 36, 16], image segmentation [24, 43] and object detection [11, 29, 23, 28]. The smooth deployment of deep models typically assumes a standard supervised learning setting, where a sufficient amount of labeled data is available for model training and the training and test images come from the same data source and distribution. However, in practical applications, the training and test images can come from different domains that exhibit obvious deviations. For example, Figure 1 demonstrates images from domains with different image styles, which obviously present different visual appearances and data distributions. The violation of the i.i.d. sampling principle across training and test data prevents the effective deployment of supervised learning techniques, while acquiring new labeled data in each test domain is costly and impractical. To address this problem, unsupervised domain adaptation has recently received increasing attention [10, 39, 25, 4].
+
+Unsupervised domain adaptation aims to adapt information from a label-rich source domain to learn prediction models in a target domain that only has
+
+
+
+
+Fig. 1. (a) and (b) are images from real scenes and virtual scenes respectively. It is obvious that the visual appearances of the images from different domains are very different, even if they contain the same categories of objects.
+
+
+
unlabeled instances. Although many unsupervised domain adaptation methods have been developed for the simpler image classification and segmentation tasks [10, 25, 4, 42, 37, 38], far fewer domain adaptation works address the more complex object detection task, which requires recognizing both the objects and their specific locations. The authors of [2] propose a domain adaptive Faster-RCNN model for cross-domain object detection, which employs the adversarial domain adaptation technique [10] to align cross-domain features at both the image level and the instance level to bridge data distribution gaps. This adaptive Faster-RCNN method presents promising results. However, due to the typical presence of multiple objects in each image, as shown in Figure 1, both the image-level and instance-level feature alignments can be problematic without considering the specific objects contained. The more recent work [31] addresses the problem of global (image-level) feature alignment by incorporating an additional local feature alignment under a strong-weak alignment framework for cross-domain object detection, which effectively improves the performance of the domain adaptive Faster-RCNN. Nevertheless, this work still fails to take the latent object category information into account for cross-domain feature alignment. With noisy background and various objects, a whole image can contain very complex information, and the overall features of an image can have complex multimodal structures. To learn an accurate object detector in the target domain, it is important to induce feature representations that minimize the cross-domain feature distribution gaps while preserving the cross-category feature distribution gaps.
+
In light of the problem analysis above, in this paper we propose a novel end-to-end unsupervised deep domain adaptation model, the Multi-label Conditional distribution Alignment and detection Regularization model (MCAR), for multi-object detection, where the images in the target domain are entirely unannotated. The model exploits multi-label prediction as an auxiliary dual task to reveal the object category information in each image and then uses this information as an additional input to perform conditional adversarial cross-domain feature alignment. Such a conditional feature alignment is expected to improve the discriminability of the induced features while bridging the cross-domain representation gaps to increase the transferability and domain invariance of the features. Moreover, as object recognition is typically easier to solve and can yield higher accuracy than the more complex object detection task, we introduce a consistency regularization mechanism to assist object detection, which uses the multi-label prediction results as auxiliary regularization information for the object detection part to ensure consistent object category discoveries between the object recognition task and the object detection task.
+
+The contribution of this work can be summarized as follows: (1) This is the first work that exploits multi-label prediction as an auxiliary dual task for the multi-object detection task. (2) We deploy a novel multi-label conditional adversarial cross-domain feature alignment methodology to bridge domain divergence while preserving the discriminability of the features. (3) We introduce a novel prediction consistency regularization mechanism to improve the detection accuracy. (4) We conduct extensive experiments on multiple adaptive multi-object detection tasks by comparing the proposed model with existing methods, and demonstrate effective empirical results for the proposed model.
+
+# 2 Related Work
+
Object Detection. Detection models have benefited from using advanced convolutional neural networks as feature extractors. Many widely used detection methods are two-stage methods based on regions of interest (ROIs) [12, 11, 29]. The RCNN in [12] is the first detection model that deploys ROIs for object detection: it extracts features independently from each region of interest in the image, instead of using the sliding windows and manually designed features of traditional object detection methods. Later, the author of [11] proposed the Fast-RCNN detection model, which adopts an ROI pooling operation to share the convolution layers among all ROIs and improve detection speed and accuracy. The work in [29] made further improvements and proposed the Faster-RCNN, which combines a Region Proposal Network (RPN) with Fast-RCNN to replace selective search and further improve detection performance. Faster-RCNN provides a foundation for many subsequent research studies [23, 6, 21, 15, 28]. In this work, as in many related unsupervised domain adaptation methods, the widely used two-stage Faster-RCNN is adopted as the backbone detection model.
+
Unsupervised Domain Adaptation. Unsupervised domain adaptation has attracted a lot of attention in the computer vision research community and made great progress [10, 30, 25, 20, 7, 33]. The main idea employed in these works is to learn feature representations that align distributions across domains. For example, the work in [10] adopts the principle of generative adversarial networks (GANs) [14] through a gradient reversal layer (GRL) [9] to achieve cross-domain feature alignment. The work in [25] further extends adversarial adaptation into conditional adversarial domain adaptation by taking the classifier's prediction into account. The works in [30, 3] use image generation to realize cross-domain feature transformation and align the source and target domains. Moreover, some other works adopt distance metric learning methods, such as asymmetric metric learning [20], maximum mean discrepancy (MMD) minimization [7] and Wasserstein distance minimization [33], to achieve domain alignment. Nevertheless, these studies focus on the simpler image classification and segmentation tasks.
+
Adaptive Object Detection. Recently, domain adaptation for object detection has started to draw attention. The work in [2] proposes an adaptive Faster-RCNN method that uses adversarial gradient reversal to achieve image-level and instance-level feature alignment for adaptive cross-domain object detection. The work in [18] adopts image transformation and exploits pseudo labels to realize weakly supervised cross-domain detection. The work in [19] leverages multi-style image generation between multiple domains to achieve cross-domain object detection. The authors of [31] propose a strong and weak alignment of local and global features to improve cross-domain object detection performance. [44] focuses on relevant areas for selective cross-domain alignment. [17] adopts hierarchical domain feature alignment while adding a scale reduction module and a weighted gradient reversal layer to achieve domain invariance. [1] advances the Mean Teacher paradigm with object relations for cross-domain detection. [34] uses a gradient-detach based multi-level feature alignment strategy for cross-domain detection. [40] adopts multi-level feature adversaries to achieve domain adaptation. Nevertheless, these methods are limited to cross-domain feature alignment, and fail to take the latent object category information into account when performing feature alignment. Our proposed model employs multi-label object recognition as an auxiliary task and uses it to achieve conditional feature alignment and detection regularization.
+
+# 3 Method
+
In this section, we present the proposed Multi-label Conditional distribution Alignment and detection Regularization model (MCAR) for cross-domain adaptive object detection. We assume there are two domains from different sources and with different distributions. The source domain is fully annotated for object detection and the target domain is entirely unannotated. Let $X_{s} = \{(x_{i}^{s},\mathbf{b}_{i}^{s},\mathbf{c}_{i}^{s})\}_{i = 1}^{n_{s}}$ denote the annotated images from the source domain, where $x_{i}^{s}$ denotes the $i$-th image, and $\mathbf{b}_i^s$ and $\mathbf{c}_i^s$ denote the bounding boxes' coordinates and the category labels of the corresponding objects contained in the image, respectively. Let $X_{t} = \{x_{i}^{t}\}_{i = 1}^{n_{t}}$ denote the unannotated images from the target domain. We assume in total $K$ classes of objects are present in images of both the source and target domains. We aim to train an object detection model by exploiting the
+
+
+Fig. 2. The structure of the proposed MCAR model. Conditional adversarial global feature alignment is conducted through a domain discriminator by using multi-label prediction results as object category input. Meanwhile, multi-label prediction results are also used to provide a prediction consistency regularization mechanism on object detection after the RPN.
+
+available data from both domains such that the model can have good detection performance in the target domain.
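The setting above can be made concrete with a small sketch of the two domains' data. This is purely illustrative (the class and field names are ours, not the authors'): each source sample carries boxes and labels, while a target sample carries only the image.

```python
# Illustrative only: one way to represent the Sec. 3 notation in code.
# "SourceSample"/"TargetSample" are hypothetical names, not from the paper.
from dataclasses import dataclass
from typing import List, Optional, Any

@dataclass
class SourceSample:
    image: Any                 # x_i^s
    boxes: List[List[float]]   # b_i^s: bounding-box coordinates, e.g. [x1, y1, x2, y2]
    labels: List[int]          # c_i^s: category label of each box, in {0, ..., K-1}

@dataclass
class TargetSample:
    image: Any                 # x_i^t: no boxes or labels are available
```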
+
+The main idea of the proposed MCAR model is to exploit multi-label prediction (for multi-object recognition) as an auxiliary task and use it to perform both conditional adversarial cross-domain feature alignment and prediction consistency regularization for the target object detection task. This end-to-end deep learning model adopts the widely used Faster-RCNN as the backbone detection network. Its structure is presented in Figure 2. Following this structure, we present the model in detail below.
+
+# 3.1 Multi-Label Prediction
+
+The major difference between object recognition and object detection lies in that the former task only needs to recognize the presence of any object category in the given image, while the latter task needs to identify each specific object and its location in the image. The cross-domain divergence in image features that impacts the object recognition task can also consequently degrade the detection performance, since it will affect the region proposal network and the regional local object classification. Therefore we propose to deploy a simpler task of object recognition to help extract suitable image-level features that can bridge the distribution gap between the source and target domains, while being discriminative for recognizing objects.
+
In particular, we treat the object recognition task as a multi-label prediction problem [41, 13]. It takes the global image-level features produced by the feature extraction network $F$ of the Faster-RCNN model as input, and predicts the presence of $K$ object categories using $K$ binary classifier networks, $M_{1},\dots ,M_{K}$. These classifiers can be learned on the annotated images in the source domain, where the global object category label indicator vector $\mathbf{y}_i^s\in \{0,1\}^K$ for the $i$-th image can be gathered from its bounding boxes' labels $\mathbf{c}_i^s$ through a fixed transformation function $\varphi :\mathbf{c}_i^s\to \mathbf{y}_i^s$, which simply finds all the object categories existing in $\mathbf{c}_i^s$ and represents their presence in $\mathbf{y}_i^s$. The multi-label classifiers can then be learned by minimizing the following cross-entropy loss:
+
$$
\mathcal{L}_{multi} = -\frac{1}{n_{s}}\sum_{i = 1}^{n_{s}}\left[{\mathbf{y}_{i}^{s}}^{\top}\log\left(\mathbf{p}_{i}^{s}\right) + \left(1 - \mathbf{y}_{i}^{s}\right)^{\top}\log\left(1 - \mathbf{p}_{i}^{s}\right)\right] \tag{1}
$$
+
+where each $k$ -th entry of the prediction output vector $\mathbf{p}_i^s$ is produced from the $k$ -th binary classifier:
+
$$
\mathbf{p}_{ik}^{s} = M_{k}\left(F\left(x_{i}^{s}\right)\right) \tag{2}
$$
+
+which indicates the probability of the presence of objects from the $k$ -th class.
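The multi-label branch of Eqs. (1) and (2) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: `labels_to_indicator` plays the role of $\varphi$, `MultiLabelHead` stands in for the $K$ binary classifiers $M_1,\dots,M_K$, and we assume the global feature map from $F$ has already been pooled into a vector.

```python
import torch
import torch.nn as nn

def labels_to_indicator(box_labels, num_classes):
    """The mapping phi: gather the per-box category labels c_i into a
    global 0/1 presence vector y_i in {0,1}^K."""
    y = torch.zeros(num_classes)
    y[torch.unique(box_labels)] = 1.0
    return y

class MultiLabelHead(nn.Module):
    """K binary classifiers M_1, ..., M_K on the (pooled) global feature
    F(x); each output entry is the class-presence probability p_ik of Eq. (2)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, global_feat):
        return torch.sigmoid(self.fc(global_feat))

def multi_label_loss(p, y):
    """Multi-label cross-entropy of Eq. (1): sum over the K classes,
    average over the source images."""
    eps = 1e-7  # numerical guard for the logs
    return -(y * torch.log(p + eps)
             + (1 - y) * torch.log(1 - p + eps)).sum(dim=1).mean()
```

In practice `torch.nn.BCEWithLogitsLoss` computes the same quantity more stably from the pre-sigmoid scores.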
+
The multi-label classifiers work on the global features extracted before the RPN of the Faster-RCNN. For Faster-RCNN based object detection, these global features are used through the RPN to extract region proposals, on which object classification and bounding box regression are then performed. In the source domain, supervision information such as bounding boxes and object labels is provided for training the detector, while in the target domain, detection relies purely on the extracted global features and the detection model parameters (for the RPN, region classifiers and regressors) obtained in the source domain. Hence it is very important to bridge the domain gap at the global feature level. Moreover, image features that lead to good global object recognition performance are also expected to be informative for the local object classification on proposed regions. Therefore we exploit multi-label prediction for global feature alignment and regional object prediction regularization.
+
+# 3.2 Conditional Adversarial Feature Alignment
+
The popular generative adversarial network (GAN) [14] has shown that two distributions can be aligned by using a discriminator as an adversary to play a minimax two-player game. Following the same principle, a conditional adversary is designed to take label category information into account. It has been suggested in [25, 27] that the cross-covariance of the predicted category information and the global image features can be helpful for avoiding partial alignment and achieving multimodal feature distribution alignment. We propose to integrate the multi-label prediction results together with the global image features extracted by $F$ to perform conditional adversarial feature alignment at the global image level. The key component network introduced is the domain discriminator $D$, which predicts the domain of the input image instance, with label 1 indicating the source domain and 0 indicating the target domain. As shown in Figure 2, the discriminator consists of a convolution filter layer $f$, which reduces the dimension of the input features, and a fully connected layer $FC$, which integrates the inputs to perform classification. It takes the features $F(x_{i})$ and the multi-label prediction probability vector $\mathbf{p}_i$ as input, and uses a softmax activation function to produce probabilistic prediction output. For the conditional adversarial training, we adopt a focal loss [22, 31], which uses the prediction confidence deficiency score to weight each instance in order to give more weight to hard-to-classify examples. The loss of conditional adversarial training, $\mathcal{L}_{adv}$, is as below:
+
$$
\min_{F}\max_{D}\quad \mathcal{L}_{adv} = -\frac{1}{2}\left(\mathcal{L}_{adv}^{s} + \mathcal{L}_{adv}^{t}\right) \tag{3}
$$

$$
\mathcal{L}_{adv}^{s} = -\frac{1}{n_{s}}\sum_{i = 1}^{n_{s}}\left(1 - D\left(F(x_{i}^{s}), \mathbf{p}_{i}^{s}\right)\right)^{\gamma}\log D\left(F(x_{i}^{s}), \mathbf{p}_{i}^{s}\right)
$$

$$
\mathcal{L}_{adv}^{t} = -\frac{1}{n_{t}}\sum_{i = 1}^{n_{t}} D\left(F(x_{i}^{t}), \mathbf{p}_{i}^{t}\right)^{\gamma}\log\left(1 - D\left(F(x_{i}^{t}), \mathbf{p}_{i}^{t}\right)\right)
$$
+
where $\gamma$ is a modulation factor that controls how much to focus on hard-to-classify examples; the global features $F(x_{i})$ and the multi-label prediction probability vector $\mathbf{p}_i$ are integrated through a multi-linear mapping function such that $D(F(x_i),\mathbf{p}_i) = FC(f(F(x_i))\otimes \mathbf{p}_i)$. With this adversarial loss, the feature extractor $F$ is adjusted to try to confuse the domain discriminator $D$, while $D$ aims to maximally separate the two domains.
+
+This multi-label prediction conditioned adversarial feature alignment is expected to bridge the domain distribution gaps while preserving the discriminability for object recognition, which will improve the adaptation of the consequent region proposal, object classification on each proposed region and its location identification in the target domain.
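The conditional discriminator and the focal adversarial weighting can be sketched as below. This is an illustrative PyTorch snippet under stated assumptions, not the authors' code: the conv reduction layer $f$ is replaced by a linear layer on a pooled feature for brevity, and the multi-linear map $f(F(x))\otimes\mathbf{p}$ is realized as a flattened outer product.

```python
import torch
import torch.nn as nn

class CondDomainDiscriminator(nn.Module):
    """D(F(x), p) = FC(f(F(x)) (x) p): the reduced feature and the
    multi-label prediction p are joined by an outer (multi-linear) product."""
    def __init__(self, feat_dim, num_classes, reduced_dim=64):
        super().__init__()
        self.f = nn.Linear(feat_dim, reduced_dim)         # stands in for the conv filter layer f
        self.FC = nn.Linear(reduced_dim * num_classes, 1)

    def forward(self, global_feat, p):
        h = self.f(global_feat)                            # (B, r)
        joint = torch.bmm(h.unsqueeze(2), p.unsqueeze(1))  # (B, r, K) outer product
        return torch.sigmoid(self.FC(joint.flatten(1)))    # P(domain = source)

def focal_adv_losses(d_src, d_tgt, gamma=5.0):
    """L_adv^s and L_adv^t of Eq. (3): the (1-d)^gamma and d^gamma factors
    up-weight hard-to-classify examples."""
    eps = 1e-7
    l_s = -((1 - d_src) ** gamma * torch.log(d_src + eps)).mean()
    l_t = -(d_tgt ** gamma * torch.log(1 - d_tgt + eps)).mean()
    return l_s, l_t
```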
+
+# 3.3 Category Prediction based Regularization
+
The detection task involves recognizing both the objects and their locations, which is relatively more difficult than object recognition [8]. The multi-label classifiers we apply can thus produce more accurate recognition results, since in the detection task region proposal mistakes accumulate into the object classification on the proposed regions. Based on this observation, we propose a novel category prediction consistency regularization mechanism for object detection by exploiting the multi-label prediction results.
+
Assume $N$ region proposals are generated through the region proposal network (RPN) for an input image $x$. Each proposal will be classified into one of the $K$ object classes using an object classifier $C$, while its location coordinates will be produced using a regressor $R$. The multi-class object classifier produces a length-$K$ prediction vector $\hat{\mathbf{q}}$ on each proposal that indicates the probability of the proposed region belonging to each of the $K$ object classes. The object predictions on the total $N$ proposals form a prediction matrix $Q \in [0,1]^{K \times N}$. We can then compute an overall multi-object prediction probability vector $\mathbf{q}$ by taking the row-wise maximum over $Q$, such that $\mathbf{q}_k = \max(Q(k,:))$, and use $\mathbf{q}_k$ as the prediction probability of the image $x$ containing the $k$-th object category. To enforce consistency between the prediction produced by the detector and the prediction produced by the multi-label object recognition, we propose to minimize the $KL$ divergence between their prediction probability vectors $\mathbf{p}$ and $\mathbf{q}$ after renormalizing each vector with the softmax function. As $KL$ divergence is an asymmetric measure, we define the consistency regularization loss as:
+
$$
\mathcal{L}_{kl} = \mathcal{L}_{kl}^{s} + \mathcal{L}_{kl}^{t} \tag{4}
$$

$$
\mathcal{L}_{kl}^{s} = \frac{1}{2n_{s}}\sum_{i = 1}^{n_{s}}\left(KL\left(\mathbf{p}_{i}^{s}, \mathbf{q}_{i}^{s}\right) + KL\left(\mathbf{q}_{i}^{s}, \mathbf{p}_{i}^{s}\right)\right) \tag{5}
$$

$$
\mathcal{L}_{kl}^{t} = \frac{1}{2n_{t}}\sum_{i = 1}^{n_{t}}\left(KL\left(\mathbf{p}_{i}^{t}, \mathbf{q}_{i}^{t}\right) + KL\left(\mathbf{q}_{i}^{t}, \mathbf{p}_{i}^{t}\right)\right) \tag{6}
$$
+
With this regularization loss, we expect the multi-label prediction results to assist object detection through unified mutual learning.
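The row-wise maximum over $Q$ and the symmetrized KL of Eqs. (4)-(6) can be sketched as follows; a minimal illustration with hypothetical function names, assuming $\mathbf{p}$ and $\mathbf{q}$ are batched score vectors that get renormalized with softmax as described above.

```python
import torch
import torch.nn.functional as F  # torch functional, not the feature extractor F

def detector_category_vector(Q):
    """q_k = max(Q(k, :)): row-wise maximum over the K x N matrix of
    per-proposal class probabilities."""
    return Q.max(dim=1).values

def symmetric_kl_consistency(p, q):
    """Symmetrized KL of Eqs. (4)-(6) between the multi-label prediction p
    and the detector-derived vector q, after softmax renormalization."""
    p_n = F.softmax(p, dim=-1)
    q_n = F.softmax(q, dim=-1)
    kl = lambda a, b: (a * (a / b).log()).sum(dim=-1)
    return 0.5 * (kl(p_n, q_n) + kl(q_n, p_n)).mean()
```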
+
+# 3.4 Overall End-to-End Learning
+
+The detection loss of the base Faster-RCNN model, denoted as $\mathcal{L}_{det}$ , is computed on the annotated source domain data under supervised classification and regression. It has two components, the proposal classification loss and the bounding box regression loss. We combine the detection loss, the multi-label prediction loss, the conditional adversarial feature alignment loss, and the prediction consistency regularization loss together for end-to-end deep learning. The total loss can be written as:
+
$$
\left\{ \begin{array}{l} \mathcal{L}_{all} = \mathcal{L}_{det} + \lambda \mathcal{L}_{adv} + \mu \mathcal{L}_{multi} + \varepsilon \mathcal{L}_{kl} \\ \min_{F}\max_{D}\quad \mathcal{L}_{all} \end{array} \right. \tag{7}
$$
+
where $\lambda$, $\mu$, and $\varepsilon$ are trade-off parameters that balance the multiple loss terms. We use the SGD optimization algorithm to perform training, while the GRL [9] is adopted to implement the gradient sign flip for the domain discriminator part.
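The combined objective and the gradient-reversal trick can be sketched as below; an illustrative snippet, not the authors' code, with the default trade-off weights taken from the experimental setup (Sec. 4.1).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (GRL [9]): identity in the forward pass,
    sign-flipped gradient in the backward pass, so a single SGD
    minimization of L_all realizes the min-max game of Eq. (7)."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def total_loss(l_det, l_adv, l_multi, l_kl, lam=0.5, mu=0.01, eps=0.1):
    """L_all = L_det + lambda*L_adv + mu*L_multi + epsilon*L_kl (Eq. (7))."""
    return l_det + lam * l_adv + mu * l_multi + eps * l_kl
```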
+
+# 4 Experiments
+
We conducted experiments with multiple cross-domain multi-object detection tasks under different adaptation scenarios: (1) domain adaptation from real to virtual image scenarios, where we used the cross-domain detection tasks from PASCAL VOC [8] to Watercolor2K [18] and Comic2K [18] respectively; (2) domain adaptation from normal/clear images to foggy image scenarios, where we used object detection tasks that adapt from Cityscapes [5] to Foggy Cityscapes [32]. In each adaptive object detection task, the images in the source domain are fully annotated and the images in the target domain are entirely unannotated. We present our experimental results and discussions in this section.
+
+Table 1. Test results of domain adaptation for object detection from PASCAL VOC to Watercolor in terms of mean average precision (\%). MC and PR indicate Multilabel-Conditional adversary and Prediction based Regularization, respectively.
+
| Method | MC | PR | bike | bird | car | cat | dog | person | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source-only | | | 68.8 | 46.8 | 37.2 | 32.7 | 21.3 | 60.7 | 44.6 |
| BDC-Faster [31] | | | 68.6 | 48.3 | 47.2 | 26.5 | 21.7 | 60.5 | 45.5 |
| DA-Faster [2] | | | 75.2 | 40.6 | 48.0 | 31.5 | 20.6 | 60.0 | 46.0 |
| SW-DA [31] | | | 82.3 | 55.9 | 46.5 | 32.7 | 35.5 | 66.7 | 53.3 |
| SCL [34] | | | 82.2 | 55.1 | 51.8 | 39.6 | 38.4 | 64.0 | 55.2 |
| MCAR (Ours) | ✓ | | 92.5 | 52.2 | 43.9 | 46.5 | 28.8 | 62.5 | 54.4 |
| MCAR (Ours) | ✓ | ✓ | 87.9 | 52.1 | 51.8 | 41.6 | 33.8 | 68.8 | 56.0 |
| Train-on-Target | | | 83.6 | 59.4 | 50.7 | 43.7 | 39.5 | 74.5 | 58.6 |
+
Table 2. Test results of domain adaptation for object detection from PASCAL VOC to Comic. The definitions of MC and PR are the same as in Table 1.
+
| Method | MC | PR | bike | bird | car | cat | dog | person | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source-only | | | 32.5 | 12.0 | 21.1 | 10.4 | 12.4 | 29.9 | 19.7 |
| DA-Faster | | | 31.1 | 10.3 | 15.5 | 12.4 | 19.3 | 39.0 | 21.2 |
| SW-DA | | | 36.4 | 21.8 | 29.8 | 15.1 | 23.5 | 49.6 | 29.4 |
| MCAR (Ours) | ✓ | | 40.9 | 22.5 | 30.3 | 23.7 | 24.7 | 53.6 | 32.6 |
| MCAR (Ours) | ✓ | ✓ | 47.9 | 20.5 | 37.4 | 20.6 | 24.5 | 50.2 | 33.5 |
+
+# 4.1 Implementation Details
+
In the experiments, we followed the setting of [31] by using the Faster-RCNN as the backbone detection network, pretraining the model weights on ImageNet, and resizing images so that the shortest side is 600 pixels. We set the number of training epochs to 25, and set $\lambda$, $\mu$, $\varepsilon$, and $\gamma$ to 0.5, 0.01, 0.1, and 5, respectively. The momentum is set to 0.9 and the weight decay to 0.0005. For all experiments, we evaluated the different methods using mean average precision (mAP) with an IoU threshold of 0.5. By default, in the multi-label learning, all the convolutional layers have 3x3 convolution kernels and 512 channels. The convolution layer in the conditional adversary learning also has a 3x3 convolution kernel and 512 channels. These convolution parameters can be adjusted to suit different tasks, but all our experiments adopt the default setting, which yields good results.
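For reference, the hyperparameters stated above can be collected into a single configuration sketch; the key names below are illustrative, not taken from the authors' code.

```python
# Training configuration reported in Sec. 4.1 (key names are our own).
CONFIG = {
    "backbone": "Faster-RCNN",     # ImageNet-pretrained backbone weights
    "shortest_side": 600,          # images resized so the shortest side is 600 px
    "epochs": 25,
    "lambda": 0.5,                 # weight of L_adv
    "mu": 0.01,                    # weight of L_multi
    "epsilon": 0.1,                # weight of L_kl
    "gamma": 5,                    # focal-loss modulation factor
    "momentum": 0.9,
    "weight_decay": 0.0005,
    "map_iou_threshold": 0.5,      # mAP evaluated at IoU 0.5
    "conv_kernel": 3,              # default 3x3 kernels with 512 channels
    "conv_channels": 512,
}
```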
+
+# 4.2 Domain Adaptation from Real to Virtual Scenes
+
+In this set of experiments, we used the PASCAL VOC [8] dataset as the source domain, and used the Watercolor2k and Comic2k [18] as the target domains.
+
+Table 3. Test results of domain adaptation for object detection from Cityscapes to Foggy Cityscapes in terms of mAP (\%). MC and PR are same as in Table 1.
+
| Method | MC | PR | person | rider | car | truck | bus | train | motorbike | bicycle | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source-only | | | 25.1 | 32.7 | 31.0 | 12.5 | 23.9 | 9.1 | 23.7 | 29.1 | 23.4 |
| BDC-Faster [31] | | | 26.4 | 37.2 | 42.4 | 21.2 | 29.2 | 12.3 | 22.6 | 28.9 | 27.5 |
| DA-Faster [2] | | | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.2 | 20.0 | 27.1 | 27.6 |
| SC-DA [44] | | | 33.5 | 38.0 | 48.5 | 26.5 | 39.0 | 23.3 | 28.0 | 33.6 | 33.8 |
| MAF [17] | | | 28.2 | 39.5 | 43.9 | 23.8 | 39.9 | 33.3 | 29.2 | 33.9 | 34.0 |
| SW-DA [31] | | | 36.2 | 35.3 | 43.5 | 30.0 | 29.9 | 42.3 | 32.6 | 24.5 | 34.3 |
| DD-MRL [19] | | | 30.8 | 40.5 | 44.3 | 27.2 | 38.4 | 34.5 | 28.4 | 32.2 | 34.6 |
| MTOR [1] | | | 30.6 | 41.4 | 44.0 | 21.9 | 38.6 | 40.6 | 28.3 | 35.6 | 35.1 |
| Dense-DA [40] | | | 33.2 | 44.2 | 44.8 | 28.2 | 41.8 | 28.7 | 30.5 | 36.5 | 36.0 |
| SCL [34] | | | 31.6 | 44.0 | 44.8 | 30.4 | 41.8 | 40.7 | 33.6 | 36.2 | 37.9 |
| MCAR (Ours) | ✓ | | 31.2 | 42.5 | 43.8 | 32.3 | 41.1 | 33.0 | 32.4 | 36.5 | 36.6 |
| MCAR (Ours) | ✓ | ✓ | 32.0 | 42.1 | 43.9 | 31.3 | 44.1 | 43.4 | 37.4 | 36.6 | 38.8 |
| Train-on-Target | | | 50.0 | 36.2 | 49.7 | 34.7 | 33.2 | 45.9 | 37.4 | 35.6 | 40.3 |
+
PASCAL VOC contains realistic images, while Watercolor2k and Comic2k contain virtual scene images, so there are significant differences between the source and target domains. The training set of PASCAL VOC (trainval of PASCAL VOC 2007 and PASCAL VOC 2012) includes 20 different object labels and a total of 16,551 images. Watercolor2k and Comic2k contain 6 different classes ('bicycle', 'bird', 'car', 'cat', 'dog', 'person'), each providing 2K images split equally into training and test sets. These 6 categories are included in the 20 categories of PASCAL VOC. We used the 1K training set in each target domain for training the domain adaptation model, and evaluated the model and report results on the corresponding 1K test set. In this experiment, we used ResNet-101 [16] as the backbone network of the detection model.
+
PASCAL VOC to Watercolor. The test detection results yielded by adaptation from PASCAL VOC to Watercolor are reported in Table 1. Our proposed MCAR model is compared with the source-only baseline and state-of-the-art adaptive object detection methods, including BDC-Faster [31], DA-Faster [2], SW-DA [31], and SCL [34]. The Train-on-Target results, obtained by training on labeled data in the target domain, are provided as upper-bound reference values. We can see that under the same experimental conditions, our proposed method achieves the best overall result, underperforming Train-on-Target by only $2.6\%$. Compared to source-only, our method achieves a remarkable overall performance improvement of $9.8\%$. Although SW-DA [31] confirmed the validity of local and global feature alignment and showed a significant performance improvement over other methods, our method surpasses SW-DA by $2.7\%$. Meanwhile, our method also outperforms SCL [34], which relies on stacked multi-level feature alignment. The results suggest that the proposed multi-label learning based feature alignment and prediction regularization are effective.
+
+PASCAL VOC to Comic. The results of adaptation from PASCAL VOC to Comic are reported in Table 2. Again, the proposed MCAR method achieved the best adaptive detection result. It outperforms the baseline, source-only (trained on source domain data without any adaptation), by $13.8\%$ , and outperforms the best comparison method, SW-DA, by $4.1\%$ . These results again show that our model is very suitable for adaptive multi-object detection.
+
# 4.3 Adaptation from Clear to Foggy Scenes
+
In this experiment, we perform adaptive object detection from normal clear images to foggy images. We use the Cityscapes dataset as the source domain. Its images come from 27 different urban scenes, and the annotated bounding boxes are generated from the original pixel-level annotations. We use the Foggy Cityscapes dataset as the target domain. Its images are rendered from Cityscapes with a depth-based rendering that simulates fog under real road conditions. The datasets contain 8 categories: 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle' and 'bicycle'. In this experiment, we used VGG-16 [35] as the backbone of the detection model. We recorded the test results on the validation set of Foggy Cityscapes.
+
The results are reported in Table 3. We can see that the proposed MCAR method achieved the best adaptive detection result. It outperforms source-only by $15.4\%$, and outperforms the two best comparison methods, Dense-DA [40] and SCL [34], by $2.8\%$ and $0.9\%$ respectively. Moreover, it is worth noting that the performance of the proposed approach is very close to Train-on-Target; the Train-on-Target result is only $1.5\%$ higher than ours. Due to the very complex road conditions in this task, although the multi-label classifier is more capable of category judgment than the detection model, its accuracy is not much higher. Hence in this experiment, we used the combination of the multi-label category prediction and the detection-level category prediction; that is, we used $\text{softmax}(\mathbf{p} + \mathbf{q})$ as the label category information for the conditional adversarial feature alignment. This experiment presents and validates a natural variant of the proposed model.
+
+# 4.4 Ablation Study
+
+The proposed MCAR model has two major mechanisms, Multilabel-conditional adversary (MC) and Prediction based Regularization (PR), which are incorporated into the learning process through the three auxiliary loss terms in Eq.(7): the conditional adversary loss $\mathcal{L}_{adv}$ , the multi-label prediction loss $\mathcal{L}_{multi}$ , and the prediction regularization loss $\mathcal{L}_{kl}$ . The conditional adversary loss uses the multi-label prediction outputs as its conditions, and hence the two loss terms, $\mathcal{L}_{adv}$ and $\mathcal{L}_{multi}$ , together form the multilabel-conditional adversary (MC), while
+
+Table 4. The ablation study results in terms of $\mathrm{mAP}(\%)$ on the adaptive detection task of Cityscapes $\rightarrow$ Foggy Cityscapes. "w/o-adv" indicates dropping the conditional adversary loss; "uadv" indicates replacing the conditional adversary loss with an unconditional adversary loss; "w/o-PR" indicates dropping the prediction regularization loss; and "w/o-MP-PR" indicates dropping both the multilabel prediction loss and the prediction regularization loss.
+
| Method | person | rider | car | truck | bus | train | motorbike | bicycle | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MCAR | 32.0 | 42.1 | 43.9 | 31.3 | 44.1 | 43.4 | 37.4 | 36.6 | 38.8 |
| MCAR-w/o-PR | 31.2 | 42.5 | 43.8 | 32.3 | 41.1 | 33.0 | 32.4 | 36.5 | 36.6 |
| MCAR-uadv | 31.7 | 42.0 | 45.7 | 30.4 | 39.7 | 14.9 | 28.6 | 36.5 | 33.7 |
| MCAR-uadv-w/o-PR | 32.8 | 40.1 | 43.8 | 23.0 | 30.9 | 14.3 | 30.3 | 33.1 | 31.0 |
| MCAR-uadv-w/o-MP-PR | 30.5 | 43.2 | 41.4 | 21.7 | 31.4 | 13.7 | 29.8 | 32.6 | 30.5 |
| MCAR-w/o-adv | 25.0 | 34.9 | 34.2 | 13.9 | 29.9 | 10.0 | 22.5 | 30.2 | 25.1 |
+
+Table 5. Parameter sensitivity analysis on the adaptation task from PASCAL VOC to watercolor.
+
With $\lambda$ fixed at 0.5:

| $\gamma$ | 1 | 3 | 5 | 7 | 9 |
| --- | --- | --- | --- | --- | --- |
| mAP | 44.0 | 46.1 | 54.4 | 49.1 | 44.8 |

With $\gamma$ fixed at 5:

| $\lambda$ | 0.1 | 0.25 | 0.5 | 0.75 | 1 |
| --- | --- | --- | --- | --- | --- |
| mAP | 49.1 | 50.2 | 54.4 | 50.1 | 49.3 |
+
+the prediction regularization (PR) is also built on the multi-label prediction outputs through the regularization loss $\mathcal{L}_{kl}$ . To investigate the impact of these loss components, we conducted a more comprehensive ablation study on the adaptive detection task from Cityscapes to Foggy Cityscapes by comparing MCAR with its multiple variants. The variant methods and results are reported in Table 4.
+
We can see that dropping the conditional adversary loss (MCAR-w/o-adv) leads to a large performance degradation. This makes sense since the adversarial loss is the foundation for cross-domain feature alignment. By replacing the conditional adversary loss with an unconditional adversary loss, MCAR-uadv loses the multilabel-conditional adversary (MC) component, which leads to remarkable performance degradation and verifies the usefulness of the multi-label prediction based cross-domain multimodal feature alignment. Dropping the prediction regularization loss from either MCAR, which leads to MCAR-w/o-PR, or MCAR-uadv, which leads to MCAR-uadv-w/o-PR, induces additional performance degradation. This verifies the effectiveness of the prediction regularization strategy, which is built on the multi-label prediction outputs as well. Moreover, by further dropping the multi-label prediction loss from MCAR-uadv-w/o-PR, the performance of the variant MCAR-uadv-w/o-MP-PR drops slightly further. Overall, these results validate the effectiveness of the proposed MC and PR mechanisms, as well as the multiple auxiliary loss terms in the proposed learning objective.
+
+
+Fig. 3. Feature visualization results. Panels: (a) Source-only; (b) MCAR (ours). The two panels respectively show the feature distributions of the Source-only model and our model in the clear (Cityscapes) and foggy (Foggy Cityscapes) scenes. Red indicates the source domain and blue indicates the target domain.
+
+# 4.5 Further Analysis
+
+Feature visualization. On the task of adaptation from Cityscapes to Foggy Cityscapes, we used t-SNE [26] to compare the distribution of induced features between our model and the Source-only model (clear to foggy scenes). The results are shown in Figure 3. We can see that with the feature distribution obtained by the Source-only model (Figure 3(a)), the source domain and target domain are clearly separated, which shows the existence of domain divergence. By contrast, our proposed method produces features that can well confuse the domain discriminators. This suggests that our proposed model has the capacity to bridge the domain distribution divergence and induce domain-invariant features.
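A minimal sketch of this kind of check, using scikit-learn's t-SNE on hypothetical pooled backbone features (the feature arrays, their dimensions, and the domain means below are placeholders for illustration, not our model's outputs):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical image-level backbone features; in the paper these would come
# from the detector applied to Cityscapes (source) and Foggy Cityscapes (target).
rng = np.random.default_rng(0)
source_feats = rng.normal(0.0, 1.0, size=(100, 64))   # source domain
target_feats = rng.normal(1.5, 1.0, size=(100, 64))   # target domain

feats = np.vstack([source_feats, target_feats])
emb = TSNE(n_components=2, perplexity=30.0, init="pca",
           random_state=0).fit_transform(feats)

# emb[:100] / emb[100:] would then be scatter-plotted in red (source) and
# blue (target) to inspect how well the two domains are aligned.
print(emb.shape)
```

A well-aligned model should produce interleaved red and blue points rather than two separated clusters.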
+
+Parameter sensitivity analysis. We conducted a sensitivity analysis on the two hyperparameters $\lambda$ and $\gamma$ using the adaptation task from PASCAL VOC to Watercolor. $\lambda$ controls the weight of adversarial feature alignment, while $\gamma$ controls the degree of focusing on hard-to-classify examples. Other hyperparameters are set to their default values. We conducted the experiment by fixing the value of $\gamma$ to adjust $\lambda$ , and then fixing $\lambda$ to adjust $\gamma$ . Table 5 presents the results. We can see that as $\gamma$ decreases from its default value of 5, the test performance degrades, since the influence of the domain classifier on difficult samples is weakened and the contribution of easy samples is increased. When $\gamma = 1$ , the model produces the same result as the basic model, suggesting the domain regulation ability basically fails to play its role. On the other hand, a very large $\gamma$ value is not good either, as the most difficult samples will dominate. For $\lambda$ , we find that $\lambda = 0.5$ leads to the best performance. As detection is still the main task, it makes sense to have $\lambda < 1$ . When $\lambda = 0$ , the model degrades to a basic model without feature alignment. Therefore, a value in the middle is a proper choice.
+
+Qualitative results. Object detection results are suitable to be qualitatively judged through visualization. Hence we present some qualitative adaptive detection results in the target domain in Figure 4. The top row of Figure 4 presents
+
+
+Fig. 4. Qualitative results on adaptive detection. The top row presents examples of domain adaptive detection from PASCAL VOC to Watercolor. The bottom row shows examples of adaptive detection from Cityscapes to Foggy Cityscapes. The green box represents the results obtained by the detection models, and the blue box represents the ground-truth annotation.
+
+the qualitative detection results of three state-of-the-art adaptive detection methods, DA-Faster, SW-DA, and MCAR (ours), together with the ground-truth, on an image from Watercolor. We can see that both DA-Faster and SW-DA produce some false positives while failing to detect the 'dog'. Our model correctly detected both the 'person' and the 'dog'. The bottom row of Figure 4 presents the detection results of the DA methods and the ground-truth on an image from Foggy Cityscapes. The cars in the distance are obviously very blurred and difficult to detect due to the fog. DA-Faster and SW-DA fail to find these cars, while our model successfully detected them.
+
+# 5 Conclusion
+
+In this paper, we propose an unsupervised multi-object cross-domain detection method. We exploit multi-label object recognition as a dual auxiliary task to reveal the category information of images from the global features. The cross-domain feature alignment is conducted by performing conditional adversarial distribution alignment on the combined input of global features and multi-label prediction outputs. We also use the idea of mutual learning to improve the detection performance by enforcing consistent object category predictions between the multi-label prediction over global features and the object classification over detection region proposals. We conducted experiments on multiple cross-domain multi-object detection datasets. The results show the proposed model achieves state-of-the-art performance.
+
+# References
+
+1. Cai, Q., Pan, Y., Ngo, C.W., Tian, X., Duan, L., Yao, T.: Exploring object relation in mean teacher for cross-domain detection. In: CVPR (2019)
+2. Chen, Y., Li, W., Sakaridis, C., Dai, D., Van Gool, L.: Domain adaptive faster R-CNN for object detection in the wild. In: CVPR (2018)
+3. Choi, J., Kim, T., Kim, C.: Self-ensembling with gan-based data augmentation for domain adaptation in semantic segmentation. In: ICCV (2019)
+4. Cicek, S., Soatto, S.: Unsupervised domain adaptation via regularized conditional alignment. arXiv preprint arXiv:1905.10885 (2019)
+5. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: CVPR (2016)
+6. Dai, J., Li, Y., He, K., Sun, J.: R-fcn: Object detection via region-based fully convolutional networks. In: NIPS (2016)
+7. Dziugaite, G.K., Roy, D.M., Ghahramani, Z.: Training generative neural networks via maximum mean discrepancy optimization. In: UAI (2015)
+8. Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. IJCV (2010)
+9. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: ICML (2015)
+10. Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. JMLR (2016)
+11. Girshick, R.: Fast R-CNN. In: ICCV (2015)
+12. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR (2014)
+13. Gong, Y., Jia, Y., Leung, T., Toshev, A., Ioffe, S.: Deep convolutional ranking for multilabel image annotation. ICLR (2014)
+14. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: NIPS (2014)
+15. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV (2017)
+16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+17. He, Z., Zhang, L.: Multi-adversarial Faster-RCNN for unrestricted object detection. In: ICCV (2019)
+18. Inoue, N., Furuta, R., Yamasaki, T., Aizawa, K.: Cross-domain weakly-supervised object detection through progressive domain adaptation. In: CVPR (2018)
+19. Kim, T., Jeong, M., Kim, S., Choi, S., Kim, C.: Diversify and match: A domain adaptive representation learning paradigm for object detection. In: CVPR (2019)
+20. Kulis, B., Saenko, K., Darrell, T.: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In: CVPR (2011)
+21. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: CVPR (2017)
+22. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: ICCV (2017)
+23. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: ECCV (2016)
+24. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR (2015)
+
+25. Long, M., Cao, Z., Wang, J., Jordan, M.I.: Conditional adversarial domain adaptation. In: NIPS (2018)
+26. Maaten, L.v.d., Hinton, G.: Visualizing data using t-SNE. JMLR (2008)
+27. Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
+28. Redmon, J., Farhadi, A.: Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767 (2018)
+29. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: NIPS (2015)
+30. Russo, P., Carlucci, F.M., Tommasi, T., Caputo, B.: From source to target and back: symmetric bi-directional adaptive gan. In: CVPR (2018)
+31. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Strong-weak distribution alignment for adaptive object detection. In: CVPR (2019)
+32. Sakaridis, C., Dai, D., Van Gool, L.: Semantic foggy scene understanding with synthetic data. IJCV (2018)
+33. Shen, J., Qu, Y., Zhang, W., Yu, Y.: Wasserstein distance guided representation learning for domain adaptation. In: AAAI (2018)
+34. Shen, Z., Maheshwari, H., Yao, W., Savvides, M.: SCL: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv:1911.02559 (2019)
+35. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+36. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
+37. Tsai, Y.H., Hung, W.C., Schulter, S., Sohn, K., Yang, M.H., Chandraker, M.: Learning to adapt structured output space for semantic segmentation. In: CVPR (2018)
+38. Tsai, Y.H., Sohn, K., Schulter, S., Chandraker, M.: Domain adaptation for structured output via discriminative representations. In: ICCV (2019)
+39. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: CVPR (2017)
+40. Xie, R., Yu, F., Wang, J., Wang, Y., Zhang, L.: Multi-level domain adaptive learning for cross-domain detection. In: ICCV (2019)
+41. Zhang, M.L., Zhou, Z.H.: Multilabel neural networks with applications to functional genomics and text categorization. TKDE (2006)
+42. Zhang, Y., David, P., Gong, B.: Curriculum domain adaptation for semantic segmentation of urban scenes. In: ICCV (2017)
+43. Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: CVPR (2017)
+44. Zhu, X., Pang, J., Yang, C., Shi, J., Lin, D.: Adapting object detectors via selective cross-domain alignment. In: CVPR (2019)
\ No newline at end of file
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/images.zip b/adaptiveobjectdetectionwithdualmultilabelprediction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dc7a2acb360e91185e7bf3743d9ac38fb0cd3afa
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f2eea760932256579b2045adfe492d364437c292c4062d1b6303a28d877c18f
+size 503970
diff --git a/adaptiveobjectdetectionwithdualmultilabelprediction/layout.json b/adaptiveobjectdetectionwithdualmultilabelprediction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9bc642a90aef262e3408dee8c0571479a2445a16
--- /dev/null
+++ b/adaptiveobjectdetectionwithdualmultilabelprediction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7fcc40e16b695abf7b14fae005b824bb9b3b06cd7fd00215058eb94b99c3aaa
+size 378267
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_content_list.json b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..56c48b6c78ceae8916b3e10e6eb2afbae756025a
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f98b19db52148c473a7c357feaf10b8c53ca139aff15cc3d62a4337b4c573559
+size 80876
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_model.json b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f73009b77ed24cf3b2fae2eca311bf390bce6c9b
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ae460cdab38d7b57ea13a82a7a13a48bde809d063e094e9cdf4da5f83351e73
+size 99365
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_origin.pdf b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9947396a888f353edb78f89e1c06901d18574625
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/04e108e0-cdb4-443a-b73d-7acdfb6976b7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:622ac4c2c1873d00ecfa0447ef00fa4d655272d775e418000b91d219d258cbdb
+size 3096753
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/full.md b/adaptiveofflinequintupletlossforimagetextmatching/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ceccb9178468eb78ee2d311f25253a5409c651b8
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/full.md
@@ -0,0 +1,274 @@
+# Adaptive Offline Quintuplet Loss for Image-Text Matching
+
+Tianlang Chen $^{1[0000-0002-6355-6474]}$ , Jiajun Deng $^{2}$ , and Jiebo Luo $^{1}$
+
+$^1$ University of Rochester, {tchen45, jluo}@cs.rochester.edu $^2$ University of Science and Technology of China, djiajun1206@gmail.com
+
+Abstract. Existing image-text matching approaches typically leverage triplet loss with online hard negatives to train the model. For each image or text anchor in a training mini-batch, the model is trained to distinguish between a positive and the most confusing negative of the anchor mined from the mini-batch (i.e. online hard negative). This strategy improves the model's capacity to discover fine-grained correspondences and non-correspondences between image and text inputs. However, the above approach has the following drawbacks: (1) the negative selection strategy still provides limited chances for the model to learn from very hard-to-distinguish cases. (2) The trained model has weak generalization capability from the training set to the testing set. (3) The penalty lacks hierarchy and adaptiveness for hard negatives with different "hardness" degrees. In this paper, we propose solutions by sampling negatives offline from the whole training set. It provides "harder" offline negatives than online hard negatives for the model to distinguish. Based on the offline hard negatives, a quintuplet loss is proposed to improve the model's generalization capability to distinguish positives and negatives. In addition, a novel loss function that combines the knowledge of positives, offline hard negatives and online hard negatives is created. It leverages offline hard negatives as the intermediary to adaptively penalize them based on their distance relations to the anchor. We evaluate the proposed training approach on three state-of-the-art image-text models on the MS-COCO and Flickr30K datasets. Significant performance improvements are observed for all the models, proving the effectiveness and generality of our approach. Code is available at https://github.com/sunnychencool/AOQ.
+
+Keywords: Image-text matching, Triplet loss, Hard negative mining
+
+# 1 Introduction
+
+Image-text matching is the core task in cross-modality retrieval to measure the similarity score between an image and a text. By image-text matching, a system can retrieve the top corresponding images of a sentence query, or retrieve the top corresponding sentences of an image query.
+
+To train an image-text matching model to predict accurate similarity scores, triplet loss is widely used [23,5,6,15,14]. Each given image or text of a training mini-batch is referred to as an anchor. For each image/text anchor, a text/image
+
+that corresponds to the anchor is called a positive while one that does not correspond to the anchor is called a negative. The anchor and its positives/negatives belong to two modalities. A triplet loss is applied to encourage the model to predict higher similarity scores between the anchor and its positives (i.e. positive pairs) than those between the anchor and its negatives (i.e. negative pairs).
+
+To utilize negative pairs to train the model, early approaches [23,5,10] adopt an all-in strategy. For each anchor, all its negatives in the mini-batch participate in the loss computing process. However, in most situations, the semantic meanings of an anchor and its negatives are totally different. With this strategy, the overall training difficulty is relatively low for the model to distinguish between positive and negative pairs. The model only needs to focus on each pair's global semantic meaning difference and may ignore the local matching details. Faghri et al. [6] propose a triplet loss with online hard negatives (i.e. online triplet loss) as a more effective training approach. Specifically, for each anchor in a mini-batch, the model computes its similarity score to all the negatives in the same mini-batch online, and selects the negative with the highest score to the anchor as online hard negative of the anchor. The new triplet loss guides the model to only distinguish between the positives and online hard negatives of the anchor. Compared with the all-in strategy, the models trained by this approach commonly achieve better performance in distinguishing between positives and confusing negatives that have similar semantic meanings to the anchor. This training approach is employed by all the state-of-the-art models [15,14,18,27].
+
+Even with its effectiveness, we argue that the online triplet loss still has three drawbacks, concerning its negative selection strategy, distinguishing strategy, and penalization strategy: (1) for the negative selection strategy, the "hardness" degree of online hard negatives is still insufficient. Taking the MS-COCO dataset as an example, the training set contains 500K corresponding image-text pairs. When we set the mini-batch size to 128 as in [15,14,18,27], for each online hard negative of an anchor mined from the mini-batch, we can show that the expectation of its similarity score rank to the anchor over the whole training set is about 4000 (i.e. $\frac{500K}{128}$ ). The probability of its rank being in the top-100 is only about $2.2\%$ . In other words, a very hard negative with a top-100 similarity score rank for the anchor will rarely be sampled to train the model. This limits the model's capacity to distinguish between the positives and those very confusing negatives. Increasing the mini-batch size could help; however, the mini-batch computational complexity grows sharply. (2) For the distinguishing strategy, the triplet loss only focuses on obtaining the correct rank orders between the positives and negatives of the same anchor. It does not guide the model to rank positive pairs against negative pairs that share no common samples. This guidance is essential to improve the model's generalization capability from training to testing, especially when it is applied to the very hard negative pairs. (3) For the penalization strategy, the triplet loss lacks a hierarchy. Ideally, the loss function should guide the model to maintain clear score gaps among the pairs of different classes. For example, the positive pairs should obtain far higher similarity scores than very hard negative pairs, and the very
+
+
+Fig. 1. Overview of the proposed training approach. For each anchor, we sample its positives, offline hard negatives and online hard negatives. The training approach gives adaptive penalties to enlarge the similarity score differences among positive pairs, offline hard negative pairs and online hard negative pairs (i.e. the blue, green and brown arrows). On the other hand, extra penalties are added to enlarge the similarity score difference between positive pairs and offline hard negative pairs with different anchors that share similar semantic meanings (i.e. the cyan arrow).
+
+
+hard negative pairs should also obtain far higher similarity scores than ordinary hard negative pairs. When a pair's predicted score is close or beyond the boundary of its pair class, the loss function should give it a larger penalty to update the model. However, the current online triplet loss only defines positive and online hard negative pairs. More importantly, it gives an equal penalty to all the pairs when the margin conditions are not satisfied.
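The rank statistics quoted in drawback (1) can be sanity-checked with elementary order statistics, under the simplifying assumption that an anchor's 127 in-batch negatives are uniform draws over the globally ranked negatives (the exact batch accounting may differ slightly, which is why this estimate of the top-100 probability, about 2.5%, is close to but not identical to the quoted 2.2%):

```python
N = 500_000   # corresponding image-text pairs in the MS-COCO training set
k = 127       # negatives of one anchor inside a mini-batch of 128

# Order statistics: the minimum of k uniform draws from ranks 1..N has
# expectation (N + 1) / (k + 1).
expected_rank = (N + 1) / (k + 1)

# Probability that the hardest in-batch negative is a global top-100 negative.
p_top100 = 1.0 - (1.0 - 100 / N) ** k

print(round(expected_rank))   # -> 3906, i.e. "about 4000"
print(round(p_top100, 3))     # -> 0.025
```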
+
+To overcome the above drawbacks, we propose a new training approach that can be generally applied on all existing models. Specifically, we utilize a two-round training to additionally sample "harder" negatives offline. In the first round, we train the model by the original online triplet loss. After that, for each image and text anchor in the training set, the model predicts its similarity score to all its negatives in the training set and ranks them. In the second round, given each anchor in a mini-batch, we sample its offline hard negatives directly from its top negative list with the highest similarity score in the whole training set. In this process, multiple kinds of offline hard negative pairs are constructed which share/do not share common elements with the positive pairs. The model is trained by a combination of online triplet loss and offline quintuplet loss to overcome the first two drawbacks successfully. Furthermore, we modify the loss function and feed information of offline hard negative pairs into the online triplet loss term. The complete training loss achieves hierarchical and adaptive penalization for the positive pairs, offline hard negative pairs, and online hard negative pairs with different "hardness" degrees. The framework of the proposed training approach is shown in Figure 1.
+
+Our main contributions are summarized as follows:
+
+- We propose a novel and general training approach for image-text matching models. A new offline quintuplet loss is introduced that can effectively cooperate with the original online triplet loss.
+
+- We skillfully feed the similarity score of offline hard negative pair into online loss term. It serves as a criterion to adaptively penalize different kinds of pairs. We analyze how it works mathematically.
+
+- We evaluate our training approach on three state-of-the-art image-text matching models. Quantitative and qualitative experiments conducted on two publicly available datasets demonstrate its strong generality and effectiveness.
+
+# 2 Related Work
+
+Image-text matching has received much attention in recent years. Most of the previous works focus on the improvement of feature extraction and model design. Early image-text matching approaches [7,13,6,35] directly capture the visual-textual alignment at the level of image and text. Typically, they extract the global image feature by convolutional neural network (CNN), and extract the global text feature by language model such as Skip-gram model [22] or recurrent neural network (RNN). The image-text similarity score is then computed as the inner product [7,13,6] or cosine similarity [35] of the image and text features. The success of attention models for joint visual-textual learning tasks, such as visual question answering (VQA) [34,21,30,12] and image captioning [29,20,31,24,3], leads to the transition to capture image-text correspondence at the level of image regions and words [10,16,23,36]. Typically, these approaches extract the image region feature and word feature from the last pooling layer of CNN and temporal outputs of RNN. They focus on designing effective upper networks that can automatically find, align and aggregate corresponding regions and words to compute the final similarity score. Recently, Anderson et al. [1] extract the image object features by the combination of Faster R-CNN [25] and ResNet [8] for VQA. Based on [1], recent approaches [14,15,18,27,11] further construct the connection between words and image objects. They either propose new mechanisms for object feature extraction, such as feeding saliency information [11] or extracting joint features among objects by constructing object graph [15], or propose different cross-modality aggregation networks [14,27,18,2,9] to improve the aggregation process from object and word features to the final score.
+
+Even though the network design is widely studied, relatively fewer works focus on the training approach. Early image-text matching approaches [7,13,5,32] commonly apply a standard triplet loss whose early form can be found in [28] for word-image embedding. On the other hand, Zhang et al. [35] improve the triplet loss and propose a norm-softmax loss to achieve cross-modal projection. For both losses, all the negatives of an anchor in the same mini-batch are utilized for loss computing. Significant improvement is observed as Faghri et al. [6] propose the triplet loss with online hard negatives. Online triplet mining is first introduced in [26] for face recognition. For image-text matching, it mines the online hard negatives of the anchors from the mini-batch and makes the model only pay attention to these confusing negatives. Almost all the current models [15,14,18,27] apply this online triplet loss. To the best of our knowledge, our work is the first that introduces offline hard negatives for image-text matching.
+
+
+Fig. 2. Training process illustration. Given a positive image-text pair $(I\#1, T\#1)$ , 6 margin-based ranking losses are applied to enlarge its similarity score differences from the online hard negative pairs $(I\#2, T\#1)$ , $(I\#1, T\#2)$ , the offline hard negative pairs $(I\#3, T\#1)$ , $(I\#1, T\#3)$ (with the common anchor), and the derived offline hard negative pairs $(I\#3, T\#3)$ , $(I\#4, T\#4)$ (without the common anchor). Adaptive penalization is imposed via the online losses, which penalize positive and negative pairs with different strengths and directions. The involved samples of each loss are marked by the corresponding squares.
+
+They are mined offline from the whole training set. Motivated by [4] for person re-identification, we propose a quintuplet loss based on offline hard negatives to effectively cooperate with an online triplet loss, leading to significant improvement. It should be noticed that Liu et al. [19] explicitly feed adaptive penalty weight into triplet loss for image-text matching. However, they use it to solve the hubness problem, while we implicitly feed hierarchical information into the model to enlarge the similarity score differences among different pair classes.
+
+# 3 Methods
+
+In this section, we formally present our training approach for image-text matching. In Section 3.1, we introduce the margin-based standard and online triplet losses that are used in previous works. In Section 3.2, we present the offline quintuplet loss as an effective complement to the online triplet loss that significantly improves the performance. In Section 3.3, we propose our final loss function with adaptive penalization and mathematically show how it works. The overall training process and the involved pairs are illustrated in Figure 2.
+
+# 3.1 Triplet Loss for Image-text Matching
+
+Given an input image-text pair, image-text matching models aim to predict the pair's similarity score as a criterion for cross-modality retrieval. To achieve this, positive pairs (i.e. corresponding image-text pairs) and negative pairs (i.e. non-corresponding image-text pairs) are constructed. The model is trained to predict higher similarity scores for the positive pairs than the negative ones.
+
+Because the metrics of cross-modality retrieval are based on the ranking performance of multiple candidates on a single query, triplet loss is widely applied
+
+to train the model. Each positive pair and negative pair holds a common sample, called the anchor. The other sample in the positive pair is called the anchor's positive, while the other sample in the negative pair is called the anchor's negative. In essence, triplet loss encourages the model to predict higher similarity scores from the anchor to its positives than to its negatives. This is consistent with the retrieval process of finding the corresponding candidates of a query via high similarity scores.
+
+Early image-text matching works [7,13,5,32] typically apply a standard triplet loss without hard negative mining. Given a training mini-batch that contains a set of positive pairs, the standard triplet loss is defined as:
+
+$$
+\mathcal {L} _ {s t d} = \sum_ {(i, t) \in P} \left(\sum_ {\bar {t} \in T / t} [ \gamma - S (i, t) + S (i, \bar {t}) ] _ {+} + \sum_ {\bar {i} \in I / i} [ \gamma - S (i, t) + S (\bar {i}, t) ] _ {+}\right) \tag {1}
+$$
+
+Here $\gamma$ is the margin of the triplet loss, $[x]_{+} \equiv \max(x, 0)$ . $I$ , $T$ and $P$ are the image, text and positive pair sets of the mini-batch, respectively. $i$ and $t$ are the anchors of the two terms, respectively. $(i, t)$ represents the positive pair, while $(i, \bar{t})$ and $(\bar{i}, t)$ represent the negative pairs available in the mini-batch.
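As a concrete sketch, Eq. (1) can be computed from a batch similarity matrix with NumPy. The layout below, where the b-th image and b-th text form the positive pairs, is an illustrative assumption, not a requirement of the method:

```python
import numpy as np

def standard_triplet_loss(S, gamma=0.2):
    """All-in standard triplet loss of Eq. (1).

    S[i, j] is the predicted similarity between image i and text j;
    the diagonal holds the positive pairs (hypothetical batch layout).
    """
    pos = np.diag(S)                                  # S(i, t) per positive pair
    # image anchors: compare S(i, t) against every S(i, t~), t~ != t
    cost_t = np.maximum(0.0, gamma - pos[:, None] + S)
    # text anchors: compare S(i, t) against every S(i~, t), i~ != i
    cost_i = np.maximum(0.0, gamma - pos[None, :] + S)
    b = S.shape[0]
    mask = 1.0 - np.eye(b)                            # exclude the positive pair itself
    return float(((cost_t + cost_i) * mask).sum())
```

Every one of the b(b-1) in-batch negatives of each anchor contributes a hinge term, which is exactly the all-in behavior whose limitations are discussed above.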
+
+On the other hand, to overcome the drawback of standard triplet loss mentioned in Section 1, Faghri et al. [6] present triplet loss with online hard negatives (i.e. online triplet loss). In particular, for a positive pair $(i,t)$ in a mini-batch, the hard negatives of the anchor $i$ and $t$ are given by $\bar{t}_{on} = \arg\max_{c \in T/t} S(i,c)$ and $\bar{i}_{on} = \arg\max_{b \in I/i} S(b,t)$ , respectively. The online triplet loss is defined as:
+
+$$
\mathcal {L} _ {online} = \sum_ {(i, t) \in P} \left( [ \gamma - S (i, t) + S (i, \bar {t} _ {on}) ] _ {+} + [ \gamma - S (i, t) + S (\bar {i} _ {on}, t) ] _ {+} \right) \tag {2}
+$$
+
+Compared with the standard triplet loss, online triplet loss forces the model to only learn to distinguish between the positive and the most confusing negative of an anchor in the mini-batch. This guides the model to not only consider the overall semantic meaning difference of a pair, but also discover correspondences and non-correspondences from the details hidden in local regions and words.
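A minimal NumPy sketch of Eq. (2), under the same illustrative diagonal-positive batch layout as before:

```python
import numpy as np

def online_triplet_loss(S, gamma=0.2):
    """Triplet loss with online hard negatives, Eq. (2).

    S[i, j]: similarity between image i and text j; the diagonal holds
    the positive pairs (hypothetical batch layout).
    """
    b = S.shape[0]
    pos = np.diag(S)
    # mask out the positive pair so it cannot be picked as a negative
    neg_only = np.where(np.eye(b, dtype=bool), -np.inf, S)
    hardest_text = neg_only.max(axis=1)   # S(i, t_on) for each image anchor
    hardest_img = neg_only.max(axis=0)    # S(i_on, t) for each text anchor
    loss = (np.maximum(0.0, gamma - pos + hardest_text)
            + np.maximum(0.0, gamma - pos + hardest_img))
    return float(loss.sum())
```

Only the single most confusing in-batch negative of each anchor is penalized, in contrast to the all-in loss of Eq. (1).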
+
+# 3.2 Offline Quintuplet Loss
+
+One problem of online triplet loss in Section 3.1 is that the "hardness" degree of most online hard negatives is still not sufficient, especially when the training involves a large-scale training set and a relatively small batch size. As mentioned in Section 1, the rank of an anchor's online hard negative in the whole training set is commonly not very high. Qualitatively, as shown in Figure 3, the online hard negatives of an anchor typically contain a few related words, objects or scenes to the anchor. However, there exist obvious non-correspondences between the anchor and the negatives. Indeed, the model only needs to find these non-correspondences and strengthen their influence, which is sufficient for the score difference between the positive pair and negative pair to exceed the margin $\gamma$ in Equation 2. However, during inference, when the model encounters "harder"
+
+Fig. 3. Two example anchors, their corresponding positives, their sampled online hard negatives and offline hard negatives.
+
+Panel (a) positive: "A black bicycle leaning against the kitchen cabinets."
+
+Online hard negatives:
+
+1. A man with glasses is in an office setting.
+2. Two people looking at the food in a fridge.
+3. A man is seen walking past a store front window.
+4. A young lady is in the kitchen preparing crops of donuts.
+
+Offline hard negatives:
+
+1. A man putting food in a cart in a kitchen.
+2. A man putting something in a container in an industrial kitchen.
+3. A man standing at a counter preparing food.
+4. A man in a blue shirt and apron stands near a counter that has food stacked on it.
+
+Panel (b) positive: "Two people in a food truck, one looking at an order."
+negatives like the offline hard negative examples in Figure 3, the model may not be able to distinguish them from the positives. The non-corresponding parts between these "harder" negatives and the anchor are subtle, and their influence on the predicted score can be offset by the perfectly corresponding parts.
+
+To overcome this problem, we additionally mine "harder" negatives in an offline fashion. In particular, this involves two rounds of training. In the first round, the model is trained with the online triplet loss. After that, it performs global similarity score prediction: for each image/text in the training set, the model predicts its similarity score to all its non-corresponding texts/images in the training set, ranks them by score and stores the top-$h$ list. In the second round, for each anchor in a mini-batch, its offline hard negatives are uniformly sampled from the anchor's top-$h$ negatives over the whole training set. The model is trained from scratch again with the following loss function:
+
+$$
+\begin{aligned} \mathcal{L} = \sum_{(i,t)\in P} \Big( &\left[\gamma_{1} - S(i,t) + S(i,\bar{t}_{on})\right]_{+} + \left[\gamma_{2} - S(i,t) + S(i,\bar{t}_{off})\right]_{+} \\ + &\left[\gamma_{1} - S(i,t) + S(\bar{i}_{on},t)\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\bar{i}_{off},t)\right]_{+} \Big) \end{aligned} \tag{3}
+$$
+
+Here $\bar{t}_{off}$ and $\bar{i}_{off}$ are the offline hard negatives of $i$ and $t$, and $\gamma_{1}$ and $\gamma_{2}$ are the margins of the online and offline triplet losses. It should be noted that for models with relatively low inference speed, the above-mentioned global similarity score prediction step can be time-consuming. In Section 4, we demonstrate that a model can safely utilize the predictions of another, more efficient model to mine offline hard negatives, which still substantially benefits the training process.
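The mining step between the two rounds can be sketched as follows. This is a simplified illustration, assuming a precomputed global image-to-text score matrix `S_global` in which text $j$ is the ground-truth match of image $j$; the paper's actual setting runs this over the full training set with five captions per image:

```python
import numpy as np

def build_top_h_lists(S_global, h):
    """After first-round training: rank every non-corresponding text
    for each image by predicted score and keep the top-h indices."""
    n_img = S_global.shape[0]
    lists = []
    for i in range(n_img):
        scores = S_global[i].astype(float).copy()
        scores[i] = -np.inf            # exclude the corresponding text
        lists.append(np.argsort(-scores)[:h])
    return lists

def sample_offline_negative(top_h_list, rng):
    """Second round: uniformly sample one offline hard negative per anchor."""
    return rng.choice(top_h_list)
```

The top-$h$ lists are built once, so the expensive global prediction is amortized over the whole second round of training.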
+
+Because the offline hard negatives are very confusing, to make them benefit the training, we should set $\gamma_{2}$ to a lower margin than $\gamma_{1}$, e.g. 0. However, in this situation, if the positive and offline hard negative pairs share the same anchor, the model merely learns how to find the subtle non-corresponding parts of the offline hard negative pair, but still does not learn how to deal with the situation in which the negative pair's perfectly matching parts offset the score influence of the non-corresponding parts. We attribute this to the fact that the positive and the offline hard negative obtain close similarity scores for the parts that correspond to the shared anchor, so the model only needs to find the non-corresponding parts of the negative pair to satisfy the margin condition of $\gamma_{2}$. Also, as claimed in [4], this setting weakens the model's generalization capability from training to testing.
+
+Considering this, we additionally derive two offline hard negative pairs and modify Equation 3 for the second-round training as follows:
+
+$$
+\begin{aligned} \mathcal{L} = \sum_{(i,t)\in P} \Big( &\left[\gamma_{1} - S(i,t) + S(i,\bar{t}_{on})\right]_{+} + \left[\gamma_{2} - S(i,t) + S(i,\bar{t}_{off})\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\bar{i}_{off},\bar{t}_{off})\right]_{+} \\ + &\left[\gamma_{1} - S(i,t) + S(\bar{i}_{on},t)\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\bar{i}_{off},t)\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\widetilde{\bar{i}}_{off},\widetilde{\bar{t}}_{off})\right]_{+} \Big) \end{aligned} \tag{4}
+$$
+
+Here $\widetilde{\bar{i}}_{off}$ and $\widetilde{\bar{t}}_{off}$ are the corresponding image of $\bar{t}_{off}$ and the corresponding text of $\bar{i}_{off}$, respectively. Because $\bar{t}_{off}$ and $\bar{i}_{off}$ are offline hard negatives of the corresponding $i$ and $t$, both $(\bar{i}_{off},\bar{t}_{off})$ and $(\widetilde{\bar{i}}_{off},\widetilde{\bar{t}}_{off})$ can also be regarded as offline hard negative pairs (we re-sample $\bar{i}_{off}$ and $\bar{t}_{off}$ if they happen to correspond to each other). The samples of each pair are non-corresponding but share very similar semantic meanings with each other, and also with $i$ and $t$. These two new terms guide the model to distinguish between positive and negative pairs without common elements. In Section 4, we prove the effectiveness of deriving the new terms based on $\bar{i}_{off}$, $\bar{t}_{off}$ instead of $\bar{i}_{on}$, $\bar{t}_{on}$. The complete offline loss terms based on anchors $i$ and $t$ involve 4 and 5 distinct samples, respectively. Following [4], we define the result as an offline quintuplet loss.
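Written out for the image-anchor half (the text-anchor half is symmetric), one term of Equation 4 reduces to three hinges over four scores. A minimal sketch, where the score arguments are assumed to come from the trained model:

```python
def hinge(x):
    """[x]_+ operator."""
    return max(0.0, x)

def quintuplet_loss_i(s_pos, s_on, s_off, s_off_pair, gamma1=0.2, gamma2=0.0):
    """Image-anchor half of Eq. 4.

    s_pos      = S(i, t)             positive pair
    s_on       = S(i, t_on)          online hard negative
    s_off      = S(i, t_off)         offline hard negative
    s_off_pair = S(i_off, t_off)     derived offline hard negative pair
    """
    return (hinge(gamma1 - s_pos + s_on)
            + hinge(gamma2 - s_pos + s_off)
            + hinge(gamma2 - s_pos + s_off_pair))
```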
+
+# 3.3 Adaptive and Hierarchical Penalization
+
+In Section 3.2, we introduce offline hard negatives which cooperate with online hard negatives to train the model as in Equation 4. During the training process, it is natural to give different penalty weights to negative pairs with different "hardness" degrees. For example, if the similarity scores of a positive pair and a hard negative pair are close, both pairs should obtain a higher penalty weight, which guides the model to distinguish between them better. However, when we differentiate each loss term with respect to its contained pairs' similarity scores, the gradients are always constant. This indicates that whenever the margin condition is not satisfied, the penalty weight stays the same regardless of how close the positive and negative pairs are.
+
+One simple solution is to modify each loss term into a squared form so that the penalty weight is related to the score difference between the positive and negative pairs. However, we find that the improvement is limited as no hierarchical knowledge is provided by the loss function. Ideally, we expect the positive pairs to obtain higher scores than the offline hard negative pairs, and the offline hard negative pairs to obtain higher scores than the online hard negative pairs. To this end, we feed the information of offline hard negatives into the online loss term. The final loss function for the second-round training is as follows:
+
+$$
+\begin{aligned} \mathcal{L} = \sum_{(i,t)\in P} \Big( &\left(\beta - \frac{S(i,\bar{t}_{off}) - S(i,\bar{t}_{on})}{\alpha}\right)\left[\gamma_{1} - S(i,t) + S(i,\bar{t}_{on})\right]_{+} + \left[\gamma_{2} - S(i,t) + S(i,\bar{t}_{off})\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\bar{i}_{off},\bar{t}_{off})\right]_{+} \\ + &\left(\beta - \frac{S(\bar{i}_{off},t) - S(\bar{i}_{on},t)}{\alpha}\right)\left[\gamma_{1} - S(i,t) + S(\bar{i}_{on},t)\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\bar{i}_{off},t)\right]_{+} + \left[\gamma_{2} - S(i,t) + S(\widetilde{\bar{i}}_{off},\widetilde{\bar{t}}_{off})\right]_{+} \Big) \end{aligned} \tag{5}
+$$
+
+Here $\alpha$ and $\beta$ are hyper-parameters. In Section 4, we show that they can be set to the same values for different models on different datasets.
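For the image-anchor half of Equation 5, the only change relative to Equation 4 is the adaptive weight on the online hinge. A minimal sketch, with the hyper-parameter values reported later in Section 4.2 ($\gamma_1 = 0.2$, $\gamma_2 = 0$, $\alpha = 0.3$, $\beta = 1.5$) as defaults:

```python
def adaptive_quintuplet_loss_i(s_pos, s_on, s_off, s_off_pair,
                               gamma1=0.2, gamma2=0.0, alpha=0.3, beta=1.5):
    """Image-anchor half of Eq. 5: the online hinge is re-weighted by how
    far the online hard negative's score trails the offline one's.

    s_pos = S(i,t), s_on = S(i,t_on), s_off = S(i,t_off),
    s_off_pair = S(i_off, t_off)."""
    w = beta - (s_off - s_on) / alpha   # adaptive weight on the online term
    return (w * max(0.0, gamma1 - s_pos + s_on)
            + max(0.0, gamma2 - s_pos + s_off)
            + max(0.0, gamma2 - s_pos + s_off_pair))
```

When the online hard negative scores well below the offline one, `w` shrinks, so the already-easy online term is down-weighted.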
+
+To better understand how the proposed loss function works, we focus on the first part (line) of Equation 5 which is symmetrical to the second part, and compute its gradient with respect to $S(i,t)$ , $S(i,\bar{t}_{off})$ and $S(i,\bar{t}_{on})$ as follows:
+
+$$
+\begin{aligned} \frac{\partial\mathcal{L}}{\partial S(i,t)} &= \left(\frac{S(i,\bar{t}_{off}) - S(i,\bar{t}_{on})}{\alpha} - \beta\right)\mathbb{I}(\gamma_{1} - S(i,t) + S(i,\bar{t}_{on}) > 0) - \mathbb{I}(\gamma_{2} - S(i,t) + S(i,\bar{t}_{off}) > 0) \\ &\quad - \mathbb{I}(\gamma_{2} - S(i,t) + S(\bar{i}_{off},\bar{t}_{off}) > 0), \\ \frac{\partial\mathcal{L}}{\partial S(i,\bar{t}_{off})} &= \left(\frac{S(i,t) - S(i,\bar{t}_{on})}{\alpha} - \frac{\gamma_{1}}{\alpha}\right)\mathbb{I}(\gamma_{1} - S(i,t) + S(i,\bar{t}_{on}) > 0) + \mathbb{I}(\gamma_{2} - S(i,t) + S(i,\bar{t}_{off}) > 0), \\ \frac{\partial\mathcal{L}}{\partial S(i,\bar{t}_{on})} &= \left(\frac{2S(i,\bar{t}_{on}) - S(i,t) - S(i,\bar{t}_{off})}{\alpha} + \beta + \frac{\gamma_{1}}{\alpha}\right)\mathbb{I}(\gamma_{1} - S(i,t) + S(i,\bar{t}_{on}) > 0) \end{aligned} \tag{6}
+$$
+
+Here $\mathbb{I}(A)$ is the indicator function: $\mathbb{I}(A) = 1$ if $A$ is true, and 0 otherwise.
+
+When the margin conditions are not satisfied, the gradient of $\mathcal{L}$ with respect to $S(i,\bar{t}_{on})$ becomes larger when $S(i,\bar{t}_{on})$ is close to the average of $S(i,\bar{t}_{off})$ and $S(i,t)$, which indicates a larger penalty to make $S(i,\bar{t}_{on})$ lower. For the gradient of $\mathcal{L}$ with respect to $S(i,t)$, the second and third terms form a negative constant which pushes $S(i,t)$ to be higher than $S(i,\bar{t}_{off})$. In addition, the first term provides an adaptive penalty that pushes $S(i,t)$ away from $S(i,\bar{t}_{on})$. When $S(i,\bar{t}_{on})$ is remarkably lower than $S(i,\bar{t}_{off})$, this penalty drops since $S(i,\bar{t}_{on})$ is already sufficiently low. As for the gradient of $\mathcal{L}$ with respect to $S(i,\bar{t}_{off})$, the behaviour is subtler: the second term is a positive constant that penalizes $S(i,\bar{t}_{off})$ to be lower than $S(i,t)$. However, this penalty can be neutralized when $S(i,t)$ and $S(i,\bar{t}_{on})$ are close to each other. In this situation, the neutralization prevents the penalty from incorrectly pushing $S(i,\bar{t}_{off})$ below $S(i,\bar{t}_{on})$.
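These gradients can be sanity-checked numerically. The sketch below is an illustration restricted to the first line of Equation 5 (with the Section 4.2 hyper-parameter values assumed); it compares the analytic $\partial\mathcal{L}/\partial S(i,\bar{t}_{on})$ from Equation 6 against a central finite difference:

```python
ALPHA, BETA, G1, G2 = 0.3, 1.5, 0.2, 0.0

def line1(p, n, f, q):
    """First line of Eq. 5: p = S(i,t), n = S(i, t_on),
    f = S(i, t_off), q = S(i_off, t_off)."""
    return ((BETA - (f - n) / ALPHA) * max(0.0, G1 - p + n)
            + max(0.0, G2 - p + f)
            + max(0.0, G2 - p + q))

def analytic_grad_n(p, n, f, q):
    """dL/dS(i, t_on) from Eq. 6."""
    active = 1.0 if G1 - p + n > 0 else 0.0
    return ((2 * n - p - f) / ALPHA + BETA + G1 / ALPHA) * active

def numeric_grad_n(p, n, f, q, eps=1e-6):
    """Central finite-difference estimate of the same derivative."""
    return (line1(p, n + eps, f, q) - line1(p, n - eps, f, q)) / (2 * eps)
```

At a point where the online hinge is active, e.g. $p=0.6$, $n=0.55$, $f=0.7$, $q=0.5$, both gradients agree to within numerical precision.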
+
+Overall, the proposed loss function applies adaptive and hierarchical penalties to the positive, offline hard negative and online hard negative pairs based on the differences among their predicted scores. Essentially, the pairs that are close to the boundary of their pair class obtain larger penalty weights, so the inter-class score gaps among these three kinds of pairs are enlarged. In Section 4, we demonstrate its strong effectiveness in improving the model's performance.
+
+# 4 Experiments
+
+Extensive experiments are performed to evaluate the proposed training approach. Retrieval performance is evaluated by the standard recall at $K$ (R@K), defined as the fraction of queries for which the correct item is among the top-$K$ retrieved items. We first present the datasets, experiment settings and implementation details. We then compare and analyze the performance of the proposed approach against others quantitatively and qualitatively.
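R@K can be computed directly from a score matrix. A minimal sketch, assuming for simplicity a single ground-truth text per image placed on the diagonal (the benchmark datasets actually provide five captions per image, where a hit on any of them counts):

```python
import numpy as np

def recall_at_k(S, k):
    """R@K for image-to-text retrieval.

    S[i, j] is the predicted score of image i against text j; text j is
    assumed to be the single ground-truth match of image j."""
    ranks = (-S).argsort(axis=1)       # texts sorted by descending score
    hits = [j in ranks[j, :k] for j in range(S.shape[0])]
    return float(np.mean(hits))
```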
+
+# 4.1 Dataset and Experiment Settings
+
+We evaluate our model on two well-known datasets, MS-COCO and Flickr30K. The original MS-COCO dataset [17] contains 82,783 training and 40,504 validation images. Each image is annotated with five descriptions. Following the splits
+
+of [18,14,15], we divide the dataset into 113,283 training images, 5,000 validation images and 5,000 test images. Following [6,14,15], we report results by averaging over 5 folds of 1K test images or by testing on the full 5K test images. Flickr30K [33] consists of 31K images collected from the Flickr website. Each image also corresponds to five human-annotated sentences. Following the split of [18,14,15], we randomly select 1,000 images for validation and 1,000 images for testing, and use the remaining images to train the model.
+
+To evaluate the effectiveness and generality of the proposed approach, we apply it to the following current state-of-the-art image-text matching models:
+
+- SCAN [14]. The first model that captures image-text correspondence at the level of objects and words. The word and object features are extracted by a bidirectional GRU and by the combination of Faster R-CNN [25] and ResNet-101 [8], respectively. Stacked cross attention is used to discover the full latent alignments using both objects and words as context.
+- BFAN [18]. A novel Bidirectional Focal Attention Network based on SCAN that achieves remarkable improvement. Compared with SCAN, it focuses additionally on eliminating irrelevant fragments from the shared semantics.
+- VSRN [15]. The current state-of-the-art image-text matching model among those that do not leverage extra supervision (the model in [11] is trained with extra saliency-annotated data). It generates object representations by region relationship reasoning and global semantic reasoning.
+
+All three models are originally trained with the triplet loss with online hard negatives. We replace it with the proposed training approach for comparison.
+
+# 4.2 Implementation Details
+
+To perform a fair comparison, for SCAN, BFAN and VSRN, we completely preserve their network structures and model settings (e.g. training batch size, feature dimension and other model-related hyper-parameter settings) as described in their original work. We only replace the online triplet loss with the proposed one to train them. In all cases, the margins $\gamma_{1}$ and $\gamma_{2}$ for the online and offline ranking losses are set to 0.2 and 0, and the hyper-parameters $\beta$ and $\alpha$ in Equation 5 are set to 1.5 and 0.3. The top list size $h$ is set to 300 for sampling offline hard negative texts and 60 for sampling offline hard negative images (there are five times as many training texts as training images in both datasets). As mentioned in Section 3.2, for VSRN, it takes 3,400s/620s to perform global similarity score prediction on MS-COCO/Flickr30K. However, SCAN and BFAN have complex upper networks which make this step extremely time-consuming. Therefore, we skip the first-round training of SCAN and BFAN. The similarity scores predicted by VSRN are instead used as the basis for sampling offline hard negatives in the second-round training of SCAN and BFAN. We consider this setting valid because, after the second-round training, the final prediction is still made by SCAN or BFAN without the participation of VSRN, which can thus be regarded as a teacher model. For the first-round training on MS-COCO/Flickr30K, following [15], VSRN is trained with an initial learning rate of 0.0002 for 15/10 epochs, and then with a lower learning rate of 0.00002 for another 15/10 epochs. For the second-round training on both datasets, SCAN, BFAN and VSRN are trained with initial learning rates of 0.0005, 0.0005 and 0.0002 for 10 epochs, and then with lower learning rates of 0.00005, 0.00005 and 0.00002 for another 5, 5 and 10 epochs, respectively.
+
+# 4.3 Results on MS-COCO and Flickr30K
+
+Table 1. Quantitative evaluation results of image-to-text (sentence) retrieval and text-to-image (image) retrieval on MS-COCO 1K/5K test set. The baseline models (first row) are trained by the triplet loss with online hard negatives. “+OffTri”, “+OffQuin”, “+AdapOffQuin” represent training the model by Equation 3, 4, 5, respectively.
+
+| Model | Sentence R@1 | Sentence R@5 | Sentence R@10 | Image R@1 | Image R@5 | Image R@10 | rsum |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1K Test Images | | | | | | | |
| SCAN [14] | 72.7 | 94.8 | 98.4 | 58.8 | 88.4 | 94.8 | 507.9 |
| SCAN + OffTri | 73.1 | 94.8 | 98.2 | 59.3 | 88.3 | 94.8 | 508.5 |
| SCAN + OffQuin | 73.6 | 95.0 | 98.4 | 59.6 | 88.6 | 95.0 | 510.2 |
| SCAN + AdapOffQuin | 74.1 | 95.2 | 98.5 | 59.8 | 88.6 | 95.0 | 511.2 |
| BFAN [18] | 74.9 | 95.2 | 98.3 | 59.4 | 88.4 | 94.5 | 510.7 |
| BFAN + OffTri | 75.8 | 95.6 | 98.4 | 60.1 | 88.8 | 94.7 | 513.4 |
| BFAN + OffQuin | 76.3 | 95.7 | 98.4 | 60.5 | 89.0 | 94.8 | 514.7 |
| BFAN + AdapOffQuin | 77.3 | 96.0 | 98.5 | 61.2 | 89.2 | 95.0 | 517.2 |
| VSRN [15] | 76.2 | 94.8 | 98.2 | 62.8 | 89.7 | 95.1 | 516.8 |
| VSRN + OffTri | 76.8 | 95.2 | 98.4 | 63.1 | 89.9 | 95.2 | 518.6 |
| VSRN + OffQuin | 76.9 | 95.3 | 98.4 | 63.3 | 90.2 | 95.5 | 519.7 |
| VSRN + AdapOffQuin | 77.5 | 95.5 | 98.6 | 63.5 | 90.5 | 95.8 | 521.4 |
| 5K Test Images | | | | | | | |
| SCAN [14] | 50.4 | 82.2 | 90.0 | 38.6 | 69.3 | 80.4 | 410.9 |
| SCAN + AdapOffQuin | 51.2 | 82.5 | 90.1 | 39.4 | 69.7 | 80.4 | 413.3 |
| BFAN [18] | 52.9 | 82.8 | 90.6 | 38.3 | 67.8 | 79.3 | 411.7 |
| BFAN + AdapOffQuin | 57.3 | 84.5 | 91.7 | 40.1 | 69.2 | 80.1 | 422.9 |
| VSRN [15] | 53.0 | 81.1 | 89.4 | 40.5 | 70.6 | 81.1 | 415.7 |
| VSRN + AdapOffQuin | 55.1 | 83.3 | 90.8 | 41.1 | 71.5 | 82.0 | 423.8 |
+
+Table 1 shows the performance comparison of models trained by different approaches on MS-COCO. All three models are significantly improved in all the settings when trained by our proposed approach. As mentioned in Section 4.2, for all the models, the offline hard negatives in the second-round training are sampled from the predictions of the first-round trained VSRN. This indicates that the proposed training approach is insensitive to whether the same model is used in both training rounds. When the global similarity score prediction step is intractable for the current model, we can train it by sampling offline hard negatives based on the predictions of another, more efficient model. Overall, we achieve the most significant improvement on BFAN. In particular, on the more reliable 5K test set, it outperforms the baseline by $8.3\%$ and $4.7\%$ (relative) in top-1 sentence retrieval and top-1 image retrieval.
+
+
+Fig. 4. Training epoch plotted against R@1 on the MS-COCO validation set for different training approaches applied to VSRN and BFAN. For the proposed approaches, the training curves correspond to the second-round training. "t2i" and "i2t" represent image retrieval and sentence retrieval, respectively.
+
+
+
+Table 2 shows the performance comparison on Flickr30K. It should be noted that Flickr30K is much smaller than MS-COCO, so it contains fewer highly confusing negative image-text pairs that can serve as high-quality offline hard negative pairs. Nevertheless, significant improvements are still observed for all the models. In Section 4.4, we show that our proposed training approach is robust to the quality of the offline hard negatives.
+
+Table 2. Quantitative evaluation results of sentence retrieval and image retrieval on the Flickr30K test set.
+
+| Model | Sentence R@1 | Sentence R@5 | Sentence R@10 | Image R@1 | Image R@5 | Image R@10 | rsum |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1K Test Images | | | | | | | |
| SCAN [14] | 67.4 | 90.3 | 95.8 | 48.6 | 77.7 | 85.2 | 465.0 |
| SCAN + AdapOffQuin | 70.3 | 92.0 | 95.5 | 50.0 | 79.2 | 86.2 | 473.2 |
| BFAN [18] | 68.1 | 91.4 | 95.9 | 50.8 | 78.4 | 85.8 | 470.4 |
| BFAN + AdapOffQuin | 73.2 | 94.5 | 97.0 | 54.0 | 80.3 | 87.7 | 486.7 |
| VSRN [15] | 71.3 | 90.6 | 96.0 | 54.7 | 81.8 | 88.2 | 482.6 |
| VSRN + AdapOffQuin | 72.8 | 91.8 | 95.8 | 55.3 | 82.2 | 88.4 | 486.3 |
+
+We look deeper into the different training approaches by examining VSRN and BFAN's training behaviours on the widely-used MS-COCO 1K validation set [6,15,14] (i.e. the first fold of the 5K validation set). As shown in Figure 4, both models' performance improves continuously as we feed the different proposed mechanisms into the training process. When the models are trained by Equation 5, they converge significantly faster than the baselines: it takes them fewer than 10 epochs to outperform the highest R@1 of their baselines.
+
+
+Fig. 5. Qualitative image retrieval and sentence retrieval comparison between the baseline training approach and ours on the MS-COCO test set.
+
+# 4.4 Ablation Study and Visualization
+
+Table 3. Performance of different training approach variants on the MS-COCO 1K test set. "OnlyOffline" represents the model that is trained only by the offline terms. "Fine-tune" represents the model that is fine-tuned in the second round instead of re-trained from scratch. "OnlineQuin" indicates that we apply an online quintuplet loss instead of an offline one in Equation 4 (i.e. replace $S(\bar{i}_{off},\bar{t}_{off})$, $S(\widetilde{\bar{i}}_{off},\widetilde{\bar{t}}_{off})$ with $S(\bar{i}_{on},\bar{t}_{on})$, $S(\widetilde{\bar{i}}_{on},\widetilde{\bar{t}}_{on})$) to train the model. "w/o OfflineAdap" represents that we replace $S(i,\bar{t}_{off})$ and $S(\bar{i}_{off},t)$ with $S(i,t)$ in the newly added terms of Equation 5 to train the model. Performance when selecting different top list sizes $h$ for offline hard negative text sampling is also studied. The values in parentheses indicate the performance difference between the models trained by the variant and by the proposed approach with the final settings.
+
+| Model | Sentence R@1 | Sentence R@5 | Sentence R@10 | Image R@1 | Image R@5 | Image R@10 |
| --- | --- | --- | --- | --- | --- | --- |
| 1K Test Images | | | | | | |
| BFAN (OnlyOffline) | 1.1(-76.2) | 2.5 (-93.5) | 4.9 (-93.6) | 0.5 (-60.7) | 1.4 (-87.8) | 2.6(-92.4) |
| VSRN (OnlyOffline) | 0.7(-76.8) | 2.1(-93.4) | 3.8(-94.8) | 0.4(-63.1) | 1.2(-89.3) | 2.3(-93.5) |
| BFAN (Fine-tune) | 74.3 (-3.0) | 94.7 (-1.3) | 98.2 (-0.3) | 58.7 (-2.5) | 88.1 (-1.1) | 94.2(-0.8) |
| VSRN (Fine-tune) | 74.5 (-3.0) | 94.3 (-1.2) | 98.1 (-0.5) | 62.0 (-1.5) | 89.3(-1.2) | 94.8(-1.0) |
| BFAN (OnlineQuin) | 75.3 (-2.0) | 95.8 (-0.2) | 98.5 (+0.0) | 59.8 (-1.4) | 88.6 (-0.6) | 94.6(-0.4) |
| VSRN (OnlineQuin) | 76.4 (-1.1) | 94.9 (-0.6) | 98.2 (-0.4) | 62.8 (-0.7) | 89.9(-0.6) | 95.2(-0.6) |
| BFAN (w/o OfflineAdap) | 76.6 (-0.7) | 95.8(-0.2) | 98.4 (-0.1) | 60.8 (-0.4) | 89.1 (-0.1) | 94.8(-0.2) |
| VSRN (w/o OfflineAdap) | 77.1 (-0.4) | 95.4(-0.1) | 98.4 (-0.2) | 63.4 (-0.1) | 90.2 (-0.3) | 95.5(-0.3) |
| VSRN (h = 200) | 77.1 (-0.4) | 95.3(-0.2) | 98.4 (-0.2) | 63.3 (-0.2) | 90.4 (-0.1) | 95.6(-0.2) |
| VSRN (h = 500) | 77.4 (-0.1) | 95.6(+0.1) | 98.6 (+0.0) | 63.5 (+0.0) | 90.4 (-0.1) | 95.7(-0.1) |
| VSRN (h = 1000) | 77.3 (-0.2) | 95.4(-0.1) | 98.6 (+0.0) | 63.3 (-0.2) | 90.3 (-0.2) | 95.6(-0.2) |
+
+First, we validate whether the offline hard negatives can completely replace the online hard negatives to train the model. Specifically, we remove the online loss term in Equation 4 to train VSRN and BFAN. As shown in Table 3, the training process fails, as it is too difficult for the model to directly learn to distinguish between the positive pairs and these extremely confusing negative pairs. We also demonstrate the usefulness of re-training the model from scratch in the second round. As shown in Table 3, when we apply Equation 5 to fine-tune the model that has already been trained by the online triplet loss and become trapped in a local optimum, it cannot obtain additional improvement. In Equation 4, we create two new terms based on offline negatives. Indeed, we could instead build them from online negatives. However, the performance of the "OnlineQuin" models is remarkably worse than that of the models trained by Equation 4; this supports our claim about the second problem in Section 1. On the other hand, in Equation 5, we feed the offline hard negative information into the online term for hierarchical penalization. To validate its effectiveness, we replace $S(i,\bar{t}_{off})$ and $S(\bar{i}_{off},t)$ with $S(i,t)$ in the newly added terms of Equation 5 to break this hierarchical relation. $\alpha$ and $\beta$ are re-adjusted to achieve the best performance on the validation set. The performance drops to the level of training the models with Equation 4, indicating the effectiveness of the hierarchical design. Finally, for VSRN, we present the model's performance when selecting different top list sizes $h$ for offline hard negative text sampling (we always keep it 5 times larger than the top list size for offline hard negative image sampling). Even when $h$ is set to 1000, which indicates a significant drop in the "hardness" degree of the offline hard negatives, the model still achieves strong performance. This is consistent with the excellent performance on Flickr30K and proves the robustness of our training approach on smaller datasets where very confusing hard negative pairs are limited.
+
+Figure 5 shows the qualitative comparison between the models trained by the different approaches on MS-COCO. For sentence retrieval, given an image query, we show the top-5 retrieved sentences. For image retrieval, given a sentence query, we show the top-3 retrieved images. The correct retrieval items for each query are marked with a tick. Overall, our training approach guides the model to better find and attend to the detailed non-correspondences of negative image-text pairs such as "snow covered field", "rihiho", "blowing out a candle" and "poster".
+
+# 5 Conclusion
+
+We present a novel training approach for image-text matching. It starts by mining "harder" negatives offline from the whole training set. Based on the mined offline hard negatives, an effective quintuplet loss is proposed to complement the online triplet loss to better distinguish positive and negative pairs. Furthermore, we take the distance relations among positive, offline hard negative and online hard negative pairs into consideration and effectively achieve adaptive penalization for different pairs. Extensive experiments demonstrate the effectiveness and generality of the proposed approach.
+
+# 6 Acknowledgment
+
+This work is supported in part by NSF awards IIS-1704337, IIS-1722847, and IIS-1813709, as well as our corporate sponsors.
+
+# References
+
+1. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L.: Bottom-up and top-down attention for image captioning and visual question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6077-6086 (2018)
+2. Chen, T., Luo, J.: Expressing objects just like words: Recurrent visual embedding for image-text matching. arXiv preprint arXiv:2002.08510 (2020)
+3. Chen, T., Zhang, Z., You, Q., Fang, C., Wang, Z., Jin, H., Luo, J.: "factual" or "emotional": Stylized image captioning with adaptive learning and attention. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 519-535 (2018)
+4. Chen, W., Chen, X., Zhang, J., Huang, K.: Beyond triplet loss: a deep quadruplet network for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 403-412 (2017)
+5. Eisenschtat, A., Wolf, L.: Linking image and text with 2-way nets. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4601-4611 (2017)
+6. Faghri, F., Fleet, D.J., Kiros, J.R., Fidler, S.: Vse++: Improved visual-semantic embeddings. arXiv preprint arXiv:1707.05612 2(7), 8 (2017)
+7. Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Mikolov, T., et al.: Devise: A deep visual-semantic embedding model. In: Advances in neural information processing systems. pp. 2121-2129 (2013)
+8. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
+9. Huang, Y., Wang, L.: Acmm: Aligned cross-modal memory for few-shot image and sentence matching. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5774-5783 (2019)
+10. Huang, Y., Wang, W., Wang, L.: Instance-aware image and sentence matching with selective multimodal LSTM. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2310-2318 (2017)
+11. Ji, Z., Wang, H., Han, J., Pang, Y.: Saliency-guided attention network for image-sentence matching. arXiv preprint arXiv:1904.09471 (2019)
+12. Kim, J.H., Lee, S.W., Kwak, D., Heo, M.O., Kim, J., Ha, J.W., Zhang, B.T.: Multimodal residual learning for visual qa. In: Advances in neural information processing systems. pp. 361-369 (2016)
+13. Kiros, R., Salakhutdinov, R., Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539 (2014)
+14. Lee, K.H., Chen, X., Hua, G., Hu, H., He, X.: Stacked cross attention for image-text matching. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 201-216 (2018)
+15. Li, K., Zhang, Y., Li, K., Li, Y., Fu, Y.: Visual semantic reasoning for image-text matching. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4654-4662 (2019)
+16. Li, S., Xiao, T., Li, H., Yang, W., Wang, X.: Identity-aware textual-visual matching with latent co-attention. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1890-1899 (2017)
+17. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+
+18. Liu, C., Mao, Z., Liu, A.A., Zhang, T., Wang, B., Zhang, Y.: Focus your attention: A bidirectional focal attention network for image-text matching. In: Proceedings of the 27th ACM International Conference on Multimedia. pp. 3-11 (2019)
+19. Liu, F., Ye, R., Wang, X., Li, S.: Hal: Improved text-image matching by mitigating visual semantic hubs. arXiv preprint arXiv:1911.10097 (2019)
+20. Lu, J., Xiong, C., Parikh, D., Socher, R.: Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). vol. 6 (2017)
+21. Lu, J., Yang, J., Batra, D., Parikh, D.: Hierarchical question-image co-attention for visual question answering. In: Advances In Neural Information Processing Systems. pp. 289–297 (2016)
+22. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
+23. Nam, H., Ha, J.W., Kim, J.: Dual attention networks for multimodal reasoning and matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 299-307 (2017)
+24. Pedersoli, M., Lucas, T., Schmid, C., Verbeek, J.: Areas of attention for image captioning. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1242-1250 (2017)
+25. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+26. Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 815-823 (2015)
+27. Wang, Z., Liu, X., Li, H., Sheng, L., Yan, J., Wang, X., Shao, J.: Camp: Cross-modal adaptive message passing for text-image retrieval. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5764-5773 (2019)
+28. Weston, J., Bengio, S., Usunier, N.: Large scale image annotation: learning to rank with joint word-image embeddings. Machine learning 81(1), 21-35 (2010)
+29. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: International Conference on Machine Learning. pp. 2048-2057 (2015)
+30. Yang, Z., He, X., Gao, J., Deng, L., Smola, A.: Stacked attention networks for image question answering. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 21-29 (2016)
+31. You, Q., Jin, H., Wang, Z., Fang, C., Luo, J.: Image captioning with semantic attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4651-4659 (2016)
+32. You, Q., Zhang, Z., Luo, J.: End-to-end convolutional semantic embeddings. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5735-5744 (2018)
+33. Young, P., Lai, A., Hodosh, M., Hockenmaier, J.: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2, 67-78 (2014)
+34. Yu, D., Fu, J., Mei, T., Rui, Y.: Multi-level attention networks for visual question answering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4709-4717 (2017)
+35. Zhang, Y., Lu, H.: Deep cross-modal projection learning for image-text matching. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 686-701 (2018)
+36. Zheng, Z., Zheng, L., Garrett, M., Yang, Y., Shen, Y.D.: Dual-path convolutional image-text embedding with instance loss. arXiv preprint arXiv:1711.05535 (2017)
\ No newline at end of file
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/images.zip b/adaptiveofflinequintupletlossforimagetextmatching/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..de7a53d12dffd52526a0ebaba62a594ff8ed8626
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d70c7bf5c9733d8a88b45363c1210d64cbd8402e34b31f010794ba213cfeca7
+size 546885
diff --git a/adaptiveofflinequintupletlossforimagetextmatching/layout.json b/adaptiveofflinequintupletlossforimagetextmatching/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a43b597710d5887aa08e6348a10a6bc62842af03
--- /dev/null
+++ b/adaptiveofflinequintupletlossforimagetextmatching/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7fff24bfb82aa35930a00d52ce9aed689869d052aa1cb6cb2a6b1f85c2cca3a
+size 402444
diff --git a/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_content_list.json b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..895f91d932c4edb3793384af6c3b519ea9517cc9
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c901761e44cd447d5cd97eb78fc921ef128779d9960b9db00cac0a1d548a5ddd
+size 77090
diff --git a/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_model.json b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..19816bd14c6e9fa111e67a95d6059690ce71863e
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1858410686ea068d80cad372add45b7e9c5f3da8f351610cb5055409940de45
+size 95261
diff --git a/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_origin.pdf b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a3d67e5e47af9d9d810101a044bf1615f6c252f0
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/aa2b9400-49d4-4845-b1d4-c1e0e36478c6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f2ddcca4963c6a44e5a0050b3b4e68f3ef72f1f71523b8d837ac3d1ff23d5d0
+size 2021160
diff --git a/adaptivetasksamplingformetalearning/full.md b/adaptivetasksamplingformetalearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d241d8f0eb40357a929f500fd7d12e289a9baf9
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/full.md
@@ -0,0 +1,278 @@
+# Adaptive Task Sampling for Meta-Learning *
+
+Chenghao Liu1 Zhihao Wang2 Doyen Sahoo3 Yuan Fang1 Kun Zhang4 Steven C.H. Hoi1,3
+
+Singapore Management University1 South China University of Technology2
+Salesforce Research Asia3 Carnegie Mellon University4
+{chliu, yfang}@smu.edu.sg, ptkin@outlook.com,
+{dsahoo, shoi}@salesforce.com, kunz1@cmu.edu
+
+Abstract. Meta-learning methods have been extensively studied and applied in computer vision, especially for few-shot classification tasks. The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time by randomly sampling classes in meta-training data to construct few-shot tasks for episodic training. While a rich line of work focuses solely on how to extract meta-knowledge across tasks, we exploit the complementary problem on how to generate informative tasks. We argue that the randomly sampled tasks could be sub-optimal and uninformative (e.g., the task of classifying "dog" from "laptop" is often trivial) to the meta-learner. In this paper, we propose an adaptive task sampling method to improve the generalization performance. Unlike instance-based sampling, task-based sampling is much more challenging due to the implicit definition of the task in each episode. Therefore, we accordingly propose a greedy class-pair based sampling method, which selects difficult tasks according to class-pair potentials. We evaluate our adaptive task sampling method on two few-shot classification benchmarks, and it achieves consistent improvements across different feature backbones, meta-learning algorithms and datasets.
+
+# 1 Introduction
+
+Deep neural networks have achieved great performance in areas such as image recognition [17], machine translation [9] and speech synthesis [51] when large amounts of labelled data are available. In stark contrast, human intelligence naturally possesses the ability to leverage prior knowledge and quickly learn new concepts from only a handful of samples. Such fast adaptation is made possible by fundamental structures in the human brain, such as the "shape bias", that learn the learning procedure [25], which is also known as meta-learning. The fact that deep neural networks fail in the small-data regime thus poses a well-defined problem for understanding intelligence. In particular, leveraging meta-learning algorithms to solve few-shot learning problems [24, 38] has recently gained much attention; these approaches aim to close the gap between human and machine intelligence by training deep neural networks that can generalize well from very few labelled samples. In this setup, meta-learning is formulated as the extraction of cross-task knowledge that can facilitate the quick acquisition of task-specific knowledge from new tasks.
+
+In order to compensate for the scarcity of training data in few-shot classification tasks, meta-learning approaches rely on an episodic training paradigm. A series of few-shot tasks are sampled from meta-training data for the extraction of transferable knowledge across tasks, which is then applied to new few-shot classification tasks consisting of unseen classes during the meta-testing phase. Specifically, optimization-based meta-learning approaches [46, 12] aim to find a global set of model parameters that can be quickly and effectively fine-tuned for each individual task with just a few gradient descent update steps. Meanwhile, metric-based meta-learning approaches [47, 37] learn a shared distance metric across tasks.
+
+Despite their noticeable improvements, these meta-learning approaches leverage uniform sampling over classes to generate few-shot tasks, which ignores the intrinsic relationships between classes when forming episodes. We argue that exploiting class structures to construct more informative tasks is critical in meta-learning, which improves its ability to adapt to novel classes. For example, in the midst of the training procedure, a randomly sampled task of classifying dogs from laptops may have little effect on the model update due to its simplicity. Furthermore, in the conventional classification problem, prioritizing challenging training examples [43, 42] to improve the generalization performance has been widely used in various fields, ranging from AdaBoost [14] that selects harder examples to train subsequent classifiers, to Focal Loss [28] that adds a soft weighting scheme to emphasize harder examples.
+
+A natural question thus arises: Can we perform adaptive task sampling and create more difficult tasks for meta-learning? Compared to the traditional instance-based adaptive sampling scheme, one key challenge in task sampling is to define the difficulty of a task. A naive solution is to choose difficult classes, since each task is constructed from multiple classes. However, the difficulty of a class, and even its semantics, depends on the other classes in the task. For instance, the characteristics that discriminate "dog" from "laptop" or "car" are relatively easy to uncover compared to those that discriminate "dog" from "cat" or "tiger". In other words, the difficulty of a task goes beyond the difficulty of its individual classes, and adaptive task sampling should consider the intricate relationships between different classes.
+
+In this work, we propose a class-pair based adaptive task sampling method for meta-learning with several appealing qualities. First, it determines the task selection distribution by computing the difficulty of all class-pairs in the task. As a result, it can capture the complex-structured relationships between classes in a multi-class few-shot classification problem. Second, since the cost of computing the task selection distribution for a $K$-way classification problem is $\binom{|\mathbb{C}_{tr}|}{K}$, i.e., $O(|\mathbb{C}_{tr}|^{K})$, where $|\mathbb{C}_{tr}|$ is the number of classes in the meta-training data, we further propose a greedy class-pair based adaptive task sampling method which only requires $O(K)$ time. Meanwhile, it can be formally established that the proposed greedy approach in fact samples from a distribution identical to that of the non-greedy version. Lastly, our method can be applied to any meta-learning algorithm that follows episodic training, and it works well with different feature backbones. In summary, our work makes the following contributions. (1) We propose a class-pair based adaptive task sampling approach for meta-learning methods, to improve the generalization performance on unseen tasks. (2) We further develop a greedy class-pair based approach that not only significantly reduces the complexity of the task distribution computation, but also guarantees the generation of a distribution identical to that of the non-greedy approach. (3) We study the impact of the adaptive task sampling method by integrating it with various meta-learning approaches and performing comprehensive experiments on the miniImageNet and CIFAR-FS few-shot datasets, which quantitatively demonstrates the superior performance of our method. (4) We also conduct an extensive investigation of different sampling strategies, including the class-based method, the easy class-pair based method and the uncertain class-pair based method. The results show that hard class-pair based sampling consistently leads to more accurate results.
+
+# 2 Related Work
+
+Meta-learning: The original idea of meta-learning, training a meta-model to learn a base model, has existed for at least 20 years [48, 35]. Recently, the meta-learning framework has been used to solve few-shot classification problems. One typical line of work comprises optimization-based methods. [38] uses an LSTM-based meta-learner to replace the SGD optimizer of the base model. MAML [12] and its variants [27, 4] aim to learn a good model initialization so that the model for a new task can be learned with a small number of samples and gradient update steps. Another category comprises metric-based methods, which learn a set of embedding functions such that, when represented in this space, images can easily be recognized using a non-parametric model such as a nearest-neighbor classifier [50, 44, 37]. All of these methods follow the uniform sampling scheme to generate tasks at each episode. Besides, [46] considers a heuristic sampling method, which uses memory to store all the failure classes from $k$ consecutive tasks, and then constructs a hard task from them. [49, 29] utilize pre-defined class structure information to construct tasks in both the meta-training and meta-testing phases, so that the experimental setting more closely resembles realistic scenarios. In contrast, our work, inspired by importance sampling in stochastic optimization, aims to adaptively update the task-generating distribution in the meta-training phase, which in turn improves the ability to adapt to novel classes with few training data in the meta-testing phase. We also present a theoretical analysis of the generalization bound to justify our approach.
+
+
+Fig. 1: The episodic training paradigm for meta-learning few-shot classification.
+
+Adaptive Sampling: Instance-based sampling is ubiquitous in stochastic optimization. Generally, it constantly reevaluates the relative importance of each instance during training. The most common paradigm is to calculate the importance of each instance based on the gradient norm [1], bound on the gradient norm [20], loss [31], approximate loss [21] or prediction probability [7]. One typical line of research work is to leverage adaptive sampling for fast convergence [55, 2]. Researchers also consider improving the generalization performance rather than speeding up training [30]. Specifically, [5] considers instances that increase difficulty. Hard example mining methods also prioritize challenging training examples [43, 28]. Some other researchers prioritize uncertain examples that are close to the model's decision boundary [7, 45]. In this work, we also evaluate easy sampling and uncertain sampling at the task level, but experimental results show that hard sampling performs better. There also exists work for sampling mini-batches instead of a single instance [11, 18]. [52, 53] consider sampling diverse mini-batches via the repulsive point process. Nonetheless, these methods are not designed for meta-learning and few-shot learning.
+
+# 3 Preliminaries
+
+In this section, we review the episodic training paradigm in meta-learning and the vanilla instance-based adaptive sampling method for SGD.
+
+# 3.1 Episodic Training
+
+In the meta-learning problem setting, the goal is to learn models that can learn new tasks from small amounts of data. Formally, we have a large meta-training dataset $\mathbb{D}_{tr}$ (typically containing a large number of classes) and a meta-test dataset $\mathbb{D}_{test}$ , whose respective category sets $\mathbb{C}_{tr} = \{1,\dots ,|\mathbb{C}_{tr}|\}$ and $\mathbb{C}_{test} = \{|\mathbb{C}_{tr}| + 1,\ldots ,|\mathbb{C}_{tr}| + |\mathbb{C}_{test}|\}$ are disjoint. We aim to learn a classification model on $\mathbb{D}_{tr}$ that can generalize to unseen categories $\mathbb{C}_{test}$ with one or few training examples per category.
+
+The success of existing meta-learning approaches relies on the episodic training paradigm [50], which mimics the few-shot regime faced at test time during training on $\mathbb{D}_{tr}$ . Particularly, meta-learning algorithms learn from a collection of $K$ -way- $M$ -shot classification tasks sampled from the amply labelled set $\mathbb{D}_{tr}$ and are evaluated in a similar way on $\mathbb{D}_{test}$ . In each episode of meta-training, we first sample $K$ classes $\mathbb{L}^K \sim \mathbb{C}_{tr}$ . Then, we sample $M$ and $N$ labelled images per class in $\mathbb{L}^K$ to construct the support set $\mathbb{S} = \{(s_m, y_m)_m\}$ and query set $\mathbb{Q} = \{(q_n, y_n)_n\}$ , respectively. The episodic training for few-shot learning is achieved by minimizing, for each episode, the loss of the prediction for each sample in the query set, given the support set. The model is parameterized by $\theta$ and the loss is the negative log-likelihood of the true class of each query sample: $\ell(\theta) = \underset{(S,Q)}{\mathbb{E}}[-\sum_{(q_n, y_n) \in Q} \log p_\theta(y_n | q_n, S)]$ , where $p_\theta(y_n | q_n, S)$ is the classification probability based on the support set. The model then back-propagates the gradient of the total loss $\nabla \ell(\theta)$ . Different meta-learning approaches differ in the manner in which this conditioning on the support set is realized. To better explain how it works, we show the framework in Figure 1.
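For concreteness, the uniform episode construction described above can be sketched as follows (an illustrative sketch, not the authors' code; `data_by_class`, a mapping from each class label to its list of images, is a hypothetical container):

```python
import random

def sample_episode(data_by_class, K=5, M=1, N=15):
    """Uniformly sample a K-way-M-shot episode: a support set with M images
    per class and a query set with N images per class."""
    classes = random.sample(sorted(data_by_class), K)  # uniform over classes
    support, query = [], []
    for c in classes:
        images = random.sample(data_by_class[c], M + N)
        support += [(x, c) for x in images[:M]]
        query += [(x, c) for x in images[M:]]
    return support, query
```

A meta-learner would then compute the query-set loss conditioned on the support set and back-propagate, as in the objective above.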
+
+# 3.2 Instance-based Adaptive Sampling for SGD
+
+Let $\mathbb{D} = \{(x_i, y_i)_i\}$ denote the training dataset. The probability of selecting each sample is equal at the initial stage (i.e., $p^0(i) = \frac{1}{|\mathbb{D}|}$ ). To emphasize difficult examples while applying SGD, we adaptively update the selection probability $p^{t+1}(i)$ for instance $i$ at iteration $t+1$ according to the current prediction probability $p(y_i|x_i)$ and the selection probability at the previous iteration $p^t(i)$ : $p^{t+1}(i) \propto (p^t(i))^{\tau}e^{\alpha(1-p(y_i|x_i))}$ , where the hyperparameter $\tau$ is a discounting factor and $\alpha$ scales the influence of the current prediction. This multiplicative update method has a close relation to maximum loss minimization [42] and AdaBoost [15], and can result in improved generalization performance, especially when only a few "rare" samples exist. Moreover, when the gradient update is weighted by the inverse sampling probability, we obtain an unbiased gradient estimate that improves convergence by reducing variance [55, 16].
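The multiplicative update above can be sketched in NumPy as follows (a minimal sketch; the function name and the values of $\tau$ and $\alpha$ are ours, chosen for illustration):

```python
import numpy as np

def update_instance_probs(p_prev, pred_prob_true, tau=0.5, alpha=1.0):
    """p^{t+1}(i) ∝ (p^t(i))^tau * exp(alpha * (1 - p(y_i|x_i))):
    discount the old selection probability, then boost instances the
    current model predicts with low confidence."""
    w = p_prev ** tau * np.exp(alpha * (1.0 - pred_prob_true))
    return w / w.sum()  # normalize to a valid distribution
```

Instances with a low prediction probability for their true class receive a larger selection probability at the next iteration.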
+
+# 4 Adaptive Task Sampling for Meta-Learning
+
+In this section, we first propose the class-based adaptive task sampling method which is a straightforward extension of the instance-based sampling. Then, we discuss its defect and present the class-pair based sampling method. Finally, we propose the greedy class-pair based sampling method, which significantly reduces the computation cost while still generating the identical task distribution as that in the non-greedy approach.
+
+Class-based Sampling. A major challenge of adaptive task sampling for meta-learning is the implicit definition of the task, which is randomly generated by sampling $K$ classes in each episode. Although direct task based sampling is infeasible, we can adaptively sample classes for each $K$ -way classification task. With this goal in mind, we propose a class-based sampling (c-sampling) approach that updates the class selection probability $p_C^{t + 1}(c)$ in each episode.
+
+Given $\mathbb{S}^t$ and $\mathbb{Q}^t$ at episode $t$ , we update the class selection probability for each class $c \in \mathbb{L}_K^t$ in the current episode in the following way,
+
+$$
+p_{C}^{t+1}(c) \propto \left(p^{t}(c)\right)^{\tau} \exp\!\left(\alpha\, \frac{\sum_{(q_n, y_n) \in \mathbb{Q}^{t}} \mathbb{I}[c \neq y_n]\, p(c \mid q_n, \mathbb{S}^{t}) + \mathbb{I}[c = y_n]\left(1 - p(c \mid q_n, \mathbb{S}^{t})\right)}{N K}\right). \tag{1}
+$$
+
+Note that we average the prediction probability of classifying each query sample $n$ into incorrect classes in $\mathbb{L}_K^t$ . Then we can sample $K$ classes without replacement to construct the category set $\mathbb{L}_K^{t + 1}$ for the next episode.
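A sketch of the c-sampling update in Eq. (1) (illustrative only; `query_probs[n, k]` stands for $p(\mathbb{L}_K^t[k] \mid q_n, \mathbb{S}^t)$, and the hyperparameter values are ours):

```python
import numpy as np

def update_class_probs(p_prev, episode_classes, query_labels, query_probs,
                       tau=0.5, alpha=1.0):
    """Raise the selection probability of classes the model confuses,
    per Eq. (1). query_probs has one row per query sample and one column
    per class in the current episode."""
    p_next = p_prev.copy()
    NK = query_probs.shape[0]  # N queries per class * K classes
    for k, c in enumerate(episode_classes):
        # average misclassification mass for class c over all queries:
        # p(c|q) when c is wrong, 1 - p(c|q) when c is the true class
        wrong = np.where(query_labels == c,
                         1.0 - query_probs[:, k],
                         query_probs[:, k]).sum() / NK
        p_next[c] = (p_prev[c] ** tau) * np.exp(alpha * wrong)
    return p_next / p_next.sum()
```

Only the classes of the current episode are updated, which mirrors the decoupled nature of c-sampling criticized below.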
+
+Despite its simplicity, this sampling approach suffers from an important limitation: it implicitly assumes that the difficulty of each class is independent of the others, and therefore updates the class selection probabilities in a decoupled way. Concretely, suppose we have two different tasks: discerning "corgi", "Akita" and "poodle", versus discerning "corgi", "car" and "people". Obviously, it is quite hard to tell "corgi" apart in the first task, while it is easy in the second one. This makes updating the class selection probability problematic, since class-based sampling is agnostic to the context of the task and can accidentally assign contradictory scores to the same class. Moreover, even if the class selection probability is updated correctly, it cannot ensure that difficult tasks are generated properly; assembling the most difficult classes does not necessarily lead to a difficult task.
+
+Class-Pair Based Sampling. To address the above issue, we further propose a class-pair based sampling (cp-sampling) approach that exploits the pairwise relationships between classes. This idea is commonly used in the multi-class classification that constructs binary classifiers to discriminate between each pair of classes [3], as two-class problems are much easier to solve. Recently, it has also been considered to extract the pairwise relationships between classes for task-dependent fast adaptation in few-shot learning [40]. In this work, we formulate the task selection probability by leveraging the Markov random field [10] over class pairs. Formally, the probability of choosing a category set $\mathbb{L}_K^{t + 1}$ at episode $t + 1$ is defined as:
+
+$$
+p_{CP}^{t+1}\left(\mathbb{L}_{K}^{t+1}\right) \propto \prod_{(i, j) \subset \mathbb{L}_{K}^{t+1}} C^{t}(i, j), \quad \text{s.t. } i, j \in \mathbb{C}_{tr} \tag{2}
+$$
+
+where $C^t (i,j)$ is a potential function over class pair $(i,j)$ at episode $t$ . Notice that the classes in $\mathbb{C}_{tr}$ form a complete and undirected graph. The category set $\mathbb{L}_K^{t + 1}$ that have a relatively high probability to be selected are those $K$ -cliques with large potentials. Similarly, we adaptively update the potential function $C^{t + 1}(i,j)$ according to
+
+$$
+C^{t+1}(i, j) \leftarrow \left(C^{t}(i, j)\right)^{\tau} e^{\alpha \bar{p}((i, j) \mid \mathbb{S}^{t}, \mathbb{Q}^{t})}, \quad i \neq j \tag{3}
+$$
+
+where $\bar{p}((i,j)|\mathbb{S}^t, \mathbb{Q}^t)$ denotes the average prediction probability that classifies query samples in class $j$ into its incorrect class $i$ or vice versa. Specifically, we define it as
+
+$$
+\bar{p}((i, j) \mid \mathbb{S}^{t}, \mathbb{Q}^{t}) = \frac{\sum_{(q_n, y_n = j) \in \mathbb{Q}^{t}} p(c = i \mid q_n, \mathbb{S}^{t})}{N} + \frac{\sum_{(q_n, y_n = i) \in \mathbb{Q}^{t}} p(c = j \mid q_n, \mathbb{S}^{t})}{N}. \tag{4}
+$$
+
+
+Fig. 2: A toy example to illustrate how greedy class-pair based sampling chooses 4-class category set $\mathbb{L}_4^{t + 1}$ from 5 classes. The left correlation matrix indicates the class-pair potentials $C^t$ and the right part denotes the state of each step in sequential sampling. The blue number on the right denotes the chosen class and the red circle highlights the highest unnormalized class selection probability. $\odot$ denotes the element-wise multiplication.
+
+Greedy Class-Pair Based Sampling. It is important to note that class-pair based sampling has the disadvantage that $\binom{K}{2} \cdot \binom{\left|\mathbb{C}_{tr}\right|}{K}$ multiplication operations need to be performed to calculate $p_{CP}^{t+1}(\mathbb{L}_K^{t+1})$ for all combinations of $K$ classes in the category set. To significantly reduce this complexity, we now design a greedy class-pair based sampling (gcp-sampling) method, which not only samples at cost $O(K)$ but also samples from a distribution identical to that in Eq. (2), owing to the independence of the potential function $C^t(i,j)$ over class pairs. In particular, we sequentially sample classes in $K-1$ steps based on the previous results. At episode $t$ , we first sample two classes based on the class-pair potential function $C^t(i,j)$ . Then we iteratively sample a new class based on the already sampled classes. Figure 2 gives an example to illustrate the process. Formally, the task selection probability is defined as
+
+$$
+p _ {G C P} ^ {t + 1} \left(\mathbb {L} _ {k + 1} ^ {t + 1}\right) \propto \left\{ \begin{array}{l l} C ^ {t} (i, j), & k = 1 \\ p \left(c \mid \mathbb {L} _ {k} ^ {t + 1}, C ^ {t}\right), & k > 1 \end{array} \right. \tag {5}
+$$
+
+where $p(c = i|\mathbb{L}_k^{t + 1},C^t)\propto \prod_{j\in \mathbb{L}_k^{t + 1}}C^t (i,j)$ . It considers the joint probability over class pairs between the candidate class $i$ and every already-sampled class $j$ in the category set $\mathbb{L}_k^{t + 1}$ . Compared to the distribution in Eq. (2), the greedy sampling approach in Eq. (5) has a different normalization constant in each step $k$ . However, the unnormalized joint probability over the class pairs of a specific category set is identical, so the distribution in Eq. (5) is exactly the same as that in Eq. (2), as we prove in Proposition 1.
+
+Proposition 1 The greedy class-pair based sampling strategy in Eq. (5) is identical to the class-pair based sampling in Eq. (2).
+
+Proof. We present a proof by induction. It is obvious that $p_{GCP}^{t+1}(\mathbb{L}_2^{t+1}) = p_{CP}^{t+1}(\mathbb{L}_2^{t+1})$ since $p_{GCP}^{t+1}(\mathbb{L}_2^{t+1}) \propto C^t(i,j)$ . Now let us consider a general case where we have
+
+Algorithm 1 gcp-sampling: Greedy Class-Pair based Sampling in K-Way-M-Shot
+Require: meta-training data $\mathbb{D}_{tr}$ , hyperparameters $\alpha ,\tau ,T$
+1: Randomly initialize meta model parameter $\theta$ . Initialize class-pair potentials $C$ by ones
+2: for $t = 1,\dots ,T$ do
+3: Initialize $\mathbb{L}_0^t$ by an empty set. Initialize $p(c|\mathbb{L}_0^t,C^{t - 1})$ by $\frac{1}{|\mathbb{C}_{tr}|}$
+4: Sample a class pair $(i,j)\propto C(i,j)$ ; add classes $i$ and $j$ to $\mathbb{L}_0^t$ to form $\mathbb{L}_2^t$
+5: for $k = 2,\ldots ,K - 1$ do
+6: Update $p(c = i|\mathbb{L}_k^t,C^{t - 1})\propto \prod_{j\in \mathbb{L}_k^t}C^{t - 1}(i,j)$
+7: Sample class $c$ based on $p(c|\mathbb{L}_k^t,C^{t - 1})$ ; add class $c$ to $\mathbb{L}_k^t$ to form $\mathbb{L}_{k + 1}^t$
+8: end for
+9: Construct the support set $\mathbb{S}^t$ and query set $\mathbb{Q}^t$ by sampling $M$ and $N$ images per class in the category set $\mathbb{L}_K^t$ , respectively
+10: Update meta model $\theta$ based on support set and query set
+11: Update class-pair potentials $C$ according to Eq. (3)
+12: end for
+13: return $\theta_T$
+
+previously sampled $k$ classes with $\mathbb{L}_k^{t + 1}$ and are about to sample the $(k + 1)$ -th class. Suppose we sample a new class $l$ to generate $\mathbb{L}_{k + 1}^{t + 1}$ , according to Eq. (5), we have
+
+$$
+\begin{aligned} p_{GCP}^{t+1}(\mathbb{L}_{k+1}^{t+1}) &= p_{GCP}^{t+1}(\mathbb{L}_{k}^{t+1})\, p(c = l \mid \mathbb{L}_{k}^{t+1}, C^{t}) \propto \prod_{(i, j) \subset \mathbb{L}_{k}^{t+1}} C^{t}(i, j) \prod_{j \in \mathbb{L}_{k}^{t+1}} C^{t}(l, j) \\ &= \prod_{(i, j) \subset \mathbb{L}_{k+1}^{t+1}} C^{t}(i, j) = p_{CP}^{t+1}\left(\mathbb{L}_{k+1}^{t+1}\right). \end{aligned} \tag{6}
+$$
+
+The pseudocode of the proposed gcp-sampling algorithm is given in Algorithm 1. Due to the space limitation, we leave the theoretical analysis of the proposed gcp-sampling method in terms of its generalization ability to the supplementary material.
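The class-selection steps of Algorithm 1 / Eq. (5) can be sketched as follows (a minimal sketch, not the authors' code; the function name and fixed seed are ours):

```python
import numpy as np

def gcp_sample_classes(C, K, seed=0):
    """Greedy class-pair sampling: draw the first two classes with
    probability ∝ C[i, j], then grow the category set one class at a
    time with probability ∝ ∏_{j in set} C[c, j]."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    # step 1: sample a class pair proportional to the pair potentials
    pair_w = np.triu(C, k=1).ravel()  # upper triangle: each pair once
    idx = rng.choice(n * n, p=pair_w / pair_w.sum())
    chosen = [int(idx // n), int(idx % n)]
    # steps 2..K-1: greedily add one class given those already chosen
    while len(chosen) < K:
        w = np.prod(C[:, chosen], axis=1)  # joint potential with the set
        w[chosen] = 0.0                    # sample without replacement
        c = rng.choice(n, p=w / w.sum())
        chosen.append(int(c))
    return chosen
```

With uniform potentials this reduces to uniform task sampling; as the potentials of confusable pairs grow via Eq. (3), harder category sets become more likely.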
+
+# 5 Experiments
+
+In this section, we evaluate the proposed adaptive task sampling method on two few-shot classification benchmarks: miniImageNet [50] and CIFAR-FS [6]. We first introduce the datasets and settings, and then present a comparison to state-of-the-art methods, followed by a detailed evaluation of the compatibility when integrating with different meta-learning algorithms and the efficacy of different sampling strategies. Finally, we demonstrate qualitative results to characterize the gcp-sampling.
+
+# 5.1 Datasets and Implementation Details
+
+Datasets. We conduct experiments to evaluate our method on two few-shot classification benchmarks. Firstly, miniImageNet [50] is widely used for few-shot learning, which is constructed based on the ImageNet dataset [39] and thus has high diversity and complexity. This dataset has 100 classes with 600 $84 \times 84$ images per class. These classes are divided into 64, 16 and 20 classes for meta-training, meta-validation and meta-test, respectively, as suggested earlier [38, 12, 46]. Secondly, CIFAR-FS is another recent few-shot image classification benchmark [6] constructed by randomly sampling from the CIFAR-100 dataset [23] using the same criteria as the miniImageNet, and has the same number of classes and samples. The limited resolution of $32 \times 32$ makes the task still difficult. We also use the $64 / 16 / 20$ divisions for consistency with previous studies [6, 26].
+
+Evaluation metric. We report the mean accuracy $(\%)$ of 1000 randomly generated episodes as well as the $95\%$ confidence intervals on the meta-test set. In every episode during meta-test, each class has 15 queries.
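The reported interval can be computed with the usual normal approximation (a sketch of the standard metric computation, not taken from the paper; the function name is ours):

```python
import numpy as np

def mean_and_ci95(accuracies):
    """Mean episode accuracy and 95% confidence half-width:
    1.96 * sample std / sqrt(number of episodes)."""
    a = np.asarray(accuracies, dtype=float)
    half_width = 1.96 * a.std(ddof=1) / np.sqrt(len(a))
    return a.mean(), half_width
```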
+
+Network Architectures. We conduct experiments with 2 different feature extractor architectures, Conv-4 and ResNet-12. Conv-4 is a shallow embedding function proposed by [50] and widely used [12, 4, 44, 36]. It is composed of 4 convolutional blocks, each of which comprises a 64-filter $3 \times 3$ convolution, batch normalization (BN) [19], a ReLU nonlinearity and a $2 \times 2$ max-pooling layer. We also adopt a deep backbone, ResNet-12 [17], which achieves significant improvements in recent works [33, 34, 37]. It consists of 4 residual blocks, each of which has three $3 \times 3$ convolutional layers and a $2 \times 2$ max-pooling layer. The number of filters starts from 64 and is doubled in every subsequent block. A final mean-pooling layer compresses the feature maps into a feature embedding. In our experiments, we integrate gcp-sampling with PN, MetaOptNet-RR and MetaOptNet-SVM on ResNet-12 to compare with the state of the art. We follow the settings of [26] and use SGD with Nesterov momentum of 0.9 and weight decay of 0.0005. Besides, we use Conv-4 to evaluate the compatibility when integrating with different meta-learning algorithms and the efficacy of different sampling strategies. We follow the settings of [8] and use the Adam [22] optimizer with an initial learning rate of 0.001.
+
+# 5.2 Results and Analysis
+
+Comparison with state-of-the-art. Tables 1 and 2 present the 5-way 1-shot and 5-way 5-shot results on the miniImageNet and CIFAR-FS datasets, respectively. Note that we report the highest accuracies, with the number of training iterations chosen by validation. For our approach, we integrate gcp-sampling with PN, MetaOptNet-RR and MetaOptNet-SVM, which are strong baselines. In all cases, our method surpasses the corresponding baselines by a meaningful margin. For example, PN with gcp-sampling outperforms PN with ResNet-12 by around 1.84 and 1.2 percentage points on miniImageNet and 1.89 and 1.0 percentage points on CIFAR-FS. It is worth noting that the adaptive task sampling method is orthogonal to
+
+Table 1: Average 5-way, 1-shot and 5-shot classification accuracies (\%) on the miniImageNet dataset. \* denotes the results from [26].
+
+| Methods | Backbone | 5-way-1-shot | 5-way-5-shot |
+| --- | --- | --- | --- |
+| Matching Network [50] | CONV-4 | 43.44 ± 0.77 | 55.31 ± 0.73 |
+| Relation Network [47] | CONV-4 | 50.44 ± 0.82 | 65.32 ± 0.70 |
+| PN [44] | CONV-4 | 49.42 ± 0.78 | 68.20 ± 0.66 |
+| MAML [12] | CONV-4 | 48.70 ± 1.84 | 63.11 ± 0.92 |
+| MAML++ [4] | CONV-4 | 52.15 ± 0.26 | 68.32 ± 0.44 |
+| MAML++, AS (ours) | CONV-4 | 52.34 ± 0.81 | 69.21 ± 0.68 |
+| Bilevel Programming [13] | ResNet-12 | 50.54 ± 0.85 | 64.53 ± 0.68 |
+| MetaGAN [54] | ResNet-12 | 52.71 ± 0.64 | 68.63 ± 0.67 |
+| SNAIL [33] | ResNet-12 | 55.71 ± 0.99 | 68.88 ± 0.92 |
+| adaResNet [34] | ResNet-12 | 56.88 ± 0.62 | 71.94 ± 0.57 |
+| TADAM [37] | ResNet-12 | 58.50 ± 0.30 | 76.70 ± 0.30 |
+| MTL [46] | ResNet-12 | 61.2 ± 1.8 | 75.5 ± 0.8 |
+| PN* [26] | ResNet-12 | 59.25 ± 0.64 | 75.60 ± 0.48 |
+| PN with gcp-sampling | ResNet-12 | 61.09 ± 0.66 | 76.80 ± 0.49 |
+| MetaOptNet-RR [26] | ResNet-12 | 61.41 ± 0.61 | 77.88 ± 0.46 |
+| MetaOptNet-RR with gcp-sampling | ResNet-12 | 63.02 ± 0.63 | 78.91 ± 0.46 |
+| MetaOptNet-SVM [26] | ResNet-12 | 62.64 ± 0.61 | 78.63 ± 0.46 |
+| MetaOptNet-SVM with gcp-sampling | ResNet-12 | 64.01 ± 0.61 | 79.78 ± 0.47 |
+
+Table 2: Average 5-way, 1-shot and 5-shot classification accuracies (\%) on the CIFAR-FS dataset. \* denotes the results from [26].
+
+| Methods | Backbone | 5-way-1-shot | 5-way-5-shot |
+| --- | --- | --- | --- |
+| Relation Network [47] | CONV-4 | 55.0 ± 1.0 | 69.3 ± 0.8 |
+| PN* [44] | CONV-4 | 55.5 ± 0.7 | 72.0 ± 0.6 |
+| MAML* [12] | CONV-4 | 58.9 ± 1.9 | 71.5 ± 1.0 |
+| GNN [41] | CONV-4 | 61.9 | 75.3 |
+| R2D2 [26] | CONV-4 | 65.3 ± 0.2 | 79.4 ± 0.1 |
+| PN* [26] | ResNet-12 | 72.2 ± 0.7 | 84.2 ± 0.5 |
+| PN with gcp-sampling | ResNet-12 | 74.1 ± 0.7 | 84.5 ± 0.5 |
+| MetaOptNet-RR [26] | ResNet-12 | 72.6 ± 0.7 | 84.3 ± 0.5 |
+| MetaOptNet-RR with gcp-sampling | ResNet-12 | 74.2 ± 0.7 | 85.1 ± 0.4 |
+| MetaOptNet-SVM [26] | ResNet-12 | 72.0 ± 0.7 | 84.2 ± 0.5 |
+| MetaOptNet-SVM with gcp-sampling | ResNet-12 | 73.9 ± 0.7 | 85.3 ± 0.5 |
+
+the meta-learning algorithm. Moreover, even with a deep feature backbone, our approach still preserves the performance gain.
+
+Compatibility with different meta-learning algorithms. Next, we study the impact of gcp-sampling when integrated with different types of meta-learning algorithms. We consider gradient-based meta-learning methods (MAML, Reptile and MAML++) and metric-based meta-learning methods (PN and Matching Network). The
+
+Table 3: Average 5-way classification accuracies (\%) on miniImageNet and CIFAR-FS. All methods use shallow feature backbone (Conv-4). $\dagger$ denotes the local replication results. We run PN without oversampling the number of ways.
+
+| Model | miniImageNet 1-shot | miniImageNet 5-shot | CIFAR-FS 1-shot | CIFAR-FS 5-shot |
| --- | --- | --- | --- | --- |
| Matching Network† | 48.26 ± 0.76 | 62.27 ± 0.71 | 53.14 ± 0.85 | 68.16 ± 0.76 |
| Matching Network with gcp-sampling | 49.61 ± 0.77 | 63.23 ± 0.75 | 54.72 ± 0.87 | 69.28 ± 0.74 |
| PN† | 44.15 ± 0.76 | 63.89 ± 0.71 | 54.87 ± 0.72 | 71.64 ± 0.58 |
| PN with gcp-sampling | 47.13 ± 0.81 | 64.75 ± 0.72 | 56.12 ± 0.81 | 72.77 ± 0.64 |
| Reptile† | 46.12 ± 0.80 | 63.56 ± 0.70 | 55.86 ± 1.00 | 71.08 ± 0.74 |
| Reptile with gcp-sampling | 47.60 ± 0.80 | 64.56 ± 0.69 | 57.25 ± 0.99 | 71.69 ± 0.71 |
| MAML† | 48.25 ± 0.62 | 64.09 ± 0.70 | 56.93 ± 0.99 | 72.10 ± 0.74 |
| MAML with gcp-sampling | 49.65 ± 0.85 | 65.37 ± 0.70 | 57.62 ± 0.97 | 72.51 ± 0.72 |
| MAML++† | 50.60 ± 0.82 | 68.24 ± 0.68 | 58.87 ± 0.97 | 73.86 ± 0.76 |
| MAML++ with gcp-sampling | 52.34 ± 0.81 | 69.21 ± 0.68 | 60.14 ± 0.97 | 73.98 ± 0.74 |
+
+Table 4: Average 5-way classification accuracies (%) on miniImageNet and CIFAR-FS. Using MAML++ on a Conv-4 backbone, we compare different sampling methods: random sampling, c-sampling with hard class, and gcp-sampling with hard/uncertain/easy class.
+
+| Sampling Strategy | miniImageNet 1-shot | miniImageNet 5-shot | CIFAR-FS 1-shot | CIFAR-FS 5-shot |
| --- | --- | --- | --- | --- |
| random sampling | 50.60 ± 0.82 | 68.24 ± 0.68 | 58.87 ± 0.97 | 73.36 ± 0.76 |
| c-sampling with hard class | 51.43 ± 0.75 | 68.74 ± 0.67 | 58.61 ± 0.92 | 73.98 ± 0.72 |
| gcp-sampling with easy class | 50.88 ± 0.88 | 68.22 ± 0.72 | 58.73 ± 1.14 | 73.41 ± 0.76 |
| gcp-sampling with uncertain class | 51.73 ± 0.87 | 69.01 ± 0.72 | 59.43 ± 1.02 | 73.84 ± 0.82 |
| gcp-sampling with hard class | 52.34 ± 0.81 | 69.21 ± 0.68 | 60.14 ± 0.97 | 74.58 ± 0.74 |
+
+results in Table 3 demonstrate that using gcp-sampling consistently improves few-shot classification performance across meta-learning methods. Moreover, the improvement is more pronounced for 1-shot than for 5-shot classification.
+
+Fig. 3: Impact of hyperparameters $\alpha$ and $\tau$. First row (a-d): we fix the discounting factor $\tau = 0.5$ and tune the updating factor $\alpha$; second row (e-g): we fix $\alpha = 1$ and tune $\tau$.
+
+Efficacy of different adaptive task sampling strategies. The literature contains contradictory recommendations for adaptive sampling, each effective in different scenarios [7]. Preferring easier samples can help on challenging problems that contain noise or outliers, while the opposite strategy, hard sample mining, can improve performance because hard samples are more likely to belong to minority classes. We therefore explore different sampling strategies for meta-learning for few-shot classification. With the hard-class potential defined in Eq. (4), the corresponding potentials for the easy and uncertain classes are $1 - \bar{p}(i,j)$ and $(1 - \bar{p}(i,j))\,\bar{p}(i,j)$, respectively. We report the results in Table 4. We observe that gcp-sampling with the hard or uncertain class outperforms random sampling, although the uncertain variant offers a smaller improvement. We also compare gcp-sampling with c-sampling, and find that c-sampling achieves performance similar to random sampling, which verifies the efficacy of using class pairs to represent task difficulty.
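To make the comparison concrete, the three weighting schemes compared in Table 4 can be sketched as follows. This is a minimal sketch: `p_bar` stands for the averaged class-pair score $\bar{p}(i,j)$, and the hard-class case shown here is only a plausible reading of Eq. (4), not its exact form.

```python
import numpy as np

def pair_weights(p_bar: np.ndarray, strategy: str) -> np.ndarray:
    """Turn averaged class-pair scores p_bar(i, j) into a normalized
    sampling distribution for one of the three strategies in Table 4.
    Illustrative only; the paper's Eq. (4) defines the hard-class case."""
    if strategy == "hard":
        w = p_bar.copy()                # prefer confusable class pairs
    elif strategy == "easy":
        w = 1.0 - p_bar                 # prefer well-separated class pairs
    elif strategy == "uncertain":
        w = p_bar * (1.0 - p_bar)       # peaks where p_bar is near 0.5
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    np.fill_diagonal(w, 0.0)            # a class is never paired with itself
    return w / w.sum()
```

Note that the uncertain weighting is maximized at $\bar{p}(i,j) = 0.5$, which matches its intended role of favoring pairs the model is least decided about.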
+
+Impact of Hyperparameters $\alpha$ and $\tau$. In the proposed gcp-sampling, the hyperparameter $\alpha$ controls the aggressiveness of the update, while the hyperparameter $\tau$ controls the degree to which past updates are forgotten. Here we adopt PN with a ResNet-12 backbone and report the effect of $\alpha$ and $\tau$ on test performance in Figure 3.
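One multiplicative update rule consistent with this description is sketched below. The exponential-decay form is our assumption for illustration only; the paper's own update equation is the authoritative definition of how $\alpha$ and $\tau$ enter.

```python
import numpy as np

def update_potential(C: np.ndarray, p_bar: np.ndarray,
                     alpha: float = 1.0, tau: float = 0.5) -> np.ndarray:
    """Sketch of a class-pair potential update. tau in [0, 1] discounts
    past potentials (tau = 0 forgets history entirely), while alpha
    scales how aggressively the fresh difficulty signal p_bar moves the
    potential. Assumed form, not the paper's exact equation."""
    return (C ** tau) * np.exp(alpha * p_bar)
```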
+
+Table 5: Time cost comparison between random sampling and gcp-sampling. All the experiments are conducted with PN on the CIFAR-FS dataset.
+
+| Setting | random sampling | gcp-sampling | factor |
| --- | --- | --- | --- |
| 5-way-1-shot, Conv-4 | 235.4 | 251.8 | 1.070 |
| 5-way-1-shot, ResNet-12 | 531.2 | 554.6 | 1.044 |
| 5-way-5-shot, Conv-4 | 342.2 | 367.3 | 1.073 |
| 5-way-10-shot, Conv-4 | 471.4 | 491.0 | 1.042 |
| 5-way-15-shot, Conv-4 | 617.2 | 634.6 | 1.028 |
| 10-way-1-shot, Conv-4 | 411.3 | 451.7 | 1.098 |
| 15-way-1-shot, Conv-4 | 624.9 | 723.5 | 1.158 |
| 20-way-1-shot, Conv-4 | 816.8 | 992.5 | 1.215 |
+
+Time Cost Analysis. Table 5 compares the time cost of random sampling and gcp-sampling. We adopt PN on the CIFAR-FS dataset and report the average training time per epoch, which includes the task sampling, forward and backward propagation phases. We find that the time taken by gcp-sampling is comparable to that of random sampling: training time is dominated by the forward and backward passes, while the cost of task generation and of updating the class-pair potentials is relatively small. Besides, using a deeper backbone significantly increases the time cost but reduces the ratio between gcp-sampling and random sampling, since the backbone only affects the forward and backward passes. Finally, increasing the number of ways increases the time cost, while increasing the number of shots does not, because the complexity of gcp-sampling scales linearly with the number of ways.
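The greedy construction behind this linear scaling can be sketched as follows. This is illustrative NumPy code: `C` is the class-pair potential matrix, and the product-of-potentials scoring is an assumption based on the method's description, not a verbatim transcription of the algorithm.

```python
import numpy as np

def gcp_sample(C: np.ndarray, n_way: int, rng=None) -> list:
    """Greedy class-pair sampling sketch: draw the first class at random,
    then grow the task one class at a time, preferring classes whose
    pairwise potentials with the classes picked so far are large. Each
    step scores every remaining class once, so the cost of building an
    n_way task grows linearly with n_way."""
    rng = rng or np.random.default_rng()
    n = C.shape[0]
    chosen = [int(rng.integers(n))]
    while len(chosen) < n_way:
        scores = np.prod(C[:, chosen], axis=1)   # affinity to current task
        scores[chosen] = 0.0                     # never repeat a class
        probs = scores / scores.sum()
        chosen.append(int(rng.choice(n, p=probs)))
    return chosen
```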
+
+Visual analysis of adaptive task sampling. To qualitatively characterize adaptive task sampling, we visualize the class prototypes generated during training of PN with gcp-sampling and with random sampling. We use t-SNE [32] to project the prototypes into two dimensions while preserving the cosine similarity between them. As shown in Figure 4, the classes sampled by random sampling form better-separated clusters than those sampled by gcp-sampling. This is because gcp-sampling tends to sample classes with highly overlapping embeddings, which are much more difficult for the meta-learner to distinguish.
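The dimension reduction above amounts to feeding pairwise cosine distances between prototypes to t-SNE. A minimal sketch of the distance computation is below; handing the result to scikit-learn's `TSNE(metric="precomputed", init="random")` is our assumed interface, not a detail stated in the paper.

```python
import numpy as np

def cosine_distance_matrix(prototypes: np.ndarray) -> np.ndarray:
    """Pairwise cosine distances between class prototypes (rows),
    suitable for t-SNE with metric='precomputed'. Sketch of the
    preprocessing behind Fig. 4."""
    Z = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    D = 1.0 - Z @ Z.T
    return np.clip(D, 0.0, None)   # clamp tiny negatives from rounding
```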
+
+
+(a) random sampling
+
+
+(b) gcp-sampling
+Fig. 4: Feature embedding of the classes sampled by (a) random sampling and (b) gcp-sampling. The dimension reduction is performed on all 64 training classes of CIFAR-FS, while only the 5 selected classes are shown in each sub-figure for clarity.
+
+
+Fig. 5: Correlation matrix w.r.t. class-pair potentials. Each element indicates a class-pair potential: the higher the weight (i.e., the darker the color), the higher the probability that this two-class combination is sampled. Green and red denote the classes sampled by random sampling and gcp-sampling, respectively.
+
+
+Fig. 6: Sample images from classes by (a) random sampling and (b) gcp-sampling.
+
+We also visualize the class-pair potentials constructed by gcp-sampling in Figure 5. We show 16 classes of CIFAR-FS, where green and red denote the classes sampled by random sampling and gcp-sampling, respectively. The classes sampled by random sampling are often easy to distinguish, which leads to inefficient training, whereas gcp-sampling tends to sample classes that, when combined with other classes, yield more difficult tasks. We also randomly select some sampled images from each class for inspection. As shown in Figure 6, the classes sampled by random sampling vary greatly (e.g., unique shapes or colors) and are easy to recognize, while the classes sampled by gcp-sampling are visually confusing (e.g., small animals or insects in the wild) and much more difficult to distinguish.
+
+# 6 Conclusion
+
+In this paper, we presented an adaptive task sampling method for meta-learning. Our results demonstrated that in meta-learning it is essential for the sampling process to be dependent on tasks, and the proposed method naturally models and exploits this dependence. We showed that the greedy class-pair based sampling method, integrated with PN, MetaOptNet-RR or MetaOptNet-SVM, achieves competitive results. Furthermore, we demonstrated consistent improvements when integrating the proposed sampling method with different meta-learning methods. Finally, we explored and evaluated different sampling strategies for gcp-sampling, among which the hard class strategy consistently leads to more accurate results.
+
+# Acknowledgment
+
+This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
+
+# References
+
+1. Alain, G., Lamb, A., Sankar, C., Courville, A., Bengio, Y.: Variance reduction in sgd by distributed importance sampling. arXiv preprint arXiv:1511.06481 (2015)
+2. Allen-Zhu, Z., Qu, Z., Richtárik, P., Yuan, Y.: Even faster accelerated coordinate descent using non-uniform sampling. In: International Conference on Machine Learning. pp. 1110-1119 (2016)
+3. Aly, M.: Survey on multiclass classification methods. Neural Netw 19, 1-9 (2005)
+4. Antoniou, A., Edwards, H., Storkey, A.: How to train your maml. arXiv preprint arXiv:1810.09502 (2018)
+5. Bengio, Y., Louradour, J., Collobert, R., Weston, J.: Curriculum learning. In: Proceedings of the 26th annual international conference on machine learning. pp. 41-48. ACM (2009)
+6. Bertinetto, L., Henriques, J.F., Torr, P.H., Vedaldi, A.: Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136 (2018)
+7. Chang, H.S., Learned-Miller, E., McCallum, A.: Active bias: Training more accurate neural networks by emphasizing high variance samples. In: Advances in Neural Information Processing Systems. pp. 1002-1012 (2017)
+8. Chen, W., Liu, Y., Kira, Z., Wang, Y.F., Huang, J.: A closer look at few-shot classification. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 (2019), https://openreview.net/forum?id=HkxLXnAcFQ
+9. Cho, K., Van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
+10. Cross, G.R., Jain, A.K.: Markov random field texture models. IEEE Transactions on Pattern Analysis & Machine Intelligence PAMI-5(1), 25-39 (1983)
+11. Csiba, D., Richtárik, P.: Importance sampling for minibatches. The Journal of Machine Learning Research 19(1), 962-982 (2018)
+12. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. pp. 1126-1135. JMLR.org (2017)
+13. Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., Pontil, M.: Bilevel programming for hyperparameter optimization and meta-learning. In: International Conference on Machine Learning. pp. 1563-1572 (2018)
+14. Freund, Y., Schapire, R.: A short introduction to boosting. Journal-Japanese Society For Artificial Intelligence 14(771-780), 1612 (1999)
+15. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences 55(1), 119-139 (1997)
+16. Gopal, S.: Adaptive sampling for sgd by exploiting side information. In: International Conference on Machine Learning. pp. 364-372 (2016)
+17. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
+18. Horváth, S., Richtárik, P.: Nonconvex variance reduced optimization with arbitrary sampling. arXiv preprint arXiv:1809.04146 (2018)
+19. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning. pp. 448-456 (2015)
+
+20. Katharopoulos, A., Fleuret, F.: Biased importance sampling for deep neural network training. arXiv preprint arXiv:1706.00043 (2017)
+21. Katharopoulos, A., Fleuret, F.: Not all samples are created equal: Deep learning with importance sampling. arXiv preprint arXiv:1803.00942 (2018)
+22. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+23. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
+24. Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science 350(6266), 1332-1338 (2015)
+25. Landau, B., Smith, L.B., Jones, S.S.: The importance of shape in early lexical learning. Cognitive development 3(3), 299-321 (1988)
+26. Lee, K., Maji, S., Ravichandran, A., Soatto, S.: Meta-learning with differentiable convex optimization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10657-10665 (2019)
+27. Li, Z., Zhou, F., Chen, F., Li, H.: Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835 (2017)
+28. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollar, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980-2988 (2017)
+29. Liu, L., Zhou, T., Long, G., Jiang, J., Zhang, C.: Learning to propagate for graph meta-learning. arXiv preprint arXiv:1909.05024 (2019)
+30. London, B.: A pac-bayesian analysis of randomized learning with application to stochastic gradient descent. In: Advances in Neural Information Processing Systems. pp. 2931-2940 (2017)
+31. Loshchilov, I., Hutter, F.: Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343 (2015)
+32. Maaten, L.v.d., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(Nov), 2579-2605 (2008)
+33. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. In: ICLR (2017)
+34. Munkhdalai, T., Yuan, X., Mehri, S., Trischler, A.: Rapid adaptation with conditionally shifted neurons. In: International Conference on Machine Learning. pp. 3661-3670 (2018)
+35. Naik, D.K., Mammone, R.J.: Meta-neural networks that learn by learning. In: [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. vol. 1, pp. 437-442. IEEE (1992)
+36. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018)
+37. Oreshkin, B., López, P.R., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. In: Advances in Neural Information Processing Systems. pp. 721-731 (2018)
+38. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2016)
+39. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International journal of computer vision 115(3), 211-252 (2015)
+40. Rusu, A.A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., Hadsell, R.: Meta-learning with latent embedding optimization. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 (2019), https://openreview.net/forum?id=BJgklhAcK7
+
+41. Satorras, V.G., Bruna, J.: Few-shot learning with graph neural networks. In: ICLR (2018)
+42. Shalev-Shwartz, S., Wexler, Y.: Minimizing the maximal loss: How and why. In: ICML. pp. 793-801 (2016)
+43. Shrivastava, A., Gupta, A., Girshick, R.: Training region-based object detectors with online hard example mining. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 761-769 (2016)
+44. Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: Advances in Neural Information Processing Systems. pp. 4077-4087 (2017)
+45. Song, H., Kim, S., Kim, M., Lee, J.G.: Ada-boundary: Accelerating the dnn training via adaptive boundary batch selection (2018)
+46. Sun, Q., Liu, Y., Chua, T.S., Schiele, B.: Meta-transfer learning for few-shot learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 403-412 (2019)
+47. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1199-1208 (2018)
+48. Thrun, S., Pratt, L.: Learning to learn: Introduction and overview. In: Learning to learn, pp. 3-17. Springer (1998)
+49. Triantafillou, E., Zhu, T., Dumoulin, V., Lamblin, P., Xu, K., Goroshin, R., Gelada, C., Swersky, K., Manzagol, P.A., Larochelle, H.: Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096 (2019)
+50. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Advances in neural information processing systems. pp. 3630-3638 (2016)
+51. Ze, H., Senior, A., Schuster, M.: Statistical parametric speech synthesis using deep neural networks. In: 2013 IEEE international conference on acoustics, speech and signal processing. pp. 7962-7966. IEEE (2013)
+52. Zhang, C., Kjellstrom, H., Mandt, S.: Determinantal point processes for mini-batch diversification. arXiv preprint arXiv:1705.00607 (2017)
+53. Zhang, C., Öztireli, C., Mandt, S., Salvi, G.: Active mini-batch sampling using repulsive point processes. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 5741-5748 (2019)
+54. Zhang, R., Che, T., Ghahramani, Z., Bengio, Y., Song, Y.: Metagan: An adversarial approach to few-shot learning. In: Advances in Neural Information Processing Systems. pp. 2365-2374 (2018)
+55. Zhao, P., Zhang, T.: Stochastic optimization with importance sampling for regularized loss minimization. In: international conference on machine learning. pp. 1-9 (2015)
\ No newline at end of file
diff --git a/adaptivetasksamplingformetalearning/images.zip b/adaptivetasksamplingformetalearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c35a1d29815093af490baf8abee23fabb41baafd
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc61ca1a658dc1e418ee88c8f11e669869ab518e44b8420008aea48390331329
+size 675597
diff --git a/adaptivetasksamplingformetalearning/layout.json b/adaptivetasksamplingformetalearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..74d304d1ddbc3ac8649598f054a4525884c0e103
--- /dev/null
+++ b/adaptivetasksamplingformetalearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d514d251903c2c6d8595fed54f4b3714c0b404291e4c65e8974d62e5138751d
+size 426450
diff --git a/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_content_list.json b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ce8aa00ec3bdf117aabfd13d95f886566f0a8e19
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c1006c30f14f57dff1321e3dbb0f4e068dfbb8ee6ceec867074a191d2df24bf
+size 78410
diff --git a/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_model.json b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..20b5a3cd33f3bbcd0511d358e341bf23de928620
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef3b5824f2ee6402c0786b1d7a04054d44bcf435c46472a168549512320b887a
+size 95666
diff --git a/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_origin.pdf b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..29852b6188a714de2464874623a66b88621ec215
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/678244b4-08e8-4073-ad03-4edf81a7dbad_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66e1186ef00529761493bfe2207c6577ba359333d45656d5ea6ca668e30df84e
+size 6400966
diff --git a/adaptivetextrecognitionthroughvisualmatching/full.md b/adaptivetextrecognitionthroughvisualmatching/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd1b83d14f8b8c6757bf51c77135490de0859d74
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/full.md
@@ -0,0 +1,290 @@
+# Adaptive Text Recognition through Visual Matching
+
+Chuhan Zhang$^{1}$, Ankush Gupta$^{2}$, and Andrew Zisserman$^{1}$
+
+$^{1}$ Visual Geometry Group, Department of Engineering Science, University of Oxford
+
+{czhang,az}@robots.ox.ac.uk
+
+$^{2}$ DeepMind, London
+
+ankushgupta@google.com
+
+Abstract. This work addresses the problems of generalization and flexibility for text recognition in documents. We introduce a new model that exploits the repetitive nature of characters in languages, and decouples the visual decoding and linguistic modelling stages through intermediate representations in the form of similarity maps. By doing this, we turn text recognition into a visual matching problem, thereby achieving generalization in appearance and flexibility in classes.
+
+We evaluate the model on both synthetic and real datasets across different languages and alphabets, and show that it can handle challenges that traditional architectures are unable to solve without expensive retraining, including: (i) it can change the number of classes simply by changing the exemplars; and (ii) it can generalize to novel languages and characters (not in the training data) simply by providing a new glyph exemplar set. In essence, it is able to carry out one-shot sequence recognition. We also demonstrate that the model can generalize to unseen fonts without requiring new exemplars from them.
+
+Code, data, and model checkpoints are available at: http://www.robots.ox.ac.uk/~vgg/research/FontAdaptor20/.
+
+Keywords: text recognition, sequence recognition, similarity maps
+
+# 1 Introduction
+
+Our objective in this work is generalization and flexibility in text recognition. Modern text recognition methods [2, 7, 23, 32] achieve excellent performance in many cases, but generalization to unseen data, i.e., novel fonts and new languages, either requires large amounts of data for primary training or expensive fine-tuning for each new case.
+
+The text recognition problem is to map an image of a line of text $\mathbf{x}$ into the corresponding sequence of characters $\mathbf{y} = (y_{1}, y_{2}, \ldots, y_{k})$ , where $k$ is the length of the string and $y_{i} \in \mathcal{A}$ are characters in alphabet $\mathcal{A}$ (e.g., $\{\mathbf{a}, \mathbf{b}, \ldots, \mathbf{z}, <\text{space}> \})$ .
+
+
+Fig. 1: Visual matching for text recognition. Current text recognition models learn discriminative features specific to character shapes (glyphs) from a predefined (fixed) alphabet. We train our model instead to establish visual similarity between given character glyphs (top) and the text-line image to be recognized (left). This makes the model highly adaptable to unseen glyphs, new alphabets/languages, and extensible to novel character classes, e.g., English $\rightarrow$ Greek, without further training. Brighter colors correspond to higher visual similarity.
+
+Current deep learning based methods [7,23,32] cast this in the encoder-decoder framework [8, 37], where first the text-line image is encoded through a visual ConvNet [22], followed by a recurrent neural network decoder, with alignment between the visual features and text achieved either through attention [3] or Connectionist Temporal Classification (CTC) [13].
+
+Impediments to generalization. The conventional methods for text recognition train the visual encoder and the sequence decoder modules in an end-to-end manner. While this is desirable for optimal co-adaptation, it induces monolithic representations which confound visual and linguistic functions. Consequently, these methods suffer from the following limitations: (1) Discriminative recognition models specialize to fonts and textures in the training set, hence generalize poorly to novel visual styles. (2) The decoder discriminates over a fixed alphabet/number of characters. (3) The encoder and decoder are tied to each other, hence are not inter-operable across encoders for new visual styles or decoders for new languages. Therefore, current text recognition methods generalize poorly and require re-initialization or fine-tuning for new alphabets and languages. Further, fine-tuning typically requires new training data for the target domain and does not overcome these inherent limitations.
+
+Recognition by matching. Our method is based on a key insight: text is a sequence of repetitions of a finite number of discrete entities. The repeated entities are characters in a text string, and glyphs, i.e., visual representations of characters/symbols, in a text-line image. We re-formulate the text recognition problem as one of visual matching. We assume access to glyph exemplars (i.e., cropped images of characters), and task the visual encoder to localize these repeated glyphs in the given text-line image. The output of the visual encoder is a similarity map which encodes the visual similarity of each spatial location in the text-line to each glyph in the alphabet as shown in Figure 1. The decoder ingests this similarity map to infer the most probable string. Figure 2 summarizes the proposed method.
+
+Overcoming limitations. The proposed model overcomes the above mentioned limitations as follows: (1) Training the encoder for visual matching relieves it from learning specific visual styles (fonts, colors etc.) from the training data, improving generalization over novel visual styles. (2) The similarity map is agnostic to the number of different glyphs, hence the model generalizes to novel alphabets (different number of characters). (3) The similarity map is also agnostic to visual styles, and acts as an interpretable interface between the visual encoder and the decoder, thereby disentangling the two.
+
+Contributions. Our main contributions are threefold. First, we propose a novel network design for text recognition aimed at generalization. We exploit the repetition of glyphs in language, and build this similarity between units into our architecture. The model is described in Sections 3 and 4. Second, we show that the model outperforms state-of-the-art methods in recognizing novel fonts unseen during training (Section 5). Third, the model can be applied to novel languages without expensive fine-tuning at test time; it is only necessary to supply glyph exemplars for the new font set. These include languages/alphabets with a different number of characters, and novel styles, e.g., characters with accents or historical characters such as the long s (also in Section 5).
+
+Although we demonstrate our model for document OCR where a consistent visual style of glyphs spans the entire document, the method is applicable to scene-text/text-in-the-wild (e.g., SVT [41], ICDAR [18,19] datasets) where each instance has a unique visual style (results in supplementary material).
+
+# 2 Related Work
+
+Few-shot recognition. Adapting model behavior based on class exemplars has been explored for few-shot object recognition. Current popular few-shot classification methods, e.g., Prototypical Nets [34], Matching Nets [40], Relation Nets [36], and MAML [11], have been applied only to recognition of single instances. Our work addresses the unique challenges associated with one-shot classification of multiple instances in sequences. To the best of our knowledge this is the first work to address one-shot sequence recognition. We discuss these challenges and the proposed architectural innovations in Section 3.4. A relevant work is from Cao et al. [5] which tackles few-shot video classification, but similar to few-shot object recognition methods, they classify the whole video as a single instance.
+
+Text recognition. Recognizing text in images is a classic problem in pattern recognition. Early successful applications were in reading handwritten documents [4, 22], and document optical character recognition (OCR) [33]. The OCR industry standard—Tesseract [33]—employs specialized training data for each supported language/alphabet.3 Our model enables rapid adaptation to
+
+
+Fig. 2: Visual matching for text recognition. We cast the problem of text recognition as one of visual matching of glyph exemplars in the given text-line image. The visual encoder $\varPhi$ embeds the glyph-line $\pmb{g}$ and text-line $\pmb{x}$ images and produces a similarity map $\mathcal{S}$ , which scores the similarity of each glyph against each position along the text-line. Then, ambiguities in (potentially) imperfect visual matching are resolved to produce the enhanced similarity map $S^{*}$ . Finally, similarity scores are aggregated to output class probabilities $\mathcal{P}$ using the ground-truth glyph width contained in $\mathcal{M}$ .
+
+novel visual styles and alphabets and does not require such expensive fine-tuning/specialization. More recently, interest has been focussed towards text in natural images. Current methods either directly classify word-level images [16], or take an encoder-decoder approach [8, 37]. The text-image is encoded through a ConvNet, followed by bidirectional-LSTMs for context aggregation. The image features are then aligned with string labels either using Connectionist Temporal Classification (CTC) [13,15,30,35] or through attention [3,6,7,23,31]. Recognizing irregularly shaped text has garnered recent interest which has seen a resurgence of dense character-based segmentation and classification methods [28,42]. Irregular text is rectified before feature extraction either using geometric transformations [24,31,32,44] or by re-generating the text image in canonical fonts and colors [43]. Recently, Baek et al. [2] present a thorough evaluation of text recognition methods, unifying them in a four-stage framework—input transformation, feature extraction, sequence modeling, and string prediction.
+
+# 3 Model Architecture
+
+Our model recognizes a given text-line image by localizing glyph exemplars in it through visual matching. It takes both the text-line image and an alphabet image containing a set of exemplars as input, and predicts a sequence of probabilities
+
+over $N$ classes as output, where $N$ is equal to the number of exemplars given in the alphabet image. For inference, a glyph-line image is assembled from the individual character glyphs of a reference font simply by concatenating them side-by-side, and text-lines in that font can then be read.
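The glyph-line assembly described above is just width-wise concatenation of character crops; a minimal sketch is below, under the assumption that glyph crops share a common height, as they would when rendered from one reference font.

```python
import numpy as np

def make_glyph_line(glyphs: list) -> np.ndarray:
    """Assemble a glyph-line image by concatenating per-character glyph
    crops side by side. Each glyph is an H x W_i x C array; widths may
    differ but heights must match, mirroring the inference-time setup."""
    heights = {g.shape[0] for g in glyphs}
    assert len(heights) == 1, "all glyph crops must share the same height"
    return np.concatenate(glyphs, axis=1)
```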
+
+The model has two main components: (1) a visual similarity encoder (Section 3.1), which outputs a similarity map encoding the similarity of each glyph at each position along the text-line image, and (2) an alphabet agnostic decoder (Section 3.2), which ingests this similarity map to infer the most probable string. In Section 3.3 we give details of the training objective. Figure 2 gives a concise schematic of the model.
+
+# 3.1 Visual Similarity Encoder
+
+The visual similarity encoder is provided with a set of glyphs for the target alphabet, and tasked to localize these glyphs in the input text-line image to be recognized. It first embeds the text-line and glyphs using a shared visual encoder $\varPhi$ and outputs a similarity map $S$ which computes the visual similarity between all locations in the text-line against all locations in every glyph in the alphabet.
+
+Mathematically, let $\pmb{x} \in \mathbb{R}^{H \times W \times C}$ be the text-line image, with height $H$, width $W$ and $C$ channels. Let the glyphs be $\{g_i\}_{i=1}^{|\mathcal{A}|}$, $g_i \in \mathbb{R}^{H \times W_i \times C}$, where $\mathcal{A}$ is the alphabet and $W_i$ is the width of the $i^{th}$ glyph. The glyphs are stacked along the width to form a glyph-line image $\pmb{g} \in \mathbb{R}^{H \times W_g \times C}$. Embeddings are obtained using the visual encoder $\varPhi$ for both the text-line, $\varPhi(\pmb{x}) \in \mathbb{R}^{1 \times W' \times D}$, and the glyph-line, $\varPhi(\pmb{g}) \in \mathbb{R}^{1 \times W_g' \times D}$, where $D$ is the embedding dimensionality. The output widths are downsampled by the network stride $s$ (i.e., $W' = \frac{W}{s}$). Finally, each spatial location along the width of the glyph-line image is scored against every location along the width of the text-line image to obtain the similarity map $S \in [-1, 1]^{W_g' \times W'}$:
+
+$$
+S _ {i j} = \left\langle \Phi (\boldsymbol {g}) _ {i}, \Phi (\boldsymbol {x}) _ {j} \right\rangle = \frac {\Phi (\boldsymbol {g}) _ {i} ^ {T} \Phi (\boldsymbol {x}) _ {j}}{\left\| \Phi (\boldsymbol {g}) _ {i} \right\| \cdot \left\| \Phi (\boldsymbol {x}) _ {j} \right\|} \tag {1}
+$$
+
+where $\langle \cdot, \cdot \rangle$ is the cosine similarity, $i \in \{1, \dots, W_g'\}$ and $j \in \{1, \dots, W'\}$.
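As a concrete sketch, Eq. (1) amounts to an inner product of $\ell_2$-normalized embedding columns. The following PyTorch snippet (our illustration; the function name and shapes are assumptions, not the authors' released code) computes $S$ from the two embedding maps:

```python
import torch
import torch.nn.functional as F

def similarity_map(phi_g: torch.Tensor, phi_x: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of every glyph-line position against every
    text-line position (Eq. 1). phi_g: (W_g', D), phi_x: (W', D)."""
    g = F.normalize(phi_g, dim=-1)  # unit-normalize each D-dim embedding
    x = F.normalize(phi_x, dim=-1)
    return g @ x.T  # S: (W_g', W'), entries in [-1, 1]
```

Normalizing once and taking a single matrix product scores all position pairs at once, rather than looping over $(i, j)$.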
+
+# 3.2 Alphabet Agnostic Decoder
+
+The alphabet agnostic decoder discretizes the similarity maps into probabilities for each character in the alphabet for all spatial locations along the width of the text-line image. Concretely, given the visual similarity map $S \in \mathbb{R}^{W_g' \times W'}$ it outputs logits over the alphabet for each location in the text-line: $\mathcal{P} \in \mathbb{R}^{|A| \times W'}$ , $\mathcal{P}_{ij} = \log p(y_i | x_j)$ , where $x_j$ is the $j^{th}$ column in text-line image (modulo encoder stride) and $y_i$ is the $i^{th}$ character in the alphabet $A$ .
+
+A simple implementation would predict the argmax or the sum of the similarity scores aggregated over the extent of each glyph in the similarity map. However, this naive strategy neither overcomes ambiguities in the similarities nor produces smooth/consistent character predictions. Hence, we proceed in two steps: first, similarity disambiguation resolves ambiguities over the glyphs in the alphabet, producing an enhanced similarity map $S^{*}$ by taking into account the glyph widths and positions in the line image; second, the class aggregator computes character class probabilities by aggregating the scores inside the spatial extent of each glyph in $S^{*}$. We detail the two steps next; the significance of each component is established empirically in Section 5.4.
+
+Similarity disambiguation. An ideal similarity map would have square regions of high similarity, because the width of a character is the same in the glyph and text-line images. Hence, we inject the glyph widths and local $x$, $y$ coordinates into the similarity map using a small MLP. The input to the MLP at each location is the similarity map value $S$ stacked with: (1) two channels of $x$, $y$ coordinates (normalized to [0, 1]), and (2) a glyph width-map $\mathcal{G} = \pmb{w}_g\mathbb{1}^T$, where $\pmb{w}_g \in \mathbb{R}^{W_g'}$ is a vector of glyph widths in pixels; see Figure 2 for an illustration. For disambiguation over all the glyphs (columns of $S$), we use a self-attention module [38], which outputs the final enhanced similarity map $S^*$ of the same size as $S$.
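The MLP input described above can be assembled as follows; this is a hedged sketch under assumed tensor shapes (the function name and channel ordering are ours):

```python
import torch

def disambiguation_input(S: torch.Tensor, glyph_widths: torch.Tensor) -> torch.Tensor:
    """Stack the similarity map with normalized (x, y) coordinate channels
    and the glyph width-map G = w_g 1^T, giving the 4-channel per-location
    input of the disambiguation MLP. S: (W_g', W'), glyph_widths: (W_g',)."""
    Wg, W = S.shape
    xs = torch.linspace(0, 1, W).expand(Wg, W)                # x in [0, 1]
    ys = torch.linspace(0, 1, Wg).unsqueeze(1).expand(Wg, W)  # y in [0, 1]
    G = glyph_widths.unsqueeze(1).expand(Wg, W).float()       # width per glyph row
    return torch.stack([S, xs, ys, G], dim=-1)  # (W_g', W', 4)
```

The 4-channel per-location input matches the three-layer $(4 \times 16, 16 \times 32, 32 \times 1)$ MLP described in Section 4.2.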
+
+Class aggregator. The class aggregator $\Lambda$ maps the similarity map to logits over the alphabet along the horizontal dimension in the text-line image: $\Lambda : \mathbb{R}^{W_g' \times W'} \mapsto \mathbb{R}^{|A| \times W'}$ , $S^* \mapsto \mathcal{P}$ . This mapping can be achieved by multiplication through a matrix $M \in \mathbb{R}^{|A| \times W_g'}$ which aggregates (sums) the scores in the span of each glyph: $\mathcal{P} = M S^*$ , such that $M = [m_1, m_2, \ldots, m_{|\mathcal{A}|}]^T$ and $m_i \in \{0, 1\}^{W_g'} = [0, \ldots, 0, 1, \ldots, 1, 0, \ldots, 0]$ where the non-zero values correspond to the span of the $i^{th}$ glyph in the glyph-line image.
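The banded aggregation matrix $M$ can be constructed directly from the glyph widths (in feature-map columns). A minimal sketch, with names of our choosing:

```python
import torch

def class_aggregation_matrix(glyph_widths: list, W_g: int) -> torch.Tensor:
    """Build M in {0, 1}^{|A| x W_g'}: row i is 1 over the span of the i-th
    glyph in the glyph-line, so P = M @ S_star sums the similarity scores
    inside each glyph's spatial extent."""
    M = torch.zeros(len(glyph_widths), W_g)
    start = 0
    for i, w in enumerate(glyph_widths):
        M[i, start:start + w] = 1.0  # contiguous span of glyph i
        start += w
    return M
```

Because $M$ depends only on the glyph widths, the same construction works for any alphabet size, which is what makes the decoder alphabet agnostic.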
+
+In practice, we first embed the columns of $S^*$ and $M^T$ independently using learnt linear embeddings. The embeddings are $\ell_2$-normalized before the matrix product (equivalent to cosine similarity). We also expand the alphabet with an additional "boundary" class (for CTC) using a learnt $m_{|\mathcal{A}| + 1}$. Since the decoder is agnostic to the number of characters in the alphabet, it generalizes to novel alphabets.
+
+# 3.3 Training Loss
+
+The dense per-pixel decoder logits over the alphabet $\mathcal{P}$ are supervised using the CTC loss [12] $(\mathcal{L}_{CTC})$ to align the predictions with the output label. We also supervise the similarity map output of the visual encoder $S$ using an auxiliary cross-entropy loss $(\mathcal{L}_{sim})$ at each location. We use ground-truth character bounding-boxes for determining the spatial span of each character. The overall training objective is the following two-part loss,
+
+$$
+\mathcal{L}_{pred} = \mathcal{L}_{CTC}\left(\operatorname{SoftMax}(\mathcal{P}),\, \boldsymbol{y}_{gt}\right) \tag{2}
+$$
+
+$$
+\mathcal{L}_{sim} = -\sum_{ij} \log\left(\operatorname{SoftMax}\left(S_{y_i j}\right)\right) \tag{3}
+$$
+
+$$
+\mathcal{L}_{total} = \mathcal{L}_{pred} + \lambda\, \mathcal{L}_{sim} \tag{4}
+$$
+
+where $\operatorname{SoftMax}(\cdot)$ normalization is over the alphabet (rows), $\boldsymbol{y}_{gt}$ is the string label, and $y_{i}$ is the ground-truth character associated with the $i^{th}$ position in the glyph-line image. The model is insensitive to the value of $\lambda$ within a reasonable range (see supplementary), and we use $\lambda = 1$ for a good balance of the losses.
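The two-part objective can be sketched in PyTorch as below. The shapes, the axis of the auxiliary softmax, and the per-column ground-truth targets are our assumptions for illustration (see Eqs. (2)-(4) for the exact definitions):

```python
import torch
import torch.nn.functional as F

def total_loss(P, S, targets, target_lengths, gt_rows, lam=1.0):
    """P: (|A|+1, W') decoder logits (last row = CTC boundary/blank class),
    S: (W_g', W') similarity map, targets: (1, L) label indices,
    gt_rows: (W',) ground-truth glyph-line row for each text-line column."""
    C, T = P.shape
    log_probs = F.log_softmax(P, dim=0).T.unsqueeze(1)  # CTC expects (T, N, C)
    l_ctc = F.ctc_loss(log_probs, targets, torch.tensor([T]),
                       target_lengths, blank=C - 1)
    # Auxiliary loss: cross-entropy over glyph-line rows at each column.
    l_sim = F.cross_entropy(S.T, gt_rows)
    return l_ctc + lam * l_sim
```

The CTC term aligns the unsegmented column-wise predictions with the label string, while the auxiliary term directly supervises the similarity map using character bounding-boxes.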
+
+# 3.4 Discussion: One-shot Sequence Recognition
+
+Our approach can be summarized as a method for one-shot sequence recognition. Note, existing few-shot methods [17,34,36,40] are not directly applicable to this problem of one-shot sequence recognition, as they focus on classification of the whole input (e.g., an image) as a single instance. Hence, they cannot address the following challenges unique to (text) sequences: (1) segmentation of the imaged text sequence into characters of varying widths; (2) respecting language-model/sequence-regularity in the output. We develop novel architectural solutions for these, namely: (1) a neural architecture with explicit reasoning over similarity maps for decoding sequences; the similarity maps are key for generalization at both ends, i.e., to novel fonts/visual styles and to new alphabets/languages; (2) glyph-width-aware similarity disambiguation, which identifies contiguous square blocks in noisy similarity maps from novel data and is critical for robustness against imprecise visual matching; (3) a class aggregator, which aggregates similarity scores over the reference width-spans of the glyphs to produce character logits over the alphabet, and operates over a variable number of characters/classes and glyph widths. The importance of each of these components is established in the ablation experiments in Section 5.4.
+
+# 4 Implementation details
+
+The architectures of the visual similarity encoder and the alphabet agnostic decoder are described in Section 4.1 and Section 4.2 respectively, followed by training set up in Section 4.3.
+
+# 4.1 Visual Similarity Encoder
+
+The visual similarity encoder $(\varPhi)$ encodes both the text-line $(\pmb{x})$ and glyph-line $(\pmb{g})$ images into feature maps. The inputs, of height 32 pixels, width $W$ and 1 channel (grayscale images), are encoded into a tensor of size $1\times \frac{W}{2}\times 256$. The glyph-line image's width is held fixed at $W_{g} = 720$ px: if $\sum_{i=1}^{|\mathcal{A}|} W_i < W_g$, the image is padded at the end using the glyph; otherwise it is downsampled bilinearly to a width of $W_{g} = 720$ px. The text-line image's input width is unconstrained (after resizing to a height of 32 px, preserving the aspect ratio). The encoder is implemented as a U-Net [29] with two residual blocks [14]; the detailed architecture is given in Table 1. The visual similarity map $(\mathcal{S})$ is obtained by taking the cosine similarity between all locations along the width of the encoded features from the text-line $\varPhi(\boldsymbol{x})$ and glyph-line $\varPhi(\boldsymbol{g})$ images.
+
+Table 1: Visual encoder architecture (Sections 3.1 and 4.1). The input is an image of size $32 \times W \times 1$ (height $\times$ width $\times$ channels).
+
+| layer | kernel | channels in / out | pooling | output size (H × W) |
+|---|---|---|---|---|
+| conv1 | 3×3 | 1 / 64 | max = (2, 2) | 16 × W/2 |
+| resBlock1 | 3×3 | 64 / 64 | max = (1, 2) | 8 × W/2 |
+| resBlock2 | 3×3 | 64 / 128 | max = (2, 2) | 4 × W/4 |
+| upsample | - | - | (2, 2) | 8 × W/2 |
+| skip | 3×3 | 128+64 / 128 | - | 8 × W/2 |
+| pool | - | - | avg = (2, 1) | 4 × W/2 |
+| conv2 | 1×1 | 128 / 64 | - | 4 × W/2 |
+| reshape | - | 64 / 256 | - | 1 × W/2 |
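Glyph-line assembly with the fixed 720 px width can be sketched as follows (our illustration; the zero padding value and grayscale tensor layout are assumptions):

```python
import torch
import torch.nn.functional as F

def make_glyph_line(glyphs, W_g=720):
    """Concatenate single-glyph images (each H x W_i, grayscale) side by
    side; pad to the fixed width W_g, or bilinearly downsample if the
    concatenation is wider (Section 4.1)."""
    line = torch.cat(glyphs, dim=-1)  # (H, sum W_i)
    H, W = line.shape
    if W < W_g:
        return F.pad(line, (0, W_g - W))  # pad at the end
    return F.interpolate(line[None, None], size=(H, W_g),
                         mode="bilinear", align_corners=False)[0, 0]
```

Fixing $W_g$ keeps the similarity map's row dimension constant, so the decoder sees a consistent glyph-line length regardless of alphabet size.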
+
+# 4.2 Alphabet Agnostic Decoder
+
+Similarity disambiguation. We use the self-attention based Transformer model [38] with three layers of four attention heads each. The input to this module is the similarity map $S$ stacked with local positions $(x, y)$ and glyph widths, encoded through a three-layer $(4 \times 16, 16 \times 32, 32 \times 1)$ MLP with ReLU non-linearity [26].
+
+Class aggregator. The columns of $S^*$ and glyph width templates (refer to Section 3.2) are embedded independently using linear embeddings of size $W_g' \times W_g'$ , where $W_g' = \frac{W_g}{s} = \frac{720}{2} = 360$ ( $s$ = encoder stride).
+
+Inference. We decode greedily at inference, as is common after training with the CTC loss. No additional language model (LM) is used, except in experiment VS-3 (Section 5.5), where a 6-gram LM, learnt from over 10M sentences from the WMT News Crawl (2015) English corpus [1], is combined with the model output via beam search using the algorithm in [25] (parameters: $\alpha = 1.0$, $\beta = 2.0$, beam-width = 15).
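Greedy CTC decoding, referenced above, takes the argmax class per column, collapses consecutive repeats, and drops the blank/boundary class; a minimal sketch (names are ours):

```python
import torch

def ctc_greedy_decode(P: torch.Tensor, blank: int) -> list:
    """Greedy CTC decoding of logits P ((|A|+1) x W'): argmax per column,
    collapse consecutive repeats, then drop the blank class."""
    best = P.argmax(dim=0).tolist()
    out, prev = [], None
    for k in best:
        if k != prev and k != blank:
            out.append(k)
        prev = k
    return out
```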
+
+# 4.3 Training and Optimization
+
+The entire model is trained end-to-end by minimizing the training objective Equation (4). We use online data augmentation on both the text-line and glyph images, specifically random translation, crops, contrast, and blur. All parameters, for both ours and SotA models, are initialized with random weights. We use the Adam optimizer [20] with a constant learning rate of 0.001, a batch size of 12 and train until validation accuracy saturates (typically 100k iterations) on a single Nvidia Tesla P40 GPU. The models are implemented in PyTorch [27].
+
+# 5 Experiments
+
+We compare against state-of-the-art text-recognition models for generalization to novel fonts and languages. We first describe the models used for comparisons (Section 5.1), then datasets and evaluation metrics (Section 5.2), followed by an overview of the experiments (Section 5.3), and a thorough component analysis of the model architecture (Section 5.4). Finally, we present the results (Section 5.5) of all the experiments.
+
+
+Fig. 3: Left: FontSynth splits. Randomly selected fonts from each of the five font categories - (1) regular (R), (2) bold (B), (3) italic (I), (4) light (L) - used for generating the synthetic training set, and (5) other (i.e. none of the first four) - used for the test set. Right: Synthetic data. Samples from FontSynth (top) generated using fonts from MJSynth [16], and Omniglot-Seq (bottom) generated using glyphs from Omniglot [21] as fonts (Section 5.2).
+
+# 5.1 State-of-the-art Models in Text Recognition
+
+For comparison to state-of-the-art methods, we use three models: (i) Baek et al. [2] for scene-text recognition; (ii) Tesseract [33], the industry standard for document OCR; and (iii) Chowdhury et al. [9] for handwritten text recognition.
+
+For (i), we use the open-source models provided, but without the transformation module (since documents do not exhibit the non-rectilinear characters of scene text). Note, our visual encoder has a similar number of parameters to the encoder ResNet of [2] (theirs: 6.8M, ours: 4.7M parameters). For (ii) and (iii) we implement the models using the published architecture details. Further details of these networks, and verification of our implementations, are provided in the supplementary material.
+
+# 5.2 Datasets and Metrics
+
+FontSynth. We take 1400 fonts from the MJSynth dataset [16] and split them into five categories by their appearance attributes as determined from their names: (1) regular, (2) bold, (3) italic, (4) light, and (5) others (i.e., all fonts with none of the first four attributes in their name); visualized in Figure 3 (left). We use the first four splits to create a training set, and (5) for the test set. For training, we select 50 fonts at random from each split and generate 1000 text-line and glyph images for each font. For testing, we use all the 251 fonts in category (5). LRS2 dataset [10] is used as the text source. We call this dataset FontSynth; visualization in Figure 3 (right) and further details in the supplementary.
+
+Omniglot-Seq. Omniglot [21] consists of 50 alphabets with a total of 1623 characters, each drawn by 20 different writers. The original one-shot learning task is defined for single characters. To evaluate our sequence prediction network, we generate a new Omniglot-Seq dataset with sentence images as follows. We randomly map alphabets in Omniglot to English, and use them as 'fonts' to render text-line images as in FontSynth above. We use the original alphabet splits (30 training, 20 test), generate data online for training, and render 500 lines per alphabet for testing. Figure 3 (right) visualizes a few samples.
+
+
+Fig. 4: Google1000 printed books dataset. (left): Text-line image samples from the Google1000 [39] evaluation set for all the languages, namely, English, French, Italian and Spanish. (right): Common set of glyph exemplars used in our method for all books in the evaluation set for English and accents for the other languages.
+
+
+
+
+
+Google1000. Google1000 [39] is a standard benchmark for document OCR released as part of ICDAR 2007. It consists of scans of 1000 public-domain historical books in the English (EN), French (FR), Italian (IT) and Spanish (ES) languages; Table 2 provides a summary. Figure 4 visualizes a few samples from this dataset. The dataset poses significant challenges due to severe degradation, blur, show-through (from behind), inking, fading, oblique text-lines, etc. Typefaces from the $18^{th}$ century are significantly different from modern fonts, containing old ligatures such as "$\mathfrak{st}$" and "$\mathfrak{ct}$". We use this dataset only for evaluation; further details are in the supplementary.
+
+Table 2: Google1000 dataset summary. Total number of books, alphabet size and percentage of accented letters (counting each accented character as a new class) for the languages in Google1000.
+
+| language | EN | FR | IT | ES |
+|---|---|---|---|---|
+| # books | 780 | 40 | 40 | 140 |
+| alphabet size | 26 | 35 | 29 | 32 |
+| % accented letters | 0 | 2.6 | 0.7 | 1.5 |
+
+Evaluation metrics. We measure the character (CER) and word error rates (WER); definitions in supplementary.
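CER is conventionally the edit distance between prediction and ground truth, normalized by ground-truth length (WER is the analogous quantity over words). This is the standard formula, assumed here since the exact definitions are deferred to the supplementary:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit (Levenshtein) distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(pred: str, gt: str) -> float:
    """Character error rate: edit distance / ground-truth length."""
    return levenshtein(pred, gt) / max(len(gt), 1)
```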
+
+# 5.3 Overview of Experiments
+
+The goal of our experiments is to evaluate the proposed model against state-of-the-art models for text recognition on their generalization ability to (1) novel visual styles (VS) (e.g., novel fonts, background, noise etc.), and (2) novel alphabets/languages (A). Specifically, we conduct the following experiments:
+
+1. VS-1: Impact of number of training fonts. We use FontSynth to study the impact of the number of different training fonts on generalization to novel fonts when the exemplars from the testing fonts are provided.
+2. VS-2: Cross glyph matching. In this experiment, we do not assume access to the testing font. Instead of using exemplars from the test font, the most similar font from the training set is selected automatically.
+3. VS-3: Transfer from synthetic to real data. This evaluates transfer of models trained on synthetic data to real data with historical typeface and degradation.
+
+4. A-1: Transfer to novel alphabets. This evaluates transfer of models trained on English to new Latin languages in Google1000 with additional characters in the alphabet (e.g., French with accented characters).
+
+5. A-2: Transfer to non-Latin glyphs. The above experiments both train and test on Latin alphabets. Here we evaluate the generalization of the models trained on English fonts to non-Latin scripts in Omniglot-Seq (e.g., from English to Greek).
+
+# 5.4 Ablation Study
+
+We ablate each major component of the proposed model on the VS-1 experiment to evaluate its significance. Table 3 reports the recognition accuracy on the FontSynth test set when trained on one (R) and all four $(\mathrm{R}+\mathrm{B}+\mathrm{L}+\mathrm{I})$ font attributes. Without the decoder (last row), simply reporting the argmax from the visual similarity map reduces to a nearest-neighbor / one-shot Prototypical Networks [34] method. This is ineffective for unsegmented text recognition ($49\%$ CER vs. $9.4\%$ CER for the full model). Excluding the position encoding in the similarity disambiguation module leads to a moderate drop. The similarity disambiguation (sim. disamb.) and the linear embedding in the class aggregator (agg. embed.) are both important, especially when the training data is limited. With more training data, the advantage brought by these modules becomes less significant, whereas the improvement from position encoding does not correlate as strongly with the amount of training data.
+
+Table 3: Model component analysis. The first row corresponds to the full model; the last row corresponds to reading out characters with the CTC decoder directly from the output of the visual encoder. R, B, L and I correspond to the FontSynth training splits: Regular, Bold, Light and Italic respectively.
+
+| sim. enc. $S$ | disamb. pos. enc. | disamb. self-attn | agg. embed. | CER (R) | WER (R) | CER (R+B+L+I) | WER (R+B+L+I) |
+|---|---|---|---|---|---|---|---|
+| ✓ | ✓ | ✓ | ✓ | 9.4 | 30.1 | 5.6 | 22.3 |
+| ✓ | ✗ | ✓ | ✓ | 11.8 | 37.9 | 7.9 | 22.9 |
+| ✓ | ✗ | ✗ | ✓ | 23.9 | 68.8 | 13.0 | 52.0 |
+| ✓ | ✓ | ✓ | ✗ | 22.9 | 65.8 | 8.5 | 26.4 |
+| ✓ | ✗ | ✗ | ✗ | 25.8 | 63.1 | 18.4 | 45.0 |
+| ✓ | - | - | - | 49.0 | 96.2 | 38.3 | 78.9 |
+
+# 5.5 Results
+
+VS-1: Impact of number of training fonts. We investigate the impact of the number of training fonts on generalization to unseen fonts. For this systematic evaluation, we train the models on an increasing number of FontSynth splits (regular; regular + bold; regular + bold + light; etc.) and evaluate on the FontSynth test set. These splits correspond to increments of 50 new fonts with a different appearance attribute. Table 4 summarizes the results. The baseline SotA models have similar CER when trained on the same amount of data. Tesseract [33] has slightly better performance overall but generalizes poorly when there is only one attribute in training. Models with an attention-based LSTM (Attn. Baek et al. [2], Chowdhury et al. [9]) achieve lower WER than those without, due to better language modelling. Notably, our model achieves the same accuracy with one training attribute (CER = 9.4%) as the SotA models with four training attributes (CER > 10%), i.e., using 150 (= 3×50) fewer training fonts, demonstrating the strong generalization of the proposed method to unseen fonts.
+
+Table 4: VS-1, VS-2: Generalization to novel fonts with/without known test glyphs and an increasing number of training fonts. Mean error rates (in %; ↓ is better) on the FontSynth test set. For cross matching (ours-cross), the standard deviation is reported in parentheses. R, B, L and I correspond to the FontSynth training splits; OS stands for the Omniglot-Seq dataset (Section 5.2).
+
+| model | test glyphs known | R CER | R WER | R+B CER | R+B WER | R+B+L CER | R+B+L WER | R+B+L+I CER | R+B+L+I WER | R+B+L+I+OS CER | R+B+L+I+OS WER |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| CTC Baek et al. [2] | ✗ | 17.5 | 46.1 | 11.5 | 30.3 | 10.4 | 28.2 | 10.4 | 27.7 | - | - |
+| Attn. Baek et al. [2] | ✗ | 16.5 | 41.0 | 12.7 | 34.5 | 11.1 | 27.4 | 10.3 | 23.6 | - | - |
+| Tesseract [33] | ✗ | 19.2 | 48.6 | 12.3 | 37.0 | 10.8 | 31.7 | 9.1 | 27.8 | - | - |
+| Chowdhury et al. [9] | ✗ | 16.2 | 39.1 | 12.6 | 28.6 | 11.5 | 29.5 | 10.5 | 24.2 | - | - |
+| ours-cross, mean (std) | ✗ | 11.0 (2.9) | 33.7 (9.8) | 9.3 (1.4) | 30.8 (5.9) | 9.1 (1.1) | 28.6 (2.2) | 7.6 (0.2) | 22.2 (0.9) | 7.0 (0.9) | 25.8 (3.7) |
+| ours-cross, selected | ✗ | 9.8 | 30.0 | 8.4 | 29.4 | 8.4 | 27.8 | 7.2 | 21.8 | 5.3 | 18.3 |
+| ours | ✓ | 9.4 | 30.2 | 8.3 | 28.8 | 8.1 | 27.3 | 5.6 | 22.4 | 3.5 | 12.8 |
+
+Leveraging visual matching. Since our method does not learn class-specific filters (unlike conventional discriminatively trained models), but is instead trained for visual matching, we can leverage non-English glyphs for training. Hence, we further train on the Omniglot-Seq data and drastically reduce the CER from $5.6\%$ (4 attributes) to $3.5\%$. Being able to leverage language-agnostic data for training is a key strength of our model.
+
+VS-2: Cross glyph matching. In VS-1 above, our model assumed privileged access to glyphs from the test image. Here we consider the setting where glyph exemplars from the training fonts are used instead; we term this cross matching, denoted 'ours-cross' in Table 4. We randomly select 10 fonts from each font attribute and use them as glyph exemplars. In Table 4 we report the aggregate mean and standard deviation over all attributes. To automatically find the best font match, we also measure the similarity between the reference and unseen fonts by computing the column-wise entropy of the similarity map $\mathcal{S}$ during inference: similarity scores within each glyph span are first aggregated to obtain logits $\mathcal{P} \in \mathbb{R}^{|\mathcal{A}| \times W'}$, and the entropy of the normalized logits, averaged over columns, $\frac{1}{W'} \sum_{j=1}^{W'} \left( -\sum_{i} \mathcal{P}_{ij} \log \mathcal{P}_{ij} \right)$, is used as the criterion for choosing the best-matched reference font. Performance with the best-matched exemplar set is reported as 'ours-cross selected' in Table 4. Its CER is close to that of the last row, where test glyphs are provided, showing that the model does not rely on extra information from the new fonts to generalize to different visual styles. Figure 5 details the performance for each attribute separately. The accuracy is largely insensitive to the particular font attribute, indicating the strong ability of our model to match glyph shapes. Further, the variation decreases, as expected, as more training attributes are added.
+
+Fig. 5: VS-2: Cross matching on FontSynth. Our model maintains its performance when using training fonts as glyph exemplars instead of test-image glyphs (Section 5.5). The $x$-axis shows the FontSynth training splits (Figure 3, left).
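The entropy criterion for selecting the best-matched reference font can be sketched as follows (our illustration; the numerical floor is added for stability):

```python
import torch

def font_match_score(P: torch.Tensor) -> float:
    """Mean column-wise entropy of softmax-normalized logits P (|A| x W').
    A lower score indicates a more confident, better-matched reference font."""
    probs = P.softmax(dim=0)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=0)  # entropy per column
    return ent.mean().item()
```

Running this once per candidate reference font and keeping the minimum-score font implements the 'ours-cross selected' procedure.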
+
+VS-3: Transfer from synthetic to real data. We evaluate models trained on synthetic data on the real-world Google1000 test set, assessing generalization to novel fonts and robustness to degradation and other nuisance factors in real data. To avoid giving our model privileged information specific to each test sample, we use a common glyph set extracted from Google1000 (visualized in Figure 4); this glyph set is shared across all test samples, i.e., it is not sample-specific. Table 5 compares our model, trained on FontSynth + Omniglot-Seq, against the SotA models. These models, trained on modern fonts, cannot recognize historical ligatures like the long s ("ſ"), usually classifying it as the character "f". Further, they handle degradations such as fading and show-through less well, and are thus outperformed by our model, especially when supported by a language model (LM) (CER: ours = 2.4% vs. CTC = 3.14%).
+
+Table 5: VS-3: Generalization from synthetic to real data. Mean error rates (in %; ↓ is better) on Google1000 English documents for models trained only on synthetic data (Section 5.5). LM stands for the 6-gram language model.
+
+| model | CER (no LM) | CER (with LM) | WER (no LM) | WER (with LM) |
+|---|---|---|---|---|
+| CTC Baek et al. [2] | 3.5 | 3.14 | 12.9 | 11.4 |
+| Attn. Baek et al. [2] | 5.4 | 5.4 | 13.1 | 13.8 |
+| Tesseract [33] | 4.65 | 3.8 | 15.9 | 12.2 |
+| Chowdhury et al. [9] | 5.5 | 5.6 | 14.9 | 15.6 |
+| ours | 3.1 | 2.4 | 14.9 | 8.0 |
+
+A-1: Transfer to novel alphabets. We evaluate our model, trained on English FontSynth + Omniglot-Seq, on other languages in Google1000, namely French, Italian and Spanish. These languages have more characters than English due to accents (see Table 2). We expand the glyph set from English to include the accented glyphs shown in Figure 4. For comparison, we pick CTC Baek et al. [2] (the SotA with the lowest CER when training data is limited) and adapt it to the new alphabet size by fine-tuning the last linear classifier layer on an increasing number of training samples. Figure 6 summarizes the results. Images for fine-tuning are carefully selected to cover as many new classes as possible. For all three languages, at least 5 images containing the new classes are needed in fine-tuning to match our performance without any fine-tuning, with the exact number depending on how many new classes the language has (for French, 16 samples are required). Note that our model needs no fine-tuning at all: simply supplying exemplars of the new glyphs yields good performance.
+
+Fig. 6: A-1: Transfer to novel alphabets in Google1000. We evaluate models trained over the English alphabet on novel languages in the Google1000 dataset, namely French, Italian and Spanish. CER is reported (in %; ↓ is better).
+
+A-2: Transfer to non-Latin glyphs. In the above experiments, the models were both trained and tested on English/Latin script and hence, are not tasked to generalize to completely novel glyph shapes. Here we evaluate the generalization ability of our model to new glyph shapes by testing the model trained on FontSynth + Omniglot-Seq on the Omniglot-Seq test set, which consists of novel alphabets/scripts. We provide our model with glyph exemplars from the randomly generated alphabets (Section 5.2). Our model achieves $\mathrm{CER} = 1.8\% /7.9\%$ , $\mathrm{WER} = 7.6\% /31.6\%$ (with LM/without LM), which demonstrates strong generalization to novel scripts. Note, the baseline text recognition models trained on FontSynth (English fonts) cannot perform this task, as they cannot process completely new glyph shapes.
+
+# 6 Conclusion
+
+We have developed a method for text recognition which generalizes to novel visual styles (e.g., fonts, colors, backgrounds, etc.), and is not tied to a particular alphabet size or language. It achieves this by recasting the classic text recognition problem as one of visual matching, and we have demonstrated that the matching can leverage random shapes/glyphs (e.g., Omniglot) for training. Our model is perhaps the first to demonstrate one-shot sequence recognition, and it achieves superior generalization compared to conventional text recognition methods, without requiring expensive adaptation/fine-tuning. Although the method has been demonstrated for text recognition, it is applicable to other sequence recognition problems such as speech and action recognition.
+
+Acknowledgements. This research is funded by a Google-DeepMind Graduate Scholarship and the EPSRC Programme Grant Seebibyte EP/M013774/1. We would like to thank Triantafyllos Afouras, Weidi Xie, Yang Liu and Erika Lu for discussions and proof-reading.
+
+# References
+
+1. EMNLP 2015 Tenth Workshop On Statistical Machine Translation. http://www.statmt.org/wmt15/8
+2. Baek, J., Kim, G., Lee, J., Park, S., Han, D., Yun, S., Oh, S.J., Lee, H.: What is wrong with scene text recognition model comparisons? dataset and model analysis. In: Proc. ICCV (2019) 1, 4, 8, 9, 11, 12, 13
+3. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014) 2, 4
+4. Bunke, H., Bengio, S., Vinciarelli, A.: Offline recognition of unconstrained handwritten texts using HMMs and statistical language models. PAMI 26(6), 709-720 (2004) 3
+5. Cao, K., Ji, J., Cao, Z., Chang, C.Y., Niebles, J.C.: Few-shot video classification via temporal alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10618-10627 (2020) 3
+6. Cheng, Z., Bai, F., Xu, Y., Zheng, G., Pu, S., Zhou, S.: Focusing attention: Towards accurate text recognition in natural images. In: Proc. ICCV (2017) 4
+7. Cheng, Z., Xu, Y., Bai, F., Niu, Y., Pu, S., Zhou, S.: Aon: Towards arbitrarily-oriented text recognition. In: Proc. CVPR (2018) 1, 2, 4
+8. Cho, K., van Merrienboer, B., Gülçehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: EMNLP (2014) 2, 4
+9. Chowdhury, A., Vig, L.: An efficient end-to-end neural model for handwritten text recognition. Proc. BMVC (2018) 8, 11, 12, 13
+0. Chung, J.S., Zisserman, A.: Lip reading in the wild. In: Proc. ACCV (2016) 9
+1. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proc. ICML (2017) 3
+2. Graves, A., Fernández, S., Gomez, F., Schmidhuber, J.: Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In: Proc. ICML. ACM (2006) 6
+3. Graves, A., Schmidhuber, J.: Framework phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5-6), 602-610 (2005) 2, 4
+4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. CVPR (2016) 7
+5. He, P., Huang, W., Qiao, Y., Loy, C.C., Tang, X.: Reading scene text in deep convolutional sequences. In: Thirtieth AAAI conference on artificial intelligence (2016) 4
+6. Jaderberg, M., Simonyan, K., Vedaldi, A., Zisserman, A.: Synthetic data and artificial neural networks for natural scene text recognition. In: Workshop on Deep Learning, NIPS (2014) 4, 9
+7. Jia, X., De Brabandere, B., Tuytelaars, T., Gool, L.V.: Dynamic filter networks. In: Proc. NIPS (2016) 7
+8. Karatzas, D., Gomez-Bigorda, L., Nicolaou, A., Ghosh, S., Bagdanov, A., Iwamura, M., Matas, J., Neumann, L., Chandrasekhar, V.R., Lu, S., Shafait, F., Uchida, S., Valveny, E.: ICDAR 2015 robust reading competition. In: Proc. ICDAR. pp. 1156-1160 (2015) 3
+9. Karatzas, D., Shafait, F., Uchida, S., Iwamura, M., Bigorda, L.G., Mestre, S.R., Mas, J., Mota, D.F., Almazan, J.A., de las Heras, L.P.: ICDAR 2013 robust reading competition. In: Proc. ICDAR (2013) 3
+
+20. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 8
+21. Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science (2015) 9
+22. LeCun, Y., Boser, B.E., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W.E., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Computation 1(4), 541-551 (1989) 2, 3
+23. Lee, C.Y., Osindero, S.: Recursive recurrent nets with attention modeling for OCR in the wild. In: Proc. CVPR (2016) 1, 2, 4
+24. Liu, W., Chen, C., Wong, K.Y.K.: Char-net: A character-aware neural network for distorted scene text recognition. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018) 4
+25. Maas, A., Xie, Z., Jurafsky, D., Ng, A.: Lexicon-free conversational speech recognition with neural networks. In: NAACL-HLT (2015) 8
+26. Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: Proc. ICML (2010) 8
+27. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in PyTorch (2017) 8
+28. Pengyuan, L., Minghui, L., Cong, Y., Wenhao, W., Xiang, B.: Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. In: Proc. ECCV (2018) 4
+29. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Proc. MICCAI. pp. 234-241. Springer (2015) 7
+30. Shi, B., Bai, X., Yao, C.: An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. PAMI (2016) 4
+31. Shi, B., Wang, X., Lyu, P., Yao, C., Bai, X.: Robust scene text recognition with automatic rectification. In: Proc. CVPR (2016) 4
+32. Shi, B., Yang, M., Wang, X., Lyu, P., Yao, C., Bai, X.: Aster: An attentional scene text recognizer with flexible rectification. PAMI (2018) 1, 2, 4
+33. Smith, R.: An overview of the Tesseract OCR engine. In: Ninth International Conference on Document Analysis and Recognition (ICDAR 2007). vol. 2, pp. 629-633. IEEE (2007) 3, 8, 11, 12, 13
+34. Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: Proc. NIPS (2017) 3, 7, 11
+35. Su, B., Lu, S.: Accurate scene text recognition based on recurrent neural network. In: Proc. ACCV (2014) 4
+36. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proc. CVPR. pp. 1199-1208 (2018) 3, 7
+37. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in neural information processing systems. pp. 3104-3112 (2014) 2, 4
+38. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Proc. NIPS (2017) 6, 8
+39. Vincent, L.: Google book search: Document understanding on a massive scale. In: Proc. Ninth International Conference on Document Analysis and Recognition (ICDAR). pp. 819-823. Washington, DC (2007) 9, 10
+40. Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. In: Proc. NIPS (2016) 3, 7
+
+41. Wang, K., Belongie, S.: Word spotting in the wild. In: Proc. ECCV (2010) 3
+42. Wei, F., Wenhao, H., Fei, Y., Xu-Yao, Z., Cheng-Lin, L.: Textdragon: An end-to-end framework for arbitrary shaped text spotting. In: Proc. ICCV (2019) 4
+43. Yang, L., Zhaowen, W., Hailin, J., Ian, W.: Synthetically supervised feature learning for scene text recognition. In: Proc. ECCV (2018) 4
+44. Zhan, F., Lu, S.: Esir: End-to-end scene text recognition via iterative image rectification. In: Proc. CVPR (2019) 4
\ No newline at end of file
diff --git a/adaptivetextrecognitionthroughvisualmatching/images.zip b/adaptivetextrecognitionthroughvisualmatching/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c98f280ce0cf53440f46a66991890186a656661e
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6a1a02b52822bee373b544f648929f9222db8cd0944590b2195f4e714cb6f71
+size 358576
diff --git a/adaptivetextrecognitionthroughvisualmatching/layout.json b/adaptivetextrecognitionthroughvisualmatching/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..11dbb2174fb45ba1582d868d3dddd1212ea0a7c3
--- /dev/null
+++ b/adaptivetextrecognitionthroughvisualmatching/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4efc7fac65b9fb021085875d81a4ca7f683e4423fb75d6abac516e520940f559
+size 406193
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_content_list.json b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..489a9ebfff7e7da172b46c40ab70d6e0ae574222
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87f2328fb23ac851e952507504d4ff5b935fefeb70ea5fdf88336cbcd200dab7
+size 78924
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_model.json b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f863a25c6f25bb869c38903d3dabedf91fa991f5
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd776cfb14dcafe5d13e6c705f428cbc5a1cd1668b3c5bf7c1bcaea64ce7c5c8
+size 96217
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_origin.pdf b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..20b8e62a044feb14113e5a1ed3671497515ad14e
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/6d3c8f2b-f0a0-44ec-a9d1-838ed341691b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a8fd76cd53880da8b72c5007559fa49062a1352519b50c41b16c2da837a5c97
+size 3058403
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/full.md b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1fd5d92c58d1c03b874f85f9ffa17725516b295
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/full.md
@@ -0,0 +1,345 @@
+# Adaptive Variance Based Label Distribution Learning For Facial Age Estimation
+
+Xin Wen $^{1,2}$ , Biying Li $^{1,2}$ , Haiyun Guo $^{1,3}$ , Zhiwei Liu $^{1,2}$ , Guosheng Hu $^{5}$ , Ming Tang $^{1}$ , and Jinqiao Wang $^{1,2,4}$
+
+$^{1}$ National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
+
+$^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
+
+$^{3}$ ObjectEye Inc., Beijing, China
+
+$^{4}$ NEXWISE Co., Ltd, Guangzhou, China
+
+$^{5}$ Anyvision
+
+{xin.wen,biying.li,haiyun.guo,zhiwei.liu,tangm,jqwang}@nlpr.ia.ac.cn, huguosheng100@gmail.com
+
+Abstract. Estimating age from a single facial image is a classic and challenging topic in computer vision. One of its most intractable issues is label ambiguity, i.e., face images of the same person at adjacent ages are often indistinguishable. Some existing methods adopt distribution learning to tackle this issue by exploiting the semantic correlation between age labels. However, most of them set the variance of the Gaussian label distribution to a fixed value for all images, even though the variance is closely related to the correlation between adjacent ages and should vary across ages and identities. To model a sample-specific variance, in this paper we propose an adaptive variance based distribution learning (AVDL) method for facial age estimation. AVDL achieves this through a data-driven optimization framework, meta-learning. Specifically, AVDL performs a meta gradient descent step on the variable of interest (the variance) to minimize the loss on a clean, unbiased validation set. By adaptively learning a proper variance for each sample, our method can approximate the true age probability distribution more effectively. Extensive experiments on the FG-NET and MORPH II datasets show the superiority of our approach over existing state-of-the-art methods.
+
+Keywords: age estimation, distribution learning, meta-learning
+
+# 1 Introduction
+
+Age estimation, which aims to predict a person's age from his or her facial image, is a challenging and active research topic. It has many potential applications, including demographic statistics collection, commercial user management and video security surveillance. However, numerous internal and external factors affect the estimation results, including race, illumination and image
+
+
+[Figure 1: four panels (a)-(d), each showing face images of one person at adjacent ages above the corresponding age probability distribution]
+
+Fig. 1. The motivation of the proposed method. In each subfigure, the age probability distribution in the lower part corresponds to the middle image in the upper part. The images above the dotted line belong to the same person, and so do the images below the dotted line. On the one hand, comparing (a) with (b), or (c) with (d), shows that the facial appearance variation between adjacent ages of the same person differs at different ages; correspondingly, the variance of the age probability distribution should differ across ages. On the other hand, comparing (b) with (c) shows that even at the same age the aging process differs between persons; thus the variance also varies across identities.
+
+quality and so on. Moreover, facial images from adjacent ages of the same person, especially for adults, usually look similar, resulting in label ambiguity.
+
+Recently, several deep learning methods have been proposed to improve the performance of facial age estimation. The most common methods model the face age prediction as a classification or a regression problem. The classification based methods treat each age as an independent class, which ignores the adjacent relationship between classes. Considering the continuity of age, regression methods predict age according to the extracted features. However, as presented by previous work [31, 33], the regression methods face the overfitting problem, which is caused by the randomness of the human aging process and the ambiguous mapping between facial appearance and the actual age. In addition, some ranking-based methods are proposed to achieve more accurate age estimation. Those approaches make use of individuals' ordinal information and employ multiple binary classifiers to determine the final age of the input image. Furthermore, Geng et al. [13, 8] propose the label distribution learning (LDL) method which assumes that the real age can be represented by a discrete distribution. As their experiments show, it can help improve age estimation using Kullback-Leibler
+
+(K-L) divergence to measure the similarity between the predicted and ground truth distribution.
+
+For the label distribution learning methods, the mean of the distribution is the ground truth age. However, the variance of the distribution is usually unknown for a face image. The previous methods often treat variance as a hyperparameter and simply set it to a fixed value for all images. We think these methods are suboptimal because the variance is highly related to the correlation between adjacent ages and should vary across different ages and different persons, as illustrated in Fig. 1. The assumption that all the images sharing the same variance potentially degrades the model performance.
+
+To tackle the above issues, in this paper we propose a novel adaptive variance based distribution learning (AVDL) method for age estimation. Specifically, we introduce meta-learning, which uses a validation set as the meta-objective and is applicable to online hyper-parameter adaptation [28], to model the sample-specific variance and thus better approximate the true age probability distribution. As Fig. 2 shows, we first select a small validation set. In each iteration, with a perturbing variable added to the variance, we use the K-L loss as the training loss to update the training model parameters. We then share the updated parameters with the validation model and compute an L1 loss between the predicted expectation age and the ground truth on the validation set as the meta-objective. Guided by this meta-objective, the perturbing variable is updated by gradient descent and adaptively finds a proper variance with which the model performs better on the validation set. The main contributions of this work can be summarized as follows:
+
+- We propose a novel adaptive variance based distribution learning (AVDL) method for facial age estimation. AVDL can effectively model the correlation between adjacent ages and better approximate the age label distribution.
+
+- Unlike the existing deep models which assume the variance across ages and identities is the same, we introduce a data-driven method, meta-learning, to learn the sample-specific variance. To our knowledge, we are the first deep model using meta-learning method to adaptively learn different variances for different samples.
+
+- Extensive experiments on FG-NET and MORPH II datasets show the superiority of our proposed approach to the existing state-of-the-art methods.
+
+# 2 Related Work
+
+# 2.1 Facial Age Estimation
+
+In recent years, with the rapid development of convolutional neural networks (CNNs) in computer vision tasks such as facial landmark detection [23], face recognition [38, 3], pedestrian attribute recognition [35] and semantic segmentation [46, 45], deep learning methods have also improved the performance of age estimation. Here we briefly review some representative works in the facial age estimation field. DEX [30] regarded facial age estimation as a classification problem and predicted ages with
+
+the expectation of ages weighted by the classification probability. Tan et al. [33] proposed an age group classification method called age group-n-encoding. However, these classification methods ignored the adjacent relationship between classes or groups. To overcome this, Niu et al. [24] proposed a multiple-output CNN learning algorithm that takes the ordinal information of ages into account. Shen et al. [32] proposed Deep Regression Forests, extending differentiable decision trees to regression. Furthermore, Li et al. [22] proposed BridgeNet, which consists of local regressors and gating networks, to effectively explore the continuous relationship between age labels. Tan et al. [34] proposed a Deep Hybrid-Aligned Architecture (DHAA) that consists of global, local and global-local branches jointly optimized with complementary information. Besides, Xie et al. [39] proposed two ensemble learning methods, both of which utilized ordinal regression modeling for age estimation.
+
+# 2.2 Distribution Learning
+
+Distribution learning was proposed to solve the problem of label ambiguity [10] and has been utilized in a number of recognition tasks, such as head pose estimation [12, 8] and age estimation [41, 20]. Geng et al. [13, 11] proposed two adaptive label distribution learning (ALDL) algorithms, IIS-ALDL and BFGS-ALDL, which iteratively learn the estimation function parameters and the label distribution variance. Although ALDL uses adaptive variance learning, our proposed method differs in three ways. Firstly, ALDL relies on traditional optimization methods such as BFGS, while ours uses deep CNNs. Secondly, ALDL selects better samples in the current training iteration to estimate the new variance, while our method uses meta-learning to obtain the adaptive variance. Thirdly, ALDL updates the variance only by estimating the training samples, which may cause overfitting; our adaptive variance is supervised by a validation set and is therefore more general. Label distribution learning has also been used to remedy the shortage of training data with exact ages. Hou et al. [20] proposed a semi-supervised adaptive label distribution learning method that uses unlabeled data to enhance the label distribution adaptation and find a proper variance for each age. However, aging tendencies vary, and the variances of people at the same age can differ. Gao et al. [9] jointly used LDL and expectation regression to alleviate the inconsistency between training and testing. Moreover, Pan et al. [25] proposed a mean-variance loss for robust age estimation. Li et al. [21] proposed a label distribution refinery to adaptively estimate age distributions without assumptions about the form of the label distribution, which barely takes into account the correlation of adjacent ages. In contrast, our method uses a Gaussian label distribution with an adaptively meta-learned variance, which pays more attention to neighboring ages and ordinal information.
+
+# 2.3 Meta-learning
+
+Our proposed AVDL is an instantiation of meta-learning [36, 1], i.e., learning to learn. According to the type of leveraged meta data, this concept can be
+
+classified into several types [37]: transferring knowledge from empirically similar tasks, transferring trained model parameters between tasks, building meta-models to learn data characteristics, and learning purely from model evaluations. Model-Agnostic Meta-Learning (MAML) [7] learns a model parameter initialization that performs well on target tasks. Guided by meta information, MAML takes one gradient descent step on the meta-objective to update the model parameters [16]. The idea of using validation loss as the meta-objective was applied in few-shot learning [27]. With reference to few-shot learning, Ren et al. [28] proposed a reweighting method (L2RW) guided by a validation set, which addresses the co-occurrence of data imbalance and label noise in the training set. The crucial ingredient of L2RW is a small, unbiased, clean validation set that supervises the learning of sample weights. Since validation performance measures the quality of hyper-parameters, taking it as the meta-objective applies not only to sample reweighting but to any online hyper-parameter adaptation task. Inspired by this, we propose AVDL, which combines validation-set-based meta-learning with label distribution learning to adaptively learn the label variance.
+
+# 3 Methodology
+
+In this section, we first describe the label distribution learning (LDL) method for age estimation. Then we introduce our adaptive variance based distribution learning (AVDL) method, which builds on a meta-learning framework.
+
+# 3.1 The Label Distribution Learning Problem Revisit
+
+Let $X$ denote an input image with ground truth label $y$, $y \in \{0, 1, \dots, 100\}$. The model is trained to predict a value as close to the ground truth label as possible. In traditional age estimation methods the ground truth is an integer, whereas in the LDL method, to express the ambiguity of labels, Gao et al. [8] transform the real value $y$ into a normal distribution $\mathbf{p}(y, \sigma)$ that serves as the new ground truth. The mean is set to the ground truth label $y$ and $\sigma$ is the variance of the normal distribution. Here we use boldface lowercase letters like $\mathbf{p}(y, \sigma)$ to denote vectors, and $p_k(y, \sigma)$ ($k \in [0, 100]$) to denote the $k$-th element of $\mathbf{p}(y, \sigma)$:
+
+$$
+p _ {k} (y, \sigma) = \frac {1}{\sqrt {2 \pi} \sigma} \exp \left(- \frac {(k - y) ^ {2}}{2 \sigma^ {2}}\right) \tag {1}
+$$
+
+where $p_k$ is the probability that the true age is $k$ years old. It represents the connection between the class $k$ and $y$ in a normal distribution view.
+
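As an illustration of Eq. (1), here is a minimal NumPy sketch of the discretized Gaussian label distribution; the renormalization over the 101 age classes is our own addition, since the discretized density does not sum exactly to 1:

```python
import numpy as np

def gaussian_label_distribution(y, sigma, n_classes=101):
    """Discretized Gaussian label distribution p(y, sigma) over ages 0..100 (Eq. 1),
    renormalized so that the class probabilities sum to 1."""
    k = np.arange(n_classes)
    p = np.exp(-((k - y) ** 2) / (2.0 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return p / p.sum()

# ground truth age 30 with variance 2.0: mass concentrates around class 30
p = gaussian_label_distribution(y=30, sigma=2.0)
```

A larger `sigma` spreads probability mass onto more neighboring ages, which is exactly the quantity AVDL later adapts per sample.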
+In the training process, let $G(\cdot, \theta)$ be the classification function of the estimation model, where $\theta$ represents the model parameters; $\mathbf{z}(X, \theta) = G(X, \theta)$ transforms the input $X$ into the classification vector $\mathbf{z}(X, \theta)$. A softmax function then transfers $\mathbf{z}(X, \theta)$ into a probability distribution $\hat{\mathbf{p}}(X, \theta)$, the $k$-th
+
+element of which can be denoted by:
+
+$$
+\hat {p} _ {k} (X, \theta) = \frac {\exp \left(z _ {k} (X , \theta)\right)}{\sum_ {n} \exp \left(z _ {n} (X , \theta)\right)} \tag {2}
+$$
+
+where $z_{k}(X,\theta)$ is the $k$ -th element of $\mathbf{z}(X,\theta)$ .
+
+LDL tries to make the predicted softmax probability distribution as similar to the ground truth distribution as possible. Hence the Kullback-Leibler (K-L) divergence is employed to measure the difference between the predicted and ground-truth distributions [8]:
+
+$$
+L _ {K L} (X, y, \theta , \sigma) = \sum_ {k} p _ {k} (y, \sigma) \ln \frac {p _ {k} (y , \sigma)}{\hat {p} _ {k} (X , \theta)} \tag {3}
+$$
+
+The K-L loss is then used to update the model parameters with the SGD optimizer.
+
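Eqs. (2)-(3) can be sketched as follows; the helper names and the numerical-stability constant `eps` are our own assumptions:

```python
import numpy as np

def softmax(z):
    """Eq. (2): turn a logit vector z into a probability distribution (max-shifted for stability)."""
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_loss(p, p_hat, eps=1e-12):
    """Eq. (3): K-L divergence between target distribution p and prediction p_hat."""
    return float(np.sum(p * np.log((p + eps) / (p_hat + eps))))
```

The loss is zero when prediction and target coincide and positive otherwise, which is what drives the predicted distribution toward the Gaussian target.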
+The LDL method constructs a normal distribution around the ground truth to approximate the real distribution, the key of which is the variance $\sigma$. In most LDL methods this hyper-parameter is simply set to a fixed value, typically 2.0. In fact, however, the variances for different people, or for the same person at different ages, are unlikely to be identical. We therefore propose a method to search for a proper variance for each image.
+
+# 3.2 Adaptive Distribution Learning Based on Meta-learning
+
+In machine learning, the loss on a validation set is one of the guides for adjusting hyper-parameters toward better generalization; a clean, unbiased validation set therefore helps train a more general model. However, the traditional training pipeline usually tunes hyper-parameters manually. Inspired by the meta-learning work of [28], we propose the adaptive variance based distribution learning (AVDL) algorithm, guided by a validation set, which offers an effective strategy to learn the sample-specific variance.
+
+As we mentioned in Section 3.1, the most important hyper-parameter of LDL is the variance $\sigma$ . Because our goal is to search for proper $\sigma$ of each image while training, in this section we use $\sigma$ to represent the variance vector for a batch of training data. The optimal $\sigma$ in each iteration depends on the optimal model parameter $\theta$ :
+
+$$
+\theta^ {*} (\sigma) = \arg \min _ {\theta} L _ {K L} \left(X _ {t r}, y _ {t r}, \theta , \sigma\right) \tag {4}
+$$
+
+$$
+\sigma^ {*} = \arg \min _ {\sigma , \sigma \geq 0} L _ {1} \left(X _ {v a l}, y _ {v a l}, \theta^ {*}, \sigma\right) \tag {5}
+$$
+
+where $L_{1}(X_{val},y_{val},\theta^{*},\sigma)$ denotes the validation loss, $X_{tr}$ is a training input image with label $y_{tr}$, and $X_{val}$ is a validation input image with label $y_{val}$. To solve this bi-level optimization problem, we divide the training process into several steps. Fig. 2 shows the computation graph of our proposed method.
+
+
+Fig. 2. Computation graph of AVDL in one iteration. The ground truth of each input image is transformed into a normal distribution. The model on top is for training and the other is for validation; the two share the network architecture and parameters. The training loss is the K-L loss and the validation loss is the L1 loss. Processes 1, 2 and 3 correspond to the traditional training steps. The perturbing variable $\xi$ is added to the initial distribution variance to obtain the variance $\sigma$. By applying the training gradient descent step $-\nabla \theta$, the training model parameter $\theta$ is updated to $\theta'$ and assigned to the validation model. Process 4 uses the gradient of the validation loss with respect to $\xi$ to obtain the modified $\xi'$ and $\sigma'$. Processes 5 and 6 show the improved forward and backward computation with the proper variance $\sigma'$.
+
+From the training set of $n$ images we choose a fixed number of correctly labeled images from each class to form a small unbiased validation set of $m$ images, $m \ll n$. We use $\sigma_{i}$ to denote the variance for the $i$-th image, and initialize the variances of all images to a fixed value $\sigma_{i0}$. To search for a proper variance, we perturb each $\sigma_{i}$ by $\xi_{i}$:
+
+$$
+\sigma_ {i} = \sigma_ {i 0} + \xi_ {i} \tag {6}
+$$
+
+where $\xi_{i}$ is the $i$-th component of the perturbing vector $\xi$, which is initialized to 0. Clearly, searching for a proper $\sigma$ is equivalent to searching for a proper $\xi$.
+
+Firstly, as processes 1, 2 and 3 in Fig. 2 show, in the $t$-th iteration the training batch computes the K-L loss described in Section 3.1 with the perturbed $\sigma$, and the model parameter $\theta_t$ is updated with SGD to obtain $\hat{\theta}_{t+1}$:
+
+$$
+\hat{\theta}_{t+1} = \theta_{t} - \alpha \nabla_{\theta} L_{KL}\left(X_{tr}, y_{tr}, \theta_{t}, \sigma\right) \tag{7}
+$$
+
+where $\alpha$ is the descent step size.
+
+The training loss operates on distributions. To compensate for the lack of a constraint on the final predicted age value, we adopt an L1 loss on the validation set to measure
+
+the distance between expectation age of prediction and the validation ground truth [9]:
+
+$$
+L _ {1} \left(X _ {v a l}, y _ {v a l}, \hat {\theta} _ {t + 1}, \xi\right) = \left| \hat {y} ^ {*} \left(X _ {v a l}, \hat {\theta} _ {t + 1}, \xi\right) - y _ {v a l} \right| \tag {8}
+$$
+
+$$
+\hat {y} ^ {*} \left(X _ {v a l}, \hat {\theta} _ {t + 1}, \xi\right) = \sum_ {k} \hat {p} _ {k} \left(X _ {v a l}, \hat {\theta} _ {t + 1}, \xi\right) l _ {k} \tag {9}
+$$
+
+where $\hat{p}_k$ is the $k$ -th element in the prediction vector of validation input $X_{val}$ and $l_k$ denotes the age value of the $k$ -th class, i.e. $l_k \in \mathcal{Y}$ . The expectation age computing method is also used for estimating test images in Section 4.
+
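A minimal sketch of the expectation age of Eq. (9) and the validation L1 loss of Eq. (8); the function names are our own choices:

```python
import numpy as np

def expected_age(p_hat):
    """Eq. (9): expectation of the predicted distribution, sum_k p_hat_k * l_k,
    with class k standing for age k years."""
    ages = np.arange(len(p_hat))
    return float(np.dot(p_hat, ages))

def l1_val_loss(p_hat, y_val):
    """Eq. (8): absolute difference between the expected age and the validation label."""
    return abs(expected_age(p_hat) - y_val)
```

The same expectation is used at test time to produce the final age estimate.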
+Algorithm 1 Adaptive Variance Based Distribution Learning
+Input: Training set $S_{tr} = \{X_{tr}, y_{tr}\}^{n}$; Validation set $S_{val} = \{X_{val}, y_{val}\}^{m}$, $m \ll n$; Initial model parameter $\theta_0$; Initial variance $\sigma_0$
+Output: Final model parameter $\theta_T$
+1: for $t = 0, 1, 2, \dots, T-1$ do
+2: Sample training batch $S_{tr,t}$ from $S_{tr}$
+3: Sample validation batch $S_{val,t}$ from $S_{val}$
+4: $\xi \gets 0$
+5: $\sigma \gets \sigma_0 + \xi$
+6: $L_{KL}(X_{tr},y_{tr},\theta_t,\sigma) \gets \mathrm{NetForward}(X_{tr},y_{tr},\theta_t,\sigma)$
+7: $\hat{\theta}_{t+1}(\sigma) \gets \theta_t - \alpha \nabla_{\theta} L_{KL}(X_{tr},y_{tr},\theta_t,\sigma)$ % $\hat{\theta}_{t+1}$ is a function of $\sigma$
+8: $L_{1}(X_{val},y_{val},\hat{\theta}_{t+1}(\sigma),\sigma) \gets \mathrm{NetForward}(X_{val},y_{val},\hat{\theta}_{t+1}(\sigma),\sigma)$
+9: $\hat{\xi} \gets \xi - \beta \nabla_{\xi} L_{1}(X_{val},y_{val},\hat{\theta}_{t+1}(\sigma),\sigma)$ % the gradient of the L1 loss w.r.t. $\xi$ equals its gradient w.r.t. $\sigma$
+10: $\hat{\sigma} \gets \sigma_0 + \hat{\xi}$ % modify $\sigma$ adaptively
+11: $\hat{L}_{KL}(X_{tr},y_{tr},\theta_t,\hat{\sigma}) \gets \mathrm{NetForward}(X_{tr},y_{tr},\theta_t,\hat{\sigma})$
+12: $\theta_{t+1} \gets \mathrm{SGD}(\hat{L}_{KL}(X_{tr},y_{tr},\theta_t,\hat{\sigma}), \theta_t)$ % update with the SGD optimizer
+13: end for
+
+A better hyper-parameter yields better validation performance. Accordingly, we update the perturbation $\xi$ with a gradient descent step:
+
+$$
+\hat {\xi} = \xi - \beta \nabla_ {\xi} L _ {1} \left(X _ {v a l}, y _ {v a l}, \hat {\theta} _ {t + 1}, \xi\right) \tag {10}
+$$
+
+where $\beta$ is the descent step size. This step corresponds to process 4 in Fig. 2. Due to the non-negativity restriction on $\sigma$, we normalize $\xi$ into the range $[-1, 1]$ using the mapping $\xi_i \to \frac{2\xi_i - \max(\xi) - \min(\xi)}{\max(\xi) - \min(\xi)}$, and then update the variance $\sigma$ according to Eq. (6). In the third stage of training, with the modified variance, we compute the forward K-L loss of the training input and update the model parameters with the SGD optimizer, as processes 5 and 6 in Fig. 2 show.
+
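The normalization mapping can be sketched as below; the guard for the degenerate case where all components of $\xi$ are equal is our own addition:

```python
import numpy as np

def normalize_xi(xi):
    """Map each component of the perturbation vector into [-1, 1] via
    xi_i -> (2*xi_i - max(xi) - min(xi)) / (max(xi) - min(xi)), as in Section 3.2."""
    lo, hi = float(xi.min()), float(xi.max())
    if hi == lo:  # all components equal: no spread to rescale (our guard)
        return np.zeros_like(xi)
    return (2 * xi - hi - lo) / (hi - lo)
```

With the perturbation bounded in $[-1, 1]$, a nonnegative initial variance of at least 1 keeps $\sigma = \sigma_0 + \xi$ nonnegative.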
+Step-by-step pseudo code is listed in Algorithm 1. Step 9 of Algorithm 1 involves a second-order gradient computation through the variable $\xi$, which the PyTorch autograd mechanism handles conveniently.
+
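One AVDL meta-iteration (Algorithm 1) can be sketched in PyTorch as follows; the tiny linear "backbone", the data sizes and the step sizes are illustrative stand-ins for the paper's ResNet-18 setup, not the actual implementation:

```python
import torch

torch.manual_seed(0)
n_classes, feat = 101, 16
W = torch.randn(feat, n_classes, requires_grad=True)      # model parameter theta
x_tr, y_tr = torch.randn(4, feat), torch.tensor([25, 30, 41, 60])
x_val, y_val = torch.randn(2, feat), torch.tensor([33, 50])
alpha, beta, sigma0 = 0.1, 0.1, 1.0
ages = torch.arange(n_classes, dtype=torch.float32)

xi = torch.zeros(4, requires_grad=True)                   # per-sample perturbation
sigma = sigma0 + xi                                       # Eq. (6)

# Gaussian target distributions with per-sample variance (Eq. 1), row-normalized
diff = ages.unsqueeze(0) - y_tr.unsqueeze(1).float()
p = torch.exp(-diff ** 2 / (2 * sigma.unsqueeze(1) ** 2))
p = p / p.sum(dim=1, keepdim=True)

log_q = torch.log_softmax(x_tr @ W, dim=1)                # Eq. (2)
kl = (p * (torch.log(p + 1e-12) - log_q)).sum(dim=1).mean()   # Eq. (3)

# one differentiable SGD step on theta (Eq. 7), keeping the graph so that
# theta_hat remains a function of xi
g_W, = torch.autograd.grad(kl, W, create_graph=True)
W_hat = W - alpha * g_W

# validation L1 loss on the expected age (Eqs. 8-9)
q_val = torch.softmax(x_val @ W_hat, dim=1)
y_hat = (q_val * ages).sum(dim=1)
l1 = (y_hat - y_val.float()).abs().mean()

# meta gradient step on xi (Eq. 10, step 9 of Algorithm 1)
g_xi, = torch.autograd.grad(l1, xi)
xi_new = (xi - beta * g_xi).detach()
sigma_new = sigma0 + xi_new                               # adapted variances
# steps 11-12 would now recompute the K-L loss with sigma_new and update W
```

The `create_graph=True` flag is what makes the second `autograd.grad` call differentiate through the inner SGD step, i.e. the second-order computation of step 9.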
+# 4 Experiments
+
+In this section, we first introduce the datasets used in the experiments, i.e., MORPH II [29], FG-NET [26] and IMDB-WIKI [31]. Then we detail the experimental settings. Next, we validate the superiority of our approach through comparisons with state-of-the-art facial age estimation methods. Finally, we conduct ablation studies on our method.
+
+# 4.1 Datasets
+
+MORPH II is the most popular dataset for age estimation. It contains 55,134 color facial images of 13,000 individuals whose ages range from 16 to 77. On this dataset, we employ three typical evaluation protocols. Setting I: 80-20 protocol. We randomly divide the dataset into two non-overlapping parts, $80\%$ for training and $20\%$ for testing. Setting II: Partial 80-20 protocol. Following the experimental setting in [33], we extract a subset of 5,493 facial images of people of Caucasian descent, which is split into $80\%$ for training and $20\%$ for testing. Setting III: S1-S2-S3 protocol. Similar to [33, 22], MORPH II is split into three non-overlapping subsets S1, S2 and S3, and all experiments are repeated twice: first train on S1 and test on $\mathrm{S2 + S3}$, then train on S2 and test on $\mathrm{S1 + S3}$. We report the MAE of each experiment and their average.
+
+FG-NET contains 1,002 color or gray facial images of 82 individuals whose ages range from 0 to 69. We follow the widely used leave-one-person-out (LOPO) protocol [25, 4] in our experiments and report the average performance over the 82 splits.
+
+IMDB-WIKI is the largest facial image dataset with age labels, consisting of 523,051 images in total: IMDB (460,723 images) and WIKI (62,328 images). We follow the practice in [22] and use this dataset to pretrain our model. Specifically, we remove non-face images and part of the multi-face images, leaving about 270,000 images.
+
+# 4.2 Implementation Details
+
+We use the detection algorithm in [44] to obtain the face detection box and five facial landmark coordinates, which are then used to align the input facial image of the network. We resize the input image to $224 \times 224$ .
+
+Following the settings in [9], we augment the face images with random horizontal flipping, scaling, rotating and translating during training time. For testing, we input both the image and its flipped version to the network, and then average their predictions as the final results.
+
+We adopt ResNet-18 [19] as our backbone network and pretrain the network on IMDB-WIKI dataset for better initialization. We use the SGD optimizer with
+
+batch size 32 to optimize the network. The weight decay and momentum are set to 0.0005 and 0.9, respectively. The initial learning rate is 0.01 and decays by a factor of 0.1 every 20 epochs. We set the initial variance of all images to 1, and train the deep convolutional neural network with PyTorch on 4 GTX TITAN X GPUs.
+
+Table 1. The comparisons between the proposed method and other state-of-the-art methods on MORPH II under Setting I. Bold indicates the best (* indicates the model was pre-trained on the IMDB-WIKI dataset; † indicates the model was pre-trained on the MS-Celeb-1M dataset [17])
+
+| Method | MAE | Parameters | Year |
+| --- | --- | --- | --- |
+| ORCNN [24] | 3.27 | 479.7K | 2016 |
+| RGAN* [6] | 2.61 | - | 2017 |
+| VGG-16 CNN + LDAE* [2] | 2.35 | 138M | 2017 |
+| SSR-Net* [40] | 3.16 | 40.9K | 2018 |
+| DRFs [32] | 2.17 | 138M | 2018 |
+| M-V Loss* [25] | 2.16 | 138M | 2018 |
+| DLDL-V2† [9] | 1.97 | 3.7M | 2018 |
+| C3AE* [43] | 2.75 | 39.7K | 2019 |
+| DHAA [34] | 1.91 | 100M | 2019 |
+| AVDL* | 1.94 | 11M | - |
+
+# 4.3 Evaluation Criteria
+
+Following previous works [31, 33], we measure age estimation performance by the Mean Absolute Error (MAE), i.e., the average of the absolute errors between the estimated age and the chronological age.
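The metric can be computed as in the short sketch below (our own helper, written for illustration):

```python
def mean_absolute_error(predicted_ages, true_ages):
    """MAE: average absolute difference between estimated and chronological ages."""
    assert len(predicted_ages) == len(true_ages)
    return sum(abs(p - t) for p, t in zip(predicted_ages, true_ages)) / len(true_ages)
```

Lower MAE indicates better age estimation.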
+
+# 4.4 Comparisons With State-of-the-arts
+
+On MORPH II. We first compare the proposed method with other state-of-the-art methods on the MORPH II dataset under Setting I, as illustrated in Table 1. We achieve the second best performance, slightly behind DHAA [34] by 0.03. It is worth noting that DHAA is a large and complex model whose parameter count is around 10 times larger than ours, even though it does not use an additional face dataset for pre-training. Moreover, using the same pre-training dataset, we surpass the M-V Loss by a significant margin of 0.22.
+
+Table 2 shows the test results under Setting II. We achieve the best performance, slightly better than BridgeNet [22] by 0.01, while using far fewer parameters; that is, we match their performance with significantly lower model complexity. As Table 3 shows, we achieve an MAE of 2.53 under Setting III, performing much better than the current state of the art. All of the above comparisons consistently demonstrate the effectiveness of the proposed method.
+
+Table 2. The comparisons between the proposed method and other state-of-the-art methods on MORPH II dataset (Setting II) and FG-NET dataset. Bold indicates the best (* indicates the model was pre-trained on the IMDB-WIKI dataset)
+
+| Method | MORPH II | FG-NET | Parameters | Year |
+| --- | --- | --- | --- | --- |
+| OHRANK [4] | 6.07 | 4.48 | - | 2011 |
+| CA-SVR [5] | 5.88 | 4.67 | - | 2013 |
+| Human [18] | 6.30 | 4.70 | - | 2015 |
+| DEX* [31] | 2.68 | 3.09 | 138M | 2018 |
+| DRFs [32] | 2.91 | 3.85 | 138M | 2018 |
+| M-V Loss* [25] | - | 2.68 | 138M | 2018 |
+| AGEn* [33] | 2.52 | 2.96 | 138M | 2018 |
+| C3AE* [43] | - | 2.95 | 39.7K | 2019 |
+| BridgeNet* [22] | 2.38 | 2.56 | 138M | 2019 |
+| DHAA* [34] | - | 2.59 | 100M | 2019 |
+| AVDL* | 2.37 | 2.32 | 11M | - |
+
+On FG-NET. As shown in Table 2, we compare our model with state-of-the-art models on FG-NET. Our method achieves the lowest MAE of 2.32, improving on the previous state of the art by a large margin of 0.24. The results show that our method is effective even when only a few training images are available.
+
+# 4.5 Ablation Study
+
+In this subsection, we conduct an ablation study on the MORPH II dataset under Setting I.
+
+The superiority of adaptive variance over a fixed variance value. We train a set of baseline models, which all adopt ResNet-18 and the K-L divergence loss but use different fixed variance values. Theoretically, a larger variance yields a smoother distribution, implying a stronger correlation within that age group; a smaller variance yields a sharper distribution and a weaker correlation. If the variance is set too high, i.e., the label distribution is too smooth, age estimation may not perform well. As Fig. 3 shows, the MAE increases with the variance once it exceeds 3, indicating worse performance. When the variance reduces to 0, the model assumes no correlation between ages, which resembles treating age estimation as a pure classification problem. However, considering the gradual change of the face during aging, making proper use of age correlation can help age estimation. As illustrated in Fig. 3, when the fixed variance is less than 3, the MAE fluctuates. This validates that setting a fixed variance is suboptimal, because the age correlation cannot be the same for different people at different ages. The best baseline performance is achieved with a variance of 3; however, it is still much worse than that of our proposed method, AVDL. In Fig. 3, we also show the performance of ResNet-18 trained with the cross-entropy loss, which is our baseline that treats age estimation as a classification task. In summary, Fig. 3 demonstrates the superiority of the adaptive variance. For each dataset and experimental setting, our approach is compared to the baseline method with a fixed variance. Since Fig. 3 shows that varying the fixed variance within a certain range has little impact on performance, and due to limited time, we only search for the best variance on MORPH II (Setting I) and apply it to the other experiments. The baseline MAEs with this fixed variance on MORPH II (Setting II), MORPH II (Setting III), and FG-NET are 2.66, 2.79, and 2.64, respectively.
+
+Table 3. The comparisons between the proposed method and other state-of-the-art methods on MORPH II under Setting III. Bold indicates the best (* indicates the model was pre-trained on the IMDB-WIKI dataset)
+
+| Method | Train | Test | MAE | Avg |
+| --- | --- | --- | --- | --- |
+| KPLS [14] | S1 | S2+S3 | 4.21 | 4.18 |
+| | S2 | S1+S3 | 4.15 | |
+| BIF+KCCA [15] | S1 | S2+S3 | 4.00 | 3.98 |
+| | S2 | S1+S3 | 3.95 | |
+| CPLF [42] | S1 | S2+S3 | 3.72 | 3.63 |
+| | S2 | S1+S3 | 3.54 | |
+| DRFs [32] | S1 | S2+S3 | - | 2.98 |
+| | S2 | S1+S3 | - | |
+| DOEL [39] | S1 | S2+S3 | - | 2.75 |
+| | S2 | S1+S3 | - | |
+| AGEn* [33] | S1 | S2+S3 | 2.82 | 2.70 |
+| | S2 | S1+S3 | 2.58 | |
+| BridgeNet* [22] | S1 | S2+S3 | 2.74 | 2.63 |
+| | S2 | S1+S3 | 2.51 | |
+| AVDL* | S1 | S2+S3 | 2.64 | 2.53 |
+| | S2 | S1+S3 | 2.41 | |
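For reference, the fixed-variance baselines label each image with a discrete Gaussian distribution over the age range and train with the K-L divergence. A minimal sketch of these two ingredients, in our own simplified form (not the paper's code):

```python
import math

def gaussian_label_distribution(true_age, ages, sigma):
    """Discrete Gaussian over the age range, centered at the true age."""
    weights = [math.exp(-((a - true_age) ** 2) / (2 * sigma ** 2)) for a in ages]
    total = sum(weights)
    return [w / total for w in weights]

def kl_divergence(target, predicted, eps=1e-12):
    """KL(target || predicted) between two discrete distributions."""
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(target, predicted))
```

A larger `sigma` spreads probability mass to neighboring ages (stronger assumed correlation); AVDL instead lets each sample's `sigma` be adapted by meta-learning.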
+
+The influence of different sample numbers in the validation set. As [28] shows, a balanced meta dataset can provide balanced class knowledge. For the same purpose, we choose an unbiased validation set as the meta dataset. To decide its composition, we try validation sets of different sizes: we randomly select 1, 2, or 3 images from each class in the training set to form the validation set. From Table 4, we find that the larger the validation set is, the better the model performs. However, since the entire validation set is used in each iteration, more time and memory are needed as its size increases. Considering the time and space cost, for each dataset setting, we randomly choose three images from each class to form the validation set.
+
+Fig. 3. The MAE results on MORPH II under Setting I. The blue line denotes the results of the baseline model trained with different fixed variances. The red line is the result of the baseline model trained with the cross-entropy loss and the green one is the result of AVDL
+
+Table 4. The performance comparison on selecting different numbers of facial images of each age to form the validation set
+
+| Number of images | 1 | 2 | 3 |
+| --- | --- | --- | --- |
+| MAE of AVDL | 1.98 | 1.96 | 1.94 |
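Forming the balanced validation set (three images per age class) might look like the following sketch; `dataset`, a mapping from age to image identifiers, and the seed are hypothetical:

```python
import random

def build_validation_set(dataset, per_class=3, seed=0):
    """Randomly pick `per_class` samples from each age class,
    yielding a class-balanced validation (meta) set."""
    rng = random.Random(seed)
    val = []
    for age, images in sorted(dataset.items()):
        val.extend((age, img) for img in rng.sample(images, per_class))
    return val
```

Because every class contributes the same number of samples, the resulting set is balanced regardless of the class imbalance in the training data.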
+
+# 4.6 Visualization and Discussion
+
+For better interpretability and credibility, we display some visual results of AVDL for age estimation and variance adaptation.
+
+We use the learned variances of samples to show the effectiveness of AVDL and to justify our motivation. Under Setting I on MORPH II, each age from 16 to 60 has a group of face images from different individuals, but no single person has images covering the full age range. We select images of several persons at different ages, together with their adapted variances, in Fig. 4(a). As [11] mentions, the age variances of younger and older people tend to be smaller than those of middle-aged people, and the variances vary between people in the same age group. Besides, Fig. 4(b) visualizes the adjusted variances in a mini-batch. The initial variance for each sample, as stated in Section 4.2, is set to 1. The learned variances are shown in the upper horizontal band, in which each block represents a sample and its color indicates the magnitude of the variance. The blocks are arranged from left to right according to the ages of the samples. The band below is the legend relating the magnitude of variance to the color of a block. Consistent with Fig. 4(a), the variance at young and old ages is smaller. Besides, the variances in the band fluctuate slightly, which shows that the variance differs between people.
+
+Fig. 4. Examples of age estimation results by AVDL. (a) shows some samples at different ages on MORPH II with adapted variances. According to the Gaussian curves, for younger and older people the variances tend to be smaller, while for middle age they are larger. (b) uses a heat map to visualize the adaptively learned variances $\sigma$ corresponding to different ages.
+
+# 5 Conclusions
+
+In this paper, we propose a novel method for age estimation, named adaptive variance based distribution learning (AVDL). AVDL introduces meta-learning to adaptively adjust the variance for each image within a single iteration. It achieves better performance than existing methods on multiple age estimation datasets. Our experiments also show that AVDL can guide the variance to approach the real facial aging pattern. The idea of using meta-learning to guide key hyper-parameters is promising, and we will explore more of its possibilities.
+
+# Acknowledgments
+
+This work was supported by the Key-Area Research and Development Program of Guangdong Province (No. 2019B010153001), the National Natural Science Foundation of China (No. 61772527, 61806200, 61976210), the China Postdoctoral Science Foundation (No. 2019M660859), and the Open Project of the Key Laboratory of the Ministry of Public Security for Road Traffic Safety (No. 2020ZDSYSKFKT04).
+
+# References
+
+1. Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M.W., Pfau, D., Schaul, T., Shillingford, B., De Freitas, N.: Learning to learn by gradient descent by gradient descent. In: Advances in neural information processing systems. pp. 3981-3989 (2016)
+2. Antipov, G., Baccouche, M., Berrani, S.A., Dugelay, J.L.: Effective training of convolutional neural networks for face-based gender and age prediction. Pattern Recognition 72, 15-26 (2017)
+3. Cao, D., Zhu, X., Huang, X., Guo, J., Lei, Z.: Domain balancing: Face recognition on long-tailed domains. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5671-5679 (2020)
+4. Chang, K.Y., Chen, C.S., Hung, Y.P.: Ordinal hyperplanes ranker with cost sensitivities for age estimation. In: Computer vision and pattern recognition (cvpr), 2011 IEEE conference on. pp. 585-592. IEEE (2011)
+5. Chen, K., Gong, S., Xiang, T., Change Loy, C.: Cumulative attribute space for age and crowd density estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2467-2474 (2013)
+6. Duan, M., Li, K., Li, K.: An ensemble cnn2elm for age estimation. IEEE Transactions on Information Forensics and Security 13(3), 758-772 (2017)
+7. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. pp. 1126-1135. JMLR.org (2017)
+8. Gao, B.B., Xing, C., Xie, C.W., Wu, J., Geng, X.: Deep label distribution learning with label ambiguity. IEEE Transactions on Image Processing 26(6), 2825-2838 (2017)
+9. Gao, B.B., Zhou, H.Y., Wu, J., Geng, X.: Age estimation using expectation of label distribution learning. In: IJCAI. pp. 712-718 (2018)
+10. Geng, X.: Label distribution learning. IEEE Transactions on Knowledge and Data Engineering 28(7), 1734-1748 (2016)
+11. Geng, X., Wang, Q., Xia, Y.: Facial age estimation by adaptive label distribution learning. In: 2014 22nd International Conference on Pattern Recognition. pp. 4465-4470. IEEE (2014)
+12. Geng, X., Xia, Y.: Head pose estimation based on multivariate label distribution. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1837-1842 (2014)
+13. Geng, X., Yin, C., Zhou, Z.H.: Facial age estimation by learning from label distributions. IEEE transactions on pattern analysis and machine intelligence 35(10), 2401-2412 (2013)
+14. Guo, G., Mu, G.: Simultaneous dimensionality reduction and human age estimation via kernel partial least squares regression. In: CVPR 2011. pp. 657-664. IEEE (2011)
+15. Guo, G., Mu, G.: Joint estimation of age, gender and ethnicity: Cca vs. pls. In: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). pp. 1-6. IEEE (2013)
+16. Guo, J., Zhu, X., Zhao, C., Cao, D., Lei, Z., Li, S.Z.: Learning meta face recognition in unseen domains. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6163-6172 (2020)
+17. Guo, Y., Zhang, L., Hu, Y., He, X., Gao, J.: Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In: European conference on computer vision. pp. 87-102. Springer (2016)
+
+18. Han, H., Otto, C., Liu, X., Jain, A.K.: Demographic estimation from face images: Human vs. machine performance. IEEE transactions on pattern analysis and machine intelligence 37(6), 1148-1161 (2014)
+19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
+20. Hou, P., Geng, X., Huo, Z.W., Lv, J.Q.: Semi-supervised adaptive label distribution learning for facial age estimation. In: Thirty-First AAAI Conference on Artificial Intelligence (2017)
+21. Li, P., Hu, Y., Wu, X., He, R., Sun, Z.: Deep label refinement for age estimation. Pattern Recognition 100, 107178 (2020)
+22. Li, W., Lu, J., Feng, J., Xu, C., Zhou, J., Tian, Q.: Bridgenet: A continuity-aware probabilistic network for age estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1145-1154 (2019)
+23. Liu, Z., Zhu, X., Hu, G., Guo, H., Tang, M., Lei, Z., Robertson, N.M., Wang, J.: Semantic alignment: Finding semantically consistent ground-truth for facial landmark detection. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3462-3471 (2019)
+24. Niu, Z., Zhou, M., Wang, L., Gao, X., Hua, G.: Ordinal regression with multiple output cnn for age estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4920-4928 (2016)
+25. Pan, H., Han, H., Shan, S., Chen, X.: Mean-variance loss for deep age estimation from a face. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5285-5294 (2018)
+26. Panis, G., Lanitis, A., Tsapatsoulis, N., Cootes, T.F.: Overview of research on facial ageing using the fg-net ageing database. Iet Biometrics 5(2), 37-46 (2016)
+27. Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676 (2018)
+28. Ren, M., Zeng, W., Yang, B., Urtasun, R.: Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050 (2018)
+29. Ricanek, K., Tesafaye, T.: Morph: A longitudinal image database of normal adult age-progression. In: 7th International Conference on Automatic Face and Gesture Recognition (FGR06). pp. 341-345. IEEE (2006)
+30. Rothe, R., Timofte, R., Van Gool, L.: Dex: Deep expectation of apparent age from a single image. In: The IEEE International Conference on Computer Vision (ICCV) Workshops (December 2015)
+31. Rothe, R., Timofte, R., Van Gool, L.: Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision 126(2-4), 144-157 (2018)
+32. Shen, W., Guo, Y., Wang, Y., Zhao, K., Wang, B., Yuille, A.L.: Deep regression forests for age estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2304-2313 (2018)
+33. Tan, Z., Wan, J., Lei, Z., Zhi, R., Guo, G., Li, S.Z.: Efficient group-n encoding and decoding for facial age estimation. IEEE transactions on pattern analysis and machine intelligence 40(11), 2610-2623 (2017)
+34. Tan, Z., Yang, Y., Wan, J., Guo, G., Li, S.Z.: Deeply-learned hybrid representations for facial age estimation. In: IJCAI. pp. 3548-3554 (2019)
+35. Tan, Z., Yang, Y., Wan, J., Wan, H., Guo, G., Li, S.: Attention-based pedestrian attribute analysis. IEEE Transactions on Image Processing PP, 1-1 (07 2019). https://doi.org/10.1109/TIP.2019.2919199
+
+36. Thrun, S., Pratt, L.: Learning to learn. Springer Science & Business Media (2012)
+37. Vanschoren, J.: Meta-learning: A survey. arXiv preprint arXiv:1810.03548 (2018)
+38. Wang, G., Han, H., Shan, S., Chen, X.: Cross-domain face presentation attack detection via multi-domain disentangled representation learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2020)
+39. Xie, J.C., Pun, C.M.: Deep and ordinal ensemble learning for human age estimation from facial images. IEEE Transactions on Information Forensics and Security 15, 2361-2374 (2020)
+40. Yang, T.Y., Huang, Y.H., Lin, Y.Y., Hsiu, P.C., Chuang, Y.Y.: Ssr-net: A compact soft stagewise regression network for age estimation. In: IJCAI. vol. 5, p. 7 (2018)
+41. Yang, X., Gao, B.B., Xing, C., Huo, Z.W., Wei, X.S., Zhou, Y., Wu, J., Geng, X.: Deep label distribution learning for apparent age estimation. In: Proceedings of the IEEE international conference on computer vision workshops. pp. 102-108 (2015)
+42. Yi, D., Lei, Z., Li, S.Z.: Age estimation by multi-scale convolutional network. In: Asian conference on computer vision. pp. 144-158. Springer (2014)
+43. Zhang, C., Liu, S., Xu, X., Zhu, C.: C3ae: Exploring the limits of compact model for age estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12587-12596 (2019)
+44. Zhao, X., Liang, X., Zhao, C., Tang, M., Wang, J.: Real-time multi-scale face detector on embedded devices. Sensors 19(9), 2158 (2019)
+45. Zhu, B., Chen, Y., Tang, M., Wang, J.: Progressive cognitive human parsing. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
+46. Zhu, B., Chen, Y., Wang, J., Liu, S., Zhang, B., Tang, M.: Fast deep matting for portrait animation on mobile phone. In: Proceedings of the 25th ACM international conference on Multimedia. pp. 297-305 (2017)
\ No newline at end of file
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/images.zip b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a615b3ba3688e5a1c93b04b1171d9b7155b9bcef
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bbf38248781b1111f273ef525e3bb431362f70edc23fc7fcad280f8e919d59d
+size 370018
diff --git a/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/layout.json b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..69e2c13e258ae74a2f4dc3ed58e0cf36e1fcfe56
--- /dev/null
+++ b/adaptivevariancebasedlabeldistributionlearningforfacialageestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ed6d17290b44cf5b0ea4328899fe21267bbb4e598c4ca9ecec1b87460245172
+size 437371
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_content_list.json b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f2c8f3cd81ddc8f001852298e9a1358a87347a64
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84c75ebec5f1b1a4dbea3eff680182a444a036d96a3c69fc3b985c037432b046
+size 77994
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_model.json b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..96c5a0b60c519d697f86cb0722c5d9c5816e9c64
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7095ed5d5708733e34720f87c911db23f936ebc52180aa77ef876ff0ac9940d3
+size 97283
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_origin.pdf b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b2291dfb18c99e98d579c6a88e147b31203ee48a
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/37439d5a-404b-468b-a8a8-a5a429aa1bd9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad1bfe73fc86cf4181e9aa10e7dbfd86f444c32dd77b8f4229629966dcd60a1d
+size 2044657
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/full.md b/adaptivevideohighlightdetectionbylearningfromuserhistory/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fee3b5374a13a1053bdde610d2d0156a2cf0b941
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/full.md
@@ -0,0 +1,292 @@
+# Adaptive Video Highlight Detection by Learning from User History
+
+Mrigank Rochan $^{1[0000-0001-9513-6573]}$ , Mahesh Kumar Krishna Reddy $^{1[0000-0001-5645-4931]}$ , Linwei Ye $^{1[0000-0002-7375-452X]}$ , and Yang Wang $^{1,2[0000-0001-9447-1791]}$
+
+1 University of Manitoba, Winnipeg MB R3T 2N2, Canada
+
+$^{2}$ Huawei Technologies, Canada
+
+{mrochan, kumarkm, yel3, ywang}@cs.umanitoba.ca
+
+Abstract. Recently, there is an increasing interest in highlight detection research where the goal is to create a short duration video from a longer video by extracting its interesting moments. However, most existing methods ignore the fact that the definition of video highlight is highly subjective. Different users may have different preferences of highlight for the same input video. In this paper, we propose a simple yet effective framework that learns to adapt highlight detection to a user by exploiting the user's history in the form of highlights that the user has previously created. Our framework consists of two sub-networks: a fully temporal convolutional highlight detection network $H$ that predicts highlight for an input video and a history encoder network $M$ for user history. We introduce a newly designed temporal-adaptive instance normalization (T-AIN) layer to $H$ where the two sub-networks interact with each other. T-AIN has affine parameters that are predicted from $M$ based on the user history and is responsible for the user-adaptive signal to $H$ . Extensive experiments on a large-scale dataset show that our framework can make more accurate and user-specific highlight predictions.
+
+Keywords: Video highlight detection $\cdot$ User-adaptive learning
+
+# 1 Introduction
+
+There is a proliferation in the amount of video data captured and shared every day. It has given rise to multifaceted challenges, including editing, indexing and browsing of this massive amount of video data. This has drawn the attention of the research community to building automated video highlight detection tools. The goal of highlight detection is to reduce an unedited video to its interesting visual moments and events. A robust highlight detection solution can enhance the video browsing experience by providing quick video previews, facilitating video sharing on social media and assisting video recommendation systems.
+
+Even though we have made significant progress in highlight detection, existing methods lack the ability to adapt their predictions to users. The main thrust of research in highlight detection has been on building generic models. However, different users have different preferences in terms of detected highlights [25, 34]. Generic highlight detection models ignore the fact that the definition of a video highlight is inherently subjective and depends on each individual user's preference. This can greatly limit the adoption of these models in real-world applications. In Fig. 1, we illustrate the subjective nature of highlights. The input video contains events such as cycling, cooking, and eating. A generic highlight detection model mainly predicts the cycling event as the highlight. But if we examine the user's history (highlights previously created by the user), we can infer that this user is interested in cooking scenes. Therefore, a highlight detection model should instead predict the cooking event as the highlight. Motivated by this observation, we propose an adaptive highlight detection model that explicitly takes the user's history into consideration when generating highlights.
+
+
+Fig. 1. The definition of highlight of a video is inherently subjective and depends on each user's preference. In contrast to a generic highlight detection model, an adaptive highlight detection model (like ours) incorporates a user's previously created highlights (e.g., GIFs from multiple videos) when predicting highlights of an input video. This allows the model to make more accurate and user-specific highlight predictions.
+
+Although a user's visual highlight history can provide a stronger and more reliable cue of their interests than non-visual meta-data [25], there is very limited research on adapting highlight detection using this visual information. To the best of our knowledge, the recent work by Molino and Gygli [25] is the only prior work on this topic. Their method considers a user's previously created highlights (available as $\mathrm{GIFs}^1$) from multiple videos when generating new highlights for that user. They propose a ranking model that predicts a higher score for interesting video segments as opposed to non-interesting ones while conditioning on the user's past highlights (i.e., the user's history). However, their method has some limitations. Firstly, it operates at the segment level and samples a fixed number of positive and negative segments from a video for learning. This means that the method does not process an entire video, which is essential to capture the temporal dependencies shown to be vital in many video understanding tasks [19, 24, 32, 50]. Moreover, it is sensitive to the number of positive and negative samples used in learning. Secondly, it requires precomputing shot boundaries using a shot detection algorithm [8] to sample positive/negative video segments, which makes their pipeline computationally complex and expensive. Lastly, their model directly combines the user's history features with the features of sampled video segments to predict user-specific highlights. We demonstrate in our experiments that this is not as effective as our proposed model.
+
+In this paper, we introduce a novel user-adaptive highlight detection framework that is simple and powerful. Given an input video for highlight detection and the user's history (highlight GIFs from multiple videos that the user has previously created), our model seamlessly combines the user's history information with the input video to modulate highlight detection and make user-specific, more precise highlight predictions. Our framework consists of two sub-networks: a fully temporal convolutional highlight detection network $H$ which produces the highlight for an input video, and a history encoder network $M$ that encodes the user's history information. We propose temporal-adaptive instance normalization (T-AIN), a conditional temporal instance normalization layer for videos. We introduce T-AIN layers to $H$, where the interaction between the two sub-networks $H$ and $M$ takes place. T-AIN layers have affine transformation parameters that are predicted from $M$ based on the user's history. In other words, $M$ acts as a guiding network for $H$. Through the adjustable affine parameters in T-AIN layers, $H$ can adapt highlight predictions to different users based on the preferences expressed in their histories. Note that our method does not require expensive shot detection. Moreover, it can utilize an entire video for learning instead of a few sampled video segments.
+
+To summarize, the main contributions of our paper are the following. (1) We study user-adaptive highlight detection using user history in the form of highlights previously created by the user. This problem has many commercial applications, but is not well studied in the literature. (2) Different from the ranking models [11, 25, 45, 46] commonly used in highlight detection, we are the first to employ a fully temporal convolutional model for highlight detection. Our model does not require an expensive shot detection algorithm and can process an entire video at once, which makes it operationally simpler. (3) We propose the temporal-adaptive instance normalization (T-AIN) layer for videos. T-AIN layers have adjustable affine parameters which allow the highlight detection network to adapt to different users. (4) We experiment on a large-scale dataset and demonstrate the effectiveness of our approach. (5) We further explore the application of our model as a pre-trained model for video summarization, a task closely related to highlight detection.
+
+# 2 Related Work
+
+Highlight detection aims to identify key events in a video that a user is likely to find interesting. Towards this, we exploit the highlights that the user has created in the past. In this section, we discuss several lines of related work.
+
+Our work is closely related to existing highlight detection methods where the goal is to extract the interesting moments from a long duration video [25, 39]. Many prior methods [11, 14, 25, 36, 46, 47] employ a ranking formulation. They learn to score interesting segments of a video higher than non-interesting segments. Our work is particularly related to [11, 25]. Gygli et al. [11] propose a GIF creation technique using a highlight detection method. The limitation of this method is that it is a generic highlight detector, whereas we propose a model that is capable of making user-specific predictions. Molino and Gygli [25] propose a model that takes a user's history as an input to make personalized predictions. Their method is a ranking model that operates on a few sampled positive and negative segments of a video combined with the user's highlight history to learn a personalized model. Different from them, our proposed highlight detection model is convolutional in nature. It can accept an entire video (of any length) and the user's history of different sizes as an input to produce a personalized highlight. Unlike [25], we do not require shot detection as pre-processing. Lastly, we develop a more effective way to leverage user history representation in the network instead of directly concatenating with the input video features [25].
+
+This paper is also connected to research in video summarization. Different from highlight detection, video summarization aims to generate a concise overview of a video [25, 39]. Early research [16, 17, 20, 23, 24, 26, 28, 35, 54, 32, 51] on summarization mainly develops unsupervised methods, which design hand-crafted heuristics to satisfy properties such as diversity and representativeness in the generated summary. Weakly supervised methods also exist that exploit cues from web images or videos [4, 16, 17, 31, 35] and video category details [27, 30] for summarization. Supervised methods [7, 9, 10, 32, 49, 50, 53] have been shown to achieve superior performance because they learn directly from annotated training data with human-generated summaries. However, these methods do not consider each user's preference. In theory, it is possible to make user-adaptive predictions by performing user-specific training, but this is expensive in practice as it would result in a per-user model [25]. Instead, we learn a single model that can be adapted to different users by incorporating their histories. Moreover, we do not need to retrain the model for a new user; adaptation to a new user only involves some simple feed-forward computation. Lastly, in video summarization, there is some work that focuses on personalization, but it either uses meta-data [1, 2, 13, 37] or considers a user's textual query [22, 33, 41, 52] to personalize the video summary. Different from these, our method operates on visual features from the user's video and history to make user-specific predictions.
+
+Finally, our approach is partly inspired by recent research on style transfer in images using conditional normalization layers, e.g., adaptive instance normalization [12] and conditional batch normalization [6]. Apart from style transfer, both layers have been applied in many other computer vision problems [3, 5, 15, 21, 29, 43]. These layers first normalize the activations to zero mean and unit variance, then adjust them via affine transformation parameters inferred from external data [29]. Since these layers are designed to uniformly modulate the activations spatially, their applications are limited to images. In contrast, in this paper, we propose a normalization layer that uniformly modulates the activations temporally, making it appropriate for video understanding tasks.
+
+# 3 Our Approach
+
+Given a video $\mathbf{v}$ , we represent each frame as a $D$ -dimensional feature vector, so the video can be represented as a tensor of dimension $1 \times T \times D$ , where $T$ is the number of frames and $D$ is the dimension of each frame's feature vector. $T$ varies for different videos. For a user $U$ , let $\mathcal{H} = \{h_1, \dots, h_n\}$ denote the user's history, i.e., a collection of visual highlights that the user has created in the past.
+
+Given $\mathbf{v}$ and $\mathcal{H}$ , our goal is to learn a mapping function $G$ that predicts two scores for each frame in $\mathbf{v}$ indicating it being non-highlight and highlight. Thus, the final output $S$ is of dimension $1 \times T \times 2$ for the input video $\mathbf{v}$ :
+
+$$
+S = G(\mathbf{v}, \mathcal{H}) = G(\mathbf{v}, \{h_{1}, \dots, h_{n}\}). \tag{1}
+$$
+
+We refer to $G$ as the adaptive highlight detector.
+
+# 3.1 Background: Temporal Convolution Networks
+
+In recent years, temporal convolution networks (e.g., FCSN [32]) have been shown to achieve impressive performance on video understanding tasks. These networks mainly perform 1D operations (e.g., 1D convolution, 1D pooling) over the temporal dimension (i.e., over the frames of a video), analogous to the 2D operations (e.g., 2D convolution, 2D pooling) commonly used in CNN models for image-related tasks. For example, the work in [32] uses temporal convolution networks for video summarization, where the task is formulated as predicting a binary label for each frame in a video. Our proposed model is based on the temporal convolution network proposed in [32], but we extend it to perform user-specific video highlight detection.
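
To make the analogy concrete, a 1D temporal convolution slides a kernel along the frame axis instead of over image rows and columns. Below is a minimal numpy sketch of this operation; the function name, shapes, and toy sizes are illustrative assumptions of ours, not the FCSN implementation:

```python
import numpy as np

def temporal_conv(x, w, b):
    """'Valid' 1D convolution over the frame axis, followed by ReLU.

    x: (T, D) frame features; w: (k, D, C) kernel; b: (C,) bias.
    Returns (T - k + 1, C) activations.
    """
    k = w.shape[0]
    out = np.stack([
        np.einsum('kd,kdc->c', x[t:t + k], w) + b
        for t in range(x.shape[0] - k + 1)
    ])
    return np.maximum(out, 0.0)  # ReLU

# toy input: 10 frames of 4-d features, kernel size 3, 3 output channels
rng = np.random.default_rng(0)
y = temporal_conv(rng.normal(size=(10, 4)), rng.normal(size=(3, 4, 3)), np.zeros(3))
print(y.shape)  # (8, 3)
```

Stacking such layers (with temporal pooling in between) yields a receptive field covering many frames, which is what lets these networks capture long-term temporal dynamics.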
+
+# 3.2 Temporal-Adaptive Instance Normalization
+
+Let $\mathbf{o}^i$ denote the activations of the $i$ -th layer in the temporal convolution network $(f_{T})$ for the input video $\mathbf{v}$ . We use $C^i$ and $T^i$ to denote the number of channels and the temporal length of the activations in that layer, respectively. We define Temporal-Adaptive Instance Normalization (T-AIN), a conditional normalization layer for videos. T-AIN is inspired by the basic principles of Instance Normalization [40]: the activation is first normalized channel-wise along the temporal dimension (obtaining $\mathbf{o}_{norm}^i$ ), followed by a uniform modulation with an affine transformation. Different from InstanceNorm [40], the affine parameters (scale and bias) in T-AIN are not learnable but are inferred from external data (i.e., a user's history $\mathcal{H}$ in our case), which is encoded into a vector $m$ using another temporal convolution network $(g_{T})$ . Thus, T-AIN is also conditional (on $\mathcal{H}$ ) in nature. The activation value from T-AIN at location $c \in C^i$ and $t \in T^i$ is
+
+$$
+\left(\frac{o_{c,t}^{i} - \mathrm{E}[o_{c}^{i}]}{\sqrt{\operatorname{Var}[o_{c}^{i}] + \epsilon}}\right) \gamma_{c}^{i} + \delta_{c}^{i}, \tag{2}
+$$
+
+where $o_{c,t}^{i}$ , $\operatorname{E}[o_c^i]$ and $\operatorname{Var}[o_c^i]$ are the activation before normalization, and the expectation and variance of the activation $\mathbf{o}^i$ in channel $c$ , respectively. T-AIN computes $\operatorname{E}[o_c^i]$ and $\operatorname{Var}[o_c^i]$ along the temporal dimension, independently for each channel and each input sample (video), as:
+
+$$
+\mathrm{E}\left[o_{c}^{i}\right] = \mu_{c}^{i} = \frac{1}{T^{i}} \sum_{t} o_{c,t}^{i}, \tag{3}
+$$
+
+$$
+\operatorname{Var}\left[o_{c}^{i}\right] = \mathrm{E}\left[\left(o_{c}^{i} - \mathrm{E}\left[o_{c}^{i}\right]\right)^{2}\right] = \frac{1}{T^{i}} \sum_{t} \left(o_{c,t}^{i} - \mu_{c}^{i}\right)^{2}. \tag{4}
+$$
+
+In Eq. 2, $\gamma_c^i$ and $\delta_c^i$ are the modulation parameters of the T-AIN layer. We obtain $\gamma_c^i$ and $\delta_c^i$ from the encoded vector $m$ generated from the external data. T-AIN first scales each value along the temporal length in channel $c$ of the temporally normalized activations $\mathbf{o}_{norm}^i$ by $\gamma_c^i$ and then shifts it by $\delta_c^i$ . Similar to InstanceNorm, these statistics vary across the channels $C^i$ . We provide details on how we compute these parameters when using the user's history $\mathcal{H}$ as external data in Sec. 3.3. Fig. 2 visualizes the operations in a T-AIN layer.
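
Eqs. 2-4 can be sketched in a few lines of numpy. This is a hypothetical single-sample sketch of ours (the function name `t_ain`, the shapes, and the toy conditioning vector `m` are our assumptions, not the authors' implementation):

```python
import numpy as np

def t_ain(o, m, eps=1e-5):
    """Temporal-adaptive instance normalization for one sample (Eqs. 2-4).

    o: (C, T) activations of the i-th layer.
    m: (2C,) conditioning vector inferred from external data,
       split into per-channel scale gamma and bias delta.
    """
    C = o.shape[0]
    gamma, delta = m[:C], m[C:]
    mu = o.mean(axis=1, keepdims=True)   # Eq. 3: per-channel temporal mean
    var = o.var(axis=1, keepdims=True)   # Eq. 4: per-channel temporal variance
    o_norm = (o - mu) / np.sqrt(var + eps)
    return gamma[:, None] * o_norm + delta[:, None]  # Eq. 2

rng = np.random.default_rng(1)
o = rng.normal(loc=3.0, scale=2.0, size=(4, 16))   # C=4 channels, T=16 steps
m = np.concatenate([np.ones(4), np.zeros(4)])      # identity modulation
y = t_ain(o, m)
print(np.allclose(y.mean(axis=1), 0.0))  # True: each channel is zero-mean
```

Note that `gamma` and `delta` are applied uniformly over the temporal axis, which is the defining property that distinguishes T-AIN from the spatially modulating CBN/AIN layers.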
+
+
+Fig. 2. Overview of a temporal-adaptive instance normalization layer (T-AIN). For an input video $\mathbf{v}$ , let $\mathbf{o}^i$ be the activation map with channel dimension $C^i$ and temporal length $T^i$ in the $i$ -th layer of a temporal convolutional network $f_{T}$ . Let $g_{T}$ be another temporal convolutional network that encodes external data (e.g., a user's history $\mathcal{H}$ ) into a vector representation $m$ of dimension $2C^i$ . T-AIN first temporally normalizes $\mathbf{o}^i$ in each channel to obtain $\mathbf{o}_{norm}^i$ . It then uniformly scales and shifts $\mathbf{o}_{norm}^i$ in each channel $c$ (where $c \in C^i$ ) over time by $\gamma_c^i$ and $\delta_c^i$ , respectively. The values of $\gamma_c^i$ and $\delta_c^i$ are obtained from $m$ . As can be seen, the main characteristics of T-AIN are its temporal operation, its lack of learnable affine parameters, and its conditioning on external data.
+
+T-AIN is related to conditional batch normalization (CBN) [6] and adaptive instance normalization (AIN) [12]. The main difference is that CBN and AIN operate spatially, so they are appropriate for image-related tasks like style transfer. In contrast, T-AIN is designed to operate along time, which makes it suitable for video highlight detection and video understanding tasks in general.
+
+# 3.3 Adaptive Highlight Detector
+
+The adaptive highlight detector $G$ consists of two sub-networks: a highlight detection network $H$ and a history encoder network $M$ . $H$ is responsible for scoring each frame in an input video to indicate whether or not it should be included in the highlight. The role of $M$ is to first encode the user's history information and then guide $H$ such that the generated highlight is adapted to the user's history. Next, we discuss the design and learning of these sub-networks in detail.
+
+Highlight Detection Network. The highlight detection network $H$ is based on FCSN [32]. It is an encoder-decoder architecture that is fully convolutional in nature. One advantage of this network is that it is not restricted to fixed-length input videos; it can handle videos of arbitrary length. Another advantage is that it is effective at capturing the long-term temporal dynamics among the frames of a video, which is beneficial for video understanding tasks such as highlight detection.
+
+$H$ accepts a video $\mathbf{v}$ with a feature representation of dimension $1\times T\times D$ , where $T$ is the number of frames in $\mathbf{v}$ and $D$ is the dimension of each frame's feature vector. It produces an output of dimension $1\times T\times 2$ indicating non-highlight and highlight scores for the $T$ frames.
+
+The encoder $(F_v)$ of $H$ is a stack of seven convolutional blocks. The first five blocks (conv-blk1 to conv-blk5) consist of several temporal convolutions, each followed by a ReLU, and a temporal max pooling operation. The last two blocks (conv6 and conv7) have a temporal convolution followed by a ReLU and a dropout layer. The encoder $F_v$ produces two outputs: a feature map from its last layer and a skip connection from block conv-blk4.
+
+Both outputs of the encoder $F_{v}$ are fed to the decoder network $(D_v)$ . We introduce T-AIN layers at the sites where these two outputs enter $D_{v}$ . We obtain a feature map by applying a $1 \times 1$ convolution and a temporal fractionally-strided convolution (deconv1) to the output of the first T-AIN layer, and add it to the feature map obtained by applying a $1 \times 1$ convolution to the output of the second T-AIN layer. Finally, we apply another fractionally-strided temporal convolution (deconv2) to obtain the final prediction of shape $1 \times T \times 2$ , denoting two scores (non-highlight and highlight) for each frame in the video. Fig. 3 (top) visualizes the architecture of $H$ .
+
+History Encoder Network. The history encoder network $M$ is an integral piece of our framework. It guides the highlight detection network $H$ by adaptively computing the affine transformation parameters of its T-AIN layers. Using these affine parameters, $M$ modulates the activations in $H$ to produce adaptive highlight predictions.
+
+The configuration of this network is the same as that of the encoder $F_{v}$ in $H$ , with a few changes towards the end. It applies average temporal pooling to the output of the convolution blocks, which is combined with a skip connection from the input and then fed to a fully-connected layer. The skip connection involves an average pooling and a fully-connected operation to match dimensions.
+
+
+Fig. 3. Overview of our proposed model, Adaptive-H-FCSN. The model consists of two sub-networks: a highlight detection network $H$ and a history encoder network $M$ . $H$ is an encoder-decoder architecture that takes a frame-level vector feature representation of a user input video with $T$ frames. It then generates scores (highlight vs. non-highlight) for each frame in the video while taking information from $M$ . $M$ takes a vector feature representation of each element (i.e., each highlight the user has previously created) in the user's history as input and encodes it into a vector $\mathbf{z}_h$ . This vector $\mathbf{z}_h$ is then fed to a fully connected layer $FC$ to produce the affine parameters $\gamma_j$ and $\delta_j$ of the $j$ -th T-AIN layer in the decoder $D_v$ , where $j = 1,2$ . This way, the highlight detection for the input video is adapted to the user.
+
+The network accepts the user's history collection $\mathcal{H}$ of shape $1\times n\times D$ as input, where $n$ is the number of elements (highlights) in the user's history and $D$ is the dimension of the vector representation of each element. In our implementation, we obtain a $D$ -dimensional vector for each highlight using C3D [38]. Note that $n$ varies across users. After the combining stage, the network generates a latent code $\mathbf{z}_h$ , a fixed-length user-specific vector.
+
+Next, we forward $\mathbf{z}_h$ to a fully connected layer $(FC)$ that decodes it into a set of vectors $(\gamma_j,\delta_j)$ , $j = 1,2$ , where $\gamma_{j}$ and $\delta_{j}$ denote the scale and bias parameters of the $j$ -th T-AIN layer in the decoder $D_{v}$ of $H$ , respectively. These affine parameters are applied uniformly at every temporal location in the feature map. This incorporates the user's history information into $H$ and adjusts it so that the predicted highlight is adapted to the user's history; this way, we obtain a user-specific highlight prediction for the input video. Fig. 3 (bottom) presents the architecture of $M$ .
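
The key property of $M$ is that histories of any size map to fixed-size affine parameters. The following is a deliberately simplified, hypothetical sketch of this pipeline (a single average pooling plus two weight matrices `W_pool` and `W_fc`, which are our stand-ins for the actual convolutional encoder, skip connection, and $FC$ layer):

```python
import numpy as np

def encode_history(H, W_pool, W_fc):
    """Map a variable-size history to T-AIN affine parameters.

    H: (n, D), one D-dim vector per past highlight (n varies per user).
    W_pool: (D, Z) projection after average pooling -> latent z_h.
    W_fc: (Z, 2 * C) decodes z_h into gamma and delta for one T-AIN layer.
    """
    z_h = np.maximum(H.mean(axis=0) @ W_pool, 0.0)  # fixed-length user code
    params = z_h @ W_fc
    C = params.shape[0] // 2
    return z_h, params[:C], params[C:]              # z_h, gamma, delta

rng = np.random.default_rng(2)
D, Z, C = 8, 6, 4
W_pool, W_fc = rng.normal(size=(D, Z)), rng.normal(size=(Z, 2 * C))
for n in (1, 5, 12):  # histories of different sizes map to fixed-size outputs
    z_h, gamma, delta = encode_history(rng.normal(size=(n, D)), W_pool, W_fc)
    assert z_h.shape == (Z,) and gamma.shape == (C,) and delta.shape == (C,)
```

Because only pooling and per-element projections touch the history, adapting to a new user is a single feed-forward pass with no retraining.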
+
+With $F_{v},D_{v}$ and $M$ , we can rewrite Eq. 1 as:
+
+$$
+S = D_{v}\left(F_{v}(\mathbf{v}),\ M\left(\{h_{1}, \dots, h_{n}\}\right)\right). \tag{5}
+$$
+
+With this design, we learn a generic video representation using $F_{v}$ and extract a user-specific latent representation with $M$ . Finally, by injecting the user-specific latent information into $D_v$ through the T-AIN layers, we allow the model to adapt the highlight detection to the user's history.
+
+Note that we use temporal convolutions over the highlights in the user history. We also investigated a non-local model [42, 48] with self-attention instead of temporal convolutions for $M$ ; here, the output of the self-attention is first average pooled to produce a single vector and then fed to a fully-connected layer. We find that the non-local model performs slightly worse than the temporal convolution model. This is probably because the highlights in the history are ordered by their time of creation in the dataset, so temporal convolutions allow the history encoder $M$ to capture some implicit temporal correlations among them.
+
+# 3.4 Learning and Optimization
+
+We train our adaptive highlight detector $G$ using a cross-entropy loss. For an input video $\mathbf{v}$ with $T$ frames and a corresponding binary ground truth label vector (indicating whether each frame is a non-highlight or a highlight), we define the highlight detection loss $\mathcal{L}_{highlight}$ as:
+
+$$
+\mathcal{L}_{\text{highlight}} = -\frac{1}{T} \sum_{t=1}^{T} \log\left(\frac{\exp\left(\lambda_{t,l_{c}}\right)}{\sum_{c=1}^{2} \exp\left(\lambda_{t,c}\right)}\right), \tag{6}
+$$
+
+where $\lambda_{t,c}$ is the predicted score of the $t$ -th frame belonging to the $c$ -th class (non-highlight or highlight) and $\lambda_{t,l_c}$ is the score predicted for the ground truth class.
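
Eq. 6 is the standard per-frame softmax cross-entropy, averaged over the $T$ frames. A numerically stable numpy sketch (variable names are ours):

```python
import numpy as np

def highlight_loss(scores, labels):
    """Frame-level softmax cross-entropy of Eq. 6.

    scores: (T, 2) per-frame scores (non-highlight, highlight).
    labels: (T,) ground-truth class index per frame.
    """
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_prob = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

# two confidently correct frames and one maximally uncertain frame
scores = np.array([[4.0, -4.0], [-4.0, 4.0], [0.0, 0.0]])
labels = np.array([0, 1, 1])
print(round(highlight_loss(scores, labels), 3))  # 0.231
```

The uncertain frame contributes $\log 2 \approx 0.693$ while the confident ones contribute almost nothing, so the average comes out to roughly $0.693/3$.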
+
+The goal of our learning is to find optimal parameters $\Theta_{F_v}^*$ , $\Theta_{D_v}^*$ and $\Theta_M^*$ in the encoder $F_v$ , decoder $D_v$ of the highlight detection network $H$ , and the history encoder network $M$ , respectively. The learning objective can be expressed as:
+
+$$
+\Theta_{F_{v}}^{*}, \Theta_{D_{v}}^{*}, \Theta_{M}^{*} = \underset{\Theta_{F_{v}}, \Theta_{D_{v}}, \Theta_{M}}{\arg\min}\ \mathcal{L}_{\text{highlight}}(F_{v}, D_{v}, M). \tag{7}
+$$
+
+For brevity, we use Adaptive-H-FCSN to denote our adaptive highlight detection model learned using Eq. 7.
+
+# 4 Experiments
+
+# 4.1 Dataset
+
+We conduct experiments on the largest publicly available highlight detection dataset, PHD-GIFs [25]. It is also the only large-scale dataset that has user history information for highlight detection. The released dataset consists of 119,938 videos, 13,822 users and 222,015 annotations. The dataset has 11,972 users in training, 1,000 users in validation, and 850 users in testing. There is no overlap among users in these three subsets.
+
+Apart from being large-scale, this dataset is also interesting because it contains user-specific highlight examples, indicating what exactly a user is interested in when creating highlights. The ground truth segment-level annotation comes from the GIFs that a user creates (by extracting key moments) from YouTube videos. In this dataset, a user has GIFs from multiple videos; the user's last video is used for highlight prediction and the remaining ones are treated as examples in the user's history.
+
+The dataset only provides YouTube video IDs, not the original videos, so we need to download the original videos from YouTube to carry out the experiments. We were able to download 104,828 videos; the remaining videos are no longer available on YouTube. In the end, we are able to successfully process 7,036 users for training, 782 users for validation and 727 users for testing. Note that the code of previous methods on this dataset is not available (except the pre-trained Video2GIF [11]), so we implement several strong baselines (see Sec. 4.3).
+
+# 4.2 Setup and Implementation Details
+
+Evaluation metrics: We use mean Average Precision (mAP) as our evaluation metric. It measures the mean of the average precision of highlight detection computed for every video in the testing dataset. Different from object detection, where all detections are accumulated across images to compute the average precision, highlight detection treats videos separately, because a highlighted moment in one video is not necessarily more interesting than a non-highlighted moment in a different video [36]. This metric is commonly used to measure the quality of predictions in highlight detection [11, 25, 36, 45].
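
The per-video evaluation can be sketched as follows. We use one common (non-interpolated) formulation of average precision here; the exact evaluation script of [25] may differ in detail:

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one video: rank frames by score, average precision at each hit."""
    order = np.argsort(-scores)   # frames sorted by descending score
    hits = labels[order]
    ranks = np.arange(1, len(hits) + 1)
    return (np.cumsum(hits)[hits == 1] / ranks[hits == 1]).mean()

def mean_ap(videos):
    """mAP: each video is evaluated separately, then the APs are averaged."""
    return float(np.mean([average_precision(s, y) for s, y in videos]))

# toy example: per-frame highlight scores and binary labels for two videos
v1 = (np.array([0.9, 0.1, 0.8, 0.2]), np.array([1, 0, 1, 0]))
v2 = (np.array([0.2, 0.7, 0.1]), np.array([0, 1, 0]))
print(mean_ap([v1, v2]))  # 1.0 (all highlight frames are ranked first)
```

Averaging per-video APs, rather than pooling all frames into one ranking, is what makes the metric insensitive to score calibration across videos.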
+
+Feature representation: Following prior work [25], we extract C3D [38] (conv5) features and use them as the feature representation for the input videos and the user's history. For an input video, we extract C3D features at the frame level. For a highlight video in the user's history, we prepare a single vector representation by averaging its frame-level C3D features.
+
+Training details: We train our models from scratch with a constant learning rate of 0.0001, using the Adam [18] optimization algorithm. Note that we apply this training strategy in all our experiments, including the additional analysis (Sec. 4.5).
+
+Since some users in the dataset create multiple GIFs for a single video, we follow [25] and prepare a single ground truth for the video by taking their union.
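
The union operation is straightforward; a small illustrative sketch (representing each GIF as a frame range is our own assumption about the annotation format):

```python
def union_ground_truth(num_frames, gifs):
    """Merge multiple GIF annotations for one video into a single binary
    frame-level ground truth by taking their union (as done in [25]).

    gifs: list of (start, end) frame ranges, end exclusive.
    """
    labels = [0] * num_frames
    for start, end in gifs:
        for t in range(start, min(end, num_frames)):
            labels[t] = 1  # a frame is a highlight if any GIF covers it
    return labels

# two overlapping GIFs covering frames 1-3 and 3-5 of a 10-frame video
print(union_ground_truth(10, [(1, 4), (3, 6)]))  # [0, 1, 1, 1, 1, 1, 0, 0, 0, 0]
```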
+
+Testing details: Given a new test user video and the user's history, we use our trained model to predict a highlight score for each frame, which is then passed to the evaluation metric to measure the quality of the predicted highlight. We follow the evaluation protocol of previous work [11, 25] for fair comparison. Note that our model can handle variable-length input videos and a variable number of history elements. We consider the user's full history when predicting highlights.
+
+# 4.3 Baselines
+
+We compare the proposed Adaptive-H-FCSN with the following strong baselines:
+
+FCSN [32]: This network is the state of the art in video summarization, which we adapt as our highlight detection network. FCSN has no instance normalization layers. We train and evaluate FCSN on the PHD-GIFs dataset.
+
+Video2GIF [11]: This baseline is a state-of-the-art highlight detection model. We evaluate the publicly available pre-trained model.
+
+FCSN-aggregate: In this baseline, we train FCSN [32] by directly combining the user history with the input video. More specifically, we first obtain a vector representation of the user history by averaging the features of the elements in the history. Next, we add this aggregated history to the feature representation of each frame in the input video.
+
+H-FCSN: This baseline is a variant of the highlight detection network $H$ presented in Sec. 3.3, where we replace the T-AIN layers in the decoder of $H$ with unconditional temporal instance normalization layers and remove the history encoder network $M$ . This transforms Adaptive-H-FCSN into a generic highlight detection model with no adaptation to users.
+
+H-FCSN-aggregate: Similar to FCSN-aggregate, we directly combine the user's history features with the input video features and learn H-FCSN. Different from H-FCSN, this is not a generic but a user-adaptive highlight detection model, as we allow the model to leverage the user's history information during training and inference.
+
+# 4.4 Results and Comparison
+
+In Table 1, we provide the experimental results (in terms of mAP %) of our final model Adaptive-H-FCSN along with the baselines and other alternative methods.
+
+Table 1. Performance (mAP%) comparison between Adaptive-H-FCSN and other approaches. We compare with both non-adaptive and adaptive highlight detection methods. Our method Adaptive-H-FCSN outperforms the other alternative methods. We also compare with Adaptive-H-FCSN-attn that uses self-attention in the history encoder (see Sec. 3.3). Note that all the listed methods use C3D feature representation
+
+| Method | mAP (%) | User-adaptive |
+| --- | --- | --- |
+| Random | 12.27 | X |
+| FCSN [32] | 15.22 | X |
+| Video2GIF [11] | 14.75 | X |
+| H-FCSN | 15.04 | X |
+| FCSN-aggregate | 15.62 | ✓ |
+| H-FCSN-aggregate | 15.73 | ✓ |
+| Adaptive-H-FCSN-attn | 16.37 | ✓ |
+| Adaptive-H-FCSN | 16.73 | ✓ |
+
+Adaptive-H-FCSN outperforms all the baselines. The results show that directly combining history information with the input video (i.e., FCSN-aggregate and H-FCSN-aggregate) only slightly improves the highlight detection results compared to FCSN and H-FCSN, which do not leverage user history information.
+
+However, we notice a significant performance gain for the Adaptive-H-FCSN model. This validates that directly combining user history information with the input video is a sub-optimal solution for user-adaptive highlight detection, and reveals that the proposed T-AIN layer plays a critical role in producing more accurate and user-specific highlight detections. It is also noteworthy that we obtain a lower performance (by nearly $1\%$ ) for Video2GIF [11] than reported in PHD-GIFs [25], which implies that our test set is more challenging.
+
+Fig. 4 shows qualitative examples from the generic baseline model (H-FCSN) and our proposed adaptive highlight detection model (Adaptive-H-FCSN). We can see that our model successfully captures the information in the user's history and produces highlights that are adapted to the user's preferences.
+
+
+Fig. 4. Qualitative examples for different methods. We show examples of the generic highlight detection model (H-FCSN) and our user-adaptive model (Adaptive-H-FCSN) on four videos. For each video, we show the user's history (multiple GIFs) and a few sampled frames from the highlight predictions of the two models. Based on the user's history, we find that in (a) the user is interested in animals; (b) the user is interested in faces that dominate a scene; (c) the user is inclined to highlight goal-scoring scenes; and (d) the user focuses on cooking. These visualizations indicate that adaptation to the user's interest is important for meaningful and accurate highlights. Compared with H-FCSN, the predictions of Adaptive-H-FCSN are more consistent with the user history.
+
+# 4.5 Analysis
+
+Effect of affine parameters. We analyze the importance of the affine parameters $\gamma_c^i$ and $\delta_c^i$ (Eq. 2) for adaptive highlight detection. In Table 2, we report the highlight detection performance for different choices of these parameters. We find that adaptively computing these parameters ( $\gamma_c^i = \gamma_c^h$ , $\delta_c^i = \delta_c^h$ ) from another network that captures the user history information (i.e., Adaptive-H-FCSN) significantly boosts the highlight detection performance compared to directly learning them ( $\gamma_c^i = \gamma_c^*$ , $\delta_c^i = \delta_c^*$ ) or fixing them ( $\gamma_c^i = 1$ , $\delta_c^i = 0$ ) in the main highlight detection network. Thus, the proposed T-AIN layer is key to obtaining user-adaptive highlights.
+
+Table 2. Impact of affine parameters on highlight detection. Here we show the performance (mAP%) for different choices of affine parameters $\gamma_c^i$ and $\delta_c^i$ in Eq. 2
+
+| Method | $\gamma_c^i=1$, $\delta_c^i=0$ | $\gamma_c^i=\gamma_c^*$, $\delta_c^i=\delta_c^*$ | $\gamma_c^i=\gamma_c^h$, $\delta_c^i=\delta_c^h$ |
+| --- | --- | --- | --- |
+| H-FCSN | 14.64 | 15.04 | - |
+| H-FCSN-aggregate | 14.87 | 15.73 | - |
+| Adaptive-H-FCSN | - | - | 16.73 |
+
+Effect of user's history size. We perform an additional study to analyze how sensitive our model is to the length of a user's history (i.e., the number of highlights previously created). We restrict the number of history elements per user during training; that is, we consider only $h$ highlight videos from the user's history in training. During testing, we consider the user's full history.
+
+Table 3 shows the results of various methods as a function of the number of elements $(h = 0,1,5,n)$ in the user's history. We observe that Adaptive-H-FCSN outperforms the generic highlight model (H-FCSN) even when there is a single highlight in the user's history. The performance of Adaptive-H-FCSN gradually improves as we increase the number of history elements, whereas H-FCSN-aggregate does not show a similar trend. Adaptive-H-FCSN achieves its best results when we utilize a user's full history (i.e., $h = n$ ).
+
+Table 3. Impact of a user's history size (i.e., the number of history elements/highlights) on different methods. Here we vary the history size $h$ as 0 (no history), 1, 5, and $n$ (full history). The performance of our model improves as the history size increases
+
+| History size $(h)$ | h=0 | h=1 | h=5 | h=n |
+| --- | --- | --- | --- | --- |
+| H-FCSN | 15.04 | - | - | - |
+| H-FCSN-aggregate | - | 15.62 | 15.04 | 15.73 |
+| Adaptive-H-FCSN | - | 15.57 | 15.69 | 16.73 |
+
+# 4.6 Application to Video Summarization
+
+Video summarization is closely related to highlight detection: highlight detection aims to extract the interesting moments and events of a video, while video summarization aims to generate a concise synopsis of a video. Popular datasets in summarization are very small [31], making learning and optimization challenging. We argue that pretraining on large-scale video data from a related task, such as PHD-GIFs [25] in highlight detection, could tremendously help video summarization models; this idea remains unexplored in video summarization. In order to validate this notion and compare with the recent state of the art [44], we select the SumMe dataset [9], which has only 25 videos.
+
+We evaluate our trained H-FCSN (i.e., the generic highlight detection model described in Sec. 4.3) directly on SumMe. In Table 4, we compare the performance of our H-FCSN (trained on the PHD-GIFs [25] dataset) on SumMe with state-of-the-art supervised video summarization methods. Following prior work [44], we randomly select $20\%$ of the data in SumMe for testing. We repeat this experiment five times (as in [44]) and report the average performance. Surprisingly, even though we do not train on SumMe, our model achieves state-of-the-art summarization performance, outperforming contemporary supervised models. We therefore believe that future research in video summarization should consider pretraining on large-scale video data from a related task such as highlight detection. We envision that in this way we can simultaneously make progress in both highlight detection and video summarization.
+
+Table 4. Performance comparison in terms of F-score (%) on SumMe. Note that, unlike the other methods, we do not train on SumMe but directly test our model trained on PHD-GIFs for summarization. Results of other methods are taken from [44]
+
+| Method | F-score (%) |
+| --- | --- |
+| Interesting [9] | 39.4 |
+| Submodularity [10] | 39.7 |
+| DPP-LSTM [50] | 38.6 |
+| GANsup [24] | 41.7 |
+| DR-DSNsup [54] | 42.1 |
+| S2N [44] | 43.3 |
+| Ours (H-FCSN) | 44.4 |
+
+# 5 Conclusion
+
+We have proposed Adaptive-H-FCSN, a simple yet novel framework for adaptive highlight detection using user history, a problem that has received little attention in the literature. Different from the commonly applied ranking models, we introduced a convolutional model for highlight detection that is computationally efficient, as it can process an entire video of any length at once and does not require expensive shot detection. We proposed temporal-adaptive instance normalization (T-AIN), whose affine parameters are adaptively computed from the user history information. The proposed T-AIN leads to high-performing, user-specific highlight detection. Our empirical results on a large-scale dataset indicate that the proposed framework outperforms alternative approaches. Lastly, we demonstrated an application of our learned model to video summarization, where learning is currently limited to small datasets.
+
+Acknowledgements. The work was supported by NSERC. We thank NVIDIA for donating some of the GPUs used in this work.
+
+# References
+
+1. Agnihotri, L., Kender, J., Dimitrova, N., Zimmerman, J.: Framework for personalized multimedia summarization. In: ACM SIGMM International Workshop on Multimedia Information Retrieval (2005)
+2. Babaguchi, N., Ohara, K., Ogura, T.: Learning personal preference from viewer's operations for browsing and its application to baseball video retrieval and summarization. IEEE Transactions on Multimedia (2007)
+3. Brock, A., Donahue, J., Simonyan, K.: Large scale gan training for high fidelity natural image synthesis. In: International Conference on Learning Representations (2018)
+4. Cai, S., Zuo, W., Davis, L.S., Zhang, L.: Weakly-supervised video summarization using variational encoder-decoder and web prior. In: European Conference on Computer Vision (2018)
+5. Chen, T., Lucic, M., Houlsby, N., Gelly, S.: On self modulation for generative adversarial networks. In: International Conference on Learning Representations (2018)
+6. De Vries, H., Strub, F., Mary, J., Larochelle, H., Pietquin, O., Courville, A.C.: Modulating early visual processing by language. In: Advances in Neural Information Processing Systems (2017)
+7. Gong, B., Chao, W.L., Grauman, K., Sha, F.: Diverse sequential subset selection for supervised video summarization. In: Advances in Neural Information Processing Systems (2014)
+8. Gygli, M.: Ridiculously fast shot boundary detection with fully convolutional neural networks. In: International Conference on Content-Based Multimedia Indexing (2018)
+9. Gygli, M., Grabner, H., Riemenschneider, H., Van Gool, L.: Creating summaries from user videos. In: European Conference on Computer Vision (2014)
+10. Gygli, M., Grabner, H., Van Gool, L.: Video summarization by learning submodular mixtures of objectives. In: IEEE Conference on Computer Vision and Pattern Recognition (2015)
+11. Gygli, M., Song, Y., Cao, L.: Video2gif: Automatic generation of animated gifs from video. In: IEEE Conference on Computer Vision and Pattern Recognition (2016)
+12. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: IEEE International Conference on Computer Vision (2017)
+13. Jaime, A., Echigo, T., Teraguchi, M., Satoh, F.: Learning personalized video highlights from detailed mpeg-7 metadata. In: International Conference on Image Processing (2002)
+14. Jiao, Y., Yang, X., Zhang, T., Huang, S., Xu, C.: Video highlight detection via deep ranking modeling. In: Pacific-Rim Symposium on Image and Video Technology (2017)
+15. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+16. Khosla, A., Hamid, R., Lin, C.J., Sundaresan, N.: Large-scale video summarization using web-image priors. In: IEEE Conference on Computer Vision and Pattern Recognition (2013)
+17. Kim, G., Xing, E.P.: Reconstructing storyline graphs for image recommendation from web community photos. In: IEEE Conference on Computer Vision and Pattern Recognition (2014)
+
+18. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015)
+19. Lea, C., Flynn, M.D., Vidal, R., Reiter, A., Hager, G.D.: Temporal convolutional networks for action segmentation and detection. In: IEEE Conference on Computer Vision and Pattern Recognition (2017)
+20. Lee, Y.J., Ghosh, J., Grauman, K.: Discovering important people and objects for egocentric video summarization. In: IEEE Conference on Computer Vision and Pattern Recognition (2012)
+21. Liu, M.Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., Kautz, J.: Few-shot unsupervised image-to-image translation. In: IEEE International Conference on Computer Vision (2019)
+22. Liu, W., Mei, T., Zhang, Y., Che, C., Luo, J.: Multi-task deep visual-semantic embedding for video thumbnail selection. In: IEEE Conference on Computer Vision and Pattern Recognition (2015)
+23. Lu, Z., Grauman, K.: Story-driven summarization for egocentric video. In: IEEE Conference on Computer Vision and Pattern Recognition (2013)
+24. Mahasseni, B., Lam, M., Todorovic, S.: Unsupervised video summarization with adversarial LSTM networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2017)
+25. del Molino, A.G., Gygli, M.: PHD-GIFs: Personalized highlight detection for automatic gif creation. In: ACM Multimedia (2018)
+26. Ngo, C.W., Ma, Y.F., Zhang, H.J.: Automatic video summarization by graph modeling. In: IEEE International Conference on Computer Vision (2003)
+27. Panda, R., Das, A., Wu, Z., Ernst, J., Roy-Chowdhury, A.K.: Weakly supervised summarization of web videos. In: IEEE International Conference on Computer Vision (2017)
+28. Panda, R., Roy-Chowdhury, A.K.: Collaborative summarization of topic-related videos. In: IEEE Conference on Computer Vision and Pattern Recognition (2017)
+29. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+30. Potapov, D., Douze, M., Harchaoui, Z., Schmid, C.: Category-specific video summarization. In: European Conference on Computer Vision (2014)
+31. Rochan, M., Wang, Y.: Video summarization by learning from unpaired data. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+32. Rochan, M., Ye, L., Wang, Y.: Video summarization using fully convolutional sequence networks. In: European Conference on Computer Vision (2018)
+33. Sharghi, A., Gong, B., Shah, M.: Query-focused extractive video summarization. In: European Conference on Computer Vision (2016)
+34. Soleymani, M.: The quest for visual interest. In: ACM Multimedia (2015)
+35. Song, Y., Vallmitjana, J., Stent, A., Jaimes, A.: TVSum: Summarizing web videos using titles. In: IEEE Conference on Computer Vision and Pattern Recognition (2015)
+36. Sun, M., Farhadi, A., Seitz, S.: Ranking domain-specific highlights by analyzing edited videos. In: European Conference on Computer Vision (2014)
+37. Takahashi, Y., Nitta, N., Babaguchi, N.: User and device adaptation for sports video content. In: IEEE International Conference on Multimedia and Expo (2007)
+38. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3d convolutional networks. In: IEEE International Conference on Computer Vision (2015)
+
+39. Truong, B.T., Venkatesh, S.: Video abstraction: A systematic review and classification. ACM Transactions on Multimedia Computing, Communications, and Applications (2007)
+40. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In: IEEE Conference on Computer Vision and Pattern Recognition (2017)
+41. Vasudevan, A.B., Gygli, M., Volokitin, A., Van Gool, L.: Query-adaptive video summarization via quality-aware relevance estimation. In: ACM Multimedia (2017)
+42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+43. Wang, X., Yu, K., Dong, C., Change Loy, C.: Recovering realistic texture in image super-resolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+44. Wei, Z., Wang, B., Nguyen, M.H., Zhang, J., Lin, Z., Shen, X., Mech, R., Samaras, D.: Sequence-to-segment networks for segment detection. In: Advances in Neural Information Processing Systems (2018)
+45. Xiong, B., Kalantidis, Y., Ghadiyaram, D., Grauman, K.: Less is more: Learning highlight detection from video duration. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+46. Yao, T., Mei, T., Rui, Y.: Highlight detection with pairwise deep ranking for first-person video summarization. In: IEEE Conference on Computer Vision and Pattern Recognition (2016)
+47. Yu, Y., Lee, S., Na, J., Kang, J., Kim, G.: A deep ranking model for spatio-temporal highlight detection from a 360 video. In: AAAI Conference on Artificial Intelligence (2018)
+48. Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-attention generative adversarial networks. In: International Conference on Machine Learning (2019)
+49. Zhang, K., Chao, W.L., Sha, F., Grauman, K.: Summary transfer: Exemplar-based subset selection for video summarization. In: IEEE Conference on Computer Vision and Pattern Recognition (2016)
+50. Zhang, K., Chao, W.L., Sha, F., Grauman, K.: Video summarization with long short-term memory. In: European Conference on Computer Vision (2016)
+51. Zhang, K., Grauman, K., Sha, F.: Retrospective encoders for video summarization. In: European Conference on Computer Vision (2018)
+52. Zhang, Y., Kampffmeyer, M., Liang, X., Tan, M., Xing, E.P.: Query-conditioned three-player adversarial network for video summarization. In: British Machine Vision Conference (2018)
+53. Zhao, B., Li, X., Lu, X.: Hierarchical recurrent neural network for video summarization. In: ACM Multimedia (2017)
+54. Zhou, K., Qiao, Y., Xiang, T.: Deep reinforcement learning for unsupervised video summarization with diversity-representativeness reward. In: AAAI Conference on Artificial Intelligence (2018)
\ No newline at end of file
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/images.zip b/adaptivevideohighlightdetectionbylearningfromuserhistory/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0644077f3cb8f12babd0c83a67bc745111c53468
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7388d4936d26ee0fb909dbdc9b56cc6a13d73a9244369fc1d52f2b9657d2a433
+size 267218
diff --git a/adaptivevideohighlightdetectionbylearningfromuserhistory/layout.json b/adaptivevideohighlightdetectionbylearningfromuserhistory/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..449f86a683dd44ef21aa3436df0fdd10ab80325d
--- /dev/null
+++ b/adaptivevideohighlightdetectionbylearningfromuserhistory/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3861efd980c8d8e1c8ca04786154ce40b161ed97b0980b849d3f35a958a96c18
+size 479107
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_content_list.json b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2589c6f8f781d5fd5e963eb29b6a18cec825dbfe
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4abbfa8c05c2426a4a83ce643a108b33dcff9ac2d66ddafdd75d9a145594595f
+size 80828
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_model.json b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0917375a43012f92467d77cd864110b9eb259d68
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d69ebd13cfc776ffc9d473deb44dc659b3f1e564c2ad6400e254bb184ab2b89
+size 96503
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_origin.pdf b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bd1260eb9fa2ff5dfc5e4731efc7559b5e04724e
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/8589589d-550d-4dee-a0af-ee1bcde3f45e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d31b44d73b21abec33d5fa79e4e6f1fed02ea3365cbec35e0818143f3d77cd6c
+size 3124999
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/full.md b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..83a681dc8274e974dd0eff21794f9ae8c264cef7
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/full.md
@@ -0,0 +1,315 @@
+# Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization
+
+Kyle Min and Jason J. Corso
+
+University of Michigan, Ann Arbor, MI 48109
+
+{kylemin,jjcorso}@umich.edu
+
+Abstract. Temporally localizing activities within untrimmed videos has been extensively studied in recent years. Despite recent advances, existing methods for weakly-supervised temporal activity localization struggle to recognize when an activity is not occurring. To address this issue, we propose a novel method named A2CL-PT. Two triplets of the feature space are considered in our approach: one triplet is used to learn discriminative features for each activity class, and the other one is used to distinguish the features where no activity occurs (i.e. background features) from activity-related features for each video. To further improve the performance, we build our network using two parallel branches which operate in an adversarial way: the first branch localizes the most salient activities of a video and the second one finds other supplementary activities from non-localized parts of the video. Extensive experiments performed on THUMOS14 and ActivityNet datasets demonstrate that our proposed method is effective. Specifically, the average mAP of IoU thresholds from 0.1 to 0.9 on the THUMOS14 dataset is significantly improved from $27.9\%$ to $30.0\%$ .
+
+Keywords: A2CL-PT, temporal activity localization, adversarial learning, weakly-supervised learning, center loss with a pair of triplets
+
+# 1 Introduction
+
+The main goal of temporal activity localization is to find the start and end times of activities in untrimmed videos. Many previous approaches are fully supervised: they expect ground-truth annotations for the temporal boundaries of each activity to be available during training [22, 20, 26, 32, 2, 11, 14]. However, collecting these frame-level activity annotations is time-consuming, difficult, and prone to annotation noise. Hence, a weakly-supervised version of the task has taken hold in the community: here, one assumes that only video-level ground-truth activity labels are available. These video-level annotations are much easier to collect and already exist across many datasets [8, 23, 6, 15, 31], so weakly-supervised methods can be applied to a broader range of situations.
+
+Fig. 1. (a): An illustration of the proposed A2CL-PT. $F$ and $f$ are aggregated video-level features, where $f$ is designed to attend more to the background features. $c$ is their corresponding center and $c_{n}$ is the negative center. A triplet of $(F, c, c_{n})$ is used to learn discriminative features. We propose to exploit another triplet of $(c, F, f)$ which distinguishes background features from the activity-related features. We call this method of two triplets ACL-PT. In addition, we design our network with two parallel branches so that two separate sets of centers can be learned in an adversarial way. We call our final proposed method A2CL-PT. (b): Sample frames of a video containing the *Diving* activity class from the THUMOS14 dataset [5] and the corresponding activity localization results. It is shown that our final method A2CL-PT performs the best.
+
+Current work in weakly-supervised temporal activity localization shares a common framework [12, 16, 17, 19, 9]. First, rather than using a raw video, they use a sequence of features extracted by deep networks, where the features are much smaller than the raw video. Second, they apply a fully-connected layer to embed the pre-extracted features into a task-specific feature space. Third, they project the embedded features to the label space by applying a 1-D convolutional layer. The label space has the same dimension as the number of activities, so the final output is a sequence of vectors representing the classification scores for each activity over time. Such a sequence is typically referred to as a CAS (Class Activation Sequence) [21] or T-CAM (Temporal Class Activation Map) [17]. Finally, activities are localized by thresholding this T-CAM. The softmax function is sometimes applied to T-CAM to generate class-wise attention; this top-down attention represents a probability mass function for each activity over time.
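This shared pipeline can be sketched with random stand-in features; the shapes, the conv weights, and the threshold below are invented for illustration and do not come from any cited method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_class, dim, length = 4, 16, 20            # toy sizes; the paper uses N_c classes, D = 1024, length l_i

X = rng.standard_normal((dim, length))      # stand-in for the embedded features
W = rng.standard_normal((n_class, dim))     # a 1-D conv with kernel size 1 acts as a per-step linear map

tcam = W @ X                                # T-CAM: classification scores per class over time

# top-down attention: a softmax over time turns each class row into a pmf
att = np.exp(tcam - tcam.max(axis=1, keepdims=True))
att /= att.sum(axis=1, keepdims=True)

proposals = tcam > 0.0                      # localization: threshold the T-CAM (threshold is illustrative)
```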
+
+An important component in weakly-supervised temporal activity localization is the ability to automatically determine the background portions of a video, where no activity is occurring. For example, BaS-Net [9] uses an additional objective that suppresses the network activations on the background portions. Nguyen et al. [18] propose a similar objective to model the background contents. However, we argue that existing methods cannot sufficiently distinguish background information from the activities of interest in each video, even though this ability is critical to strong temporal activity localization.
+
+To this end, we propose a novel method for the task of weakly-supervised temporal activity localization, which we call Adversarial and Angular Center Loss with a Pair of Triplets (A2CL-PT). It is illustrated in Fig. 1(a). Our key innovation is that we explicitly enable our model to capture the background region of the video while using an adversarial approach to focus on the completeness of activity learning. Our method is built on two triplets of vectors in the feature space, one of which is designed to distinguish background portions from the activity-related parts of a video. Our method is inspired by the angular triplet-center loss (ATCL) [10], originally designed for multi-view 3D shape retrieval. Let us first describe what ATCL is and then how we develop our novel method A2CL-PT.
+
+In ATCL [10], a center is defined as a parameter vector representing the center of a cluster of feature vectors for each class. During training, the centers are updated by reducing the angular distance between the embedded features and their corresponding class centers. This groups together features that correspond to the same class and pushes features away from the centers of the other classes (i.e. negative centers), making the learned feature space more useful for discriminating between classes. It follows that each training sample is a triplet of a feature vector, its center, and a negative center, where the feature serves as an anchor.
+
+Inspired by ATCL, we first formulate a loss function to learn discriminative features. ATCL cannot be directly applied to our problem because it assumes that all the features are of the same size, whereas an untrimmed video can have any number of frames. Therefore, we use a different feature representation at the video-level. We aggregate the embedded features by multiplying the top-down attention described above at each time step. The resulting video-level feature representation has the same dimension as the embedded features, so we can build a triplet whose anchor is the video-level feature vector (it is $(F, c, c_{n})$ in Fig. 1(a)). This triplet ensures that the embedded features of the same activity are grouped together and that they have high attention values at time steps when the activity occurs.
+
+More importantly, we argue that it is possible to exploit another triplet. Let us call the features at time steps when some activity occurs activity features, and the ones where no activity occurs background features. The main idea is that the background features should be distinguished from the activity features for each video. First, we generate a new class-wise attention from T-CAM. It has higher attention values for the background features when compared to the original top-down attention. If we aggregate the embedded features with this new attention, the resulting video-level feature will attend more to the background features than the original video-level feature does. In a discriminative feature space, the original video-level feature vector should be closer to its center than the new video-level feature vector is. This property can be achieved by using the triplet of the two different video-level feature vectors and their corresponding center, where the center behaves as an anchor (it is $(c,F,f)$ in Fig. 1(a)). The proposed triplet is novel and will be shown to be effective. Since we make use of a pair of triplets on the same feature space, we call our loss the Angular Center Loss with a Pair of Triplets (ACL-PT).
+
+To further improve the localization performance, we design our network to have two parallel branches which find activities in an adversarial way, as also illustrated in Fig. 1(a). A network with a single branch may be dominated by salient activity features that are too short to localize all the activities in time. We zero out the most salient activity features localized by the first branch for each activity so that the second (adversarial) branch can find other supplementary activities in the remaining parts of the video. Here, each branch has its own set of centers, which group together the features of each activity, and one 1-D convolutional layer that produces a T-CAM. The two adversary T-CAMs are weighted to produce the final T-CAM that is used to localize activities. We note that our network produces the final T-CAM with a single forward pass, so it is trained in an end-to-end manner. We call our final proposed method Adversarial and Angular Center Loss with a Pair of Triplets (A2CL-PT). Fig. 1(b) shows that our final method performs the best.
+
+There are three main contributions in this paper:
+
+- We propose a novel method using a pair of triplets. One facilitates learning discriminative features. The other one ensures that the background features are distinguishable from the activity-related features for each video.
+- We build an end-to-end two-branch network by adopting an adversarial approach to localize more complete activities. Each branch comes with its own set of centers so that embedded features of the same activity can be grouped together in an adversarial way by the two branches.
+- We perform extensive experiments on THUMOS14 and ActivityNet datasets and demonstrate that our method outperforms all the previous state-of-the-art approaches.
+
+# 2 Related Work
+
+Center loss (CL) [25] was recently proposed to reduce the intra-class variations of feature representations. CL learns a center for each class and penalizes the Euclidean distance between the features and their corresponding centers. Triplet-center loss (TCL) [4] shows that using a triplet of each feature vector, its corresponding center, and the nearest negative center is effective in increasing the inter-class separability. TCL enforces that each feature vector is closer to its corresponding center than to the nearest negative center by a pre-defined margin. Angular triplet-center loss (ATCL) [10] further improves TCL by using the angular distance, which makes it much easier to design a good margin because the angular margin has a clear geometric interpretation and is bounded between 0 and $\pi$.
+
+BaS-Net [9] and Nguyen et al. [18] are the leading state-of-the-art methods for weakly-supervised temporal activity localization. They take similar approaches to utilize the background portions of a video. There are other recent works without explicit usage of background information. Liu et al. [12] utilizes a multi-branch network whose branches produce T-CAMs that differ from each other; this property is enforced by a diversity loss, the sum of the cosine distances between every pair of T-CAMs. 3C-Net applies the idea of CL, but its performance is limited because CL does not consider the inter-class separability.
+
+
+Fig. 2. An illustration of our overall architecture. It consists of two streams (RGB and optical flow), and each stream consists of two (first and adversarial) branches. Sequences of features are extracted from two input streams using pre-trained I3D networks [1]. We use two fully-connected layers with ReLU activation (FC) to compute the embedded features $\mathbf{X}_i^r, \mathbf{X}_i^o$ . Next, T-CAMs $\mathbf{C}_i^r, \mathbf{C}_i^o$ are computed by applying 1-D convolutional layers (Conv). The most salient activity features localized by the first branch are zeroed out for each activity class, and the resulting features are applied with different 1-D convolutional layers (Conv) to produce $\mathbf{C}_i^{ra}, \mathbf{C}_i^{oa}$ . Using the embedded features $\mathbf{X}_i^r, \mathbf{X}_i^o$ and T-CAMs $\mathbf{C}_i^r, \mathbf{C}_i^o, \mathbf{C}_i^{ra}, \mathbf{C}_i^{oa}$ , we compute the term of A2CL-PT (Eq. 16). The final T-CAM $\mathbf{C}_i^F$ is computed from the four T-CAMs and these T-CAMs are used to compute the loss function for classification (Eq. 19).
+
+Using an end-to-end two-branch network that operates in an adversarial way is proposed in Adversarial Complementary Learning (ACoL) [30] for the task of weakly-supervised object localization. In ACoL, object localization maps from the first branch are used to erase the salient regions of the input feature maps for the second branch. The second branch then tries to find other complementary object areas from the remaining regions. To the best of our knowledge, we are the first to merge the idea of ACoL with center loss and to apply it to weakly-supervised temporal activity localization.
+
+# 3 Method
+
+The overview of our proposed method is illustrated in Fig. 2. The total loss function is represented as follows:
+
+$$
+\mathcal{L} = \alpha \mathcal{L}_{\mathrm{A2CL-PT}} + \mathcal{L}_{\mathrm{CLS}} \tag{1}
+$$
+
+where $\mathcal{L}_{\mathrm{A2CL-PT}}$ and $\mathcal{L}_{\mathrm{CLS}}$ denote our proposed loss term and the classification loss, respectively. $\alpha$ is a hyperparameter that controls the weight of the A2CL-PT term. In this section, we describe each component of our method in detail.
+
+# 3.1 Feature Embedding
+
+Let us say that we have $N$ training videos $\{v_{i}\}_{i = 1}^{N}$ . Each video $v_{i}$ has its ground-truth annotation for video-level label $\mathbf{y}_i\in \mathbb{R}^{N_c}$ where $N_{c}$ is the number of activity classes. $\mathbf{y}_i(j) = 1$ if the activity class $j$ is present in the video and $\mathbf{y}_i(j) = 0$ otherwise. We follow previous works [19, 16] to extract the features for both RGB and optical flow streams. First, we divide $v_{i}$ into non-overlapping 16-frame segments. We then apply I3D [1] pretrained on Kinetics dataset [6] to the segments. The intermediate $D$ -dimensional ( $D = 1024$ ) outputs after the global pooling layer are the pre-extracted features. For the task-specific feature embedding, we use two fully-connected layers with ReLU activation. As a result, sequences of the embedded features $\mathbf{X}_i^r,\mathbf{X}_i^o\in \mathbb{R}^{D\times l_i}$ are computed for RGB and optical flow stream where $l_{i}$ denotes the temporal length of the features of the video $v_{i}$ .
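The shapes involved can be sketched as follows; the video length, the random features (standing in for the pretrained I3D outputs), and the weight scales are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1024                                  # I3D feature dimension, as stated in the text
n_frames = 345                            # invented length of an untrimmed video, in frames
l_i = n_frames // 16                      # non-overlapping 16-frame segments -> 21 segment features

feats = rng.standard_normal((D, l_i))     # stand-in for the pre-extracted I3D features

def fc_relu(x, w, b):
    # one fully-connected layer with ReLU, applied independently at each time step
    return np.maximum(w @ x + b[:, None], 0.0)

w1, b1 = 0.01 * rng.standard_normal((D, D)), np.zeros(D)
w2, b2 = 0.01 * rng.standard_normal((D, D)), np.zeros(D)
X = fc_relu(fc_relu(feats, w1, b1), w2, b2)   # embedded features X_i, shape (D, l_i)
```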
+
+# 3.2 Angular Center Loss with a Pair of Triplets (ACL-PT)
+
+For simplicity, we first look at the RGB stream. A 1-D convolutional layer is applied to the embedded features $\mathbf{X}_i^r$. The output is the T-CAM $\mathbf{C}_i^r\in \mathbb{R}^{N_c\times l_i}$, which represents the classification scores of each activity class over time. We compute the class-wise attention $\mathbf{A}_i^r\in \mathbb{R}^{N_c\times l_i}$ by applying the softmax function to the T-CAM:
+
+$$
+\mathbf{A}_i^r(j, t) = \frac{\exp\left(\mathbf{C}_i^r(j, t)\right)}{\sum_{t'=1}^{l_i} \exp\left(\mathbf{C}_i^r(j, t')\right)} \tag{2}
+$$
+
+where $j \in \{1, \dots, N_c\}$ indexes the activity classes and $t$ indexes the time steps. Since this top-down attention represents the probability mass function of each activity over time, we can use it to aggregate the embedded features $\mathbf{X}_i^r$:
+
+$$
+\mathbf{F}_i^r(j) = \sum_{t=1}^{l_i} \mathbf{A}_i^r(j, t)\, \mathbf{X}_i^r(t) \tag{3}
+$$
+
+where $\mathbf{F}_i^r (j)\in \mathbb{R}^D$ denotes a video-level feature representation for the activity class $j$ . Now, we can formulate a loss function that is inspired by ATCL [10] on the video-level feature representations as follows:
+
+$$
+\mathcal{L}_{\mathrm{ATCL}}^r = \frac{1}{N} \sum_{i=1}^{N} \sum_{j:\, \mathbf{y}_i(j)=1} \max\left(0,\, \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right) - \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_{n_{i,j}^r}^r\right) + m_1\right) \tag{4}
+$$
+
+where $\mathbf{c}_j^r\in \mathbb{R}^D$ is the center of activity class $j$, $n_{i,j}^{r} = \operatorname*{argmin}_{k\neq j}\mathcal{D}\bigl(\mathbf{F}_i^r(j),\mathbf{c}_k^r\bigr)$ is the index of the nearest negative center, and $m_{1}\in [0,\pi]$ is an angular margin. It is based on the triplet $(\mathbf{F}_i^r(j),\mathbf{c}_j^r,\mathbf{c}_{n_{i,j}^r}^r)$ illustrated in Fig. 1(a). Here, $\mathcal{D}(\cdot)$ denotes the angular distance:
+
+$$
+\mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right) = \arccos\left(\frac{\mathbf{F}_i^r(j) \cdot \mathbf{c}_j^r}{\|\mathbf{F}_i^r(j)\|_2 \|\mathbf{c}_j^r\|_2}\right) \tag{5}
+$$
+
+Optimizing the loss function of Eq. 4 ensures that the video-level features of the same activity class are grouped together and that the inter-class variations of those features are maximized at the same time. As a result, the embedded features are learned to be discriminative and T-CAM will have higher values for the activity-related features.
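Eqs. 2-5 can be sketched for a single video and one present class; the sizes, the margin value, and the random centers below are illustrative placeholders, not learned quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
n_class, dim, length = 3, 8, 12

X = rng.standard_normal((dim, length))            # embedded features X_i^r
tcam = rng.standard_normal((n_class, length))     # T-CAM C_i^r

# Eq. 2: top-down attention, a softmax over time for every class
att = np.exp(tcam - tcam.max(axis=1, keepdims=True))
att /= att.sum(axis=1, keepdims=True)

# Eq. 3: video-level feature per class, F_i^r(j) = sum_t A(j, t) X(t)
F = X @ att.T                                     # shape (dim, n_class)

def ang_dist(u, v):
    # Eq. 5: angular distance, arccos of the cosine similarity, in [0, pi]
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

centers = rng.standard_normal((n_class, dim))     # one learnable center per class (random here)
m1 = 0.2                                          # angular margin, an illustrative value

j = 0                                             # suppose class 0 is present in the video
d_pos = ang_dist(F[:, j], centers[j])
d_neg = min(ang_dist(F[:, j], centers[k]) for k in range(n_class) if k != j)
loss_atcl = max(0.0, d_pos - d_neg + m1)          # single-sample term of Eq. 4
```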
+
+For the next step, we exploit another triplet. We first compute a new class-wise attention $\mathbf{a}_i^r\in \mathbb{R}^{N_c\times l_i}$ from T-CAM:
+
+$$
+\mathbf{a}_i^r(j, t) = \frac{\exp\left(\beta \mathbf{C}_i^r(j, t)\right)}{\sum_{t'=1}^{l_i} \exp\left(\beta \mathbf{C}_i^r(j, t')\right)} \tag{6}
+$$
+
+where $\beta$ is a scalar between 0 and 1. This new attention still represents a probability mass function of each activity over time, but it is supposed to have lower values for the activity features and higher values for the background features when compared to the original attention $\mathbf{A}_i^r$. Therefore, if we aggregate the embedded features $\mathbf{X}_i^r$ using $\mathbf{a}_i^r$, the resulting new video-level feature $\mathbf{f}_i^r$ should attend more strongly to the background features than $\mathbf{F}_i^r$ does. This property can be enforced by introducing a different loss function based on the new triplet $(\mathbf{c}_j^r, \mathbf{F}_i^r(j), \mathbf{f}_i^r(j))$, which is also illustrated in Fig. 1(a):
+
+$$
+\mathcal{L}_{\mathrm{NT}}^r = \frac{1}{N} \sum_{i=1}^{N} \sum_{j:\, \mathbf{y}_i(j)=1} \max\left(0,\, \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right) - \mathcal{D}\left(\mathbf{f}_i^r(j), \mathbf{c}_j^r\right) + m_2\right) \tag{7}
+$$
+
+where the subscript NT refers to the new triplet and $m_2 \in [0, \pi]$ is an angular margin. Optimizing this loss function makes the background features more distinguishable from the activity features. Merging the two loss functions of Eq. 4 and Eq. 7 gives us a new loss based on a pair of triplets, which we call Angular Center Loss with a Pair of Triplets (ACL-PT):
+
+$$
+\mathcal{L}_{\mathrm{ACL-PT}}^r = \mathcal{L}_{\mathrm{ATCL}}^r + \gamma \mathcal{L}_{\mathrm{NT}}^r \tag{8}
+$$
+
+where $\gamma$ is a hyperparameter denoting the relative importance of the two losses.
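In the same toy setting, the background-leaning attention of Eq. 6 and the hinge of Eq. 7 can be sketched as follows; the values of $\beta$ and $m_2$, the random features, and the random center are illustrative, not the paper's chosen hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, length = 8, 12
X = rng.standard_normal((dim, length))     # embedded features
scores = rng.standard_normal(length)       # one row of T-CAM, C_i^r(j, :)

def pmf(s):
    e = np.exp(s - s.max())
    return e / e.sum()

beta = 0.5                                 # temperature in (0, 1); illustrative value
A = pmf(scores)                            # Eq. 2: original attention
a = pmf(beta * scores)                     # Eq. 6: flatter attention, weighting background more

F = X @ A                                  # original video-level feature
f = X @ a                                  # background-leaning video-level feature

def ang_dist(u, v):
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

center = rng.standard_normal(dim)          # the class center c_j^r (random here)
m2 = 0.1                                   # angular margin, an illustrative value

# Eq. 7: the center is the anchor; F should be closer to it than f is
loss_nt = max(0.0, ang_dist(F, center) - ang_dist(f, center) + m2)
# Eq. 8 would add this term, weighted by gamma, to the ATCL term
```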
+
+Previous works on center loss [25, 4, 10] suggest using an averaged gradient (typically denoted as $\Delta \mathbf{c}_j^r$) to update the centers for better stability. Following this convention, the derivatives of each term of Eq. 8 with respect to the centers are averaged. For simplicity, we assume that the centers have unit length; refer to the supplementary material for the general case without this assumption. Let $\tilde{\mathcal{L}}_{\mathrm{ATCL}_{i,j}}^r$ and $\tilde{\mathcal{L}}_{\mathrm{NT}_{i,j}}^r$ be the loss terms inside the max operation for the $i$-th sample and the $j$-th activity class:
+
+$$
+\tilde{\mathcal{L}}_{\mathrm{ATCL}_{i,j}}^r = \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right) - \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_{n_{i,j}^r}^r\right) + m_1 \tag{9}
+$$
+
+$$
+\tilde{\mathcal{L}}_{\mathrm{NT}_{i,j}}^r = \mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right) - \mathcal{D}\left(\mathbf{f}_i^r(j), \mathbf{c}_j^r\right) + m_2 \tag{10}
+$$
+
+Next, let $\mathbf{g}_{1_{i,j}}^r$ and $\mathbf{g}_{2_{i,j}}^r$ be the derivatives of Eq. 9 with respect to $\mathbf{c}_j^r$ and $\mathbf{c}_{n_{i,j}^r}^r$, respectively, and let $\mathbf{h}_{i,j}^r$ be the derivative of Eq. 10 with respect to $\mathbf{c}_j^r$. For example, $\mathbf{g}_{1_{i,j}}^r$ is given by:
+
+$$
+\mathbf{g}_{1_{i,j}}^r = -\frac{\mathbf{F}_i^r(j)}{\sin\left(\mathcal{D}\left(\mathbf{F}_i^r(j), \mathbf{c}_j^r\right)\right) \|\mathbf{F}_i^r(j)\|_2} \tag{11}
+$$
+
+Then, we can represent the averaged gradient considering the three terms:
+
+$$
+\Delta \mathbf{c}_j^r = \Delta_{\mathbf{g}_{1_{i,j}}^r} + \Delta_{\mathbf{g}_{2_{i,j}}^r} + \Delta_{\mathbf{h}_{i,j}^r} \tag{12}
+$$
+
+For example, $\Delta_{\mathbf{g}_{1_{i,j}}^r}$ is computed as follows:
+
+$$
+\Delta_{\mathbf{g}_{1_{i,j}}^r} = \frac{1}{N} \left(\frac{\sum_{i:\, \mathbf{y}_i(j)=1} \mathbf{g}_{1_{i,j}}^r\, \delta\left(\tilde{\mathcal{L}}_{\mathrm{ATCL}_{i,j}}^r > 0\right)}{1 + \sum_{i:\, \mathbf{y}_i(j)=1} \delta\left(\tilde{\mathcal{L}}_{\mathrm{ATCL}_{i,j}}^r > 0\right)}\right) \tag{13}
+$$
+
+Here, $\delta(\text{condition}) = 1$ if the condition is true and $\delta(\text{condition}) = 0$ otherwise. Finally, the centers are updated using $\Delta\mathbf{c}_j^r$ at every iteration of the training process by a gradient descent algorithm. More details can be found in the supplementary material.
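A simplified, single-class sketch of this update follows; it keeps only the $\mathbf{g}_1$ term of Eq. 12, drops the outer $1/N$ factor, and substitutes an arbitrary angular threshold for the hinge-activity test, so it illustrates the shapes rather than the exact rule:

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_samples = 8, 5
Fs = rng.standard_normal((n_samples, dim))       # video-level features F_i^r(j) of one class j
c = rng.standard_normal(dim)
c /= np.linalg.norm(c)                           # unit-length center, as assumed in the text

def ang_dist(u, v):
    s = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(s, -1.0, 1.0))

def g1(F, c):
    # Eq. 11: derivative of D(F, c) w.r.t. the (unit-length) center c
    return -F / (np.sin(ang_dist(F, c)) * np.linalg.norm(F))

# Eq. 13-style averaging: only samples whose hinge is active contribute
active = [ang_dist(F, c) > 0.5 for F in Fs]      # arbitrary stand-in for the L~ > 0 test
grads = [g1(F, c) for F, on in zip(Fs, active) if on]
delta = sum(grads) / (1 + len(grads)) if grads else np.zeros(dim)

c = c - 0.1 * delta                              # gradient-descent step on the center
c /= np.linalg.norm(c)                           # re-normalize to keep unit length
```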
+
+# 3.3 Adopting an adversarial approach (A2CL-PT)
+
+We further improve the performance of the proposed ACL-PT by applying an adversarial approach inspired by ACoL [30]. For each stream, there are two parallel branches that operate in an adversarial way. The motivation is that a network with a single branch might be dominated by salient activity features that are not sufficient to localize all the activities in time. We zero out the most salient activity features localized by the first branch for activity class $j$ of $v_{i}$ as follows:
+
+$$
+\mathbf{X}_{i,j}^{ra}(t) = \begin{cases} \mathbf{0}, & \text{if } \mathbf{C}_i^r(j, t) \in \text{top-}k_a \text{ elements of } \mathbf{C}_i^r(j) \\ \mathbf{X}_i^r(t), & \text{otherwise} \end{cases} \tag{14}
+$$
+
+where $\mathbf{X}_{i,j}^{ra} \in \mathbb{R}^{D \times l_i}$ denotes the input features of activity class $j$ for the second (adversarial) branch and $k_a$ is set to $\left\lfloor \frac{l_i}{s_a} \right\rfloor$ for a hyperparameter $s_a$ that controls the ratio of zeroed-out features. For each activity class $j$, a separate 1-D convolutional layer of the adversarial branch transforms $\mathbf{X}_{i,j}^{ra}$ to the classification scores of activity class $j$ over time. By iterating over all the activity classes, a new T-CAM $\mathbf{C}_i^{ra} \in \mathbb{R}^{N_c \times l_i}$ is computed. We argue that $\mathbf{C}_i^{ra}$ can be used to find supplementary activities that are not localized by the first branch. Using the original features $\mathbf{X}_i^r$, the new T-CAM $\mathbf{C}_i^{ra}$, and a separate set of centers $\{\mathbf{c}_j^{ra}\}_{j=1}^{N_c}$, we can compute the loss of ACL-PT for this adversarial branch, $\mathcal{L}_{\mathrm{ACL - PT}}^{ra}$, in a similar manner (Eqs. 1-7). We call the sum of the losses of the two branches the Adversarial and Angular Center Loss with a Pair of Triplets (A2CL-PT):
+
+$$
+\mathcal {L} _ {\mathrm {A 2 C L - P T}} ^ {r} = \mathcal {L} _ {\mathrm {A C L - P T}} ^ {r} + \mathcal {L} _ {\mathrm {A C L - P T}} ^ {r a} \tag {15}
+$$
+
+In addition, the losses for the optical flow stream $\mathcal{L}_{\mathrm{ACL - PT}}^o$ and $\mathcal{L}_{\mathrm{ACL - PT}}^{oa}$ are also computed in the same manner. As a result, the total A2CL-PT term is given by:
+
+$$
+\mathcal {L} _ {\mathrm {A 2 C L - P T}} = \mathcal {L} _ {\mathrm {A 2 C L - P T}} ^ {r} + \mathcal {L} _ {\mathrm {A 2 C L - P T}} ^ {o} \tag {16}
+$$
+
+# 3.4 Classification Loss
+
+Following previous works [19, 12, 16], we use the cross-entropy between the predicted pmf (probability mass function) and the ground-truth pmf of activities to classify the different activity classes in a video. We first consider the RGB stream. For each video $v_{i}$, we compute the class-wise classification scores $\mathbf{s}_i^r \in \mathbb{R}^{N_c}$ by averaging the top-$k$ elements of $\mathbf{C}_i^r$ per activity class, where $k$ is set to $\lceil \frac{l_i}{s} \rceil$ for a hyperparameter $s$. Then, the softmax function is applied to compute the predicted pmf of activities $\mathbf{p}_i^r \in \mathbb{R}^{N_c}$. The ground-truth pmf $\mathbf{q}_i$ is obtained by $l_1$-normalizing $\mathbf{y}_i$. The classification loss for the RGB stream is then:
+
+$$
+\mathcal {L} _ {\mathrm {C L S}} ^ {r} = \frac {1}{N} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N _ {c}} - \mathbf {q} _ {i} (j) \log \left(\mathbf {p} _ {i} ^ {r} (j)\right) \tag {17}
+$$
+
+The classification loss for the optical flow stream $\mathcal{L}_{\mathrm{CLS}}^o$ is computed in a similar manner. $\mathcal{L}_{\mathrm{CLS}}^{ra}$ and $\mathcal{L}_{\mathrm{CLS}}^{oa}$ of adversarial branches are also computed in the same way.
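A minimal sketch of the per-video computation behind Eq. 17 (top-$k$ pooling, softmax, and cross-entropy against the $l_1$-normalized labels); the function name and NumPy formulation are our own, not the paper's code:

```python
import numpy as np

def classification_loss(cam, y, s=8):
    """Video-level classification loss for one stream (illustrative sketch).

    cam: (N_c, l) T-CAM of one video
    y:   (N_c,)   multi-hot ground-truth activity labels
    s:   hyperparameter; k = ceil(l / s) elements are averaged per class
    """
    N_c, l = cam.shape
    k = int(np.ceil(l / s))
    # class-wise scores: mean of the top-k T-CAM values per class
    scores = np.sort(cam, axis=1)[:, -k:].mean(axis=1)
    p = np.exp(scores - scores.max())
    p /= p.sum()                         # predicted pmf (softmax)
    q = y / y.sum()                      # ground-truth pmf (l1-normalized)
    return -(q * np.log(p + 1e-12)).sum()  # cross-entropy for one video
```

Eq. 17 averages this quantity over the $N$ videos of a batch; the same routine applies unchanged to the optical flow stream and the adversarial branches.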
+
+Finally, we compute the final T-CAM $\mathbf{C}_i^F$ from the four T-CAMs (two from the RGB stream: $\mathbf{C}_i^r$ , $\mathbf{C}_i^{ra}$ , two from the optical flow stream: $\mathbf{C}_i^o$ , $\mathbf{C}_i^{oa}$ ) as follows:
+
+$$
+\mathbf {C} _ {i} ^ {F} = \mathbf {w} ^ {r} \cdot \left(\mathbf {C} _ {i} ^ {r} + \omega \mathbf {C} _ {i} ^ {r a}\right) + \mathbf {w} ^ {o} \cdot \left(\mathbf {C} _ {i} ^ {o} + \omega \mathbf {C} _ {i} ^ {o a}\right) \tag {18}
+$$
+
+where $\mathbf{w}^r, \mathbf{w}^o \in \mathbb{R}^{N_c}$ are class-specific weighting parameters that are learned during training and $\omega$ is a hyperparameter for the relative importance of T-CAMs from the adversarial branch. We can then compute the classification loss for the final T-CAM $\mathcal{L}_{\mathrm{CLS}}^F$ in the same manner. The total classification loss is given by:
+
+$$
+\mathcal {L} _ {\mathrm {C L S}} = \mathcal {L} _ {\mathrm {C L S}} ^ {r} + \mathcal {L} _ {\mathrm {C L S}} ^ {r a} + \mathcal {L} _ {\mathrm {C L S}} ^ {o} + \mathcal {L} _ {\mathrm {C L S}} ^ {o a} + \mathcal {L} _ {\mathrm {C L S}} ^ {F} \tag {19}
+$$
+
+# 3.5 Classification and Localization
+
+At test time, we use the final T-CAM $\mathbf{C}_i^F$ for the classification and localization of activities following previous works [19, 16]. First, we compute the class-wise classification scores $\mathbf{s}_i^F \in \mathbb{R}^{N_c}$ and the predicted pmf of activities $\mathbf{p}_i^F \in \mathbb{R}^{N_c}$ as described in Section 3.4. We use $\mathbf{p}_i^F$ for activity classification. For activity localization, we first find the set of possible activities with positive classification scores, $\{j : \mathbf{s}_i^F(j) > 0\}$. For each activity in this set, we localize all the temporal segments that have positive T-CAM values for two or more successive time steps. Formally, the set of localized temporal segments for $v_i$ is:
+
+$$
+\{ [s, e] : \forall t \in [s, e], \ \mathbf {C} _ {i} ^ {F} (t) > 0 \ \text {and} \ \mathbf {C} _ {i} ^ {F} (s - 1) < 0 \ \text {and} \ \mathbf {C} _ {i} ^ {F} (e + 1) < 0 \} \tag {20}
+$$
+
+where $\mathbf{C}_i^F(0)$ and $\mathbf{C}_i^F(l_i + 1)$ are defined to be negative and $e \geq s + 2$. The localized segments for each activity are non-overlapping. We assign each localized segment a confidence score, which is the sum of its maximum T-CAM value and its classification score.
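The thresholding rule of Eq. 20 and the confidence assignment can be sketched as a single pass over one class row of the final T-CAM. This is illustrative code, not the authors' implementation; `min_len` encodes the "two or more successive time steps" requirement from the text:

```python
import numpy as np

def localize(cam_row, score, min_len=2):
    """Extract (start, end, confidence) activity segments from one class
    row of the final T-CAM (illustrative sketch).

    A segment is a maximal run of positive T-CAM values spanning at least
    min_len steps; its confidence is the segment's maximum T-CAM value
    plus the class-wise classification score.
    """
    segments, s = [], None
    for t, p in enumerate(cam_row > 0):
        if p and s is None:
            s = t                                  # segment opens
        elif not p and s is not None:
            if t - s >= min_len:                   # long enough to keep
                segments.append((s, t - 1, cam_row[s:t].max() + score))
            s = None                               # segment closes
    if s is not None and len(cam_row) - s >= min_len:
        segments.append((s, len(cam_row) - 1, cam_row[s:].max() + score))
    return segments
```

Because each segment is a maximal positive run, the returned segments for a class are non-overlapping by construction.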
+
+# 4 Experiments
+
+# 4.1 Datasets and Evaluation
+
+We evaluate our method on two datasets: THUMOS14 [5] and ActivityNet1.3 [3]. For the THUMOS14 dataset, the validation videos are used for training without temporal boundary annotations and the test videos are used for evaluation following the convention in the literature. This dataset is known to be challenging because each video has a number of activity instances and the duration of the videos varies widely. For the ActivityNet1.3 dataset, we use the training set for training and the validation set for evaluation. Following the standard evaluation protocol, we report mean average precision (mAP) at different intersection over union (IoU) thresholds.
+
+# 4.2 Implementation Details
+
+First, we extract RGB frames from each video at 25 fps and generate optical flow frames using the TV-L1 algorithm [29]. Each video is then divided into non-overlapping 16-frame segments. We apply I3D networks [1] pre-trained on the Kinetics dataset [6] to the segments to obtain intermediate 1024-dimensional features after the global pooling layer. We train our network in an end-to-end manner using a single GPU (TITAN Xp).
+
+For the THUMOS14 dataset [5], we train our network using a batch size of 32. We use the Adam optimizer [7] with learning rate $10^{-4}$ and weight decay 0.0005. The centers are updated using the SGD algorithm with learning rate 0.1 for the RGB stream and 0.2 for the optical flow stream. The kernel size of the 1-D convolutional layers for the T-CAMs is set to 1. We set $\alpha$ in Eq. 1 to 1 and $\gamma$ in Eq. 8 to 0.6. For $\beta$ in Eq. 6, we randomly generate a number between 0.001 and 0.1 for each training sample. We set angular margins $m_{1}$ to 2 and $m_{2}$ to 1. $s_a$ of Eq. 14 and $s$ for the classification loss are set to 40 and 8, respectively. Finally, $\omega$ in Eq. 18 is set to 0.6. The whole training process of $40.5\mathrm{k}$ iterations takes less than 14 hours.
+
+For the ActivityNet1.3 dataset [3], previous works [19, 16] have shown that post-processing of the final T-CAM is required. We use an additional 1-D convolutional layer (kernel size=13, dilation=2) to post-process the final T-CAM. The kernel size of the 1-D convolutional layers for the T-CAMs is set to 3. In addition, we change the batch size to 24. The learning rates for the centers are 0.05 and 0.1 for the RGB and optical flow streams, respectively. We set $\alpha$ to 2, $\gamma$ to 0.2, and $\omega$ to 0.4. The remaining hyperparameters $\beta$, $m_1$, $m_2$, $s_a$, and $s$ are the same as above. We train the network for 175k iterations.
+
+# 4.3 Comparisons with the State-of-the-art
+
+We compare our final method A2CL-PT with other state-of-the-art approaches on the THUMOS14 dataset [5] in Table 1. Full supervision refers to training from frame-level activity annotations, whereas weak supervision indicates training
+
+Table 1. Performance comparison of A2CL-PT with state-of-the-art methods on the THUMOS14 dataset [5]. A2CL-PT significantly outperforms all the other weakly-supervised methods. $\dagger$ indicates additional usage of other ground-truth annotations or independently collected data. A2CL-PT also outperforms all weak$\dagger$-supervised methods that use such additional data at higher IoUs (from 0.4 to 0.9). The column AVG reports the average mAP over IoU thresholds from 0.1 to 0.9.
+
+| Supervision | Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | AVG |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Full | S-CNN [22] | 47.7 | 43.5 | 36.3 | 28.7 | 19.0 | 10.3 | 5.3 | - | - | - |
+| Full | R-C3D [26] | 54.5 | 51.5 | 44.8 | 35.6 | 28.9 | - | - | - | - | - |
+| Full | SSN [32] | 66.0 | 59.4 | 51.9 | 41.0 | 29.8 | - | - | - | - | - |
+| Full | TAL-Net [2] | 59.8 | 57.1 | 53.2 | 48.5 | 42.8 | 33.8 | 20.8 | - | - | - |
+| Full | BSN [11] | - | - | 53.5 | 45.0 | 36.9 | 28.4 | 20.0 | - | - | - |
+| Full | GTAN [14] | 69.1 | 63.7 | 57.8 | 47.2 | 38.8 | - | - | - | - | - |
+| Weak† | Liu et al. [12] | 57.4 | 50.8 | 41.2 | 32.1 | 23.1 | 15.0 | 7.0 | - | - | - |
+| Weak† | 3C-Net [16] | 59.1 | 53.5 | 44.2 | 34.1 | 26.6 | - | 8.1 | - | - | - |
+| Weak† | Nguyen et al. [18] | 64.2 | 59.5 | 49.1 | 38.4 | 27.5 | 17.3 | 8.6 | 3.2 | 0.5 | 29.8 |
+| Weak† | STAR [27] | 68.8 | 60.0 | 48.7 | 34.7 | 23.0 | - | - | - | - | - |
+| Weak | UntrimmedNet [24] | 44.4 | 37.7 | 28.2 | 21.1 | 13.7 | - | - | - | - | - |
+| Weak | STPN [17] | 52.0 | 44.7 | 35.5 | 25.8 | 16.9 | 9.9 | 4.3 | 1.2 | 0.1 | 21.2 |
+| Weak | W-TALC [19] | 55.2 | 49.6 | 40.1 | 31.1 | 22.8 | - | 7.6 | - | - | - |
+| Weak | AutoLoc [21] | - | - | 35.8 | 29.0 | 21.2 | 13.4 | 5.8 | - | - | - |
+| Weak | CleanNet [13] | - | - | 37.0 | 30.9 | 23.9 | 13.9 | 7.1 | - | - | - |
+| Weak | MAAN [28] | 59.8 | 50.8 | 41.1 | 30.6 | 20.3 | 12.0 | 6.9 | 2.6 | 0.2 | 24.9 |
+| Weak | BaS-Net [9] | 58.2 | 52.3 | 44.6 | 36.0 | 27.0 | 18.6 | 10.4 | 3.9 | 0.5 | 27.9 |
+| Weak | A2CL-PT (Ours) | 61.2 | 56.1 | 48.1 | 39.0 | 30.1 | 19.2 | 10.6 | 4.8 | 1.0 | 30.0 |
+
+only from video-level activity labels. For a fair comparison, we use the symbol $\dagger$ to separate methods utilizing additional ground-truth annotations [16, 27] or independently collected data [12, 18]. The column AVG reports the average mAP over IoU thresholds from 0.1 to 0.9 with a step size of 0.1. Our method significantly outperforms the other weakly-supervised methods across all metrics. Specifically, an absolute gain of $2.1\%$ in average mAP is achieved over the best previous method (BaS-Net [9]). Notably, our method performs even better than the weak$\dagger$-supervised methods at higher IoUs.
+
+We also evaluate A2CL-PT on the ActivityNet1.3 dataset [3]. Following the standard evaluation protocol of the dataset, we report mAP at different IoU thresholds, which are from 0.05 to 0.95. As shown in Table 2, our method again achieves the best performance.
+
+# 4.4 Ablation Study and Analysis
+
+We perform an ablation study on the THUMOS14 dataset [5]. In Table 3, we analyze the two main contributions of this work: the usage of the newly-suggested triplet (Eq. 7) and the adoption of the adversarial approach (Eq. 15). ATCL refers to the baseline that uses only the loss term of Eq. 4. We use the superscript $+$ to indicate the addition of an adversarial branch. As described in Section 3.2, ACL-PT additionally uses the new triplet on top of the baseline.
+
+Table 2. Performance comparison on the ActivityNet1.3 dataset [3]. A2CL-PT again achieves the best performance. $\dagger$ indicates additional usage of other ground-truth annotations or independently collected data. The column AVG reports the average mAP over IoU thresholds from 0.5 to 0.95.
+
+| Supervision | Method | 0.5 | 0.55 | 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | 0.85 | 0.9 | 0.95 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Weak† | Liu et al. [12] | 34.0 | - | - | - | - | 20.9 | - | - | - | 5.7 |
+| Weak† | Nguyen et al. [18] | 36.4 | - | - | - | - | 19.2 | - | - | - | 2.9 |
+| Weak† | STAR [27] | 31.1 | - | - | - | - | 18.8 | - | - | - | 4.7 |
+| Weak | STPN [17] | 29.3 | - | - | - | - | 16.9 | - | - | - | 2.6 |
+| Weak | MAAN [28] | 33.7 | - | - | - | - | 21.9 | - | - | - | 5.5 |
+| Weak | BaS-Net [9] | 34.5 | - | - | - | - | 22.5 | - | - | - | 4.9 |
+| Weak | A2CL-PT (Ours) | 36.8 | 33.6 | 30.8 | 27.8 | 24.9 | 22.0 | 18.1 | 14.9 | 10.2 | 5.2 |
+
+Table 3. Performance comparison of different ablative settings on the THUMOS14 dataset [5]. The superscript + indicates that we add an adversarial branch to the baseline method. It demonstrates that both components are effective.
+
+| Method | New triplet | Adversarial | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | AVG(0.1:0.9) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ATCL | | | 44.7 | 34.8 | 25.7 | 15.8 | 8.3 | 27.4 |
+| ATCL+ | | ✓ | 43.7 | 35.1 | 26.3 | 15.7 | 8.3 | 27.2 |
+| ACL-PT | ✓ | | 46.6 | 37.2 | 28.9 | 18.2 | 10.0 | 29.2 |
+| A2CL-PT | ✓ | ✓ | 48.1 | 39.0 | 30.1 | 19.2 | 10.6 | 30.0 |
+
+We can observe that our final proposed method, A2CL-PT, performs the best. It implies that both components are necessary to achieve the best performance and each of them is effective. Interestingly, adding an adversarial branch does not bring any performance gain without our new triplet. We think that although using ACL-PT increases the localization performance by learning discriminative features, it also makes the network sensitive to salient activity-related features.
+
+We analyze the impact of the two main hyperparameters in Fig. 3. The first is $\alpha$, which controls the weight of the A2CL-PT term (Eq. 1); the other is $\omega$, which sets the relative importance of T-CAMs from the adversarial branches (Eq. 18). We observe from Fig. 3(a) that a positive $\alpha$ always brings a performance gain, indicating that A2CL-PT is effective. As seen in Fig. 3(b), the adversarial approach increases performance when $\omega$ is less than or equal to 1. If $\omega$ is greater than 1, the T-CAMs of the adversarial branches play a dominant role in activity localization. The results therefore suggest that the adversarial branches provide mostly supplementary information.
+
+# 4.5 Qualitative Analysis
+
+We perform a qualitative analysis to better understand our method. In Fig. 4, qualitative results of our A2CL-PT on four videos from the test set of the THUMOS14 dataset [5] are presented. (a), (b), (c), and (d) are examples of JavelinThrow, HammerThrow, ThrowDiscus, and HighJump, respectively. Detection denotes the localized activity segments. For additional comparison, we
+
+
+(a)
+
+
+(b)
+Fig. 3. We analyze the impact of the two main hyperparameters $\alpha$ and $\omega$. (a): A positive $\alpha$ always provides a performance gain, indicating that our method is effective. (b): If $\omega$ is too large, the performance decreases substantially, implying that the T-CAMs of the adversarial branches provide mostly supplementary information.
+
+also show the results of BaS-Net [9], the leading state-of-the-art method. We use three different colors on the contours of the sampled frames: blue, green, and red denote true positives, false positives, and false negatives, respectively. In (a), there are multiple false positives. These cases are challenging because the person in the video swings the javelin, which can be mistaken for a throw. Similar cases are observed in (b): one of the false positives covers the person drawing the line on the field, which looks similar to a HammerThrow activity. In (c), some false negative segments are observed. Interestingly, this is because the ground-truth annotations are wrong; that is, the ThrowDiscus activity is annotated but does not actually occur in these cases. In (d), all the instances of the HighJump activity are successfully localized. Apart from such unusual situations, our method performs well in general.
+
+# 5 Conclusion
+
+We have presented A2CL-PT as a novel method for weakly-supervised temporal activity localization. We suggest using two triplets of vectors of the feature space to learn discriminative features and to distinguish background portions from activity-related parts of a video. We also propose to adopt an adversarial approach to localize activities more thoroughly. We perform extensive experiments to show that our method is effective. A2CL-PT outperforms all the existing state-of-the-art methods on major datasets. Ablation study demonstrates that both contributions are significant. Finally, we qualitatively analyze the effectiveness of our method in detail.
+
+Acknowledgement We thank Stephan Lemmer, Victoria Florence, Nathan Louis, and Christina Jung for their valuable feedback and comments. This research was, in part, supported by NIST grant 60NANB17D191.
+
+
+(a)
+
+
+(b)
+
+
+(c)
+
+
+(d)
+Fig. 4. Qualitative results on the THUMOS14 dataset [5]. Detection denotes the localized activity segments. The results of BaS-Net [9] are also included for additional comparison. Contours of the sampled frames have three different colors: blue, green, and red indicate true positives, false positives, and false negatives, respectively. (a): An example of the JavelinThrow activity class. The observed false positives are challenging: the person swings the javelin in these frames, which can be mistaken for a throw. (b): An example of HammerThrow. One of the false positives includes the person drawing the line on the field; it is hard to distinguish the two activities. (c): An example of ThrowDiscus. Multiple false negatives are observed, which illustrates situations where the ground-truth activity instances are wrongly annotated. (d): An example of HighJump without such unusual cases. It can be observed that our method performs well in general.
+
+# References
+
+1. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6299-6308 (2017)
+2. Chao, Y.W., Vijayanarasimhan, S., Seybold, B., Ross, D.A., Deng, J., Sukthankar, R.: Rethinking the faster r-cnn architecture for temporal action localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1130-1139 (2018)
+3. Fabian Caba Heilbron, Victor Escorcia, B.G., Niebles, J.C.: Activitynet: A large-scale video benchmark for human activity understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 961-970 (2015)
+4. He, X., Zhou, Y., Zhou, Z., Bai, S., Bai, X.: Triplet-center loss for multi-view 3d object retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1945-1954 (2018)
+5. Jiang, Y.G., Liu, J., Roshan Zamir, A., Toderici, G., Laptev, I., Shah, M., Sukthankar, R.: THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/THUMOS14/ (2014)
+6. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017)
+7. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+8. Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: a large video database for human motion recognition. In: 2011 International Conference on Computer Vision. pp. 2556-2563. IEEE (2011)
+9. Lee, P., Uh, Y., Byun, H.: Background suppression network for weakly-supervised temporal action localization. In: AAAI (2020)
+10. Li, Z., Xu, C., Leng, B.: Angular triplet-center loss for multi-view 3d shape retrieval. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 8682-8689 (2019)
+11. Lin, T., Zhao, X., Su, H., Wang, C., Yang, M.: Bsn: Boundary sensitive network for temporal action proposal generation. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 3-19 (2018)
+12. Liu, D., Jiang, T., Wang, Y.: Completeness modeling and context separation for weakly supervised temporal action localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1298-1307 (2019)
+13. Liu, Z., Wang, L., Zhang, Q., Gao, Z., Niu, Z., Zheng, N., Hua, G.: Weakly supervised temporal action localization through contrast based evaluation networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3899-3908 (2019)
+14. Long, F., Yao, T., Qiu, Z., Tian, X., Luo, J., Mei, T.: Gaussian temporal awareness networks for action localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 344-353 (2019)
+15. Monfort, M., Andonian, A., Zhou, B., Ramakrishnan, K., Bargal, S.A., Yan, T., Brown, L., Fan, Q., Gutfreund, D., Vondrick, C., et al.: Moments in time dataset: one million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence pp. 1-8 (2019). https://doi.org/10.1109/TPAMI.2019.2901464
+
+16. Narayan, S., Cholakkal, H., Khan, F.S., Shao, L.: 3c-net: Category count and center loss for weakly-supervised action localization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 8679-8687 (2019)
+17. Nguyen, P., Liu, T., Prasad, G., Han, B.: Weakly supervised action localization by sparse temporal pooling network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6752-6761 (2018)
+18. Nguyen, P.X., Ramanan, D., Fowlkes, C.C.: Weakly-supervised action localization with background modeling. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5502-5511 (2019)
+19. Paul, S., Roy, S., Roy-Chowdhury, A.K.: W-talc: Weakly-supervised temporal activity localization and classification. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 563-579 (2018)
+20. Shou, Z., Chan, J., Zareian, A., Miyazawa, K., Chang, S.F.: Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 5734-5743 (2017)
+21. Shou, Z., Gao, H., Zhang, L., Miyazawa, K., Chang, S.F.: Autoloc: Weakly-supervised temporal action localization in untrimmed videos. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 154-171 (2018)
+22. Shou, Z., Wang, D., Chang, S.F.: Temporal action localization in untrimmed videos via multi-stage cnns. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1049-1058 (2016)
+23. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
+24. Wang, L., Xiong, Y., Lin, D., Van Gool, L.: Untrimmed nets for weakly supervised action recognition and detection. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 4325-4334 (2017)
+25. Wen, Y., Zhang, K., Li, Z., Qiao, Y.: A discriminative feature learning approach for deep face recognition. In: European conference on computer vision. pp. 499-515. Springer (2016)
+26. Xu, H., Das, A., Saenko, K.: R-c3d: Region convolutional 3d network for temporal activity detection. In: Proceedings of the IEEE international conference on computer vision. pp. 5783-5792 (2017)
+27. Xu, Y., Zhang, C., Cheng, Z., Xie, J., Niu, Y., Pu, S., Wu, F.: Segregated temporal assembly recurrent networks for weakly supervised multiple action detection. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 9070-9078 (2019)
+28. Yuan, Y., Lyu, Y., Shen, X., Tsang, I.W., Yeung, D.Y.: Marginalized average attentional network for weakly-supervised learning. In: International Conference on Learning Representations (ICLR) (2019)
+29. Zach, C., Pock, T., Bischof, H.: A duality based approach for realtime TV-L1 optical flow. In: Joint pattern recognition symposium. pp. 214-223. Springer (2007)
+30. Zhang, X., Wei, Y., Feng, J., Yang, Y., Huang, T.S.: Adversarial complementary learning for weakly supervised object localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1325-1334 (2018)
+31. Zhao, H., Torralba, A., Torresani, L., Yan, Z.: Hacs: Human action clips and segments dataset for recognition and temporal localization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 8668-8678 (2019)
+32. Zhao, Y., Xiong, Y., Wang, L., Wu, Z., Tang, X., Lin, D.: Temporal action detection with structured segment networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2914-2923 (2017)
\ No newline at end of file
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/images.zip b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..55ebe3b05bbfe4b24082c610370f6ec7fa5a6105
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66be2580e813ada692bad925cee10f96117ebb2ed45fa366baf49f750a6851d5
+size 550970
diff --git a/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/layout.json b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..44632a76b0228d97370bba333eaad988e4374cf1
--- /dev/null
+++ b/adversarialbackgroundawarelossforweaklysupervisedtemporalactivitylocalization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7cc28f6817de6ddb36939ffe61057df0168d09f2f4a5b01affc71929bb71c28
+size 437317
diff --git a/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_content_list.json b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7cccb7c791827b541a79faf857cf83decadce311
--- /dev/null
+++ b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73993f78ccd4f4703da1ed907f29452725bcd599d76cb2a218d144f6d7cf198d
+size 75344
diff --git a/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_model.json b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b1bf14260defc1d260ca678b15d4afde95dfffd3
--- /dev/null
+++ b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2007cc94f8ad590ad38f17364f7c829fb0ad1720210cc2f8ba8c89244be5083
+size 90918
diff --git a/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_origin.pdf b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3b475dc9603f3a465da0d6eb45c3364db266efb9
--- /dev/null
+++ b/adversarialcontinuallearning/8c1919de-dcbc-445e-a55c-aa507e399cb3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24b8c1bcceeb01b70595ab6c3b8349751098152ff363d176ce831c8c778f2c6e
+size 531369
diff --git a/adversarialcontinuallearning/full.md b/adversarialcontinuallearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..04333b8d67d93d4e43bd70a534a61ee11878c746
--- /dev/null
+++ b/adversarialcontinuallearning/full.md
@@ -0,0 +1,234 @@
+# Adversarial Continual Learning
+
+Sayna Ebrahimi $^{1,2}$ , Franziska Meier $^{1}$ , Roberto Calandra $^{1}$ , Trevor Darrell $^{2}$ , and Marcus Rohrbach $^{1}$
+
+$^{1}$Facebook AI Research, USA $\quad$ $^{2}$UC Berkeley EECS, Berkeley, CA, USA {sayna,trevor}@eecs.berkeley.edu, {fmeier,rcalandra,mrf}@fb.com
+
+Abstract. Continual learning aims to learn new tasks without forgetting previously learned ones. We hypothesize that representations learned to solve each task in a sequence have a shared structure while containing some task-specific properties. We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks. Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills. We demonstrate that our hybrid approach is effective in avoiding forgetting and show it is superior to both architecture-based and memory-based approaches on class-incremental learning of a single dataset as well as on a sequence of multiple datasets in image classification. Our code is available at https://github.com/facebookresearch/Adversarial-Continual-Learning.
+
+# 1 Introduction
+
+Humans can learn novel tasks by augmenting core capabilities with new skills learned based on information for a specific novel task. We conjecture that they can leverage a lifetime of previous task experiences in the form of fundamental skills that are robust to different task contexts. When a new task is encountered, these generic strategies form a base set of skills upon which task-specific learning can occur. We would like artificial learning agents to have the ability to solve many tasks sequentially under different conditions by developing task-specific and task-invariant skills that enable them to quickly adapt while avoiding *catastrophic forgetting* [24] using their memory.
+
+One line of continual learning approaches learns a single representation with a fixed capacity in which they detect important weight parameters for each task and minimize their further alteration in favor of learning new tasks. In contrast, structure-based approaches increase the capacity of the network to accommodate new tasks. However, these approaches do not scale well to a large number of tasks if they require a large amount of memory for each task. Another stream of approaches in continual learning relies on explicit or implicit experience replay by storing raw samples or training generative models, respectively. In this paper, we propose a novel adversarial continual learning (ACL) method in which a disjoint latent space representation composed of task-specific or private
+
+
+Fig. 1: Factorizing task-specific and task-invariant features in our method (ACL) while learning $T$ sequential tasks. Left: Shows ACL at training time, where the Shared module is adversarially trained with the discriminator to generate task-invariant features $(\mathbf{z}_S)$ while the discriminator attempts to predict task labels. Architecture growth occurs at the arrival of the $k^{th}$ task by adding task-specific modules denoted as $P^k$ and $p^k$, optimized to generate a representation $\mathbf{z}_P$ orthogonal to $\mathbf{z}_S$. To prevent forgetting, 1) private modules are stored for each task, and 2) the shared module, which is less prone to forgetting, is also retrained with experience replay using a limited number of exemplars. Right: At test time, the discriminator is removed and ACL uses the $P^k$ module for the specific task it is evaluated on.
+
+latent space is learned for each task, and a task-invariant or shared feature space is learned for all tasks to enhance knowledge transfer as well as recall of the previous tasks. The intuition behind our method is that tasks in a sequence share a part of the feature representation but also have a part of the feature representation which is task-specific. The shared features are notably less prone to forgetting, and the task-specific features are important to retain to avoid forgetting the corresponding task. Therefore, factorizing these features separates the part of the representation that forgets from that which does not forget. To disentangle the features associated with each task, we propose a novel adversarial learning approach to enforce the shared features to be task-invariant and employ orthogonality constraints [30] to prevent the shared features from appearing in the task-specific space.
+
+Once factorization is complete, minimizing forgetting in each space can be handled differently. In the task-specific latent space, due to the importance of these features for recalling the task, we freeze the private module and add a new one upon finishing learning a task. The shared module, however, is significantly less susceptible to forgetting, and we only use the replay buffer mechanism in this space to the extent that factorization is not perfect, i.e., when tasks have little overlap or a high domain shift in between, a tiny memory containing samples stored from prior tasks helps with better factorization and hence higher performance. We empirically found that, unlike other memory-based methods in which performance increases with more samples from prior tasks, our model requires a very tiny memory budget beyond which its performance remains constant. This alleviates the need to use old data, as in some applications it might not be possible to store a large amount of data, if any at all. Instead, our approach leaves room for further use of memory, if available and needed, for architecture growth. Our approach is simple yet surprisingly powerful in not forgetting, and achieves state-of-the-art results on visual continual learning benchmarks such as MNIST, CIFAR100, Permuted MNIST, miniImageNet, and a sequence of 5 tasks.
+
+# 2 Related Work
+
+# 2.1 Continual learning
+
+The existing continual learning approaches can be broadly divided into three categories: memory-based, structure-based, and regularization-based methods.
+
+Memory-based methods: Methods in this category mitigate forgetting by storing previous experience explicitly or implicitly: in the former, raw samples [6, 21, 28, 26, 27] are saved into memory for rehearsal, whereas in the latter a generative model such as a GAN [32] or an autoencoder [16] synthesizes them to perform pseudo-rehearsal. These methods allow for simultaneous multi-task learning on i.i.d. data, which can significantly reduce forgetting. A recent study on tiny episodic memories in CL [7] compared methods such as GEM [21], A-GEM [6], MER [27], and ER-RES [7]. Similar to [27], for ER-RES they used reservoir sampling with a single pass through the data. Reservoir sampling [39] is a better sampling strategy for long input streams compared to random selection. In this work, we explicitly store raw samples in a very tiny memory used as a replay buffer, and we differ from prior work in how these stored examples are used by specific parts of our model (the discriminator and the shared module) to prevent forgetting in the features found to be shared across tasks.
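To make the single-pass strategy concrete, here is a minimal stdlib sketch of the reservoir sampling update cited above. The function name and buffer layout are ours for illustration, not taken from [7] or [39]:

```python
import random

def reservoir_update(buffer, capacity, example, n_seen):
    # Reservoir sampling: after one pass over a stream of n items,
    # each item remains in the buffer with equal probability capacity / n.
    if len(buffer) < capacity:
        buffer.append(example)
    else:
        j = random.randint(0, n_seen)  # uniform over 0..n_seen inclusive
        if j < capacity:
            buffer[j] = example

# Single pass over a stream of 100 examples with a 5-slot memory.
buffer = []
for n, x in enumerate(range(100)):
    reservoir_update(buffer, capacity=5, example=x, n_seen=n)
```

The buffer never exceeds its capacity, which is what makes this strategy suitable for tiny episodic memories over long streams.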
+
+Structure-based methods: These methods exploit modularity and attempt to localize inference to a subset of the network such as columns [29], neurons [11, 41], or a mask over parameters [23, 31]. The performance on previous tasks is preserved by storing the learned module while accommodating new tasks by augmenting the network with new modules. For instance, Progressive Neural Nets (PNNs) [29] statically grow the architecture while retaining lateral connections to previously frozen modules, resulting in guaranteed zero forgetting at the price of quadratic scaling in the number of parameters. [41] proposed dynamically expandable networks (DEN) in which network capacity grows according to task relatedness by splitting/duplicating the most important neurons while time-stamping them so that they remain accessible and re-trainable at all times. This strategy, despite introducing computational cost, is inevitable in continual learning scenarios where a large number of tasks are to be learned and a fixed capacity cannot be assumed.
+
+Regularization methods: In these methods [18, 42, 1, 25, 10], the learning capacity is assumed fixed, and continual learning is performed such that changes in parameters are controlled, reduced, or prevented if they cause performance degradation on prior tasks. Therefore, for parameter selection, a measure of weight importance has to be defined to prioritize parameter usage. For instance, inspired by Bayesian learning, in the elastic weight consolidation (EWC) method [18] the important parameters are those with the highest values in terms of the Fisher information matrix. HAT [31] learns an attention mask over important parameters. The authors in [10] used per-weight uncertainty defined in Bayesian neural networks to control the change in parameters. Despite the success of these methods in maximizing the usage of a fixed capacity, they are often limited by the number of tasks.
+
+# 2.2 Adversarial learning
+
+Adversarial learning has been used for different problems such as generative models [13], object composition [2], representation learning [22], domain adaptation [36], active learning [34], etc. The use of an adversarial network enables the model to train in a fully-differentiable manner by adjusting to solve the minimax optimization problem [13]. Adversarial learning of the latent space has been extensively researched in domain adaptation [14], active learning [34], and representation learning [17, 22]. While previous literature is concerned with modeling single or multiple tasks at once, here we extend this literature by considering the continual learning setting, where multiple tasks need to be learned sequentially.
+
+# 2.3 Latent Space Factorization
+
+In the machine learning literature, multi-view learning aims at constructing and/or using different views or modalities for better learning performance [3, 40]. Approaches to multi-view learning either maximize the mutual agreement on distinct views of the data or focus on obtaining a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace, using canonical correlation analysis and clustering [8], Gaussian processes [33], etc. Therefore, the concept of factorizing the latent space into shared and private parts has been extensively explored for different data modalities. Inspired by the practicality of factorized representations in handling different modalities, here we factorize the latent space learned for different tasks using adversarial learning and orthogonality constraints [30].
+
+# 3 Adversarial Continual learning (ACL)
+
+We consider the problem of learning a sequence of $T$ data distributions denoted as $\mathcal{D}^{tr} = \{\mathcal{D}_1^{tr},\dots ,\mathcal{D}_T^{tr}\}$ , where $\mathcal{D}_k^{tr} = \{(\mathbf{X}_i^k,\mathbf{Y}_i^k,\mathbf{T}_i^k)_{i = 1}^{n_k}\}$ is the data distribution for task $k$ with $n_k$ sample tuples of input $(\mathbf{X}^k\in \mathcal{X})$ , output label $(\mathbf{Y}^k\in \mathcal{Y})$ , and task label $(\mathbf{T}^k\in \mathcal{T})$ . The goal is to sequentially learn the model $f_{\theta}:\mathcal{X}\to \mathcal{Y}$ for each task that can map each task input to its target output while maintaining its performance on all prior tasks. We aim to achieve this by learning a disjoint latent space representation composed of a task-specific latent space for each task and a task-invariant feature space shared by all tasks, to improve knowledge transfer as well as to better avoid catastrophic forgetting of prior knowledge. We mitigate catastrophic forgetting in each space differently. For the task-invariant feature space, we assume a limited memory budget $\mathcal{M}^k$ which stores $m$ samples $x_{i = 1\dots m}\sim \mathcal{D}_{j = 1\dots k - 1}^{tr}$ from every single task prior to $k$ .
+
+We begin by learning $f_{\theta}^{k}$ as a mapping from $\mathbf{X}^k$ to $\mathbf{Y}^k$ . For a $C$ -way classification task with a cross-entropy loss, this corresponds to
+
+$$
+\mathcal{L}_{\text{task}}\left(f_{\theta}^{k}, \mathbf{X}^{k}, \mathbf{Y}^{k}, \mathcal{M}^{k}\right) = -\mathbb{E}_{\left(x^{k}, y^{k}\right) \sim \left(\mathbf{X}^{k}, \mathbf{Y}^{k}\right) \cup \mathcal{M}^{k}} \sum_{c=1}^{C} \mathbb{1}_{[c = y^{k}]} \log\left(\sigma\left(f_{\theta}^{k}\left(x^{k}\right)\right)\right) \tag{1}
+$$
+
+where $\sigma$ is the softmax function and the subscript $i = \{1, \dots, n_k\}$ is dropped for simplicity. In the process of learning a sequence of tasks, an ideal $f^k$ is a model that maps the inputs to two independent latent spaces, where one contains the features shared among all tasks and the other remains private to each task. In particular, we would like to disentangle the latent space into the information shared across all tasks ( $\mathbf{z}_S$ ) and the independent or private information of each task ( $\mathbf{z}_P$ ), which should be as distinct as possible while their concatenation followed by a task-specific head outputs the desired targets.
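As a hedged illustration of Eq. 1, the following numpy sketch computes the softmax cross-entropy over a batch drawn from the current task and the replay memory (function names and toy logits are ours, not the paper's implementation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def task_loss(logits, labels):
    # Eq. 1: mean cross-entropy over a batch from (X^k, Y^k) ∪ M^k;
    # the indicator 1[c = y^k] selects the log-probability of the true class.
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# A batch of 3 samples, C = 4 classes: confidently correct logits.
logits = np.array([[4.0, 0.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0, 0.0],
                   [0.0, 0.0, 4.0, 0.0]])
labels = np.array([0, 1, 2])
loss = task_loss(logits, labels)
```

A confidently correct batch yields a much lower loss than uniform logits, whose loss equals $\log C$.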
+
+To this end, we introduce a mapping called Shared $(S_{\theta_S}:\mathcal{X}\to \mathbf{z}_S)$ and train it to generate features that fool an adversarial discriminator $D$ . Conversely, the adversarial discriminator $(D_{\theta_D}:\mathbf{z}_S\rightarrow \mathcal{T})$ attempts to classify the generated features by their task labels $(\mathbf{T}^{k\in \{0,\dots ,T\}})$ . This is achieved when the discriminator is trained to maximize the probability of assigning the correct task label to generated features while simultaneously $S$ is trained to confuse the discriminator by minimizing $\log (D(S(x^k)))$ . This corresponds to the following $T$ -way classification cross-entropy adversarial loss for this minimax game
+
+$$
+\mathcal{L}_{\mathrm{adv}}\left(D, S, \mathbf{X}^{k}, \mathbf{T}^{k}, \mathcal{M}^{k}\right) = \min_{S} \max_{D} \sum_{k=0}^{T} \mathbb{1}_{[k = t^{k}]} \log\left(D\left(S\left(x^{k}\right)\right)\right). \tag{2}
+$$
+
+Note that the extra label zero is associated with a 'fake' task label paired with randomly generated noise features $\mathbf{z}_S^{\prime} \sim \mathcal{N}(\mu, \Sigma)$ . In particular, we use adversarial learning in a different regime than appears in most work on generative adversarial networks [13]: generative modeling of the input data distribution is not utilized here, because the ultimate task is to learn a discriminative representation.
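One common way to implement this minimax game is to let the discriminator's cross-entropy gradient flow back into $S$ with its sign flipped. The numpy sketch below uses linear maps and hypothetical shapes purely for illustration; the paper's actual modules are CNNs/MLPs:

```python
import numpy as np

def grad_reverse(grad, lambd=1.0):
    # Models only the backward pass of a gradient reversal layer:
    # the forward pass is the identity, while the gradient is negated
    # (and optionally scaled), so minimizing the discriminator loss
    # w.r.t. S actually maximizes it.
    return -lambd * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))           # batch of inputs (hypothetical sizes)
W_s = rng.normal(size=(16, 4)) * 0.1   # "shared" module S as a linear map
W_d = rng.normal(size=(4, 3)) * 0.1    # discriminator over T + 1 = 3 task labels
t = rng.integers(0, 3, size=8)         # task labels (0 = 'fake' task)

z_s = x @ W_s                          # forward through S
logits = z_s @ W_d                     # forward through D
p = np.exp(logits - logits.max(1, keepdims=True))
p /= p.sum(1, keepdims=True)
g_logits = p.copy()
g_logits[np.arange(8), t] -= 1.0       # d(cross-entropy)/d(logits)
g_z = g_logits @ W_d.T                 # gradient reaching z_S
g_S = x.T @ grad_reverse(g_z)          # S receives the reversed gradient
```

With the reversal in place, a single optimizer step simultaneously improves $D$ and pushes $S$ toward task-invariant features.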
+
+To facilitate training $S$ , we use the gradient reversal layer [12], which optimizes the mapping to maximize the discriminator loss directly $(\mathcal{L}_{\mathrm{task}_S} = -\mathcal{L}_{\mathrm{D}})$ . It acts as an identity function during forward propagation but negates its inputs, reversing the gradients, during back-propagation. Training of $S$ and $D$ is complete when $S$ is able to generate features for which $D$ can no longer predict the correct task label, leading $\mathbf{z}_S$ to become as task-invariant as possible. The private module $(P_{\theta_P} : \mathcal{X} \to \mathbf{z}_P)$ , however, attempts to accommodate the task-invariant features by learning merely the features that are specific to the task at hand and do not exist in $\mathbf{z}_S$ . We further factorize $\mathbf{z}_S$ and $\mathbf{z}_P$ using the orthogonality constraints introduced in [30], also known as the "difference" loss in the domain adaptation literature [4], to prevent the features shared between all tasks from appearing in the private encoded features. This corresponds to
+
+$$
+\mathcal{L}_{\text{diff}}(S, P, \mathbf{X}^{k}, \mathcal{M}^{k}) = \sum_{k=1}^{T} \left\| \left(S(x^{k})\right)^{\mathrm{T}} P^{k}(x^{k}) \right\|_{F}^{2}, \tag{3}
+$$
+
+where $||\cdot||_F$ is the Frobenius norm, and the sum runs over the encoded features of all $P$ modules on samples from the current task and the memory.
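Eq. 3 can be illustrated with a short numpy sketch of the batch-level orthogonality penalty (the function name and toy features are ours):

```python
import numpy as np

def diff_loss(z_s, z_p):
    # Eq. 3: squared Frobenius norm of the (d_S x d_P) cross-correlation
    # between shared and private features over a batch; it is zero exactly
    # when the two feature sets are orthogonal across the batch.
    return np.linalg.norm(z_s.T @ z_p, ord="fro") ** 2

# Toy batch of two samples with 1-D features: z_p is orthogonal to z_s.
z_s = np.array([[1.0], [1.0]])
z_p = np.array([[1.0], [-1.0]])
```

Minimizing this term drives the private features out of the subspace occupied by the shared ones.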
+
+Final output predictions for each task are then made by a task-specific multi-layer perceptron head, which takes $\mathbf{z}_P$ concatenated with $\mathbf{z}_S$ ( $\mathbf{z}_P \oplus \mathbf{z}_S$ ) as input. Taken together, these losses form the complete objective for ACL as
+
+$$
+\mathcal{L}_{\mathrm{ACL}} = \lambda_{1} \mathcal{L}_{\mathrm{adv}} + \lambda_{2} \mathcal{L}_{\text{task}} + \lambda_{3} \mathcal{L}_{\text{diff}}, \tag{4}
+$$
+
+where $\lambda_1, \lambda_2$ , and $\lambda_3$ are regularizers to control the effect of each component. The full algorithm for ACL is given in Alg. 1 in the appendix.
+
+# 3.1 Avoiding forgetting in ACL
+
+Catastrophic forgetting occurs when a representation learned through a sequence of tasks changes in favor of learning the current task, resulting in performance degradation on previous tasks. The main insight of our approach is decoupling the conventional single representation learned for a sequence of tasks into two parts: a part that must not change because it contains task-specific features without which complete performance retrieval is not possible, and a part that is less prone to change as it contains the core structure of all tasks.
+
+To fully prevent catastrophic forgetting in the first part (private features), we use compact modules that can be stored in memory. If factorization is successful, the second part remains highly immune to forgetting. However, we empirically found that when disentanglement cannot be fully accomplished, either because of little overlap or a large domain shift between the tasks, using a tiny replay buffer containing a few samples of old data is beneficial for retaining high ACC values as well as mitigating forgetting.
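The tiny replay buffer above can be sketched as a per-class episodic memory. The class below is an illustrative stand-in (class and method names are ours, not from the ACL codebase), capped at one sample per class as in our smallest setting:

```python
class TinyReplayBuffer:
    """Illustrative episodic memory holding a fixed number of raw
    samples per (task, class) pair for rehearsing the shared module."""

    def __init__(self, samples_per_class=1):
        self.samples_per_class = samples_per_class
        self.store = {}  # (task_id, class_label) -> list of raw samples

    def add(self, task_id, label, x):
        # Keep only the first few samples seen for each class of each task.
        bucket = self.store.setdefault((task_id, label), [])
        if len(bucket) < self.samples_per_class:
            bucket.append(x)

    def samples(self):
        # All stored exemplars, e.g. to mix into the current batch (Eq. 1).
        return [x for bucket in self.store.values() for x in bucket]

buf = TinyReplayBuffer(samples_per_class=1)
for i in range(100):                  # many samples per class arrive...
    buf.add(task_id=0, label=i % 5, x=f"img_{i}")
```

For a 5-class task, the buffer above retains exactly 5 exemplars no matter how many samples stream past.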
+
+# 3.2 Evaluation metrics
+
+After training for each new task, we evaluate the resulting model on all prior tasks. Similar to [21, 10], to measure ACL performance we use ACC as the average test classification accuracy across all tasks. To measure forgetting we report backward transfer, BWT, which indicates how much learning new tasks has influenced the performance on previous tasks. While BWT $< 0$ directly reports catastrophic forgetting, BWT $> 0$ indicates that learning new tasks has helped with the preceding tasks.
+
+$$
+\mathrm{BWT} = \frac{1}{T-1} \sum_{i=1}^{T-1} \left(R_{T,i} - R_{i,i}\right), \qquad \mathrm{ACC} = \frac{1}{T} \sum_{i=1}^{T} R_{T,i} \tag{5}
+$$
+
+where $R_{n,i}$ is the test classification accuracy on task $i$ after sequentially finishing learning the $n^{\text{th}}$ task. We also compare methods based on the memory used, either for network architecture growth or for the replay buffer. Therefore, we convert both into memory size assuming numbers are 32-bit floating point, i.e., 4 bytes each.
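Eq. 5 and the memory conversion can be sketched in a few lines of numpy (function names and the toy accuracy matrix are ours):

```python
import numpy as np

def cl_metrics(R):
    # R[n, i]: test accuracy on task i after finishing task n (0-indexed),
    # lower triangle filled. Implements ACC and BWT from Eq. 5.
    T = R.shape[0]
    acc = R[T - 1].mean()
    bwt = float(np.mean(R[T - 1, :T - 1] - np.diag(R)[:T - 1]))
    return acc, bwt

def params_to_mb(n_params):
    # 32-bit floats: 4 bytes per parameter.
    return n_params * 4 / 1e6

# Toy 3-task run: accuracy on old tasks decays slightly after each new task.
R = np.array([[0.90, 0.00, 0.00],
              [0.80, 0.90, 0.00],
              [0.70, 0.85, 0.95]])
acc, bwt = cl_metrics(R)
```

Here the negative BWT quantifies forgetting: final accuracies on tasks 1 and 2 sit below their just-trained values on the diagonal.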
+
+# 4 Experiments
+
+In this section, we review the benchmark datasets and baselines used in our evaluation as well as the implementation details.
+
+# 4.1 ACL on Vision Benchmarks
+
+Datasets: We evaluate our approach on commonly used benchmark datasets for $T$ -split class-incremental learning, where the entire dataset is divided into $T$ disjoint subsets or tasks. We use common image classification datasets including 5-Split MNIST and Permuted MNIST [20], previously used in [25, 42, 10], 20-Split CIFAR100 [19] used in [42, 21, 6], and 20-Split miniImageNet [38] used in [7, 43]. We also benchmark ACL on a sequence of 5-Datasets including SVHN, CIFAR10, not-MNIST, Fashion-MNIST, and MNIST, and report average performance over multiple random task orderings. Dataset statistics are given in Table 5a in the appendix. No data augmentation of any kind has been used in our analysis.
+
+Baselines: From prior work, we compare with state-of-the-art approaches in all three categories described in Section 2, including Elastic Weight Consolidation (EWC) [18], Progressive Neural Networks (PNNs) [29], and Hard Attention Mask (HAT) [31], using the implementations provided by [31] unless otherwise stated. For memory-based methods including A-GEM, GEM, and ER-RES, on Permuted MNIST, 20-Split CIFAR100, and 20-Split miniImageNet, we relied on the implementation provided by [7], but changed the experimental setting from single- to multi-epoch and did not use 3 tasks for cross-validation, for a fairer comparison against ACL and the other baselines. On Permuted MNIST, results for SI [42] are reported from [31], results for VCL [25] are obtained using its original provided code, and results for Uncertainty-based CL in the Bayesian framework (UCB) [10] are reported directly from the paper. We also perform fine-tuning and joint training. In fine-tuning (ORD-FT), an ordinary single-module network without the discriminator is continuously trained without any forgetting avoidance strategy in the form of experience replay or architecture growth. In joint training with an ordinary network (ORD-JT) and with our ACL setup (ACL-JT), we learn all the tasks jointly, in a multitask learning fashion, using the entire dataset at once; this serves as the upper bound for average accuracy on all tasks, as it does not adhere to the continual learning scenario.
+
+Implementation details: For all ACL experiments except for Permuted MNIST and 5-Split MNIST we used a reduced AlexNet [15] architecture as the backbone for $S$ and $P$ modules for a fair comparison with the majority of our baselines.
+
+However, ACL can also be used with more sophisticated architectures (see our code repository for an implementation of ACL with a reduced ResNet18 backbone); throughout this paper, we only report results using AlexNet. The architecture of $S$ is composed of 3 convolutional and 4 fully-connected (FC) layers, whereas $P$ is only a convolutional neural network (CNN) with a similar number of layers and half-sized kernels compared to those used in $S$ . The private head modules $(p)$ and the discriminator are each composed of a small 3-layer perceptron. Due to the differences between the structure of our setup and a regular network with a single module, we used a CNN structure similar to $S$ followed by larger hidden FC layers to match the total number of parameters with our baselines throughout our experiments, for fair comparison. For 5-Split MNIST and Permuted MNIST, where baselines use a two-layer perceptron with 256 units in each layer and ReLU nonlinearity, we used a two-layer perceptron of size $784 \times 175$ and $175 \times 128$ with ReLU activation in between in the shared module, and a single layer of size $784 \times 128$ with ReLU for each $P$ . In each head, we also used an MLP with layers of size 256 and 28, ReLU activations, and a 14-unit softmax layer. In all our experiments, no pre-trained model is used. We used stochastic gradient descent in a multi-epoch setting for ACL and all the baselines.
+
+# 5 Results and Discussion
+
+In the first set of experiments, we measure ACC, BWT, and the memory used by ACL and compare against state-of-the-art methods with or without memory constraints on 20-Split miniImageNet. Next, we provide more insight and discussion on ACL and its components by performing an ablation study and visualizations on this dataset. In Section 6, we evaluate ACL in a more difficult continual learning setting where we sequentially train on 5 different datasets. Finally, in Section 7, we present experiments on sequentially learning single datasets such as 20-Split CIFAR100 and MNIST variants.
+
+# 5.1 ACL Performance on 20-Split miniImageNet
+
+Starting with 20-Split miniImageNet, we split the dataset into 20 tasks with 5 classes each. Table 1a shows the results obtained for ACL compared to several baselines. We compare ACL with HAT as a regularization-based method with no experience replay memory dependency, which achieves ACC=59.45 ± 0.05 with BWT=-0.04 ± 0.03%. Results for the memory-based methods ER-RES and A-GEM are re(produced) by us using the implementation provided in [7], with modifications to the network architecture to match ACL in the backbone structure as well as the number of parameters. We include only A-GEM in Table 1a, as it is a faster algorithm than its predecessor GEM with identical performance.
+
+Table 1: CL results on 20-Split miniImageNet measuring ACC (\%), BWT (\%), and Memory (MB). $(^{**})$ denotes that methods do not adhere to the continual learning setup: ACL-JT and ORD-JT serve as the upper bound for ACC for ACL/ORD networks, respectively. $(^{*})$ denotes result is re(produced) by us using the original provided code. $(\dagger)$ denotes result is obtained using the re-implementation setup by [31]. BWT of Zero indicates the method is zero-forgetting guaranteed. (b) Cumulative ablation study of ACL on miniImageNet where $P$ : private modules, $S$ : shared module, $D$ : discriminator, $\mathcal{L}_{\mathrm{diff}}$ : orthogonality constraint, and RB: replay buffer memory of one sample per class. All results are averaged over 3 runs and standard deviation is given in parentheses
+
+(a)
+
+| Method | ACC% | BWT% | Arch (MB) | Replay Buffer (MB) |
+| --- | --- | --- | --- | --- |
+| HAT* [31] | 59.45(0.05) | -0.04(0.03) | 123.6 | - |
+| PNN† [29] | 58.96(3.50) | Zero | 588 | - |
+| ER-RES* [7] | 57.32(2.56) | -11.34(2.32) | 102.6 | 110.1 |
+| A-GEM* [6] | 52.43(3.10) | -15.23(1.45) | 102.6 | 110.1 |
+| ORD-FT | 28.76(4.56) | -64.23(3.32) | 37.6 | - |
+| ORD-JT** | 69.56(0.78) | - | 5100 | - |
+| ACL-JT** | 66.89(0.32) | - | 5100 | - |
+| ACL (Ours) | 62.07(0.51) | 0.00(0.00) | 113.1 | 8.5 |
+
+(b)
+
+| # | S | P | D | $\mathcal{L}_{\mathrm{diff}}$ | RB | ACC% | BWT% |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | x | | | | | 21.19(4.43) | -60.10(4.14) |
+| 2 | | x | | | | 29.09(5.67) | Zero |
+| 3 | x | | x | | | 32.82(2.71) | -28.67(3.61) |
+| 4 | x | x | | x | | 49.13(3.45) | -3.99(0.42) |
+| 5 | x | x | | | | 50.15(1.41) | -14.32(2.34) |
+| 6 | x | x | | | x | 51.19(1.98) | -9.12(2.98) |
+| 7 | x | x | | x | x | 52.07(2.49) | -0.01(0.01) |
+| 8 | x | x | x | | | 55.72(1.42) | -0.12(0.34) |
+| 9 | x | x | x | x | | 57.66(1.44) | -3.71(1.31) |
+| 10 | x | x | x | | x | 60.28(0.52) | 0.00(0.00) |
+| 11 | x | x | x | x | x | 62.07(0.51) | 0.00(0.00) |
+
+A-GEM and ER-RES use an architecture with 25.6M parameters (102.6MB) along with storing 13 images of size $(84\times 84\times 3)$ per class (110.1MB), resulting in a total memory size of 212.7MB. ACL outperforms all baselines with $\mathrm{ACC} = 62.07\pm 0.51$ and BWT $= 0.00\pm 0.00$ , using a total memory of 121.6MB for architecture growth (113.1MB) and storing 1 sample per class for the replay buffer (8.5MB). In our ablation study in Section 5.2, we show that our performance without the replay buffer on this dataset is $\mathrm{ACC} = 57.66 \pm 1.44$ . However, ACL overcomes this gap by using only one image per class (5 per task) to achieve $\mathrm{ACC} = 62.07 \pm 0.51$ , without the need for a large buffer of old data when learning datasets like miniImageNet with diverse sets of classes.
+
+Table 2: Comparison of the effect of replay buffer size for ACL and the baselines A-GEM [6] and ER-RES [7] on 20-Split miniImageNet, where, unlike the baselines, ACL's performance remains unaffected by an increase in the number of samples stored per class, as discussed in Section 5.2. The results from this table are used to generate Fig. 2 in the appendix.
+
+| Samples per class | 1 | 3 | 5 | 13 |
+| --- | --- | --- | --- | --- |
+| A-GEM [6] | 45.14(3.42) | 49.12(4.69) | 50.24(4.56) | 52.43(3.10) |
+| ER-RES [7] | 40.21(2.68) | 46.87(4.51) | 53.45(3.45) | 57.32(2.56) |
+| ACL (ours) ACC | 62.07(0.51) | 61.80(0.50) | 61.69(0.61) | 61.33(0.40) |
+| ACL (ours) BWT | 0.00(0.00) | 0.01(0.00) | 0.01(0.00) | -0.01(0.02) |
+
+# 5.2 Ablation Studies on 20-Split miniImageNet
+
+We now analyze the major building blocks of our proposed framework, including the discriminator, the shared module, the private modules, the replay buffer, and the difference loss, on the miniImageNet dataset. We have performed a complete cumulative ablation study, for which the results are summarized in Table 1b and described as follows:
+
+Shared and private modules: Using only a shared module without any other ACL component (ablation #1 in Table 1b) yields the lowest ACC of $21.19 \pm 4.43$ as well as the lowest BWT performance of $-60.10 \pm 4.14$ , while using merely private modules (ablation #2) obtains a slightly better ACC of $29.09 \pm 5.67$ and zero-guaranteed forgetting by definition. However, in both scenarios the achieved ACC is too low considering that random chance is $20\%$ , which is due to the small size of the networks used in $S$ and $P$ .
+
+Discriminator and orthogonality constraint $(\mathcal{L}_{\mathrm{diff}})$ : The role of adversarial training, i.e., the presence of $D$ on top of $S$ and $P$ , can be seen by comparing ablations $\#8$ and $\#5$ , where in the latter $D$ , the only disentanglement mechanism, is eliminated. We observe that ACC improves from $50.15 \pm 1.41$ to $55.72 \pm 1.42\%$ and BWT increases from $-14.32 \pm 2.34$ to $-0.12 \pm 0.34\%$ . On the other hand, the effect of the orthogonality constraint as the only factorization mechanism is shown in ablation $\#4$ , where $\mathcal{L}_{\mathrm{diff}}$ cannot improve the ACC performance but increases BWT from $-14.32 \pm 2.34$ to $-3.99 \pm 0.42$ . Comparing ablations $\#8$ and $\#4$ shows the importance of adversarial training versus the orthogonality constraint in factorizing the latent spaces if they were to be used individually. To compare the roles of the adversarial and diff losses in the presence of the replay buffer (RB), we can compare $\#7$ and $\#10$ , in which $D$ and $\mathcal{L}_{\mathrm{diff}}$ are ablated, respectively. It appears again that $D$ improves ACC more than $\mathcal{L}_{\mathrm{diff}}$ , reaching $\mathrm{ACC} = 60.28 \pm 0.52$ , whereas $\mathcal{L}_{\mathrm{diff}}$ can only achieve $\mathrm{ACC} = 52.07 \pm 2.49$ . However, the effect of $D$ and $\mathcal{L}_{\mathrm{diff}}$ on BWT is nearly the same.
+
+Replay buffer: Here we explore the effect of adding the smallest possible replay memory to ACL, i.e., storing one sample per class for each task. Comparing ablation #9 and the most complete version of ACL (#11) shows that adding this memory improves ACC and BWT by $4.41\%$ and $3.71\%$ , respectively. We also evaluated ACL using more samples in the memory. Table 2 shows that, unlike the A-GEM and ER-RES approaches in which performance increases with more episodic memory, in ACL, ACC remains near its highest value. Being insensitive to the amount of old data is a remarkable feature of ACL, not only because of the small memory it consumes, but mainly because access to old data might be prohibited or very limited in some real-world applications. Therefore, for a fixed allowed memory size, a method that can effectively use it for architecture growth can be considered more practical for such applications.
+
+# 6 ACL Performance on a sequence of 5-Datasets
+
+In this section, we present our results for continual learning of 5 tasks using ACL in Table 3b. Similar to the previous experiment, we look at both ACC and BWT obtained for ACL, for fine-tuning, and for UCB as our baseline. Results for this sequence are averaged over 5 random permutations of tasks, and standard deviations are given in parentheses. CL on a sequence of datasets has previously been performed by two regularization-based approaches, UCB and HAT, where UCB was shown to be superior [10]. On this sequence, ACL is able to outperform UCB, reaching $\mathrm{ACC} = 78.55(\pm 0.29)$ and $\mathrm{BWT} = -0.01$ , using only half of the memory size and no replay buffer. Bayesian neural networks such as UCB have twice the number of parameters of a regular model, representing the mean and variance of the network weights. It is very encouraging to see that ACL is not only able to continually learn on a single dataset, but also across diverse datasets.
+
+# 7 Additional Experiments
+
+20-Split CIFAR100: In this experiment we incrementally learn CIFAR100, 5 classes at a time, over 20 tasks. As shown in Table 3, HAT is the most competitive baseline; although it does not depend on memory, it uses 27.2MB to store its architecture, in which it learns task-based attention maps, reaching $\mathrm{ACC} = 76.96 \pm 1.23\%$ . PNN uses 74.7MB to store the lateral modules in memory and guarantees zero forgetting. Results for A-GEM and ER-Reservoir are re(produced) by us using a CNN similar to our shared module architecture. We use fully connected layers with more neurons to compensate for the remaining number of parameters, reaching 25.4MB of memory. We also stored 13 images per class (1300 images of size $(32 \times 32 \times 3)$ in total), which requires 16.0MB of memory. ACL, however, achieves ACC = 78.08 ± 1.25% with BWT = 0.00 ± 0.01% using only 25.1MB, growing private modules with 167.2K parameters (0.6MB) and using no memory for a replay buffer, which is mainly due to the overuse of parameters for CIFAR100, considered a relatively 'easy' dataset with all tasks (classes) sharing the same data distribution. Disentangling shared and private latent spaces prevents ACL from using redundant parameters by storing only task-specific parameters in the $P$ and $p$ modules. In fact, as opposed to other memory-based methods, instead of starting from a large network and using memory to store samples, which might not be available in practice due to confidentiality issues (e.g., medical data), ACL uses memory to gradually add small modules to accommodate new tasks and relies on knowledge transfer through the learned shared module. The latter is what makes ACL different from architecture-based methods such as PNN, where the network grows by an entire column, resulting in memory usage highly disproportionate to what is needed to learn a new task.
+
+Permuted MNIST: Another popular variant of the MNIST dataset in the CL literature is Permuted MNIST, where each task is constructed by randomly permuting the pixels of the entire MNIST dataset. To compare against values reported in prior work, we particularly report on sequences of $T = 10$ and $T = 20$ tasks with ACC, BWT, and memory for ACL and the baselines. To further evaluate ACL's ability to handle more tasks, we continually learned up to 40 tasks. As shown in Table 4 in the appendix, among the regularization-based methods, HAT achieves the highest performance of $91.6\%$ [31] using an architecture of size 1.1MB. Vanilla VCL improves by $7\%$ in ACC and $6.5\%$ in BWT using a K-means core-set memory of 200 samples per task (6.3MB) and an architecture size similar to HAT. PNN appears as a strong baseline, achieving $\mathrm{ACC} = 93.5\%$ with guaranteed zero forgetting. Fine-tuning (ORD-FT) and joint training (ORD-JT) results for an ordinary network similar to EWC and HAT (a two-layer MLP with 256 units and ReLU activations) are also reported as reference values for the lowest BWT and highest achievable ACC, respectively. ACL achieves the highest accuracy among all baselines for both sequences of 10 and 20 tasks, reaching $\mathrm{ACC} = 98.03 \pm 0.01$ and $\mathrm{ACC} = 97.81 \pm 0.03$ with $\mathrm{BWT} = -0.01\%$ and $\mathrm{BWT} = 0\%$ , respectively, which shows that the performance of ACL drops only by $0.2\%$ as the number of tasks doubles. ACL also remains efficient in using memory to grow the architecture compactly, adding only 55K parameters (0.2MB) for each task and resulting in totals of 2.4MB and 5.0MB for the entire network, including the shared module and the discriminator, when $T = 10$ and $T = 20$ , respectively. We also observed that the performance of our model does not change as the number of tasks increases to 30 and 40 if each new task is accommodated with a new private module. We did not store old data and used memory only to grow the architecture by 55K parameters (0.2MB) per task.
+
+5-Split MNIST: As the last experiment in this section, we continually learn 0-9 MNIST digits by following the conventional pattern of learning 2 classes over 5 sequential tasks [25, 42, 10]. As shown in Table 6 in the appendix, we compare
+
+Table 3: CL results on 20-Split CIFAR100 measuring ACC (\%), BWT (\%), and Memory (MB). $(^{**})$ denotes that methods do not adhere to the continual learning setup: ACL-JT and ORD-JT serve as the upper bound for ACC for ACL/ORD networks, respectively. $(^{*})$ denotes result is obtained by using the original provided code. $(\dagger)$ denotes result is obtained using the re-implementation setup by [31]. $(^o)$ denotes result is reported by [7]. BWT of Zero indicates the method is zero-forgetting guaranteed. All results are averaged over 3 runs and standard deviation is given in parentheses.
+
+(a) 20-Split CIFAR100
+
+| Method | ACC% | BWT% | Arch (MB) | Replay Buffer (MB) |
+| --- | --- | --- | --- | --- |
+| HAT$^*$ [31] | 76.96(1.23) | 0.01(0.02) | 27.2 | - |
+| PNN$^\dagger$ [29] | 75.25(0.04) | Zero | 93.51 | - |
+| A-GEM$^o$ [6] | 54.38(3.84) | -21.99(4.05) | 25.4 | 16 |
+| ER-RES$^o$ [7] | 66.78(0.48) | -15.01(1.11) | 25.4 | 16 |
+| ORD-FT | 34.71(3.36) | -48.56(3.17) | 27.2 | - |
+| ORD-JT$^{**}$ | 78.67(0.34) | - | 764.5 | - |
+| ACL-JT$^{**}$ | 79.91(0.05) | - | 762.6 | - |
+| ACL (Ours) | 78.08(1.25) | 0.00(0.01) | 25.1 | - |
+
+(b) Sequence of 5 Datasets
+
+| Method | ACC% | BWT% | Arch (MB) | Replay Buffer (MB) |
+| --- | --- | --- | --- | --- |
+| UCB$^*$ [10] | 76.34(0.12) | -1.34(0.04) | 32.8 | - |
+| ORD-FT | 27.32(2.41) | -42.12(2.57) | 16.5 | - |
+| ACL (Ours) | 78.55(0.29) | -0.01(0.15) | 16.5 | - |
+
+ACL with regularization-based methods with no memory dependency (EWC, HAT, UCB, Vanilla VCL), methods relying on memory only (GEM), and VCL with a K-means core-set (VCL-C) storing 40 samples per task. ACL reaches $\mathrm{ACC} = (99.76 \pm 0.03)\%$ with zero forgetting, outperforming UCB with $\mathrm{ACC} = 99.63\%$, which uses nearly $40\%$ more memory. In this task we use only architecture growth (no experience replay): 54.3K private parameters are added for each task, resulting in a memory requirement of 1.6MB to store all private modules. Our core architecture has a total of 420.1K parameters. We also provide naive finetuning results for ACL and for a regular single-module network with 268K parameters (1.1MB). Joint training of the regular network (ORD-JT) yields $\mathrm{ACC} = 99.89 \pm 0.01$ and requires 189.3MB for the entire dataset as well as the architecture.
+
+Table 4: CL results on Permuted MNIST measuring ACC (\%), BWT (\%), and Memory (MB). $(^{**})$ denotes that methods do not adhere to the continual learning setup: ACL-JT and ORD-JT serve as the upper bound for ACC for ACL/ORD networks, respectively. $(^*)$ denotes a result obtained using the original provided code. $(\ddagger)$ denotes a result reported in the original work. $(^o)$ denotes results reported by [7] and $(^{oo})$ denotes results reported by [31]; T denotes the number of tasks. Note the difference between a BWT of Zero and one of 0.00: the former indicates the method is zero-forgetting guaranteed by definition, while the latter is computed using Eq. 5. All results are averaged over 3 runs; the standard deviation is provided in parentheses.
+
+| Method | ACC% | BWT% | Arch (MB) | Replay Buffer (MB) |
+| --- | --- | --- | --- | --- |
+| EWC$^{oo}$ [18] (T=10) | 88.2 | - | 1.1 | - |
+| HAT$^\ddagger$ [31] (T=10) | 97.4 | - | 2.8 | - |
+| UCB$^\ddagger$ [10] (T=10) | 91.44(0.04) | -0.38(0.02) | 2.2 | - |
+| VCL$^*$ [25] (T=10) | 88.80(0.23) | -7.90(0.23) | 1.1 | - |
+| VCL-C$^*$ [25] (T=10) | 95.79(0.10) | -1.38(0.12) | 1.1 | 6.3 |
+| PNN$^o$ [29] (T=20) | 93.5(0.07) | Zero | N/A | - |
+| ORD-FT (T=10) | 44.91(6.61) | -53.69(1.91) | 1.1 | - |
+| ORD-JT$^{**}$ (T=10) | 96.03(0.02) | - | 189.3 | - |
+| ACL-JT$^{**}$ (T=10) | 98.45(0.02) | - | 194.4 | - |
+| ACL (Ours) (T=10) | 98.03(0.01) | -0.01(0.01) | 2.4 | - |
+| ACL (Ours) (T=20) | 97.81(0.03) | 0.00(0.00) | 5.0 | - |
+| ACL (Ours) (T=30) | 97.81(0.03) | 0.00(0.00) | 7.2 | - |
+| ACL (Ours) (T=40) | 97.80(0.02) | 0.00(0.00) | 9.4 | - |
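
The ACC and BWT numbers in these tables are computed from the matrix of per-task test accuracies. The sketch below uses the common GEM-style convention with illustrative numbers; the paper's exact Eq. 5 may differ in detail.

```python
import numpy as np

def acc_bwt(R):
    """R[i, j]: test accuracy on task j after training up to task i.
    ACC is the mean final accuracy over all tasks; BWT averages, for each
    earlier task, the change from the accuracy it had right after learning."""
    T = R.shape[0]
    acc = R[-1].mean()
    bwt = float(np.mean([R[-1, j] - R[j, j] for j in range(T - 1)]))
    return acc, bwt

# illustrative 2-task accuracy matrix
R = np.array([[0.98, 0.10],
              [0.97, 0.96]])
acc, bwt = acc_bwt(R)   # final accuracies average to 0.965; task 0 lost 0.01
```

A negative BWT indicates forgetting of earlier tasks; a method with guaranteed zero forgetting (e.g. one with frozen per-task modules) has BWT of Zero by construction.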
+
+# 8 Conclusion
+
+In this work, we proposed a novel hybrid continual learning algorithm that factorizes the representation learned for a sequence of tasks into task-specific and task-invariant features, where the former must be fully preserved to avoid forgetting and the latter is empirically found to be remarkably less prone to forgetting. The novelty of our work is the use of adversarial learning along with orthogonality constraints to disentangle the shared and private latent representations, which results in compact private modules that can be stored in memory, efficiently preventing forgetting. A tiny replay buffer, although not critical, can also be integrated into our approach if forgetting occurs in the shared module. We established a new state of the art on CL benchmark datasets.
+
+# References
+
+1. Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., Tuytelaars, T.: Memory aware synapses: Learning what (not) to forget. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 139-154 (2018)
+2. Azadi, S., Pathak, D., Ebrahimi, S., Darrell, T.: Compositional GAN: Learning image-conditional binary composition. International Journal of Computer Vision pp. 1-16 (2020)
+3. Blum, A., Mitchell, T.: Combining labeled and unlabeled data with co-training. In: Proceedings of the eleventh annual conference on Computational learning theory. pp. 92-100. ACM (1998)
+4. Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., Erhan, D.: Domain separation networks. In: Advances in neural information processing systems. pp. 343-351 (2016)
+5. Chaudhry, A., Dokania, P.K., Ajanthan, T., Torr, P.H.: Riemannian walk for incremental learning: Understanding forgetting and intransigence. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 532-547 (2018)
+6. Chaudhry, A., Ranzato, M., Rohrbach, M., Elhoseiny, M.: Efficient lifelong learning with A-GEM. In: International Conference on Learning Representations (2019)
+7. Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan, T., Dokania, P.K., Torr, P.H., Ranzato, M.: Continual learning with tiny episodic memories. arXiv preprint arXiv:1902.10486 (2019)
+8. Chaudhuri, K., Kakade, S.M., Livescu, K., Sridharan, K.: Multi-view clustering via canonical correlation analysis. In: Proceedings of the 26th annual international conference on machine learning. pp. 129-136. ACM (2009)
+9. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A LargeScale Hierarchical Image Database. In: CVPR09 (2009)
+10. Ebrahimi, S., Elhoseiny, M., Darrell, T., Rohrbach, M.: Uncertainty-guided continual learning with bayesian neural networks. In: International Conference on Learning Representations (2020)
+11. Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A.A., Pritzel, A., Wierstra, D.: PathNet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734 (2017)
+12. Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17(1), 2096-2030 (2016)
+13. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672-2680 (2014)
+14. Hoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., Efros, A., Darrell, T.: CyCADA: Cycle-consistent adversarial domain adaptation. In: International Conference on Machine Learning. pp. 1989-1998 (2018)
+15. Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $< 0.5$ MB model size. arXiv preprint arXiv:1602.07360 (2016)
+16. Kemker, R., Kanan, C.: FearNet: Brain-inspired model for incremental learning. In: International Conference on Learning Representations (2018)
+17. Kim, H., Mnih, A.: Disentangling by factorising. arXiv preprint arXiv:1802.05983 (2018)
+
+18. Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences p. 201611835 (2017)
+19. Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Tech. rep., CiteSeer (2009)
+20. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278-2324 (1998)
+21. Lopez-Paz, D., et al.: Gradient episodic memory for continual learning. In: Advances in Neural Information Processing Systems. pp. 6467-6476 (2017)
+22. Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., Frey, B.: Adversarial autoencoders. arXiv preprint arXiv:1511.05644 (2015)
+23. Mallya, A., Lazebnik, S.: PackNet: Adding multiple tasks to a single network by iterative pruning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+24. McCloskey, M., Cohen, N.J.: Catastrophic interference in connectionist networks: The sequential learning problem. In: Psychology of learning and motivation, vol. 24, pp. 109-165. Elsevier (1989)
+25. Nguyen, C.V., Li, Y., Bui, T.D., Turner, R.E.: Variational continual learning. In: ICLR (2018)
+26. Rebuffi, S.A., Kolesnikov, A., Sperl, G., Lampert, C.H.: iCaRL: Incremental classifier and representation learning. In: CVPR (2017)
+27. Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., Tesauro, G.: Learning to learn without forgetting by maximizing transfer and minimizing interference. In: International Conference on Learning Representations (2019)
+28. Robins, A.: Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science 7(2), 123-146 (1995)
+29. Rusu, A.A., Rabinowitz, N.C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., Hadsell, R.: Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016)
+30. Salzmann, M., Ek, C.H., Urtasun, R., Darrell, T.: Factorized orthogonal latent spaces. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. pp. 701-708 (2010)
+31. Serra, J., Suris, D., Miron, M., Karatzoglou, A.: Overcoming catastrophic forgetting with hard attention to the task. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 4548-4557. PMLR (2018)
+32. Shin, H., Lee, J.K., Kim, J., Kim, J.: Continual learning with deep generative replay. In: Advances in Neural Information Processing Systems. pp. 2990-2999 (2017)
+33. Shon, A., Grochow, K., Hertzmann, A., Rao, R.P.: Learning shared latent structure for image synthesis and robotic imitation. In: Advances in neural information processing systems. pp. 1233-1240 (2006)
+34. Sinha, S., Ebrahimi, S., Darrell, T.: Variational adversarial active learning. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5972-5981 (2019)
+35. Srivastava, R.K., Masci, J., Kazerounian, S., Gomez, F., Schmidhuber, J.: Compete to compute. In: Advances in neural information processing systems. pp. 2310-2318 (2013)
+
+36. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7167-7176 (2017)
+37. Van Der Maaten, L.: Accelerating t-sne using tree-based algorithms. The Journal of Machine Learning Research 15(1), 3221-3245 (2014)
+38. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Advances in neural information processing systems. pp. 3630-3638 (2016)
+39. Vitter, J.S.: Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS) 11(1), 37-57 (1985)
+40. Xu, C., Tao, D., Xu, C.: A survey on multi-view learning. arXiv preprint arXiv:1304.5634 (2013)
+41. Yoon, J., Yang, E., Lee, J., Hwang, S.J.: Lifelong learning with dynamically expandable networks. In: International Conference on Learning Representations (2018)
+42. Zenke, F., Poole, B., Ganguli, S.: Continual learning through synaptic intelligence. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 3987-3995. PMLR (2017)
+43. Zhang, M., Wang, T., Lim, J.H., Feng, J.: Prototype reminding for continual learning. arXiv preprint arXiv:1905.09447 (2019)
\ No newline at end of file
diff --git a/adversarialcontinuallearning/images.zip b/adversarialcontinuallearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..eac61a77f3b99d5a18c58aaa257bef8d05788004
--- /dev/null
+++ b/adversarialcontinuallearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff2b609987d4f9970e709074728e26cb04e640213a8eabeff6e5c5b6f9374e6e
+size 337741
diff --git a/adversarialcontinuallearning/layout.json b/adversarialcontinuallearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..94032cd0ce6b62e8354aba44c2e07666acf8d47d
--- /dev/null
+++ b/adversarialcontinuallearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b19ea21306f2b0c84cd964c9fa0bf13bd54b64d48840400fcfb5a22cfdf833f6
+size 404012
diff --git a/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_content_list.json b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a39e91ae6f2258e48c120916205f89c2032776e7
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:009414208a15cee77141534f5d834e2a5aabd95bb8a8a58f5e4c687cd2c27b16
+size 82073
diff --git a/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_model.json b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bd2dc979a64939dace518d7be4ac8b8e60d3cebc
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71c6648f3a84f8ae01ae6ce2237938f71b8b275b4e704fcca1e4aa1dcec8d2c9
+size 100725
diff --git a/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_origin.pdf b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fb68ec5dee8f72d34db2c56b5c62c436e31e60fb
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/a6645722-1fe1-439e-94fa-390802f6580e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:307ad67cf6aeb902ab816c0dd20587b19374469888a4548d06c764fe7857d151
+size 2229537
diff --git a/adversarialdataaugmentationviadeformationstatistics/full.md b/adversarialdataaugmentationviadeformationstatistics/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..348c81981e3451b7ba73a1b63674755d5be47aea
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/full.md
@@ -0,0 +1,335 @@
+# Adversarial Data Augmentation via Deformation Statistics
+
+Sahin Olut1, Zhengyang Shen1, Zhenlin Xu1, Samuel Gerber2, and Marc Niethammer1
+
+1 UNC Chapel Hill {olut,zyshen,zhenlinx,mn}@cs.unc.edu
+2 Kitware Inc. samuel.gerber@kitware.com
+
+Abstract. Deep learning models have been successful in computer vision and medical image analysis. However, training these models frequently requires large labeled image sets whose creation is often very time- and labor-intensive, for example, in the context of 3D segmentations. Approaches capable of training deep segmentation networks with a limited number of labeled samples are therefore highly desirable. Data augmentation or semi-supervised approaches are commonly used to cope with limited labeled training data. However, the augmentation strategies for many existing approaches are either hand-engineered or require computationally demanding searches. To that end, we explore an augmentation strategy which builds statistical deformation models from unlabeled data via principal component analysis and uses the resulting statistical deformation space to augment the labeled training samples. Specifically, we obtain transformations via deep registration models. This allows for an intuitive control over plausible deformation magnitudes via the statistical model and, if combined with an appropriate deformation model, yields spatially regular transformations. To optimally augment a dataset we use an adversarial strategy integrated into our statistical deformation model. We demonstrate the effectiveness of our approach for the segmentation of knee cartilage from 3D magnetic resonance images. We show favorable performance compared to state-of-the-art augmentation approaches.
+
+Keywords: Data augmentation, image registration, and segmentation
+
+Image segmentation is an important task in computer vision and medical image analysis, for example, to localize objects of interest or to plan treatments and surgeries. Deep neural networks (DNNs) achieve state-of-the-art segmentation performance in these domains [20, 19, 44]. However, training DNNs typically relies on large datasets with labeled structures of interest. In many cases, and for medical segmentation problems in particular, labeled training data is scarce, as obtaining manual segmentations is costly and requires expertise [35]. To allow for training of well-performing DNNs from a limited number of segmentations, various data augmentation strategies have been proposed [24]. Augmentation strategies range from pre-defined random transformations [23, 25, 1, 27] to learning-based approaches [15, 42, 14]. Random transformations usually include intensity changes such as contrast enhancement, brightness adjustments, as well
+
+as random deformations, e.g., affine transformations. These methods are often difficult to tune as they do not directly estimate image variations observable in real data [10, 43, 24]. Generative Adversarial Networks (GANs) [12] are also widely used to generate data augmentations [5, 18, 33]. However, the proposed methods do not explicitly model deformation spaces and hence only have indirect control over augmentation realism.
+
+In fact, existing learning-based data augmentation techniques are generally not based on statistical deformation models as correspondences between random natural image pairs might not be meaningful. However, if such deformations can be established, as is frequently the case for medical images of the same anatomy within or across patients, they may be used to create plausible deformations for data augmentation [43, 18]. While statistical deformation models have a long history in computer vision [8, 7] and medical image analysis [28, 34] they have not been well explored in the context of data augmentation.
+
+Our proposed data augmentation method (AdvEigAug) uses learned deformation statistics as a sensible constraint within an adversarial data augmentation setting. Specifically, we make the following contributions:
+
+1) We explicitly model the deformation distribution via principal component analysis (PCA) to guide data augmentation. This allows us to estimate reasonable deformation ranges for augmentation.
+2) We propose to efficiently estimate this PCA deformation space via deep image registration models. We explore PCA models on displacement and momentum fields, where the momentum fields assure spatial regularity via the integration of a partial differential equation model.
+3) We integrate our PCA model into an adversarial formulation to select deformations for augmentation which are challenging for the segmentation.
+4) We extensively evaluate our augmentation approach and show favorable performance with respect to state-of-the-art augmentation approaches.
+
+The manuscript is organized as follows: Sec. 1 gives an overview of related work; Sec. 2 describes our AdvEigAug technique; Sec. 3 describes our experimental setup; Sec. 4 presents and discusses the results of our method.
+
+# 1 Related Work
+
+We focus our discussion here on related data augmentation and semi-supervised learning approaches that use adversarial training or image registrations.
+
+Adversarial Training It is well known that adversarial examples created by locally perturbing an input with imperceptible changes may drastically affect image classification results [30]. But it has also been shown that training DNNs with adversarial examples can improve DNN robustness and performance [13], which is our goal. Here, we focus on adversarial training via spatial transformations.
+
+Engstrom et al. [11] use rigid transformations to generate adversarial examples for data augmentations, whereas Kanbak et al. [21] develop ManiFool, an adversarial training technique which is based on affine transformations. A more general adversarial deformation strategy is pursued by Xiao et al. [37] where a displacement field is optimized to cause misclassifications. Smoothness of the displacement field is encouraged via regularization.
+
+All these approaches focus on classification instead of segmentation. Furthermore, transformation models are prescribed rather than inferred from observed transformations in the data and selecting a deformation magnitude requires an iterative approach. Our AdvEigAug approach instead explores using statistical deformation models obtained from image pairs by fluid- or displacement based registrations. The statistical model also results in a clear guidance for the perturbation magnitudes, thereby eliminating the need for iterations.
+
+Data Augmentation and Semi-Supervised Learning via Registration Image registration is widely used for atlas based segmentation [17]. As it allows estimating deformations between unlabeled image pairs it has also been used to create plausible data deformations for data augmentation. Via semi-supervised learning this allows training deep segmentation networks with very limited labeled training data by exploiting large unlabeled image sets. Such approaches have successfully been used in medical image segmentation [38, 43, 5].
+
+For example, Zhao et al. [43] train spatial and appearance transformation networks to synthesize labeled images which can then be used to train a segmentation network. Chaitanya et al. [5] model appearance variations and spatial transformations via GANs for few-shot image segmentation. Xu et al. [38] use a semi-supervised approach to jointly train a segmentation and a registration network. This allows the segmentation network to benefit from the transformations generated via the registration network while simultaneously improving registration results by including the obtained segmentations in the image similarity loss.
+
+However, the approaches above do not employ adversarial samples for training segmentation networks and do not use statistical deformation models. Instead, our AdvEigAug approach captures deformation statistics via a PCA model which is efficiently estimated, separately for each sample, via deep registration networks and integrated into an adversarial training strategy.
+
+# 2 Method
+
+Our approach is an adversarial training scheme. In particular, it builds upon, extends, and combines two different base strategies: (1) the estimation of a statistical deformation model and (2) the creation of adversarial examples that can fool a segmentation network and improve its training. Our proposed AdvEigAug approach builds upon the ManiFool augmentation idea [21]. ManiFool is limited to adversarial deformations for classification which are subject to an affine transformation model. The adversarial directions are based on the gradient with respect to the classification loss, but how far to move in this gradient
+
+direction requires tuning. Our approach on the other hand estimates a statistical deformation space which can be much richer than an affine transformation. In fact, we propose two efficient ways of computing these spaces, both based on non-parametric registration approaches: an approach based on a deep registration network directly predicting displacement fields [3] and another deep network predicting the initial momentum of the large deformation diffeomorphic metric mapping model (LDDMM) [4, 31]. The benefit of the LDDMM model is that it can assure spatially regular (diffeomorphic) spatial transformations. Furthermore, as we estimate a statistical model we can sample from it and we also have a direct measure of the range of deformations which are consistent with deformations observed in the data. Note that these deformation networks can use labeled data [2, 38] to improve registration accuracies, but can also be trained directly on intensity images [40, 3, 31], which is the strategy we follow here.
+
+As our approach is related to ManiFool and the augmentation approach in [11] we introduce our reformulation of their concepts for image segmentation and affine transformations in Sec. 2.1. Sec. 2.2 introduces our AdvEigAug approach.
+
+# 2.1 Baseline Method: AdvAffine
+
+Our baseline method is related to [11, 21], where rigid and affine transformations (in 2D) are generated with the goal of fooling a classifier. We extend the approach to 3D affine transformations and apply it to image segmentation instead of classification. While this is a minor change in terms of the loss function (see details below), it precludes simple approaches for step-size selection, i.e., to determine how strong an affine transformation to apply. For example, while the ManiFool approach takes adversarial steps until the classification label changes, such an approach is no longer appropriate when dealing with image segmentations, as one is now dealing with a set of labels instead of one.
+
+Specifically, we assume we have a deep neural segmentation network, NN, which, given an input image, $I$ , results in a segmentation $\hat{y}$ . We also assume we have a segmentation loss, $loss(y, \hat{y})$ (typically a binary cross-entropy loss averaged over the image volume; or a classification loss in [11]), where $y$ is the target segmentation. Our goal is to determine how to spatially transform an input image $I$ so that it is maximally detrimental (i.e., adversarial) for the loss to be minimized. We parameterize the affine transformation as
+
+$$
+\Phi^ {- 1} (x; \theta) = (E + A) x + b, A, E \in \mathbb {R} ^ {d \times d}, b, x \in \mathbb {R} ^ {d}, \tag {1}
+$$
+
+where $d$ denotes the spatial dimension ( $d = 3$ in our experiments), $x$ denotes the spatial coordinate, $\theta = \{A,b\}$ are the affine transformation parameters, and $E$ is the identity matrix, which allows us to easily start from the identity transform ( $A = 0, b = 0$ ). The transformed image is then $I \circ \Phi^{-1}$ . To compute an adversarial direction we perform gradient ascent with respect to the loss
+
+$$
+\mathcal{L}(\theta) = \mathrm{loss}(y, \hat{y}), \quad \text{s.t.} \quad \hat{y} = NN(I \circ \Phi^{-1}(\cdot; \theta)). \tag{2}
+$$
+
+It is unclear how far to step into the ascent direction for segmentation problems: we simply take $t$ steps with a chosen learning rate, $\Delta t$ , i.e., $\theta_{t + 1} = \theta_t + \Delta t\frac{\partial\mathcal{L}(\theta_t)}{\partial\theta}$ .
+
+This process is repeated for each sample in the training data set. The resulting affine transformations are then applied to the images and their segmentations (using nearest neighbor interpolation) to augment the training dataset.
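+
The whole AdvAffine loop can be sketched in numpy. Everything here is a toy stand-in: the "segmentation network" is just a sigmoid of the warped image, the gradient of Eq. 2 is taken by finite differences rather than backpropagation through a DNN, and the shapes and the $t$ steps of size $\Delta t$ are illustrative values, not the paper's settings.

```python
import numpy as np

def warp_affine(img, A, b):
    """Resample img at Phi^{-1}(x) = (E + A)x + b with bilinear interpolation."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys.ravel(), xs.ravel()])             # (2, h*w)
    sy, sx = (np.eye(2) + A) @ coords + b[:, None]
    sy = np.clip(sy, 0.0, h - 1.001)                        # stay inside the grid
    sx = np.clip(sx, 0.0, w - 1.001)
    y0, x0 = sy.astype(int), sx.astype(int)
    fy, fx = sy - y0, sx - x0
    out = (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0 + 1, x0] * fy * (1 - fx)
           + img[y0, x0 + 1] * (1 - fy) * fx + img[y0 + 1, x0 + 1] * fy * fx)
    return out.reshape(h, w)

def seg_loss(theta, img, target):
    """Binary cross-entropy of a sigmoid 'network' on the warped image."""
    A, b = theta[:4].reshape(2, 2), theta[4:]
    pred = 1.0 / (1.0 + np.exp(-warp_affine(img, A, b)))    # stand-in for NN
    eps = 1e-7
    return -np.mean(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))

def adv_affine(img, target, steps=5, dt=1e-3, fd=1e-4):
    """t gradient-ascent steps of size dt on theta = {A, b} (finite differences)."""
    theta = np.zeros(6)                                     # A = 0, b = 0: identity
    for _ in range(steps):
        g = np.zeros_like(theta)
        for i in range(len(theta)):                         # central differences
            e = np.zeros_like(theta)
            e[i] = fd
            g[i] = (seg_loss(theta + e, img, target)
                    - seg_loss(theta - e, img, target)) / (2 * fd)
        theta = theta + dt * g                              # ascend: make loss worse
    return theta

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
target = (img > 0).astype(float)
theta_adv = adv_affine(img, target)
```

Starting from the identity ($A = 0$, $b = 0$) keeps the initial augmented image equal to the input, so the ascent only moves away from it as far as the chosen number of steps allows.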
+
+# 2.2 Proposed Method: AdvEigAug
+
+Fig. 1 gives an overview of our proposed approach. We use a statistical deformation model to capture plausible anatomical variations and efficiently estimate them via two different deep registration approaches [31, 9]. These statistical deformation models are integrated into an adversarial strategy which selects a deformation direction to challenge the segmentation loss.
+
+
+Fig. 1. Overview: We first build the deformation model (1-3) and then obtain an adversarial deformation from the segmentation network (4-6). Our method generates samples (warped images and warped segmentations) that are difficult to segment, but that help training the segmentation network.
+
+Deformation Models Following the approach in Sec. 2.1 we want to obtain adversarial samples subject to a given transformation model. However, instead of specifying this transformation model (e.g., an affine model as in $AdvAffine$ ) we want to learn plausible models from data.
+
+Displacement Deformation Model We parameterize the transformation via a small number of learned displacement field basis elements:
+
+$$
+\Phi^ {- 1} \left(x; \left\{\theta_ {i} ^ {d} \right\}\right) = x + u (x) \approx x + \mu^ {d} (x) + \sum_ {i = 1} ^ {B} \theta_ {i} ^ {d} b _ {i} ^ {d} (x), \tag {3}
+$$
+
+where $u(x)$ is the displacement field which we approximate by $B$ basis elements. Here, $\mu^d (x)$ is the mean displacement field, $\{b_i^d (x)\}$ is a set of basis displacement fields, and $\{\theta_i^d\}$ are the basis coefficients with respect to which we will compute the adversarial direction. These basis fields could be specified, but we estimate them from sample image pairs (see Statistical Model paragraph below).
+
+Fluid Deformation Model Specifying the transformation via basis displacement fields $\{b_i^d (x)\}$, as in Eq. 3, might result in transformations that are not spatially smooth or invertible. For example, for large values of the coefficients $\{\theta_{i}\}$, folds might occur. This can be avoided by transformation models based on fluid mechanics [16], which avoid foldings by indirectly representing the transformation via the integration of a velocity field. Essentially, this amounts to concatenating an infinite number of small deformations whose regularity is easier to control. We use the following LDDMM transformation model:
+
+$$
+\phi_ {t} ^ {- 1} + D \phi^ {- 1} v = 0, \phi^ {- 1} (x, 0) = x, \tag {4}
+$$
+
+$$
+m _ {t} + d i v (v) m + D v ^ {T} (m) + D m (v) = 0, v = K \star m, \tag {5}
+$$
+
+$$
+m (x, 0) = m _ {0} (x) \approx \mu^ {m} (x) + \sum_ {i = 1} ^ {B} \theta_ {i} ^ {m} b _ {i} ^ {m} (x), \Phi^ {- 1} (x; \left\{\theta_ {i} ^ {m} \right\}) = \phi^ {- 1} (x, 1), \tag {6}
+$$
+
+where $(\cdot)_t$ denotes a partial derivative with respect to time and $D$ the Jacobian. This model is based on the LDDMM shooting equations of [41, 36, 32] which allow specifying the spatial transformation $\phi^{-1}(x,t)$ for all times $t$ via the initial momentum $m_0(x)$ and the solution of the Euler-Poincaré equation for diffeomorphisms [41] (Eq. 5). Our desired transformation $\Phi^{-1}(x;\{\theta_i^m\})$ is obtained after integration for unit time starting from the initial momentum which (similar to the displacement transformation model) we approximate and parameterize via a finite set of momentum basis vector fields $\{b_i^m (x)\}$ ; $\{\theta_i^m\}$ are the basis coefficients and $\mu^m (x)$ is the mean momentum field. The instantaneous velocity field, $v$ , is obtained by smoothing the momentum field, $m$ , via $K$ . We use a multi-Gaussian smoother $K$ for our experiments [26]. Note that for a sufficiently strong smoother $K$ the resulting spatial transformations are assured to be diffeomorphic. We will see that this is a convenient property for our augmentation strategy.
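+
The multi-Gaussian smoother $K$ is easy to illustrate in 1D: the kernel is a weighted sum of normalized Gaussians, and $v = K \star m$ is an ordinary convolution. The weights and standard deviations below are made-up values, not the settings of [26].

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discretized, normalized 1D Gaussian."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def multi_gaussian_smooth(m, sigmas=(0.5, 1.0, 2.0), weights=(0.2, 0.3, 0.5)):
    """v = K * m where K is a weighted sum of Gaussians (weights sum to 1)."""
    radius = int(4 * max(sigmas))
    K = sum(w * gaussian_kernel(s, radius) for w, s in zip(weights, sigmas))
    return np.convolve(m, K, mode="same")

m = np.zeros(64)
m[32] = 1.0                      # a point "momentum" impulse
v = multi_gaussian_smooth(m)     # smoothed velocity: a blurred, mass-preserving bump
```

Mixing several standard deviations lets the smoother act across multiple spatial scales at once; a stronger (wider) $K$ yields smoother velocity fields and hence more regular transformations.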
+
+Statistical Model The introduced deformation models require the specification of their mean displacement and momentum vector fields ($\mu^d (x)$, $\mu^m (x)$) as well as of their basis vector fields ($\{b_i^d (x)\}$, $\{b_i^m (x)\}$). We learn them from data. Specifically, given a source image $I$ we register it to $N$ target images $\{I_i\}$ based on the displacement or the fluid deformation model respectively (without the finite basis approximation). This results in a set of $N$ displacement fields $\{u_{i}(x)\}$ or initial momentum fields $\{m_i(x)\}$ from which we can build the statistical model. Specifically, we obtain the set of basis vectors ($\{b_i^d (x)\}$, $\{b_i^m (x)\}$) via principal component analysis (PCA) based on the covariance matrix, $C$, of the displacement or initial momentum fields. As we estimate these spaces relative to a source image $I$ (which defines the tangent space for the transformations)
+
+we typically set \(\mu^d(x) = 0\) and \(\mu^m(x) = 0\). In the following we denote the set of (eigenvalue,eigenvector) pairs of the covariance matrix as \{(\lambda_i, e_i(x))\}\), where eigenvalues are ordered in descending order, i.e., \(\lambda_i \geq \lambda_{i+1} \geq 0\). The same analysis will hold for the displacement-based and the fluid transformation models. The only difference will be how to obtain the transformations from the basis representations. As computing registrations by numerical optimization is costly, we approximate their solution via deep-learning registration models [3, 31]. These can rapidly predict the displacement or initial momentum fields.
+
+# Algorithm 1 AdvEigAug
+
Inputs: $I_0$; $\mathbf{X}$, a set of (zero-centered) displacement fields from registering $I_0$ to a set of target images; $y$, segmentation mask

Outputs: augmented image $I_{aug}$; $y_{aug}$, mask for the augmented image

Compute the Gramian matrix:

$\mathbf{G} = \mathbf{X}\mathbf{X}^{\mathrm{T}}$, $\quad \{\lambda_i\}, \{e_i\} = \text{eigendecompose}(\mathbf{G})$

Compute deformation & warp image:

$\{\theta_i\}_{i=1}^{B} \gets 0$, $\quad \phi^{-1}(x) \gets x + \mu^{d}(x) + \sum_{i=1}^{B}\theta_{i}e_{i}(x)$, $\quad \hat{\mathbf{I}}_0 = \mathbf{I}_0 \circ \phi^{-1}(\mathbf{x})$

Get the gradient w.r.t. $\theta$:

$\hat{\mathbf{y}} = \text{predict\_segmentation}(\hat{\mathbf{I}}_0)$, $\quad f = \nabla_{\theta}\,\text{segmentation\_loss}(y, \hat{y})$

Update $\theta$ and recompute the deformation:

$$
\theta \gets \frac{f\,r}{\sqrt{\sum_{i=1}^{B} f_{i}^{2} / \lambda_{i}}}, \qquad \phi^{-1}(x) \gets x + \mu^{d}(x) + \sum_{i=1}^{B}\theta_{i}e_{i}(x)
$$

Warp image and mask:

$\mathbf{I}_{aug} = \mathbf{I}_0 \circ \phi^{-1}(\mathbf{x})$, $\quad \mathbf{y}_{aug} = \mathbf{y} \circ \phi^{-1}(\mathbf{x})$
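The algorithm above can be sketched in NumPy. This is an illustrative reimplementation, not the authors' code: the names `eig_basis`, `adv_eig_aug`, and the `segmentation_grad` argument (standing in for the gradient $f$ obtained by backpropagating the segmentation loss) are our own.

```python
import numpy as np

def eig_basis(X, B=2):
    """Top-B PCA basis of N zero-centered, flattened displacement fields.

    Eigendecomposes the small N x N Gramian G = X X^T (cheap for N << D)
    and maps its eigenvectors back to D-dimensional displacement space:
    if G v = lam v, then e = X^T v / sqrt(lam) is a unit eigenvector of X^T X.
    """
    G = X @ X.T
    lam, V = np.linalg.eigh(G)                   # eigenvalues, ascending
    lam, V = lam[::-1][:B], V[:, ::-1][:, :B]    # keep top-B, descending
    E = X.T @ V / np.sqrt(lam)                   # columns are the e_i(x)
    return lam, E

def adv_eig_aug(X, segmentation_grad, r):
    """One AdvEigAug step: scale the adversarial gradient f (w.r.t. the
    basis coefficients theta) to lie r standard deviations out (Eq. 9),
    and return the resulting adversarial displacement field u(x)."""
    f = segmentation_grad
    lam, E = eig_basis(X, B=f.shape[0])
    theta = f * r / np.sqrt(np.sum(f**2 / lam))
    return E @ theta
```

Warping $I_0$ and $y$ with $\phi^{-1}(x) = x + u(x)$ then yields the augmented pair.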
+
+Low-rank Approximation Given a set of $N$ centered displacement or initial momentum vector fields (linearized into column vectors) we can write the covariance matrix, $C$ , and its low-rank approximation $C_l$ as
+
+$$
+C = \sum_ {i = 1} ^ {N} \lambda_ {i} e _ {i} e _ {i} ^ {T} \approx \sum_ {i = 1} ^ {B} \lambda_ {i} e _ {i} e _ {i} ^ {T} = C _ {B}. \tag {7}
+$$
+
+We use the first $B$ eigenvectors to define the basis vectors for our deformation models. As for the AdvAffine approach of Sec. 2.1 we can then compute the gradient of the segmentation loss with respect to the transformation parameters
+
+$$
+f := \frac {\partial \mathcal {L} (\theta)}{\partial \theta}. \tag {8}
+$$
+
The only change is in the deformation model, where $\theta$ is now either $\{\theta_i^d\}$ or $\{\theta_i^m\}$, parameterizing the displacement or the initial momentum field respectively. Everything else stays unchanged. A key benefit of our approach, however, is that the low-rank covariance matrix, $C_B$, induces a $B$-dimensional Gaussian distribution on the basis coefficients $\{\theta_i\}$, which are by construction decorrelated and have variances $\{\lambda_i\}$; i.e., they are normally distributed according to $N(0, C_m)$, where $C_m = \text{diag}(\{\lambda_i\})$. As we will see next, this is beneficial for defining step sizes in the adversarial direction which are consistent with the observed data.
+
Adversarial Direction So far our transformation models allow more flexible, data-driven transformations than the affine transformation model of Sec. 2.1, but it is still unclear how far one should move in the adversarial gradient direction, $f$ (Eq. 8). However, now that we have a statistical deformation model we can use it to obtain deformation parameters $\theta$ which are consistent with the range of values that should be expected based on the statistical model. Specifically, we want to specify how far we move away from the mean in terms of standard deviations of the distribution, while moving in the adversarial direction. To move $r \geq 0$ standard deviations away we scale the adversarial gradient as follows
+
+$$
+\theta = \frac {f r}{\sqrt {\sum_ {i = 1} ^ {B} f _ {i} ^ {2} / \lambda_ {i}}}. \tag {9}
+$$
+
+It is easy to see why this is the case.
+
+Proof. We first transform the gradient direction (with respect to the basis coefficients, $\theta$ ), $f$ , to Mahalanobis space via $C_m^{-\frac{1}{2}}$ in which the coefficients, $\theta$ , are distributed according to $N(0,I)$ . We are only interested in the direction and not the magnitude of the adversarial gradient, $f$ . Hence, we normalize it to obtain
+
+$$
+\bar {f} = \frac {C _ {m} ^ {- \frac {1}{2}} f}{\left\| C _ {m} ^ {- \frac {1}{2}} f \right\| _ {2}}. \tag {10}
+$$
+
+In this space, we scale the transformed coefficients by $r$ to move $r$ standard deviations out and subsequently transform back to the original space by multiplying with the inverse transform $C_m^{\frac{1}{2}}$ resulting in
+
+$$
+\theta = C _ {m} ^ {\frac {1}{2}} \bar {f} r = C _ {m} ^ {\frac {1}{2}} \frac {C _ {m} ^ {- \frac {1}{2}} f r}{\| C _ {m} ^ {- \frac {1}{2}} f \| _ {2}} = \frac {f r}{\| C _ {m} ^ {- \frac {1}{2}} f \| _ {2}} = \frac {f r}{\sqrt {\sum_ {i = 1} ^ {B} f _ {i} ^ {2} / \lambda_ {i}}}. \tag {11}
+$$
+
Given an adversarial direction, $f$, this allows us to sample a set of transformation coefficients, $\theta$, which are consistent with this adversarial direction and which have a desired magnitude, $r$, as estimated via the statistical deformation model. Hence, in contrast to the AdvAffine approach of Sec. 2.1, there is now an intuitive way of specifying how large the adversarial deformations should be so that they remain consistent with the observed deformations between image pairs. Fig. 1 shows an example illustration of determining such a scaled adversarial direction. Pseudo-code for the overall augmentation algorithm for the displacement-based deformation model is given in Alg. 1. The fluid-flow-based approach follows similarly, but requires integration and backpropagation through Eqs. 4-6. The resulting deformations are then applied to the training image and its segmentation (using nearest neighbor interpolation) to augment the training dataset.
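The derivation in Eqs. 10-11 can also be checked numerically: the explicit route through Mahalanobis space agrees with the closed form of Eq. 9. The values below are purely illustrative.

```python
import numpy as np

lam = np.array([4.0, 1.0])   # eigenvalues (variances of the theta_i)
f = np.array([0.3, -0.7])    # adversarial gradient w.r.t. theta
r = 2.0                      # desired distance in standard deviations

# Explicit route (Eqs. 10-11): whiten, normalize, scale by r, un-whiten.
Cm_inv_sqrt = np.diag(1.0 / np.sqrt(lam))
f_bar = Cm_inv_sqrt @ f / np.linalg.norm(Cm_inv_sqrt @ f)
theta_explicit = np.diag(np.sqrt(lam)) @ f_bar * r

# Closed form (Eq. 9).
theta_closed = f * r / np.sqrt(np.sum(f**2 / lam))
```

Both routes agree, and the resulting $\theta$ lies exactly $r$ standard deviations from the mean in Mahalanobis distance.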
+
+# 3 Experiments
+
We investigate the performance of different augmentation strategies for the segmentation of knee cartilage from the 3D magnetic resonance images (MRIs) of the Osteoarthritis Initiative. All images are affinely registered to a knee atlas. The original images are of size $384 \times 384 \times 160$ with a voxel size of $0.36 \times 0.36 \times 0.7 \, \text{mm}^3$. We normalize the intensities of each image such that the 0.1 percentile and the 99.9 percentile are mapped to 0 and 1 respectively, and clamp smaller values to 0 and larger values to 1 to suppress outliers. We do not bias-field correct these images, as it has been shown in [39] that this is not required for this dataset. This also justifies our choice to test the Brainstorm approach without its appearance-normalization part. To be able to store the 3D data in the 11 GB of our NVIDIA GTX 2080Ti GPU we resample the input images and their segmentations to $190 \times 190 \times 152$. Note that this resampling might introduce a slight evaluation bias with respect to the manual segmentations drawn on the original full-resolution images. However, our objective is to compare augmentation approaches, which would all be equally affected; hence, the relative comparisons between methods remain fair. To ensure comparability between methods, we use the same hyperparameters as in the NoAug experiments across all experiments. All segmentation networks are trained with a learning rate of 0.001 using Adam [22] with $\beta_{1} = 0.5$ and $\beta_{2} = 0.99$. For the displacement field network, we use a learning rate of 0.0002. For the momentum generation network, we use the same settings as in [32].
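The percentile-based intensity normalization described above can be sketched as follows; this is our own minimal reading of the description, not the authors' preprocessing code.

```python
import numpy as np

def normalize_intensities(img):
    """Map the 0.1 and 99.9 intensity percentiles to [0, 1] and clamp
    values outside this range, as described for the OAI knee MRIs."""
    lo, hi = np.percentile(img, [0.1, 99.9])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```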
+
+# 3.1 Registration and Segmentation Networks
+
+We use a 3D-UNet [6] with 5 encoder and decoder layers. The number of feature channels in the encoder layers are [16,32,32,32,32] and the number of feature channels in the decoder layers are [32,32,32,8,8]. Each convolutional block is followed by batch normalization and ReLU activation except for the last one.
+
+Segmentation Network We use binary cross-entropy loss for simplicity and jointly segment the femoral and tibial cartilage. However, our approach generalizes to any loss function, e.g., to multiple class labels.
+
+Registration Networks We train two registration networks: to predict (1) displacement fields and (2) the initial momentum of EPDiff (Eq. 5).
+
+Displacement field network For the displacement field network we follow the VoxelMorph architecture [3], but use a modified loss of the form
+
+$$
\mathcal{L}_{\text{disp}}(u(x)) = \mathcal{L}_{\text{reg}}^{d}(u(x)) + \lambda^{d} \mathcal{L}_{\text{sim}}\left(I_{T}, I \circ (x + u(x))\right), \tag{12}
+$$
+
where $u$ is the displacement field predicted by the network for a given source image, $I$, and target image, $I_T$; $\mathcal{L}_{sim}$ is the image similarity measure (we use normalized cross correlation (NCC)) and $\mathcal{L}_{reg}^d$ is the regularizer, for which we choose the bending energy [29] to keep transformations smooth:
+
+$$
\mathcal{L}_{reg}^{d}(u(x)) = \frac{1}{N} \sum_{x} \sum_{i=1}^{d} \left\| H\left(u_{i}(x)\right) \right\|_{F}^{2}, \tag{13}
+$$
+
+where $\| \cdot \| _F$ is the Frobenius norm, $H(u_{i}(x))$ is the Hessian of the i-th component of $u(x)$ , $N$ the number of voxels, and $d$ is the spatial dimension ( $d = 3$ ). We set $\lambda^d = 200$ .
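The bending energy of Eq. 13 can be approximated with finite differences; a sketch (our own, using `np.gradient` twice to build the per-component Hessians):

```python
import numpy as np

def bending_energy(u, spacing=1.0):
    """Bending energy of Eq. 13: mean over voxels of the squared Frobenius
    norms of the Hessians of each displacement component.

    u has shape (d, X, Y, Z) with d = 3 spatial components.
    """
    energy = 0.0
    for ui in u:                              # each displacement component u_i
        grads = np.gradient(ui, spacing)      # first derivatives along d axes
        # Sum of squared second derivatives = ||H(u_i)||_F^2 per voxel.
        hess_sq = sum(h**2 for g in grads for h in np.gradient(g, spacing))
        energy += hess_sq.mean()              # (1/N) sum_x ||H(u_i(x))||_F^2
    return energy
```

A quick sanity check: an affine (linear) displacement field has zero bending energy, while a quadratic one does not.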
+
+Initial momentum network We use the LDDMM network of [32], which predicts the initial momentum as an intermediate output. The network loss is
+
+$$
\mathcal{L}_{m}(m_{0}(x)) = \mathcal{L}_{reg}^{m}(m_{0}(x)) + \lambda^{m} \mathcal{L}_{sim}\left(I_{T}, I \circ \Phi^{-1}\right), \quad \text{s.t. Eqs. 4--6,} \tag{14}
+$$
+
where $m_0(x)$ is the initial momentum (not yet approximated via a finite basis for the network loss), $\mathcal{L}_{reg}^m = \langle m_0, K * m_0 \rangle$, $\Phi^{-1}$ is the spatial transform induced by $m_0(x)$, and we use a localized NCC similarity loss [31]. See [32] for details.
+
+# 3.2 Experimental Design
+
+Our goal is to demonstrate that (1) building a statistical deformation model is useful, that (2) using adversarial samples based on the statistical deformation model improves segmentation results, and that (3) AdvEigAug outperforms other augmentation approaches, in particular, approaches that do not attempt to learn deformation models from data. We focus on a few-shot learning setting.
+
+# Methods we Compare to and Rationale for Comparisons
+
+NoAug uses no augmentation. It only uses the segmented training samples. All augmentation approaches are expected to outperform this method.
+
RandDeform is our representative of an augmentation strategy using various spatial transformations which are not inferred from data, and is hence expected to perform worse than approaches that do. Specifically, we randomly rotate images by up to $\pm 15$ degrees about all axes and apply random translations of up to 15 voxels in each spatial direction. We simultaneously apply random displacement fields. Translations and rotations are drawn from uniform distributions. The displacement fields are obtained by Gaussian smoothing of per-voxel displacements drawn from a unit normal distribution.
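The random displacement fields of RandDeform can be sketched as below. This is our own reading of the description; the smoothing kernel width `sigma` and scaling `amplitude` are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rand_deform_field(shape, sigma=4.0, amplitude=3.0, rng=None):
    """Smooth random displacement field: unit-normal noise per spatial
    direction, smoothed with a Gaussian filter (RandDeform-style sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((3, *shape))            # one 3D field per axis
    return amplitude * np.stack([gaussian_filter(n, sigma) for n in noise])
```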
+
+AdvAffine is described in detail in Sec. 2.1. It is our baseline adversarial augmentation approach. It lacks our ability to determine a desirable deformation magnitude, uses a simple affine transformation, and does not build a statistical deformation model. Our various AdvEigAug approaches directly compete with AdvAffine and favorable performance indicates that using statistical deformation models can be beneficial for augmentation. We choose $t = 20$ and $\Delta t = 0.25$ to select the deformation magnitude for AdvAffine.
+
+Brainstorm [43] relates to our AdvEigAug approach in the sense that it uses unlabeled images via registrations (we use the displacement field network of Sec. 3.1 for its implementation) to create realistic deformations for augmentation. In contrast to AdvEigAug it is not an adversarial approach nor does it build a statistical deformation model and hence it relies on the set of deformations that can be observed between images. We will show that these design choices matter by comparing to AdvEigAug directly and to a non-adversarial variant EigAug (explained next). Brainstorm also uses an intensity transfer network. However, as we are working on one specific dataset without strong appearance differences we only use the deformation component of Brainstorm for a more direct comparison.
+
+EigAug is our AdvEigAug approach when replacing the adversarial gradient direction, $f$ by a random direction. We compare against this approach to (1) show that modeling the deformation space is beneficial (e.g., with respect to Brainstorm) and (2) that using the adversarial direction of AdvEigAug yields improvements.
+
+$AdvEigAug$ is our proposed approach and described in detail in Sec. 2.2.
+
+Upper bound We use the segmentations for the entire training set $(n = 200)$ to train the segmentation network. We regard the resulting segmentation accuracy as the quasi upper-bound for achievable segmentation accuracy.
+
+# Randomization and Augmentation Strategies
+
Randomization AdvEigAug and EigAug require the selection of $r$, which specifies how many standard deviations away from the mean the parameterized deformation lies. In our experiments we explore fixing $r \in \{1,2,3\}$ as well as randomly selecting it: $r \sim U(1.5,3)$. We hypothesize that randomly selecting $r$ results in a richer set of deformations and hence in better segmentation performance.
+
+Covariance and basis vector computations AdvEigAug and EigAug also require the computation of the covariance matrix based on displacement or momentum fields. We recompute this covariance matrix for every augmented sample we create to assure sufficient variability of deformation directions. Specifically, for a given training sample we randomly select $N$ images (we do not use or need their segmentations) and register the training sample to these images using one of our registration networks to obtain the displacement or momentum fields respectively. We choose the top two eigendirections ( $B = 2$ ) as our basis vectors. Clearly, using more samples results in more stable results, but it also limits the space being explored. In the extreme case one would use all available image pairs to estimate the covariance matrix which would then limit the variability. We therefore use only $N = 10$ samples to estimate these directions.
+
+Offline augmentation For almost all our experiments we employ an offline augmentation strategy. We train the segmentation network with the original training data for 700 epochs. We then create a set of adversarial examples. Specifically, we create 5 adversarial samples for each training sample. We then continue training with the augmented training set for another 1,300 epochs. To assure that training is not biased too strongly to the adversarial samples [24] we down-weight the adversarial examples by $1/5$ in the loss.
+
Online augmentation We also explore an online augmentation approach, for AdvEigAug only, to determine whether the successive introduction of adversarial samples is beneficial. As for the offline approach, we first train the segmentation network for 700 epochs. We then add 1 adversarial example per original training sample and continue training for 150 epochs. We repeat this augmentation step 4 more times and finally train for another 550 epochs. As for the offline variant, adversarial samples are down-weighted. We down-weight by $\frac{1}{k}$, where $k$ indexes the $k^{\text{th}}$ addition of adversarial samples. This assures that at all times the adversarial samples are balanced with the original training data.
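The balance argument behind the $1/k$ down-weighting can be verified with a line of arithmetic (using $n = 4$ originals as an illustrative value):

```python
# After the k-th addition there are k adversarial samples per original
# training sample, each weighted 1/k, so the total adversarial weight
# always equals the weight of the original training data.
n_orig = 4                                    # illustrative sample count
for k in range(1, 6):                         # five augmentation rounds
    total_adv_weight = k * n_orig * (1.0 / k)
    assert abs(total_adv_weight - n_orig) < 1e-9
```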
+
+Data We split our dataset into a test set with segmentations (n=100), a validation set with segmentations (n=50), as well as a training set (n=200) of which we use n=4 or n=8 as the original training set including their segmentations. This training set (excluding their segmentations) is also used to obtain deformations for Brainstorm and the deformation spaces for AdvEigAug and EigAug.
+
+# 4 Results and Discussion
+
+Tab. 1 shows Dice scores and surface distance measures between the obtained segmentations and the manual knee cartilage segmentations on the test set $(n = 100)$ . As we focus on training models with small numbers of manual segmentations we show results for training with 4 and 8 segmentations respectively. Images without segmentations were also used during training of the *Brainstorm*, *EigAug*, and *AdvEigAug* models. All results are averages over 5 random runs; standard deviations are reported in parentheses.
+
Overall Performance Our proposed L-AdvEigAug$_{\text{rand\_r}}^{\text{online}}$ approach performs best in terms of Dice scores and surface distances, demonstrating the effectiveness of using statistical deformation models to capture plausible anatomical variations in low-sample settings, in combination with a randomized deformation approach and an LDDMM-based deformation model which assures spatially well-behaved deformations. In particular, our AdvEigAug approaches outperform all competing baseline approaches we tested: no augmentation (NoAug); augmentation with random deformations (RandDeform); Brainstorm, which also uses unsegmented images to obtain deformations via registrations; and a competing adversarial approach using affine transformations only (AdvAffine). Note that our images are all already affinely aligned to an atlas; hence an improvement by AdvAffine was not expected in this setting. However, it illustrates that the more flexible deformation models of AdvEigAug are indeed beneficial.
+
Table 1. Mean Dice scores and surface distance measures in $mm$ (average surface distance (ASD), 50th, and 95th percentile) over 5 training runs. Surface distances are measured based on the binarized predictions. Standard deviations are given in parentheses. The prefix L denotes training with the LDDMM model; the prefix Adv indicates the use of adversarial samples. Dice scores were tested for statistical difference with respect to the L-AdvEigAug$^{\text{online}}$ approach (bold) using a paired t-test. At a significance level of $\alpha = 0.05$ and with 14 tests per measure, Bonferroni correction yields statistical significance for $p < \alpha / 14 \approx 0.0035$; † indicates $p < 10^{-3}$ and * indicates $p < 10^{-6}$.
+
| Experiment | Dice in % (4 samples) | ASD | 50% | 95% | Dice in % (8 samples) | ASD | 50% | 95% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NoAug | 75.2(0.31)* | 1.79 | 0.134 | 20.17 | 77.1(0.28)* | 1.17 | 0.093 | 9.14 |
| RandDeform | 76.3(0.34)* | 1.80 | 0.171 | 14.72 | 78.1(0.25)* | 0.87 | 0.072 | 4.12 |
| Brainstorm [43] | 77.7(0.20)* | 1.17 | 0.087 | 8.80 | 79.4(0.17)* | 0.71 | 0.031 | 3.17 |
| AdvAffine | 75.1(0.49)* | 1.62 | 0.102 | 13.78 | 77.8(0.37)* | 1.19 | 0.042 | 5.13 |
| EigAug$_{\text{rand\_r}}$ | 73.3(0.81)* | 1.76 | 0.352 | 9.37 | 76.2(0.13)* | 0.86 | 0.139 | 5.15 |
| EigAug$_{\text{fix\_r=1}}$ | 73.4(0.45)* | 1.45 | 0.241 | 12.04 | 77.8(0.12)* | 0.92 | 0.086 | 5.36 |
| EigAug$_{\text{fix\_r=2}}$ | 73.6(0.40)* | 1.46 | 0.217 | 10.32 | 77.7(0.10)* | 0.85 | 0.011 | 5.17 |
| EigAug$_{\text{fix\_r=3}}$ | 73.0(0.40)* | 1.44 | 0.232 | 11.05 | 77.3(0.30)* | 0.98 | 0.147 | 5.24 |
| AdvEigAug$_{\text{fix\_r=1}}$ | 75.6(0.31)* | 1.77 | 0.231 | 10.12 | 77.8(0.23)* | 0.97 | 0.074 | 6.78 |
| AdvEigAug$_{\text{fix\_r=2}}$ | 76.1(0.20)* | 1.24 | 0.128 | 9.28 | 78.7(0.21)* | 0.88 | 0.031 | 4.76 |
| AdvEigAug$_{\text{fix\_r=3}}$ | 74.9(0.15)* | 1.75 | 0.287 | 11.18 | 76.9(0.27)* | 1.22 | 0.113 | 9.99 |
| AdvEigAug$_{\text{rand\_r}}$ | 78.4(0.25)† | 0.85 | 0.091 | 7.14 | 81.0(0.15)† | 0.65 | 0.031 | 3.77 |
| AdvEigAug$_{\text{rand\_r}}^{\text{online}}$ | 78.2(0.23)† | 0.92 | 0.071 | 6.44 | 81.1(0.16)† | 0.58 | 0.025 | 3.51 |
| L-AdvEigAug$_{\text{rand\_r}}$ | 78.5(0.17) | 0.79 | 0.083 | 4.68 | 81.1(0.11) | 0.55 | 0.021 | 3.85 |
| **L-AdvEigAug$_{\text{rand\_r}}^{\text{online}}$** | 79.1(0.14) | 0.72 | 0.020 | 3.72 | 81.2(0.12) | 0.51 | 0.023 | 3.46 |
| Upper bound | 82.5(0.2) | 0.47 | 0.019 | 2.42 | | | | |
+
Adversarial Sample Effect Comparing the adversarial AdvEigAug to the corresponding non-adversarial EigAug results directly shows the impact of the adversarial samples. In general, the adversarial examples result in improved Dice scores for the 4-training-sample case for $r \in \{1,2,3\}$ and in similar Dice scores for the 8-training-sample case. Surface distances appear comparable. The exception is the randomized AdvEigAug$_{\text{rand\_r}}$ strategy, which performs uniformly better on all measures than EigAug$_{\text{rand\_r}}$ and all its fixed-$r$ variants.
+
+Randomization of Deformation Magnitude, r Tab. 1 shows that randomizing $r \sim U(1.5,3)$ improves performance over using a fixed $r$ in terms of Dice scores and surface distances. As we draw 5 augmented samples per training sample, randomization occurs for fixed $r$ only via the computation of the covariance matrices based on the randomly drawn images. The random $r$ strategy, however, introduces additional randomness through the random deformation magnitude and can therefore explore the deformation space more fully.
+
+
+Fig. 2. Augmentation with LDDMM (top row) and displacement field network (bottom row). Left to right: augmentations with $r = 1$ , $r = 2$ , $r = 3$ . LDDMM produces well behaved transformations even for large values of $r$ .
+
+Online vs Offline The online and offline variants of our approach show comparable performance, though we observe a slight improvement for the 95-th percentile surface distance for the online approach. This might be because the online approach allows improving upon earlier adversarial examples during training.
+
LDDMM vs Displacement Field Networks The LDDMM parameterization assures a well-behaved diffeomorphic transformation regardless of the chosen deformation magnitude, $r$. In contrast, the displacement field approach might create very strong deformations, even to the point of introducing foldings. Fig. 2 shows that augmented samples created via the LDDMM model indeed tend to be better behaved than those created via the displacement field model. While slight, this benefit appears to be borne out by the validation results in Tab. 1, which generally show higher Dice scores and lower surface distance measures for the LDDMM models, in particular for the lowest sample size ($n = 4$).
+
+# 5 Conclusion
+
+We introduced an end-to-end data augmentation framework guided by a statistical deformation model. To the best of our knowledge, this is the first work that studies statistical deformation modelling in conjunction with data augmentation. We proposed two variants of our method, one using a fluid- and the other a displacement-based deformation model. We showed that such statistical deformation models allow an intuitive control over deformation magnitudes and that a combination of randomizing the deformation magnitude, online training, and a fluid-based deformation model performs best for our segmentation task.
+
+Acknowledgements: Research reported in this publication was supported by the National Institutes of Health (NIH) under award numbers NIH 1R41MH118845 and NIH 1R01AR072013. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
+
+# References
+
+1. Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D.L., Erickson, B.J.: Deep learning for brain MRI segmentation: state of the art and future directions. Journal of digital imaging 30(4), 449-459 (2017)
+2. Balakrishnan, G., Zhao, A., Sabuncu, M., Guttag, J., Dalca, A.V.: VoxelMorph: A learning framework for deformable medical image registration. IEEE TMI: Transactions on Medical Imaging 38, 1788-1800 (2019)
+3. Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: VoxelMorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging 38(8), 1788-1800 (2019)
+4. Beg, M.F., Miller, M.I., Trouve, A., Younes, L.: Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International journal of computer vision 61(2), 139-157 (2005)
+5. Chaitanya, K., Karani, N., Baumgartner, C.F., Becker, A., Donati, O., Konukoglu, E.: Semi-supervised and task-driven data augmentation. In: International Conference on Information Processing in Medical Imaging. pp. 29-41. Springer (2019)
+6. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International conference on medical image computing and computer-assisted intervention. pp. 424-432. Springer (2016)
+7. Cootes, T.F., Taylor, C.J.: Statistical models of appearance for medical image analysis and computer vision. In: Medical Imaging 2001: Image Processing. vol. 4322, pp. 236-248. International Society for Optics and Photonics (2001)
+8. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models-their training and application. Computer vision and image understanding 61(1), 38-59 (1995)
+9. Dalca, A.V., Balakrishnan, G., Guttag, J., Sabuncu, M.R.: Unsupervised learning for fast probabilistic diffeomorphic registration. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 729-738. Springer (2018)
10. Eaton-Rosen, Z., Bragman, F., Ourselin, S., Cardoso, M.J.: Improving data augmentation for medical image segmentation (2018)
11. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: A rotation and a translation suffice: Fooling CNNs with simple transformations. arXiv preprint arXiv:1712.02779 (2017)
12. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672-2680 (2014)
13. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
14. Hataya, R., Zdenek, J., Yoshizoe, K., Nakayama, H.: Faster AutoAugment: Learning augmentation strategies using backpropagation. arXiv preprint arXiv:1911.06987 (2019)
15. Ho, D., Liang, E., Stoica, I., Abbeel, P., Chen, X.: Population based augmentation: Efficient learning of augmentation policy schedules. arXiv preprint arXiv:1905.05393 (2019)
16. Holden, M.: A review of geometric transformations for nonrigid body registration. IEEE transactions on medical imaging 27(1), 111-128 (2007)
+
+17. Iglesias, J.E., Sabuncu, M.R.: Multi-atlas segmentation of biomedical images: a survey. Medical image analysis 24(1), 205-219 (2015)
+18. Jendele, L., Skopek, O., Becker, A.S., Konukoglu, E.: Adversarial augmentation for enhancing classification of mammography images. arXiv preprint arXiv:1902.07762 (2019)
+19. Kamnitsas, K., Ferrante, E., Parisot, S., Ledig, C., Nori, A.V., Criminisi, A., Rueckert, D., Glocker, B.: DeepMedic for brain tumor segmentation. In: International workshop on Brainlesion: Glioma, multiple sclerosis, stroke and traumatic brain injuries. pp. 138-149. Springer (2016)
+20. Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., Glocker, B.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical image analysis 36, 61-78 (2017)
+21. Kanbak, C., Moosavi-Dezfooli, S.M., Frossard, P.: Geometric robustness of deep networks: analysis and improvement. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4441-4449 (2018)
+22. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+23. Moeskops, P., Viergever, M.A., Mendrik, A.M., De Vries, L.S., Benders, M.J., Isgum, I.: Automatic segmentation of MR brain images with a convolutional neural network. IEEE transactions on medical imaging 35(5), 1252-1261 (2016)
+24. Paschali, M., Simson, W., Roy, A.G., Naeem, M.F., Göbl, R., Wachinger, C., Navab, N.: Data augmentation with manifold exploring geometric transformations for increased performance and robustness. arXiv preprint arXiv:1901.04420 (2019)
+25. Pereira, S., Pinto, A., Alves, V., Silva, C.A.: Brain tumor segmentation using convolutional neural networks in MRI images. IEEE transactions on medical imaging 35(5), 1240-1251 (2016)
+26. Risser, L., Vialard, F.X., Wolz, R., Murgasova, M., Holm, D.D., Ruekert, D.: Simultaneous multi-scale registration using large deformation diffeomorphic metric mapping. IEEE transactions on medical imaging 30(10), 1746-1759 (2011)
+27. Roth, H.R., Lu, L., Farag, A., Shin, H.C., Liu, J., Turkbey, E.B., Summers, R.M.: Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In: International conference on medical image computing and computer-assisted intervention. pp. 556-564. Springer (2015)
+28. Rueckert, D., Frangi, A.F., Schnabel, J.A.: Automatic construction of 3-D statistical deformation models of the brain using nonrigid registration. IEEE transactions on medical imaging 22(8), 1014-1025 (2003)
+29. Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L., Leach, M.O., Hawkes, D.J.: Non-rigid registration using free-form deformations: application to breast MR images. IEEE transactions on medical imaging 18(8), 712-721 (1999)
+30. Shaham, U., Yamada, Y., Negahban, S.: Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing 307, 195-204 (2018)
+31. Shen, Z., Han, X., Xu, Z., Niethammer, M.: Networks for joint affine and nonparametric image registration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4224-4233 (2019)
+32. Shen, Z., Vialard, F.X., Niethammer, M.: Region-specific diffeomorphic metric mapping. arXiv preprint arXiv:1906.00139 (2019)
+33. Shin, H.C., Tenenholtz, N.A., Rogers, J.K., Schwarz, C.G., Senjem, M.L., Gunter, J.L., Andriole, K.P., Michalski, M.: Medical image synthesis for data augmentation
+
+and anonymization using generative adversarial networks. In: International workshop on simulation and synthesis in medical imaging. pp. 1-11. Springer (2018)
+34. Sotiras, A., Davatzikos, C., Paragios, N.: Deformable medical image registration: A survey. IEEE transactions on medical imaging 32(7), 1153-1190 (2013)
+35. Tajbakhsh, N., Jeyaseelan, L., Li, Q., Chiang, J., Wu, Z., Ding, X.: Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. arXiv preprint arXiv:1908.10454 (2019)
+36. Vialard, F.X., Risser, L., Rueckert, D., Cotter, C.J.: Diffeomorphic 3D image registration via geodesic shooting using an efficient adjoint calculation. International Journal of Computer Vision 97(2), 229-241 (2012)
+37. Xiao, C., Zhu, J.Y., Li, B., He, W., Liu, M., Song, D.: Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612 (2018)
+38. Xu, Z., Niethammer, M.: Deepatlas: Joint semi-supervised learning of image registration and segmentation. arXiv preprint arXiv:1904.08465 (2019)
+39. Xu, Z., Shen, Z., Niethammer, M.: Contextual additive networks to efficiently boost 3D image segmentations. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 92-100. Springer (2018)
+40. Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: Fast predictive image registration-a deep learning approach. NeuroImage 158, 378-396 (2017)
+41. Younes, L., Arrate, F., Miller, M.I.: Evolutions equations in computational anatomy. NeuroImage 45(1), S40-S50 (2009)
+42. Zhang, X., Wang, Q., Zhang, J., Zhong, Z.: Adversarial AutoAugment. arXiv preprint arXiv:1912.11188 (2019)
+43. Zhao, A., Balakrishnan, G., Durand, F., Guttag, J.V., Dalca, A.V.: Data augmentation using learned transformations for one-shot medical image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8543-8553 (2019)
+44. Zhou, X.Y., Guo, Y., Shen, M., Yang, G.Z.: Artificial intelligence in surgery. arXiv preprint arXiv:2001.00627 (2019)
\ No newline at end of file
diff --git a/adversarialdataaugmentationviadeformationstatistics/images.zip b/adversarialdataaugmentationviadeformationstatistics/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..117762202be28b08eb7522d3386debe9a3a10aba
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f811b0b91af8179588d11f6d360cad728d54e47e90d2144584c7ec563f470ef
+size 333835
diff --git a/adversarialdataaugmentationviadeformationstatistics/layout.json b/adversarialdataaugmentationviadeformationstatistics/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5d5e986048c1d42570e496f673f0abc6898788f7
--- /dev/null
+++ b/adversarialdataaugmentationviadeformationstatistics/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e030c0dd83ddb27b0813dcdb5dbfd682ae5ae8935ae9fd4738248b56dcec62b
+size 443739
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_content_list.json b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f801fbfba0170d973a8bab1bd62940a35b17fafd
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1812f368bbb374d2e0aafa7372c41603ecec4d34babd4ae68726d60c98601a5
+size 76608
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_model.json b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c6cc676f507051b8cda0946815f512d57719423c
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a275f98cba4a5757a5e3efad087a022b144ee818ab9e43b4abe2eff2e91e1d0
+size 92311
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_origin.pdf b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ace0da85dd39cd9ab0dc7a946e4c3c2858206530
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/1f1841aa-fb41-4a76-ab50-9ad4a47ef61d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c7c164b6e855f414ea451ab3fa224dec22db1123ced29b828546a15ed798a7e
+size 758404
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/full.md b/adversarialgenerativegrammarsforhumanactivityprediction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a215804d9a70ae5dde749fa94f58ec9cb10373c9
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/full.md
@@ -0,0 +1,286 @@
+# Adversarial Generative Grammars for Human Activity Prediction
+
+AJ Piergiovanni $^{1}$ , Anelia Angelova $^{1}$ , Alexander Toshev $^{1}$ , and Michael S. Ryoo $^{1,2}$
+
+$^{1}$ Robotics at Google
+
+$^{2}$ Stony Brook University
+
+{ajpiergi,anelia,toshev,mryoo}@google.com
+
+Abstract. In this paper we propose an adversarial generative grammar model for future prediction. The objective is to learn a model that explicitly captures temporal dependencies, providing a capability to forecast multiple, distinct future activities. Our adversarial grammar is designed so that it can learn stochastic production rules from the data distribution, jointly with its latent non-terminal representations. Being able to select multiple production rules during inference leads to different predicted outcomes, thus efficiently modeling many plausible futures. The adversarial generative grammar is evaluated on the Charades, MultiTHUMOS, Human3.6M, and 50 Salads datasets on two prediction tasks: future 3D human pose prediction and future activity prediction. The proposed adversarial grammar outperforms state-of-the-art approaches, predicting more accurately and further into the future than prior work. Code will be open sourced.
+
+# 1 Introduction
+
+Future prediction in videos is one of the most challenging visual tasks. Accurately predicting future activities or human pose has many important applications, e.g., in video analytics and robot action planning. Prediction is particularly hard because it is not a deterministic process as multiple potential 'futures' are possible, especially for predicting real-valued output vectors with non-unimodal distribution. Given these challenges, we address the important question of how the sequential dependencies in the data should be modeled and how multiple possible long-term future outcomes can be predicted at any given time.
+
+
+
+Fig. 1: The Adversarial Generative Grammar predicts future activities in videos and can generate many other plausible ones.
+
+We propose an Adversarial Generative Grammar (AGG) model for future prediction. The model is a differentiable form of a regular grammar trained with adversarial sampling of various possible futures, which is able to output real-valued predictions (e.g., 3D human pose) or semantic prediction (e.g., activity classes). Learning sequences of actions or other sequential processes with the production rules of a grammar is valuable, as it imposes temporal structural dependencies and captures relationships between latent states. Each (learned) production rule of a grammar model is able to take a state representation and transition to a different future state. Using multiple rules allows the model to capture multiple branching possibilities (Figure 1). This capability makes the grammar learning unique, different from previous sequential models including many recurrent neural network (RNN) models.
+
+The main technical contribution of this work is the introduction of an adversarial learning approach for differentiable grammar models. This is essential, as the adversarial process allows the grammar model to produce multiple candidate future sequences that follow a distribution similar to that of sequences seen in the data. A brute-force implementation of differentiable grammar learning would need to enumerate all possible rules and generate multiple sequence branches (exponential growth in time) to consider multiple futures. Our adversarial stochastic sampling process allows for much more memory- and computationally-efficient learning without such enumeration. Additionally, unlike other techniques for future generation (e.g., autoregressive RNNs), we show the adversarial grammar is able to learn longer sequences, handle multi-label settings, and predict much further into the future.
+
+To our knowledge, AGG is the first approach to adversarial grammar learning. It enables qualitatively and quantitatively better solutions - ones able to successfully produce multiple feasible long-term future predictions for real-valued outputs. The proposed approach is driven entirely by the structure imposed from learning grammar rules and by adversarial losses - i.e., no direct supervised loss is used for training the grammar model.
+
+The proposed approach is evaluated on different future activity prediction tasks: (i) on future action prediction - multi-class classification and multi-class multi-label problems and (ii) on 3D human pose prediction, which predicts the 3D joint positions of the human body in the future. The proposed method is tested on four challenging datasets: Charades, MultiTHUMOS, 50 Salads, and Human3.6M. It outperforms previous state-of-the-art methods, including RNNs, LSTMs, GRUs, grammar and memory based methods.
+
+# 2 Related work
+
+Grammar models for visual data. The notion of grammars in computational science was introduced by [4] for the description of language, and has found widespread use in natural language understanding. In the domain of visual data, grammars are used to parse images of scenes [39,38,13]. In their position paper, [39] present a comprehensive grammar-based language to describe images, and propose MCMC-based inference. More recently, a recursive neural net based approach was applied to parse scenes by [29]. However, these previous works either use a traditional symbolic grammar formulation or use a neural network without an explicit representation of grammar. In the context of temporal visual data, grammars have been applied to activity recognition and parsing [23,27,32,24] but not to prediction or generation. [25] used a traditional stochastic grammar to predict activities, but only within 3 seconds.
+
+Generative models for sequences. Generative Adversarial Networks (GANs) are a very powerful mechanism for data generation by an underlying learning of the data distribution through adversarial sampling [12]. GANs have been very popular for image generation tasks [6,16,33,2]. Prior work on using GANs for improved sequences generation [37,8,14] has also been successful. Fraccaro et al. [10] proposed a stochastic RNN which enables generation of different sequences from a given state. However, to our knowledge, no prior work explored end-to-end adversarial training of formal grammar as we do. Qi et al. [26] showed a grammar could be used for future prediction, and our work builds on this by learning the grammar structure differentiably from data.
+
+Differentiable rule learning. Previous approaches that address differentiable rule or grammar learning [34] are the most closely aligned with our work. Unlike that prior work, we are able to handle larger branching factors and demonstrate successful results in real-valued output spaces, benefiting from adversarial learning.
+
+Future pose prediction. Previous approaches for human pose prediction [11,15,31] are relatively scarce. The dominant theme is the use of recurrent models (RNNs or GRUs/LSTMs) [11,22]. Tang et al. [31] use attention models specifically to target long-term predictions, up to 1 second in the future. Jain et al. [17] propose a structural RNN which learns the spatio-temporal relationship of pose joints. The above models, contrary to ours, cannot produce multiple futures, making them limited for long-term anticipation. These results are only within short-term horizons and the produced sequences often 'interpolate' actual data examples. Although our approach is more generic and is not limited to just pose forecasting, we show that it is able to perform successfully too on this task, outperforming others.
+
+Video Prediction. Our approach is also related to the video prediction literature [9,5,1,20], but a more in-depth survey is beyond the scope of this work.
+
+# 3 Approach
+
+We first introduce a differentiable form of a formal grammar, where its production rules are implemented with fully-differentiable functions to be applied to non-terminals and terminals represented with latent vectors (Section 3.3). Unlike traditional grammar induction with symbolic representations, our approach
+
+
+Fig.2: Overview of the adversarial grammar model. The initial non-terminal is produced by an encoder based on the input video. The grammar then generates multiple possible sequences from the non-terminal. The generated and real sequences are used to train the adversarial discriminator, evaluating whether the generated sequences match the distribution of real sequences.
+
+allows joint learning of latent representations and differentiable functions with the standard back-propagation. Next, we present the adversarial grammar learning approach that actually enables training of such functions and representations without spending an exponential amount of memory and computation (Sec. 3.4). Our adversarial grammar is trained to generate multiple candidate future sequences. This enables robust future prediction, which, more importantly, can easily generate multiple realistic futures.
+
+We note that the proposed approach, based on stochastic sequence learning, is driven entirely by the adversarial losses which help model the data distribution over long sequences. That is, while direct supervised losses can be used, we implement our approach with adversarial losses only, which learn the underlying distribution. All experiments below demonstrate the success of this approach, despite being more challenging.
+
+# 3.1 Preliminaries
+
+A formal regular grammar is represented as the tuple $(\mathcal{N},\mathcal{T},\mathcal{P},N_0)$ where $\mathcal{N}$ is a finite non-empty set of non-terminals, $\mathcal{T}$ is a finite set of terminals (or output symbols, e.g., here actions), $\mathcal{P}$ is a set of production rules, and $N_0$ is the starting non-terminal symbol, $N_0\in \mathcal{N}$. Production rules in a regular grammar are of the form $A\to aB$, $A\to b$, and $A\to \epsilon$, where $A,B\in \mathcal{N}$, $a,b\in \mathcal{T}$, and $\epsilon$ is the empty string. Autoregressively applying production rules to the non-terminal generates a sequence of terminals. Note that we only implement rules of the form $A\rightarrow aB$ in our grammar, allowing it to generate sequences of unbounded length, and we represent each non-terminal $N$ as a real-valued vector.
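To make the rule form concrete, a purely symbolic regular grammar with rules of the form $A \to aB$ can be unrolled autoregressively. This is a toy sketch of ours (the rule set and action names are invented), not the paper's learned, differentiable version:

```python
# Toy symbolic regular grammar (our illustration; rules and action names are
# invented, not learned). Every rule has the form A -> a B, so expansion never
# terminates on its own; we unroll for a fixed number of steps, just as the
# paper's model generates arbitrarily long sequences.
RULES = {
    "W": [("walking", "W"), ("running", "V")],
    "V": [("running", "V"), ("stopping", "U")],
    "U": [("stopping", "U")],
}

def unroll(start, steps, pick=lambda options: options[0]):
    """Apply one production rule per step; `pick` chooses among candidates."""
    terminals, nonterminal = [], start
    for _ in range(steps):
        terminal, nonterminal = pick(RULES[nonterminal])
        terminals.append(terminal)
    return terminals

# Greedily taking the first rule keeps re-applying W -> walking W:
assert unroll("W", 4) == ["walking", "walking", "walking", "walking"]
```

Replacing the deterministic `pick` with a stochastic choice over learned rule probabilities is what distinguishes the differentiable grammar described in the following sections.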
+
+Our objective is to learn such non-terminals $\mathcal{N}$ and terminals $\mathcal{T}$ as latent vector representations directly from training data, and model the production
+
+rules $\mathcal{P}$ as a (differentiable) generative neural network function. That is, the goal is to learn a nonlinear function $G$ that maps a non-terminal to a set of (non-terminal, terminal) pairs; here $G$ is a neural network with learnable parameters.
+
+$$
+G: \mathcal {N} \rightarrow \left\{\left(\mathcal {N}, \mathcal {T}\right)\right\} \tag {1}
+$$
+
+Note that this is a mapping from a single non-terminal to multiple (non-terminal, terminal) pairs. The selection of different rules enables modeling of multiple different sequences, generating different future outcomes, unlike existing deterministic models (e.g., RNNs).
+
+The learned production rules allow modeling of the transitions between continuous events in time, for example 3D human pose or activities, which can naturally spawn into many possible futures at different points similarly to switching between rules in a grammar. For example, an activity corresponding to 'walking' can turn into 'running' or 'stopping' or continuing the 'walking' behaviour.
+
+More formally, for any latent non-terminal $N \in \mathcal{N}$ , the grammar production rules are generated by applying the function $G$ (a sequence of fully connected layers), to $N$ as:
+
+$$
+G (N) = \left\{\left(N _ {i}, t _ {i}\right) \right\} _ {i = 1: K}, \tag {2}
+$$
+
+where each pair corresponds to a particular production rule for this non-terminal:
+
+$$
+\begin{aligned}
+N &\rightarrow t_1 N_1 \\
+N &\rightarrow t_2 N_2 \\
+&\;\;\vdots \\
+N &\rightarrow t_K N_K,
+\end{aligned} \tag{3}
+$$
+
+where $N_1, N_2, \dots, N_K \in \mathcal{N}$ and $t_1, t_2, \dots, t_K \in \mathcal{T}$, for $K$ rules.
+
+This function is applied recursively to obtain a number of output sequences, similar to prior recurrent methods (e.g., RNNs such as LSTMs and GRUs). However, in RNNs, the learned state is required to abstract multiple potential possibilities into a single representation, as the mapping from the state representation to the next representation is deterministic. As a result, when learning from sequential data with multiple possibilities, standard RNNs tend to learn states as a mixture of multiple sequences instead of learning more discriminative states. By learning explicit production rules, our states lead to more salient and distinct predictions which can be exploited for learning long-term, complex output tasks with multiple possibilities, as shown later in the paper.
+
+# 3.2 Learning the starting non-terminal
+
+Given an initial input data sequence (e.g., a short video or pose sequences), we learn to generate its corresponding starting non-terminal $N_0$ (i.e., root node). This is used as input to $G$ so as to generate a sequence of terminal symbols starting from the given non-terminal. Concretely, given an input sequence $X$ , a function $s$ (a CNN) is learned which gives the predicted starting non-terminal:
+
+$$
+N _ {0} = s (X). \tag {4}
+$$
+
+Notice that the function $s(X)$ serves as a jointly-learned blackbox parser that is able to estimate the non-terminal corresponding to the current state of the model, allowing future sequence generation to start from such non-terminal.
+
+# 3.3 Grammar learning
+
+Given a starting non-terminal, the function $G$ is applied recursively to obtain the possible sequences where $j$ is an index in the sequence and $i$ is one of the possible rules:
+
+$$
+\begin{cases} G\left(N_0\right) = \left\{\left(N_i^1, t_i^1\right)\right\}_i, & j = 0 \\ G\left(N^j\right) = \left\{\left(N_i^{j+1}, t_i^{j+1}\right)\right\}_i, & \text{for } j > 0 \end{cases} \tag{5}
+$$
+
+For example, suppose $W$ is the non-terminal that encodes the activity for 'walking' sequences. Let walking denote the terminal of a grammar. An output of the rule $W \rightarrow \text{walking}W$ will be able to generate a sequence of continual 'walking' behavior. Additional rules, e.g., $W \rightarrow \text{stopping}U$ , $W \rightarrow \text{running}V$ , can be learned, allowing for the activity to switch to 'stopping' or 'running' (with the non-terminals $U, V$ respectively learning to generate their corresponding potential futures, e.g. 'sitting down', or 'falling'). Clearly, for real valued outputs, such as 3D human pose, the number and dimensionality of the non-terminals required will be larger. We also note that the non-terminals act as a form of memory, capturing the current state with the Markov property.
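A minimal sketch of this branching behaviour (our toy rules and probabilities, not values learned by the model): sampling a production rule at each step spawns distinct futures from the same non-terminal.

```python
import random

# Sketch (ours): stochastic rule selection over W -> walking W | stopping U |
# running V, with made-up probabilities. Each entry is (prob, terminal, next).
STOCHASTIC_RULES = {
    "W": [(0.6, "walking", "W"), (0.2, "stopping", "U"), (0.2, "running", "V")],
    "U": [(0.7, "sitting down", "U"), (0.3, "falling", "U")],
    "V": [(0.8, "running", "V"), (0.2, "stopping", "U")],
}

def sample_future(start, steps, rng):
    """Roll out one future by sampling a production rule at every step."""
    sequence, nonterminal = [], start
    for _ in range(steps):
        rules = STOCHASTIC_RULES[nonterminal]
        probs = [p for p, _, _ in rules]
        _, terminal, nonterminal = rng.choices(rules, weights=probs)[0]
        sequence.append(terminal)
    return sequence

# Re-sampling from the same starting non-terminal yields several distinct,
# individually plausible futures.
rng = random.Random(0)
futures = {tuple(sample_future("W", 5, rng)) for _ in range(20)}
```

In the actual model the rule table is not symbolic: the candidate rules and their probabilities are produced by the learned function $G$, as described next.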
+
+To accomplish the above task, $G$ (in Eq. 2) has a special structure. $G$ takes an input $N \in \mathcal{N}$ and, using several nonlinear transformations (e.g., fully connected layers with activation functions), maps $N$ to a vector $r$ corresponding to a set of rules: $r = f_{R}(N)$. Here, $r$ is a vector of size $|\mathcal{P}|$ whose elements specify the probability of each rule given the input non-terminal. We learn $|\mathcal{P}|$ rules which are shared globally, but only a (learned) subset is selected for each non-terminal, as the other rule probabilities are zero. This is conceptually similar to using memory with recurrent neural network methods [36]; the main difference is that the rule vectors are used to build grammar-like rule structures, which are more advantageous for explicitly modeling temporal dependencies.
+
+In order to generate multiple outputs, the candidate rules, $r$ are followed by the Gumbel-Softmax function [18,21], which allows for stochastic selection of a rule. This function is differentiable and samples a single rule from the candidate rules based on the learned rule probabilities. The probabilities are learned to model the likelihood of each generated sequence, and this formulation allows the 'branching' of sequence predictions as the outcome of the Gumbel-Softmax function differs every time, following the probability distribution.
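A minimal, dependency-free sketch of Gumbel-Softmax sampling (the paper's actual implementation, temperatures, and layer sizes may differ): perturb each rule score with Gumbel noise, then soften the argmax with a temperature-controlled softmax.

```python
import math, random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Stochastic rule selection: add Gumbel(0,1) noise to each rule score,
    then apply a temperature-tau softmax. Small tau gives a near one-hot,
    yet differentiable-in-principle, selection."""
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    top = max(noisy)  # subtract the max for numerical stability
    exps = [math.exp((n - top) / tau) for n in noisy]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical rule scores f_R(N) for three candidate rules; repeated calls
# pick different rules while following the implied probabilities.
weights = gumbel_softmax([2.0, 0.5, 0.1], tau=0.5, rng=random.Random(1))
assert abs(sum(weights) - 1.0) < 1e-9
```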
+
+For each given rule $r$ , two nonlinear functions $f_{T}(r)$ and $f_{N}(r)$ are then learned, so that they output the resulting terminal and non-terminal for the rule $r$ : $N_{new} = f_{N}(r)$ , $t_{new} = f_{T}(r)$ . These functions are both implemented as a sequence of fully-connected layers followed by a non-linear activation function (e.g., softmax or sigmoid depending on the task). The schematic of $G$ is visualized in Figure 2, and more details on the functions are provided in the later sections.
+
+The non-terminals and terminals are modeled as sets of high dimensional vectors with pre-specified size and are learned jointly with the rules (all are tunable parameters and naturally more complex datasets require larger capacity).
+
+For example, for a $C$ -class classification problem, the terminals are represented as $C$ -dimensional vectors matching the one-hot encoding for each class.
+
+Difference to stochastic RNNs. Standard recurrent models have a deterministic state given some input, while the grammar is able to generate multiple potential next non-terminals (i.e., states). This is particularly important for multi-modal state distributions. Stochastic RNNs (e.g., [10]) address this by allowing the next state to be stochastically generated, but this is difficult to control, as the next state now depends on a random value. In the grammar model, the next non-terminal is sampled randomly, but from a set of fixed candidates while following the learned probability distribution. By maintaining a set of candidates, the next state can be selected randomly or by some other method (e.g., greedily taking most probable, beam search, etc.), giving more control over the generated sequences.
+
+# 3.4 Adversarial grammar learning
+
+The function $G$ generates a set of (non-terminal, terminal) pairs and is applied recursively to the resulting non-terminals, yielding new production rules and the next sets of (non-terminal, terminal) pairs. Note that in most cases each rule generates a different non-terminal, so sampling $G$ many times leads to a variety of generated sequences. As a result, an exponential number of sequences would need to be generated during training to cover the possible sequences, and enumerating all of them is computationally prohibitive beyond a branching factor of $k = 2$. This restricts the tasks that can be addressed to ones with lower-dimensional outputs because of memory limits. When $k = 1$, i.e., when there is no branching, we have an RNN-like model, unable to generate multiple possible future sequences (we also test this in the ablation experiments below).
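The cost of enumeration is easy to see with back-of-the-envelope arithmetic (ours; the horizon length $L = 30$ is an assumed example, and $K$ plays the role of the branching factor $k$ above):

```python
# Enumerating every branch of a grammar with K rules per non-terminal over an
# L-step horizon materializes K**L candidate sequences.
L = 30  # e.g. a 30-step prediction horizon (our assumption)
for K in (2, 3, 8):
    print(f"K={K}: {K**L:.2e} candidate sequences")
# Even K=2 already yields over a billion sequences at L=30, so training-time
# enumeration is only feasible for tiny branching factors.
```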
+
+Stochastic Adversarial Sampling. We address this problem by using stochastic adversarial rule sampling. Given the non-terminals, which effectively contain a number of potential 'futures', we use an adversarial-based sampling, similar to GAN approaches [12], which learns to sample the most likely rules for the given input (Figure 2). The use of a discriminator network allows the model to generate realistic sequences that may not exactly match the ground truth (but are still realistic) without being penalized.
+
+Generator: We use the function $G$ , which is the function modeling the learned grammar described above, as the generator function.
+
+Discriminator: We build an additional discriminator function $D$. Following standard GAN training, the discriminator returns a binary prediction that discriminates examples from the data distribution vs. generated ones. Note that the adversarial process is designed to ultimately generate terminals, i.e., the final output sequence for the model. $D$ is defined as:
+
+$$
+p = D (t, n) \tag {6}
+$$
+
+where $t = t_0t_1t_2\ldots t_L$ is the input sequence of terminals, $n = N_0N_1N_2\ldots N_L$ is the sequence of non-terminals ($L$ is the length of the sequence), and $p \in [0,1]$ reflects whether the input sequence of terminals comes from the data distribution or not. Note that our discriminator is also conditioned on the non-terminal sequence $n$, thus the distribution of non-terminals is learned implicitly as well.
+
+The discriminator function $D$ is implemented as follows: given an input sequence of non-terminals and terminals, we apply several 1D convolutional layers to the terminals and non-terminals, then concatenate their representations followed by a fully-connected layer to produce the binary prediction (see the supp. material).
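A minimal sketch of such a discriminator (our simplification with toy shapes and random parameters; the paper's exact layer sizes are in its supplementary material): 1D convolutions over each stream, concatenation, then a dense layer with a sigmoid producing $p \in (0, 1)$.

```python
import math, random

rng = random.Random(0)

def conv1d(seq, kernel):
    """Valid 1D convolution: slide a (width x dim) kernel over a sequence of
    feature vectors, emitting one scalar per position."""
    width = len(kernel)
    return [sum(kernel[i][j] * seq[t + i][j]
                for i in range(width) for j in range(len(seq[0])))
            for t in range(len(seq) - width + 1)]

def discriminator(terminals, nonterminals, params):
    """p = D(t, n): conv features of both streams, concatenated, then a dense
    layer with a sigmoid, as in Eq. 6."""
    k_t, k_n, w, b = params
    feats = conv1d(terminals, k_t) + conv1d(nonterminals, k_n)  # concatenate
    score = sum(wi * f for wi, f in zip(w, feats)) + b
    return 1.0 / (1.0 + math.exp(-score))

# Toy shapes (our choice): length-5 sequences of 4-d terminal and 8-d
# non-terminal embeddings, kernel width 3, random parameters.
rand_mat = lambda rows, cols: [[rng.uniform(-1, 1) for _ in range(cols)]
                               for _ in range(rows)]
terminals = rand_mat(5, 4)
nonterminals = rand_mat(5, 8)
params = (rand_mat(3, 4), rand_mat(3, 8),
          [rng.uniform(-1, 1) for _ in range(6)], 0.0)
p = discriminator(terminals, nonterminals, params)
assert 0.0 < p < 1.0
```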
+
+Adversarial Generative Grammar (AGG). The discriminator and generator (grammar) functions are trained to work jointly, generating sequences which match the data distribution. The optimization objective is defined as:
+
+$$
+\min_G \max_D \; E_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + E_{z \sim s(X)}\left[\log\left(1 - D(G(z))\right)\right] \tag{7}
+$$
+
+where $p_{data}(x)$ is the real data distribution and $G(z)$ is the generated sequence from an initial state based on a sequence of frames $X$. That is, the first part of the loss works on sequences of actions or human pose, whereas the second works over generated sequences ($s(X)$ is the video embedding, i.e., the starting non-terminal).
+
+Alternatively, the sequences generated by $G$ could be compared to the ground truth to compute a loss during training (e.g., maximum likelihood estimation), however, doing so requires enumerating many possibilities in order to learn multiple distinct possible sequences. Without such enumeration, the model converges to a mixture representing possible sequences from the data distribution. By using the adversarial training of $G$ , our model is able to generate sequences that match the distribution observed in the dataset. This allows for computationally feasible learning of longer, higher-dimensional sequences.
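The roles of the two players in Eq. 7 can be illustrated with the standard GAN losses (a generic sketch with toy probabilities of ours, not the paper's training code):

```python
import math

def d_loss(p_real, p_fake):
    """Discriminator ascends Eq. 7: maximize log D(x) + log(1 - D(G(z))),
    i.e. minimize the negated sum."""
    return -(math.log(p_real) + math.log(1.0 - p_fake))

def g_loss(p_fake):
    """Generator (the grammar G) descends Eq. 7: minimize log(1 - D(G(z)))."""
    return math.log(1.0 - p_fake)

# A confident discriminator (real -> 0.9, fake -> 0.1) has a lower loss than a
# confused one; as the grammar improves and D(G(z)) rises, its loss decreases.
assert d_loss(0.9, 0.1) < d_loss(0.6, 0.4)
assert g_loss(0.4) < g_loss(0.1)
```

In practice the two losses are minimized in alternation, with the grammar's stochastic rule sampling providing the generated sequences.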
+
+Architecture details. The functions $G$, $f_{N}$, $f_{T}$, and $f_{R}$ are implemented as networks using several fully-connected layers. The detailed architectures depend on the task and dataset, and we provide them in the supplemental material. For pose forecasting, the function $s$ is implemented as a two-layer GRU module [3] followed by a 1x1 convolutional layer with $D_{N}$ outputs to produce the starting non-terminal. For activity prediction, $s$ is implemented as two sequential temporal 1D convolutional layers which produce the starting non-terminal.
+
+# 4 Experiments
+
+We conduct experiments on two sets of problems for future prediction: future 3D human pose forecasting and future activity prediction. The experiments are done
+
+on four public datasets and demonstrate strong performance of the proposed approach over the state-of-the-art and the ability to produce multiple future outcomes, to handle multi-label datasets, and to predict further in the future than prior work.
+
+Table 1: Evaluation of future pose for specific activity classes. Results are Mean Angle Error (lower is better). Human3.6M dataset.
+
+| Walking | 80ms | 160ms | 320ms | 400ms | 560ms | 640ms | 720ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ERD [11] | 0.77 | 0.90 | 1.12 | 1.25 | 1.44 | 1.45 | 1.46 | 1.44 |
+| LSTM-3LR [11] | 0.73 | 0.81 | 1.05 | 1.18 | 1.34 | 1.36 | 1.37 | 1.36 |
+| Res-GRU [22] | 0.27 | 0.47 | 0.68 | 0.76 | 0.90 | 0.94 | 0.99 | 1.06 |
+| Zero-velocity [22] | 0.39 | 0.68 | 0.99 | 1.15 | 1.35 | 1.37 | 1.37 | 1.32 |
+| MHU [31] | 0.32 | 0.53 | 0.69 | 0.77 | 0.90 | 0.94 | 0.97 | 1.06 |
+| Ours | 0.25 | 0.43 | 0.65 | 0.75 | 0.79 | 0.85 | 0.92 | 0.96 |
+
+| Greeting | 80ms | 160ms | 320ms | 400ms | 560ms | 640ms | 720ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ERD [11] | 0.85 | 1.09 | 1.45 | 1.64 | 1.93 | 1.89 | 1.92 | 1.98 |
+| LSTM-3LR [11] | 0.80 | 0.99 | 1.37 | 1.54 | 1.81 | 1.76 | 1.79 | 1.85 |
+| Res-GRU [22] | 0.52 | 0.86 | 1.30 | 1.47 | 1.78 | 1.75 | 1.82 | 1.96 |
+| Zero-velocity [22] | 0.54 | 0.89 | 1.30 | 1.49 | 1.79 | 1.74 | 1.77 | 1.80 |
+| MHU [31] | 0.54 | 0.87 | 1.27 | 1.45 | 1.75 | 1.71 | 1.74 | 1.87 |
+| Ours | 0.52 | 0.86 | 1.26 | 1.45 | 1.58 | 1.69 | 1.72 | 1.79 |
+
+| Taking photo | 80ms | 160ms | 320ms | 400ms | 560ms | 640ms | 720ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ERD [11] | 0.70 | 0.78 | 0.97 | 1.09 | 1.20 | 1.23 | 1.27 | 1.37 |
+| LSTM-3LR [11] | 0.63 | 0.64 | 0.86 | 0.98 | 1.09 | 1.13 | 1.17 | 1.30 |
+| Res-GRU [22] | 0.29 | 0.58 | 0.90 | 1.04 | 1.17 | 1.23 | 1.29 | 1.47 |
+| Zero-velocity [22] | 0.25 | 0.51 | 0.79 | 0.92 | 1.03 | 1.06 | 1.13 | 1.27 |
+| MHU [31] | 0.27 | 0.54 | 0.84 | 0.96 | 1.04 | 1.08 | 1.14 | 1.35 |
+| Ours | 0.24 | 0.50 | 0.76 | 0.89 | 0.95 | 1.08 | 1.15 | 1.24 |
+
+# 4.1 Datasets
+
+MultiTHUMOS: The MultiTHUMOS dataset [35] is a well-established video understanding dataset for multi-class activity prediction. It contains 400 videos spanning about 30 hours of video and 65 action classes.
+
+Charades: Charades [28] is a challenging video dataset containing longer-duration activities recorded in home environments. Charades is a multi-class multi-label dataset in which multiple activities are often co-occurring. We use it to demonstrate the ability of the model to handle complex data. It contains 9858 videos of 157 action classes.
+
+Human3.6M: The Human 3.6M dataset [15] is a popular benchmark for future pose prediction. It has 3.6 million 3D human poses of 15 activities. The goal is to predict the future 3D locations of 32 joints in the human body.
+
+50 Salads: 50 Salads [30] is a video dataset of 50 salad preparation sequences (518,411 frames total) with an average length of 6.4 minutes per video. It has been used recently for future activity prediction [19,7], making it suitable for the evaluation of our method.
+
+Table 2: Evaluation of future pose for short-term and long-term prediction horizons. Measured with Mean Angle Error (lower is better) on Human3.6M. No predictions beyond 1 second are available for prior work.
+
+| Method | 80ms | 160ms | 320ms | 560ms | 640ms | 720ms | 1s | 2s | 3s | 4s |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ERD [11] | 0.93 | 1.07 | 1.31 | 1.58 | 1.64 | 1.70 | 1.95 | - | - | - |
+| LSTM-3LR [11] | 0.87 | 0.93 | 1.19 | 1.49 | 1.55 | 1.62 | 1.89 | - | - | - |
+| Res-GRU [22] | 0.40 | 0.72 | 1.09 | 1.45 | 1.52 | 1.59 | 1.89 | - | - | - |
+| Zero-vel. [22] | 0.40 | 0.71 | 1.07 | 1.42 | 1.50 | 1.57 | 1.85 | - | - | - |
+| MHU-MSE [31] | 0.39 | 0.69 | 1.04 | 1.40 | 1.49 | 1.57 | 1.89 | - | - | - |
+| MHU [31] | 0.39 | 0.68 | 1.01 | 1.34 | 1.42 | 1.49 | 1.80 | - | - | - |
+| AGG (Ours) | 0.36 | 0.65 | 0.98 | 1.27 | 1.40 | 1.49 | 1.74 | 2.25 | 2.70 | 2.98 |
+
+# 4.2 Human Pose Forecasting
+
+We first evaluate the approach on forecasting 3D human pose, a real-valued structured-output problem. This is a challenging task [17,11] of high importance, e.g., for motion planning in robotics. It also showcases the use of the Adversarial Grammar, as using the standard grammar is not feasible due to memory and computation constraints on this real-valued dataset.
+
+
+Fig. 3: Example results for 3D pose predictions. Top: walking, middle: greeting, bottom: posing.
+
+Human 3.6M dataset. We conduct experiments on the well-established future pose prediction benchmark Human3.6M [15]. Here we predict the future 3D locations of 32 joints in the human body. We use quaternions to represent each joint, allowing for a more continuous joint representation space. We also predict differences, rather than absolute positions, which we found leads to more stable learning. Previous work demonstrated prediction results up to a second on this dataset; our model can generate future sequences for longer horizons, up to 4 seconds into the future.
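The difference-based representation can be sketched as follows (our illustration of the idea; the paper's exact parameterization may differ): each joint carries a unit quaternion, and a predicted rotation difference is composed onto the current pose at every step.

```python
# Sketch (ours) of delta-based pose prediction: each of the 32 Human3.6M
# joints is a unit quaternion (w, x, y, z), and the network predicts a
# rotation *difference* that is composed onto the current joint rotation.

def quat_mul(q, r):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

IDENTITY = (1.0, 0.0, 0.0, 0.0)

def apply_deltas(joint_rotations, predicted_deltas):
    """One prediction step: compose each joint's predicted delta onto it."""
    return [quat_mul(d, q) for q, d in zip(joint_rotations, predicted_deltas)]

# Predicting identity deltas reproduces the current pose, which is exactly
# the strong 'zero-velocity' baseline of Table 1.
pose = [IDENTITY] * 32
assert apply_deltas(pose, [IDENTITY] * 32) == pose
```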
+
+We compare against the state-of-the-art methods on the Human 3.6M benchmark [11,17,15,22,31] using the Mean Angle Error (MAE) metric as introduced by [17]. Table 1 shows results on several activities and Table 2 shows average
+
+
+Fig. 4: Starting from a neutral pose, the grammar is able to generate multiple sequences by selecting different rules. Top: a walking sequence, middle: eating, bottom: sitting.
+
+MAE for all activities compared to the state-of-the-art methods, consistent with the protocol in prior work. As seen from the tables, our work outperforms prior work. Furthermore, we are able to generate results at larger time horizons of four seconds in the future. In Figure 3, we show some predicted future poses for several different activities, confirming the results reflect the characteristics of the actual behaviors. In Figure 4, we show the ability of the adversarial grammar to generate different sequences from a given starting state. Here, given the same starting state, we select different rules, which lead to different sequences corresponding to walking, eating or sitting.
+
+# 4.3 Activity forecasting in videos
+
+We further test the method on video activity anticipation, where the goal is to predict future activities at various time horizons, using an initial video sequence as input. We predict future activities on three video understanding datasets: MultiTHUMOS [35], Charades [28], and 50-salads [30], using the standard evaluation protocol of each dataset. On MultiTHUMOS and Charades we predict from 1 to 45 seconds into the future, which is much further ahead than prior approaches.
+
+50 Salads. Following the 'without ground truth' setting of [19] and [7], we evaluate the future prediction task on the 50 Salads dataset [30]. As per the standard evaluation protocol, we report prediction accuracy on portions of the video after observing the first $20\%$ or $30\%$ of it. The results are shown in Table 3, where 'Grammar only' denotes training without adversarial losses. The results confirm that our approach yields better predictions, outperforming both the baseline, which is already a strong grammar model, and the state-of-the-art approaches. Fig. 5 shows an example prediction, which proposes three plausible continuations of the recipe, the top one corresponding to the ground truth.
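The observe-then-predict protocol above can be sketched as a frame-level accuracy over a prediction window. This is a simplified stand-in for the actual protocol of [7,19] (which involves per-class/segment averaging details not reproduced here); the function name and arguments are our own.

```python
def window_accuracy(gt_labels, pred_labels, obs_frac, pred_frac):
    """Frame-level accuracy over the prediction window.

    After observing the first `obs_frac` of the video, score the predicted
    per-frame labels covering the next `pred_frac` of its duration.
    """
    n = len(gt_labels)
    start = int(n * obs_frac)            # first unobserved frame
    end = min(n, start + int(n * pred_frac))
    if start >= end:
        return 0.0
    correct = sum(gt_labels[i] == pred_labels[i] for i in range(start, end))
    return correct / (end - start)
```

For example, with 20% observed and a 30% prediction horizon, only the frames in that 30% window contribute to the score, matching the column layout of Table 3.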
+
+Table 3: Results on 50 Salads without ground-truth observations. The proposed work outperforms the grammar baselines and the state-of-the-art.
+
+| Observation | 20% | 20% | 20% | 20% | 30% | 30% | 30% | 30% |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Prediction | 10% | 20% | 30% | 50% | 10% | 20% | 30% | 50% |
+| Nearest-Neighbor [7] | 19.0 | 16.1 | 14.1 | 10.4 | 21.6 | 15.5 | 13.5 | 13.9 |
+| RNN [7] | 30.1 | 25.4 | 18.7 | 13.5 | 30.8 | 17.2 | 14.8 | 9.8 |
+| CNN [7] | 21.2 | 19.0 | 16.0 | 9.9 | 29.1 | 20.1 | 17.5 | 10.9 |
+| TCA [19] | 32.5 | 27.6 | 21.3 | 16.0 | 35.1 | 27.1 | 22.1 | 15.6 |
+| Grammar (from [7]) | 24.7 | 22.3 | 19.8 | 12.7 | 29.7 | 19.2 | 15.2 | 13.1 |
+| Grammar only | 39.2 | 32.1 | 24.8 | 19.3 | 38.4 | 29.5 | 25.5 | 18.5 |
+| AGG (Ours) | 39.5 | 33.2 | 25.9 | 21.2 | 39.5 | 31.5 | 26.4 | 19.8 |
+
+MultiTHUMOS. Here we present our future prediction results on the MultiTHUMOS dataset [35]. We use a standard evaluation metric: we predict the activities occurring $T$ seconds in the future and compute the mean average precision (mAP) between the predictions and the ground truth. As the grammar model is able to generate multiple different future sequences, we also report the maximum mAP the model could obtain by selecting the best of 10 different future predictions. We compare predictions at 1, 2, 5, 10, 20, 30, and 45 seconds into the future. As little work has explored long-term future activity prediction (with the exception of [35], which predicts within a second), we compare against four baseline methods: (i) repeating the activity prediction of the last seen frame, (ii) using a fully-connected layer to predict the next second (applied autoregressively), (iii) using a fully-connected layer to directly predict activities at various future times, and (iv) an LSTM applied autoregressively to future activity predictions.
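The mAP metric and the "best of 10 samples" score can be sketched as below. This is a minimal ranking-based AP; the exact averaging conventions used for MultiTHUMOS may differ, and the function names are our own.

```python
def average_precision(scores, labels):
    """AP for one class: mean precision at each true-positive rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, prec_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            prec_sum += hits / rank
    return prec_sum / hits if hits else 0.0

def mean_ap(scores_per_class, labels_per_class):
    """mAP: average of per-class APs."""
    aps = [average_precision(s, l)
           for s, l in zip(scores_per_class, labels_per_class)]
    return sum(aps) / len(aps)

def best_of_k_map(sampled_scores, labels_per_class):
    """'Max mAP': evaluate each of the K sampled futures, keep the best one."""
    return max(mean_ap(s, labels_per_class) for s in sampled_scores)
```

`best_of_k_map` captures why sampling multiple futures can only help this metric: a single ground-truth sequence is compared against the closest of the K generated ones.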
+
+Table 4 shows activity prediction accuracy on the MultiTHUMOS dataset. In the table, we also report our approach when limited to generating a single outcome ('AGG-single'), to be consistent with previous methods, which cannot generate more than one outcome. We also compare to the grammar without adversarial learning, trained by pruning the exponential number of future sequences to fit into memory ('Grammar only').
+
+As seen, our approach outperforms the alternative methods. We observe that the gap to other approaches widens further into the future: 3.9 mAP for the LSTM vs. 11.2 for ours at 45 seconds into the future, as the autoregressive predictions of an LSTM become noisy. Due to the structure of the grammar model, we are able to generate better long-term predictions. We also find that predicting multiple futures and taking the max improves performance, confirming that the grammar model generates different sequences, some of which more closely match the ground truth (see also Figure 6).
+
+
+
+Fig. 5: Example sequence from 50-salads showing the observed frames and the next two predictions.
+
+Table 4: Prediction mAP for future activities (higher is better) from 1 second to 45 seconds in the future on MultiTHUMOS.
+
+| Method | 1 sec | 2 sec | 5 sec | 10 sec | 20 sec | 30 sec | 45 sec |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 |
+| Last Predicted Action | 16.5 | 16.0 | 15.1 | 12.7 | 8.7 | 5.8 | 5.9 |
+| FC Autoregressive | 17.9 | 17.0 | 14.5 | 7.7 | 4.5 | 4.2 | 4.7 |
+| FC Direct | 13.7 | 9.8 | 11.0 | 7.3 | 8.0 | 5.5 | 8.2 |
+| LSTM (Autoregressive) | 16.5 | 15.7 | 12.5 | 6.8 | 4.1 | 3.2 | 3.9 |
+| Grammar only | 18.7 | 18.6 | 13.5 | 12.8 | 10.5 | 8.2 | 8.5 |
+| AGG-single (Ours) | 19.3 | 19.6 | 13.1 | 13.6 | 11.7 | 10.4 | 11.4 |
+| AGG (Ours) | 22.0 | 19.9 | 15.5 | 14.4 | 13.3 | 10.8 | 11.4 |
+
+Charades. Table 5 shows the future activity prediction results on Charades, using the same protocol as for MultiTHUMOS. Similar to our MultiTHUMOS experiments, we observe that the adversarial grammar model provides more accurate future prediction than previous work, outperforming the grammar-only model in most cases. While the grammar-only model performs slightly better at 10 and 20 seconds, it is not computationally feasible for real-valued tasks due to memory constraints. We note that Charades is more challenging than the other datasets for both recognition and prediction. Figure 1 shows a true sequence and several other sequences generated by the adversarial grammar. As Charades contains many different possible activity sequences, generating multiple futures is beneficial.
+
+Ablation study. We conduct additional experiments to examine the importance of learning a grammar with multiple possibilities (i.e., branching). Table 6 compares the models with and without the branching capability. These models use the exact same network architecture as our full models; the only difference is that they do not consider multiple possible sequences during learning. That is, they become standard RNNs, constrained to have our grammar structure. We observe that the ability to consider multiple possibilities during learning is important, and that our adversarial training is beneficial. Note that, for a fair comparison, we restricted these models to generate only the single sequence with the highest likelihood during inference.
+
+
+Fig. 6: Example video and activity sequence from MultiTHUMOS (a cricket game). The adversarial grammar is able to learn two possible sequences: a hit/play and no play, instead of picking only the most likely one.
+
+Table 5: Prediction accuracy for future activities up to 45 seconds in the future on the Charades dataset.
+
+| Method | 1 sec | 2 sec | 5 sec | 10 sec | 20 sec | 30 sec | 45 sec |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Random | 2.4 | 2.4 | 2.4 | 2.4 | 2.4 | 2.4 | 2.4 |
+| Last Predicted Action | 15.1 | 13.8 | 12.8 | 10.2 | 7.6 | 6.2 | 5.7 |
+| FC Autoregressive | 13.5 | 14.0 | 12.6 | 6.7 | 3.7 | 3.5 | 5.1 |
+| FC Direct | 15.2 | 14.5 | 12.2 | 9.1 | 6.6 | 6.5 | 5.5 |
+| LSTM (Autoregressive) | 12.6 | 12.7 | 12.4 | 10.8 | 7.0 | 6.1 | 5.4 |
+| Grammar only | 15.7 | 14.8 | 12.9 | 11.2 | 8.5 | 6.6 | 8.5 |
+| AGG-single (Ours) | 15.9 | 15.0 | 13.1 | 10.5 | 7.4 | 6.2 | 8.8 |
+| AGG (Ours) | 17.0 | 15.9 | 13.4 | 10.7 | 7.8 | 7.2 | 9.8 |
+
+Table 6: Ablation of our grammar learning on Charades.
+
+| Method | 1 sec | 5 sec | 45 sec |
+| --- | --- | --- | --- |
+| Grammar only - no branching | 12.2 | 8.4 | 3.8 |
+| Grammar only | 15.7 | 12.9 | 8.5 |
+| Adversarial Grammar (AGG) - no branching | 14.2 | 12.5 | 5.5 |
+| Adversarial Grammar (AGG) | 15.9 | 13.1 | 8.8 |
+
+# 5 Conclusion
+
+We proposed a differentiable adversarial generative grammar which shows strong performance on future prediction of human pose and activities. Because of the structure we impose for learning grammar-like rules for sequences, and because of the adversarial learning, the model is able to generate multiple sequences that follow the distribution seen in data. One challenge is evaluating future predictions when the ground truth contains only one of many potentially valid sequences. In the future, other forms of evaluation, such as asking humans to rate a generated sequence, could be explored.
+
+# References
+
+1. Babaeizadeh, M., Finn, C., Erhan, D., Campbell, R.H., Levine, S.: Stochastic variational video prediction. arXiv preprint arXiv:1710.11252 (2017) 3
+2. Brock, A., Donahue, J., Simonyan, K.: Large scale gan training for high fidelity natural image synthesis. ICLR (2019) 3
+3. Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP (2014) 8
+4. Chomsky, N.: Three models for the description of language. IRE Transactions on information theory 2(3), 113-124 (1956) 2
+5. Denton, E., Fergus, R.: Stochastic video generation with a learned prior. arXiv preprint arXiv:1802.07687 (2018) 3
+6. Denton, E.L., Chintala, S., Fergus, R.: Deep generative image models using a laplacian pyramid of adversarial networks. Advances in Neural Information Processing Systems (NeurIPS) (2015) 3
+7. Farha, Y.A., Richard, A., Gall, J.: When will you do what? - anticipating temporal occurrences of activities. In: CVPR (2018) 9, 11, 12
+8. Fedus, W., Goodfellow, I., Dai, A.: Maskgan: Better text generation via filling in the ______. ICLR (2018) 3
+9. Finn, C., Goodfellow, I., Levine, S.: Unsupervised learning for physical interaction through video prediction. In: Advances in Neural Information Processing Systems (NeurIPS). pp. 64-72 (2016) 3
+10. Fraccaro, M., Sønderby, S.K., Paquet, U., Winther, O.: Sequential neural models with stochastic layers. In: Advances in neural information processing systems. pp. 2199-2207 (2016) 3, 7
+11. Fragkiadaki, K., Levine, S., Felsen, P., Malik, J.: Recurrent network models for human dynamics. In: ICCV (2015) 3, 9, 10
+12. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. Advances in Neural Information Processing Systems (NeurIPS) (2014) 3, 7
+13. Han, F., Zhu, S.C.: Bottom-up/top-down image parsing with attribute grammar. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(1), 59-73 (2008) 3
+14. Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., Xing, E.P.: Toward controlled generation of text. ICML (2017) 3
+15. Ionescu, C., Papava, D., Olaru, V., Sminchisescu, C.: Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence (2014) 3, 9, 10
+16. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CVPR (2017) 3
+17. Jain, A., Zamir, A.R., Savarese, S., Saxena, A.: Structural-rnn: Deep learning on spatio-temporal graphs. In: CVPR (2016) 3, 10
+18. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. In: ICLR (2017) 6
+19. Ke, Q., Fritz, M., Schiele, B.: Time-conditioned action anticipation in one shot. In: CVPR (2019) 9, 11, 12
+20. Lee, A.X., Zhang, R., Ebert, F., Abbeel, P., Finn, C., Levine, S.: Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523 (2018) 3
+
+21. Maddison, C.J., Mnih, A., Teh, Y.W.: The concrete distribution: A continuous relaxation of discrete random variables. In: ICLR (2017) 6
+22. Martinez, J., Black, M., Romero, J.: On human motion prediction using recurrent neural networks. In: CVPR (2017) 3, 9, 10
+23. Moore, D., Essa, I.: Recognizing multitasked activities from video using stochastic context-free grammar. In: Proceedings of AAAI Conference on Artificial Intelligence (AAAI). pp. 770-776 (2002) 3
+24. Pirsiavash, H., Ramanan, D.: Parsing videos of actions with segmental grammars. In: CVPR. pp. 612-619 (2014) 3
+25. Qi, S., Huang, S., Wei, P., Zhu, S.C.: Predicting human activities using stochastic grammar. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1164-1172 (2017) 3
+26. Qi, S., Jia, B., Zhu, S.C.: Generalized earley parser: Bridging symbolic grammars and sequence data for future prediction. arXiv preprint arXiv:1806.03497 (2018) 3
+27. Ryoo, M.S., Aggarwal, J.K.: Recognition of composite human activities through context-free grammar based representation. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). vol. 2, pp. 1709-1718. IEEE (2006) 3
+28. Sigurdsson, G.A., Varol, G., Wang, X., Farhadi, A., Laptev, I., Gupta, A.: Hollywood in homes: Crowdsourcing data collection for activity understanding. Proceedings of European Conference on Computer Vision (ECCV) (2016) 9, 11
+29. Socher, R., Lin, C.C., Manning, C., Ng, A.Y.: Parsing natural scenes and natural language with recursive neural networks. In: Proceedings of the 28th international conference on machine learning (ICML-11). pp. 129-136 (2011) 3
+30. Stein, S., McKenna, S.J.: Combining embedded accelerometers with computer vision for recognizing food preparation activities. In: Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing. pp. 729-738. ACM (2013) 9, 11
+31. Tang, Y., Ma, L., Liu, W., Zheng, W.S.: Long-term human motion prediction by modeling motion context and enhancing motion dynamic. In: IJCAI (2018) 3, 9, 10
+32. Vo, N.N., Bobick, A.F.: From stochastic grammar to bayes network: Probabilistic parsing of complex activity. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2641-2648 (2014) 3
+33. Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: CVPR (2018) 3
+34. Yang, F., Yang, Z., Cohen, W.W.: Differentiable learning of logical rules for knowledge base reasoning. Advances in Neural Information Processing Systems (NeurIPS) (2017) 3
+35. Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., Fei-Fei, L.: Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision (IJCV) pp. 1-15 (2015) 9, 11, 12
+36. Yogatama, D., Miao, Y., Melis, G., Ling, W., Kuncoro, A., Dyer, C., Blunsom, P.: Memory architectures in recurrent neural network language models. ICLR (2018) 6
+37. Yu, L., Zhang, W., Wang, J., Yu, Y.: Seqgan: sequence generative adversarial nets with policy gradient. Proceedings of AAAI Conference on Artificial Intelligence (AAAI) (2017) 3
+38. Zhao, Y., Zhu, S.C.: Image parsing with stochastic scene grammar. In: Advances in Neural Information Processing Systems. pp. 73-81 (2011) 3
+
+39. Zhu, S.C., Mumford, D.: A stochastic grammar of images. Foundations and Trends® in Computer Graphics and Vision 2 (2007) 3
\ No newline at end of file
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/images.zip b/adversarialgenerativegrammarsforhumanactivityprediction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..511f3184681d78dc74d054929591ac20f31d2e03
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d477652541853cb8c2a0feaac2f5a5d9b142fc5a1f7f846e8adc2cefd9ffa066
+size 615847
diff --git a/adversarialgenerativegrammarsforhumanactivityprediction/layout.json b/adversarialgenerativegrammarsforhumanactivityprediction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a07526af3f3067b2d8777a7b17f212dbf4299fe6
--- /dev/null
+++ b/adversarialgenerativegrammarsforhumanactivityprediction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e77046033e37fe2fbd601c85f28a678ea023891867935e554b13f8caf6533997
+size 371491
diff --git a/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_content_list.json b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a6ef2b22ad10edec8e6500364a6d5b8fc4ef8bfb
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8db0e57f804f74408ee4e7fde43eb9e359ecb65f483cb58418eae5677c0bc4bf
+size 82424
diff --git a/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_model.json b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e5e62870036dd4a8eda687a87646ede10200fc36
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7851c0228741f5a9201c99221d5d67ff4cca43cd05976aeca5531870daccc58
+size 101510
diff --git a/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_origin.pdf b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4a4a08621fbfc164d1ac09c3cb8b40999aea5641
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/85bee00a-b072-45b6-93f0-f18c6af082b2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69134320c8427ed2727bc3a2f466dfac004d8265eac0aaf0dd2ae409d7722ebe
+size 665249
diff --git a/adversariallearningforzeroshotdomainadaptation/full.md b/adversariallearningforzeroshotdomainadaptation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bae210d582b4a22a232e6ec410458c50bde35feb
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/full.md
@@ -0,0 +1,308 @@
+# Adversarial Learning for Zero-shot Domain Adaptation
+
+Jinghua Wang[0000-0002-2629-1198] and Jianmin Jiang[0000-0002-7576-3999]
+
+Research Institute for Future Media Computing, College of Computer Science & Software Engineering, and Guangdong Laboratory of Artificial Intelligence & Digital Economy (SZ), Shenzhen University, Shenzhen, China. {wang.jh, jianmin.jiang}@szu.edu.cn†
+
+Abstract. Zero-shot domain adaptation (ZSDA) is a category of domain adaptation problems where neither data samples nor labels are available for parameter learning in the target domain. With the hypothesis that the shift between a given pair of domains is shared across tasks, we propose a new method for ZSDA that transfers the domain shift from an irrelevant task $(IrT)$ to the task of interest $(ToI)$ . Specifically, we first identify an $IrT$ , where dual-domain samples are available, and capture the domain shift in this task with coupled generative adversarial networks (CoGAN). Then, we train a CoGAN for the $ToI$ and restrict it to carry the same domain shift as the CoGAN for the $IrT$ does. In addition, we introduce a pair of co-training classifiers to regularize the training procedure of the CoGAN in the $ToI$ . The proposed method not only derives machine learning models for the non-available target-domain data, but also synthesizes the data themselves. We evaluate the proposed method on benchmark datasets and achieve state-of-the-art performance.
+
+Keywords: Transfer Learning; Domain Adaptation; Zero-shot Learning; Coupled Generative Adversarial Networks
+
+# 1 Introduction
+
+When a standard machine learning technique learns a model with the training data and applies the model to the testing data, it implicitly assumes that the testing data share the distribution of the training data [16, 37, 38]. However, this assumption is often violated in applications, as real-world data often come from different domains [32]. For example, the images captured by different cameras follow different distributions due to variations in resolution, illumination, and capturing view.
+
+Domain adaptation techniques tackle the problem of domain shift by transferring knowledge from the label-rich source domain to the label-scarce target domain [6,15,2]. They have a wide range of applications, such as person re-identification [43], semantic segmentation [24], attribute analysis [44], and medical image analysis [7]. Most domain adaptation techniques assume that the data
+
+
+Fig. 1. An intuitive example of ZSDA (best viewed in color). The $ToI$ is digit image analysis and the $IrT$ is letter image analysis. The source domain consists of gray scale images and the target domain consists of color images. In order to learn the model for the unseen MNIST- $M$ (i.e., target-domain data in $ToI$ ), we first learn the domain shift based on the dual-domain samples in the $IrT$ , then transfer it to the $ToI$ .
+
+in the target domain are available at training time for model learning [42, 23, 6, 29]. However, this is not always the case in the real world. For example, we may want an artificial intelligence system to provide continuous service with a newly installed camera [19]. This involves a domain adaptation task, where the source domain consists of the images captured by the old camera and the target domain consists of the non-accessible images captured by the new camera. Such a task is referred to as domain generalization [8] or zero-shot domain adaptation (ZSDA) [28, 36].
+
+In this paper, we propose a new method to tackle the challenging ZSDA tasks, where only the source-domain data is available in the Task of Interest (ToI). The existence of domain shift means we cannot learn a model for the target domain from the source-domain data alone. To solve this problem, we establish the hypothesis that the domain shift, which intrinsically characterizes the difference between the domains, is shared by different tasks. For successful ZSDA, we first learn the domain shift from an irrelevant task $(IrT)$ where ample data in both domains are available, then transfer this domain shift to the $ToI$ and learn the model for the target domain.
+
+We illustrate an example of ZSDA in Fig. 1, which learns a model for the color digit images (i.e., MNIST-M [6]), given the grayscale digit images (i.e., MNIST [25]), the grayscale letter images (i.e., EMNIST [3]), and the color letter images (i.e., EMNIST-M). In this example, the $ToI$ and $IrT$ are digit and letter image analysis, respectively. The source domain consists of grayscale images, and the target domain consists of color images. We consider these two tasks to have the same domain shift, which transforms grayscale images to color images.
+
+With the available dual-domain data in the $IrT$ , we can train coupled generative adversarial networks (CoGAN) [21] (i.e., CoGAN-$IrT$ ) to model the joint distribution of images in these two domains. This CoGAN-$IrT$ not only captures what the two domains share in high-level concepts, but also implicitly encodes the difference between them, which is typically referred to as domain shift. We consider one source-domain sample and one target-domain sample to be paired samples if they are realizations of the same thing and correspond to each other. Fig. 1 shows eight grayscale images and their correspondences in the color domain. The RGB image and the depth image of the same scene are also paired samples [39]. Based on the observation that it is the domain shift that introduces the difference between paired samples, we define the domain shift to be the distribution of the representation difference between paired samples.
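The definition above (and the deltas $\delta_h = R_h(x_t) \ominus R_h(x_s)$ used later in Fig. 2) can be sketched as follows. The representation function and the elementwise difference are illustrative assumptions; the paper matches the full distribution of deltas adversarially rather than just a mean.

```python
def paired_shift(pairs, repr_fn):
    """Per-pair shift delta = R(x_t) - R(x_s) over paired (source, target) samples."""
    deltas = []
    for x_s, x_t in pairs:
        r_s, r_t = repr_fn(x_s), repr_fn(x_t)
        deltas.append(tuple(t - s for s, t in zip(r_s, r_t)))
    return deltas

def mean_delta(deltas):
    """A crude per-dimension summary of the shift distribution (its mean)."""
    k = len(deltas)
    return tuple(sum(d[i] for d in deltas) / k for i in range(len(deltas[0])))
```

Under the paper's shared-shift hypothesis, `paired_shift` computed on $IrT$ pairs and on $ToI$ pairs should yield samples from the same distribution.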
+
+For successful ZSDA in the $ToI$ , we train a CoGAN-ToI to capture the joint distribution of dual-domain samples and use it to synthesize the unseen samples in the target domain. Besides the available samples in the source domain, we introduce two supervisory signals for CoGAN-ToI training. Firstly, we transfer the domain shift from the $IrT$ to the $ToI$ and enforce the CoGAN-ToI to encode the same domain shift as the CoGAN-IrT does. In other words, we restrict the representation difference between paired samples to follow the same distribution in the two tasks. To improve the quality of the synthesized target-domain samples, we also use a pair of co-training classifiers to guide the training procedure of the CoGAN-ToI. The predictions of these two classifiers are trained to be (i) consistent when receiving samples from both the $IrT$ and the $ToI$ , and (ii) as different as possible when receiving samples which are not from these two tasks. In the training procedure, we guide the CoGAN-ToI to synthesize target-domain samples for which the classifiers produce consistent predictions. With domain shift preservation and co-training classifier consistency as the supervisory signals, our CoGAN-ToI can synthesize high-quality data for the non-accessible target domain and learn well-performing models.
+
+In summary, we propose a new method for ZSDA by learning across both domains and tasks. Our contributions are twofold.
+
+- Firstly, we propose a new strategy for domain adaptation through domain shift transfer across tasks. For the first time, we define the domain shift to be the distribution of the representation difference between paired samples in two domains. We learn the domain shift from a CoGAN-IrT that captures the joint distribution of dual-domain samples in the IrT, and design a method to transfer the shift to the ToI, where only the source domain is seen. In addition to domain shift preservation, we also take the consistency of two co-training classifiers as another supervisory signal for CoGAN-ToI training, to better explore the non-accessible target domain.
+- Secondly, our method has a broader range of applications than existing methods [28, 36]. While our method is applicable when paired samples in the $IrT$ are non-accessible, the work [28] is not. While our method can learn the domain shift from one $IrT$ and transfer it to multiple different $ToI$ , the work [36] is only applicable to a given pair of $(IrT, ToI)$ .
+
+# 2 Related Work
+
+While standard machine learning methods involve a single domain [13, 35], domain adaptation uses labeled data samples in one or more source domains to learn a model for the target domain. To learn transferable knowledge, researchers normally minimize the discrepancy between domains by learning domain-invariant features [6, 22, 33, 40]. Ganin and Lempitsky [6] introduced a gradient reversal layer to extract features that confuse the domain classifier. Long et al. [22] introduced a residual transfer network to bridge the source domain and the target domain with transferable features and adaptive classifiers. Taking the maximum mean discrepancy (MMD) as the measurement between domains, Tzeng et al. [33] introduced an adaptation layer to learn representations which are not only domain-invariant but also semantically meaningful. To address the problem of class weight bias, Yan et al. [40] introduced a weighted MMD and proposed a classification EM algorithm for unsupervised domain adaptation.
+
+These methods achieve good performances in various computer vision tasks. However, none of them can solve the ZSDA problem as they rely on the target-domain data at the training time. The existing techniques for ZSDA can be summarized into three categories based on their strategies.
+
+The first strategy learns domain-invariant features which not only work in the available source domains but also generalize well to the unseen target domain. Domain-invariant component analysis (DICA) [26] is a kernel-based method that learns a common feature space for different domains while preserving the posterior probability. For cross domain object recognition, multi-task autoencoder (MTAE) [8] extends the standard denoising autoencoder framework by reconstructing the analogs of a given image for all domains. Conditional invariant deep domain generalization (CIDDG) [20] introduces an invariant adversarial network to align the conditional distributions across domains and guarantee the domain-invariance property. With a structured low-rank constraint, deep domain generalization framework (DDG) [5] aligns multiple domain-specific networks to learn sharing knowledge across source domains.
+
+The second strategy assumes that a domain is jointly determined by a sharing latent common factor and a domain specific factor [14, 18, 41]. This strategy identifies the common factor through decomposition and expects it to generalize well in the unseen target domain. Khosla et al.[14] model each dataset as a biased observation of the visual world and conduct the decomposition via max-margin learning. Li et al.[18] develop a low-rank parameterized CNN model to simultaneously exploit the relationship among domains and learn the domain agnostic classifier. Yang and Hospedales [41] parametrize the domains with continuous values and propose a solution to predict the subspace of the target domain via manifold-valued data regression. Researchers also correlate domains with semantic descriptors [15] or latent domain vectors [17].
+
+The third strategy first learns the correlation between domains from an assistant task, then accomplishes ZSDA based on the available source-domain data and the domain correlation [28,36]. Normally, this strategy relies on an $IrT$ where data from both source and target domain are sufficiently available.
+
+In comparison with the first two strategies, this strategy can work well with a single source domain. Zero-shot deep domain adaptation (ZDDA) [28] aligns representations from the source domain and the target domain in the $IrT$ and expects the alignment to hold in the $ToI$ . CoCoGAN [36] aligns the representations across tasks in the source domain and takes this alignment as the supervisory signal in the target domain.
+
+# 3 Background
+
+Generative Adversarial Networks (GAN) consist of two competing models, i.e., the generator and the discriminator [9]. Taking a random vector $z \sim p_z$ as input, the generator aims to synthesize images $g(z)$ that resemble real images as closely as possible. The discriminator tries to distinguish real images from the synthesized ones: it takes an image $x$ as input and outputs a scalar $f(x)$ , which is expected to be large for real images and small for synthesized images. The following objective function formulates the adversarial training of the generator and the discriminator:
+
+$$
+\max _ {g} \min _ {f} V (f, g) \equiv E _ {x \sim p _ {x}} [ - \log f (x) ] + E _ {z \sim p _ {z}} [ - \log (1 - f (g (z))) ], \tag {1}
+$$
+
+where $E$ denotes the empirical estimate of the expectation. In fact, Eq. (1) measures the Jensen-Shannon divergence between the distribution of real images and that of the synthesized images [9].
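The two sides of the minimax game in Eq. (1) can be written out as per-sample losses; this is a minimal numeric sketch (function names are ours), not a full training loop.

```python
import math

def discriminator_loss(f_real, f_fake):
    """The f-side of Eq. (1): f minimizes -log f(x) - log(1 - f(g(z))),
    pushing f(x) toward 1 on real images and f(g(z)) toward 0 on fakes."""
    return -math.log(f_real) - math.log(1.0 - f_fake)

def generator_objective(f_fake):
    """The g-side of Eq. (1): g maximizes -log(1 - f(g(z))),
    i.e. it is rewarded when the discriminator is fooled into f(g(z)) -> 1."""
    return -math.log(1.0 - f_fake)
```

A confident, correct discriminator (high `f_real`, low `f_fake`) gets a small loss, while the generator's objective grows as `f_fake` approaches 1, which is exactly the tension the adversarial training exploits.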
+
+Coupled Generative Adversarial Networks (CoGAN) consist of a pair of GANs (i.e., $\mathrm{GAN}_1$ and $\mathrm{GAN}_2$ ) that are closely related to each other. With each GAN corresponding to one domain, CoGAN captures the joint distribution of images from two different domains [21]. Let $x_i \sim p_{x_i} (i = 1, 2)$ be the images from the $i$ -th domain. In $\mathrm{GAN}_i (i = 1, 2)$ , we denote the generator as $g_i$ and the discriminator as $f_i$ . Based on a shared random vector $z$ , the generators synthesize image pairs $(g_1(z), g_2(z))$ which not only are indistinguishable from real ones but also correspond to each other. We can formulate the objective function of the CoGAN as follows:
+
+$$
+\begin{aligned} \max _ {g _ {1}, g _ {2}} \min _ {f _ {1}, f _ {2}} V \left(f _ {1}, f _ {2}, g _ {1}, g _ {2}\right) \equiv \; & E _ {x _ {1} \sim p _ {x _ {1}}} \left[ - \log f _ {1} \left(x _ {1}\right) \right] + E _ {z \sim p _ {z}} \left[ - \log \left(1 - f _ {1} \left(g _ {1} (z)\right)\right) \right] \\ + \; & E _ {x _ {2} \sim p _ {x _ {2}}} \left[ - \log f _ {2} \left(x _ {2}\right) \right] + E _ {z \sim p _ {z}} \left[ - \log \left(1 - f _ {2} \left(g _ {2} (z)\right)\right) \right], \end{aligned} \tag {2}
+$$
+
+subject to two constraints: (i) $\theta_{g_1^j} = \theta_{g_2^j}$ , $1 \leq j \leq n_g$ ; and (ii) $\theta_{f_1^{n_1 - k}} = \theta_{f_2^{n_2 - k}}$ , $0 \leq k \leq n_f - 1$ . The parameter $n_i (i = 1, 2)$ denotes the number of layers in the discriminator $f_i$ . While the first constraint restricts the generators to share their $n_g$ bottom layers, the second restricts the discriminators to share their $n_f$ top layers. These two constraints force the generators and discriminators to process high-level concepts in the same way, so that CoGAN is able to discover the correlation between the two domains. CoGAN is effective in dual-domain analysis, as it is capable of learning the joint distribution of data samples (i.e., $p_{x_1, x_2}$ ) from samples drawn individually from the marginal distributions (i.e., $p_{x_1}$ and $p_{x_2}$ ).
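The two constraints amount to parameter tying. The sketch below makes the tying literal by storing shared layers as the same array object in both branches; the $4 \times 4$ weight shapes are placeholders, while the defaults (seven layers, five shared) mirror the implementation described later in Sec. 5.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cogan_params(n_layers_g=7, n_g_shared=5, n_layers_f=7, n_f_shared=5):
    """Sketch of CoGAN weight sharing (placeholder 4x4 weight shapes).

    Generators share their first n_g_shared (bottom) layers; discriminators
    share their last n_f_shared (top) layers. A shared layer is the same
    array object in both branches, so one gradient update moves both.
    """
    shared_g = [rng.standard_normal((4, 4)) for _ in range(n_g_shared)]
    g1 = shared_g + [rng.standard_normal((4, 4)) for _ in range(n_layers_g - n_g_shared)]
    g2 = shared_g + [rng.standard_normal((4, 4)) for _ in range(n_layers_g - n_g_shared)]
    shared_f = [rng.standard_normal((4, 4)) for _ in range(n_f_shared)]
    f1 = [rng.standard_normal((4, 4)) for _ in range(n_layers_f - n_f_shared)] + shared_f
    f2 = [rng.standard_normal((4, 4)) for _ in range(n_layers_f - n_f_shared)] + shared_f
    return g1, g2, f1, f2
```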
+
+Fig. 2. The network structures of our method: (a) CoGAN- $IrT$ for the $IrT$ , (b) CoGAN-ToI for the $ToI$ , (c) the task classifier for shift transferring, and (d) the co-training classifiers. The CoGAN- $IrT$ in (a) models the joint distribution of $(x_s^\beta, x_t^\beta)$ in the $IrT$ ; the CoGAN-ToI in (b) models the joint distribution of $(x_s^\alpha, x_t^\alpha)$ in the $ToI$ . In the discriminators of these two CoGANs, $R_l(\cdot)$ denotes the lower-level representation produced by the non-sharing layers and $R_h(\cdot)$ the higher-level representation produced by the sharing layers. The task classifier in (c) discriminates $\delta_h^\beta = R_h(x_t^\beta) \ominus R_h(x_s^\beta)$ from $\delta_h^\alpha = R_h(x_t^\alpha) \ominus R_h(x_s^\alpha)$ ; we maximize the loss of this task classifier to align the domain shift. The co-training classifiers in (d) produce the labels for $X_s^\alpha$ and consistent predictions for $X_s^\beta$ and $X_t^\beta$ . To train the CoGAN-ToI, we use domain shift preservation to regularize the higher-level features and co-training classifiers to regularize the lower-level features; the backpropagation directions of these two signals are marked in orange and red, respectively.
+
+# 4 Approach
+
+# 4.1 Problem Definition
+
+We define a domain $D = \{X, P(X)\}$ by its data sample space $X$ and marginal probability distribution $P(X)$ [27]. Given the data samples $X$ , a task $T = \{Y, P(Y|X)\}$ consists of a label space $Y$ and the conditional probability distribution $P(Y|X)$ . This work considers two tasks to be the same as long as they share the same label space. In the $ToI$ , the label space is $Y^{\alpha}$ , the source domain is $D_s^\alpha = \{X_s^\alpha, P(X_s^\alpha)\}$ , and the target domain is $D_t^\alpha = \{X_t^\alpha, P(X_t^\alpha)\}$ . The $ToI$ is then denoted by $T^\alpha = \{Y^\alpha, P_s(Y^\alpha | X_s^\alpha)\} \cup \{Y^\alpha, P_t(Y^\alpha | X_t^\alpha)\}$ .
+
+Given the labeled data samples in the source domain (i.e., $(x_s^\alpha, y_s^\alpha)$ , $x_s^\alpha \in X_s^\alpha$ and $y_s^\alpha \in Y^\alpha$ ), our ZSDA task aims to derive the conditional probability distribution $P(Y^\alpha | X_t^\alpha)$ in the target domain. The main challenge of this task arises from the inaccessibility of the target-domain data, as well as the domain shift, i.e., $P(X_s^\alpha) \neq P(X_t^\alpha)$ and $P_s(Y^\alpha | X_s^\alpha) \neq P_t(Y^\alpha | X_t^\alpha)$ .
+
+# 4.2 Main Idea
+
+To accomplish the ZSDA task, we identify an irrelevant task $(IrT)$ that satisfies two constraints: (i) the $IrT$ involves the same pair of domains as the $ToI$ ; and (ii) the dual-domain samples of the $IrT$ are available. Under the hypothesis that the shift between a given pair of domains is maintained across tasks, we propose to learn the domain shift from the $IrT$ and transfer it to the $ToI$ .
+
+Let the label space of the $IrT$ be $Y^{\beta}$ . We denote the $IrT$ as $T^{\beta} = \{Y^{\beta}, P_s(Y^{\beta}|X_s^{\beta})\} \cup \{Y^{\beta}, P_t(Y^{\beta}|X_t^{\beta})\}$ , where $D_s^\beta = \{X_s^\beta, P(X_s^\beta)\}$ is the source domain and $D_t^\beta = \{X_t^\beta, P(X_t^\beta)\}$ is the target domain. Note that the source-domain samples $(X_s^\alpha$ and $X_{s}^{\beta})$ lie in the same sample space; the same holds in the target domain. In the example of Fig. 1, while the source-domain data $X_{s}^{\alpha} = MNIST$ and $X_{s}^{\beta} = EMNIST$ are grayscale images, the target-domain data $X_{t}^{\alpha} = MNIST-M$ and $X_{t}^{\beta} = EMNIST-M$ are color images.
+
+In this work, we define two corresponding samples from the source domain and the target domain as paired samples. In most cases, two paired samples are different views of the same object. For example, a grayscale image in MNIST and its corresponding color image in MNIST-M are paired samples in Fig. 1; the depth image and RGB image of the same scene are also paired samples. While the similarity between paired samples is determined by the object itself, their difference is mainly introduced by the domain shift. Our work only assumes that such correspondences exist between dual-domain samples; the correspondences in the $IrT$ need not be known.
+
+For correlation analysis between paired samples, we train CoGANs to capture the joint distribution of source-domain and target-domain data. As both $X_{s}^{\beta}$ and $X_{t}^{\beta}$ are available, we can easily train CoGAN- $IrT$ (Fig. 2 (a)) for the $IrT$ using the standard method [21]. The main difficulty lies in training CoGAN-ToI (Fig. 2 (b)) for the $ToI$ , as the target-domain data $X_{t}^{\alpha}$ is not available. To tackle this problem, we propose two kinds of supervisory information for CoGAN-ToI training, namely domain shift preservation and co-training classifier consistency.
+
+To ease transfer across tasks, we define the domain shift as the distribution of the element-wise difference between paired samples in the representation space. We can learn the domain shift from CoGAN- $IrT$ , which carries the correlation between the two domains, by varying the input noise $z^{\beta}$ . After that, we train CoGAN-ToI and enforce the representation difference between paired samples of the $ToI$ to follow the distribution learned in CoGAN- $IrT$ by maximizing the loss of a task classifier. Fig. 2 (c) visualizes the task classifier, which aims to identify the task label of the representation difference. In this way, the domain shift is transferred from the $IrT$ to the $ToI$ .
+
+To better explore the unseen target domain of the $ToI$ , we also build two co-training classifiers (Fig. 2 (d)) and use their consistency to guide the training procedure of CoGAN-ToI. By encouraging the weights of the two classifiers to differ as much as possible, we aim to analyze data samples from distinct views. The classifiers are trained to: (i) predict the labels of $X_{s}^{\alpha}$ ; (ii) produce consistent predictions when receiving $X_{s}^{\beta}$ and $X_{t}^{\beta}$ ; and (iii) produce different predictions when receiving samples not involved with the $ToI$ or $IrT$ . Thus, we can use the consistency of these two classifiers to evaluate whether a sample is involved with the two tasks. We guide the training procedure of CoGAN-ToI to synthesize $X_{t}^{\alpha}$ such that the classifiers also produce consistent predictions when receiving their representations.
+
+# 4.3 Training
+
+In CoGAN-IrT (Fig. 2 (a)), the sharing layers are $P_g^\beta$ and $P_d^\beta$ , and the non-sharing layers are $S_g^\beta$ , $T_g^\beta$ , $S_d^\beta$ , and $T_d^\beta$ . The components of CoGAN-ToI are denoted in a similar way in Fig. 2 (b). For simplicity, we use $R_l(x)$ to denote the lower-level representation of sample $x$ produced by the non-sharing layers and $R_h(x)$ to denote the higher-level representation produced by the sharing layers. Note that the representation extraction procedures $R_l(\cdot)$ and $R_h(\cdot)$ vary with task and domain.
+
+# Domain shift
+
+We train CoGAN-IrT on the dual-domain samples $(X_{s}^{\beta}$ and $X_{t}^{\beta})$ and let it carry the correlation between the two domains. The CoGAN-IrT can then synthesize a set of paired samples for the $IrT$ . For two paired samples $(x_{s}^{\beta}\in X_{s}^{\beta},x_{t}^{\beta}\in X_{t}^{\beta})$ , we characterize their shift by the element-wise difference between their representations in a sharing layer, i.e., $\delta_h^\beta = R_h(x_t^\beta)\ominus R_h(x_s^\beta)$ . We then define the domain shift as the distribution of $\delta_h^\beta$ , i.e., $p_{\delta_h^\beta}$ . Specifically, we can obtain a set of $\{\delta_h^\beta\}$ by feeding CoGAN-IrT with different values of the input noise $z^{\beta}$ .
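Computing the shift samples is straightforward once paired representations are available. The sketch below also adds a Gaussian (mean, covariance) summary of $p_{\delta_h^\beta}$ purely for inspection; the method itself never fits $p_{\delta_h^\beta}$ explicitly but compares the two tasks' deltas through the task classifier:

```python
import numpy as np

def shift_samples(R_h_src, R_h_tgt):
    """One delta per paired sample: delta = R_h(x_t) - R_h(x_s),
    the element-wise difference the paper writes with the ⊖ operator.
    Rows index pairs; columns index representation dimensions."""
    return np.asarray(R_h_tgt) - np.asarray(R_h_src)

def shift_summary(deltas):
    """A crude Gaussian summary (mean vector, covariance matrix) of the
    empirical shift distribution -- for inspecting the learned shift only."""
    deltas = np.asarray(deltas)
    return deltas.mean(axis=0), np.cov(deltas, rowvar=False)
```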
+
+# Co-training classifiers
+
+The two co-training classifiers (denoted as $clf_{1}$ and $clf_{2}$ ) take the representations (i.e., $R_{l}(X_{s}^{\beta})$ , $R_{l}(X_{t}^{\beta})$ , and $R_{l}(X_{s}^{\alpha})$ ) as input. With $R_{l}(x)$ as input, the classifier $clf_{i}$ ( $i = 1,2$ ) produces a $c$ -dimensional vector $v_{i}(x)$ , where $c$ denotes the number of categories in $X_{s}^{\alpha}$ . We minimize the following loss to train the classifiers:
+
+$$
+L(clf_1, clf_2) = \lambda_w L_w(w_1, w_2) + \lambda_{acc} L_{cls}(X_s^{\alpha}) - \lambda_{con} L_{con}(X^{\beta}) + \lambda_{diff} L_{diff}(\tilde{X}), \tag{3}
+$$
+
+where $L_{w}$ measures the similarity between the two classifiers, $L_{cls}$ denotes the loss for classifying the labeled source-domain samples of the $ToI$ , $L_{con}$ assesses the consistency of the output scores when receiving the dual-domain samples of the $IrT$ (i.e., $X_{s}^{\beta}$ and $X_{t}^{\beta}$ ) as input, and $L_{diff}$ assesses the consistency when receiving samples $\tilde{X}$ that are not related to the $IrT$ .
+
+As in standard co-training methods [31], we expect the two classifiers to have diverse parameters so that they can analyze the inputs from different views. In this work, we implement these two classifiers with the same neural network structure and assess their similarity by the cosine distance between the parameters:
+
+$$
+L_w = \frac{w_1^T w_2}{\|w_1\| \, \|w_2\|}, \tag{4}
+$$
+
+where $w_{i}$ denotes the vectorized parameters of $clf_{i}$ .
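Eq. (4) is a plain cosine similarity over the flattened parameter vectors, which a few lines of NumPy make concrete:

```python
import numpy as np

def weight_similarity(w1, w2):
    """Eq. (4): cosine similarity of the vectorized classifier parameters.
    It enters Eq. (3) with weight lambda_w, so minimizing the total loss
    pushes the two co-training classifiers toward distinct views."""
    w1, w2 = np.ravel(w1), np.ravel(w2)
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))
```

Since Eq. (3) is minimized, the term $\lambda_w L_w$ drives the two parameter vectors toward orthogonality.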
+
+With the labeled source-domain data in the $ToI$ , we can easily formulate a multi-class classification problem and use the soft-max loss to define the second term of Eq. (3) as follows:
+
+$$
+L _ {c l s} = - \sum_ {x _ {s} ^ {\alpha} \in X _ {s} ^ {\alpha}} \sum_ {i = 1} ^ {2} \sum_ {j = 1} ^ {c} v _ {i} ^ {j} \left(x _ {s} ^ {\alpha}\right) l ^ {j} \left(x _ {s} ^ {\alpha}\right), \tag {5}
+$$
+
+where $v_{i}^{j}(x_{s}^{\alpha})$ is the $j$ th element of the prediction $v_{i}(x_{s}^{\alpha})$ , and the binary value $l^{j}(x_{s}^{\alpha})$ denotes whether $x_{s}^{\alpha}$ belongs to the $j$ th class. This term regularizes the classifiers to produce semantically meaningful vectors.
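A literal NumPy reading of Eq. (5), with one-hot labels. Note that taken verbatim the loss is linear in the predictions, so in practice $v_i$ would hold log-probabilities (the soft-max loss mentioned above); the sketch only mirrors the formula as printed:

```python
import numpy as np

def cls_loss(v1, v2, labels, c):
    """Eq. (5) as written: L_cls = -sum_x sum_i sum_j v_i^j(x) * l^j(x),
    where v_i(x) is classifier i's c-dimensional prediction for a labeled
    source sample x of the ToI and l(x) is the one-hot label vector."""
    onehot = np.eye(c)[np.asarray(labels)]          # one row per sample
    return float(-np.sum(v1 * onehot) - np.sum(v2 * onehot))
```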
+
+Unlike $X_{s}^{\alpha}$ in the $ToI$ , the dual-domain data in the $IrT$ are unlabeled, so predicting their true labels is impossible. To gain supervisory signals from these label-free data, we restrict the two classifiers to produce consistent predictions. The consistency for a given sample is measured by the dot product of its two predictions. Thus, we define the third term in Eq. (3) as:
+
+$$
+L _ {c o n} = \sum_ {x ^ {\beta} \in X _ {s} ^ {\beta} \cup X _ {t} ^ {\beta}} v _ {1} (x ^ {\beta}) \cdot v _ {2} (x ^ {\beta}). \tag {6}
+$$
+
+The last term $L_{diff}$ regularizes the classifiers to produce different predictions when receiving samples that are not related to the two tasks. It is defined in the same way as $L_{con}$ ; the only difference lies in the input $\tilde{X}$ . The samples in $\tilde{X}$ come from two sources: (i) samples in public datasets, e.g., ImageNet [4]; and (ii) corrupted images obtained by replacing a patch of $x_s^\beta$ , $x_t^\beta$ , or $x_s^\alpha$ with random noise.
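Both $L_{con}$ (Eq. (6)) and $L_{diff}$ reduce to the same dot-product agreement over prediction pairs; only the input set and the sign in Eq. (3) differ. A minimal sketch:

```python
import numpy as np

def prediction_agreement(v1, v2):
    """Dot-product agreement of the two classifiers' predictions, summed
    over samples (one row per sample). Over IrT samples this is L_con in
    Eq. (6); over the unrelated samples X~ the same quantity is L_diff."""
    return float(np.sum(np.asarray(v1) * np.asarray(v2)))
```

In Eq. (3), agreement is rewarded on $IrT$ samples (via $-\lambda_{con}L_{con}$) and penalized on the unrelated samples $\tilde{X}$ (via $+\lambda_{diff}L_{diff}$).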
+
+In principle, we can use the consistency of these two classifiers to assess whether a sample is properly involved with the $IrT$ or $ToI$ in the two domains. Thus, we can guide the training procedure of CoGAN-ToI such that the synthesized $X_{t}^{\alpha}$ satisfies $v_{1}(X_{t}^{\alpha}) = v_{2}(X_{t}^{\alpha})$ , and take this as a supervisory signal complementary to domain shift preservation.
+
+# CoGAN-ToI
+
+At this stage, we train CoGAN-ToI to capture the joint distribution of paired samples in the $ToI$ . By correlating the two domains, a well-trained CoGAN-ToI is able to synthesize the unavailable target-domain data. We impose three constraints to train CoGAN-ToI: (i) one branch captures the distribution of $X_{s}^{\alpha}$ ; (ii) the domain shift is shared by the two tasks, i.e., $p_{\delta_h^\beta} = p_{\delta_h^\alpha}$ , where $\delta_h^\alpha = R_h(x_t^\alpha) \ominus R_h(x_s^\alpha)$ ; and (iii) the co-training classifiers produce consistent predictions for the synthesized sample $x_{t}^{\alpha}$ , i.e., $v_{1}(x_{t}^{\alpha}) = v_{2}(x_{t}^{\alpha})$ .
+
+This work trains the two branches of CoGAN-ToI separately, unlike the standard method in [21] that trains them simultaneously. To satisfy the first constraint, we consider the source-domain branch (consisting of $P_{g}^{\alpha}$ , $S_{g}^{\alpha}$ , $S_{d}^{\alpha}$ , and $P_{d}^{\alpha}$ ) as an independent GAN and train it using the available $X_{s}^{\alpha}$ .
+
+Though involved in different tasks, both $X_{t}^{\beta}$ and $X_{t}^{\alpha}$ are images from the target domain, and thus they are composed of the same set of low-level details. To mimic the processing learned in the $IrT$ , we initialize the non-sharing components of CoGAN-ToI in the target domain as $T_{g}^{\beta} \rightarrow T_{g}^{\alpha}$ and $T_{d}^{\beta} \rightarrow T_{d}^{\alpha}$ .
+
+After initialization, we use the second and third constraints to train the non-sharing components $(T_{g}^{\alpha}$ and $T_{d}^{\alpha})$ for the target domain and fine-tune the sharing components $(P_g^\alpha$ and $P_d^\alpha)$ . Specifically, we minimize the following loss function:
+
+$$
+V \left(P _ {g} ^ {\alpha}, T _ {g} ^ {\alpha}, T _ {d} ^ {\alpha}, P _ {d} ^ {\alpha}\right) \equiv \lambda_ {c o n} ^ {\alpha} \sum_ {x _ {t} ^ {\alpha} = g _ {t} ^ {\alpha} \left(z ^ {\alpha}\right)} v _ {1} \left(x _ {t} ^ {\alpha}\right) \cdot v _ {2} \left(x _ {t} ^ {\alpha}\right) - L _ {c l f} \left(\delta_ {h} ^ {\alpha}, \delta_ {h} ^ {\beta}\right), \tag {7}
+$$
+
+where $g_{t}^{\alpha} = P_{g}^{\alpha} + T_{g}^{\alpha}$ is the target-domain generator, i.e., the composition of the sharing and non-sharing components. While the first term assesses how well the two classifiers agree with each other, the second term assesses how distinguishable $\delta_h^\alpha$ is from $\delta_h^\beta$ .
+
+With CoGAN-ToI, we train a classifier for the synthesized target-domain data in three steps. First, we train a classifier $\varPhi_s(\cdot)$ on the labeled source-domain data. Then, we synthesize a set of paired samples $(x_{s}^{\alpha},x_{t}^{\alpha})$ and use $\varPhi_s(\cdot)$ to predict their labels. Finally, we train a classifier $\varPhi_t(\cdot)$ on $x_{t}^{\alpha}$ under the constraint $\varPhi_s(x_s^\alpha)=\varPhi_t(x_t^\alpha)$ and evaluate our method using the average accuracy.
+
+# 5 Experiments
+
+# 5.1 Adaptation Across Synthetic Domains
+
+We conduct experiments on four grayscale image datasets: MNIST $(D_M)$ [25], Fashion-MNIST $(D_F)$ [12], NIST $(D_N)$ [10], and EMNIST $(D_E)$ [3]. Both MNIST [25] and Fashion-MNIST have 70,000 images from 10 classes. NIST is imbalanced and has more than 40k images from 52 classes. EMNIST has more than 145k images from 26 classes.
+
+These four datasets are in the gray domain ( $G$ -dom). We create three more domains for evaluation, i.e., the colored domain ( $C$ -dom), the edge domain ( $E$ -dom), and the negative domain ( $N$ -dom). The $C$ -dom is created using the method in [6], i.e., combining an image with a random color patch from BSDS500 [1]. We apply the Canny detector to create the $E$ -dom and the operation $I_{n} = 255 - I$ to create the $N$ -dom.
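The $N$-dom transform is exactly $I_n = 255 - I$; for the $C$-dom we sketch the MNIST-M-style blend of [6] as a per-channel absolute difference with a BSDS500 patch — our reading of that construction, not a quote of the paper's code (the $E$-dom would additionally need an edge detector such as OpenCV's `cv2.Canny`):

```python
import numpy as np

def to_negative(img):
    """N-dom: I_n = 255 - I on a uint8 grayscale image."""
    return (255 - img.astype(np.int16)).astype(np.uint8)

def to_colored(gray, patch):
    """C-dom in the style of MNIST-M [6]: blend a grayscale image with a
    color patch (H x W x 3) via per-channel absolute difference. The blend
    rule is our reading of the construction in [6]."""
    g = gray.astype(np.int16)[..., None]            # H x W -> H x W x 1
    return np.abs(patch.astype(np.int16) - g).astype(np.uint8)
```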
+
+# Implementation details
+
+To learn a transferable domain shift across tasks, the two CoGANs (i.e., CoGAN-IrT and CoGAN-ToI) have the same network structure. The two branches inside these CoGANs also share the same structure, and both generators and discriminators have seven layers. We transform the output of the last convolutional layer of the discriminator into a column vector before feeding it into a single sigmoid unit. The last two layers in the generators and the first two layers in the discriminators are non-sharing layers for low-level feature processing.
+
+The task classifier has four convolutional layers and identifies the task label of its input. We vary the input noise $z^{\beta}$ of CoGAN-IrT to extract $R_{h}(x)$ and thus obtain a set of $\delta_h^\beta$ . The parameters of the task classifier are initialized from a zero-centered normal distribution. We adopt stochastic gradient descent (SGD) for optimization, with the batch size set to 128 and the learning rate set to 0.0002.
+
+The co-training classifiers are implemented with three fully connected layers of 200, 50, and $c$ units, respectively, where $c$ denotes the number of categories in $X_{s}^{\alpha}$ . We set the hyper-parameters as $\lambda_w = 0.01$ , $\lambda_{acc} = 1.0$ , $\lambda_{con} = 0.5$ , and $\lambda_{diff} = 0.5$ .
+
+The source-domain branch in CoGAN-ToI is first trained independently using the available data $X_{s}^{\alpha}$ . Based on the two supervisory signals, we use backpropagation to train $T_{g}^{\alpha}$ and $T_{d}^{\alpha}$ ; simultaneously, $P_{g}^{\alpha}$ and $P_{d}^{\alpha}$ are fine-tuned. In our experiments, we train the two branches of CoGAN-ToI in an iterative manner to obtain the best results.
+
+# Results
+
+With the above four datasets, we conduct experiments on ten different pairs of $(IrT, ToI)$ . Note that $D_N$ and $D_E$ constitute the same task, as both consist of letter images. We test four pairs of source and target domains: $(G\text{-dom}, C\text{-dom})$ , $(G\text{-dom}, E\text{-dom})$ , $(C\text{-dom}, G\text{-dom})$ , and $(N\text{-dom}, G\text{-dom})$ .
+
+We take two existing methods as benchmarks: ZDDA [28] and CoCoGAN [36]. In addition, we adapt ZDDA by introducing a domain classifier so that it can learn from non-corresponding samples, and denote it as $\mathrm{ZDDA}_{dc}$ . We also conduct an ablation study by creating the baseline $CTCC$ , which uses only the co-training classifier consistency to train CoGAN-ToI.
+
+Table 1. The accuracy of different methods with (source,target) = (G-dom,C-dom)
+
+| ToI | DM | DM | DM | DF | DF | DF | DN | DN | DE | DE |
| IrT | DF | DN | DE | DM | DN | DE | DM | DF | DM | DF |
| ZDDA | 73.2 | 92.0 | 94.8 | 51.6 | 43.9 | 65.3 | 34.3 | 21.9 | 71.2 | 47.0 |
| CoCoGAN | 78.1 | 92.4 | 95.6 | 56.8 | 56.7 | 66.8 | 41.0 | 44.9 | 75.0 | 54.8 |
| ZDDAdc | 69.3 | 79.6 | 80.7 | 50.6 | 42.4 | 62.0 | 29.1 | 20.2 | 49.8 | 46.5 |
| CTCC | 68.5 | 74.9 | 77.6 | 42.0 | 52.9 | 60.9 | 37.0 | 43.6 | 47.3 | 45.2 |
| Ours | 81.2 | 93.3 | 95.0 | 57.4 | 58.7 | 62.0 | 44.6 | 45.5 | 72.4 | 58.9 |
+
+Table 2. The accuracy of different methods with (source,target) = (G-dom, E-dom)
+
+| ToI | DM | DM | DM | DF | DF | DF | DN | DN | DE | DE |
| IrT | DF | DN | DE | DM | DN | DE | DM | DF | DM | DF |
| ZDDA | 72.5 | 91.5 | 93.2 | 54.1 | 54.0 | 65.8 | 42.3 | 28.4 | 73.6 | 50.7 |
| CoCoGAN | 79.6 | 94.9 | 95.4 | 61.5 | 57.5 | 71.0 | 48.0 | 36.3 | 77.9 | 58.6 |
| ZDDAdc | 66.5 | 83.3 | 84.7 | 49.3 | 50.4 | 58.0 | 42.2 | 31.6 | 65.0 | 41.2 |
| CTCC | 65.5 | 73.9 | 80.5 | 44.0 | 40.8 | 37.3 | 40.0 | 31.4 | 57.7 | 48.2 |
| Ours | 81.4 | 93.5 | 96.3 | 63.2 | 58.7 | 72.4 | 49.9 | 38.6 | 78.2 | 61.1 |
+
+As seen in Tabs. 1-4, our method achieves the best performance on average. Taking $D_{E}$ classification as an example, our method outperforms ZDDA [28] by a margin of $8.9\%$ and CoCoGAN [36] by a margin of $4.1\%$ when the $IrT$ is $D_{F}$ in Tab. 1. On average, our method performs $7.38\%$ better than ZDDA and $0.69\%$ better than CoCoGAN in Tab. 1.
+
+Table 3. The accuracy of different methods with (source,target) = (C-dom,G-dom)
+
+| ToI | DM | DM | DM | DF | DF | DF | DN | DN | DE | DE |
| IrT | DF | DN | DE | DM | DN | DE | DM | DF | DM | DF |
| ZDDA | 67.4 | 85.7 | 87.6 | 55.1 | 49.2 | 59.5 | 39.6 | 23.7 | 75.5 | 52.0 |
| CoCoGAN | 73.2 | 89.6 | 94.7 | 61.1 | 50.7 | 70.2 | 47.5 | 57.7 | 80.2 | 67.4 |
| ZDDAdc | 61.5 | 76.7 | 79.9 | 51.2 | 46.1 | 53.4 | 31.3 | 20.4 | 61.2 | 42.2 |
| CTCC | 62.1 | 76.9 | 68.6 | 47.2 | 45.6 | 57.6 | 27.5 | 33.6 | 58.0 | 49.9 |
| Ours | 73.7 | 91.0 | 93.4 | 62.4 | 53.5 | 71.5 | 50.6 | 58.1 | 83.5 | 70.9 |
+
+Table 4. The accuracy of different methods with (source,target) = (N-dom, G-dom)
+
+| ToI | DM | DM | DM | DF | DF | DF | DN | DN | DE | DE |
| IrT | DF | DN | DE | DM | DN | DE | DM | DF | DM | DF |
| ZDDA | 78.5 | 90.7 | 87.6 | 56.6 | 57.1 | 67.1 | 34.1 | 39.5 | 67.7 | 45.5 |
| CoCoGAN | 80.1 | 92.8 | 93.6 | 63.4 | 61.0 | 72.8 | 47.0 | 43.9 | 78.8 | 58.4 |
| ZDDAdc | 68.4 | 79.8 | 82.5 | 48.1 | 46.2 | 64.6 | 28.6 | 34.4 | 61.8 | 36.2 |
| CTCC | 68.4 | 80.0 | 80.2 | 50.1 | 55.1 | 61.3 | 37.6 | 33.9 | 56.1 | 33.9 |
| Ours | 82.6 | 94.6 | 95.8 | 67.0 | 68.2 | 77.9 | 51.1 | 44.2 | 79.7 | 62.2 |
+
+In each of Tabs. 1-4, the proposed method improves on CTCC by more than $10\%$ on average, which means that domain shift transfer is useful in the training procedure of CoGAN-ToI. Among the three tasks, digit image classification is the easiest. Out of all settings, the most successful one transfers knowledge from the $G$ -dom to the $E$ -dom with $D_{E}$ as the $IrT$ and $D_{M}$ as the $ToI$ ; in this case, our method achieves an accuracy of $96.3\%$ . For $D_{M}$ classification in the $G$ -dom, our method achieves an accuracy of $95.8\%$ with $D_{E}$ as the $IrT$ and the $N$ -dom as the source domain, outperforming techniques (including $89.5\%$ in [11] and $94.2\%$ in [30]) that rely on the availability of target-domain data in the training stage. With CoGAN-ToI, we not only derive models for the unseen target domain but also synthesize the data itself. Fig. 3 visualizes the generated images in the $C$ -dom and $E$ -dom with the $G$ -dom as the source domain.
+
+
+Fig. 3. The generated images in the $C$ -dom and $E$ -dom.
+
+Fig. 4. An example. The $ToI$ represents a subset of the categories (square and triangle) and the $IrT$ represents the rest (circle and ellipse).
+
+# 5.2 Adaptation in Public Dataset
+
+We also evaluate our method on Office-Home [34], which has four different domains, i.e., Art (Ar), Clipart (Cl), Product (Pr), and Real-world (Rw). It has more than 15k images from 65 categories.
+
+As it is difficult to identify an analogous set for this dataset, we evaluate our method on adaptation across subsets. Given a pair of domains, we take a subset of the categories as the $ToI$ and the rest as the $IrT$ . An example is shown in Fig. 4, where the $ToI$ represents the classification of two categories (square and triangle) and the $IrT$ represents the classification of the other two categories (circle and ellipse). Here, we set the parameters as $\lambda_w = 0.01$ , $\lambda_{acc} = 1$ , and $\lambda_{con} = \lambda_{diff} = 0.1$ .
+
+Table 5. The accuracy of different methods on Office-Home
+
+| Source | Ar | Ar | Ar | Cl | Cl | Cl | Pr | Pr | Pr | Rw | Rw | Rw |
| Target | Cl | Pr | Rw | Ar | Pr | Rw | Ar | Cl | Rw | Ar | Cl | Pr |
| ZDDAdc | 53.2 | 61.4 | 68.8 | 67.4 | 57.0 | 68.4 | 60.9 | 40.6 | 62.4 | 68.1 | 43.4 | 50.3 |
| CoCoGAN | 62.2 | 69.5 | 74.5 | 66.7 | 74.0 | 66.4 | 57.6 | 53.4 | 71.7 | 69.2 | 51.3 | 65.8 |
| CTCC | 55.7 | 61.5 | 66.5 | 66.8 | 64.6 | 65.2 | 56.3 | 46.6 | 61.6 | 64.3 | 43.7 | 57.7 |
| Ours | 62.7 | 71.9 | 76.3 | 72.6 | 75.1 | 73.9 | 70.3 | 60.8 | 74.8 | 72.2 | 61.4 | 72.2 |
+
+Let $N_{\alpha}$ denote the number of categories of the $ToI$ . We fix $N_{\alpha}$ to 10 and conduct experiments on all 12 possible pairs of source and target domains. As seen in Tab. 5, our method achieves the best performance in all cases, which indicates that it is applicable to a broad range of applications. Our method beats both ZDDA and CoCoGAN by a margin larger than $10\%$ when the source domain is $Rw$ and the target domain is $Cl$ .
+
+Table 6. The variation of accuracy against the parameter $\lambda_{con}^{\alpha}$
+
| $\lambda_{con}^{\alpha}$ | ar→ cl | ar→ pr | ar→ rw | cl→ ar | cl→ pr | cl→ rw |
| 0.001 | 59.3 | 68.5 | 73.3 | 65.7 | 68.3 | 69.3 |
| 0.005 | 61.6 | 70.3 | 75.7 | 70.6 | 74.6 | 71.1 |
| 0.01 | 62.7 | 71.9 | 76.3 | 72.6 | 75.1 | 73.9 |
| 0.02 | 62.1 | 71.0 | 74.7 | 72.1 | 76.1 | 72.8 |
| 0.1 | 53.0 | 64.8 | 66.1 | 60.4 | 60.4 | 63.5 |
+
+We use the parameter $\lambda_{con}^{\alpha} = 0.01$ to balance the two terms in Eq. (7). Generally, the CTCC signal mainly regularizes the training of $T_{g}^{\alpha}$ , which processes the low-level details. The richer the details in $X_{t}^{\beta}$ , the more knowledge is available for training and the more transferable $T_{g}^{\alpha}$ is across tasks. Thus, we set a smaller value for $\lambda_{con}^{\alpha}$ when richer details are included in $X_{t}^{\beta}$ . Tab. 6 lists the accuracy of our method on Office-Home with different values of $\lambda_{con}^{\alpha}$ . As seen, our method performs well when $\lambda_{con}^{\alpha} \in [0.005, 0.01]$ .
+
+Let $N_{s}$ be the number of samples in $X = \{X_s^\alpha, X_s^\beta, X_t^\beta\}$ . We use $2N_{s}$ supplementary samples to compute $L_{diff}$ , where (i) half are randomly cropped from ImageNet and (ii) half are obtained by replacing patches of training samples with random noise. Tab. 7 lists the performance of our method with different numbers of supplementary samples. As seen, $2N_{s}$ supplementary samples are sufficient for model training.
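The second source of supplementary samples can be sketched as overwriting a random square patch with uniform noise; the $8 \times 8$ patch size here is an assumption, since the paper does not state it:

```python
import numpy as np

def corrupt_patch(img, patch=8, rng=None):
    """Supplementary sample for L_diff: overwrite a random square patch of
    a grayscale (2-D uint8) training image with uniform noise. The patch
    size is an assumed choice, not taken from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    h, w = out.shape[:2]
    y = int(rng.integers(0, h - patch + 1))
    x = int(rng.integers(0, w - patch + 1))
    out[y:y + patch, x:x + patch] = rng.integers(0, 256, size=(patch, patch),
                                                 dtype=out.dtype)
    return out
```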
+
+Table 7. The variation of accuracy against number of supplementary samples
+
+| Num | ar→ cl | ar→ pr | ar→ rw | cl→ ar | cl→ pr | cl→ rw |
| 0.8N | 60.3 | 67.5 | 73.4 | 68.8 | 67.0 | 70.7 |
| N | 61.3 | 70.7 | 73.6 | 70.3 | 73.2 | 71.0 |
| 1.6N | 62.5 | 71.5 | 76.0 | 71.5 | 74.3 | 73.5 |
| 2N | 62.7 | 71.9 | 76.3 | 72.6 | 75.1 | 73.9 |
| 4N | 62.7 | 71.9 | 76.3 | 72.7 | 75.1 | 73.9 |
+
+# 6 Conclusion and Future Work
+
+This paper proposes a new method for ZSDA based on the hypothesis that different tasks may share the domain shift between a given pair of domains. We learn the domain shift from one task and transfer it to the other by bridging two CoGANs with a task classifier. Our method takes the domain shift as the distribution of the representation difference between paired samples and transfers it across CoGANs. It is capable of not only learning models for the unseen target domain but also generating target-domain data samples. Experimental results on six datasets show the effectiveness of our method in transferring knowledge among images in different domains and tasks.
+
+The proposed method learns the shift between domains and transfers it across tasks. This strategy makes our method applicable only when a "large" shift exists across domains, such as (RGB, grayscale) or (clipart, art). Thus, our method cannot perform well on datasets where the domain shift is "small", such as VLSC and Office-31. In the future, we will train a classifier to determine whether a correspondence exists between a source-domain sample and a synthesized target-domain sample. Such a classifier can guide the training procedure of CoGAN even when only samples from a single domain are available.
+
+# Acknowledgment
+
+The authors wish to acknowledge the financial support from: (i) Natural Science Foundation China (NSFC) under the Grant no. 61620106008; (ii) Natural Science Foundation China (NSFC) under the Grant no. 61802266.
+
+# References
+
+1. Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 898-916 (2011)
+2. Chen, Y., Lin, Y., Yang, M., Huang, J.: Crdoco: Pixel-level domain transfer with cross-domain consistency. In: CVPR. pp. 1791-1800 (2019)
+3. Cohen, G., Afshar, S., Tapson, J., van Schaik, A.: EMNIST: an extension of MNIST to handwritten letters. arXiv (2017)
+4. Deng, J., Dong, W., Socher, R., Li, L., Li, K., Li, F.: Imagenet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)
+5. Ding, Z., Fu, Y.: Deep domain generalization with structured low-rank constraint. IEEE Transactions on Image Processing 27(1), 304-313 (2018)
+6. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: ICML. vol. 37, pp. 1180-1189 (2015)
+7. Ghassami, A., Kiyavash, N., Huang, B., Zhang, K.: Multi-domain causal structure learning in linear systems. In: NeurIPS. pp. 6269-6279 (2018)
+8. Ghifary, M., Kleijn, W.B., Zhang, M., Balduzzi, D.: Domain generalization for object recognition with multi-task autoencoders. In: ICCV (2015)
+9. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: NIPS. pp. 2672-2680 (2014)
+10. Grother, P., Hanaoka, K.: NIST special database 19 handprinted forms and characters database. National Institute of Standards and Technology (2016)
+11. Haeusser, P., Frerix, T., Mordvintsev, A., Cremers, D.: Associative domain adaptation. In: ICCV. pp. 2784-2792 (2017)
+12. Han, X., Kashif, R., Roland, V.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. CoRR abs/1708.07747 (2017)
+13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770-778 (2016)
+14. Khosla, A., Zhou, T., Malisiewicz, T., Efros, A.A., Torralba, A.: Undoing the damage of dataset bias. In: ECCV (2012)
+15. Kodirov, E., Xiang, T., Fu, Z., Gong, S.: Unsupervised domain adaptation for zero-shot learning. In: ICCV (2015)
+16. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS. pp. 1106-1114 (2012)
+17. Kumagai, A., Iwata, T.: Zero-shot domain adaptation without domain semantic descriptors. CoRR abs/1807.02927 (2018)
+18. Li, D., Yang, Y., Song, Y.Z., Hospedales, T.M.: Deeper, broader and artier domain generalization. In: ICCV (2017)
+19. Li, D., Yang, Y., Song, Y.Z., Hospedales, T.M.: Learning to generalize: Meta-learning for domain generalization. In: AAAI (2018)
+20. Li, Y., Tian, X., Gong, M., Liu, Y., Liu, T., Zhang, K., Tao, D.: Deep domain generalization via conditional invariant adversarial networks. In: ECCV (2018)
+21. Liu, M.Y., Tuzel, O.: Coupled generative adversarial networks. In: NIPS (2016)
+22. Long, M., Zhu, H., Wang, J., Jordan, M.I.: Unsupervised domain adaptation with residual transfer networks. In: NIPS. pp. 136-144 (2016)
+23. Lopez-Paz, D., Hernández-Lobato, J., Schölkopf, B.: Semi-supervised domain adaptation with non-parametric copulas. In: NIPS (2012)
+
+24. Luo, Y., Zheng, L., Guan, T., Yu, J., Yang, Y.: Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In: CVPR (2019)
+25. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE (1998)
+26. Muandet, K., Balduzzi, D., Scholkopf, B.: Domain generalization via invariant feature representation. In: ICML (2013)
+27. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE TKDE 22(10), 1345-1359 (2010)
+28. Peng, K.C., Wu, Z., Ernst, J.: Zero-shot deep domain adaptation. In: ECCV (2018)
+29. Pinheiro, P.O.: Unsupervised domain adaptation with similarity learning. In: CVPR (2018)
+30. Saito, K., Ushiku, Y., Harada, T.: Asymmetric tri-training for unsupervised domain adaptation. In: ICML. pp. 2988-2997 (2017)
+31. Saito, K., Watanabe, K., Ushiku, Y., Harada, T.: Maximum classifier discrepancy for unsupervised domain adaptation. In: CVPR. pp. 3723-3732 (2018)
+32. Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR (2011)
+33. Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., Darrell, T.: Deep domain confusion: Maximizing for domain invariance. CoRR abs/1412.3474 (2014)
+34. Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: CVPR. pp. 5385-5394 (2017)
+35. Wang, J., Jiang, J.: An unsupervised deep learning framework via integrated optimization of representation learning and gmm-based modeling. In: ACCV. vol. 11361, pp. 249-265 (2018)
+36. Wang, J., Jiang, J.: Conditional coupled generative adversarial networks for zero-shot domain adaptation. In: ICCV (2019)
+37. Wang, J., Jiang, J.: Sa-net: A deep spectral analysis network for image clustering. Neurocomputing 383, 10-23 (2020)
+38. Wang, J., Wang, G.: Hierarchical spatial sum-product networks for action recognition in still images. IEEE Trans. Circuits Syst. Video Techn. 28(1), 90-100 (2018)
+39. Wang, J., Wang, Z., Tao, D., See, S., Wang, G.: Learning common and specific features for RGB-D semantic segmentation with deconvolutional networks. In: ECCV. pp. 664-679 (2016)
+40. Yan, H., Ding, Y., Li, P., Wang, Q., Xu, Y., Zuo, W.: Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In: CVPR. pp. 945-954 (2017)
+41. Yang, Y., Hospedales, T.: Zero-shot domain adaptation via kernel regression on the grassmannian (2015). https://doi.org/10.5244/C.29.DIFFCV.1
+42. Yao, T., Pan, Y., Ngo, C.W., Li, H., Tao, M.: Semi-supervised domain adaptation with subspace learning for visual recognition. In: CVPR (2015)
+43. Zhong, Z., Zheng, L., Luo, Z., Li, S., Yang, Y.: Invariance matters: Exemplar memory for domain adaptive person re-identification. In: CVPR (2019)
+44. Zhu, P., Wang, H., Saligrama, V.: Learning classifiers for target domain with limited or no labels. In: ICML. pp. 7643-7653 (2019)
\ No newline at end of file
diff --git a/adversariallearningforzeroshotdomainadaptation/images.zip b/adversariallearningforzeroshotdomainadaptation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9d0d3e6fc182ba183e65eae34575dfebaa873d53
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d9f027c9db5bbf26b39b0417c29f71eeb04a1ed35bd6f73726af20eedc60bd6
+size 421653
diff --git a/adversariallearningforzeroshotdomainadaptation/layout.json b/adversariallearningforzeroshotdomainadaptation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d52cd5893c41eb53988b2e1ddee4fce6dc0ca972
--- /dev/null
+++ b/adversariallearningforzeroshotdomainadaptation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acfad59b47734c0755280234d5b76c869a5942e28691e335f461bd8d33f5c69a
+size 592341
diff --git a/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_content_list.json b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5aed14714e9b6e093acdc21b47d2ae049d141d60
--- /dev/null
+++ b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:605c21043e951ea4302bece2db71976aa4a0cfca4772499eafcd713e553d2edb
+size 96819
diff --git a/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_model.json b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b65a0bd57c8612e7bdfaccac7df10531e9b3d226
--- /dev/null
+++ b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:268cb662bbe22cacb6778bba98c3b12cac570a87084cf6e4695faf08746c8b5e
+size 121038
diff --git a/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_origin.pdf b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dfd551281481463ceb8ebee0d74f521d89eccfa6
--- /dev/null
+++ b/adversarialrankingattackanddefense/4199f2f7-0bc0-411e-b60f-3c02e5c403c2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd5c091308ed7736a25f709129e36ac3c738f292a40553214e780791c1f6d499
+size 1410609
diff --git a/adversarialrankingattackanddefense/full.md b/adversarialrankingattackanddefense/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e531a493aaa9b96b91c9dd9cec01e8aa18b2594
--- /dev/null
+++ b/adversarialrankingattackanddefense/full.md
@@ -0,0 +1,360 @@
+# Adversarial Ranking Attack and Defense
+
+Mo Zhou $^{1}$ , Zhenxing Niu $^{2}$ , Le Wang $^{1*}$ , Qilin Zhang $^{3}$ , and Gang Hua $^{4}$
+
+$^{1}$ Xi'an Jiaotong University, $^{2}$ Alibaba DAMO MIIL, $^{3}$ HERE Technologies, $^{4}$ Wormpex AI Research
+
+Abstract. Deep Neural Network (DNN) classifiers are vulnerable to adversarial attack, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, a defense method is also proposed to improve the ranking system robustness, which can mitigate all the proposed attacks simultaneously. Our adversarial ranking attacks and defense are evaluated on datasets including MNIST, Fashion-MNIST, and Stanford-Online-Products. Experimental results demonstrate that a typical deep ranking system can be effectively compromised by our attacks. Meanwhile, the system robustness can be moderately improved with our defense. Furthermore, the transferable and universal properties of our adversary illustrate the possibility of realistic black-box attack.
+
+# 1 Introduction
+
+Despite the successful application in computer vision tasks such as image classification [31, 21], Deep Neural Networks (DNNs) have been found vulnerable to adversarial attacks. In particular, the DNN's prediction can be arbitrarily changed by just applying an imperceptible perturbation to the input image [69, 17]. Moreover, such adversarial attacks can effectively compromise the state-of-the-art DNNs such as Inception [67, 68] and ResNet [21]. This poses a serious security risk on many DNN-based applications such as face recognition, where recognition evasion or impersonation can be easily achieved [12, 64, 30, 72].
+
+Previous adversarial attacks primarily focus on classification; however, we speculate that DNN-based image ranking systems [3, 6, 70, 29, 52, 15, 35] also suffer from similar vulnerability. Taking image-based product search as an example, a fair ranking system should rank the products according to their visual similarity to the query, as shown in Fig. 1 (row 1). Nevertheless, malicious sellers may attempt to raise the rank of their product by adding perturbation to its image (CA+, row 2), or to lower the rank of a competitor's product (CA-, row 3);
+
+
+Fig. 1. Adversarial ranking attack that can raise or lower the rank of chosen candidates by adversarial perturbations. In Candidate Attack, adversarial perturbation is added to the candidate and its rank is raised (CA+) or lowered (CA-). In Query Attack, adversarial perturbation is added to the query image, and the ranks of chosen candidates are raised (QA+) or lowered (QA-).
+
+Besides, "man-in-the-middle" attackers (e.g., a malicious advertising company) could hijack and imperceptibly perturb the query image in order to promote (QA+, row 4) or impede (QA-, row 5) the sales of specific products.
+
+Unlike classification tasks, where each image is predicted independently, in image ranking the rank of one candidate depends on the query as well as on the other candidates. The relative relations among candidates and queries determine the final ranking order. Therefore, we argue that existing adversarial classification attacks are incompatible with the ranking scenario, and the adversarial ranking attack needs to be studied thoroughly.
+
+In this paper, adversarial ranking attack aims to raise or lower the ranks of some chosen candidates $C = \{c_1, c_2, \ldots, c_m\}$ with respect to a specific query set $Q = \{q_1, q_2, \ldots, q_w\}$ . This can be achieved by either Candidate Attack (CA) or Query Attack (QA). In particular, CA is defined as to raise (abbr. CA+) or lower (abbr. CA-) the rank of a single candidate $c$ with respect to the query set $Q$ by perturbing $c$ itself; while QA is defined as to raise (abbr. QA+) or lower (abbr. QA-) the ranks of a candidate set $C$ with respect to a single query $q$ by perturbing $q$ . Thus, adversarial ranking attack can be achieved by performing CA on each $c \in C$ , or QA on each $q \in Q$ . In practice, the choice of CA or QA depends on the accessibility to the candidate or query respectively, i.e., CA is feasible for modifiable candidate, while QA is feasible for modifiable query.
+
+An effective implementation of these attacks is proposed in this paper. As we know, a typical DNN-based ranking model maps objects (i.e., queries and candidates) to a common embedding space, where the distances among them determine the final ranking order. Predictably, the object's position in the embedding space will be changed by adding a perturbation to it. Therefore, the essence of adversarial ranking attack is to find a proper perturbation, which could push the object to a desired position that leads to the expected ranking order. Specifically, we first represent the expected ranking order as a set of inequalities. Subsequently, a triplet-like objective function is designed according to those inequalities, and combined with Projected Gradient Descent (PGD) to efficiently obtain the desired adversarial perturbation.
+
+In contrast to the proposed attacks, adversarial ranking defense is worth investigating, especially for security-sensitive deep ranking applications. To date, the Madry defense [45] is regarded as the most effective method for classification defense. However, we empirically discovered a primary challenge of diverging training loss when directly adapting such a mechanism for ranking defense, possibly because the generated adversarial examples are too "strong". In addition, such a defense mechanism needs to defend against distinct ranking attacks individually, whereas a generic defense method against all $\mathrm{CA}+$ , $\mathrm{CA}-$ , $\mathrm{QA}+$ and $\mathrm{QA}-$ attacks is preferred.
+
+To this end, a shift-distance based ranking defense is proposed, which can simultaneously defend against all attacks. Note that the position shift of objects in the embedding space is the key to all ranking attacks. Although different attacks prefer distinct shift directions (e.g., $\mathrm{CA+}$ and $\mathrm{CA-}$ often prefer opposite shifting directions), a large shift distance is their common preference. If we can reduce the shift distance of embeddings incurred by adversarial perturbation, all attacks can be defended simultaneously. Specifically, we first propose a shift-distance based ranking attack, which aims to push the objects as far from their original positions as possible. Then, the adversarial examples generated by this attack are used in adversarial training. Experimental results show that our ranking defense converges and moderately improves model robustness.
+
+In addition, our ranking attacks have some good properties for realistic applications. First, our adversary is transferable, i.e., the adversary obtained from a known DNN ranker can be directly used to attack an unknown DNN ranker (i.e., the network architecture and parameters are unknown). Second, our attacks can be extended to universal ranking attacks with slight performance drop, i.e., we could learn a universal perturbation to all candidates for CA, or a universal perturbation to all queries for QA. Such properties illustrate the possibility of practical black-box attack.
+
+To the best of our knowledge, this is the first work that thoroughly studies the adversarial ranking attack and defense. In brief, our contributions are:
+
+1. The adversarial ranking attack is defined and implemented, which can intentionally change the ranking results by perturbing the candidates or queries.
+2. An adversarial ranking defense method is proposed to improve the ranking model robustness, and mitigate all the proposed attacks simultaneously.
+
+# 2 Related Works
+
+Adversarial Attacks. Szegedy et al. [69] claimed that DNN is susceptible to imperceptible adversarial perturbations added to inputs, due to the intriguing "blind spot" property, which was later ascribed to the local linearity [17] of neural networks. Following these findings, many white-box (model architecture and parameters are known to the adversary) attacking methods [50, 57, 32, 5, 8, 10, 61, 66, 45, 74, 7, 16] have been proposed to effectively compromise the state-of-the-art DNN classifiers. Among them, PGD [45] is regarded as one of the most powerful attacks [1]. Notably, adversarial examples are discovered to be transferable [56, 55] among different neural network classifiers, which inspired a series of black-box attacks [65, 73, 76, 41, 11, 24]. On the other hand, universal (i.e., image-agnostic) adversarial perturbations are also discovered [49, 37].
+
+Deep Ranking. Different from the traditional "learning to rank" [38, 27] methods, DNN-based ranking methods often embed data samples (including both queries and candidates) of all modalities into a common embedding space, and subsequently determine the ranking order based on distance. Such workflow has been adopted in distance metric learning [6, 70, 53, 26], image retrieval [3], cross-modal retrieval [52, 15, 35, 29], and face recognition [62].
+
+Adversarial Attacks in Deep Ranking. For information retrieval and ranking systems, the risk of malicious users manipulating the ranking always exists [19, 23]. However, only a few research efforts have been made in adversarial attacks in deep ranking. Liu et al. [42] proposed adversarial queries leading to incorrect retrieval results; while Li et al. [36] staged a similar attack with universal perturbation that corrupts listwise ranking results. None of the aforementioned research efforts explore the adversarial ranking attack. Besides, adaptations of distance-based attacks (e.g., [61]) are unsuitable for our scenario.
+
+Adversarial Defenses. Adversarial attacks and defenses are consistently engaged in an arms race [77]. Gradient masking-based defenses can be circumvented [2]. Defensive distillation [54, 58] has been compromised by C&W [5, 4]. As claimed in [22], ensemble of weak defenses are insufficient against adversarial examples. Notably, as an early defense method [69], adversarial training [17, 45, 25, 13, 33, 63, 71, 78, 51, 46] remains to be one of the most effective defenses. Other types of defenses include adversarial detection [43, 48], input transformation/reconstruction/replacement [60, 44, 20, 47, 14], randomization [40, 39], network verification [28, 18], etc. However, defense in deep ranking systems remains mostly uncharted.
+
+# 3 Adversarial Ranking
+
+Generally, a DNN-based ranking task can be formulated as a metric learning problem. Given the query $q$ and candidate set $X = \{c_{1}, c_{2}, \ldots, c_{n}\}$ , deep ranking is to learn a mapping $f$ (usually implemented as a DNN) which maps all candidates and the query into a common embedding space, such that the relative distances among the embedding vectors satisfy the expected ranking order. For instance, if candidate $c_{i}$ is more similar to the query $q$ than candidate $c_{j}$ , the mapping $f$ is encouraged to satisfy the inequality $\| f(q) - f(c_{i}) \| < \| f(q) - f(c_{j}) \|$ , where $\| \cdot \|$ denotes the $\ell_{2}$ norm. For brevity, we denote $\| f(q) - f(c_{i}) \|$ as $d(q, c_{i})$ in the following text.
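To make this formulation concrete, the sketch below ranks candidates by embedding distance. The linear map `f` is only a toy stand-in for the paper's DNN, and all variable names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))    # toy stand-in for the learned mapping f
f = lambda x: x @ W.T           # embeds 8-d inputs into a 16-d space

q = rng.uniform(size=8)         # query
X = rng.uniform(size=(5, 8))    # candidate set {c_1, ..., c_5}

# d(q, c_i) = ||f(q) - f(c_i)|| (Euclidean distance in the embedding space)
d = np.linalg.norm(f(X) - f(q), axis=1)
ranking = np.argsort(d)         # smaller distance => higher (better) rank
```

Here `ranking[0]` indexes the candidate ranked first; an adversarial perturbation succeeds precisely when it reorders this list as the attacker desires.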
+
+Therefore, adversarial ranking attack is to find a proper adversarial perturbation which leads the ranking order to change as expected. For example, if a less relevant $c_{j}$ is expected to be ranked ahead of a relevant $c_{i}$ , it is desired to find a proper perturbation $r$ to perturb $c_{j}$ , i.e., $\tilde{c}_{j} = c_{j} + r$ , such that the inequality $d(q, c_{i}) < d(q, c_{j})$ is changed into $d(q, c_{i}) > d(q, \tilde{c}_{j})$ . Next, we describe Candidate Attack and Query Attack in detail.
+
+# 3.1 Candidate Attack
+
+Candidate Attack (CA) aims to raise (abbr. CA+) or lower (abbr. CA-) the rank of a single candidate $c$ with respect to a set of queries $Q = \{q_{1}, q_{2}, \ldots, q_{w}\}$ by adding perturbation $r$ to the candidate itself, i.e., $\tilde{c} = c + r$ .
+
+Let $\operatorname{Rank}_X(q, c)$ denote the rank of the candidate $c$ with respect to the query $q$ , where $X$ indicates the set of all candidates, and a smaller rank value represents a higher ranking. Thus, the $\mathbf{CA}+$ that raises the rank of $c$ with respect to every query $q \in Q$ by perturbation $r$ could be formulated as the following problem,
+
+$$
+r = \underset {r \in \Gamma} {\arg \min } \sum_ {q \in Q} \operatorname {R a n k} _ {X} (q, c + r), \tag {1}
+$$
+
+$$
+\Gamma = \left\{r \mid \| r \| _ {\infty} \leqslant \varepsilon ; r, c + r \in [ 0, 1 ] ^ {N} \right\}, \tag {2}
+$$
+
+where $\Gamma$ is a $\ell_{\infty}$ -bounded $\varepsilon$ -neighbor of $c$ , $\varepsilon \in [0,1]$ is a predefined small positive constant, the constraint $\|r\|_{\infty} \leqslant \varepsilon$ limits the perturbation $r$ to be "visually imperceptible", and $c + r \in [0,1]^N$ ensures the adversarial example remains a valid input image. Although alternative "imperceptible" constraints exist (e.g., $\ell_0$ [66, 9], $\ell_1$ [8] and $\ell_2$ [5, 50] variants), we simply follow [17, 32, 45] and use the $\ell_{\infty}$ constraint.
+
+However, the optimization problem Eq. (1)-(2) cannot be directly solved due to the discrete nature of the rank value $\operatorname{Rank}_X(q, c)$ . In order to solve the problem, a surrogate objective function is needed.
+
+In metric learning, given two candidates $c_p, c_n \in X$ where $c_p$ is ranked ahead of $c_n$ , i.e. $\mathrm{Rank}_X(q, c_p) < \mathrm{Rank}_X(q, c_n)$ , the ranking order is represented as an inequality $d(q, c_p) < d(q, c_n)$ and formulated in triplet loss:
+
+$$
+L _ {\text {t r i p l e t}} \left(q, c _ {p}, c _ {n}\right) = \left[ \beta + d \left(q, c _ {p}\right) - d \left(q, c _ {n}\right) \right] _ {+}, \tag {3}
+$$
+
+where $[\cdot ]_{+}$ denotes $\max (0,\cdot)$ , and $\beta$ is a manually defined constant margin. This function is known as the triplet (i.e., pairwise ranking) loss [6, 62].
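As a minimal illustration of Eq. (3) acting on precomputed distances (names and the default margin are ours):

```python
def triplet_loss(d_qp, d_qn, beta=0.3):
    """[beta + d(q, c_p) - d(q, c_n)]_+ : zero once c_p is closer to q
    than c_n by at least the margin beta, linear in the violation otherwise."""
    return max(0.0, beta + d_qp - d_qn)
```

A satisfied ordering with sufficient margin incurs zero loss, so gradients only flow through violated (or insufficiently separated) triplets.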
+
+Similarly, the attacking goal of $\mathbf{CA}+$ in Eq. (1) can be readily converted into a series of inequalities, and subsequently turned into a sum of triplet losses,
+
+$$
+L _ {\mathrm {C A} +} (c, Q; X) = \sum_ {q \in Q} \sum_ {x \in X} [ d (q, c) - d (q, x) ] _ {+}. \tag {4}
+$$
+
+In this way, the original problem in Eq. (1)-(2) can be reformulated into the following constrained optimization problem:
+
+$$
+r = \underset {r \in \Gamma} {\arg \min } L _ {\mathrm {C A} +} (c + r, Q; X). \tag {5}
+$$
+
+To solve the optimization problem, the Projected Gradient Descent (PGD) method [45, 32] (a.k.a. the iterative version of FGSM [17]) can be used. Note that PGD is one of the most effective first-order gradient-based algorithms [1], and is popular among related works on adversarial attack.
+
+Specifically, in order to find an adversarial perturbation $r$ that creates the desired adversarial candidate $\tilde{c} = c + r$ , the PGD algorithm alternates two steps at every iteration $t = 1,2,\ldots ,\eta$ . Step one updates $\tilde{c}$ according to the gradient of Eq. (4); step two clips the result of step one to fit in the $\varepsilon$ -neighboring region $\Gamma$ :
+
+$$
+\tilde {c} _ {t + 1} = \operatorname {C l i p} _ {c, \Gamma} \left\{\tilde {c} _ {t} - \alpha \operatorname {s i g n} \left(\nabla_ {\tilde {c} _ {t}} L _ {\mathrm {C A} +} (\tilde {c} _ {t}, Q, X)\right) \right\}, \tag {6}
+$$
+
+where $\alpha$ is a constant hyper-parameter indicating the PGD step size, and $\tilde{c}_1$ is initialized as $c$ . After $\eta$ iterations, the desired adversarial candidate $\tilde{c}$ is obtained as $\tilde{c}_{\eta}$ , which is optimized to satisfy as many inequalities as possible. Each inequality represents a pairwise ranking sub-problem, hence the adversarial candidate $\tilde{c}$ will be ranked ahead of other candidates with respect to every specified query $q \in Q$ .
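The following self-contained sketch implements Eq. (4) and the PGD update of Eq. (6) for a toy linear embedding standing in for the ranking DNN; with Euclidean distance, the gradient of $d(q,c)$ with respect to $c$ has the closed form noted in the comment. All names and constants here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))          # toy stand-in for the embedding DNN
f = lambda x: x @ W.T
d = lambda a, b: float(np.linalg.norm(f(a) - f(b)))

def ca_plus_loss_grad(c, Q, X):
    """L_CA+ (Eq. 4) and its gradient w.r.t. the candidate c."""
    loss, grad = 0.0, np.zeros_like(c)
    for q in Q:
        for x in X:
            margin = d(q, c) - d(q, x)
            if margin > 0:                       # hinge [.]_+ is active
                loss += margin
                # grad_c d(q, c) = -W^T (f(q) - f(c)) / d(q, c)
                grad += -W.T @ (f(q) - f(c)) / d(q, c)
    return loss, grad

def pgd_ca_plus(c, Q, X, eps=0.3, alpha=0.03, iters=20):
    """Eq. (6): signed gradient step, then clip back into the
    eps-ball around c and the valid image range [0, 1]."""
    c_adv = c.copy()
    for _ in range(iters):
        _, g = ca_plus_loss_grad(c_adv, Q, X)
        c_adv = np.clip(c_adv - alpha * np.sign(g), c - eps, c + eps)
        c_adv = np.clip(c_adv, 0.0, 1.0)
    return c_adv
```

Driving the CA+ loss toward zero satisfies as many pairwise inequalities as possible, which is exactly what raises the candidate's rank against every query in $Q$.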
+
+Likewise, the CA- that lowers the rank of a candidate $c$ with respect to a set of queries $Q$ can be obtained in similar way:
+
+$$
+L _ {\mathrm {C A} -} (c, Q; X) = \sum_ {q \in Q} \sum_ {x \in X} [ - d (q, c) + d (q, x) ] _ {+}. \tag {7}
+$$
+
+# 3.2 Query Attack
+
+Query Attack (QA) is supposed to raise (abbr. QA+) or lower (abbr. QA-) the rank of a set of candidates $C = \{c_1, c_2, \ldots, c_m\}$ with respect to the query $q$ , by adding adversarial perturbation $r$ to the query $\tilde{q} = q + r$ . Thus, QA and CA are two "symmetric" attacks. The QA- for lowering the rank could be formulated as follows:
+
+$$
+r = \underset {r \in \Gamma} {\arg \max } \sum_ {c \in C} \operatorname {R a n k} _ {X} (q + r, c), \tag {8}
+$$
+
+where $\Gamma$ is the $\varepsilon$ -neighbor of $q$ . Likewise, this attacking objective can also be transformed into the following constrained optimization problem:
+
+$$
+L _ {\mathrm {Q A} -} (q, C; X) = \sum_ {c \in C} \sum_ {x \in X} [ - d (q, c) + d (q, x) ] _ {+}, \tag {9}
+$$
+
+$$
+r = \underset {r \in \Gamma} {\arg \min } L _ {\mathrm {Q A} -} (q + r, C; X), \tag {10}
+$$
+
+and it can be solved with the PGD algorithm. Similarly, the $\mathbf{QA}+$ loss function $L_{\mathrm{QA + }}$ for raising the ranks of the candidates in $C$ is as follows:
+
+$$
+L _ {\mathrm {Q A} +} (q, C; X) = \sum_ {c \in C} \sum_ {x \in X} [ d (q, c) - d (q, x) ] _ {+}. \tag {11}
+$$
+
+Unlike CA, QA perturbs the query image, and hence may drastically change its semantics, resulting in abnormal retrieval results. For instance, after perturbing a "lamp" query image, some unrelated candidates (e.g., "shelf", "toaster", etc.) may appear in the top return list. Thus, an ideal query attack should preserve the query semantics, i.e., the candidates in $X \setminus C$ should retain their original ranks if possible. To this end, we propose the Semantics-Preserving Query Attack (SP-QA), which adds an SP term to mitigate semantic changes to $q$ , e.g.,
+
+$$
+L _ {\mathrm {S P - Q A -}} (q, C; X) = L _ {\mathrm {Q A -}} (q, C; X) + \xi L _ {\mathrm {Q A +}} (q, C _ {\mathrm {S P}}; X), \tag {12}
+$$
+
+where $C_{\mathrm{SP}} = \{c \in X \setminus C \mid \mathrm{Rank}_{X \setminus C}(q, c) \leqslant G\}$ , i.e., $C_{\mathrm{SP}}$ contains the top- $G$ most-relevant candidates corresponding to $q$ , and the $L_{\mathrm{QA+}}(q, C_{\mathrm{SP}}; X)$ term helps preserve the query semantics by retaining some $C_{\mathrm{SP}}$ candidates in the retrieved ranking list. The constant $G$ is a predefined integer, and the constant $\xi$ is a hyper-parameter balancing the attack effect and semantics preservation. Unless otherwise mentioned, QA means SP-QA by default in the following text.
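For illustration, selecting $C_{\mathrm{SP}}$ from a precomputed vector of query-to-candidate distances can be sketched as follows (function and argument names are ours, not the paper's):

```python
import numpy as np

def select_c_sp(dist_q_to_X, C_idx, G=5):
    """Return indices of the G candidates closest to q among X \\ C."""
    mask = np.ones(len(dist_q_to_X), dtype=bool)
    mask[list(C_idx)] = False                  # exclude the attacked set C
    remaining = np.flatnonzero(mask)
    order = remaining[np.argsort(dist_q_to_X[remaining])]
    return order[:G].tolist()
```

The returned indices feed the $L_{\mathrm{QA+}}(q, C_{\mathrm{SP}}; X)$ term, anchoring the perturbed query near its original semantic neighborhood.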
+
+# 3.3 Robustness & Defense
+
+Adversarial defense for classification has been extensively explored, and many methods follow the adversarial training mechanism [25, 33, 45]. In particular, the adversarial counterparts of the original training samples are used to replace or augment the training samples. To date, the Madry defense [45] is regarded as the most effective [71, 2] adversarial training method. However, when directly adapting such a classification defense to improve ranking robustness, we empirically discovered a primary challenge of diverging training loss, possibly because the generated adversarial examples are too "strong". Moreover, such a defense mechanism needs to defend against distinct attacks individually. Therefore, a generic defense against all the proposed attacks is preferred.
+
+Note that the underlying principle of adversarial ranking attack is to shift the embeddings of candidates/queries to a proper place, and a successful attack depends on a large shift distance as well as a correct shift direction. A large shift distance is an indispensable objective for all the $\mathbf{CA}+$ , $\mathbf{CA}-$ , $\mathbf{QA}+$ and $\mathbf{QA}-$ attacks. Predictably, a reduction in shift distance could improve model robustness against all attacks simultaneously.
+
+To this end, we propose a "maximum-shift-distance" attack that pushes an embedding vector as far from its original position as possible (resembling Feature Adversary [61] for classification), $r = \arg \max_{r\in \Gamma}d(c + r,c)$ . We then use adversarial examples obtained from this attack to replace the original training samples for adversarial training, thereby reducing the shift distance incurred by adversarial perturbations.
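Assuming a toy linear embedding as a stand-in for the ranking model (all names hypothetical), the maximum-shift-distance attack is plain PGD ascent on $d(x+r, x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))      # toy stand-in for the embedding DNN
f = lambda x: x @ W.T

def max_shift_attack(x, eps=0.1, alpha=0.01, iters=20):
    """Sketch of r = argmax_{r in Gamma} d(x + r, x): ascend the shift
    distance, then project back into the eps-ball and valid range [0, 1]."""
    x_adv = x.copy()
    e0 = f(x)                                  # original embedding position
    for _ in range(iters):
        u = f(x_adv) - e0
        n = np.linalg.norm(u)
        # gradient of d w.r.t. x_adv; random direction when the shift is 0
        g = W.T @ u / n if n > 1e-12 else rng.normal(size=x.shape)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

During adversarial training, these maximally shifted examples replace the clean triplet members, as in Eq. (13).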
+
+A ranking model can be normally trained with the defensive version of the triplet loss:
+
+$$
+L_{\text{d-t}}(q, c_{p}, c_{n}) = L_{\text{triplet}}\Bigl(q + \underset{r \in \Gamma}{\arg\max}\, d(q + r, q),\; c_{p} + \underset{r \in \Gamma}{\arg\max}\, d(c_{p} + r, c_{p}),\; c_{n} + \underset{r \in \Gamma}{\arg\max}\, d(c_{n} + r, c_{n})\Bigr). \tag{13}
+$$
+
+*(CT) Cosine Distance, Triplet Loss (R@1=99.1%)*
+
+| $\varepsilon$ | CA+ w=1 | w=2 | w=5 | w=10 | CA- w=1 | w=2 | w=5 | w=10 | QA+ m=1 | m=2 | m=5 | m=10 | QA- m=1 | m=2 | m=5 | m=10 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0 | 50 | 50 | 50 | 50 | 2.1 | 2.1 | 2.1 | 2.1 | 50 | 50 | 50 | 50 | 0.5 | 0.5 | 0.5 | 0.5 |
+| 0.01 | 44.6 | 45.4 | 47.4 | 47.9 | 3.4 | 3.2 | 3.1 | 3.1 | 45.2 | 46.3 | 47.7 | 48.5 | 0.9 | 0.7 | 0.6 | 0.6 |
+| 0.03 | 33.4 | 37.3 | 41.9 | 43.9 | 6.3 | 5.9 | 5.7 | 5.6 | 35.6 | 39.2 | 43.4 | 45.8 | 1.9 | 1.4 | 1.1 | 1.1 |
+| 0.1 | 12.7 | 17.4 | 24.4 | 30.0 | 15.4 | 14.9 | 14.8 | 14.7 | 14.4 | 21.0 | 30.6 | 37.2 | 5.6 | 4.4 | 3.7 | 3.5 |
+| 0.3 | 2.1 | 9.1 | 13.0 | 17.9 | 93.9 | 93.2 | 93.0 | 92.9 | 6.3 | 11.2 | 22.5 | 32.1 | 8.6 | 6.6 | 5.3 | 4.8 |
+
+Table 1. Adversarial ranking attack on the vanilla model with MNIST. The "+" attacks (i.e., CA+ and QA+) raise the ranks of chosen candidates towards 0%; the "-" attacks (i.e., CA- and QA-) lower the ranks of chosen candidates towards 100%. Applying $\varepsilon = 0.01, 0.03, 0.1, 0.3$ QA+ attacks on the model, the SP term keeps the ranks of $C_{\mathrm{SP}}$ no larger than $3.6\%$ , $5.7\%$ , $7.7\%$ , $7.7\%$ , respectively, regardless of $m$ . With the QA- counterpart, the ranks of $C_{\mathrm{SP}}$ are kept no larger than $1.6\%$ , $1.6\%$ , $1.5\%$ , $1.5\%$ , respectively, regardless of $m$ . For all numbers in the table, "%" is omitted.
+
+# 4 Experiments
+
+To validate the proposed attacks and defense, we use three commonly used ranking datasets: MNIST [34], Fashion-MNIST [75], and Stanford Online Products (SOP) [53]. We train models on these datasets with PyTorch [59], and conduct attacks on their corresponding test sets (used as $X$ ).
+
+Evaluation Metric. Adversarial ranking attack aims to change the ranks of candidates. For each candidate $c \in X$ , its normalized rank is calculated as $R(q, c) = \frac{\operatorname{Rank}_X(q, c)}{|X|} \times 100\%$ , where $|X|$ is the length of the full ranking list. Thus, $R(q, c) \in (0\%, 100\%]$ , and a top-ranked $c$ has a small $R(q, c)$ . The attack effectiveness can be measured by the magnitude of change in $R(q, c)$ .
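In code, the normalized rank can be computed from a query's distance vector as follows (a sketch with our own naming; rank 1 denotes the closest candidate):

```python
import numpy as np

def normalized_rank(dist_q_to_X, c_idx):
    """R(q, c) = Rank_X(q, c) / |X| * 100, where rank 1 is the closest."""
    rank = int(np.sum(dist_q_to_X < dist_q_to_X[c_idx])) + 1
    return 100.0 * rank / len(dist_q_to_X)
```

A successful CA+ attack drives this value down for the chosen candidate; a CA- attack drives it up.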
+
+Performance of Attack. To measure the performance of a single CA attack, we average the rank of candidate $c$ across every query $q \in Q$ , i.e., $R_{\mathrm{CA}}(c) = \sum_{q \in Q} R(q, c) / w$ . Similarly, the performance of a single QA attack can be measured by the average rank across every candidate $c \in C$ , i.e., $R_{\mathrm{QA}}(q) = \sum_{c \in C} R(q, c) / m$ . For the overall performance of an attack, we conduct $T$ times of independent attacks and report the mean of $R_{\mathrm{CA}}(c)$ or $R_{\mathrm{QA}}(q)$ , accordingly.
+
+$\mathbf{CA+}$ & $\mathbf{QA+}$ . For CA+, the query set $Q$ is randomly sampled from $X$ . Likewise, for QA+, the candidate set $C$ is randomly sampled from $X$ . Without attack, both $R_{\mathrm{CA}}(c)$ and $R_{\mathrm{QA}}(q)$ will be approximately $50\%$ , and the attacks should significantly decrease this value.
+
+CA- & QA-. In practice, the $Q$ for CA- and the $C$ for QA- cannot be randomly sampled, because these two attacks typically aim to lower some top-ranked candidates. Thus, the two sets should be selected from the top-ranked samples (top-1% in our experiments) in $X$ . Formally, given the candidate $c$ for CA-, we randomly sample $w$ queries from $\{q \in X \mid R(c, q) \leqslant 1\%\}$ as $Q$ . Given the query $q$ for QA-, $m$ candidates are randomly sampled from $\{c \in X \mid R(q, c) \leqslant 1\%\}$ as $C$ . Without attack, both $R_{\mathrm{CA}}(c)$ and $R_{\mathrm{QA}}(q)$ will be close to $0\%$ , and the attacks should significantly increase this value.
+
+*(CTD) Cosine Distance, Triplet Loss, Defensive (R@1=98.3%)*
+
+| $\varepsilon$ | CA+ w=1 | w=2 | w=5 | w=10 | CA- w=1 | w=2 | w=5 | w=10 | QA+ m=1 | m=2 | m=5 | m=10 | QA- m=1 | m=2 | m=5 | m=10 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 0 | 50 | 50 | 50 | 50 | 2.0 | 2.0 | 2.0 | 2.0 | 50 | 50 | 50 | 50 | 0.5 | 0.5 | 0.5 | 0.5 |
+| 0.01 | 48.9 | 49.3 | 49.4 | 49.5 | 2.2 | 2.2 | 2.2 | 2.1 | 49.9 | 49.5 | 49.5 | 49.7 | 0.5 | 0.5 | 0.5 | 0.5 |
+| 0.03 | 47.4 | 48.4 | 48.6 | 48.9 | 2.5 | 2.5 | 2.4 | 2.4 | 48.0 | 48.5 | 49.2 | 49.5 | 0.6 | 0.6 | 0.5 | 0.5 |
+| 0.1 | 42.4 | 44.2 | 45.9 | 46.7 | 3.8 | 3.6 | 3.5 | 3.4 | 43.2 | 45.0 | 47.4 | 48.2 | 1.0 | 0.8 | 0.7 | 0.7 |
+| 0.3 | 30.7 | 34.5 | 38.7 | 40.7 | 7.0 | 6.7 | 6.5 | 6.5 | 33.2 | 37.2 | 42.3 | 45.1 | 2.4 | 1.9 | 1.6 | 1.5 |
+
+Table 2. Adversarial ranking defense with MNIST. Applying $\varepsilon = 0.01$ , 0.03, 0.1, 0.3 QA+ attacks on the model, the ranks of candidates in $C_{\mathrm{SP}}$ are kept no larger than $0.5\%$ in every case, regardless of $m$ . With the QA- counterpart, the ranks of $C_{\mathrm{SP}}$ are kept below $0.4\%$ in every case, regardless of $m$ .
+
+Hyper-Parameters. We conduct CA with $w \in \{1,2,5,10\}$ queries, and QA with $m \in \{1,2,5,10\}$ candidates, respectively. In QA, we let $G = 5$ . The SP balancing parameter $\xi$ is set to $1$ for QA+, and $10^{2}$ for QA-. In addition, we investigate attacks of different strengths, i.e., $\varepsilon \in \{0.01, 0.03, 0.1, 0.3\}$ on MNIST and Fashion-MNIST following [45], and $\varepsilon \in \{0.01, 0.03, 0.06\}$ on SOP following [33]. The PGD step size is empirically set to $\alpha = \min(\max(\frac{\varepsilon}{10}, \frac{1}{255}), 0.01)$ , and the number of PGD iterations to $\eta = \min(\max(10, \frac{2\varepsilon}{\alpha}), 30)$ . We perform $T = |X|$ attacks to obtain the reported performance.
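The step-size and iteration-count rules above translate directly into code (the function name is ours):

```python
def pgd_schedule(eps):
    """alpha = min(max(eps/10, 1/255), 0.01); eta = min(max(10, 2*eps/alpha), 30)."""
    alpha = min(max(eps / 10, 1 / 255), 0.01)
    eta = int(min(max(10, 2 * eps / alpha), 30))
    return alpha, eta
```

For instance, a strong attack with $\varepsilon = 0.3$ gets the step size capped at $\alpha = 0.01$ and the iteration count capped at $\eta = 30$, while a weak attack with $\varepsilon = 0.01$ uses $\alpha = 1/255$ and the minimum $\eta = 10$.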
+
+Adversarial Defense. Ranking models are trained using Eq. (13) with the strongest adversary following the procedure of Madry defense [45].
+
+# 4.1 MNIST Dataset
+
+Following conventional settings with the MNIST [34] dataset, we train a CNN ranking model comprising 2 convolutional layers and 1 fully-connected layer. This CNN architecture (denoted as C2F1) is identical to the one used in [45] except for the removal of the last fully-connected layer. Specifically, the ranking model is trained with cosine distance and triplet loss. The retrieval performance of the model is $\mathrm{Recall}@1 = 99.1\%$ ( $\mathrm{R}@1$ ), as shown in Tab. 1 in grey highlight.
+
+Attacking results against this vanilla model (i.e., the ranking model which is not enhanced with our defense method) are presented in Tab. 1. For example, a strong $\mathbf{CA}+$ attack (i.e., $\varepsilon = 0.3$ ) for $w = 1$ can raise the rank $R_{\mathrm{CA}}(c)$ from $50\%$ to $2.1\%$ . Likewise, the rank of $C$ can be raised to $9.1\%$ , $13.0\%$ , $17.9\%$ for $w = 2, 5, 10$ chosen queries, respectively. On the other hand, a strong $\mathbf{CA}-$ attack for $w = 1$ can lower the rank $R_{\mathrm{CA}}(c)$ from $2.1\%$ to $93.9\%$ . The results of strong $\mathbf{CA}-$ attacks for $w = 2, 5, 10$ are similar to the $w = 1$ case.
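The "rank as a percentage" statistics used throughout these tables can be computed as follows (a minimal sketch; `normalized_rank` and the toy vectors are ours):

```python
import numpy as np

# Normalized rank of candidate c w.r.t. query q: the fraction of candidates
# that the retrieval system places ahead of c, as a percentage.
def normalized_rank(q_emb, c_emb, all_emb):
    d_c = np.linalg.norm(q_emb - c_emb)
    d_all = np.linalg.norm(all_emb - q_emb, axis=1)
    return 100.0 * np.sum(d_all < d_c) / len(all_emb)

q = np.zeros(2)
cands = np.array([[1.0, 0], [3.0, 0], [4.0, 0], [5.0, 0]])
print(normalized_rank(q, np.array([2.0, 0]), cands))  # 25.0 (1 of 4 ahead)
```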
+
+The results of $\mathbf{QA}+$ and $\mathbf{QA}-$ are also shown in Tab. 1. The rank changes under QA attacks are less dramatic (but still significant) than those under CA, due to the additional difficulty introduced by the SP term in Eq. (12); QA attack effectiveness is inversely correlated with $\xi$ . For instance, a strong QA- for $m = 1$ can only lower the rank $R_{\mathrm{QA}}(q)$ from $0.5\%$ to $8.6\%$ , but the attacking effect can be further boosted by decreasing $\xi$ . More experimental results are presented in the following discussion. In brief, our proposed attacks against the vanilla ranking model are effective.
+
+
+Fig. 2. Comparison of attacks on vanilla and defensive models. Apart from the ranks of chosen candidates, we also measure the maximum shift distance of embedding vectors that adversarial perturbation can incur.
+
+
+
+Next, we evaluate our defense method. The defense should enhance the robustness of a ranking model, measured as the difference in attack effectiveness with and without the defense. As is common with adversarial training, our defense leads to a slight retrieval performance degradation on unperturbed input (highlighted in blue in Tab. 2), but the attack effectiveness is clearly mitigated. For instance, the same strong $\mathbf{CA}+$ attack for $w = 1$ on the defensive model (i.e., the ranking model enhanced by our defense) can only raise the rank $R_{\mathrm{CA}}(c)$ from $50\%$ to $30.7\%$ , compared to $2.1\%$ on its vanilla counterpart. Further analysis suggests that the weights in the first convolution layer of the defensive model are closer to 0 and have smaller variance than those of the vanilla model, which may help prevent adversarial perturbations from moving the layer outputs across the locally linear regions of ReLU [17].
+
+To visualize the effect of our attacks and defense, we track the attacking effect with $\varepsilon$ varying from 0.0 to 0.3 on the vanilla and defensive models, as shown in Fig. 2. It is noted that our defense could significantly suppress the maximum embedding shift distance incurred by adversarial perturbation to nearly 0, but the defensive model is still not completely immune to attacks. We speculate the defensive model still has "blind spots" [69] in some local areas that could be exploited by the attacks.
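The embedding-shift metric of Fig. 2 can be expressed compactly (our own formulation, with a stand-in linear embedding and placeholder perturbations; the paper's models are CNNs):

```python
import numpy as np

# Maximum embedding shift over a batch: max_x ||f(x + r) - f(x)||,
# with f(x) = W x as a stand-in embedding and random r as a stand-in attack.
rng = np.random.default_rng(2)
W = rng.standard_normal((4, 8))
X = rng.standard_normal((16, 8))                # clean inputs
R = rng.uniform(-0.1, 0.1, size=X.shape)        # perturbations (placeholder)

shift = np.linalg.norm((X + R) @ W.T - X @ W.T, axis=1)
max_shift = shift.max()
print(max_shift)
```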
+
+In summary, these results and further experiments suggest that: (1) deep ranking models are vulnerable to adversarial ranking attacks, no matter what loss function or distance metric is selected; (2) vanilla models trained with contrastive loss are more robust than those trained with triplet loss. This is possibly due to contrastive loss explicitly reducing the intra-class embedding variation. Additionally, our defense method could consistently improve the robustness of all these models; (3) Euclidean distance-based models are harder to defend than cosine distance-based ones. Beyond these experiments, we also find that the margin hyper-parameter $\beta$ of triplet loss and the dimensionality of the embedding space have marginal influences on model robustness.
+
+| ε | CA+ w=1 | 2 | 5 | 10 | CA- w=1 | 2 | 5 | 10 | QA+ m=1 | 2 | 5 | 10 | QA- m=1 | 2 | 5 | 10 |
| (CT) Cosine Distance, Triplet Loss (R@1=88.8%) |
| 0 | 50 | 50 | 50 | 50 | 1.9 | 1.9 | 1.9 | 1.9 | 39.4 | 42.0 | 45.3 | 47.1 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.01 | 36.6 | 39.9 | 43.2 | 44.8 | 5.6 | 5.1 | 4.9 | 4.8 | 39.4 | 42.0 | 45.3 | 47.1 | 2.1 | 1.6 | 1.2 | 1.1 |
| 0.03 | 19.7 | 25.4 | 31.7 | 35.6 | 15.5 | 14.8 | 14.4 | 14.3 | 21.7 | 28.2 | 35.7 | 40.6 | 5.6 | 4.1 | 3.3 | 2.9 |
| 0.1 | 3.7 | 10.5 | 17.3 | 22.7 | 87.2 | 86.7 | 86.3 | 86.3 | 7.1 | 12.4 | 23.6 | 32.5 | 10.9 | 8.3 | 6.7 | 6.0 |
| 0.3 | 1.3 | 9.4 | 16.0 | 21.5 | 100.0 | 100.0 | 100.0 | 100.0 | 6.3 | 10.8 | 21.8 | 31.7 | 12.6 | 9.4 | 7.5 | 6.6 |
| (CTD) Cosine Distance, Triplet Loss, Defensive (R@1=79.6%) |
| 0 | 50 | 50 | 50 | 50 | 1.2 | 1.2 | 1.2 | 1.2 | 50 | 50 | 50 | 50 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.01 | 48.9 | 48.9 | 49.3 | 49.3 | 1.4 | 1.4 | 1.4 | 1.4 | 49.4 | 49.9 | 49.9 | 50.0 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.03 | 47.1 | 47.9 | 48.3 | 48.3 | 2.0 | 1.9 | 1.8 | 1.8 | 48.3 | 49.1 | 49.5 | 49.8 | 0.7 | 0.6 | 0.6 | 0.6 |
| 0.1 | 42.4 | 43.5 | 44.5 | 44.8 | 4.6 | 4.2 | 4.0 | 3.9 | 45.4 | 47.2 | 48.7 | 49.2 | 1.4 | 1.2 | 1.1 | 1.1 |
| 0.3 | 32.5 | 35.4 | 37.5 | 38.2 | 11.2 | 10.5 | 10.1 | 10.0 | 39.3 | 42.6 | 46.5 | 47.8 | 3.9 | 3.3 | 3.0 | 2.9 |
+
+Table 3. Adversarial ranking attack and defense on Fashion-MNIST. The lowest ranks of $C_{\mathrm{SP}}$ are $3.0\%$ , $5.2\%$ , $7.8\%$ , $8.3\%$ in QA+, and $1.9\%$ , $1.9\%$ , $1.9\%$ , $1.8\%$ in QA-, respectively.
+
+# 4.2 Fashion-MNIST Dataset
+
+Fashion-MNIST [75] is an MNIST-like but more difficult dataset, comprising 60,000 training examples and 10,000 test samples. The samples are $28 \times 28$ greyscale images covering 10 fashion product classes, such as "T-shirt" and "dress". We train the vanilla and defensive models with cosine distance and triplet loss, and conduct the attack experiments.
+
+The attack and defense results are shown in Tab. 3. We note that our attacks are more effective here than on MNIST; for example, a strong $\mathbf{CA}+$ for $w = 1$ can raise the rank $R_{\mathrm{CA}}(c)$ to $1.3\%$ . On the other hand, despite a moderate improvement in robustness, the defensive model performs worse in unperturbed sample retrieval, and this performance degradation is more pronounced than on MNIST. We speculate these differences are related to the increased dataset difficulty.
+
+# 4.3 Stanford Online Products Dataset
+
+The Stanford Online Products (SOP) dataset [53] contains 120k images of 23k classes of real online products from eBay, collected for metric learning. We use the same dataset split as the original work [53] and train the same vanilla ranking model with the same triplet ranking loss and Euclidean distance, except that GoogLeNet [67] is replaced with ResNet-18 [21], which achieves better retrieval performance.
+
+Attack and defense results on SOP are presented in Tab. 4. Our attacks are quite effective on this difficult large-scale dataset: merely a $1\%$ perturbation $(\varepsilon = 0.01)$ to any candidate image can make it rank ahead of or behind nearly all other candidates (as shown by the $\mathrm{CA+}$ and CA- results with $w = 1$ ). QA is significantly effective on this dataset as well. On the other hand, our defense leads to decreased retrieval performance, i.e., R@1 drops from $63.1\%$ to $46.4\%$ , which is expected on such a difficult dataset. Meanwhile, our defense moderately improves robustness against relatively weak adversarial examples (e.g., $\varepsilon = 0.01$ ), but improving model robustness on this dataset is harder than on the other datasets.
+
+| ε | CA+ w=1 | 2 | 5 | 10 | CA- w=1 | 2 | 5 | 10 | QA+ m=1 | 2 | 5 | 10 | QA- m=1 | 2 | 5 | 10 |
| (ET) Euclidean Distance, Triplet Loss (R@1=63.1%) |
| 0 | 50 | 50 | 50 | 50 | 1.9 | 1.9 | 1.9 | 1.9 | 50 | 50 | 50 | 50 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.01 | 0.0 | 0.8 | 2.0 | 2.6 | 99.7 | 99.6 | 99.4 | 99.3 | 4.8 | 7.0 | 16.3 | 25.8 | 54.9 | 40.2 | 27.1 | 21.9 |
| 0.03 | 0.0 | 0.3 | 1.0 | 1.5 | 100.0 | 100.0 | 100.0 | 100.0 | 1.6 | 3.3 | 10.0 | 19.2 | 68.1 | 52.4 | 36.6 | 30.1 |
| 0.06 | 0.0 | 0.2 | 1.0 | 1.5 | 100.0 | 100.0 | 100.0 | 100.0 | 1.1 | 2.7 | 8.8 | 17.6 | 73.8 | 57.9 | 40.3 | 32.4 |
| (ETD) Euclidean Distance, Triplet Loss, Defensive (R@1=46.4%) |
| 0 | 50 | 50 | 50 | 50 | 2.0 | 2.0 | 2.0 | 2.0 | 50 | 50 | 50 | 50 | 0.5 | 0.5 | 0.5 | 0.5 |
| 0.01 | 7.5 | 12.2 | 16.5 | 18.0 | 66.4 | 62.6 | 59.3 | 57.8 | 16.1 | 24.8 | 36.1 | 41.4 | 26.7 | 18.1 | 12.2 | 10.2 |
| 0.03 | 0.7 | 4.5 | 8.7 | 10.4 | 91.7 | 90.2 | 89.1 | 88.4 | 7.9 | 14.5 | 27.2 | 35.6 | 43.4 | 31.7 | 21.9 | 18.1 |
| 0.06 | 0.1 | 3.8 | 7.9 | 9.7 | 97.3 | 96.8 | 96.4 | 96.2 | 6.9 | 12.5 | 24.3 | 33.4 | 51.4 | 39.0 | 28.0 | 23.5 |
+
+Table 4. Adversarial ranking attack and defense on SOP. With different $\varepsilon$ , the worst ranks of $C_{\mathrm{SP}}$ in QA+ are $0.2\%$ , $0.7\%$ , $2.0\%$ , $3.3\%$ , and those for QA- are $0.4\%$ , $0.7\%$ , $0.8\%$ , $1.0\%$ , respectively.
+
+Comparing the results across all three datasets, we find that ranking models trained on harder datasets are more susceptible to adversarial attack and more difficult to defend. We therefore speculate that models used in realistic applications could be easier to attack, because they are usually trained on larger-scale and more difficult datasets.
+
+# 5 Discussions
+
+In practice, white-box attacks are limited by access to the target model, but this limitation can be circumvented via adversarial example transferability and universal perturbations, as discussed in this section. These properties reveal the possibility of practical black-box attacks.
+
+# 5.1 Adversarial Example Transferability
+
+As demonstrated in the experiments, deep ranking models can be compromised by our white-box attacks. In realistic scenarios, white-box attacks are often impractical because the model under attack is unknown (i.e., its architecture and parameters are unknown). On the other hand, adversarial examples for classification have been found to be transferable [56, 55] (i.e., model-agnostic) between models with different network architectures. Typically, in this setting, adversarial examples are generated from a substitute model [56] using a white-box attack and are directly used to attack the black-box model.
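The transfer protocol can be sketched as follows (our own toy: two random linear "models"; whether the perturbation transfers is exactly what the experiment measures, so only the white-box effect is guaranteed here):

```python
import numpy as np

# Craft a perturbation on substitute model A, then evaluate it on target B.
rng = np.random.default_rng(3)
W_A = rng.standard_normal((2, 4)) * 0.5         # substitute (white-box access)
W_B = rng.standard_normal((2, 4)) * 0.5         # target (black-box)
q, c = rng.standard_normal((2, 4))              # query and candidate
eps, alpha = 0.3, 0.05

def dist(W, x):                                  # embedding distance to query
    return float(np.linalg.norm(W @ x - W @ q))

r, best = np.zeros(4), np.zeros(4)
for _ in range(30):                              # PGD on the substitute only
    g = 2.0 * W_A.T @ (W_A @ (c + r) - W_A @ q)  # gradient of squared distance
    r = np.clip(r - alpha * np.sign(g), -eps, eps)
    if dist(W_A, c + r) < dist(W_A, c + best):
        best = r.copy()

print(dist(W_A, c), dist(W_A, c + best))         # white-box effect
print(dist(W_B, c), dist(W_B, c + best))         # transfer effect (may vary)
```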
+
+Adversarial ranking attack would be more practical if adversarial ranking examples exhibit similar transferability. Besides the C2F1 model, we train two additional vanilla models on the MNIST dataset: (1) LeNet [34], which has lower model capacity than C2F1; (2) ResNet-18 [21] (denoted Res18), which has a better network architecture and higher model capacity.
+
+The results are presented in Tab. 5. For example, in the CA+ transfer attack, we generate adversarial candidates from the C2F1 model and directly use them to attack the Res18 model (row 2, column 3, top-left table); the rank of the adversarial candidates with respect to the same query is still raised to $31.3\%$ . We also find the CA- transfer attack effective, where the ranks of our adversarial
+
+| CA+ Transfer (Black Box), w = 1 |
| From \ To | LeNet | C2F1 | Res18 |
| LeNet | 50→16.6 | 35.1 | 34.3 |
| C2F1 | 28.6 | 50→2.1 | 31.3 |
| Res18 | 24.4 | 27.0 | 50→2.2 |
+
+| QA+ Transfer (Black Box), m = 1 |
| From \ To | LeNet | C2F1 | Res18 |
| LeNet | 50→20.5 | 43.0 | 45.8 |
| C2F1 | 43.5 | 50→6.3 | 45.4 |
| Res18 | 41.4 | 40.4 | 50→14.1 |
+
+| CA- Transfer (Black Box), w = 1 |
| From \ To | LeNet | C2F1 | Res18 |
| LeNet | 2.5→63.7 | 2.1→10.0 | 2.1→9.1 |
| C2F1 | 2.5→9.1 | 2.1→93.9 | 2.1→9.3 |
| Res18 | 2.5→9.9 | 2.1→11.8 | 2.1→66.7 |
+
+| QA- Transfer (Black Box), m = 1 |
| From \ To | LeNet | C2F1 | Res18 |
| LeNet | 0.5→7.0 | 0.5→1.6 | 0.5→1.8 |
| C2F1 | 0.5→1.0 | 0.5→8.6 | 0.5→1.9 |
| Res18 | 0.5→0.8 | 0.5→1.2 | 0.5→6.9 |
+
+Table 5. Transferring adversarial ranking examples generated from one model to another. We report the rank of the same $c$ with respect to the same $q$ across different models to illustrate transfer attack effectiveness. Transferring adversarial examples to the model itself (the diagonal) is equivalent to a white-box attack.
+
+candidates are lowered, e.g. from $2.1\%$ to $9.3\%$ (row 2, column 3, bottom-left table). Similar results can be observed in the QA transfer experiments, though with a weaker effect due to the SP term.
+
+From these results, we find that: (1) a CNN with a better architecture and higher model capacity (i.e., Res18) is less susceptible to adversarial ranking attack. This is consistent with Madry et al. [45], who claim that higher model capacity helps improve model robustness; (2) adversarial examples generated from Res18 are the most effective in transfer attacks; (3) a CNN of low model capacity (i.e., LeNet) performs moderately in terms of both adversarial example transferability and model robustness. We speculate its robustness stems from a forced regularization effect due to its low model capacity. Beyond these, we also note that adversarial ranking examples are transferable regardless of differences in loss function or distance metric.
+
+Apart from transferability across different architectures, we also investigated the transferability between several independently trained C2F1 models. Results suggest similar transferability between them. Notably, when transferring adversarial examples to a defensive C2F1 model, the attacking effect is significantly mitigated. The result further demonstrates the effectiveness of our defense.
+
+# 5.2 Universal Perturbation for Ranking
+
+Recently, universal (i.e. image-agnostic) adversarial perturbation [49] for classification has been found possible, where a single perturbation may lead to misclassification when added to any image. Thus, we also investigate the existence of universal adversarial perturbation for adversarial ranking attack.
+
+To this end, we follow [49] and formulate the image-agnostic CA+ (abbr. I-CA+). Given a set of candidates $C = \{c_{1},c_{2},\ldots ,c_{m}\}$ and a set of queries $Q = \{q_{1},q_{2},\dots ,q_{w}\}$ , I-CA+ finds a single universal adversarial perturbation $r$ such that the rank of every perturbed candidate $\tilde{c} = c + r$ ( $c\in C$ ) with respect to $Q$ is raised. The corresponding optimization problem of I-CA+ is:
+
+$$
+r = \underset {r \in \Gamma} {\arg \min } \sum_ {c \in C} L _ {\mathrm {C A} +} (c + r, Q; X). \tag {14}
+$$
+
+| Attack | CA+ | CA- | QA+ | QA- |
| White-box | 50 → 2.1 | 2.1 → 93.9 | 50 → 0.2 | 0.5 → 94.1 |
| Image-agnostic (I-) | 50 → 18.1 | 0.6 → 9.5 | 50 → 20.5 | 2.1 → 7.6 |
| Image-agnostic (I-), unseen | 50 → 18.5 | 0.7 → 9.4 | 50 → 21.0 | 2.2 → 7.4 |
+
+Table 6. Universal Adversarial Perturbation for Ranking on MNIST. Each pair of results presents the original rank of chosen candidates and that after adding adversarial perturbation. Both $w, m$ are set to 1. Parameter $\xi$ is set to 0 to reduce attack difficulty.
+
+When such a universal perturbation is applied, the rank of any candidate w.r.t. $Q$ is expected to be raised. The objective functions of I-CA-, I-QA+ and I-QA- can be obtained in a similar way. Note that unlike [36], which aims to find a universal perturbation that makes an image retrieval system return irrelevant results, our universal perturbations have distinct purposes.
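A toy instance of the shared-perturbation optimization in Eq. (14) can be sketched as follows (entirely our own construction: a linear stand-in embedding and a squared-distance surrogate for $L_{\mathrm{CA}+}$):

```python
import numpy as np

# One perturbation r is shared by all candidates; we descend on the summed
# surrogate loss sum_c ||f(c + r) - f(q)||^2 with f(x) = W x, keeping the
# best r found inside the l_inf ball of radius eps.
rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3)) * 0.5
q = rng.standard_normal(3)                      # single query (w = 1)
C = rng.standard_normal((5, 3))                 # candidate set
eps, alpha = 0.3, 0.05

def total_loss(r):
    return sum(float(np.linalg.norm(W @ (c + r) - W @ q) ** 2) for c in C)

r, best = np.zeros(3), np.zeros(3)
for _ in range(30):                             # projected sign-gradient descent
    g = sum(2.0 * W.T @ (W @ (c + r) - W @ q) for c in C)
    r = np.clip(r - alpha * np.sign(g), -eps, eps)
    if total_loss(r) < total_loss(best):
        best = r.copy()
# `best` is the universal perturbation; total_loss(best) <= total_loss(0)
```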
+
+We conduct experiments on the MNIST dataset. For the I-CA+ attack, we randomly sample $5\%$ of $X$ to generate the universal perturbation. Following [49], another non-overlapping $5\%$ of examples are randomly sampled from $X$ to test whether the generated perturbation generalizes to "unseen" images (i.e., ones not used for generating the perturbation). Experiments for the other image-agnostic attacks are conducted similarly. Note that we only report the I-CA- and I-QA- effectiveness on the $1\%$ top-ranked samples, similar to CA- and QA-.
+
+As shown in Tab. 6, our I-CA+ can raise the ranks of $C$ to $18.1\%$ , and I-CA- can lower them to $9.5\%$ . When added to "unseen" candidate images, the universal perturbation retains nearly the same effectiveness, possibly due to the low intra-class variance of the MNIST dataset.
+
+# 6 Conclusion
+
+Deep ranking models are vulnerable to adversarial perturbations that could intentionally change the ranking result. In this paper, the adversarial ranking attack that can compromise deep ranking models is defined and implemented. We also propose an adversarial ranking defense that can significantly suppress embedding shift distance and moderately improve the ranking model robustness. Moreover, the transferability of our adversarial examples and the existence of universal adversarial perturbations for ranking attack illustrate the possibility of practical black-box attack and potential risk of realistic ranking applications.
+
+Acknowledgments This work was supported partly by National Key R&D Program of China Grant 2018AAA0101400, NSFC Grants 61629301, 61773312, 61976171, and 61672402, China Postdoctoral Science Foundation Grant 2019M653642, Young Elite Scientists Sponsorship Program by CAST Grant 2018QNRC001, and Natural Science Foundation of Shaanxi Grant 2020JQ-069.
+
+# References
+
+1. Athalye, A., Carlini, N.: On the robustness of the cvpr 2018 white-box adversarial example defenses. arXiv preprint arXiv:1804.03286 (2018)
+2. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018)
+3. Bui, T., Ribeiro, L., Ponti, M., Collomosse, J.: Compact descriptors for sketch-based image retrieval using a triplet loss convolutional neural network. CVIU 164, 27-37 (2017)
+4. Carlini, N., Wagner, D.: Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311 (2016)
+5. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP). pp. 39-57. IEEE (2017)
+6. Chechik, G., Sharma, V., Shalit, U., Bengio, S.: Large scale online learning of image similarity through ranking. JMLR 11(Mar), 1109-1135 (2010)
+7. Chen, J., Jordan, M.I.: Boundary attack++: Query-efficient decision-based adversarial attack. arXiv preprint arXiv:1904.02144 (2019)
+8. Chen, P.Y., Sharma, Y., Zhang, H., Yi, J., Hsieh, C.J.: Ead: elastic-net attacks to deep neural networks via adversarial examples. In: AAAI (2018)
+9. Croce, F., Hein, M.: Sparse and imperceivable adversarial attacks. In: ICCV. pp. 4724-4732 (2019)
+10. Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: CVPR (June 2018)
+11. Dong, Y., Pang, T., Su, H., Zhu, J.: Evading defenses to transferable adversarial examples by translation-invariant attacks. In: CVPR. pp. 4312-4321 (2019)
+12. Dong, Y., Su, H., Wu, B., Li, Z., Liu, W., Zhang, T., Zhu, J.: Efficient decision-based black-box adversarial attacks on face recognition. In: CVPR. pp. 7714-7722 (2019)
+13. Dong, Y., Su, H., Zhu, J., Bao, F.: Towards interpretable deep neural networks by leveraging adversarial examples. arXiv preprint arXiv:1708.05493 (2017)
+14. Dubey, A., Maaten, L.v.d., Yaliniz, Z., Li, Y., Mahajan, D.: Defense against adversarial images using web-scale nearest-neighbor search. In: CVPR. pp. 8767-8776 (2019)
+15. Faghri, F., Fleet, D.J., Kiros, J.R., Fidler, S.: Vse++: Improved visual-semantic embeddings. arXiv preprint arXiv:1707.05612 2(7), 8 (2017)
+16. Ganeshan, A., Babu, R.V.: Fda: Feature disruptive attack. In: ICCV. pp. 8069-8079 (2019)
+17. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
+18. Gopinath, D., Katz, G., Pasareanu, C.S., Barrett, C.: Deepsafe: A data-driven approach for checking adversarial robustness in neural networks. arXiv preprint arXiv:1710.00486 (2017)
+19. Goren, G., Kurland, O., Tennenholtz, M., Raiber, F.: Ranking robustness under adversarial document manipulations. In: ACM SIGIR. pp. 395-404. ACM (2018)
+20. Guo, C., Rana, M., Cisse, M., Van Der Maaten, L.: Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117 (2017)
+21. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (June 2016)
+
+22. He, W., Wei, J., Chen, X., Carlini, N., Song, D.: Adversarial example defense: Ensembles of weak defenses are not strong. In: 11th USENIX Workshop on Offensive Technologies (WOOT 17) (2017)
+23. He, X., He, Z., Du, X., Chua, T.S.: Adversarial personalized ranking for recommendation. In: ACM SIGIR. pp. 355-364. ACM (2018)
+24. Huang, Q., Gu, Z., Katsman, I., He, H., Pawakapan, P., Lin, Z., Belongie, S., Lim, S.N.: Intermediate level adversarial attack for enhanced transferability. arXiv preprint arXiv:1811.08458 (2018)
+25. Huang, R., Xu, B., Schuurmans, D., Szepesvári, C.: Learning with a strong adversary. CoRR abs/1511.03034 (2015), http://arxiv.org/abs/1511.03034
+26. Jacob, P., Picard, D., Histace, A., Klein, E.: Metric learning with horde: High-order regularizer for deep embeddings. In: ICCV. pp. 6539-6548 (2019)
+27. Joachims, T.: Optimizing search engines using clickthrough data. In: ACM SIGKDD. pp. 133-142. ACM (2002)
+28. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: An efficient smt solver for verifying deep neural networks. In: International Conference on Computer Aided Verification. pp. 97-117. Springer (2017)
+29. Kiros, R., Salakhutdinov, R., Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539 (2014)
+30. Komkov, S., Petiushko, A.: Advhat: Real-world adversarial attack on arcface face id system. arXiv preprint arXiv:1908.08705 (2019)
+31. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NeurIPS. pp. 1097-1105 (2012)
+32. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)
+33. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016)
+34. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278-2324 (1998)
+35. Lee, K.H., Chen, X., Hua, G., Hu, H., He, X.: Stacked cross attention for image-text matching. In: ECCV. pp. 201-216 (2018)
+36. Li, J., Ji, R., Liu, H., Hong, X., Gao, Y., Tian, Q.: Universal perturbation attack against image retrieval. In: ICCV. pp. 4899-4908 (2019)
+37. Liu, H., Ji, R., Li, J., Zhang, B., Gao, Y., Wu, Y., Huang, F.: Universal adversarial perturbation via prior driven uncertainty approximation. In: ICCV. pp. 2941-2949 (2019)
+38. Liu, T.Y., et al.: Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3(3), 225-331 (2009)
+39. Liu, X., Cheng, M., Zhang, H., Hsieh, C.J.: Towards robust neural networks via random self-ensemble. In: ECCV. pp. 369-385 (2018)
+40. Liu, X., Li, Y., Wu, C., Hsieh, C.J.: Adv-bnn: Improved adversarial defense through robust bayesian neural network. arXiv preprint arXiv:1810.01279 (2018)
+41. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016)
+42. Liu, Z., Zhao, Z., Larson, M.: Who's afraid of adversarial queries?: The impact of image modifications on content-based image retrieval. In: ICMR. pp. 306-314. ACM (2019)
+43. Lu, J., Issaranon, T., Forsyth, D.: Safetynet: Detecting and rejecting adversarial examples robustly. In: ICCV. pp. 446-454 (2017)
+44. Luo, Y., Boix, X., Roig, G., Poggio, T., Zhao, Q.: Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292 (2015)
+
+45. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)
+46. Mao, C., Zhong, Z., Yang, J., Vondrick, C., Ray, B.: Metric learning for adversarial robustness. In: NeurIPS. pp. 478-489 (2019)
+47. Meng, D., Chen, H.: Magnet: a two-pronged defense against adversarial examples. In: ACM SIGSAC. pp. 135-147. ACM (2017)
+48. Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267 (2017)
+49. Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: CVPR. pp. 1765-1773 (2017)
+50. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. In: CVPR. pp. 2574-2582 (2016)
+51. Mummadi, C.K., Brox, T., Metzen, J.H.: Defending against universal perturbations with shared adversarial training. In: ICCV. pp. 4928-4937 (2019)
+52. Niu, Z., Zhou, M., Wang, L., Gao, X., Hua, G.: Hierarchical multimodal LSTM for dense visual-semantic embedding. In: ICCV. pp. 1881-1889 (2017)
+53. Oh Song, H., Xiang, Y., Jegelka, S., Savarese, S.: Deep metric learning via lifted structured feature embedding. In: CVPR. pp. 4004-4012 (2016)
+54. Papernot, N., McDaniel, P.: On the effectiveness of defensive distillation. arXiv preprint arXiv:1607.05113 (2016)
+55. Papernot, N., McDaniel, P., Goodfellow, I.: Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 (2016)
+56. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia conference on computer and communications security. pp. 506-519. ACM (2017)
+57. Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings. In: 2016 IEEE European Symposium on Security and Privacy (EuroS&P). pp. 372-387. IEEE (2016)
+58. Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE Symposium on Security and Privacy (SP). pp. 582-597. IEEE (2016)
+59. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in PyTorch. In: NIPS Autodiff Workshop (2017)
+60. Prakash, A., Moran, N., Garber, S., DiLillo, A., Storer, J.: Deflecting adversarial attacks with pixel deflection. In: CVPR. pp. 8571-8580 (2018)
+61. Sabour, S., Cao, Y., Faghri, F., Fleet, D.J.: Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122 (2015)
+62. Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. In: CVPR. pp. 815-823 (2015)
+63. Shaham, U., Yamada, Y., Negahban, S.: Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing 307, 195-204 (2018)
+64. Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In: ACM SIGSAC. pp. 1528-1540. ACM (2016)
+65. Shi, Y., Wang, S., Han, Y.: Curls & whey: Boosting black-box adversarial attacks. arXiv preprint arXiv:1904.01160 (2019)
+
+66. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation (2019)
+67. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. pp. 1-9 (2015)
+68. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: CVPR. pp. 2818-2826 (2016)
+69. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
+70. Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning fine-grained image similarity with deep ranking. In: CVPR. pp. 1386-1393 (2014)
+71. Wang, J., Zhang, H.: Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks. In: ICCV. pp. 6629-6638 (2019)
+72. Wang, Z., Zheng, S., Song, M., Wang, Q., Rahimpour, A., Qi, H.: advpattern: Physical-world attacks on deep person re-identification via adversarially transformable patterns. In: ICCV. pp. 8341-8350 (2019)
+73. Wu, L., Zhu, Z., Tai, C., et al.: Understanding and enhancing the transferability of adversarial examples. arXiv preprint arXiv:1802.09707 (2018)
+74. Xiao, C., Zhu, J.Y., Li, B., He, W., Liu, M., Song, D.: Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612 (2018)
+75. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017)
+76. Xie, C., Zhang, Z., Zhou, Y., Bai, S., Wang, J., Ren, Z., Yuille, A.L.: Improving transferability of adversarial examples with input diversity. In: CVPR. pp. 2730-2739 (2019)
+77. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: Attacks and defenses for deep learning. IEEE TNNLS (2019)
+78. Zhong, Y., Deng, W.: Adversarial learning with margin-based triplet embedding regularization. In: ICCV. pp. 6549-6558 (2019)
\ No newline at end of file
diff --git a/adversarialrankingattackanddefense/images.zip b/adversarialrankingattackanddefense/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bb62f2b37c3e7a1a4a63ff0c1feaf903e4daecee
--- /dev/null
+++ b/adversarialrankingattackanddefense/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be83aa95cd5ee7d45956a8f33ef4c42a5a56e5f5f761c5f18fe7da14feb09340
+size 377997
diff --git a/adversarialrankingattackanddefense/layout.json b/adversarialrankingattackanddefense/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..eccfe1ac68d068a2a3cf00a68668eef15e9f3bfc
--- /dev/null
+++ b/adversarialrankingattackanddefense/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4a329685390321d0975ae9d1ffc96bbac222b1a8b0a669aa849179993f94185
+size 631423
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_content_list.json b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bea49f1b62fcc267a306a1cbb3098c4c8b500e6f
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:371616add34b822d70dd2ffea20ac065d8bc8db5ab923895757fd1ea60183e86
+size 67950
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_model.json b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d96773d53ca96b300c8a440ee0d1ce2f6420bfb4
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85411cca523604944608b7003ddbc08c6d9c783bad786467f115d06d6874a0fc
+size 87314
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_origin.pdf b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5e0e480cb6b9792f48b554568000c208df874db2
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/d94e2a68-b904-41bc-a480-11e38665a74c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc05504e2f9eb5f04716d39f61e058b71bd6a04d15347555229e7c4465d6adfd
+size 1965216
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/full.md b/adversarialrobustnessoninandoutdistributionimprovesexplainability/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..750716499680620637aa899b8d7bde053534edb8
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/full.md
@@ -0,0 +1,231 @@
+# Adversarial Robustness on In- and Out-Distribution Improves Explainability
+
+Maximilian Augustin, Alexander Meinke, and Matthias Hein
+University of Tübingen, Germany
+
+Abstract. Neural networks have led to major improvements in image classification but suffer from being non-robust to adversarial changes, unreliable uncertainty estimates on out-distribution samples and their inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has similar generative properties to adversarial training so that visual counterfactuals produce class specific features. While adversarial training comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_{2}$ -adversarial robustness on CIFAR10 and maintains better clean accuracy.
+
+# 1 Introduction
+
+Deep neural networks have shown phenomenal success in achieving high accuracy on challenging classification tasks [29]. However, they are lacking in terms of robustness against adversarial attacks [51], make overconfident predictions [20, 21] especially on out-of-distribution (OOD) data [41, 24] and their black-box decisions are inscrutable [56]. Progress has been made with respect to all these aspects but there is currently no approach which is accurate, robust, has good confidence estimates and is explainable. Adversarial training (AT) [34] leads to models robust against adversarial attacks in a defined threat model and has recently been shown to produce classifiers with generative capabilities [46]. However, AT typically suffers from a significant drop in accuracy and is over-confident on OOD data as we show in this paper. Adversarial confidence enhanced training (ACET) [21] enforces low confidence in a neighborhood around OOD samples and can be seen as adversarial training on the out-distribution. ACET leads to models with good OOD detection performance even in an adversarial setting and suffers from a smaller loss in clean accuracy compared to AT. However, ACET models typically are significantly less robust than adversarially trained models.
+
+In this paper we show that combining AT and ACET into RATIO, Robustness via Adversarial Training on In- and Out-distribution, inherits the good properties of adversarial training and ACET without, or at least with significantly reduced, negative effects, e.g. we get SOTA $l_{2}$ -robustness on CIFAR10 and have better clean accuracy than AT. On top of this we get reliable confidence estimates on the out-distribution even in a worst case scenario. In particular AT
+
+Table 1: Summary: We show clean and robust accuracy in an $l_{2}$ -threat model with $\epsilon = 0.5$ and the expected calibration error (ECE). For OOD detection we report the mean of clean and worst case AUC over several out-distributions in an $l_{2}$ -threat model with $\epsilon = 1.0$ as well as the mean maximal confidence (MMC) on the out-distributions. In light red we highlight failure cases for certain metrics. Only RATIO-0.25 ( $\mathrm{R}_{0.25}$ ) has good performance across all metrics.
+
+| CIFAR10 | Plain | OE | ACET | $\mathrm{M}_{0.5}$ | $\mathrm{AT}_{0.5}$ | $\mathrm{AT}_{0.25}$ | JEM-0 | $\mathrm{R}_{0.5}$ | $\mathrm{R}_{0.25}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Acc. ↑ | 96.2 | 96.4 | 94.1 | 90.8 | 90.8 | 94.0 | 92.8 | 91.1 | 93.5 |
+| R. Acc.$_{0.5}$ ↑ | 0.0 | 0.0 | 52.3 | 69.3 | 70.4 | 65.0 | 40.5 | 73.3 | 70.5 |
+| ECE (in %) ↓ | 1.0 | 2.9 | 2.8 | 2.6 | 2.2 | 2.2 | 3.9 | 2.8 | 2.7 |
+| AUC ↑ | 94.2 | 96.5 | 94.7 | 81.8 | 88.9 | 92.7 | 75.0 | 95.6 | 95.0 |
+| WC AUC$_{1.0}$ ↑ | 1.6 | 8.7 | 81.9 | 48.5 | 57.4 | 42.0 | 14.6 | 83.6 | 84.3 |
+| MMC ↓ | 62.0 | 31.9 | 39.1 | 62.7 | 55.8 | 55.2 | 69.7 | 31.9 | 33.9 |
+
+| SVHN | Plain | OE | ACET | $\mathrm{AT}_{0.5}$ | $\mathrm{AT}_{0.25}$ | $\mathrm{R}_{0.5}$ | $\mathrm{R}_{0.25}$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Acc. ↑ | 97.3 | 97.6 | 97.8 | 94.4 | 96.7 | 94.3 | 96.8 |
+| R. Acc.$_{0.5}$ ↑ | 0.9 | 0.3 | 28.8 | 68.1 | 63.0 | 68.4 | 64.8 |
+| ECE ↓ | 0.9 | 0.9 | 1.6 | 1.6 | 0.8 | 2.0 | 1.8 |
+| AUC ↑ | 96.9 | 99.6 | 99.8 | 91.0 | 97.0 | 99.8 | 99.9 |
+| WC AUC$_{1.0}$ ↑ | 8.5 | 18.2 | 96.0 | 51.1 | 48.3 | 97.5 | 97.5 |
+| MMC ↓ | 61.5 | 16.3 | 11.8 | 67.1 | 49.1 | 12.1 | 11.1 |
+
+yields highly overconfident predictions on out-distribution images in the absence of class-specific features, whereas RATIO only yields highly confident predictions if recognizable features are present. In summary, RATIO achieves high clean accuracy, is robust, calibrated and has generative properties which can be used to produce high-quality visual counterfactual explanations: see Table 1 for a summary of our results for CIFAR10 and SVHN and Table 2 for CIFAR100 and restricted ImageNet [54].
+
+# 2 Related Work
+
+Adversarial Robustness. Adversarial attacks are small changes of an image with respect to some distance measure which change the decision of a classifier [51]. Many defenses have been proposed, but with more powerful or adapted attacks most of them could be defeated [13, 8, 3, 38]. Adversarial training (AT) [34] is the most widely used approach that has not been broken. However, adversarial robustness comes at the price of a drop in accuracy [48, 50]. Recent variations use other losses [60] and boost robustness via the generation of additional training data [9, 1] or pre-training [26]. Another line of work is provable defenses, either deterministic [58, 12, 37, 17] or based on randomized smoothing [33, 30, 11]. However, provable defenses are still not competitive with the empirical robustness of adversarial training for datasets like CIFAR10 and have even worse accuracy. We show that using AT on the in-distribution and out-distribution leads to a smaller drop in clean accuracy and similar or better robustness.
+
+Confidence on In- and Out-distribution. Neural networks have been shown to yield overly confident predictions far away from the training data [41, 24, 32], and this is even provably the case for ReLU networks [21]. Moreover, large neural networks are not calibrated on the in-distribution and have a bias towards overconfidence [20]. The overconfidence on the out-distribution has been tackled in [31, 21, 25] by enforcing low-confidence predictions on a large out-distribution dataset; e.g., using the 80 million tiny images dataset [25] leads to state-of-the-art results. However, if one maximizes the confidence in a ball around out-distribution samples, most OOD methods are again overconfident [48, 21, 49, 35], and only AT on the out-distribution as in ACET [21] or methods providing guaranteed worst-case OOD performance [35, 7] work in this worst-case setting. We show that RATIO leads to better worst-case OOD performance than ACET.
+
+Counterfactual Explanations. Counterfactual explanations have been proposed in [56] as a tool for making classifier decisions plausible, since humans also justify decisions via counterfactuals "I would have decided for X, if Y had been true" [36]. Other forms are explanations based on image features [22, 23]. However, changing the decision for image classification in image space for non-robust models leads to adversarial samples [15] with changes that are visually not meaningful. Thus visual counterfactuals are often based on generative models or restrictions on the space of image manipulation [45, 42, 10, 18, 61, 57]. Robust models wrt $l_{2}$ -adversarial attacks [54, 46] have been shown to change their decision when class-specific features appear in the image, which is a prerequisite for meaningful counterfactuals [6]. RATIO generates better counterfactuals, i.e. the confidence of the counterfactual images obtained by an $l_{2}$ -adversarial attack tends to be high only after features of the alternative class have appeared. Especially for out-distribution images the difference to AT is pronounced.
+
+Robust, reliable and explainable classifiers. This is the holy grail of machine learning: a model which is accurate and calibrated [20] on the in-distribution, reliably has low confidence on out-distribution inputs, is robust to adversarial manipulation and has explainable decisions. To the best of our knowledge there is no model which claims to have all these properties. The closest one we are aware of is the JEM-0 of [19], which is supposed to be robust, detect out-of-distribution samples and have generative properties. They state "JEM does not confidently classify nonsensical images, so instead, ... natural image properties visibly emerge". We show that RATIO gets us closer to this ultimate goal and outperforms JEM-0 in all aspects: accuracy, robustness, (worst-case) out-of-distribution detection, and visual counterfactual explanations.
+
+# 3 RATIO: Robust, Reliable and Explainable Classifier
+
+In the following we consider multi-class (image) classification. We have the logits of a classifier $f:[0,1]^d\to \mathbb{R}^K$, where $d$ is the input dimension and $K$ the number of classes. With $\varDelta=\{p\in[0,1]^{K}\mid\sum_{i=1}^{K}p_{i}=1\}$ denoting the probability simplex, the predicted probability distribution of $f$ over the labels is $\hat{p}:\mathbb{R}^d\to \varDelta$, obtained using the softmax function: $\hat{p}_{f,s}(x)=\frac{e^{f_s(x)}}{\sum_{j=1}^{K}e^{f_j(x)}},\; s=1,\ldots,K$. We
+
+further denote the training set by $(x_{i},y_{i})_{i = 1}^{N}$ with $x_{i}\in [0,1]^{d}$ and $y_{i}\in \{1,\ldots ,K\}$ . As loss we always use the cross-entropy loss defined as
+
+$$
+L(p,\hat{p}_f) = -\sum_{j=1}^{K} p_j \log\big(\hat{p}_{f,j}\big), \tag{1}
+$$
+
+where $p\in \varDelta$ is the true distribution and $\hat{p}_f$ the predicted distribution.
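As a concrete illustration, eq. (1) together with the softmax prediction can be written in a few lines of numpy. This is a generic sketch, not the authors' code, and the function names are our own:

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit before exponentiating for numerical stability.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, logits):
    # Eq. (1): L(p, p_hat_f) = -sum_j p_j * log(p_hat_{f,j}).
    return -np.sum(p * np.log(softmax(logits) + 1e-12), axis=-1)
```

With a one-hot target $\mathbf{e}_y$ this reduces to $-\log \hat{p}_{f,y}$; with the uniform target $\mathbf{1}/K$ used later for the out-distribution it is minimized exactly when the prediction is uniform.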
+
+# 3.1 Robustness via Adversarial Training
+
+An adversarial sample of $x$ with respect to some threat model $T(x) \subset \mathbb{R}^d$ is a point $z \in T(x) \cap [0,1]^d$ such that the decision of the classifier $f$ changes for $z$ while an oracle would unambiguously associate $z$ with the class of $x$ . In particular this implies that $z$ shows no meaningful class-associated features of any other class. Formally, let $y$ be the correct label of $x$ , then $z$ is an adversarial sample if
+
+$$
+\max_{k \neq y} f_k(z) > f_y(z), \quad z \in [0,1]^d \cap T(x), \tag{2}
+$$
+
+assuming that the threat model is small enough such that no real class change occurs. Typical threat models are $l_{p}$ -balls of a given radius $\epsilon$ , that is
+
+$$
+T(x) = B_p(x,\epsilon) = \{ z \in \mathbb{R}^d \mid \|z - x\|_p \leq \epsilon \}. \tag{3}
+$$
+
+The robust test accuracy is then defined as the lowest possible accuracy when every test image $x$ is allowed to be changed to some $z \in T(x) \cap [0,1]^d$ . Plain models have a robust test accuracy close to zero, even for "small" threat models.
+
+Several strategies for adversarial robustness have been proposed, but adversarial training (AT) [34] has proven to produce robust classifiers across datasets and network architectures without adding significant computational overhead during inference (compared to randomized smoothing [33, 30, 11]).
+
+The objective of adversarial training for a threat model $T(x) \subset \mathbb{R}^d$ is:
+
+$$
+\min_f \; \mathbb{E}_{(x,y) \sim p_{\mathrm{in}}} \Big[ \max_{z \in T(x)} L\big(\mathbf{e}_y, \hat{p}_f(z)\big) \Big], \tag{4}
+$$
+
+where $\mathbf{e}_y$ is a one-hot encoding of label $y$ and $p_{\mathrm{in}}(x,y)$ is the training distribution. During training one approximately solves the inner maximization problem in equation 4 via projected gradient descent (PGD) and then computes the gradient wrt $f$ at the approximate solution of the inner problem. The community has put emphasis on robustness wrt $l_{\infty}$ but recently there is more interest in other threat models e.g. $l_{2}$ -balls [53,44,46]. In particular, it has been noted [54,46] that robust models wrt an $l_{2}$ -ball have the property that "adversarial" samples generated within a sufficiently large $l_{2}$ -ball tend to have image features of the predicted class. Thus they are not "adversarial" samples in the sense defined above as the true class has changed or is at least ambiguous.
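The inner maximization of eq. (4) via PGD in an $l_2$-ball can be sketched on a toy linear model, where the cross-entropy gradient is available in closed form (a real network would obtain the gradient via automatic differentiation; the step size and iteration count below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_l2(W, x, y, eps, steps=40):
    """Approximate inner maximization of eq. (4) for an l2 threat model on a
    toy linear model f(z) = W z, whose cross-entropy gradient is
    W^T (p_hat - e_y) in closed form."""
    step = 2.5 * eps / steps
    e_y = np.eye(W.shape[0])[y]
    z = x.copy()
    for _ in range(steps):
        grad = W.T @ (softmax(W @ z) - e_y)   # ascent direction on the loss
        n = np.linalg.norm(grad)
        if n > 0:
            z = z + step * grad / n           # normalized l2 gradient step
        # Project back onto the l2-ball around x, then onto the box [0,1]^d.
        d = z - x
        nd = np.linalg.norm(d)
        if nd > eps:
            d = d * (eps / nd)
        z = np.clip(x + d, 0.0, 1.0)
    return z
```

Each iteration takes a normalized gradient ascent step on the loss and projects back onto $B_2(x,\epsilon) \cap [0,1]^d$.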
+
+The main problem of AT is that robust classifiers suffer from a significant drop in accuracy compared to normal training [54]. This trade-off [47, 50] can be mitigated e.g. via training $50\%$ on clean samples and $50\%$ on adversarial samples at the price of reduced robustness [50] or via semi-supervised learning [55, 39, 9].
+
+# 3.2 Worst-case OOD detection via Adversarial Training on the Out-distribution
+
+While adversarial training yields robust classifiers, similarly to plain models it suffers from overconfident predictions on out-of-distribution samples. Overconfident predictions are a problem for safety-critical systems as the classifier does not reliably flag when it operates "out of its specification", and thus its confidence in the prediction cannot be used to trigger human intervention.
+
+In order to mitigate overconfident predictions, [21, 25] proposed to enforce low confidence on images from a chosen out-distribution $p_{\mathrm{out}}(x)$. A generic out-distribution would be all natural images, and thus [25] suggests the 80 million tiny images dataset [52] as a proxy for this. While [25] consistently reduces the confidence on different out-of-distribution datasets, similarly to plain training on the in-distribution one can again get overconfident predictions by maximizing the confidence in a small ball around a given out-distribution image (adversarial attacks on the out-distribution [21, 35]).
+
+Thus [21] proposed Adversarial Confidence Enhanced Training (ACET), which enforces low confidence in an entire neighborhood around out-distribution samples; this can be seen as a form of AT on the out-distribution:
+
+$$
+\min_f \; \mathbb{E}_{(x,y) \sim p_{\mathrm{in}}} \big[ L(\mathbf{e}_y, \hat{p}_f(x)) \big] + \lambda \, \mathbb{E}_{(x,y) \sim p_{\mathrm{out}}} \Big[ \max_{\|z - x\|_2 \leq \epsilon} L(\mathbf{1}/K, \hat{p}_f(z)) \Big], \tag{5}
+$$
+
+where $\mathbf{1}$ is the vector of all ones (outlier exposure [25] has the same objective without the inner maximization for the out-distribution). Different from [21] we use the same loss for in- and out-distribution, whereas they used the maximal log-confidence over all classes as the loss for the out-distribution. In our experience the maximal log-confidence is more difficult to optimize, but both losses are minimized by the uniform distribution over the labels. Thus the difference is rather small and we also denote this version as ACET.
+
+# 3.3 RATIO: Robustness via Adversarial Training on In- and Out-distribution
+
+We propose RATIO: adversarial training on in- and out-distribution. This combination leads to synergy effects where the positive attributes of AT and ACET are fused without incurring their larger drawbacks. The objective of RATIO is given by:
+
+$$
+\min_f \; \mathbb{E}_{(x,y) \sim p_{\mathrm{in}}} \Big[ \max_{\|z - x\|_2 \leq \epsilon_i} L\big(\mathbf{e}_y, \hat{p}_f(z)\big) \Big] + \lambda \, \mathbb{E}_{(x,y) \sim p_{\mathrm{out}}} \Big[ \max_{\|z - x\|_2 \leq \epsilon_o} L\big(\mathbf{1}/K, \hat{p}_f(z)\big) \Big], \tag{6}
+$$
+
+where $\lambda$ can be interpreted as $\frac{p_o}{p_i}$, the ratio of the probabilities $p_o$ and $p_i$ of seeing out- and in-distribution samples at test time. Here we have specified an $l_2$-threat model for in- and out-distribution, but the objective can be adapted to other threat models, which could even differ between in- and out-distribution. The surprising part of RATIO is that the addition of the out-distribution part can improve the results even on the in-distribution in terms of (robust) accuracy. The reason
+
+is that adversarial training on the out-distribution ensures that spurious features do not change the confidence of the classifier. This behavior generalizes to the in-distribution, and thus ACET (adversarial training on the out-distribution) is also robust on the in-distribution (52.3% robust accuracy for $l_{2}$ with $\epsilon = 0.5$ on CIFAR10). One problem of adversarial training is overfitting on the training set [43]. RATIO has seen more images at training time, and while the direct goals are distinct (keeping a one-hot prediction on the in-distribution and a uniform prediction on the out-distribution), both aim at constant behavior of the classifier over the $l_{2}$-ball; the effectively increased training set size thus improves generalization (in contrast to AT, RATIO reaches its peak robustness at the end of training). Moreover, RATIO typically only shows high confidence once class-specific features have appeared, which we use in the generative process described next.
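Given the attacked points from the two inner maximization problems, the RATIO objective of eq. (6) is a weighted sum of two cross-entropy terms: one-hot targets on the in-distribution batch and the uniform target $\mathbf{1}/K$ on the out-distribution batch. A minimal numpy sketch (function names are ours, not the authors'):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ratio_loss(logits_in_adv, y, logits_out_adv, lam):
    """The two terms of eq. (6), evaluated at already-attacked points:
    cross-entropy to the one-hot label on the (adversarial) in-distribution
    batch, plus lam times cross-entropy to the uniform distribution 1/K on
    the (adversarial) out-distribution batch."""
    K = logits_in_adv.shape[1]
    p_in = softmax(logits_in_adv)
    p_out = softmax(logits_out_adv)
    loss_in = -np.mean(np.log(p_in[np.arange(len(y)), y] + 1e-12))
    loss_out = -np.mean(np.sum(np.log(p_out + 1e-12), axis=1)) / K
    return loss_in + lam * loss_out
```

A training step would first run the PGD attacks on both batches and then backpropagate this combined loss through the network.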
+
+# 4 Visual Counterfactual Explanations
+
+The idea of a counterfactual explanation [56] is to provide the smallest change of a given input such that the decision changes into a desired target class, e.g. how would this X-ray image need to look in order to change the diagnosis from X to Y. Compared to sensitivity-based explanations [5, 59] or explanations based on feature attributions [4], counterfactual explanations have the advantage that they have an "operational meaning" which couples the explanation directly to the decision of the classifier. On the other hand, a counterfactual explanation requires us to specify a metric or a budget for the allowed change of the image, which can be done directly in image space or in the latent space of a generative model. However, our goal is that the classifier directly learns what meaningful changes are, and we do not want to impose that via a generative model. Thus we aim at visual counterfactual explanations directly in image space with a fixed budget for changing the image. As the decision changes, features of the target class should appear in the image (see Figure 2). Normally trained models will not achieve this since non-robust models change their prediction under non-perceptible perturbations [51], see Figure 1. Thus robustness against $l_{2}$-adversarial perturbations is a necessary requirement for visual counterfactuals, and indeed [54, 46] have shown "generative properties" of $l_{2}$-robust models.
+
+A visual counterfactual for the original point $x$ classified as $c = \operatorname*{arg\max}_{k=1,\ldots,K} f_k(x)$ , a target class $t \in \{1, \ldots, K\}$ and a budget $\epsilon$ is defined as
+
+$$
+x^{(t)} = \underset{z \in [0,1]^d, \, \|x - z\|_2 \leq \epsilon}{\arg\max} \; \hat{p}_{f,t}(z), \tag{7}
+$$
+
+where $\hat{p}_{f,t}(z)$ is the confidence for class $t$ of our classifier for the image $z$ . If $t \neq c$ it answers the counterfactual question of how to use the given budget to change the original input $x$ so that the classifier is most confident in class $t$ . Note that in our definition we include the case where $t = c$ , that is we ask how to change the input $x$ classified as $c$ to get even more confident in class $c$ . In Figure 2 we illustrate both directions and show how for robust models class specific image
+
+Table 2: Summary for CIFAR100 and R. ImageNet (see Table 1 for details).
+
+| CIFAR100 | Plain | OE | ACET | $\mathrm{AT}_{0.5}$ | $\mathrm{AT}_{0.25}$ | $\mathrm{R}_{0.5}$ | $\mathrm{R}_{0.25}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Acc. ↑ | 81.5 | 81.4 | - | 70.6 | 75.8 | 69.2 | 74.4 |
+| R. Acc.$_{0.5}$ ↑ | 0.0 | 0.0 | - | 43.2 | 37.3 | 45.6 | 42.4 |
+| ECE ↓ | 1.2 | 7.2 | - | 1.3 | 1.5 | 3.2 | 2.0 |
+| AUC ↑ | 84.0 | 91.9 | - | 75.6 | 79.4 | 87.0 | 86.9 |
+| WC AUC$_{1.0}$ ↑ | 0.4 | 14.6 | - | 29.9 | 24.8 | 55.5 | 54.5 |
+| MMC ↓ | 51.1 | 21.8 | - | 45.8 | 47.1 | 24.4 | 31.0 |
+
+| R. ImageNet | Plain | OE | ACET | $\mathrm{M}_{3.5}$ | $\mathrm{AT}_{3.5}$ | $\mathrm{AT}_{1.75}$ | $\mathrm{R}_{3.5}$ | $\mathrm{R}_{1.75}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Acc. ↑ | 96.6 | 97.2 | 96.2 | 90.3 | 93.5 | 95.5 | 93.9 | 95.5 |
+| R. Acc.$_{3.5}$ ↑ | 0.0 | 0.0 | 6.2 | 47.7 | 47.7 | 36.7 | 49.2 | 43.0 |
+| ECE ↓ | 0.6 | 1.8 | 0.9 | 0.7 | 0.9 | 0.5 | 0.3 | 0.7 |
+| AUC ↑ | 92.7 | 98.9 | 97.74 | 83.6 | 84.3 | 86.5 | 97.2 | 97.8 |
+| WC AUC$_{7.0}$ ↑ | 0.0 | 1.8 | 87.54 | 44.2 | 37.5 | 16.3 | 90.9 | 90.6 |
+| MMC ↓ | 67.9 | 20.6 | 34.85 | 69.2 | 75.2 | 81.8 | 33.6 | 32.3 |
+
+features appear when optimizing the confidence of that class. This shows that the optimization of visual counterfactuals can be done directly in image space.
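The counterfactual optimization of eq. (7) is projected gradient ascent on the target-class confidence. A toy sketch on a linear model (the paper uses Auto-PGD on a deep network; the step schedule here is our assumption):

```python
import numpy as np

def counterfactual(W, x, t, eps, steps=200):
    """Sketch of eq. (7): maximize the confidence p_t over the l2-ball of
    radius eps around x, intersected with [0,1]^d, for a toy linear model
    f(z) = W z (gradient of log p_t is W^T (e_t - p) in closed form)."""
    z = x.copy()
    e_t = np.eye(W.shape[0])[t]
    step = 2.5 * eps / steps
    for _ in range(steps):
        logits = W @ z
        p = np.exp(logits - logits.max())
        p = p / p.sum()
        grad = W.T @ (e_t - p)          # d/dz of log p_t(z)
        n = np.linalg.norm(grad)
        if n > 0:
            z = z + step * grad / n
        # Project onto the l2-ball around x and onto the image box [0,1]^d.
        d = z - x
        nd = np.linalg.norm(d)
        if nd > eps:
            d = d * (eps / nd)
        z = np.clip(x + d, 0.0, 1.0)
    return z
```

With $t$ equal to the predicted class this "strengthens" the prediction; with $t \neq c$ it answers the counterfactual question with the given budget $\epsilon$.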
+
+
+Fig. 1: Failure of a visual counterfactual for a plain model. The targeted attack immediately produces very high confidence in both classes but instead of class features only high-frequency noise appears because plain models are not robust.
+
+# 5 Experiments
+
+Comparison, Training and Attacks. We validate our approach on SVHN [40], CIFAR10/100 [28] and restricted ImageNet [46]. On CIFAR10 we compare RATIO to a pretrained JEM-0 [19] and the AT model [16] with $l_{2} = 0.5$ ( $\mathrm{M}_{0.5}$ ) (both not available on the other datasets). As an ablation study of RATIO we train a plain model, outlier exposure (OE) [25], ACET [21] and AT with $l_{2} = 0.5$
+
+$(\mathrm{AT}_{0.5})$ and $l_{2} = 0.25$ $(\mathrm{AT}_{0.25})$, using the same hyperparameters as for our RATIO training. On SVHN we use a ResNet18 architecture for all methods and on the other datasets we use ResNet50, both with standard input normalization. For ACET on CIFAR10 we use ResNet18, since for ResNet50 we could not obtain a model with good worst-case OOD performance as the attack seemed to fail at some point during training (on CIFAR100 this was even the case for ResNet18 and thus we omit ACET from the comparison). In general ACET is difficult to train. For RATIO the additional adversarial training on the in-distribution seems to stabilize training, and we did not encounter any problems. As out-distribution for SVHN and CIFAR we use the 80 million tiny images dataset [52] as suggested in [25], and for Restricted ImageNet the remaining ImageNet classes. For the out-distribution we always use $l_{2}$-attacks with radius $\epsilon_{o} = 1$ for SVHN/CIFAR and $\epsilon_{o} = 7$ for Restricted ImageNet (both ACET and RATIO), whereas on the in-distribution we use $\epsilon_{i} = 0.25$ and $\epsilon_{i} = 0.5$, respectively $\epsilon_{i} = 1.75$ and $\epsilon_{i} = 3.5$ (both AT and RATIO). Therefore RATIO/AT models are labeled by $\epsilon_{i}$. For further training details see the Appendix. For the adversarial attacks on in- and out-distribution we use the recent Auto-Attack [13], an ensemble of four attacks: the black-box Square Attack [2] and three white-box attacks (FAB-attack [14] and Auto-PGD with two different losses). For each of the white-box attacks a budget of 100 iterations and 5 restarts is used, and a query limit of 5000 for the Square Attack. In [13] it is shown that Auto-Attack consistently improves the robustness evaluation for a large number of models (including JEM-x).
+
+Calibration on the in-distribution. With RATIO we aim for reliable confidence estimates, in particular no overconfident predictions. In order to have comparable confidences for the different models we train, especially when we check visual counterfactuals or feature generation, we first need to "align" their confidence. We do this by minimizing the expected calibration error (ECE) via temperature rescaling [20]. Note that this rescaling does not change the classification and thus has no impact on (robust) accuracy and only a minor influence on the (worst-case) AUC values for OOD detection. For details see the Appendix.
+
+(Robust) Accuracy on the in-distribution. Using Auto-Attack [13] we evaluate robustness on the full test set for both CIFAR and R. ImageNet and on 10000 test samples for SVHN. Tables 1 and 2 contain (robust $l_{2}$) accuracy; detailed results, including $l_{\infty}$-attacks, can be found in the Appendix. On CIFAR10, RATIO achieves significantly higher robust accuracy than AT for $l_{2}$- and $l_{\infty}$-attacks. Thus the additional adversarial training on the out-distribution with radius $\epsilon_{o} = 1$ boosts the robustness on the in-distribution. In particular, $\mathrm{RATIO}_{0.25}$ achieves better $l_{2}$-robustness than $\mathrm{AT}_{0.5}$ and $\mathrm{M}_{0.5}$ at $\approx 2.7\%$ higher clean accuracy. In addition, $\mathrm{R}_{0.5}$ yields new state-of-the-art $l_{2}$-robust accuracy at radius 0.5 (see [13] for a benchmark) while having higher test accuracy than $\mathrm{AT}_{0.5}$ and $\mathrm{M}_{0.5}$. Moreover, the $l_{2}$-robustness at radius 1.0 and the $l_{\infty}$-robustness at 8/255 are significantly better. Interestingly, although ACET is not designed to yield adversarial robustness on the in-distribution, it achieves more than $50\%$ robust accuracy for $l_{2} = 0.5$ and outperforms JEM-0 in all benchmarks. However, as our goal is to have a model which is both robust and accurate, we recommend using $\mathrm{R}_{0.25}$ for
+
+CIFAR10, which has a drop of only $2.6\%$ in test accuracy compared to a plain model while having similar robustness to $\mathrm{M}_{0.5}$ and $\mathrm{AT}_{0.5}$. Similar observations hold for CIFAR100 and Restricted ImageNet, see Table 2, even though for CIFAR100 AT and RATIO suffer a higher loss in accuracy. On SVHN, RATIO outperforms AT in terms of robust accuracy when trained with the same $l_{2}$-radius, but the effect is smaller than for CIFAR10. We believe this is due to the fact that the images obtained from the 80 million tiny images dataset (out-distribution) do not reflect the specific structure of SVHN numbers, which makes (worst-case) outlier detection an easier task. This is supported by the fact that ACET achieves better clean accuracy on SVHN than both OE and the plain model while it has worse clean accuracy on CIFAR10.
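The expected calibration error minimized via temperature rescaling above can be computed with equal-width confidence bins. A minimal numpy sketch (the bin count is our assumption; the paper does not specify it):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: bin predictions by confidence and take the weighted average of
    |accuracy - mean confidence| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = np.mean(correct[mask])
            conf = np.mean(confidences[mask])
            ece += mask.mean() * abs(acc - conf)
    return ece
```

Temperature rescaling then amounts to a one-dimensional search for the temperature $T$ dividing the logits that minimizes this quantity on a validation set.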
+
+Visual Counterfactual Generation. We use 500-step Auto-PGD [13] for a targeted attack with the objective in equation 7. However, note that this nonconvex optimization problem has been shown to be NP-hard [27]. In Figures 2, 3 and 4 and in the Appendix we show generated counterfactuals for all datasets. For CIFAR10, $\mathrm{AT}_{0.5}$ performs very similarly to $\mathrm{RATIO}_{0.25}$ in terms of the emergence of class-specific image features. In particular, we often see the appearance of characteristic features such as pointed ears for cats, wheels for cars and trucks, large eyes for both cats and dogs and antlers for deer. JEM-0 and ACET perform worse, but for both of them one observes the appearance of image features. However, particularly the images of JEM-0 have a lot of artefacts. For SVHN, $\mathrm{RATIO}_{0.25}$ on average performs better than $\mathrm{AT}_{0.25}$ and ACET. It is interesting to note that for both datasets class-specific features emerge already for an $l_{2}$-radius of 1.0. Thus it seems questionable whether $l_{2}$-adversarial robustness beyond a radius of 1.0 should be enforced. Due to the larger number of classes, CIFAR100 counterfactuals are of slightly lower quality. For Restricted ImageNet the visual counterfactuals show class-specific features but can often be identified as synthetic due to misaligned features.
+
+Reliable Detection of (Worst-case) Out-of-Distribution Images. A reliable classifier should assign low confidence to OOD images. This is not the case for plain models and AT. As the 80 million tiny images dataset has been used for training ACET and RATIO (respectively the remaining ImageNet classes for Restricted ImageNet), we evaluate the discrimination of in-distribution versus out-distribution on other datasets as in [35], see the Appendix for details. We use $\max_k\hat{p}_{f,k}(x)$ as the feature to discriminate in- and out-distribution (binary classification) and compute the AUC. However, it has been shown that even state-of-the-art methods like outlier exposure (OE) suffer from overconfident predictions if one searches for the most confident prediction in a small neighborhood around the out-distribution image [35]. Thus we also report the worst-case AUC obtained by maximizing the confidence in an $l_{2}$-ball of radius 1.0 (resp. 7.0 for R. ImageNet) around OOD images via Auto-PGD [13] with 100 steps and 5 random restarts. Figure 5 further shows that while RATIO behaves similarly to AT around samples from the data distribution, which explains the similar counterfactuals, it has a flatter confidence profile around out-distribution samples.
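The (worst-case) AUC used here reduces to a rank statistic on the maximal softmax confidences; a small numpy sketch, where passing the attacked (confidence-maximized) out-distribution scores yields the worst-case variant:

```python
import numpy as np

def ood_auc(conf_in, conf_out):
    """AUC for separating in- from out-distribution using the maximal
    softmax confidence as score (higher on in-distribution is better).
    Equals the fraction of (in, out) pairs ranked correctly, counting
    ties as one half (the Mann-Whitney U statistic, normalized)."""
    s_in = np.asarray(conf_in, dtype=float)[:, None]
    s_out = np.asarray(conf_out, dtype=float)[None, :]
    greater = (s_in > s_out).mean()
    ties = (s_in == s_out).mean()
    return greater + 0.5 * ties
```

An AUC of 1.0 means every in-distribution image receives higher confidence than every (possibly attacked) out-distribution image; 0.5 is chance level.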
+
+
+Fig. 2: Visual Counterfactuals (CIFAR10): The dog image on the left is misclassified by all models (confidence for true and predicted class are shown). The top row shows visual counterfactuals for the correct class (how to change the image so that it is classified as dog) and the bottom row shows how to change the image in order to increase the confidence in the wrong prediction for different budgets of the $l_{2}$ -radius ( $\epsilon = 0.5$ to $\epsilon = 3$ ). More examples are in the appendix.
+
+
+Fig. 3: Visual Counterfactuals (SVHN): The 5 on the left is misclassified by all models. We show counterfactuals for the true class and the predicted class (see Figure 2). RATIO consistently produces samples with fewer artefacts than AT.
+
+The worst-case confidence is computed on 1024 points from each out-distribution (300 points for LSUN_CR). Using the worst-case confidence of these points we find empirical upper bounds on the worst-case AUC under our threat model. We report both the average-case and the worst-case AUCs in the Appendix; the average AUC over all OOD datasets is reported in Tables 1 and 2. The AT model of Madry et al. $(\mathrm{M}_{0.5})$ performs worse than the plain model even on the average-case task. However, we see that with our more aggressive data augmentation this problem is somewhat alleviated ($\mathrm{AT}_{0.5}$ and $\mathrm{AT}_{0.25}$). As expected, ACET has good worst-case OOD performance but is similar to the plain model in the average case. JEM-0 has bad worst-case AUCs and we cannot confirm the claim that "JEM does not confidently classify nonsensical images" [19]. As expected, OE has state-of-the-art performance on the clean task but has no robustness on the out-distribution, so it fails completely in this regime. Our RATIO models show strong performance on all tasks and even outperform the ACET model, which shows that adversarial robustness wrt the in-distribution also helps with adversarial robustness on the out-distribution. On SVHN the average-case OOD task is simple enough that several models achieve near perfect AUCs, but again only ACET and our RATIO models manage to retain strong performance in the worst-case setting. The worst-case AUC of AT models is significantly worse than that of ACET and RATIO.
+
+
+Fig. 4: Visual Counterfactuals. Top: RATIO-0.25 for CIFAR100; bottom: RATIO-1.75 for Restricted ImageNet.
+
+Feature Generation on OOD images. Finally, we test the ability to generate image features via a targeted attack on OOD images (taken from the 80 million tiny images dataset, resp. ImageNet classes not belonging to R. ImageNet). The setting is similar to the visual counterfactuals: we take an OOD image and then optimize the confidence in the class which is predicted on the OOD image. The results can be found in Figures 7 and 6 and additional samples are in the Appendix. For CIFAR10 all methods are able to generate image features of the class, but the predicted confidences are only reasonable for ACET and $\mathrm{RATIO}_{0.25}$, whereas $\mathrm{AT}_{0.5}$ and JEM-0 are overconfident when no strong class features are visible. This observation generalizes to SVHN and largely to CIFAR100 and R. ImageNet, i.e. RATIO generally has the best OOD-confidence profile.
+
+Summary. In summary, Tables 1 and 2 show that $\mathrm{RATIO}_{0.25}$ resp. $\mathrm{RATIO}_{1.75}$ is, except for CIFAR100, the only model with no clear failure case. Here the subjective definition of a failure case (highlighted in red) is an entry that is significantly worse than the best possible value in this metric. Thus we think that RATIO succeeds in producing a state-of-the-art model which is accurate, robust, has reliable confidence and is able to produce meaningful visual counterfactuals. Nevertheless, RATIO is not perfect, and we discuss failure cases of all models in the Appendix.
+
+
+(a) ID worst-case confidence
+
+
+(b) OD worst-case confidence
+
+
+Fig.5: (a) Mean confidence in true label as a function of the attack $l_{2}$ -radius around CIFAR10 test images. RATIO and AT0.5 have a reasonable decay of the confidence. (b) Mean of maximal confidence around OD-data (tiny images) over the attack $l_{2}$ -radius. All methods except RATIO and ACET are overconfident.
+Fig. 6: Feature Generation for out-distribution images top: RATIO-0.25 for CIFAR100 and bottom: RATIO-1.75 for R.ImageNet
+
+# 6 Conclusion and Outlook
+
+We have shown that adversarial robustness on the in-distribution and out-distribution (as a proxy for all natural images) gets us closer to a classifier which is accurate, robust, has reliable confidence estimates and is able to produce visual counterfactual explanations with strong class-specific image features. For usage in safety-critical systems it would be ideal if these properties could be achieved in a provable way, which remains an open problem.
+
+# Acknowledgements
+
+M.H and A.M. acknowledge support by the BMBF Tübingen AI Center (FKZ: 01IS18039A) and by DFG TRR 248, project number 389792660 and the DFG Excellence Cluster Machine Learning -New Perspectives for Science, EXC 2064/1, project number 390727645. A.M. thanks the IMPRS for Intelligent Systems.
+
+
+Fig. 7: Feature Generation for out-distribution images (CIFAR10 (top), SVHN (bottom)): targeted attacks towards the class achieving highest confidence on the original image for different budgets of the $l_{2}$ -radius ranging from $\epsilon = 0.5$ to $\epsilon = 3$ . RATIO-0.25 generates the visually best images and in particular has reasonable confidence values for its decision. While AT-0.5/AT-0.25 also generate good images, they are overconfident in the target class.
+
+# References
+
+1. Alayrac, J.B., Uesato, J., Huang, P.S., Fawzi, A., Stanforth, R., Kohli, P.: Are labels required for improving adversarial robustness? In: NeurIPS (2019)
+2. Andriushchenko, M., Croce, F., Flammarion, N., Hein, M.: Square attack: a query-efficient black-box adversarial attack via random search. In: ECCV (2020)
+3. Athalye, A., Carlini, N., Wagner, D.A.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: ICML (2018)
+4. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One 10(7), e0130140 (2015)
+5. Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., Müller, K.R.: How to explain individual classification decisions. Journal of Machine Learning Research (JMLR) 11, 1803-1831 (2010)
+6. Barocas, S., Selbst, A.D., Raghavan, M.: The hidden assumptions behind counterfactual explanations and principal reasons. In: FAT (2020)
+7. Bitterwolf, J., Meinke, A., Hein, M.: Provable worst case guarantees for the detection of out-of-distribution data. arXiv:2007.08473 (2020)
+8. Carlini, N., Wagner, D.: Adversarial examples are not easily detected: Bypassing ten detection methods. In: ACM Workshop on Artificial Intelligence and Security (2017)
+9. Carmon, Y., Raghunathan, A., Schmidt, L., Duchi, J.C., Liang, P.S.: Unlabeled data improves adversarial robustness. In: NeurIPS (2019)
+10. Chang, C.H., Creager, E., Goldenberg, A., Duvenaud, D.: Explaining image classifiers by counterfactual generation. In: ICLR (2019)
+11. Cohen, J.M., Rosenfeld, E., Kolter, J.Z.: Certified adversarial robustness via randomized smoothing. In: ICML (2019)
+12. Croce, F., Andriushchenko, M., Hein, M.: Provable robustness of relu networks via maximization of linear regions. In: AISTATS (2019)
+13. Croce, F., Hein, M.: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In: ICML (2020)
+14. Croce, F., Hein, M.: Minimally distorted adversarial examples with a fast adaptive boundary attack. In: ICML (2020)
+15. Dong, Y., Su, H., Zhu, J., Bao, F.: Towards interpretable deep neural networks by leveraging adversarial examples (2017), arXiv preprint, arXiv:1708.05493
+16. Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D.: Robustness (python library) (2019), https://github.com/MadryLab/robustness
+17. Gowal, S., Dvijotham, K., Stanforth, R., Bunel, R., Qin, C., Uesato, J., Arandjelovic, R., Mann, T.A., Kohli, P.: On the effectiveness of interval bound propagation for training verifiably robust models (2018), preprint, arXiv:1810.12715v3
+18. Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., Lee, S.: Counterfactual visual explanations. In: ICML (2019)
+19. Grathwohl, W., Wang, K.C., Jacobsen, J.H., Duvenaud, D., Norouzi, M., Swersky, K.: Your classifier is secretly an energy based model and you should treat it like one. In: ICLR (2020)
+20. Guo, C., Pleiss, G., Sun, Y., Weinberger, K.: On calibration of modern neural networks. In: ICML (2017)
+21. Hein, M., Andriushchenko, M., Bitterwolf, J.: Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In: CVPR (2019)
+
+22. Hendricks, L.A., Akata, Z., Rohrbach, M., Donahue, J., Schiele, B., Darrell, T.: Generating visual explanations. In: ECCV (2016)
+23. Hendricks, L.A., Hu, R., Darrell, T., Akata, Z.: Grounding visual explanations. In: ECCV (2018)
+24. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: ICLR (2017)
+25. Hendrycks, D., Mazeika, M., Dietterich, T.: Deep anomaly detection with outlier exposure. In: ICLR (2019)
+26. Hendrycks, D., Lee, K., Mazeika, M.: Using pre-training can improve model robustness and uncertainty. In: ICML. pp. 2712-2721 (2019)
+27. Katz, G., Barrett, C., Dill, D., Julian, K., Kochenderfer, M.: Reluplex: An efficient smt solver for verifying deep neural networks. In: CAV (2017)
+28. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
+29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521 (2015)
+30. Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., Jana, S.: Certified robustness to adversarial examples with differential privacy. In: IEEE Symposium on Security and Privacy (SP) (2019)
+31. Lee, K., Lee, H., Lee, K., Shin, J.: Training confidence-calibrated classifiers for detecting out-of-distribution samples. In: ICLR (2018)
+32. Leibig, C., Allken, V., Ayhan, M.S., Berens, P., Wahl, S.: Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports 7 (2017)
+33. Li, B., Chen, C., Wang, W., Carin, L.: Certified adversarial robustness with additive noise. In: NeurIPS (2019)
+34. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
+35. Meinke, A., Hein, M.: Towards neural networks that provably know when they don't know. In: ICLR (2020)
+36. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1-38 (2019)
+37. Mirman, M., Gehr, T., Vechev, M.: Differentiable abstract interpretation for provably robust neural networks. In: ICML (2018)
+38. Mosbach, M., Andriushchenko, M., Trost, T., Hein, M., Klakow, D.: Logit pairing methods can fool gradient-based attacks. In: NeurIPS 2018 Workshop on Security in Machine Learning (2018)
+39. Najafi, A., Maeda, S.i., Koyama, M., Miyato, T.: Robustness to adversarial perturbations in learning from incomplete data. In: NeurIPS (2019)
+40. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning (2011)
+41. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: CVPR (2015)
+42. Parafita, Á., Vitrià, J.: Explaining visual models by causal attribution. In: ICCV Workshop on XCAI (2019)
+43. Rice, L., Wong, E., Kolter, J.Z.: Overfitting in adversarially robust deep learning. In: ICML (2020)
+44. Rony, J., Hafemann, L.G., Oliveira, L.S., Ayed, I.B., Sabourin, R., Granger, E.: Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses. In: CVPR (2019)
+
+45. Samangouei, P., Saeedi, A., Nakagawa, L., Silberman, N.: ExplainGAN: Model explanation via decision boundary crossing transformations. In: ECCV (2018)
+46. Santurkar, S., Tsipras, D., Tran, B., Ilyas, A., Engstrom, L., Madry, A.: Computer vision with a single (robust) classifier. In: NeurIPS (2019)
+47. Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., Madry, A.: Adversarially robust generalization requires more data. In: NeurIPS (2018)
+48. Schott, L., Rauber, J., Bethge, M., Brendel, W.: Towards the first adversarially robust neural network model on MNIST. In: ICLR (2019)
+49. Sehwag, V., Bhagoji, A.N., Song, L., Sitawarin, C., Cullina, D., Chiang, M., Mittal, P.: Better the devil you know: An analysis of evasion attacks using out-of-distribution adversarial examples. preprint, arXiv:1905.01726 (2019)
+50. Stutz, D., Hein, M., Schiele, B.: Disentangling adversarial robustness and generalization. In: CVPR (2019)
+51. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: ICLR. pp. 2503-2511 (2014)
+52. Torralba, A., Fergus, R., Freeman, W.T.: 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence 30(11), 1958-1970 (2008)
+53. Tramèr, F., Boneh, D.: Adversarial training and robustness for multiple perturbations. In: NeurIPS (2019)
+54. Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., Madry, A.: Robustness may be at odds with accuracy. In: ICLR (2019)
+55. Uesato, J., Alayrac, J.B., Huang, P.S., Stanforth, R., Fawzi, A., Kohli, P.: Are labels required for improving adversarial robustness? In: NeurIPS (2019)
+56. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard Journal of Law and Technology 31(2), 841-887 (2018)
+57. Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: CVPR (2018)
+58. Wong, E., Schmidt, F., Metzen, J.H., Kolter, J.Z.: Scaling provable adversarial defenses. In: NeurIPS (2018)
+59. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: ECCV (2014)
+60. Zhang, H., Yu, Y., Jiao, J., Xing, E.P., Ghaoui, L.E., Jordan, M.I.: Theoretically principled trade-off between robustness and accuracy. In: ICML (2019)
+61. Zhu, J.Y., Krahenbuhl, P., Shechtman, E., Efros, A.A.: Generative visual manipulation on the natural image manifold. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV (2016)
\ No newline at end of file
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/images.zip b/adversarialrobustnessoninandoutdistributionimprovesexplainability/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e2a8a6aa4da0465aa10d906f6be8555d5a155acc
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f185f83124e1c4f9e7485bf8a8ece0e893033079bd84cf20cd1f1b940497f3c3
+size 908804
diff --git a/adversarialrobustnessoninandoutdistributionimprovesexplainability/layout.json b/adversarialrobustnessoninandoutdistributionimprovesexplainability/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ac4b197cbc9aa9d86ef8dc6881b533016222760c
--- /dev/null
+++ b/adversarialrobustnessoninandoutdistributionimprovesexplainability/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b7697af7f8bec88781c0122294e8319dccf4b3a272537fbd092361ae3e0d426
+size 392167
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_content_list.json b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..341a0499de92232764bd7a587f8504bc41c8c7fa
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d11be183fb384f62af8811cdd1afa4bf76e553e0fac09fc12cfbff63339629ee
+size 75687
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_model.json b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..650dd6254a2a21f41304c9009c92bed4596dcd7d
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88314284c0ab5dacc88556f5e8b5a76a8667f6b1f9feb0a765080dc2137a947e
+size 95668
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_origin.pdf b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2cdaa886476dabd3aaa44944de848b5a171e62da
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/9b611100-57ed-495b-af27-cda6b8e0c8d7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3ad1d80246c173707bc40fb037fe4e520e0b6684b6e2e00766eeb8b43cbae39
+size 1349367
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/full.md b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f427e6d488104131c2f01ec671e002a55109ea1d
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/full.md
@@ -0,0 +1,291 @@
+# Adversarial Self-Supervised Learning for Semi-Supervised 3D Action Recognition
+
+Chenyang Si $^{1,2,3[0000-0002-3354-1968]}$ , Xuecheng Nie $^{3}$ , Wei Wang $^{1,2}$ , Liang Wang $^{1,2}$ , Tieniu Tan $^{1,2}$ , and Jiashi Feng $^{3}$
+
+1 University of Chinese Academy of Sciences
+
+$^{2}$ CRIPAC & NLPR, Institute of Automation, Chinese Academy of Sciences
+
+3 Department of ECE, National University of Singapore
+
+chenyang.si@cripac.ia.ac.cn, {wangwei, wangliang, tint}@nlpr.ia.ac.cn, niexuecheng@u.nus.edu, elefjia@nus.edu.sg
+
+Abstract. We consider the problem of semi-supervised 3D action recognition, which has rarely been explored before. Its major challenge lies in how to effectively learn motion representations from unlabeled data. Self-supervised learning (SSL) has proven very effective at learning representations from unlabeled data in the image domain. However, few effective self-supervised approaches exist for 3D action recognition, and directly applying SSL for semi-supervised learning suffers from misalignment of the representations learned from the SSL and supervised learning tasks. To address these issues, we present Adversarial Self-Supervised Learning (ASSL), a novel framework that tightly couples SSL and the semi-supervised scheme via neighbor relation exploration and adversarial learning. Specifically, we design an effective SSL scheme to improve the discrimination capability of the learned representations for 3D action recognition by exploring the data relations within a neighborhood. We further propose an adversarial regularization to align the feature distributions of labeled and unlabeled samples. To demonstrate the effectiveness of the proposed ASSL in semi-supervised 3D action recognition, we conduct extensive experiments on the NTU and N-UCLA datasets. The results confirm its advantageous performance over state-of-the-art semi-supervised methods in the few-label regime for 3D action recognition.
+
+Keywords: Semi-supervised 3D action recognition, Self-supervised learning, Neighborhood Consistency, Adversarial learning
+
+# 1 Introduction
+
+Recently, 3D action recognition (a.k.a. skeleton-based action recognition) has made remarkable progress through learning discriminative features with effective networks [7,47,18,12,44,29,30]. However, these methods heavily rely on the available manual annotations that are costly to acquire. Techniques requiring less or no manual annotations are therefore developed, and among them a powerful approach is semi-supervised learning. It is aimed at leveraging unlabeled data to enhance the model's capability of learning and generalization such that the requirement for labeled data can be alleviated. It has been widely applied in the image
+
+domain [14,25,27,15,34,24,16]. Compared with these methods, [45] has recently proposed a more efficient way of feature learning from unlabeled data, namely self-supervised semi-supervised learning ($S^4\mathrm{L}$), which couples self-supervision with a semi-supervised learning algorithm. It employs self-supervised techniques to learn representations of unlabeled data that benefit the semi-supervised learning task. Self-supervised learning, which learns representations of unlabeled data by defining and solving various pretext tasks, is very advantageous in making full use of unlabeled data. Thus, in this work we exploit its application to semi-supervised 3D action recognition, which has seen little previous investigation.
+
+As images contain rich information that is beneficial to feature extraction, many effective SSL techniques [5,37,42] are image-based. Comparatively, for tasks over skeleton data, which represent a person by the 3D coordinates of key body joints, it is very challenging to leverage SSL techniques to learn discriminative motion representations. Therefore, how to learn motion representations with SSL techniques is an open problem for this task. Recently, [48] proposed an SSL method to learn the temporal information of unlabeled sequences via skeleton inpainting. This SSL treats each sample individually and thus ignores the information shared among samples of the same action class. As a result, semi-supervised 3D action recognition has derived little benefit from the representations learned by skeleton inpainting.
+
+
+Fig.1. Illustration of our main idea. We design an effective SSL scheme to capture the discriminative motion representations of unlabeled skeleton sequences for 3D action recognition. Since directly applying SSL to semi-supervised learning suffers from misalignment of representations learned from SSL and supervised learning tasks, we further pioneer to align their feature distributions via adversarial learning
+
+Moreover, we also find that directly applying SSL for semi-supervised learning suffers from misalignment of the representations learned from the self-supervised and supervised learning tasks. As shown in Fig. 1, labeled and unlabeled samples are optimized with supervised and self-supervised objectives, respectively. Though both are sampled from the same data distribution, their feature distributions are misaligned. This misalignment weakens the generalization of semi-supervised 3D action recognition models to unseen samples. A task with a similar problem to ours is unsupervised domain adaptation (UDA), which matches feature distributions from different domains. However, there is an important difference: in UDA the discrepancy of feature distributions is caused by different domains, whereas our problem is the misalignment of representations learned from the SSL and supervised learning tasks in semi-supervised 3D action recognition. One line
+
+of research in UDA is adversarial-based adaptation methods [9,35,20] that have shown promising results in domain adaptation. These methods seek to minimize an approximate domain discrepancy distance through an adversarial objective with respect to a domain discriminator. Hence, inspired by the alignment effect of adversarial learning in UDA, we exploit its application to couple the self-supervision method into a semi-supervised learning algorithm.
+
+In this work, we propose an Adversarial Self-Supervised Learning (ASSL) Network for semi-supervised 3D action recognition. As shown in Fig. 1, our model leverages (i) self-supervised learning to capture discriminative motion representations of unlabeled skeleton sequences, and (ii) an adversarial regularization that aligns the feature distributions of labeled and unlabeled sequences. More specifically, in addition to a self-inpainting constraint [48] for learning the temporal information of each individual unlabeled sample, we propose a new perspective of consistency regularization within the neighborhood to explore the data relations. Neighborhoods can be considered tiny sample-anchored clusters with high compactness and class consistency. Consistency regularization within the neighborhood further reveals the underlying class concept of the self-supervised motion representation. Such discriminative motion representations significantly improve the performance of semi-supervised 3D action recognition. Moreover, since adversarial learning can minimize the discrepancy between two distributions, we also propose a novel adversarial learning strategy to couple the self-supervision method with the semi-supervised algorithm. The adversarial regularization allows the model to align the feature distributions of labeled and unlabeled data, which boosts the generalization to unseen samples in semi-supervised 3D action recognition.
+
+We perform extensive studies of semi-supervised 3D action recognition on two benchmark datasets: the NTU RGB+D [28] and N-UCLA [39] datasets. With the proposed ASSL network, we establish new state-of-the-art performance for semi-supervised 3D action recognition. In summary, our main contributions are threefold:
+
+1. We present an Adversarial Self-Supervised Learning (ASSL) framework for semi-supervised 3D action recognition, which tightly couples SSL and a semi-supervised scheme via adversarial learning and neighbor relation exploration.
+2. We offer a new self-supervised strategy, i.e., neighborhood consistency, for semi-supervised 3D action recognition. By exploring data relationships within the neighborhood, our model can learn discriminative motion representations that significantly improve the performance of semi-supervised 3D action recognition.
+3. We identify that directly applying SSL for semi-supervised learning suffers from the representation misalignment of labeled and unlabeled samples. A novel adversarial regularization is pioneered to couple SSL into a semi-supervised algorithm to align their feature distributions, which further boosts the capability of generalization.
+
+# 2 Related Work
+
+# 2.1 3D Action Recognition
+
+Human action recognition is one of the most important computer vision tasks. Due to its informative representation of actions, skeleton-based action recognition has been examined thoroughly in the literature. Traditional approaches [36,37,11,38] design various hand-crafted features from skeleton sequences to represent human motion, e.g., the relative 3D geometry between all pairs of body parts [36]. Recently, deep learning has also been applied to this task due to its wide success. To model temporal dependencies, many methods leverage and extend recurrent neural networks (RNNs) to capture motion features for skeleton-based action recognition, e.g., HBRNN [7] and VA-LSTM [47]. Based on Convolutional Neural Networks (CNNs), which are powerful at learning hierarchical representations, spatio-temporal representations are extracted for action recognition in [6,12,18,41]. For graph-structured data, graph-based approaches [31,19,32] are popularly adopted for skeleton-based action recognition, e.g., ST-GCN [44] and AGC-LSTM [30]. Though successful, these supervised methods rely heavily on massive data samples with annotated action labels, which are expensive to obtain. Semi-supervised approaches are thus developed to alleviate this data annotation limitation, and in this paper we apply them to learning motion representations for 3D action recognition.
+
+# 2.2 Semi-Supervised Learning
+
+Semi-supervised learning algorithms learn from a data set that includes both labeled and unlabeled data, usually mostly unlabeled. For a comprehensive review of semi-supervised methods, we refer readers to [3]. Recently, there has been increasing interest in deep-learning-based semi-supervised algorithms. One group of these methods is based on generative models, e.g., denoising autoencoders [26], variational autoencoders [14] and generative adversarial networks [25,27]. Some semi-supervised methods add small perturbations to unlabeled data and require similar outputs between them by enforcing a consistency regularization, e.g., Π-Model [15], Temporal Ensembling [15], Mean Teacher [34] and Virtual Adversarial Training [24]. There are also some other works. To name a few, Lee et al. [16] pick the class with maximum predicted probability as a pseudo-label for unlabeled data and use it to train the model. [10] presents a conditional entropy minimization for unlabeled data, which encourages their predicted probabilities to concentrate on a single class. The work most related to ours is [45], which proposes a new technique for semi-supervised learning by leveraging SSL techniques to learn representations of unlabeled images. Their work improves the generalization of semi-supervised learning methods. In this work, we exploit effective SSL to learn discriminative motion representations for semi-supervised 3D action recognition. Moreover, we further propose a novel adversarial regularization to couple SSL into the semi-supervised algorithm.
+
+# 2.3 Self-Supervised Learning for Action Recognition
+
+Self-supervised learning for action recognition aims to learn motion representations from unlabeled data by solving pretext tasks. Recently, a stream of studies [33,23,8,17,1,43] designs various temporal-related tasks to learn temporal patterns from unlabeled RGB videos. For example, a sequence sorting task is introduced in [17], while [21,40] propose to learn video representations by predicting motion flows. Note that these methods learn representations from RGB videos and are not applicable to long-term skeleton sequences. For 3D action recognition, Zheng et al. [48] propose a conditional skeleton inpainting architecture to learn long-term dynamics from unlabeled skeleton data. However, this SSL ignores the information shared among samples of the same action class and therefore may yield less discriminative feature representations. Hence, we propose an effective self-supervised strategy to learn discriminative representations that benefit semi-supervised 3D action recognition.
+
+# 3 Method
+
+# 3.1 Problem Formulation
+
+Instead of relying on massive labels as existing methods do, we use only a small amount of labeled data for semi-supervised 3D action recognition. Formally, let $\mathcal{X}$ be the training set. The training samples $\pmb{x}_i\in \mathcal{X}$ are skeleton sequences with $T$ frames, $\pmb{x}_i = \{\pmb{x}_{i,1},\dots,\pmb{x}_{i,T}\}$ . At each time $t$ , $\pmb{x}_{i,t}$ is a set of 3D coordinates of body joints, which can be obtained with the Microsoft Kinect or advanced human pose estimation algorithms [2,46]. In contrast to supervised 3D action classification, the training samples here are split into two subsets: a labeled training set denoted as $\mathcal{X}_L = \{\pmb{x}_1,\dots,\pmb{x}_L\}$ and an unlabeled training set denoted as $\mathcal{X}_U = \{\pmb{x}_1,\dots,\pmb{x}_U\}$ . The training samples $\pmb{x}_l\in \mathcal{X}_L$ have annotated labels $\{y_1,\dots,y_L\}$ with $y_{l}\in \mathcal{C}$ , where $\mathcal{C} = \{1,\dots,C\}$ is a discrete label set for $C$ action classes. The training samples $\pmb{x}_u\in \mathcal{X}_U$ are unlabeled. Usually, $L$ is much smaller than $U$ ( $L\ll U$ ).
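The setup can be illustrated with synthetic data (a minimal sketch: the array shapes and the value $L = 10$ are placeholders, though NTU skeletons do have 25 body joints with 3D coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy training set: N skeleton sequences of T frames,
# J joints, each joint a 3D coordinate.
N, T, J = 100, 20, 25
X = rng.normal(size=(N, T, J, 3))
y = rng.integers(0, 10, size=N)        # C = 10 action classes

# Semi-supervised split: L labeled samples, the rest unlabeled (L << U).
L = 10
perm = rng.permutation(N)
labeled_idx, unlabeled_idx = perm[:L], perm[L:]
X_L, y_L = X[labeled_idx], y[labeled_idx]   # labeled set with annotations
X_U = X[unlabeled_idx]                       # unlabeled set; labels unknown
```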
+
+Inspired by $S^4\mathrm{L}$ [45], we propose an Adversarial Self-Supervised Learning framework to learn discriminative motion representations from $\mathcal{X}_L$ and $\mathcal{X}_U$ . It couples self-supervised techniques into the semi-supervised scheme via adversarial learning and neighbor relation exploration. ASSL is described in detail in the following subsections.
+
+# 3.2 Neighborhood Consistency for Semi-Supervised 3D Action Recognition
+
+Semi-supervised 3D action recognition aims to learn discriminative motion representation from massive unlabeled sequences. However, this is difficult over succinct 3D human poses. To tackle this challenge, we propose an effective SSL strategy, neighborhood consistency, that enhances the underlying class semantics
+
+
+Fig. 2. Framework of Adversarial Self-Supervised Learning (ASSL). The ASSL leverages SSL and adversarial regularization for semi-supervised 3D action recognition. For SSL techniques, in addition to a self-inpainting constraint [48] for learning temporal information of each individual unlabeled sample, we propose to apply a new consistency regularization within the neighborhood to explore data relations. The adversarial training with a feature discriminator is used to align feature distributions of labeled and unlabeled samples, which further boosts generalization of semi-supervised models to unseen samples
+
+of motion representation by exploring data relations within the neighborhoods, so as to improve recognition performance.
+
+As shown in Fig. 2, we first employ skeleton inpainting [48] to learn temporal information for each unlabeled sequence. Specifically, an encoder network Enc takes an input skeleton sequence $\mathbf{x}_u$ from the training set $\mathcal{X}_U$ and produces a vector of temporal features $\pmb{h}_u \in \mathbb{R}^d$ . Conditioned on the learned representation $\pmb{h}_u$ , a decoder network Dec aims to fill the masked regions of the input sequence. Due to the difference between the action classification (discrimination) and skeleton inpainting (regression) tasks, we use a translation layer, i.e., a linear layer, to bridge the gap between the feature spaces of the two tasks. The output of the linear layer is denoted as $\bar{h}_u$ for the sample $\mathbf{x}_u$ . Then, in this feature space, we employ K-nearest neighbors [4] to select the $K$ nearest neighbors from the unlabeled training set $\mathcal{X}_U$ . The neighbor set of $\mathbf{x}_u$ is denoted as $\varOmega_{x_u} = \{\pmb{x}_u^1,\dots,\pmb{x}_u^K\}$ . A message aggregation module is proposed to produce the local center vector: we use a multilayer perceptron to assign a weight to each neighbor sample, which evaluates its similarity to the anchor. The weights $\alpha_{k}$ are computed as follows:
+
+$$
+\alpha_{k} = \frac{\exp\left(MLP\left(\left|\bar{h}_{u} - \bar{h}_{u}^{k}\right|\right)\right)}{\sum_{k=1}^{K}\exp\left(MLP\left(\left|\bar{h}_{u} - \bar{h}_{u}^{k}\right|\right)\right)}, \tag{1}
+$$
+
+where $\bar{\pmb{h}}_u^k$ is the translated feature of the neighbor sample $\pmb{x}_u^k\in \varOmega_{x_u}$ , and $MLP(\cdot)$ denotes the multilayer perceptron in the message aggregation module. According to the computed weights $\{\alpha_{1},\dots,\alpha_{K}\}$ , the local class center $\pmb{c}_u$ can be aggregated from the neighbor set $\varOmega_{x_u}$ as follows:
+
+$$
+\boldsymbol{c}_{u} = \sum_{k=1}^{K} \alpha_{k} \bar{\boldsymbol{h}}_{u}^{k}. \tag{2}
+$$
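Eqns. (1)-(2) can be sketched in PyTorch as follows; the MLP depth and hidden width here are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of the message-aggregation module (Eqns. (1)-(2)): an MLP scores each
# |h_u - h_u^k| difference vector, softmax gives the weights alpha_k, and the
# local center c_u is their weighted sum. Dimensions are illustrative.
import torch
import torch.nn as nn

class MessageAggregation(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, h_u: torch.Tensor, h_neighbors: torch.Tensor) -> torch.Tensor:
        # h_u: (d,) translated anchor feature; h_neighbors: (K, d)
        diff = (h_u.unsqueeze(0) - h_neighbors).abs()             # (K, d)
        alpha = torch.softmax(self.mlp(diff).squeeze(-1), dim=0)  # (K,) weights
        return (alpha.unsqueeze(-1) * h_neighbors).sum(dim=0)     # local center c_u

agg = MessageAggregation(dim=16)
center = agg(torch.randn(16), torch.randn(5, 16))  # K = 5 neighbors
```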
+
+Considering the high compactness and class consistency within neighborhoods, we require that the samples within a neighborhood achieve predictions similar to those of the local center $\pmb{c}_u$ . However, for a sample $\pmb{x}_u$ , each of its neighbors either shares the class label with $\pmb{x}_u$ (positive) or not (negative). To minimize the impact of negative neighbors, we introduce a simple selection criterion: we find the 1-nearest labeled neighbor in the labeled training set $\mathcal{X}_L$ for both the anchor $\pmb{x}_u$ and the neighbor $\pmb{x}_u^k$ . If these labeled neighbors have the same label, $\pmb{x}_u^k$ is regarded as a positive neighbor. The set of selected positive neighbors of sample $\pmb{x}_u$ is denoted as $\varOmega_{x_u}^p$ . Finally, the loss of consistency regularization within the neighborhood is defined as follows:
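The labeled 1-NN selection criterion can be sketched as follows; the toy features, labels, and Euclidean distance are hypothetical placeholders standing in for the translated feature space:

```python
# Sketch of the positive-neighbor criterion: a neighbor x_u^k is kept only if
# its 1-nearest labeled sample has the same label as the anchor's 1-nearest
# labeled sample. NumPy arrays stand in for translated features.
import numpy as np

def nearest_label(feat, labeled_feats, labels):
    """Label of the 1-nearest labeled sample under Euclidean distance."""
    return labels[np.argmin(np.linalg.norm(labeled_feats - feat, axis=1))]

def select_positive_neighbors(anchor, neighbors, labeled_feats, labels):
    anchor_lbl = nearest_label(anchor, labeled_feats, labels)
    return [k for k, nb in enumerate(neighbors)
            if nearest_label(nb, labeled_feats, labels) == anchor_lbl]

# Toy check: labeled set with two well-separated classes.
labeled = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
anchor = np.array([0.5, 0.5])                   # nearest labeled sample: class 0
neighbors = np.array([[1.0, 1.0], [9.0, 9.0]])  # class-0-like, class-1-like
positives = select_positive_neighbors(anchor, neighbors, labeled, labels)  # [0]
```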
+
+$$
+\mathcal{L}_{KL} = \sum_{\boldsymbol{x}_{u} \in \mathcal{X}_{U}} \left( KL\left(f_{c}\left(\boldsymbol{c}_{u}\right), f_{c}\left(\bar{\boldsymbol{h}}_{u}\right)\right) + \sum_{\boldsymbol{x}_{u}^{k} \in \Omega_{\boldsymbol{x}_{u}}^{p}} KL\left(f_{c}\left(\boldsymbol{c}_{u}\right), f_{c}\left(\bar{\boldsymbol{h}}_{u}^{k}\right)\right) \right), \tag{3}
+$$
+
+where $f_{c}(\cdot)$ is the classifier that outputs the predictions and $KL(\cdot)$ denotes the Kullback-Leibler divergence.
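A minimal PyTorch sketch of Eqn. (3) for one anchor, assuming the classifier outputs raw logits (note `F.kl_div` takes log-probabilities of the second distribution and probabilities of the reference):

```python
# KL consistency between the prediction for the local center c_u and the
# predictions for the anchor and its positive neighbors (one anchor shown).
import torch
import torch.nn.functional as F

def neighborhood_kl(logits_center, logits_anchor, logits_positives):
    """Sum of KL(f_c(c_u) || f_c(h)) over the anchor and positive neighbors."""
    p_center = F.softmax(logits_center, dim=-1)
    loss = F.kl_div(F.log_softmax(logits_anchor, dim=-1), p_center,
                    reduction="sum")
    for lp in logits_positives:
        loss = loss + F.kl_div(F.log_softmax(lp, dim=-1), p_center,
                               reduction="sum")
    return loss

c = torch.randn(10)                                 # 10-class logits
loss = neighborhood_kl(c, c.clone(), [c.clone()])   # identical predictions -> 0
```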
+
+As with the consistency regularization for unlabeled samples $\pmb{x}_u\in \mathcal{X}_U$ , the neighbor sets of labeled examples $\pmb{x}_l\in \mathcal{X}_L$ are also selected from the unlabeled set $\mathcal{X}_U$ , and are denoted as $\varOmega_{\pmb{x}_l}$ . Similarly, we use the feature $\bar{\pmb{h}}_l$ of $\pmb{x}_l$ as the anchor to estimate its local center representation $\pmb{c}_l$ from its neighbor set $\varOmega_{\pmb{x}_l}$ as in Eqns. (1)-(2) (shown in Fig. 2). Under the assumption that the anchor shares the same class semantics as the local center, we apply a cross-entropy loss $CE(\cdot)$ to the center $\pmb{c}_l$ :
+
+$$
+\mathcal{L}_{CE}^{c} = \sum_{\boldsymbol{x}_{l} \in \mathcal{X}_{L}} CE\left(f_{c}(\boldsymbol{c}_{l}), y_{l}\right), \tag{4}
+$$
+
+where $y_{l}$ is the class label of $\pmb{x}_l$.
+
+Overall, the optimization objectives of unlabeled samples can be formulated as follows:
+
+$$
+\mathcal{L}_{U} = \mathcal{L}_{KL} + \mathcal{L}_{CE}^{c} + \mathcal{L}_{inp}, \tag{5}
+$$
+
+where $\mathcal{L}_{inp}$ denotes the skeleton inpainting loss, i.e., the $L_{2}$ distance between the inpainted sequence and the original input sequence. Minimizing the objective $\mathcal{L}_U$ encourages the model to capture the underlying class concepts in the self-supervised motion representation and to yield discriminative feature representations.
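As a concrete reference for $\mathcal{L}_{inp}$, here is a minimal masked-$L_2$ sketch; the exact masking scheme follows [48] and the frame mask below is an assumption for illustration:

```python
# L2 inpainting loss restricted to the masked frames: the decoder's output is
# compared with the original sequence only where frames were masked out.
import torch

def inpainting_loss(pred, target, mask):
    """pred/target: (B, T, J) sequences; mask: 1 where frames were masked."""
    diff = (pred - target) * mask
    return (diff ** 2).sum() / mask.sum().clamp(min=1)

t = torch.randn(2, 40, 75)                            # (batch, frames, joints*3)
loss = inpainting_loss(t, t, torch.ones(2, 40, 1))    # perfect reconstruction -> 0
```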
+
+# 3.3 Adversarial Learning for Aligning Self-Supervised and Semi-Supervised Representations
+
+According to the training of existing semi-supervised learning methods, the labeled and unlabeled samples are enforced with supervised and SSL objectives, respectively. In this work, Eqn. (5) is used for the unlabeled samples. Although our proposed SSL technique is quite effective for semi-supervised 3D action recognition, we find that the representations learned with the supervised and SSL tasks are misaligned. As shown in Fig. 3, with the benefit of the SSL technique, the features of $\text{Sup.} + \text{Sel.}$ present a more compact distribution than those of $\text{Sup.}$ However, in contrast to the intra-class compactness of the labeled data (the squares with black borders), the unlabeled data exhibit scattered distributions in Fig. 3(b). Thus, although both sets of sequences are sampled from the same data distribution, their feature distributions are misaligned due to the different optimization objectives. To tackle this problem, we propose a novel adversarial training strategy to couple the SSL method with semi-supervised 3D action recognition. Specifically, a discriminator $Dis$ is trained to distinguish the unlabeled features from the labeled features, and the model is trained simultaneously to confuse the discriminator $Dis$. Hence, the adversarial loss is defined as follows:
+
+$$
+\mathcal{L}_{adv} = \frac{1}{L} \sum_{\boldsymbol{x}_{l} \in \mathcal{X}_{L}} \log\left(Dis\left(\bar{\boldsymbol{h}}_{l}\right)\right) + \frac{1}{U} \sum_{\boldsymbol{x}_{u} \in \mathcal{X}_{U}} \log\left(1 - Dis\left(\bar{\boldsymbol{h}}_{u}\right)\right). \tag{6}
+$$
+
+The adversarial regularization allows the model to align the feature distributions of labeled and unlabeled data. Therefore, like that of the labeled data, the feature distribution of the unlabeled data exhibits greater intra-class compactness, which boosts the capability of generalization to unseen samples. More analyses of the adversarial regularization are reported in Section 4.3.
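The discriminator side of Eqn. (6) can be sketched as below. The small MLP discriminator is an assumption (the paper only specifies it has 4 linear layers); the encoder's confusion objective (e.g. via flipped labels or gradient reversal) is omitted:

```python
# Adversarial loss of Eqn. (6): log Dis(h_l) averaged over labeled features
# plus log(1 - Dis(h_u)) averaged over unlabeled features.
import torch
import torch.nn as nn

dis = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                    nn.Linear(32, 1), nn.Sigmoid())   # placeholder discriminator

def adversarial_loss(h_labeled, h_unlabeled, eps=1e-7):
    d_l = dis(h_labeled).clamp(eps, 1 - eps)   # Dis(h_l) in (0, 1)
    d_u = dis(h_unlabeled).clamp(eps, 1 - eps)
    return torch.log(d_l).mean() + torch.log(1 - d_u).mean()

loss = adversarial_loss(torch.randn(8, 16), torch.randn(8, 16))
```

Since both log terms are of probabilities below one, the loss is always negative; the discriminator maximizes it while the encoder minimizes it.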
+
+# 3.4 Model Architecture and Optimization
+
+Unlike existing 3D action recognition methods [7,47,18,12,44,29,30], which learn discriminative features through carefully designed networks, the goal of
+
+
+(a) Sup.
+
+
+(b) $\operatorname{Sup.} + \operatorname{Sel}$ .
+Fig. 3. The t-SNE visualization of motion features learned by Sup. and $\text{Sup. + Sel.}$ (a) Sup. is trained with the supervised objective for the labeled samples. (b) Sup. + Sel. is trained by optimizing the supervised and SSL objectives (Eqn. (5)) for the labeled and unlabeled samples, respectively. Different colors indicate different classes. Best viewed in color. The squares with black borders denote the labeled data; the others are unlabeled data.
+
+this work is to explore an effective semi-supervised scheme for 3D action recognition. Therefore, this work adopts a universal architecture. To effectively capture the motion dynamics, we use three bidirectional GRU layers in the encoder Enc to encode the input skeleton sequence. The decoder consists of two unidirectional GRU layers. We use 4 linear layers in the discriminator and 3 linear layers in the multilayer perceptron of the message aggregation module. The classifier is a two-layer perceptron.
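The components above can be sketched in PyTorch as follows. The joint dimensionality (75 = 25 joints x 3 on NTU), the classifier's hidden width, and the translation-layer size are assumptions for illustration, not the paper's exact values:

```python
# Illustrative sketch of the architecture: 3-layer bidirectional GRU encoder,
# 2-layer unidirectional GRU decoder, linear translation layer, and a
# two-layer classifier head f_c.
import torch
import torch.nn as nn

D_IN, D_HID, N_CLASS = 75, 512, 60                    # NTU: 60 action classes

enc = nn.GRU(D_IN, D_HID, num_layers=3, bidirectional=True, batch_first=True)
dec = nn.GRU(2 * D_HID, D_HID, num_layers=2, batch_first=True)
translate = nn.Linear(2 * D_HID, 2 * D_HID)           # bridges the two feature spaces
classifier = nn.Sequential(nn.Linear(2 * D_HID, 256), nn.ReLU(),
                           nn.Linear(256, N_CLASS))   # two-layer perceptron f_c

x = torch.randn(4, 40, D_IN)                          # (batch, T=40 frames, joints)
feat, _ = enc(x)                                      # (4, 40, 1024)
h_bar = translate(feat[:, -1])                        # translated feature \bar{h}
logits = classifier(h_bar)                            # (4, 60) class logits
```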
+
+During training, our ASSL network is learned by minimizing the following loss on the training data:
+
+$$
+\mathcal{L} = \mathcal{L}_{L} + \lambda_{1} \mathcal{L}_{U} + \lambda_{2} \mathcal{L}_{adv}, \tag{7}
+$$
+
+where $\mathcal{L}_L$ is a cross-entropy loss over all labeled examples in $\mathcal{X}_L$ , and $\lambda_1$ and $\lambda_2$ are nonnegative scalar weights. Note that we always sample the same number of labeled and unlabeled samples in each mini-batch.
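The balanced mini-batch construction noted above can be sketched as follows; the index sets and batch size are hypothetical, and the paper does not specify how leftover samples are handled:

```python
# Balanced batches: each mini-batch draws the same number of labeled and
# unlabeled sample indices, pairing them for the joint loss of Eqn. (7).
import random

def balanced_batches(labeled_idx, unlabeled_idx, per_side, seed=0):
    rng = random.Random(seed)
    n = min(len(labeled_idx), len(unlabeled_idx)) // per_side
    l = rng.sample(labeled_idx, len(labeled_idx))      # shuffled copies
    u = rng.sample(unlabeled_idx, len(unlabeled_idx))
    for i in range(n):
        yield (l[i * per_side:(i + 1) * per_side],
               u[i * per_side:(i + 1) * per_side])

batches = list(balanced_batches(list(range(10)), list(range(100)), per_side=5))
```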
+
+# 4 Experiments
+
+In this section, we evaluate and compare our work with previous semi-supervised methods and also conduct detailed component analysis.
+
+# 4.1 Experimental Setup
+
+Datasets Two popular benchmark datasets, NTU RGB+D dataset [28] and Northwestern-UCLA dataset [39], are used for our experiments.
+
+NTU RGB+D dataset [28] contains 56,880 samples covering 60 classes of human actions performed by 40 distinct subjects. These videos are collected simultaneously with three cameras at different horizontal views. Two evaluation protocols are provided: Cross-Subject (CS) and Cross-View (CV). For the CS protocol, skeleton sequences performed by 20 subjects are used for training and the rest for testing. For the CV protocol, all videos from Cameras 2 and 3 are used for training, while those from Camera 1 are used for testing. For semi-supervised 3D action recognition, $5\%$ , $10\%$ , $20\%$ and $40\%$ of the training sequences of each class are labeled.
+
+Northwestern-UCLA dataset [39] has 1,494 samples performed by 10 different subjects and belonging to 10 action classes. Each action sample is captured simultaneously by three Kinect cameras from a variety of viewpoints. Its training set consists of samples from the first two cameras, and the samples from the third camera form the testing set. For semi-supervised 3D action recognition, we use $5\%$ , $15\%$ , $30\%$ and $40\%$ labels of the training sequences of each class.
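Building such per-class labeled splits can be sketched as follows; uniform random sampling within each class is an assumption, as the paper does not specify its sampling procedure:

```python
# Label a fixed fraction of training sequences per class; the rest become the
# unlabeled set X_U.
import random
from collections import defaultdict

def split_semi(labels, fraction, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    labeled = []
    for idxs in by_class.values():
        k = max(1, int(len(idxs) * fraction))   # at least one labeled sample
        labeled += rng.sample(idxs, k)
    unlabeled = sorted(set(range(len(labels))) - set(labeled))
    return sorted(labeled), unlabeled

# Toy check: 2 classes x 100 samples, 5% labels -> 5 labeled per class.
labeled, unlabeled = split_semi([0] * 100 + [1] * 100, fraction=0.05)
```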
+
+Baselines There is no available semi-supervised baseline for 3D action recognition, so we use the following methods, which achieve state-of-the-art performance in the RGB domain, as baselines:
+
+1) Supervised-only (Sup.), training with labeled skeleton sequences only.
+
+2) Pseudo labels [16], leveraging the idea that the predicted labels of unlabeled samples are used for training. First, train a model with the labeled data, then predict the classes of unlabeled samples. These pseudo labels are used to retrain the network in a supervised fashion with labeled and unlabeled data simultaneously.
+3) Virtual Adversarial Training (VAT) [24], training with unlabeled data to make the model robust against local perturbations around each input data point. It generates small adversarial perturbations for unlabeled samples that greatly alter the output distribution; a consistency loss is then applied over the unlabeled training data to encourage consistent predictions for each input and its adversarially perturbed version.
+4) Conditional Entropy Minimization (EntMin) [10], minimizing the entropy of prediction over unlabeled training data as a regularization for model training. Predicted class probabilities are encouraged to be near a one-hot vector via training with unlabeled data.
+5) Self-Supervised Semi-Supervised Learning ( $S^4\mathrm{L}$ ) [45], the most related method to ours. It trains the model on self-supervised and semi-supervised tasks in a multi-task fashion. For 3D action recognition, we use the skeleton inpainting framework [48] as the pretext task for self-supervised learning.
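The entropy-minimization regularizer of the EntMin baseline (item 4) is simple enough to state in code; the toy logits below are placeholders:

```python
# EntMin regularizer: mean Shannon entropy of the predicted class
# distributions over unlabeled data, pushing predictions toward one-hot.
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    p = F.softmax(logits, dim=-1)
    return -(p * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

sharp = torch.tensor([[10.0, -10.0], [-10.0, 10.0]])  # near one-hot predictions
flat = torch.zeros(2, 2)                              # uniform predictions
low, high = entropy_loss(sharp), entropy_loss(flat)   # low ~ 0, high = ln 2
```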
+
+Implementation All comparisons with semi-supervised baselines are made under the same settings for fairness. In all experiments, the dimension of the hidden states in the GRU and bidirectional GRU is set to 512. On both datasets, we randomly sample $T = 40$ frames from each skeleton sequence as input during training and testing. We train all networks with the ADAM optimizer [13]. The learning rate, initialized to 0.0005, is reduced by a factor of 0.5 every 30 epochs. We set $\lambda_{1} = 1$ and $\lambda_{2} = 0.1$ in Eqn. (7). All our experiments are implemented with PyTorch on one Titan Xp GPU.
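The optimizer schedule above maps directly onto PyTorch's `Adam` and `StepLR`; the placeholder model and empty training step are illustrative:

```python
# Adam with lr = 0.0005, halved every 30 epochs via StepLR, matching the
# schedule described in the Implementation paragraph.
import torch
import torch.nn as nn

model = nn.Linear(8, 4)                                # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=0.0005)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.5)

for epoch in range(60):
    opt.step()        # a real loop would run forward/backward here
    sched.step()      # decay fires at epochs 30 and 60
lr_after_60 = opt.param_groups[0]["lr"]                # 0.0005 * 0.5**2
```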
+
+# 4.2 Comparison with Semi-Supervised Methods
+
+We evaluate our method by comparing it with baselines for semi-supervised 3D action recognition and show results on NTU and N-UCLA datasets respectively in Tables 1 and 2.
+
+Table 1. Test accuracy (\%) on the NTU dataset (Cross-Subject (CS) and Cross-View (CV) protocols) with $5\%$ , $10\%$ , $20\%$ and $40\%$ labels of the training set. $v./c.$ denotes the number of labeled videos per class
+
+| Method | CS 5% (33 v./c.) | CV 5% (31 v./c.) | CS 10% (66 v./c.) | CV 10% (62 v./c.) | CS 20% (132 v./c.) | CV 20% (124 v./c.) | CS 40% (264 v./c.) | CV 40% (248 v./c.) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Supervised-only | 47.2 | 53.7 | 57.2 | 63.1 | 62.4 | 70.4 | 68.0 | 76.8 |
+| Pseudolabels [16] | 50.9 | 56.3 | 58.4 | 65.8 | 63.9 | 71.2 | 69.5 | 77.7 |
+| VAT [24] | 51.3 | 57.9 | 60.3 | 66.3 | 65.6 | 72.6 | 70.4 | 78.6 |
+| VAT + EntMin [10] | 51.7 | 58.3 | 61.4 | 67.5 | 65.9 | 73.3 | 70.8 | 78.9 |
+| S4L (Inpainting) [45] | 48.4 | 55.1 | 58.1 | 63.6 | 63.1 | 71.1 | 68.2 | 76.9 |
+| ASSL (ours) | 57.3 | 63.6 | 64.3 | 69.8 | 68.0 | 74.7 | 72.3 | 80.0 |
+
+Table 2. Test accuracy (%) on N-UCLA dataset with $5\%$ , $15\%$ , $30\%$ and $40\%$ labels of training set. $v./c.$ denotes the number of labeled videos per class
+
+| Method | 5% (5 v./c.) | 15% (15 v./c.) | 30% (30 v./c.) | 40% (40 v./c.) |
+| --- | --- | --- | --- | --- |
+| Supervised-only | 34.1 | 37.9 | 48.9 | 58.8 |
+| Pseudolabels [16] | 35.6 | 48.9 | 60.6 | 65.7 |
+| VAT [24] | 44.8 | 63.8 | 73.7 | 73.9 |
+| VAT + EntMin [10] | 46.8 | 66.2 | 75.4 | 75.6 |
+| S4L (Inpainting) [45] | 35.3 | 46.6 | 54.5 | 60.6 |
+| ASSL (ours) | 52.6 | 74.8 | 78.0 | 78.4 |
+
+As seen from the tables, with the proposed ASSL network, we establish new state-of-the-art performances for semi-supervised 3D action recognition. Specifically, $S^4 L$ (Inpainting) performs worse than Pseudolabels, VAT and $VAT + EntMin$ , suggesting that it is inefficient to learn discriminative representations via skeleton inpainting alone, and thus semi-supervised 3D action recognition derives little benefit from such self-supervised representations. $S^4 L$ (Inpainting), though an advanced semi-supervised approach, requires effective self-supervised representations that are difficult to learn in this task. Compared with these semi-supervised methods, our benefit is larger when the number of labels is reduced. For example, with $5\%$ labels of the training set on the NTU dataset, our ASSL presents a much greater improvement over $VAT + EntMin$ . This clearly demonstrates the power of the proposed ASSL.
+
+# 4.3 Ablation Study
+
+We then investigate the effectiveness of the neighborhood consistency and adversarial training in our proposed ASSL on the NTU and N-UCLA datasets. We also analyze the effects of different neighborhood sizes and of neighborhood quality.
+
+Neighborhood Consistency We evaluate the effect of the proposed self-supervised strategy, neighborhood consistency, on the discriminativeness of motion representations, which is reflected in the final performance of semi-supervised 3D action recognition. In Table 3, the model $\text{Sup.} + \text{Inp.}$ is trained with a cross-entropy loss for labeled data and a self-inpainting loss $\mathcal{L}_{inp}$ for unlabeled data. Instead of the self-inpainting loss, $\text{Sup.} + \text{Nei.}$ explores the data relations within neighborhoods by enforcing the consistency regularization (Eqns. (3)-(4)) on unlabeled data. We can see that $\text{Sup.} + \text{Nei.}$ significantly outperforms $\text{Sup.} + \text{Inp.}$ These results justify that our neighborhood consistency learns more discriminative motion representations that are more beneficial for semi-supervised 3D action recognition.
+
+Moreover, the self-inpainting constraint [48] aims at learning temporal information of each individual unlabeled sequence. The goal of our neighborhood consistency regularization is to explore inter-sample relations within neighborhoods. We jointly learn the two features in $\text{Sup.} + \text{Inp.} + \text{Nei}$ . It can be seen
+
+Table 3. Ablation study on self-supervised learning methods, skeleton inpainting (Inp.) [48] and neighbor consistency (Nei.). Classification accuracy (\%) is reported on NTU with $5\%$ labels and N-UCLA with $15\%$ labels.
+
+| Methods | NTU 5% CS (33 v./c.) | NTU 5% CV (31 v./c.) | N-UCLA 15% (15 v./c.) |
+| --- | --- | --- | --- |
+| Supervised-only (Sup.) | 47.2 | 53.7 | 37.9 |
+| Sup. + Inp. | 48.4 | 55.1 | 46.6 |
+| Sup. + Nei. | 52.1 | 57.8 | 60.0 |
+| Sup. + Inp. + Nei. | 55.2 | 61.1 | 66.4 |
+| ASSL | 57.3 | 63.6 | 74.8 |
+
+that, compared with $\text{Sup.} + \text{Inp.}$ and $\text{Sup.} + \text{Nei.}$ , $\text{Sup.} + \text{Inp.} + \text{Nei.}$ achieves better performance on both datasets for semi-supervised 3D action recognition. This illustrates that the representations learned by our neighborhood consistency are complementary to those learned with self-inpainting, verifying the benefit of combining these two SSL techniques to capture discriminative representations from unlabeled sequences in our final model (see Eqn. (5)).
+
+Neighborhood Size We assume that a larger neighborhood size imposes stronger regularization and gives better performance. To justify this hypothesis, we investigate the effects of different neighborhood sizes in Fig. 4. As the neighborhood size increases, the performance improves and then saturates. This implies that more discriminative representations can be learned with a larger size. However, with too large a size, the neighborhood covers distant data points that have weak semantic consistency with the anchor, and hence the performance saturates.
+
+Neighborhood Quality We further examine the effect of the class consistency of the anchor's neighborhood, i.e., neighborhood quality. In Fig. 5, we report the ratio of neighbor samples sharing the same action label as the anchor throughout training. We can observe that the ratio of class-consistent neighborhoods increases and then saturates. This indicates that exploring data
+
+
+Fig. 4. Classification accuracy $(\%)$ with different neighborhood sizes on the NTU dataset with $5\%$ labels
+
+
+Fig. 5. The ratio of neighbor samples sharing the same action label as the anchor throughout training on N-UCLA dataset
+
+Table 4. Ablation study on adversarial training. Classification accuracy (%) is reported on NTU with $5\%$ labels and N-UCLA with $15\%$ labels.
+
+| Methods | Setting | NTU 5% CS (33 v./c.) | NTU 5% CV (31 v./c.) | N-UCLA 15% (15 v./c.) |
+| --- | --- | --- | --- | --- |
+| Sup. + Inp. | w/o adv | 48.4 | 55.1 | 46.6 |
+| Sup. + Inp. | w/ adv | 51.2 | 57.1 | 52.4 |
+| Sup. + Nei. | w/o adv | 52.1 | 57.8 | 60.0 |
+| Sup. + Nei. | w/ adv | 53.4 | 59.1 | 68.5 |
+| ASSL (Sup. + Inp. + Nei.) | w/o adv | 55.2 | 61.1 | 66.4 |
+| ASSL (Sup. + Inp. + Nei.) | w/ adv | 57.3 | 63.6 | 74.8 |
+
+relations is helpful for inferring the underlying class semantics, thus facilitating the clustering of samples with the same action labels.
+
+Adversarial Training The adversarial alignment is proposed to mitigate the gap between representations learned from supervised and self-supervised tasks. To evaluate the effectiveness of adversarial training for coupling self-supervision methods with semi-supervised 3D action recognition, we train several self-supervised models with and without adversarial regularization. The results are reported in Table 4. All models with adversarial regularization achieve better performance than those without. For example, on the N-UCLA dataset, $\text{ASSL}$ $w/$ adv reaches $74.8\%$ , outperforming $\text{ASSL}$ $w/o$ adv by $8.4\%$ . The improved performance in Table 4 demonstrates that it is an effective
+
+
+(a) CS-Sup.
+
+
+(b) CS-ASSL $w / o$ adv
+
+
+(c) CS-ASSLw/adv
+
+
+(d) CV-Sup.
+
+
+(e) CV-ASSL $w / o$ adv
+
+
+(f) CV-ASSL $w/$ adv
+Fig. 6. The t-SNE visualization of motion features learned by the Supervised Baseline (Sup.), $\text{ASSL}$ $w/o$ adv and $\text{ASSL}$ $w/$ adv (ours) on the NTU dataset. Different colors indicate different classes. Best viewed in color. The squares with black borders denote the labeled data; the others are unlabeled data.
+
+strategy to couple self-supervision with semi-supervised algorithms by adversarial training.
+
+To further explore this scheme, we visualize the feature distributions of $Sup.$ , $ASSL$ $w/o$ $adv$ and $ASSL$ $w/$ $adv$ using t-SNE [22] in Fig. 6. For the model $Sup.$ , trained with only the supervised objective on labeled data, the decision boundaries of its feature distributions are very ambiguous. The model $ASSL$ $w/o$ $adv$ is trained with supervised and self-supervised objectives for labeled and unlabeled data, respectively. Compared with $Sup.$ , the features of $ASSL$ $w/o$ $adv$ present tighter distributions, which benefit from self-supervised learning. However, long-tailed distributions still exist for unlabeled samples (circles). Fig. 6(c) and 6(f) clearly show the alignment between the feature distributions of labeled and unlabeled data for $ASSL$ $w/$ $adv$ , i.e., the proposed ASSL. Overall, the comparison results prove the effectiveness of adversarial training for coupling self-supervision with semi-supervised action recognition, and this drives our model to learn more discriminative features with the desired intra-class compactness and inter-class separability.
+
+# 5 Conclusions
+
+In this paper, we consider a semi-supervised learning scheme for the 3D action recognition task. The proposed ASSL effectively couples SSL with the semi-supervised algorithm via neighbor relation exploration and adversarial learning. Exploring data relations with the neighborhood consistency regularization encourages the model to learn discriminative motion representations that significantly improve the performance of this task. Moreover, we introduce a novel adversarial regularization that aligns the feature distributions of labeled and unlabeled samples and boosts the capability of generalization to unseen samples. Our experiments verify that the proposed neighbor relation exploration and adversarial learning are strongly beneficial for semi-supervised 3D action recognition. With the proposed ASSL network, we establish new state-of-the-art performances for semi-supervised 3D action recognition.
+
+# Acknowledgements
+
+This work is jointly supported by National Key Research and Development Program of China (2016YFB1001000), National Natural Science Foundation of China (61420106015, 61976214, 61721004), Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) (NO.2019JZZY010119). Jiashi Feng was partially supported by MOE Tier 2 MOE2017-T2-2-151, NUS_ECRA_FY17_P08, AISG-100E-2019-035. Chenyang Si was partially supported by the program of China Scholarships Council (No.201904910608). We thank Jianfeng Zhang for his helpful comments.
+
+# References
+
+1. Buchler, U., Brattoli, B., Ommer, B.: Improving spatiotemporal self-supervision by deep reinforcement learning. In: ECCV (2018)
+2. Cao, Z., Simon, T., Wei, S.E., Sheikh, Y.: Realtime multi-person 2d pose estimation using part affinity fields. In: CVPR (2017)
+3. Chapelle, O., Scholkopf, B., Zien, A.: Semi-supervised learning. MIT Press (2006)
+4. Cover, T., Hart, P.: Nearest neighbor pattern classification. IEEE transactions on information theory (1967)
+5. Dosovitskiy, A., Springenberg, J.T., Riedmiller, M., Brox, T.: Discriminative unsupervised feature learning with convolutional neural networks. In: NIPS (2014)
+6. Du, Y., Fu, Y., Wang, L.: Skeleton based action recognition with convolutional neural network. In: ACPR (2015)
+7. Du, Y., Wang, W., Wang, L.: Hierarchical recurrent neural network for skeleton based action recognition. In: CVPR (2015)
+8. Fernando, B., Bilen, H., Gavves, E., Gould, S.: Self-supervised video representation learning with odd-one-out networks. In: CVPR (2017)
+9. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: ICML (2015)
+10. Grandvalet, Y., Bengio, Y.: Semi-supervised learning by entropy minimization. In: NIPS (2005)
+11. Hussein, M.E., Torki, M., Gowayyed, M.A., El-Saban, M.: Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. In: IJCAI (2013)
+12. Ke, Q., Bennamoun, M., An, S., Sohel, F., Boussaid, F.: A new representation of skeleton sequences for 3d action recognition. In: CVPR (2017)
+13. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015)
+14. Kingma, D.P., Mohamed, S., Rezende, D.J., Welling, M.: Semi-supervised learning with deep generative models. In: NIPS (2014)
+15. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. In: ICLR (2017)
+16. Lee, D.H.: Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In: ICML (2013)
+17. Lee, H.Y., Huang, J.B., Singh, M., Yang, M.H.: Unsupervised representation learning by sorting sequences. In: ICCV (2017)
+18. Li, C., Zhong, Q., Xie, D., Pu, S.: Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. In: IJCAI (2018)
+19. Li, M., Chen, S., Chen, X., Zhang, Y., Wang, Y., Tian, Q.: Actional-structural graph convolutional networks for skeleton-based action recognition. In: CVPR (2019)
+20. Long, M., Cao, Z., Wang, J., Jordan, M.I.: Conditional adversarial domain adaptation. In: NIPS (2018)
+21. Luo, Z., Peng, B., Huang, D.A., Alahi, A., Fei-Fei, L.: Unsupervised learning of long-term motion dynamics for videos. In: CVPR (2017)
+22. Maaten, L.v.d., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research (2008)
+23. Misra, I., Zitnick, C.L., Hebert, M.: Shuffle and learn: unsupervised learning using temporal order verification. In: ECCV (2016)
+
+24. Miyato, T., ichi Maeda, S., Koyama, M., Ishii, S.: Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence (2018)
+25. Odena, A.: Semi-supervised learning with generative adversarial networks. In: arXiv preprint arXiv:1606.01583 (2016)
+26. Rasmus, A., Berglund, M., Honkala, M., Valpola, H., Raiko, T.: Semi-supervised learning with ladder networks. In: NIPS (2015)
+27. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. In: NIPS (2016)
+28. Shahroudy, A., Liu, J., Ng, T.T., Wang, G.: Ntu rgb+d: A large scale dataset for 3d human activity analysis. In: CVPR (2016)
+29. Shi, L., Zhang, Y., Cheng, J., Lu, H.: Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In: CVPR (2019)
+30. Si, C., Chen, W., Wang, W., Wang, L., Tan, T.: An attention enhanced graph convolutional LSTM network for skeleton-based action recognition. In: CVPR (2019)
+31. Si, C., Jing, Y., Wang, W., Wang, L., Tan, T.: Skeleton-based action recognition with spatial reasoning and temporal stack learning. In: ECCV (2018)
+32. Si, C., Jing, Y., Wang, W., Wang, L., Tan, T.: Skeleton-based action recognition with hierarchical spatial reasoning and temporal stack learning network. Pattern Recognition (2020)
+33. Srivastava, N., Mansimov, E., Salakhudinov, R.: Unsupervised learning of video representations using lstms. In: ICML (2015)
+34. Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: NIPS (2017)
+35. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: CVPR (2017)
+36. Vemulapalli, R., Arrate, F., Chellappa, R.: Human action recognition by representing 3d skeletons as points in a lie group. In: CVPR (2014)
+37. Vemulapalli, R., Chellappa, R.: Rolling rotations for recognizing human actions from 3d skeletal data. In: CVPR (2016)
+38. Wang, J., Liu, Z., Wu, Y., Yuan, J.: Mining actionlet ensemble for action recognition with depth cameras. In: CVPR (2012)
+39. Wang, J., Nie, X., Xia, Y., Wu, Y., Zhu, S.C.: Cross-view action modeling, learning, and recognition. In: CVPR (2014)
+40. Wang, J., Jiao, J., Bao, L., He, S., Liu, Y., Liu, W.: Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics. In: CVPR (2019)
+41. Wang, P., Li, Z., Hou, Y., Li, W.: Action recognition based on joint trajectory maps using convolutional neural networks. In: ACM MM (2016)
+42. Wu, Z., Xiong, Y., Yu, S.X., Lin, D.: Unsupervised feature learning via nonparametric instance discrimination. In: CVPR (2018)
+43. Xu, D., Xiao, J., Zhao, Z., Shao, J., Xie, D., Zhuang, Y.: Self-supervised spatiotemporal learning via video clip order prediction. In: CVPR (2019)
+44. Yan, S., Xiong, Y., Lin, D.: Spatial temporal graph convolutional networks for skeleton-based action recognition. In: AAAI (2018)
+45. Zhai, X., Oliver, A., Kolesnikov, A., Beyer, L.: S4l: Self-supervised semi-supervised learning. In: ICCV (2019)
+46. Zhang, J., Nie, X., Feng, J.: Inference stage optimization for cross-scenario 3d human pose estimation. In: arXiv preprint arXiv:2007.02054 (2020)
+
+47. Zhang, P., Lan, C., Xing, J., Zeng, W., Xue, J., Zheng, N.: View adaptive recurrent neural networks for high performance human action recognition from skeleton data. In: ICCV (2017)
+48. Zheng, N., Wen, J., Liu, R., Long, L., Dai, J., Gong, Z.: Unsupervised representation learning with long-term dynamics for skeleton based action recognition. In: AAAI (2018)
\ No newline at end of file
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/images.zip b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..996af12b745c22ae42ae4126ec8a116b994f76ab
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:798887cc0ee4eeda99efb1f97a394cfbfc8fb6d2013626f1c8264505b70f507f
+size 283976
diff --git a/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/layout.json b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d1b7e4980b08409f51ba8d5ebbdf735ffdbbd12e
--- /dev/null
+++ b/adversarialselfsupervisedlearningforsemisupervised3dactionrecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18292f0f19c6b504adb4edc9d03ec1a677b7fa47ce6ebaed1060f0701fd4866f
+size 440026
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_content_list.json b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..022acf0276e2905d062c4a04adeacd14141d3ae2
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38d1b48fc831a7c5c1e5c9a82809de75ae32a27f3d1b6de0fafda145d0163e97
+size 78308
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_model.json b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d26eabaec3065a1f9821ab525256d9af0807093
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03b0e8d3c366f0c3ceefea3d70fbf96ee8958610aaaa3e3f8577c38837b9c3b5
+size 94129
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_origin.pdf b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b7cc70224291689011dbac232b189258d9982ed2
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/e08c5c46-3efa-422a-ba02-95bd2d9defd9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c653653f3e4cf820326f57dabf3a3e13c0dc13c7d8e0c5cc4c1bf2dd4a99248
+size 6035611
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/full.md b/adversarialsemanticdataaugmentationforhumanposeestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e50e882ad49877795bbff2316b654da918318a6e
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/full.md
@@ -0,0 +1,312 @@
+# Adversarial Semantic Data Augmentation for Human Pose Estimation
+
+Yanrui Bin $^{1[0000-0003-2845-3928]}$ , Xuan Cao $^{2}$ , Xinya Chen $^{1[0000-0002-6537-4316]}$ , Yanhao Ge $^{2}$ , Ying Tai $^{2}$ , Chengjie Wang $^{2}$ , Jilin Li $^{2}$ , Feiyue Huang $^{2}$ , Changxin Gao $^{1[0000-0003-2736-3920]}$ , and Nong Sang $^{1[0000-0002-9167-1496]}$
+
+1 Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
+
+{yrbin,hust_cxy,cgao,nsang}@hust.edu.cn
+
+2 Tencent Youtu Lab
+
+{marscao, halege, yingtai, javoncjwang, jerolinli, garyhuang}@tencent.com
+
+Abstract. Human pose estimation is the task of localizing body keypoints from still images. State-of-the-art methods suffer from insufficient examples of challenging cases such as symmetric appearance, heavy occlusion and nearby persons. To enlarge the number of challenging cases, previous methods augmented images by cropping and pasting image patches with weak semantics, which leads to unrealistic appearance and limited diversity. We instead propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity. Furthermore, we propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations. Given an off-the-shelf pose estimation network as the discriminator, the generator seeks the most confusing transformation to increase the loss of the discriminator, while the discriminator takes the generated sample as input and learns from it. The whole pipeline is optimized in an adversarial manner. State-of-the-art results are achieved on challenging benchmarks. The code is publicly available at https://github.com/Binyr/ASDA.
+
+Keywords: Pose Estimation, Semantic Data Augmentation
+
+# 1 Introduction
+
+Human Pose Estimation (HPE) is the task of localizing body keypoints from still images. It serves as a fundamental technique for numerous computer vision applications. Recently, deep convolutional neural networks (DCNNs) [23,13,33] have achieved drastic improvements on standard benchmark datasets. However, as shown in Figure 1, they are still prone to fail in some challenging cases such as symmetric appearance, heavy occlusion, and nearby persons.
+
+
+Fig. 1. Pairs of pose predictions obtained by HRNet [23] (top) and our approach (bottom) in challenging cases: (a) symmetric appearance, (b) occlusion, (c) nearby person. Incorrect predictions are highlighted by red dotted circles. Note that the image in Figure 1 (c) {cols. 1} is an extremely challenging case, so that few of the keypoints are correctly predicted by the original HRNet. Equipped with our ASDA (bottom), HRNet becomes more robust to these challenging cases.
+
+The reason for the inferior performance of DCNN-based methods in the challenging cases is that there is an insufficient number of training examples containing these cases, which prevents a deep network from learning accurate keypoint localization for them. However, obtaining keypoint annotations is costly.
+
+One promising way to tackle this problem is data augmentation. Conventional data augmentation performs global image transformations (e.g., scaling, rotating, flipping or color jittering). Although it enhances the global translational invariance of the network and largely improves generalizability, it contributes little to solving the challenging cases. Recently, Ke et al. [13] proposed keypoint-masking training to force the network to better recognize poses from difficult training samples. They simulate keypoint occlusion by copying a background patch and pasting it onto a keypoint, or simulate multiple existing keypoints by copying a body keypoint patch and pasting it onto nearby background. However, this data augmentation method brings only marginal improvement. On the one hand, the patches are cropped from the input image itself, resulting in limited variance of the generated images. On the other hand, the cropped keypoint patch is surrounded by background, which makes the generated image unrealistic.
+
+In this paper, we propose a novel Adversarial Semantic Data Augmentation (ASDA) scheme. Human parsing is applied to the training images to get a large amount of pure body part patches. These body parts are organized, according to their semantic types, to build a semantic part pool. As the human body could be represented as a hierarchy of parts and subparts, we combine several subparts, according to the structure of the human body, to get body parts with various semantic granularity. For each input image, several parts will be randomly selected from the semantic part pool and properly pasted to the image.
+
+Further, randomly pasting parts onto the image is still suboptimal. Without taking the differences between training instances into account, it may generate ineffective examples that are too easy to improve the network. Moreover, since the augmentation parameters are sampled from static distributions [21], it can hardly match the dynamic training status of the pose estimation network. For instance, as training progresses, the network may gradually learn to associate occluded wrists while still having difficulty distinguishing legs with similar appearance.
+
+Based on the above consideration, we parameterize the parts pasting process as an affine transformation matrix and exploit a generative network to online predict the transformation parameters. The generator seeks the most confusing transformation to increase the loss of the pose estimation network and consequently generates tailored training samples. The pose estimation network acts as a discriminator, which takes the tailored samples as input and tries to learn from it. By leveraging the spatial transformer network, the whole process is differentiable and trained in an adversarial manner.
+
+Additionally, our Adversarial Semantic Data Augmentation is a universal solution that can be easily applied to different datasets and networks for human pose estimation.
+
+In summary, the main contributions are three-fold:
+
+- We design a novel Semantic Data Augmentation (SDA) which augments images by pasting segmented body parts of various semantic granularity to simulate examples that contain challenging cases.
+- We propose to utilize a generative network to dynamically adjust the augmentation parameters of the SDA and produce tailored training samples against the pose estimation network, which largely elevates the performance of the SDA.
+- We comprehensively evaluate our methods on various benchmark datasets, where they consistently outperform the state-of-the-art methods.
+
+# 2 Related Work
+
+The advances of DCNN-based human pose estimation benefit from multiple factors. We compare our method with the literature from the three most related aspects.
+
+# 2.1 Human Pose Estimation.
+
+Recently, pose estimation using DCNNs has shown superior performance. DeepPose [27] first applied deep neural networks to human pose estimation by directly regressing the 2D coordinates of keypoints from the input image. [26] proposed a heatmap representation for each keypoint and largely improved spatial generalization. Following the heatmap-based framework, various methods [29,18,22,24,23,30] focused on designing the structure of the network and indeed achieved significant improvements. However, they still suffered from an insufficient number of samples containing challenging cases. In this work, standing on the shoulders of these well-designed network structures, we propose a universal data augmentation solution to further improve the performance of human pose estimation.
+
+# 2.2 Data Augmentation.
+
+Typical data augmentation [18,4,30,23] mainly performs global spatial transformations such as scaling, rotating and flipping. These common schemes help the network resist global image deformations but fail to improve its immunity to the challenging cases. Recently, some novel data augmentations were proposed. PoseRefiner [8] transformed the keypoint annotations to mimic the most common failure cases of human pose estimators, so that the proposed refiner network could be trained well. MSR-net [13] introduced keypoint-masking, which crops and pastes patches from the input image to simulate challenging cases. Different from the existing data augmentation strategies, we propose a novel semantic data augmentation scheme that takes advantage of human semantic segmentation to obtain pure segmented body parts rather than noisy image patches. Furthermore, we compose related parts to form a set of new parts with higher semantic granularity.
+
+# 2.3 Adversarial Learning.
+
+Inspired by the minimax mechanism of Generative Adversarial Networks (GANs) [10], some literature [5] generated hard training samples in an adversarial way. Semantic Jitter [32] proposed to overcome the sparsity-of-supervision problem via synthetically generated images. A-Fast-RCNN [28] used GANs to generate deformations for object detection. Recently, GANs were introduced into human pose estimation: for example, Adversarial PoseNet [4] designed discriminators to distinguish real poses from fake ones. Jointly Optimize [21] designed an augmentation network that competes against a target network by generating hard augmentation operations. In this paper, we design a generative network that adjusts the semantic data augmentation to produce challenging training data. The generative network takes the differences between training instances into consideration and produces tailored training samples for the pose estimation network. Hard mining, an alternative strategy for feeding challenging training data to a network, is fundamentally different from ours: it can only "select" rather than "produce" challenging samples, which essentially limits its accuracy improvement on challenging cases.
+
+# 3 Methodology
+
+# 3.1 Semantic Data Augmentation
+
+Building Semantic Part Pool. In common human pose estimation schemes [18,30,25,23], data augmentations such as global scaling, rotation and flipping are usually applied, which bring global translational invariance to the network and largely improve generalizability.
+
+However, the remaining problem of the pose estimation task is the challenging cases, e.g., symmetric appearance, heavy occlusion, and nearby persons, where global spatial transformations help little. In contrast to global spatial transformations, local pixel patch manipulation provides more degrees of freedom to augment an image and is able to synthesize the challenging cases realistically.
+
+
+Fig. 2. Illustration of Semantic Data Augmentation (SDA). We first apply human parsing on training images and get a large amount of segmented body parts. The segmented body parts are organized, according to their semantics, to build a semantic part pool. For each training image, several part patches are randomly sampled and properly placed on the image to synthesize real challenging cases such as symmetric appearance (green circle), occlusion (purple circle) and nearby person (yellow circle).
+
+A human image is assembled from semantic part patches, such as arms, legs, shoes, trousers and so on. Inspired by these semantic cues, we can synthesize plentiful human image instances by elaborately combining these local part patches. Here, we propose a novel augmentation scheme, as shown in Figure 2. We first segment all human images with the human parsing method [17] to build a data pool $\mathbb{D}_{\text{part}}$ filled with various semantic body part patches. We follow the definition of the LIP dataset [9] and segment the human image into $\hat{N} = 26$ part patches. Finally, body part patches from the data pool can be properly mounted on the current person's body to synthesize challenging cases.
+
+As human parsing aims to analyze every detailed region of a person as well as different categories of clothes, LIP defines 6 body parts and 13 clothes categories at a fine semantic granularity. However, body parts of various semantic granularity appear in real-world images with complex multi-person activities. For this reason, we combine some of the parts (e.g., left shoe and left leg) to form a set of new parts with higher semantic granularity and add them to our part pool. After the cutting step, we filter out scattered segments, segments with an area below $35^2$ pixels, and segments with low semantics.
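The pool-building and filtering steps above can be sketched as follows; this is a minimal illustration under assumed data structures (the `build_part_pool` helper and the tuple layout are ours, not the authors' code):

```python
# Sketch of building the semantic part pool. In the paper, segments come from
# running the human parsing method [17] on LIP images; here they are plain tuples.
MIN_AREA = 35 ** 2  # segments with area below 35^2 pixels are discarded

def build_part_pool(segments):
    """segments: iterable of (part_label, area_in_pixels, patch) tuples."""
    pool = {}
    for label, area, patch in segments:
        if area < MIN_AREA:
            continue  # filter out small / scattered segments
        pool.setdefault(label, []).append(patch)
    return pool

segments = [("left_arm", 40 * 40, "patch_a"), ("left_shoe", 10 * 10, "patch_b")]
pool = build_part_pool(segments)  # only the left_arm patch survives the filter
```

Grouping patches by their part label is what later allows one transformation group per part type.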
+
+Augmentation Parameter Formulation. Given a semantic part patch $I_{p}$ and a training image $I_{o}$ , the placement of this semantic part can be defined by the affine transformation matrix
+
+$$
+\boldsymbol {H} = \left[ \begin{array}{c c c} s \cos r & s \sin r & t _ {x} \\ - s \sin r & s \cos r & t _ {y} \\ 0 & 0 & 1 \end{array} \right], \tag {1}
+$$
+
+where $s$ denotes the scale of the part patch, $r$ denotes the rotation, and $t_x, t_y$ is the translation in horizontal and vertical direction respectively. Thus the placement of the part patch $I_p$ can be uniquely determined by a 4D tuple $\theta(s, r, t_x, t_y)$ .
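Eq. (1) can be instantiated directly from the 4D tuple; a small sketch (the `affine_matrix` helper is an illustrative assumption):

```python
import math

def affine_matrix(s, r, tx, ty):
    """Build the placement matrix H of Eq. (1) from the tuple (s, r, tx, ty)."""
    return [
        [ s * math.cos(r), s * math.sin(r), tx],
        [-s * math.sin(r), s * math.cos(r), ty],
        [0.0,              0.0,             1.0],
    ]

H0 = affine_matrix(1.0, 0.0, 0.0, 0.0)  # the original paste configuration (1,0,0,0)
```

With the identity tuple, `H0` reduces to the 3x3 identity matrix, i.e., the patch is pasted unrotated and unscaled at the image center.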
+
+
+Fig. 3. Overview of our approach. The input image is fed to the generator $\mathcal{G}$ to obtain $\hat{N}$ groups of tailored augmentation parameters which are used to warp the randomly selected semantic part patches. Each group parameters is used to warp the patch of the specific part type. $\mathcal{G}$ seeks the most confusing transformation to increase the loss of the pose estimation network and consequently generates tailored training samples. The pose estimation network acts as a discriminator $\mathcal{D}$ , which takes the tailored sample as input and tries to learn from it. The whole pipeline is optimized in an adversarial manner.
+
+The scale of the part patch is aligned with the target person in advance according to the human bounding box. Initially, the part patch can be pasted in the center of the training image without rotation. In other words, the tuple $(1,0,0,0)$ serves as our original paste configuration.
+
+Random Semantic Augmentation. With the 4D augmentation parameters defined in Equation 1, a straightforward augmentation method can be realized by sampling a 4D tuple of augmentation parameters from a uniform distribution in the neighborhood of $(1,0,0,0)$ . $N$ different body parts are then pasted onto the target person. The value of $N$ is set manually as a hyper-parameter; a sensitivity analysis of $N$ is detailed in Section 4.5.
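Sampling the tuple from a neighborhood of $(1,0,0,0)$ might look like the following; the exact ranges are not specified in the text, so the bounds below are assumptions:

```python
import random

def sample_theta(scale_jitter=0.25, max_rot=0.5, max_shift=0.3):
    """Draw a 4D tuple (s, r, tx, ty) uniformly around (1, 0, 0, 0).
    The jitter ranges are illustrative, not the paper's actual values."""
    s = 1.0 + random.uniform(-scale_jitter, scale_jitter)  # scale near 1
    r = random.uniform(-max_rot, max_rot)                  # rotation in radians
    tx = random.uniform(-max_shift, max_shift)             # horizontal offset
    ty = random.uniform(-max_shift, max_shift)             # vertical offset
    return s, r, tx, ty
```

Such a static sampler is exactly what the adversarial scheme in the next section replaces with a learned generator.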
+
+# 3.2 Adversarial Learning
+
+Our goal is to generate confusing transformations to improve the performance of pose estimation networks. However, the augmentation parameters of SDA are sampled from the neighborhood of $(1,0,0,0)$ . On the one hand, the most confusing transformation naturally varies with different training instances and different part types. On the other hand, randomly sampling augmentation parameters from a static distribution can hardly track the dynamic training status. Thus SDA is prone to generate ineffective training samples that are so easy that they bring no positive, or even a negative, effect on network training.
+
+To overcome such issues, we propose to leverage Spatial Transformer Network (STN) to manipulate semantic parts within the network and optimize it in an adversarial manner. The main idea is to utilize an STN as the generator, which seeks the most confusing transformation to increase the pose estimation network loss. On the other hand, the pose estimation network acts as a discriminator, which tries to learn from the tailored semantic augmentation.
+
+Generate Tailored Samples. The core module of our method is an STN, which takes the target person image as input and predicts $\hat{N}$ groups of transformation parameters, each of which is used to transform the randomly selected semantic body parts of a specific part type. In our experiments, we find that allowing the network to predict the scale $s$ of the part would collapse training: the network easily predicts a large scale, so that the part completely covers the target person in the training images. Thus, we randomly sample the scale $s$ from the neighborhood of 1.0 and the generative network is mainly responsible for predicting $(r,t_x,t_y)$ . The affine transformation matrix is generated as defined in Equation 1.
+
+Each pixel in the transformed image is computed by applying a sampling kernel centered at a particular location in the original image. Mathematically, the pointwise transformation is shown in eq. (2).
+
+$$
+\left( \begin{array}{c} x _ {i} ^ {s} \\ y _ {i} ^ {s} \\ 1 \end{array} \right) = \boldsymbol {H} \left( \begin{array}{c} x _ {i} ^ {t} \\ y _ {i} ^ {t} \\ 1 \end{array} \right), \tag {2}
+$$
+
+where $(x_i^s, y_i^s)$ and $(x_i^t, y_i^t)$ denote the coordinates of the i-th pixel in the original and transformed image respectively. The transformed parts thus can be pasted to the target person image in the order they were sampled.
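The pointwise mapping of Eq. (2) amounts to inverse warping: each output pixel samples the source location given by $H$. A nearest-neighbour sketch (the paper's STN uses a differentiable bilinear sampler; `warp_patch` is our simplification):

```python
def warp_patch(patch, H, out_h, out_w):
    """For each output pixel (x_t, y_t), compute (x_s, y_s) via Eq. (2) and
    copy the nearest source pixel; out-of-bounds samples stay 0 (transparent)."""
    out = [[0] * out_w for _ in range(out_h)]
    for y_t in range(out_h):
        for x_t in range(out_w):
            x_s = H[0][0] * x_t + H[0][1] * y_t + H[0][2]
            y_s = H[1][0] * x_t + H[1][1] * y_t + H[1][2]
            xi, yi = round(x_s), round(y_s)
            if 0 <= yi < len(patch) and 0 <= xi < len(patch[0]):
                out[y_t][x_t] = patch[yi][xi]
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
warped = warp_patch([[1, 2], [3, 4]], identity, 2, 2)  # unchanged under identity
```

The bilinear variant used by the STN makes the same mapping differentiable in the parameters of $H$, which is what allows the generator to be trained by backpropagation.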
+
+It is not the first time that augmentation parameters are determined by a network. Peng et al. [21] jointly optimize conventional data augmentation (i.e., global scaling, rotating and feature erasing) and network training to enhance the global transformation invariance of the network. Our contributions are quite different from [21]. We design a novel SDA which augments images by pasting segmented body parts of various semantic granularity to simulate examples that contain challenging cases. We then further propose ASDA, which utilizes a generative network to dynamically adjust the augmentation parameters of SDA and produce tailored training samples for the pose estimation network.
+
+Joint Training. As shown in Figure 3, the network training follows the pipeline of training standard GANs [10]. The generative network, acting as the generator $\mathcal{G}$ , tries to produce challenging cases. Meanwhile, the pose estimation network, acting as the discriminator $\mathcal{D}$ , tries to learn from the generated training samples.
+
+The discriminator is supervised by ground-truth heatmaps and tries to decrease the loss $\mathcal{L}_{\mathcal{D}}$ formulated in eq. (4). On the contrary, the generator tries to increase the loss $\mathcal{L}_{\mathcal{D}}$ , so the generator loss is simply set as the negative discriminator loss, as formulated in eq. (5).
+
+$$
+I _ {a u g} = \mathcal {F} _ {a f f} \left(\mathcal {G} \left(I _ {o}\right), \left\{I _ {p} \right\}\right), \tag {3}
+$$
+
+$$
+\mathcal {L} _ {\mathcal {D}} = \left\| \mathcal {D} \left(I _ {a u g}\right) - H _ {g t} \right\| _ {\ell_ {2}}, \tag {4}
+$$
+
+$$
+\mathcal {L} _ {\mathcal {G}} = - \mathcal {L} _ {\mathcal {D}}, \tag {5}
+$$
+
+where $I_{o}$ is the original training image, $\{I_p\}$ is a set of randomly sampled part patches, $\mathcal{F}_{aff}(\cdot ,\cdot)$ denotes the affine transformation function, and $H_{gt}$ denotes the ground-truth heatmap. The network weights of $\mathcal{G}$ and $\mathcal{D}$ are updated alternately.
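Numerically, the relationship between Eqs. (4) and (5) is just a sign flip. A toy sketch with made-up, flattened heatmap values (real discriminators output dense per-keypoint heatmaps):

```python
def l2_loss(pred, gt):
    """L2 distance between a predicted and a ground-truth heatmap (flattened)."""
    return sum((p - g) ** 2 for p, g in zip(pred, gt)) ** 0.5

pred_heatmap = [0.1, 0.8, 0.2]  # D(I_aug), toy values
gt_heatmap = [0.0, 1.0, 0.0]    # H_gt
loss_d = l2_loss(pred_heatmap, gt_heatmap)  # Eq. (4): discriminator minimises this
loss_g = -loss_d                            # Eq. (5): generator maximises L_D
```

In training, $\mathcal{G}$ and $\mathcal{D}$ are updated alternately against these two opposed losses, exactly as in standard GAN training.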
+
+# 4 Experiments
+
+# 4.1 Datasets and Evaluation Protocols
+
+We conduct experiments on three representative benchmark datasets, i.e. extended Leeds Sports Poses (LSP) dataset [12], MPII human pose dataset [1] and MS COCO dataset [16].
+
+LSP Dataset. The extended LSP dataset consists of 11k training images and 1k testing images of mostly sports people. The standard Percentage of Correct Keypoints (PCK) metric is used for evaluation. It reports the percentage of keypoints that fall within a normalized distance of the ground truth, where the torso size is used for normalization.
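As an illustration of the metric, PCK@0.2 can be computed as follows (`pck` is a toy helper of ours; actual evaluation is done per keypoint type across the whole test set):

```python
def pck(preds, gts, norm, thresh=0.2):
    """Fraction of predicted keypoints within thresh * norm of the ground truth
    (norm is the torso size on LSP, the head size for MPII's PCKh)."""
    correct = 0
    for (px, py), (gx, gy) in zip(preds, gts):
        dist = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
        correct += dist <= thresh * norm
    return correct / len(preds)

# One prediction 2 px off (correct), one 30 px off (wrong) with a 100 px torso.
score = pck([(10, 10), (50, 50)], [(12, 10), (80, 50)], norm=100.0)
```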
+
+MPII Dataset. The MPII dataset includes around 25k images containing over 40k people with annotated body keypoints (28k training and 11k testing). Following [18], 3k samples are taken as a validation set to tune hyperparameters. PCK is also used to evaluate MPII, but the distance is normalized by the head size; the MPII evaluation metric is therefore referred to as PCKh.
+
+COCO Dataset. The COCO dataset involves the multi-person pose estimation task, which requires simultaneously detecting people and localizing their keypoints. The COCO training set (train2017) includes 57k images and the validation set (val2017) includes 5k images. The COCO evaluation defines the object keypoint similarity (OKS), which plays the same role as the IoU.
+
+# 4.2 Implementation Details
+
+Both the generator $\mathcal{G}$ and the discriminator $\mathcal{D}$ are off-the-shelf networks. For the generator, ResNet-18 is utilized to regress $(3\times \hat{N})$ parameters, where $\hat{N}$ is the number of human parsing classes. For the discriminator, we adopt HRNet [23].
+
+When building the semantic part pool, in order to avoid interference from different human parsing algorithms, we obtain body parts from the LIP dataset [9]. Besides our semantic data augmentation, we keep the original data augmentation adopted in HRNet, including global random flip, rotation and scale.
+
+Network training is implemented on the open platform PyTorch. For training details, we employ Adam [14] with a learning rate of 0.001 as the optimizer of both the generator and the discriminator network. We drop the learning rate by a factor of 10 at the 170-th and 200-th epochs. Training ends at 210 epochs. The HRNet is initialized with weights pre-trained on the publicly released ImageNet [7].
+
+MPII. For both the MPII training and testing sets, the body scale and center are provided. We first utilize these values to crop the image around the target person and resize it to $256 \times 256$ or $384 \times 384$ . Data augmentation includes random flip, random rotation $(-30^{\circ}, 30^{\circ})$ and random scale (0.75, 1.25).
+
+LSP. For the LSP training set, we crop images by estimating the body scale and position from the keypoint positions. The data augmentation strategy is the same as for MPII. For the LSP testing set, we perform similar cropping and resizing, but simply use the image center as the body position and estimate the body scale from the image size following [31]. We follow previous methods [29,31] and train our model by adding the MPII training set to the extended LSP training set with person-centric annotations. For both MPII and LSP, testing is conducted on six-scale image pyramids (0.8, 0.9, 1.0, 1.1, 1.2, 1.3).
+
+COCO. For the COCO training set, each ground-truth human box is extended to a fixed aspect ratio (height : width = 4 : 3) and enlarged by a rescale factor of 1.25 to contain more context. The resulting box is then cropped from the image without distorting the aspect ratio and resized to a fixed resolution, $256 \times 192$ by default. We apply random flip, random rotation $(-40^{\circ}, 40^{\circ})$ and random scale (0.7, 1.3). For the COCO testing set, we utilize the predicted bounding boxes released by Li et al. [15]. We also predict the pose of the corresponding flipped image and average the heatmaps to get the final prediction.
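The box preprocessing described above can be sketched as follows (`extend_box` is an illustrative helper under the stated 4:3 ratio and 1.25 rescale factor, not the authors' code):

```python
def extend_box(w, h, aspect=4.0 / 3.0, rescale=1.25):
    """Extend a person box to height:width = 4:3, then enlarge it by 1.25."""
    if h / w > aspect:   # too tall: widen the box
        w = h / aspect
    else:                # too wide (or exact): raise the height
        h = w * aspect
    return w * rescale, h * rescale

new_w, new_h = extend_box(100.0, 100.0)  # a square box becomes 4:3 and 1.25x larger
```

Extending to a fixed aspect ratio before cropping keeps the resize to $256 \times 192$ free of distortion.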
+
+# 4.3 Quantitative Results
+
+We report the performance of our methods on the three benchmark datasets following the public evaluation protocols. We adopt HRNet as the backbone network. "W32" and "W48" represent the channel dimensions of the high-resolution subnetworks in the last three stages of HRNet. "s7" indicates that we expand the HRNet to 7 stages by repeating the last stage of the original HRNet.
+
+Results on LSP. Table 1 presents the PCK@0.2 scores on the LSP test set. Our method outperforms the state-of-the-art methods, especially on challenging keypoints: on wrist, knee and ankle we obtain $0.8\%$ , $1.0\%$ and $1.0\%$ improvements respectively.
+
+Table 1. Comparisons on the LSP test set (PCK@0.2).
+
+| Method | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Insafutdinov et al., 2016 [11] | 97.4 | 92.7 | 87.5 | 84.4 | 91.5 | 89.9 | 87.2 | 90.1 |
+| Wei et al., 2016 [29] | 97.8 | 92.5 | 87.0 | 83.9 | 91.5 | 90.8 | 89.9 | 90.5 |
+| Bulat et al., 2016 [2] | 97.2 | 92.1 | 88.1 | 85.2 | 92.2 | 91.4 | 88.7 | 90.7 |
+| Chu et al., 2017 [6] | 98.1 | 93.7 | 89.3 | 86.9 | 93.4 | 94.0 | 92.5 | 92.6 |
+| Chen et al., 2017 [4] | 98.5 | 94.0 | 89.8 | 87.5 | 93.9 | 94.1 | 93.0 | 93.1 |
+| Yang et al., 2017 [31] | 98.3 | 94.5 | 92.2 | 88.9 | 94.4 | 95.0 | 93.7 | 93.9 |
+| Zhang et al., 2019 [33] | 98.4 | 94.8 | 92.0 | 89.4 | 94.4 | 94.8 | 93.8 | 94.0 |
+| Ours-W32 | 98.8 | 95.2 | 92.5 | 90.2 | 94.7 | 95.8 | 94.8 | 94.6 |
+
+Results on MPII. The performance of our methods on MPII test set is shown in Table 2. We can observe that Ours-W48-s7 achieves $94.1\%$ PCKh@0.5, which is the new state-of-the-art result. In particular, Ours-W48-s7 achieves $0.5\%$ , $0.5\%$ and $0.7\%$ improvements on wrist, knee and ankle which are considered as the most challenging keypoints.
+
+Results on COCO. Table 3 compares our methods with classic and SOTA methods on COCO val2017 dataset. All the methods use standard top-down paradigm which sequentially performs human detection and single-person pose
+
+Table 2. Comparisons on the MPII test set (PCKh@0.5).
+
+| Method | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Wei et al., 2016 [29] | 97.8 | 95.0 | 88.7 | 84.0 | 88.4 | 82.8 | 79.4 | 88.5 |
+| Bulat et al., 2016 [2] | 97.9 | 95.1 | 89.9 | 85.3 | 89.4 | 85.7 | 81.7 | 89.7 |
+| Newell et al., 2016 [18] | 98.2 | 96.3 | 91.2 | 87.1 | 90.1 | 87.4 | 83.6 | 90.9 |
+| Ning et al., 2018 [20] | 98.1 | 96.3 | 92.2 | 87.8 | 90.6 | 87.6 | 82.7 | 91.2 |
+| Chu et al., 2017 [6] | 98.5 | 96.3 | 91.9 | 88.1 | 90.6 | 88.0 | 85.0 | 91.5 |
+| Chen et al., 2017 [4] | 98.1 | 96.5 | 92.5 | 88.5 | 90.2 | 89.6 | 86.0 | 91.9 |
+| Yang et al., 2017 [31] | 98.5 | 96.7 | 92.5 | 88.7 | 91.1 | 88.6 | 86.0 | 92.0 |
+| Xiao et al., 2018 [30] | 98.5 | 96.6 | 91.9 | 87.6 | 91.1 | 88.1 | 84.1 | 91.5 |
+| Ke et al., 2018 [13] | 98.5 | 96.8 | 92.7 | 88.4 | 90.6 | 89.4 | 86.3 | 92.1 |
+| Nie et al., 2018 [19] | 98.6 | 96.9 | 93.0 | 89.1 | 91.7 | 89.0 | 86.2 | 92.4 |
+| Tang et al., 2018 [25] | 98.4 | 96.9 | 92.6 | 88.7 | 91.8 | 89.4 | 86.2 | 92.3 |
+| Sun et al., 2019 [23] | 98.6 | 96.9 | 92.8 | 89.0 | 91.5 | 89.0 | 85.7 | 92.3 |
+| Zhang et al., 2019 [33] | 98.6 | 97.0 | 92.8 | 88.8 | 91.7 | 89.8 | 86.6 | 92.5 |
+| Su et al., 2019 [22]* | 98.7 | 97.5 | 94.3 | 90.7 | 93.4 | 92.2 | 88.4 | 93.9 |
+| Ours-W48-s7* | 98.9 | 97.6 | 94.6 | 91.2 | 93.1 | 92.7 | 89.1 | 94.1 |
+
+\* indicates that the network takes images of size $384 \times 384$ as input.
+
+estimation. Our model outperforms SIM [30] and HRNet [23] by $4.8\%$ and $0.8\%$ AP for input size $256 \times 192$ , respectively. When the input size is $384 \times 288$ , our model achieves better AP than SIM [30] and HRNet [23] by $4.5\%$ and $0.9\%$ .
+
+Table 3. Comparison with SOTA methods on COCO val2017 dataset. Their results are cited from Chen et al. [3] and Sun et al. [23].
+
+| Method | Backbone | Input Size | Params | GFLOPs | AP | \( AP^{50} \) | \( AP^{75} \) | \( AP^M \) | \( AP^L \) | AR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Hourglass [18] | HG-8stage | 256 × 192 | 25.1M | 14.3 | 66.9 | - | - | - | - | - |
+| CPN [3] | ResNet-50 | 256 × 192 | 27.0M | 6.20 | 69.4 | - | - | - | - | - |
+| CPN [3] | ResNet-50 | 384 × 288 | 27.0M | 13.9 | 71.6 | - | - | - | - | - |
+| SIM [30] | ResNet-50 | 256 × 192 | 34.0M | 8.9 | 70.4 | 88.6 | 78.3 | 67.1 | 77.2 | 76.3 |
+| SIM [30] | ResNet-50 | 384 × 288 | 34.0M | 20.0 | 72.2 | 89.3 | 78.9 | 68.1 | 79.7 | 77.6 |
+| HRNet [23] | HRNet-W32 | 256 × 192 | 28.5M | 7.10 | 74.4 | 90.5 | 81.9 | 70.8 | 81.0 | 79.8 |
+| HRNet [23] | HRNet-W32 | 384 × 288 | 28.5M | 16.0 | 75.8 | 90.6 | 82.7 | 71.9 | 82.8 | 81.0 |
+| Ours | HRNet-W32 | 256 × 192 | 28.5M | 7.10 | 75.2 | 91.0 | 82.4 | 72.2 | 81.3 | 80.4 |
+| Ours | HRNet-W32 | 384 × 288 | 28.5M | 16.0 | 76.7 | 91.2 | 83.5 | 73.2 | 83.4 | 81.5 |
+
+# 4.4 Qualitative Results
+
+Figure 4 displays some pose estimation results obtained by HRNet without (left side) and with (right side) our ASDA. We can observe that the original HRNet is confused by symmetric appearance (e.g., the left and right legs in $\{rows.1, cols.3\}$ ), heavy occlusion (e.g., the right ankle in $\{rows.1, cols.2\}$ ) and nearby persons (e.g., multiple similar legs and arms in $\{rows.1, cols.1\}$ ). Note that the image in $\{rows.1, cols.1\}$ is an extremely challenging case, so that few of the keypoints are correctly predicted by the original HRNet. By generating tailored semantic augmentation for each input image, our ASDA largely improves the performance of the original HRNet on these extremely challenging cases. Figure 5 shows some pose estimation results obtained by our approach on the COCO test dataset.
+
+Fig. 4. Comparisons of the HRNet [23] trained without (left side) and with (right side) our Adversarial Semantic Data Augmentation.
+
+# 4.5 Ablation Studies
+
+In this section, we conduct ablative analysis on the validation set of the MPII dataset. The baseline is HRNet-W32 [23], which achieves $90.3\%$ PCKh@0.5 with flipping and single-scale testing. During baseline training, the data augmentation adopts global spatial transformations including random rotation $(-30^{\circ}, 30^{\circ})$ , random scale (0.75, 1.25) and flipping. The results are shown in Table 4 (a).
+
+The MPII dataset provides visibility annotations for each keypoint, which enables us to conduct ablative analysis on the subset of invisible keypoints and study the effect of our method on improving the occlusion cases. The results are shown in Table 4 (b).
+
+With Vs. Without Semantic Data Augmentation. We first evaluate the effect of the Semantic Data Augmentation scheme. As shown in Table 4 (a), +SDA outperforms the Baseline by a large margin of $0.5\%$ . Note that our SDA scheme consistently achieves improvements on all keypoints. In particular, SDA achieves $0.9\%$ , $0.5\%$ and $0.4\%$ improvements on elbow, wrist and ankle respectively, which are considered the most challenging keypoints to localize. In Table 4 (b), we can observe an even more significant improvement brought by SDA. These results demonstrate that the semantic local pixel manipulation of our SDA effectively augments the training data and elevates the performance of pose estimation.
+
+Table 4. Ablation studies on the MPII validation set (PCKh@0.5)
+(a) Results evaluated on all keypoints
+
+| Method | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Baseline | 97.1 | 95.9 | 90.3 | 86.4 | 89.1 | 87.1 | 83.3 | 90.3 |
+| +ROR | 97.0 | 96.2 | 90.9 | 86.9 | 89.3 | 86.9 | 82.9 | 90.5 |
+| +SDA (Ours) | 97.2 | 96.3 | 91.2 | 86.9 | 90.0 | 87.2 | 83.7 | 90.8 |
+| +ASDA (Ours) | 97.6 | 96.6 | 91.5 | 87.3 | 90.5 | 87.5 | 84.5 | 91.2 |
+
+(b) Results evaluated only on invisible keypoints
+
+| Method | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Baseline | - | 90.9 | 73.6 | 61.9 | 81.8 | 71.7 | 61.8 | 74.2 |
+| +ROR | - | 92.0 | 74.9 | 63.2 | 82.7 | 71.6 | 61.6 | 74.9 |
+| +SDA (Ours) | - | 91.8 | 75.1 | 63.0 | 84.1 | 71.7 | 63.3 | 75.4 |
+| +ASDA (Ours) | - | 92.7 | 75.1 | 65.1 | 84.8 | 71.8 | 63.4 | 76.1 |
+
+Baseline: The original HRNet-W32 [23]. The following experiments are all based on this baseline.
++ROR: Adopt the data augmentation of Randomly Occluding and Repeating (ROR) keypoint patches [13] when training HRNet-W32.
++SDA: Adopt our Semantic Data Augmentation (SDA) scheme when training HRNet-W32; the augmentation parameters are sampled randomly from a uniform distribution in the neighborhood of $(1,0,0,0)$ .
++ASDA: Adopt our Adversarial Semantic Data Augmentation (ASDA) scheme when training HRNet-W32; the augmentation parameters are adjusted online by the generative network in an adversarial way.
+
+Both SDA and Randomly Occluding and Repeating (ROR) the keypoint patches [13] augment training data by manipulating local pixels. However, ROR achieves a $0.3\%$ lower average PCKh@0.5 than our SDA. Moreover, ROR even brings negative effects to the baseline model when localizing keypoints such as knee and ankle. These results demonstrate that the various segmented body parts with high semantics used in our SDA play a key role in improving pose estimation performance.
+
+Random Vs. Adversarial Augmentation. Based on the SDA scheme, we find that Adversarial SDA can further improve accuracy by adjusting the augmentation parameters online. As shown in Table 4 (a), +ASDA consistently outperforms +SDA on all keypoints and achieves a $0.4\%$ higher average PCKh@0.5. For invisible keypoints, ASDA outperforms the baseline and SDA by $1.9\%$ and $0.7\%$ PCKh@0.5 respectively. As discussed in Sec. 3.2, our ASDA further improves performance thanks to the adversarial learning strategy, which generates tailored samples for training the pose estimation network.
+
+Sensitivity Analysis. The part number $N$ is a hyper-parameter configured manually. We train with different values of $N$ and report the PCKh@0.5 scores on the MPII validation set in Table 5. With up to 3 parts, the performance stays roughly the same. From 4 parts onward, the performance drops sharply as the part number increases. We infer that too many parts generate overly hard training samples, which mislead the network into learning unrealistic cases.
+
+Table 5. Ablation study on the number of body parts $N$.
+
+| Part Num | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 97.6 | 96.6 | 91.5 | 87.3 | 90.5 | 87.5 | 84.5 | 91.2 |
+| 2 | 97.5 | 96.6 | 91.5 | 86.9 | 90.1 | 87.4 | 83.8 | 91.0 |
+| 3 | 97.3 | 96.8 | 91.3 | 86.9 | 90.6 | 87.4 | 83.6 | 91.0 |
+| 4 | 97.4 | 96.3 | 91.1 | 86.2 | 90.3 | 87.0 | 83.6 | 90.7 |
+| 6 | 97.2 | 96.2 | 90.4 | 85.2 | 90.0 | 86.0 | 82.1 | 90.1 |
+| 8 | 97.0 | 95.7 | 89.3 | 83.8 | 89.3 | 85.6 | 81.4 | 89.4 |
+
+Applying to Different Networks. As shown in Table 6, we report the performance of different networks trained with our ASDA. By applying ASDA, these state-of-the-art networks consistently achieve improvements, especially on challenging keypoints such as the elbow, wrist, knee and ankle. This result demonstrates the generality of our ASDA scheme.
+
+Table 6. Results of applying ASDA to different networks.
+
+| Method | Hea. | Sho. | Elb. | Wri. | Hip. | Kne. | Ank. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 2-Stacked HG | 96.6 | 95.4 | 89.7 | 84.7 | 88.7 | 84.1 | 80.7 | 89.1 |
+| 2-Stacked HG+ASDA | 96.8 | 95.8 | 90.5 | 85.5 | 89.3 | 85.5 | 81.9 | 89.8 |
+| 8-Stacked HG | 96.9 | 95.9 | 90.8 | 86.0 | 89.5 | 86.5 | 82.9 | 90.2 |
+| 8-Stacked HG+ASDA | 97.5 | 96.5 | 91.6 | 87.3 | 90.5 | 87.7 | 83.5 | 91.1 |
+| SIM-ResNet50 | 96.4 | 95.3 | 89.0 | 83.2 | 88.4 | 84.0 | 79.6 | 88.5 |
+| SIM-ResNet50+ASDA | 96.8 | 95.8 | 89.7 | 83.9 | 89.5 | 85.1 | 80.5 | 89.3 |
+| SIM-ResNet101 | 96.9 | 95.9 | 89.5 | 84.4 | 88.4 | 84.5 | 80.7 | 89.1 |
+| SIM-ResNet101+ASDA | 97.2 | 95.9 | 90.0 | 85.2 | 89.7 | 86.0 | 82.3 | 90.0 |
+| HRNet-W32 | 97.1 | 95.9 | 90.3 | 86.4 | 89.1 | 87.1 | 83.3 | 90.3 |
+| HRNet-W32+ASDA | 97.6 | 96.6 | 91.5 | 87.3 | 90.5 | 87.5 | 84.5 | 91.2 |
+| HRNet-W48 | 97.2 | 96.1 | 90.8 | 86.3 | 89.3 | 86.6 | 83.1 | 90.4 |
+| HRNet-W48+ASDA | 97.3 | 96.5 | 91.7 | 87.9 | 90.8 | 88.2 | 84.2 | 91.4 |
+
+Comparison with methods that also use parsing information. Nie et al. [19] also use parsing information and improve the 8-stacked hourglass from $90.2\%$ to $91.0\%$ on the MPII validation set, a slightly smaller improvement than that of ASDA, which raises the 8-stacked hourglass from $90.2\%$ to $91.1\%$. In addition, [19] uses a 2-stacked hourglass as a Parsing Encoder to predict the parameters of an adaptive convolution, which introduces extra parameters and computational burden. Moreover, both the parsing and keypoint annotations of LIP are used to train the Parsing Encoder, whereas our ASDA uses only the parsing annotations.
+
+
+Fig. 5. Examples of estimated poses on the COCO test set.
+
+# 5 Conclusions
+
+In this work, we proposed Semantic Data Augmentation (SDA), which synthesizes challenging cases by locally pasting segmented body parts of various semantic granularity. Building on SDA, we further proposed Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to adjust the augmentation parameters online for each individual training image in an adversarial way. Improved results on public benchmarks and comprehensive experiments demonstrate the effectiveness of our methods. Our ASDA is general and independent of the network architecture. We hope our work provides inspiration on how to generate tailored training samples for other tasks.
+
+Acknowledgement. This work was supported by the National Natural Science Foundation of China under grant 61871435 and the Fundamental Research Funds for the Central Universities no. 2019kfyXKJC024.
+
+# References
+
+1. Andriluka, M., Pishchulin, L., Gehler, P., Schiele, B.: 2d human pose estimation: New benchmark and state of the art analysis. In: CVPR. pp. 3686-3693 (2014)
+2. Bulat, A., Tzimiropoulos, G.: Human pose estimation via convolutional part heatmap regression. In: ECCV. pp. 717-732. Springer (2016)
+3. Chen, Y., Wang, Z., Peng, Y., Zhang, Z., Yu, G., Sun, J.: Cascaded pyramid network for multi-person pose estimation. In: CVPR. pp. 7103-7112 (2018)
+4. Chen, Y., Shen, C., Wei, X.S., Liu, L., Yang, J.: Adversarial posenet: A structure-aware convolutional network for human pose estimation. In: ICCV. pp. 1212-1221 (2017)
+5. Chu, W., Hung, W.C., Tsai, Y.H., Cai, D., Yang, M.H.: Weakly-supervised caricature face parsing through domain adaptation. ICIP (2019)
+6. Chu, X., Yang, W., Ouyang, W., Ma, C., Yuille, A.L., Wang, X.: Multi-context attention for human pose estimation. In: CVPR. pp. 1831-1840 (2017)
+7. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)
+8. Fieraru, M., Khoreva, A., Pishchulin, L., Schiele, B.: Learning to refine human pose estimation. In: CVPR Workshops. pp. 205-214 (2018)
+9. Gong, K., Liang, X., Zhang, D., Shen, X., Lin, L.: Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In: CVPR. pp. 932-940 (2017)
+10. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: NIPS. pp. 2672-2680 (2014)
+11. Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M., Schiele, B.: Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In: ECCV. pp. 34-50. Springer (2016)
+12. Johnson, S., Everingham, M.: Clustered pose and nonlinear appearance models for human pose estimation. In: BMVC. vol. 2, p. 5 (2010)
+13. Ke, L., Chang, M.C., Qi, H., Lyu, S.: Multi-scale structure-aware network for human pose estimation. In: ECCV. pp. 713-728 (2018)
+14. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015)
+15. Li, W., Wang, Z., Yin, B., Peng, Q., Du, Y., Xiao, T., Yu, G., Lu, H., Wei, Y., Sun, J.: Rethinking on multi-stage networks for human pose estimation. arXiv preprint arXiv:1901.00148 (2019)
+16. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755 (2014)
+17. Liu, T., Ruan, T., Huang, Z., Wei, Y., Wei, S., Zhao, Y., Huang, T.: Devil in the details: Towards accurate single and multiple human parsing. arXiv preprint arXiv:1809.05996 (2018)
+18. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: ECCV. pp. 483-499. Springer (2016)
+19. Nie, X., Feng, J., Zuo, Y., Yan, S.: Human pose estimation with parsing induced learner. In: CVPR (2018)
+20. Ning, G., Zhang, Z., He, Z.: Knowledge-guided deep fractal neural networks for human pose estimation. IEEE Transactions on Multimedia 20(5), 1246-1259 (2018)
+21. Peng, X., Tang, Z., Yang, F., Feris, R.S., Metaxas, D.: Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation. In: CVPR (2018)
+
+22. Su, Z., Ye, M., Zhang, G., Dai, L., Sheng, J.: Cascade feature aggregation for human pose estimation. arXiv preprint arXiv:1902.07837 (2019)
+23. Sun, K., Xiao, B., Liu, D., Wang, J.: Deep high-resolution representation learning for human pose estimation. arXiv preprint arXiv:1902.09212 (2019)
+24. Tang, W., Wu, Y.: Does learning specific features for related parts help human pose estimation? In: CVPR. pp. 1107-1116 (2019)
+25. Tang, W., Yu, P., Wu, Y.: Deeply learned compositional models for human pose estimation. In: ECCV. pp. 190-206 (2018)
+26. Tompson, J.J., Jain, A., LeCun, Y., Bregler, C.: Joint training of a convolutional network and a graphical model for human pose estimation. In: NIPS. pp. 1799-1807 (2014)
+27. Toshev, A., Szegedy, C.: Deeppose: Human pose estimation via deep neural networks. In: CVPR. pp. 1653-1660 (2014)
+28. Wang, X., Shrivastava, A., Gupta, A.: A-fast-rcnn: Hard positive generation via adversary for object detection. In: CVPR. pp. 2606-2615 (2017)
+29. Wei, S.E., Ramakrishna, V., Kanade, T., Sheikh, Y.: Convolutional pose machines. In: CVPR. pp. 4724-4732 (2016)
+30. Xiao, B., Wu, H., Wei, Y.: Simple baseline for human pose estimation and tracking. In: ECCV. pp. 466-481 (2018)
+31. Yang, W., Li, S., Ouyang, W., Li, H., Wang, X.: Learning feature pyramids for human pose estimation. In: ICCV. pp. 1281-1290 (2017)
+32. Yu, A., Grauman, K.: Semantic jitter: Dense supervision for visual comparisons via synthetic images. In: ICCV. pp. 5570-5579 (2017)
+33. Zhang, H., Ouyang, H., Liu, S., Qi, X., Shen, X., Yang, R., Jia, J.: Human pose estimation with spatial contextual information. arXiv preprint arXiv:1901.01760 (2019)
\ No newline at end of file
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/images.zip b/adversarialsemanticdataaugmentationforhumanposeestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7ac5b842ddb957b98c3e311a33801d97eb5fbb7b
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f099c211d697b7c9a16d73c9b7b1d5420f5fc6f1a813cc5756003a84a7c4e61
+size 942062
diff --git a/adversarialsemanticdataaugmentationforhumanposeestimation/layout.json b/adversarialsemanticdataaugmentationforhumanposeestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..49c8e52e877f3f38ae8142406f578ead7b878dd0
--- /dev/null
+++ b/adversarialsemanticdataaugmentationforhumanposeestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac311c95289d110b12d4204f7c7b2830a4ee381c69de31fbb627c462b910c80a
+size 395564
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_content_list.json b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e5e586ffd9ad446b4f58d7893d52f773020acf92
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40226f1479b7e4e1524c1dc9d42a45be0a378066b1f89376ed5fc26dc3147729
+size 70694
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_model.json b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c9af7a81ff9dff0acfc506837afd2bca72733155
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b891833a086f976fe813bf82ed7f83cb7a6ca81eb5af47132455b9c85f53b98
+size 84485
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_origin.pdf b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2f838f119a34a04271328d17314498eb7b559557
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/afba166e-8a96-4337-9066-a17fe9c8d923_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:958d07073dcb532813e8d434d4f91a4e748b2fd05f08e6e53f454de4e52590cb
+size 1010001
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/full.md b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ba1740707965aa7b409ab2759f33750ec2ed389
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/full.md
@@ -0,0 +1,242 @@
+# Adversarial Training with Bi-directional Likelihood Regularization for Visual Classification
+
+Weitao Wan $^{1}$ , Jiansheng Chen $^{1\star}$ , and Ming-Hsuan Yang $^{2,3}$
+
+$^{1}$ Department of Electronic Engineering, Tsinghua University
+$^{2}$ Department of EECS, UC Merced $^{3}$ Google Research
+
+Abstract. Neural networks are vulnerable to adversarial attacks. In practice, adversarial training is by far the most effective approach for enhancing the robustness of neural networks against adversarial examples. The current adversarial training approach aims to maximize the posterior probability for adversarially perturbed training data. However, such a training strategy ignores the fact that the clean data and adversarial examples should have intrinsically different feature distributions, even though they are assigned the same class label under adversarial training. We propose that this problem can be solved by explicitly modeling the deep feature distribution, for example as a Gaussian Mixture, and properly introducing a likelihood regularization into the loss function. Specifically, by simultaneously maximizing the likelihood of features of clean data and minimizing that of adversarial examples, the neural network learns a more reasonable feature distribution in which the intrinsic difference between clean data and adversarial examples can be explicitly preserved. We call this new robust training strategy the Adversarial Training with Bi-directional Likelihood Regularization (ATBLR) method. Extensive experiments on various datasets demonstrate that the ATBLR method facilitates robust classification of both clean data and adversarial examples, and performs favorably against previous state-of-the-art methods for robust visual classification.
+
+Keywords: Adversarial training, feature distribution, optimization.
+
+# 1 Introduction
+
+A key challenge in deploying neural networks for visual classification is their vulnerability to adversarial examples, which has attracted increasing concern in recent years [4,18,16,13]. Visual adversarial examples are crafted by adding small perturbations, imperceptible to human eyes, onto clean data, causing neural networks to produce wrong predictions. In addition, research has demonstrated that adversarial examples can be transferable across different models [11,20], i.e. adversarial examples generated on one model can successfully attack other models. As such, the existence of adversarial examples has become a serious threat to the safety of neural networks.
+
+
+Fig. 1. Illustration of the expected feature space of (a) adversarial training and (b) the proposed ATBLR method. Adversarial examples are generated to resemble other classes. But existing adversarial training methods ignore their intrinsically different feature distribution and treat them equally with the clean data. The proposed method addresses this issue by optimizing not only the class probability distribution but also the likelihood of the feature distribution.
+
+
+
+Improving the robustness of neural networks has become a critical issue in addition to increasing classification accuracy. Numerous algorithms have been proposed to address this issue, among which the most effective approaches are based on adversarial training [4,12]. The basic idea of adversarial training is to generate adversarial examples based on the latest model weights during training and feed them into the model for training. The adversarial examples are assigned the same class label as their source images. Madry et al. [12] propose a more generic form of adversarial training, formulated as a saddle-point optimization problem. However, adversarial training only aims to optimize the posterior probability, without considering the feature distribution. The feature space of adversarial training is illustrated in Fig. 1(a). This paper focuses on the deepest features of neural networks, e.g. the output of the global average pooling layer after the last convolutional layer in ResNet [5]. Fig. 1(a) shows the expected feature space of adversarial training, but it is difficult to achieve in practice because existing adversarial training methods ignore the intrinsic difference between the feature distributions of clean data and adversarial examples. For instance, suppose a clean sample from class 0 is adversarially perturbed into class 1. Previous research [6] shows that such an adversarial example contains highly predictive but non-robust features for class 1. As such, its features should follow a different distribution than the features of clean data from class 0. However, the adversarial training scheme ignores its similarity to class 1 and forces the neural network to treat it the same way as clean data from class 0 by assigning them the same target class distribution, which is typically a one-hot distribution of the ground-truth (GT) class. This unreasonable underlying constraint in existing adversarial training methods leads to sub-optimal classification performance.
+
+To address this issue, we propose to optimize the neural networks so that not only are the clean data and the corresponding adversarial examples classified into the same class, but their feature distributions are also explicitly encouraged to be different. The proposed method is illustrated in Fig. 1(b). To achieve this, we explicitly learn the feature distribution of the clean data by incorporating a Gaussian Mixture Model into the network. More specifically, for the visual classification task, the features belonging to each class correspond to one Gaussian component, whose mean is a trainable parameter updated by stochastic gradient descent and whose covariance matrix is reduced to the identity matrix for simplicity. As such, the entire network can be trained end-to-end. We then adopt the likelihood regularization term introduced in [21] to encourage the extracted features of clean data to follow the learned Gaussian Mixture distribution. We note that the likelihood regularization in this paper is intrinsically different from that in [21] because our method takes two different types of inputs, i.e. clean data and adversarial examples, and optimizes the likelihood term in opposite directions for the two. For the clean data, the objective is to maximize the likelihood, since we aim to learn its feature distribution through training. For the adversarial examples, since they should follow a distribution different from that of the clean data, the objective is to minimize the likelihood. The common objective for both the clean data and adversarial examples is the cross-entropy loss between the posterior probability and the target class. We refer to the proposed method as Adversarial Training with Bi-directional Likelihood Regularization (ATBLR). We present a comparison study in Fig. 3, Section 4.3 to demonstrate that the proposed bi-directional likelihood regularization leads to different feature distributions for the clean data and adversarial examples.
+
+Our method can be implemented efficiently, without increasing the number of trainable parameters. The classification layer in a neural network is typically a fully-connected layer with $K \times C$ trainable parameters, where $K$ is the number of object classes and $C$ is the dimension of the features; it outputs the class distribution based on the features. Our method replaces it with a Gaussian Mixture Model without adding extra trainable parameters. Since this paper focuses on the visual classification task, the deepest features belonging to each class can be assigned to one Gaussian component. As such, the GMM also requires $K \times C$ trainable parameters in total for the $K$ Gaussian components when the covariance is reduced to the identity matrix, as mentioned above. The likelihood regularization, which is essentially the $l_{2}$ distance between features and the corresponding Gaussian mean, adds very little computational overhead to the neural network.
+
+The main contributions of this paper are summarized as follows:
+
+- We propose the bi-directional likelihood regularization on the conventional adversarial training method based on the learned feature distribution. Features of the clean data and adversarial examples are explicitly encouraged to follow different distributions.
+
+- We improve both the robustness of neural networks and the classification performance on clean data without adding extra trainable parameters.
+- We evaluate the proposed method on various datasets including MNIST [10], CIFAR-10 and CIFAR-100 [8] for different adversarial attacks. Experimental results show that the proposed algorithm performs favorably against the state-of-the-art methods.
+
+# 2 Related Work
+
+# 2.1 Adversarial Attacks
+
+Adversarial examples are crafted data with small perturbations that cause misclassification in neural networks [18]. Numerous algorithms have been developed to generate adversarial examples.
+
+Fast Gradient Sign Method (FGSM). Goodfellow et al. [4] propose the Fast Gradient Sign Method (FGSM) which uses a single-step perturbation along the gradient of the loss function $\mathcal{L}$ with respect to the input image $x$ . The adversarial example $x_{adv}$ is computed by $x_{adv} = x + \epsilon \cdot sign(\nabla_x\mathcal{L}(x,y))$ . To perform a targeted attack, we replace the true label $y$ with a wrong target label $t$ and reverse the sign of the gradient by $x_{adv} = x - \epsilon \cdot sign(\nabla_x\mathcal{L}(x,t))$ .
+
+Basic Iterative Method (BIM). Kurakin et al. [9] extend the single-step approach to an iterative attack which updates the adversarial example at each iteration using the FGSM formulation and clips the resulting image to constrain it within the $\epsilon$-ball around the original input $x$. The adversarial example is computed by $x_{adv}^{i} = \text{clip}_{x,\epsilon}(x_{adv}^{i-1} + \alpha \cdot \text{sign}(\nabla_{x}\mathcal{L}(x_{adv}^{i-1},y)))$, where $\alpha$ is the step size for each iteration.
+
+Projected Gradient Descent (PGD). Madry et al. [12] discover that stronger attacks can be generated by starting the iterative search of the BIM method from a random initialization point within the allowed norm ball centered at the clean data. This method is called the Projected Gradient Descent (PGD) method.
+
+Carlini & Wagner (C&W). Carlini and Wagner [3] propose the C&W algorithm, an optimization-based attack. An auxiliary variable $\omega$ is introduced to reparameterize the adversarial example by $x_{adv} = \frac{1}{2} (\tanh (\omega) + 1)$, and the attack solves $\min_{\omega}\| \frac{1}{2} (\tanh (\omega) + 1) - x\| _2^2 +c\cdot f(\frac{1}{2} (\tanh (\omega) + 1))$. The loss weight $c$ is adjusted by binary search, and $f(x) = \max (\max \{Z(x)_i:i\neq t\} -Z(x)_t, - \kappa)$, in which $Z(x)_t$ is the logit for the target class $t$ and the non-negative parameter $\kappa$ controls the confidence of the adversarial example.
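The single-step and iterative attacks above can be sketched on a toy differentiable model. The following is a minimal, dependency-free illustration against a logistic-regression "network" (the model, $\epsilon$, and step size are illustrative; real attacks run against the trained deep network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_x(x, y, w, b):
    # d/dx of the cross-entropy loss for a logistic model: (p - y) * w
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, w, b, eps):
    # single step along the sign of the input gradient
    return x + eps * np.sign(grad_loss_x(x, y, w, b))

def pgd(x, y, w, b, eps, alpha, steps):
    # iterative variant; each step is projected back into the eps-ball
    # (random initialization, used by PGD proper, is omitted for brevity)
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss_x(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

For image inputs one would additionally clip `x_adv` to the valid pixel range.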
+
+# 2.2 Defensive Methods
+
+With the development of adversarial attack methods, defenses against them have attracted increasing attention in recent years. The defensive distillation approach [14] aims to train a substitute model with smoother gradients to increase the difficulty of generating adversarial examples. Nevertheless, it is not effective against optimization-based attack methods such as [3]. Song et al. [17] propose to model the image distribution in pixel space and restore an adversarial example to a clean one by maximizing its likelihood. However, it is difficult to model the distribution effectively in pixel space, where there is much noise and the dimension is much larger than in feature space. The adversarial training method [4,18] generates adversarial examples during training and uses them as training data to improve robustness against adversarial examples. However, this method is shown to be vulnerable to iterative attacks [9]. Tramer et al. [19] propose to improve the performance of adversarial training by generating adversarial examples using an ensemble of neural networks. Madry et al. [12] propose a more general framework for adversarial training and use random initialization before searching for adversarial examples to deal with iterative attack methods. Wong et al. [23] incorporate linear programming into training to minimize the loss for the worst case within the allowed perturbation around the clean data. However, the test accuracy on clean data is severely compromised. Xie et al. [24] develop a network architecture which uses a non-local mean filter to remove the noise in the feature maps of adversarial examples. Song et al. [15] adopt domain adaptation algorithms in adversarial training to learn domain-invariant representations across the clean and adversarial domains. However, these methods are all based on the original adversarial training method and do not address the issues concerning feature distributions in adversarial training discussed above.
+
+# 3 Proposed Algorithm
+
+# 3.1 Preliminaries
+
+Adversarial Training. This paper focuses on the visual classification task. Suppose the number of object classes in the dataset is $K$ . Denote the set of training samples as $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ , in which $x_i \in \mathbb{R}^{H \times W \times 3}$ is the image, $y_i \in \{1, 2, \dots, K\}$ is the class label and $N$ is the number of training samples. Denote the one-hot label vector corresponding to label $y_i$ as $\pmb{y}_i$ . Let $f_\theta(x): \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^K$ denote a neural network parameterized by $\theta$ . The network outputs the class probability distribution given an input image. Then the classification loss function for the training pair $(x_i, y_i)$ is
+
+$$
+\mathcal {L} _ {c l s} \left(x _ {i}, y _ {i}; \theta\right) = - \boldsymbol {y} _ {i} \log f _ {\theta} \left(x _ {i}\right). \tag {1}
+$$
+
+The adversarial training method [12] is formulated as a min-max optimization problem, which is expressed as
+
+$$
+\min _ {\theta} \max _ {\| \delta_ {i} \| _ {\infty} \leq \epsilon} \frac {1}{N} \sum_ {(x _ {i}, y _ {i}) \sim \mathcal {D}} \mathcal {L} _ {c l s} (x _ {i} + \delta_ {i}, y _ {i}; \theta). \tag {2}
+$$
+
+The maximizer of the inner problem can be approximately found by using $k$ steps of the PGD attack or a single-step FGSM attack. The adversarial examples are crafted by adding the inner maximizer to the clean data. The min-max problem is solved by stochastic gradient descent by feeding the adversarial examples as inputs to the neural network.
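As a concrete illustration, the min-max scheme of Eq. 2 can be sketched on a toy logistic-regression model. This is a hedged sketch, not the paper's implementation: the data, $\epsilon$, and learning rate are illustrative, and the inner maximizer uses the single-step FGSM option mentioned above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Toy adversarial training: inner FGSM max, outer SGD min."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # inner maximization: one FGSM step per sample (grad_x L = (p - y) w)
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
        # outer minimization: gradient step on the adversarial batch
        g = sigmoid(X_adv @ w + b) - y
        w -= lr * X_adv.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# linearly separable toy data (illustrative)
X = np.array([[2.0, 0.0], [1.5, 0.5], [-2.0, 0.0], [-1.5, -0.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adv_train(X, y)
```

Replacing the single FGSM step with $k$ projected steps yields the PGD-based variant of [12].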
+
+# 3.2 Modeling the Feature Distribution
+
+As discussed in Section 1, our motivation is to account for the difference between the feature distributions of clean data and adversarial examples. We model the feature distribution with an effective and tractable model, the Gaussian Mixture Model. For simplicity, the covariance matrix is reduced to the identity matrix, which is not only efficient but also helps reduce redundancy across feature dimensions. Besides, we assume the prior probability of each class is the constant $1 / K$. For the visual classification task, the features belonging to each class are assigned to one Gaussian component. Formally, denote the features at the deepest layer of the neural network by
+
+$$
+\tilde {x} _ {i} = h _ {\theta} \left(x _ {i}\right), \tag {3}
+$$
+
+in which $h_{\theta}(\cdot)$ represents the feature extraction process in the neural network. As such, the posterior probability of the ground-truth class $y_{i}$ is expressed by
+
+$$
+p \left(y _ {i} \mid \tilde {x} _ {i}\right) = \frac {\mathcal {N} \left(\tilde {x} _ {i} ; \mu_ {y _ {i}}\right)}{\sum_ {k = 1} ^ {K} \mathcal {N} \left(\tilde {x} _ {i} ; \mu_ {k}\right)}, \tag {4}
+$$
+
+in which $\mu_{k}$ is the Gaussian mean of class $k$ and $\mathcal{N}(\cdot)$ is the density function of Gaussian distribution.
+
+The computation in Eq. 4 can be implemented with a layer in the neural network, with the Gaussian means as its trainable parameters. This layer is deployed immediately after the deepest features of the neural network and outputs the class distribution. The entire network can be trained end-to-end and the Gaussian means are updated by gradient descent through back-propagation. Equipped with such a layer, the neural network can learn to not only predict class probabilities but also model the feature distribution.
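With identity covariance and uniform class priors, Eq. 4 reduces to a softmax over negative halved squared distances to the class means, so the layer can be sketched as follows (a minimal numpy sketch; shapes are illustrative):

```python
import numpy as np

def gmm_posterior(feat, means):
    """Posterior of Eq. 4 with identity covariance and uniform priors.

    feat: (C,) deepest-layer feature; means: (K, C) Gaussian means,
    one per class. The Gaussian normalizing constants cancel, leaving
    a softmax over -0.5 * squared distance to each mean.
    """
    sq_dist = ((feat - means) ** 2).sum(axis=1)  # (K,)
    logits = -0.5 * sq_dist
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

In a deep network the `means` array is the trainable parameter of this layer, updated by back-propagation like any other weight.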
+
+# 3.3 Likelihood Regularization for Features
+
+The adversarial training scheme in Eq. 2 uses only adversarial examples for training, without the clean data. In this paper, we leverage both the clean data and the adversarial examples generated by the inner problem in Eq. 2, in equal proportion. We train neural networks equipped with the layer introduced in Section 3.2 to learn the feature distribution of clean data. In addition, we adopt the likelihood regularization [21] to maximize the likelihood of the features of clean data. Formally, the likelihood regularization is defined as the negative log-likelihood, which is given by
+
+$$
+\mathcal {L} _ {l k d} = - \frac {1}{N} \sum_ {i = 1} ^ {N} \log \mathcal {N} \left(h _ {\theta} \left(x _ {i}\right); \mu_ {y _ {i}}\right). \tag {5}
+$$
+
+By ignoring the constant term and constant coefficient, we derive
+
+$$
+\mathcal {L} _ {l k d} = \frac {1}{N} \sum_ {i} \| h _ {\theta} \left(x _ {i}\right) - \mu_ {y _ {i}} \| ^ {2}. \tag {6}
+$$
+
+The likelihood regularization is weighted by a hyperparameter $\lambda > 0$ and added to the cross-entropy loss during training. Hence, the final objective function for the clean data is given by
+
+$$
+\mathcal {L} = \frac {1}{N} \sum_ {\left(x _ {i}, y _ {i}\right) \sim \mathcal {D}} \left(- \boldsymbol {y} _ {i} \log f _ {\theta} \left(x _ {i}\right) + \lambda \| h _ {\theta} \left(x _ {i}\right) - \mu_ {y _ {i}} \| ^ {2}\right). \tag {7}
+$$
+
+We note that this formulation is essentially different from the center loss [22], because the center loss does not consider modeling the feature distribution, whereas the mapping function $f_{\theta}(\cdot)$ here contains the Gaussian Mixture Model and the posterior probability is generated based on the learned feature distribution.
+
+By minimizing Eq. 7, the neural network learns not only to classify but also to model the feature distribution of clean data. Under adversarial training, the clean data and adversarial examples are assigned the same class label, but their feature distributions should differ, since adversarial examples are crafted to resemble classes other than the ground-truth one, and research [6] reveals that they contain highly predictive but non-robust features of other classes. A more reasonable training approach should therefore encourage the features of clean data and adversarial examples to be different. This can be achieved through the regularization term: we propose to minimize the likelihood of adversarial examples during training. Denote the adversarial examples generated by solving the inner maximization problem in Eq. 2 as $\{a_i\}_{i=1}^N$, where $a_i = x_i + \arg \max_{\delta_i} \mathcal{L}_{cls}(x_i + \delta_i, y_i; \theta)$. We minimize the following loss for adversarial examples.
+
+$$
+\mathcal{L}_{adv} = \frac{1}{N} \sum_{i} \left( -\boldsymbol{y}_{i} \log f_{\theta}(a_{i}) - \lambda \left\| h_{\theta}(a_{i}) - \mu_{y_{i}} \right\|^{2} \right). \tag{8}
+$$
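A minimal numpy sketch of the per-batch objectives in Eqs. 7 and 8 may clarify the bi-directional regularization (helper names and interface are ours, not the paper's implementation):

```python
import numpy as np

def atblr_loss(logits, feats, means, labels, lam=0.1, adversarial=False):
    """Batch objective of Eq. 7 (clean) or Eq. 8 (adversarial).

    logits: (N, C) class scores; feats: (N, d) penultimate features
    h_theta(x); means: (C, d) learned Gaussian means mu_k; labels: (N,)
    integer class labels. The likelihood term (Eq. 6) is added for clean
    inputs and subtracted for adversarial ones.
    """
    z = logits - logits.max(axis=1, keepdims=True)    # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    lkd = ((feats - means[labels]) ** 2).sum(axis=1).mean()   # Eq. 6
    sign = -1.0 if adversarial else 1.0
    return ce + sign * lam * lkd
```

Clean batches minimize the loss with `adversarial=False` and adversarial batches with `adversarial=True`; the only difference is the sign of the likelihood term, which pushes the two feature distributions apart.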
+
+The training scheme is illustrated in Fig. 2. It demonstrates two important modifications we make to the original adversarial training scheme. First, the original adversarial training is conducted on a discriminative model, which only considers the output probability distribution and maximizes the target probability. In contrast, our method explicitly models the feature distribution through end-to-end training. Second, we explicitly encourage different feature distributions by optimizing the likelihood regularization towards opposite directions for the clean data and adversarial examples. Our method facilitates a more reasonable feature distribution in the scope of robust classification and improves the classification accuracy of both clean data and various adversarial examples.
+
+
+Fig. 2. Comparison of training schemes. Top: the original adversarial training method [12]. Bottom: the proposed ATBLR method.
+
+# 4 Experiments
+
+To evaluate the robustness and generalization ability of the proposed method, we present experimental results on datasets including MNIST [10], CIFAR-10 and CIFAR-100 [8]. We report the natural accuracy, i.e. the accuracy of clean data, and that of adversarial examples. Following the widely adopted protocol [24,12,2], we consider the adversarial attack methods including FGSM [4], PGD [12] and C&W [3]. We evaluate the robustness of our method under two different threat models.
+
+- White-box attack: the attacker has access to all the information of the target classification model including the model architecture and model weights. The adversarial examples for testing are generated using gradient information of the target model.
+- Black-box attack: the attacker has knowledge of the model architecture but has no access to its model weights. The adversarial examples for testing are generated using the gradient information of a substitute model, which is independently trained using the same architecture and training hyperparameters as the target model.
+
+Experiments are conducted with TensorFlow [1] on an Nvidia TITAN X GPU. All the code and trained models will be made publicly available.
+
+# 4.1 MNIST
+
+We apply the proposed ATBLR method to train robust models for image classification and compare with the baseline method, i.e., adversarial training [12]. The MNIST dataset [10] is a handwritten digit dataset with 10 classes, 60,000 training images and 10,000 testing images. We use data augmentation including mirroring and $28 \times 28$ random cropping after 2-pixel zero padding on each side. The models are tested on different types of adversarial examples, including FGSM [4], PGD [12] with varying steps and restarts, C&W [3] with $\kappa = 0$, and C&W with a high confidence parameter $\kappa = 50$ (denoted C&W-hc).
+
+Table 1. Classification accuracy (%) on the MNIST dataset for clean data and adversarial attacks. The evaluation is conducted for both white-box and black-box attacks.
+
+| Testing Input | Steps | Restarts | Adv. Training [12] | ATBLR (ours) |
+| --- | --- | --- | --- | --- |
+| Clean | - | - | 98.8 | 99.3 |
+| **White-box Attack** | | | | |
+| FGSM | - | - | 95.6 | 97.2 |
+| PGD | 40 | 1 | 93.2 | 94.8 |
+| PGD | 100 | 1 | 91.8 | 94.1 |
+| PGD | 40 | 20 | 90.4 | 93.5 |
+| PGD | 100 | 20 | 89.3 | 92.7 |
+| C&W | 40 | 1 | 94.0 | 95.8 |
+| C&W-hc | 40 | 1 | 93.9 | 96.3 |
+| **Black-box Attack** | | | | |
+| FGSM | - | - | 96.8 | 98.4 |
+| PGD | 40 | 1 | 96.0 | 97.7 |
+| PGD | 100 | 20 | 95.7 | 97.6 |
+| C&W | 40 | 1 | 97.0 | 98.8 |
+| C&W-hc | 40 | 1 | 96.4 | 98.5 |
+
+Implementation Details. Following the practice in [12], we generate PGD attacks of 40 steps during training and use a network consisting of two convolutional layers with 32 and 64 filters respectively, followed by a fully connected layer of size 1024. The input images are divided by 255 so that the pixel range is $[0,1]$. An $l_{\infty}$ norm constraint of $\epsilon = 0.3$ is imposed on the adversarial perturbations, and the step size for the PGD attack is 0.01. The models are trained for 50 epochs using the ADAM [7] optimizer with a learning rate of 0.001. The parameter $\lambda$ in Eqs. 7 and 8, which balances the trade-off between the classification loss and the bi-directional likelihood regularization, is set to 0.1. For the evaluation of black-box attacks, we generate adversarial examples on an independently initialized and trained copy of the target network, following [12].
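The PGD generation loop described above ($\epsilon = 0.3$, step size 0.01, 40 steps) can be sketched as follows. For self-containedness, the sketch uses a softmax-regression surrogate whose input gradient is available in closed form, $(\mathrm{softmax}(xW + b) - y)W^{\top}$; it is an illustrative stand-in for the convolutional network, not the paper's implementation:

```python
import numpy as np

def pgd_attack(x, y_onehot, W, b, eps=0.3, step=0.01, n_steps=40):
    """l_inf PGD on a softmax-regression surrogate.

    x: (N, d) inputs in [0, 1]; W: (d, C) weights; b: (C,) biases.
    Each step ascends the cross-entropy via the sign of the input
    gradient, then projects back onto the eps-ball and valid pixel range.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        z = x_adv @ W + b
        z -= z.max(axis=1, keepdims=True)            # stable softmax
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - y_onehot) @ W.T                  # d(cross-entropy)/dx
        x_adv = x_adv + step * np.sign(grad)         # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project onto eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)             # keep valid pixel range
    return x_adv
```

Against the actual network, the gradient would come from backpropagation rather than the closed form above; the projection steps are unchanged.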
+
+The experimental results on the MNIST dataset are presented in Table 1. The results show that the strongest attack is the PGD attack with multiple restarts. It can be observed that the proposed method not only improves the robustness against adversarial attacks but also improves the accuracy on clean data. Moreover, the performance gain is achieved without introducing any extra trainable parameters, which validates the effectiveness of modeling the difference between the feature distributions of clean data and adversarial examples.
+
+# 4.2 CIFAR
+
+We apply the proposed ATBLR method to train robust classification models on the CIFAR-10 and CIFAR-100 datasets and make comparisons with previous
+
+Table 2. Classification accuracy (\%) on the CIFAR-10 dataset for clean data and adversarial attacks. The PGD attacks for testing are generated with $l_{\infty}$ norm constraint $\epsilon = 8$ and a step size of 2. We re-run the code of PATDA (marked †) with $\epsilon = 8$, since the original paper reports results for $\epsilon = 4$, which generates weaker adversarial attacks.
+
+| Method | Clean | White-box FGSM | White-box PGD-10 | White-box PGD-100 | White-box PGD-1000 | Black-box PGD-1000 |
+| --- | --- | --- | --- | --- | --- | --- |
+| **Network: ResNet-32** | | | | | | |
+| Natural Training | 92.73 | 27.54 | 0.32 | 0.11 | 0.00 | 3.03 |
+| Adv. Training [12] | 79.37 | 51.72 | 44.92 | 43.44 | 43.36 | 60.22 |
+| IAAT [2] | 83.26 | 52.05 | 44.26 | 42.13 | 42.51 | 60.26 |
+| PATDA† [15] | 83.40 | 53.81 | 46.59 | 45.27 | 44.01 | 61.79 |
+| FD [24] | 84.24 | 52.81 | 45.64 | 44.60 | 44.21 | 62.84 |
+| ATBLR (ours) | 86.32 | 58.60 | 50.18 | 48.56 | 47.88 | 64.38 |
+| **Network: WideResNet-32** | | | | | | |
+| Natural Training | 95.20 | 32.73 | 2.17 | 0.35 | 0.00 | 4.29 |
+| Adv. Training [12] | 87.30 | 56.13 | 46.26 | 45.14 | 44.87 | 61.07 |
+| IAAT [2] | 91.34 | 57.08 | 48.53 | 46.50 | 46.54 | 58.20 |
+| PATDA† [15] | 84.63 | 57.79 | 49.85 | 48.73 | 48.04 | 58.53 |
+| FD [24] | 86.28 | 57.54 | 49.26 | 46.97 | 46.75 | 59.31 |
+| ATBLR (ours) | 92.12 | 59.69 | 52.11 | 51.17 | 50.63 | 62.89 |
+
+state-of-the-art methods. The CIFAR-10 dataset [8] consists of $32 \times 32$ pixel color images from 10 classes, with 50,000 training images and 10,000 testing images. The CIFAR-100 dataset [8] has 100 classes containing 50,000 training images and 10,000 testing images. We use the typical data augmentation including mirroring and $32 \times 32$ random cropping after 4-pixel reflection padding on each side. We use the ResNet-32 [5] and WideResNet-32 [25] architectures following Madry et al. [12] and Zhang et al. [26]. Our method is compared with natural training and previous state-of-the-art training approaches designed to improve the robustness of classification models:
+
+- Natural Training: Training with cross-entropy loss on the clean training data.
+- Adversarial Training (Adv. Training) [12]: Training on the clean training data and the adversarial examples generated during training.
+- Instance Adaptive Adversarial Training (IAAT) [2]: Training that enforces the sample-specific perturbation margin around every training sample.
+- PGD-Adversarial Training with Domain Adaptation (PATDA) [15]: Adversarial training combined with domain adaptation algorithms.
+- Feature Denoising (FD) [24]: Training that combines adversarial training with a network architecture using non-local filters to remove the feature-space noise caused by adversarial examples.
+
+Implementation Details. During adversarial training, the adversarial examples are generated by PGD-10 attacks, i.e., 10 steps of PGD are conducted on the clean data in each training iteration. The step size for the PGD attack is set to 2 (out of 255), and an $l_{\infty}$ norm constraint of $\epsilon = 8$ is imposed on the adversarial perturbations. The models are trained for 200 epochs using the ADAM [7] optimizer with a learning rate of 0.001 and a batch size of 128. The parameter $\lambda$, which balances the trade-off between the classification loss and the bi-directional likelihood regularization, is set to 0.02; we present more quantitative results in Section 4.4 to study its influence. For evaluation in the white-box setting, the models are tested on (1) PGD-10 attacks with 5 random restarts, (2) PGD-100 attacks with 5 random restarts and (3) PGD-1000 attacks with 2 random restarts. For evaluation in the black-box setting, following the experimental setup of [2], the PGD-1000 attack with 2 random restarts is adopted.
+
+The experimental results on the CIFAR-10 dataset are presented in Table 2. We observe that the proposed method improves the classification accuracy on both clean data and adversarial examples. Compared with the original adversarial training, our method achieves a large accuracy gain by considering the feature distribution differences and introducing the bi-directional likelihood regularization during training. Moreover, our method performs favorably against Feature Denoising (FD) [24], the previous state-of-the-art method. Switching from ResNet-32 to its $10\times$ wider variant increases classification performance due to the larger model capacity, and our method improves robustness for both the simple and the complex model.
+
+We present the experimental results on the CIFAR-100 dataset in Table 3. As shown by the results, the CIFAR-100 dataset is more challenging than the CIFAR-10 dataset. Nevertheless, we observe that the proposed ATBLR method consistently increases the robustness against adversarial examples and performs favorably against previous state-of-the-art methods.
+
+# 4.3 Evolution of the Likelihood Regularization
+
+During training, we optimize different objective functions for clean data and adversarial examples, given by Eqs. 7 and 8, respectively. Here we investigate how the value of $\mathcal{L}_{lkd}$ evolves as training progresses, to verify that the likelihoods of clean data and adversarial examples are indeed optimized to be different. We conduct the experiments on the CIFAR-10 dataset with the same network and training schemes as in Section 4.2. In each input batch, we evaluate and record the value of the likelihood regularization according to Eq. 6 for the clean data and the adversarial examples, respectively. We compare two models: the first is trained with the proposed ATBLR method, and the second is trained without optimizing $\mathcal{L}_{lkd}$.
+
+The curves of $\mathcal{L}_{lkd}$ are plotted in Fig. 3. Note that a larger $\mathcal{L}_{lkd}$ indicates a smaller likelihood, since $\mathcal{L}_{lkd}$ is essentially the negative log-likelihood. In the left figure, as training converges, the $\mathcal{L}_{lkd}$ of clean data (blue) is low, which means the network learns to model the feature distribution of the clean data, whereas the $\mathcal{L}_{lkd}$ of adversarial examples (orange) is large, nearly twice that of the clean data. In contrast, the right figure shows that the $\mathcal{L}_{lkd}$ values of the clean data and adversarial examples are almost the
+
+Table 3. Classification accuracy (\%) on the CIFAR-100 dataset for clean data and adversarial attacks. The PGD attacks for testing are generated with $l_{\infty}$ norm constraint $\epsilon = 8$ and a step size of 2.
+
+| Method | Clean | White-box FGSM | White-box PGD-10 | White-box PGD-100 | White-box PGD-1000 | Black-box PGD-1000 |
+| --- | --- | --- | --- | --- | --- | --- |
+| **Network: ResNet-32** | | | | | | |
+| Natural Training | 74.88 | 4.61 | 0.02 | 0.01 | 0.00 | 1.81 |
+| Adv. Training [12] | 55.11 | 26.25 | 20.69 | 19.68 | 19.91 | 35.57 |
+| IAAT [2] | 63.90 | 27.13 | 18.50 | 17.17 | 17.12 | 35.74 |
+| PATDA [15] | 59.40 | 27.33 | 20.25 | 19.45 | 19.08 | 35.81 |
+| FD [24] | 65.13 | 26.96 | 21.14 | 20.39 | 20.06 | 36.28 |
+| ATBLR (ours) | 67.34 | 28.55 | 21.80 | 21.54 | 20.96 | 37.79 |
+| **Network: WideResNet-32** | | | | | | |
+| Natural Training | 79.91 | 5.29 | 0.01 | 0.00 | 0.00 | 3.22 |
+| Adv. Training [12] | 59.58 | 28.98 | 26.24 | 25.47 | 25.49 | 38.10 |
+| IAAT [2] | 68.80 | 29.30 | 26.17 | 24.22 | 24.36 | 35.18 |
+| PATDA [15] | 64.24 | 28.35 | 24.51 | 23.45 | 23.08 | 35.81 |
+| FD [24] | 67.13 | 29.54 | 27.15 | 25.69 | 25.14 | 37.95 |
+| ATBLR (ours) | 70.39 | 30.85 | 29.49 | 27.53 | 27.15 | 39.24 |
+
+same during training. The comparison verifies that the proposed method effectively encourages the features of clean data and adversarial examples to follow different distributions. The quantitative evaluation in Section 4.2 validates that introducing such a regularization during training is effective in improving the accuracy of both clean data and adversarial examples.
+
+In addition, we observe in the left figure that the two curves do not separate until about 700 iterations. This phenomenon can be explained as follows. The model parameters are randomly initialized, so the likelihood of both types of inputs is low at the start: the features are far from their corresponding Gaussian means. In the early training stage, the cross-entropy loss in Eqs. 7 and 8 dominates because $\lambda$ is small; it drives the features closer to the corresponding Gaussian means, decreasing $\mathcal{L}_{lkd}$ for both types of inputs. As the cross-entropy loss becomes smaller, the likelihood regularization has a larger impact. After a certain equilibrium point, at about 700 iterations in this experiment, the likelihood regularization term in Eq. 8 keeps $\mathcal{L}_{lkd}$ from decreasing further for adversarial examples. Finally, training converges with the $\mathcal{L}_{lkd}$ value of adversarial examples larger than that of clean data, meaning the clean data follow the learned GM distribution better while the adversarial examples follow a different distribution.
+
+
+Fig. 3. Curves of the likelihood regularization for clean data and adversarial examples. Left: the model is trained using the proposed ATBLR method. Right: the model is trained without optimizing $\mathcal{L}_{lkd}$, but we record its value during training. The experiment is conducted on the CIFAR-10 dataset with ResNet-32. Only the first 60 epochs are shown, since the changes in the remaining 140 epochs are not obvious.
+
+
+
+# 4.4 Hyper-parameter Analysis
+
+We study the effect of choosing different values of $\lambda$ in the proposed ATBLR method and compare the performance. We conduct the experiments on the CIFAR-10 and CIFAR-100 datasets using the ResNet-32 network.
+
+The experimental results are presented in Fig. 4. The PGD attack is stronger than FGSM. Nevertheless, our method improves the classification performance for the different types of attacks as well as the clean data. Comparing the results of $\lambda = 0$ with the others, we conclude that the ATBLR method improves classification performance consistently across different values of the hyper-parameter $\lambda$. The results also show that setting $\lambda$ too large is disadvantageous. This is reasonable considering that $\lambda$ balances the classification loss and the likelihood regularization: too large a $\lambda$ lets the likelihood regularization term dominate, which damages classification performance because the features of all the clean data of a class then tend to collapse into a single point. Nevertheless, our method makes steady improvements for $\lambda$ in a wide range. We choose $\lambda = 0.02$ for the experiments in Section 4.2 based on this hyper-parameter study.
+
+# 4.5 Adversaries for Training
+
+In the previous experiments, following other works, we select PGD attacks with 10 steps as the adversarial examples for training. We investigate the effect of other alternatives and present results on the CIFAR-10 dataset in Table 4. The results show that the performance gain achieved by our method becomes larger when stronger attacks are used for training. For example, when the training adversaries are switched from PGD-10 to the stronger PGD-100, the performance gain on clean data increases from $86.32 - 79.37 = 6.95$ to $86.49 - 77.45 = 9.04$, and similarly in the other columns. This is expected because stronger adversarial examples bear greater similarity to another class, so it is more beneficial to encourage them to follow a feature distribution different from that of the clean data of the original class.
+
+
+
+
+Fig. 4. Hyper-parameter study for $\lambda$ on the CIFAR-10 dataset (top) and the CIFAR-100 dataset (bottom). $\lambda = 0$ denotes the original adversarial training.
+
+Table 4. Classification accuracy (%) of the proposed ATBLR method / the original adversarial training when trained with different adversaries.
+
+| Model | Clean | FGSM | PGD-10 | PGD-100 |
+| --- | --- | --- | --- | --- |
+| Training w/ FGSM | 89.83/87.40 | 91.87/90.93 | 1.03/0.00 | 0.14/0.00 |
+| Training w/ PGD-10 | 86.32/79.37 | 58.60/51.72 | 50.18/44.92 | 48.56/43.44 |
+| Training w/ PGD-100 | 86.49/77.45 | 58.74/51.58 | 51.25/45.06 | 52.37/45.71 |
+
+# 5 Conclusion
+
+In this paper, we propose a novel method for training robust classification models against adversarial attacks. In contrast to the previous adversarial training method, which optimizes only the posterior class distribution, our method learns the feature distribution of clean data through end-to-end training. Furthermore, the intrinsic difference between the feature distributions of clean data and adversarial examples is preserved by optimizing the likelihood regularization in opposite directions for the two types of inputs. Moreover, our method introduces no extra trainable parameters. Extensive experiments demonstrate that our method performs favorably against previous state-of-the-art methods in terms of classification accuracy on both clean data and various adversarial examples.
+
+Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61673234 and the program of China Scholarships Council (No. 201906210354). M.-H. Yang is supported in part by NSF CAREER Grant 1149783.
+
+# References
+
+1. Abadi, M., et al.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), https://www.tensorflow.org/, software available from tensorflow.org 8
+2. Balaji, Y., Goldstein, T., Hoffman, J.: Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv preprint arXiv:1910.08051 (2019) 8, 10, 11, 12
+3. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (SP) (2017) 4, 5, 8
+4. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014) 1, 2, 4, 5, 8
+5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) 2, 10
+6. Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., Madry, A.: Adversarial examples are not bugs, they are features. In: NeurIPS (2019) 2, 7
+7. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 9, 11
+8. Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Tech. rep., University of Toronto (2009) 4, 8, 10
+9. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016) 4, 5
+10. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE (1998) 4, 8
+11. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016) 1
+12. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018) 2, 4, 5, 8, 9, 10, 12
+13. Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings. In: IEEE European Symposium on Security and Privacy (EuroS&P) (2016) 1
+14. Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: IEEE Symposium on Security and Privacy (SP) (2016) 5
+15. Song, C., He, K., Wang, L., Hopcroft, J.E.: Improving the generalization of adversarial training with domain adaptation. In: ICLR (2019) 5, 10, 12
+16. Song, D., Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Tramer, F., Prakash, A., Kohno, T.: Physical adversarial examples for object detectors. In: 12th USENIX Workshop on Offensive Technologies (WOOT) (2018) 1
+17. Song, Y., Kim, T., Nowozin, S., Ermon, S., Kushman, N.: Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766 (2017) 5
+18. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013) 1, 4, 5
+19. Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 (2017) 5
+20. Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453 (2017) 1
+
+21. Wan, W., Zhong, Y., Li, T., Chen, J.: Rethinking feature distribution for loss functions in image classification. In: CVPR (2018) 3, 6
+22. Wen, Y., Zhang, K., Li, Z., Qiao, Y.: A discriminative feature learning approach for deep face recognition. In: ECCV (2016) 7
+23. Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. ICML (2018) 5
+24. Xie, C., Wu, Y., Maaten, L.v.d., Yuille, A.L., He, K.: Feature denoising for improving adversarial robustness. In: CVPR (2019) 5, 8, 10, 11, 12
+25. Zagoruyko, S., Komodakis, N.: Wide residual networks. arXiv preprint arXiv:1605.07146 (2016) 10
+26. Zhang, H., Yu, Y., Jiao, J., Xing, E.P., Ghaoui, L.E., Jordan, M.I.: Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573 (2019) 10
\ No newline at end of file
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/images.zip b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2d7d1e5bbbf0e8e16a6214c389d5d7df795da26d
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a6808272db0a44ff39ee46b0195696869b39ea7b9985eb5b19d8a4c295fa04b
+size 395094
diff --git a/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/layout.json b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea8e46a989fce25abc2a85edd5bee8c56db1c0ba
--- /dev/null
+++ b/adversarialtrainingwithbidirectionallikelihoodregularizationforvisualclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d57254247daf4e1df7ed988e5d4a930b615a3c3dda3e0836f741aff337d9f37a
+size 340878
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_content_list.json b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1f2380d0ce25633d51ba2070209c526fea9f2b75
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:809a2eac40bf46c0c82c8e99e701e7e14dfae71b94ca178a5b0c000d97423077
+size 74152
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_model.json b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a99ed794eeee3c99d423a8582e4451dee50b7ab9
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb84e21867fbd63180163de2304381141fd5f8d73cdb555e4196c7ac5ae785e3
+size 89841
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_origin.pdf b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a27737bd9144abdb91c7a25696a934d9f2f08666
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/4675c024-c3f6-4ce2-b97b-71f6b503f065_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec6a2825942128fffffd10ed1865d4c7ec8df012d30d891fcda95986f45d1e37
+size 9319260
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/full.md b/adversarialtshirtevadingpersondetectorsinaphysicalworld/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc2388936b6be9f2bee1e3d16b50152b71a3d856
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/full.md
@@ -0,0 +1,262 @@
+# Adversarial T-shirt! Evading Person Detectors in A Physical World
+
+Kaidi Xu $^{1}$ Gaoyuan Zhang $^{2}$ Sijia Liu $^{2}$ Quanfu Fan $^{2}$ Mengshu Sun $^{1}$ Hongge Chen $^{3}$ Pin-Yu Chen $^{2}$ Yanzhi Wang $^{1}$ Xue Lin $^{1}$
+
+$^{1}$ Northeastern University, USA
+ $^{2}$ MIT-IBM Watson AI Lab, IBM Research, USA
+
+$^{3}$ Massachusetts Institute of Technology, USA
+
+Abstract. It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. The so-called physical adversarial examples deceive DNN-based decision makers by attaching adversarial patches to real objects. However, most of the existing works on physical adversarial attacks focus on static objects such as glass frames, stop signs and images attached to cardboard. In this work, we propose Adversarial $T$ -shirts, a robust physical adversarial example for evading person detectors even if it could undergo non-rigid deformation due to a moving person's pose changes. To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts. We show that the proposed method achieves $74\%$ and $57\%$ attack success rates in the digital and physical worlds respectively against YOLOv2. In contrast, the state-of-the-art physical attack method to fool a person detector only achieves $18\%$ attack success rate. Furthermore, by leveraging min-max optimization, we extend our method to the ensemble attack setting against two object detectors YOLO-v2 and Faster R-CNN simultaneously.
+
+Keywords: Physical adversarial attack; object detection; deep learning
+
+# 1 Introduction
+
+The vulnerability of deep neural networks (DNNs) against adversarial attacks (namely, perturbed inputs deceiving DNNs) has been found in applications spanning from image classification to speech recognition [33,21,34,37,6,32,2]. Early works studied adversarial examples only in the digital space. Recently, some works showed that it is possible to create adversarial perturbations on physical objects and fool DNN-based decision makers under a variety of real-world conditions [28,14,1,15,25,7,30,5,20]. The design of physical adversarial attacks helps to evaluate the robustness of DNNs deployed in real-life systems, e.g., autonomous vehicles and surveillance systems. However, most of the studied physical adversarial attacks suffer from two limitations: a) the physical objects are usually considered to be static, and b) the possible deformation of the adversarial pattern
+
+
+Fig. 1: Evaluation of the effectiveness of adversarial T-shirts to evade person detection by YOLOv2. Each row corresponds to a specific attack method while each column except the last one shows an individual frame in a video. The last column shows the adversarial patterns applied to the T-shirts. At each frame, there are two persons, one of whom wears the adversarial T-shirt. First row: digital adversarial T-shirt generated using TPS. Second row: physical adversarial T-shirt generated using TPS. Third row: physical adversarial T-shirt generated using affine transformation (namely, in the absence of TPS). Fourth row: T-shirt with physical adversarial patch considered in [30] to evade person detectors.
+
+attached to a moving object (e.g., due to pose change of a moving person) is commonly neglected. In this paper, we propose a new type of physical adversarial attack, adversarial $T$ -shirt, to evade DNN-based person detectors when a person wears the adversarial T-shirt; see the second row of Fig. 1 for illustrative examples.
+
+Related work Most of the existing physical adversarial attacks are generated against image classifiers and object detectors. In [28], a face recognition system is fooled by a real eyeglass frame designed under a crafted adversarial pattern. In [14], a stop sign is misclassified by adding black or white stickers on it against the image classification system. In [20], an image classifier is fooled by placing a crafted sticker on the lens of a camera. In [1], the so-called Expectation over Transformation (EoT) framework was proposed to synthesize adversarial examples robust to a set of physical transformations such as rotation, translation, contrast, brightness, and random noise. Moreover, crafted adversarial examples on rigid objects can be designed in camouflage style [35] or natural style [11] so that they appear legitimate to human observers in the real world. Compared to attacking image classifiers, generating physical adversarial attacks against object detectors is more involved. For example, the adversary is required to mislead the bounding box detector of an object when attacking YOLOv2 [26] and SSD [24]. A well-known success of such attacks in the physical world is the generation of an adversarial stop sign [15], which deceives state-of-the-art object detectors such as YOLOv2 and Faster R-CNN [27].
+
+The most relevant approach to ours is the work of [30], which demonstrates that a person can evade a detector by holding a cardboard with an adversarial patch. However, such a physical attack restricts the adversarial patch to a rigid carrier (namely, cardboard), unlike our setting, where the generated adversarial pattern is directly printed on a T-shirt. We show that the attack proposed by [30] becomes ineffective when the adversarial patch is attached to a T-shirt (rather than a cardboard) and worn by a moving person (see the fourth row of Fig. 1). On the technical side, different from [30], we propose a thin plate spline (TPS) based transformer to model the deformation of non-rigid objects, and develop an ensemble physical attack that fools the object detectors YOLOv2 and Faster R-CNN simultaneously. We highlight that our proposed adversarial T-shirt is not just a T-shirt with a printed adversarial patch for clothing fashion; it is a physical adversarial wearable designed for evading person detectors in the real world.
+
+Our work is also motivated by the importance of person detection in intelligent surveillance. DNN-based surveillance systems have significantly advanced the field of object detection [18,17]. Efficient object detectors such as Faster R-CNN [27], SSD [24], and YOLOv2 [26] have been deployed for human detection. Thus, one may wonder whether there exists a security risk to intelligent surveillance systems caused by adversarial human wearables, e.g., adversarial T-shirts. However, paralyzing a person detector in the physical world faces substantially more challenges, such as low resolution, pose changes and occlusion. The success of our adversarial T-shirt against real-time person detectors offers new insights for designing practical physical-world adversarial human wearables.
+
+Contributions We summarize our contributions as follows:
+
+- We develop a TPS-based transformer to model the temporal deformation of an adversarial T-shirt caused by pose changes of a moving person. We also show the importance of such non-rigid transformation to ensuring the effectiveness of adversarial T-shirts in the physical world.
+- We propose a general optimization framework for design of adversarial T-shirts in both single-detector and multiple-detector settings.
+- We conduct experiments in both digital and physical worlds and show that the proposed adversarial T-shirt achieves $74\%$ and $57\%$ attack success rates respectively when attacking YOLOv2. By contrast, the physical adversarial patch [30] printed on a T-shirt only achieves $18\%$ attack success rate. Some of our results are highlighted in Fig. 1.
+
+# 2 Modeling Deformation of A Moving Object by Thin Plate Spline Mapping
+
+In this section, we begin by reviewing some existing transformations required in the design of physical adversarial examples. We then elaborate on the Thin Plate Spline (TPS) mapping we adopt in this work to model the possible deformation encountered by a moving and non-rigid object.
+
+Let $\mathbf{x}$ be an original image (or a video frame), and $t(\cdot)$ be the physical transformer. The transformed image $\mathbf{z}$ under $t$ is given by
+
+$$
+\mathbf {z} = t (\mathbf {x}). \tag {1}
+$$
+
+Existing transformations. In [1], the parametric transformers include scaling, translation, rotation, brightness and additive Gaussian noise; see details in [1, Appendix D]. In [23], the geometry and lighting transformations are studied via parametric models. Other transformations including perspective transformation, brightness adjustment, resampling (or image resizing), smoothing and saturation are considered in [29,9]. All the existing transformations are included in our library of physical transformations. However, they are not sufficient to model the cloth deformation caused by pose change of a moving person. For example, the second and third rows of Fig. 1 show that adversarial T-shirts designed against only existing physical transformations yield low attack success rates.
+
+
+Fig. 2: Generation of TPS. (a) and (b): Two frames with checkerboard detection results. (c): Anchor point matching process between the two frames. (d): Real-world cloth deformation in (b) versus the synthesized TPS transformation (right plot).
+
+TPS transformation for cloth deformation. A person's movement can result in significantly and constantly changing wrinkles (i.e., deformations) in her clothes. This makes it challenging to develop an effective adversarial T-shirt in the real world. To circumvent this challenge, we employ TPS mapping [4] to model the cloth deformation caused by human body movement. TPS has been widely used as the non-rigid transformation model in image alignment and shape matching [19]. It consists of an affine component and a non-affine warping component. We will show that the non-linear warping part in TPS can provide an effective means of modeling cloth deformation for learning adversarial patterns of non-rigid objects.
+
+TPS learns a parametric deformation mapping from an original image $\mathbf{x}$ to a target image $\mathbf{z}$ through a set of control points with given positions. Let $\mathbf{p} := (\phi, \psi)$ denote the 2D location of an image pixel. The deformation from $\mathbf{x}$ to $\mathbf{z}$ is then characterized by the displacement of every pixel, namely, how a pixel at $\mathbf{p}^{(x)}$ on image $\mathbf{x}$ changes to the pixel on image $\mathbf{z}$ at $\mathbf{p}^{(z)}$ , where $\phi^{(z)} = \phi^{(x)} + \Delta_{\phi}$ and $\psi^{(z)} = \psi^{(x)} + \Delta_{\psi}$ , and $\Delta_{\phi}$ and $\Delta_{\psi}$ denote the pixel displacement on image $\mathbf{x}$ along $\phi$ direction and $\psi$ direction, respectively.
+
+Given a set of $n$ control points with locations $\{\hat{\mathbf{p}}_i^{(x)}\coloneqq (\hat{\phi}_i^{(x)},\hat{\psi}_i^{(x)})\}_{i = 1}^n$ on image $\mathbf{x}$ , TPS provides a parametric model of pixel displacement when mapping $\mathbf{p}^{(x)}$ to $\mathbf{p}^{(z)}$ [8]
+
+$$
+\Delta (\mathbf {p} ^ {(x)}; \boldsymbol {\theta}) = a _ {0} + a _ {1} \phi^ {(x)} + a _ {2} \psi^ {(x)} + \sum_ {i = 1} ^ {n} c _ {i} U \left(\| \hat {\mathbf {p}} _ {i} ^ {(x)} - \mathbf {p} ^ {(x)} \| _ {2}\right), \tag {2}
+$$
+
+where $U(r) = r^2\log (r)$ and $\pmb{\theta} = [\mathbf{c};\mathbf{a}]$ are the TPS parameters, and $\varDelta(\mathbf{p}^{(x)};\pmb{\theta})$ represents the displacement along either $\phi$ or $\psi$ direction.
+
+Moreover, given the locations of control points on the transformed image $\mathbf{z}$ (namely, $\{\hat{\mathbf{p}}_i^{(z)}\}_{i = 1}^n$ ), TPS resorts to a regression problem to determine the parameters $\pmb{\theta}$ in (2). The regression objective is to minimize the distance between $\{\varDelta_{\phi}(\mathbf{p}_i^{(x)};\pmb {\theta}_\phi)\}_{i = 1}^n$ and $\{\hat{\varDelta}_{\phi ,i}:= \hat{\phi}_i^{(z)} - \hat{\phi}_i^{(x)}\}_{i = 1}^n$ along the $\phi$ direction, and the distance between $\{\varDelta_{\psi}(\mathbf{p}_i^{(x)};\pmb {\theta}_\psi)\}_{i = 1}^n$ and $\{\hat{\varDelta}_{\psi ,i}:= \hat{\psi}_i^{(z)} - \hat{\psi}_i^{(x)}\}_{i = 1}^n$ along the $\psi$ direction, respectively. Thus, TPS (2) is applied to coordinate $\phi$ and $\psi$ separately (corresponding to parameters $\pmb{\theta}_{\phi}$ and $\pmb{\theta}_{\psi}$ ). The regression problem can be solved by the following linear system of equations [10]
+
+$$
+\left[ \begin{array}{l l} \mathbf {K} & \mathbf {P} \\ \mathbf {P} ^ {T} & \mathbf {0} _ {3 \times 3} \end{array} \right] \boldsymbol {\theta} _ {\phi} = \left[ \begin{array}{l} \hat {\boldsymbol {\Delta}} _ {\phi} \\ \mathbf {0} _ {3 \times 1} \end{array} \right], \left[ \begin{array}{l l} \mathbf {K} & \mathbf {P} \\ \mathbf {P} ^ {T} & \mathbf {0} _ {3 \times 3} \end{array} \right] \boldsymbol {\theta} _ {\psi} = \left[ \begin{array}{l} \hat {\boldsymbol {\Delta}} _ {\psi} \\ \mathbf {0} _ {3 \times 1} \end{array} \right], \qquad (3)
+$$
+
+where the $(i,j)$ th element of $\mathbf{K} \in \mathbb{R}^{n \times n}$ is given by $K_{ij} = U(\| \hat{\mathbf{p}}_i^{(x)} - \hat{\mathbf{p}}_j^{(x)} \|_2)$ , the $i$ th row of $\mathbf{P} \in \mathbb{R}^{n \times 3}$ is given by $P_i = [1, \hat{\phi}_i^{(x)}, \hat{\psi}_i^{(x)}]$ , and the $i$ th elements of $\hat{\Delta}_{\phi} \in \mathbb{R}^n$ and $\hat{\Delta}_{\psi} \in \mathbb{R}^n$ are given by $\hat{\Delta}_{\phi,i}$ and $\hat{\Delta}_{\psi,i}$ , respectively.
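
As a concrete illustration, the fit in (3) and the displacement model (2) take only a few lines of NumPy. The sketch below is ours, not the paper's code (the function names and the small `eps` guard inside the logarithm are assumptions); it solves for $\boldsymbol{\theta}_{\phi}$ and $\boldsymbol{\theta}_{\psi}$ jointly:

```python
import numpy as np

def tps_fit(src_pts, dst_pts, eps=1e-12):
    """Solve the linear system (3) for theta_phi and theta_psi at once.

    src_pts, dst_pts: (n, 2) arrays of matched control points on x and z."""
    n = src_pts.shape[0]
    r = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = np.where(r > 0, r**2 * np.log(r + eps), 0.0)   # U(r) = r^2 log r, U(0) = 0
    P = np.hstack([np.ones((n, 1)), src_pts])          # rows [1, phi_i, psi_i]
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.vstack([dst_pts - src_pts, np.zeros((3, 2))])
    theta = np.linalg.solve(L, rhs)                    # columns: theta_phi, theta_psi
    return theta[:, 0], theta[:, 1]

def tps_displace(pts, src_pts, theta, eps=1e-12):
    """Evaluate the displacement model (2) at query points pts (m, 2)."""
    r = np.linalg.norm(pts[:, None, :] - src_pts[None, :, :], axis=-1)
    U = np.where(r > 0, r**2 * np.log(r + eps), 0.0)
    c, a = theta[:-3], theta[-3:]                      # theta = [c; a]
    return a[0] + pts @ a[1:] + U @ c
```

Since TPS is an interpolant, evaluating the fitted model at the control points reproduces the prescribed displacements $\hat{\Delta}_{\phi,i}$ and $\hat{\Delta}_{\psi,i}$ exactly.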
+
+Non-trivial application of TPS The difficulty of implementing TPS for the design of adversarial T-shirts arises from two aspects: 1) how to determine the set of control points, and 2) how to obtain the positions $\{\hat{\mathbf{p}}_i^{(x)}\}$ and $\{\hat{\mathbf{p}}_i^{(z)}\}$ of control points aligned between a pair of video frames $\mathbf{x}$ and $\mathbf{z}$.
+
+To address the first question, we print a checkerboard on a T-shirt and use a camera calibration algorithm [16,36] to detect the points at the intersections between every two checkerboard grid regions. These successfully detected points are taken as the control points of one frame. Fig. 2-(a) shows the checkerboard-printed T-shirt, together with the detected intersection points. Since TPS requires a set of control points aligned between two frames, the second question on point matching arises. The challenge lies in the fact that the control points detected at one video frame can differ from those at another (e.g., due to missed detections). To address this issue, we adopt a 2-stage procedure, coordinate system alignment followed by point alignment, where the former refers to conducting a perspective transformation from one frame to the other, and the latter finds the matched points at two frames through the nearest-neighbor method. We provide an illustrative example in Fig. 2-(c). We refer readers to Appendix A for more details about our method.
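
The point-alignment stage can be sketched as a mutual nearest-neighbor search, assuming the perspective (coordinate-system) alignment has already been applied; `match_control_points` and the `max_dist` threshold are illustrative names, not the paper's implementation:

```python
import numpy as np

def match_control_points(pts_a, pts_b, max_dist=0.05):
    """Match two detected control-point sets via mutual nearest neighbors.

    pts_a, pts_b: (na, 2) and (nb, 2) arrays, assumed already brought into a
    common coordinate system by a perspective warp of frame A onto frame B.
    Returns index pairs (i, j) that are mutual nearest neighbors within max_dist;
    points with no reliable partner (e.g., missed detections) are dropped."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)  # (na, nb)
    nn_ab = d.argmin(axis=1)          # nearest point in B for each point in A
    nn_ba = d.argmin(axis=0)          # nearest point in A for each point in B
    return [(i, j) for i, j in enumerate(nn_ab)
            if nn_ba[j] == i and d[i, j] <= max_dist]
```

The mutual check discards one-sided matches, which is what makes the procedure robust to points detected in only one of the two frames.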
+
+# 3 Generation of Adversarial T-shirt: An Optimization Perspective
+
+In this section, we begin by formalizing the problem of adversarial T-shirt and introducing notations used in our setup. We then propose to design a universal perturbation used in our adversarial T-shirt to deceive a single object detector. We lastly propose a min-max (robust) optimization framework to design the universal adversarial patch against multiple object detectors.
+
+Let $\mathcal{D} := \{\mathbf{x}_i\}_{i=1}^M$ denote $M$ video frames extracted from one or multiple given videos, where $\mathbf{x}_i \in \mathbb{R}^d$ denotes the $i$ th frame. Let $\pmb{\delta} \in \mathbb{R}^d$ denote the universal adversarial perturbation applied to $\mathcal{D}$ . The adversarial T-shirt is then characterized by $M_{c,i} \circ \pmb{\delta}$ , where $M_{c,i} \in \{0,1\}^d$ is a bounding box encoding the position of the cloth region to be perturbed at the $i$ th frame, and $\circ$ denotes element-wise product. The goal of adversarial T-shirt is to design $\pmb{\delta}$ such that the perturbed frames of $\mathcal{D}$ are mis-detected by object detectors.
+
+Fooling a single object detector. We generalize the Expectation over Transformation (EoT) method in [3] for the design of adversarial T-shirts. Note that, different from the conventional EoT, a composition of transformers is required for generating an adversarial T-shirt. For example, a perspective transformation on the bounding box of the T-shirt is composed with a TPS transformation applied to the cloth region. Let us begin by considering two video frames, an anchor image $\mathbf{x}_0$ (e.g., the first frame in the video) and a target image $\mathbf{x}_i$ for $i \in [M]$. Given the bounding boxes of the person $(M_{p,0} \in \{0,1\}^d)$ and the T-shirt $(M_{c,0} \in \{0,1\}^d)$ at $\mathbf{x}_0$, we apply the perspective transformation from $\mathbf{x}_0$ to $\mathbf{x}_i$ to obtain the bounding boxes $M_{p,i}$ and $M_{c,i}$ at image $\mathbf{x}_i$. In the absence
+
+
+Fig. 3: Overview of the pipeline to generate adversarial T-shirts. First, video frames containing a person who wears a T-shirt with a printed checkerboard pattern are used as training data. Second, the universal adversarial perturbation (to be designed) is applied to the cloth region, taking into account different kinds of transformations. Third, the adversarial perturbation is optimized through problem (6) by minimizing the largest bounding-box probability belonging to the 'person' class. The optimization procedure is performed as a closed loop through back-propagation.
+
+of physical transformations, the perturbed image $\mathbf{x}_i^{\prime}$ with respect to (w.r.t.) $\mathbf{x}_i$ is given by
+
+$$
+\mathbf {x} _ {i} ^ {\prime} = \underbrace {\left(\mathbf {1} - M _ {p , i}\right) \circ \mathbf {x} _ {i}} _ {\mathrm {A}} + \underbrace {M _ {p , i} \circ \mathbf {x} _ {i}} _ {\mathrm {B}} - \underbrace {M _ {c , i} \circ \mathbf {x} _ {i}} _ {\mathrm {C}} + \underbrace {M _ {c , i} \circ \boldsymbol {\delta}} _ {\mathrm {D}}, \tag {4}
+$$
+
+where the term $A$ denotes the background region outside the bounding box of the person, the term $B$ is the person-bounded region, the term $C$ erases the pixel values within the bounding box of the T-shirt, and the term $D$ is the newly introduced additive perturbation. In (4), the prior knowledge on $M_{p,i}$ and $M_{c,i}$ is acquired by a person detector and manual annotation, respectively. Without taking physical transformations into account, Eq. (4) simply reduces to the conventional formulation of an adversarial example, $(1 - M_{c,i}) \circ \mathbf{x}_i + M_{c,i} \circ \boldsymbol{\delta}$.
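
A minimal sketch of the composition in (4), treating images and masks as arrays (the function name is ours):

```python
import numpy as np

def compose_perturbed_frame(x, M_p, M_c, delta):
    """Eq. (4) without physical transforms: keep the background (term A), keep
    the person region (term B), erase the T-shirt pixels (term C), and paste
    the universal perturbation into the T-shirt region (term D)."""
    A = (1 - M_p) * x     # background outside the person's bounding box
    B = M_p * x           # person-bounded region
    C = M_c * x           # pixels to be erased inside the T-shirt bounding box
    D = M_c * delta       # additive adversarial perturbation
    return A + B - C + D
```

Because $A + B = \mathbf{x}_i$, the result equals the original frame outside the T-shirt mask and equals $\boldsymbol{\delta}$ inside it.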
+
+Next, we consider three main types of physical transformations: a) TPS transformation $t_{\mathrm{TPS}} \in \mathcal{T}_{\mathrm{TPS}}$ applying to the adversarial perturbation $\delta$ for modeling the effect of cloth deformation, b) physical color transformation $t_{\mathrm{color}}$ which converts digital colors to those printed and visualized in the physical world, and c) conventional physical transformation $t \in \mathcal{T}$ applying to the region within the person's bounding box, namely, $(M_{p,i} \circ \mathbf{x}_i - M_{c,i} \circ \mathbf{x}_i + M_{c,i} \circ \delta)$ . Here $\mathcal{T}_{\mathrm{TPS}}$ denotes the set of possible non-rigid transformations, $t_{\mathrm{color}}$ is given by a regression model learnt from the color spectrum in the digital space to its printed counterpart, and $\mathcal{T}$ denotes the set of commonly-used physical transformations, e.g., scaling, translation, rotation, brightness, blurring and contrast. A modification of (4) under different sources of transformations is then given by
+
+$$
+\mathbf{x}_i^{\prime} = t_{\mathrm{env}}\left(\mathrm{A} + t\left(\mathrm{B} - \mathrm{C} + t_{\mathrm{color}}\left(M_{c,i} \circ t_{\mathrm{TPS}}(\boldsymbol{\delta} + \mu \mathbf{v})\right)\right)\right) \tag{5}
+$$
+
+for $t \in \mathcal{T}$, $t_{\mathrm{TPS}} \in \mathcal{T}_{\mathrm{TPS}}$, and $\mathbf{v} \sim \mathcal{N}(0,1)$. In (5), the terms A, B and C are defined in (4), and $t_{\mathrm{env}}$ denotes a brightness transformation modeling the environmental brightness condition. In (5), $\mu \mathbf{v}$ is an additive Gaussian noise that allows variation of pixel values, where $\mu$ is a given smoothing parameter; we set it to 0.03 in our experiments so that the noise realizations mostly fall within the range $[-0.1, 0.1]$. This randomized noise injection is also known as Gaussian smoothing [12], which makes the final objective function smoother and benefits the gradient computation during optimization.
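
One Monte Carlo draw of (5) can be sketched with placeholder callables standing in for the sampled transformations (all names here are illustrative assumptions; with identity transforms and $\mu = 0$ the expression reduces to (4)):

```python
import numpy as np

def transformed_frame(x, M_p, M_c, delta, t, t_tps, t_color, t_env,
                      mu=0.03, rng=np.random.default_rng(0)):
    """One random draw of eq. (5). The callables t, t_tps, t_color and t_env
    stand in for a sample from T, a sample from T_TPS, the learnt color
    mapper, and the environmental-brightness transform, respectively."""
    v = rng.standard_normal(delta.shape)           # Gaussian smoothing noise
    patch = t_color(M_c * t_tps(delta + mu * v))   # transformed term D
    A = (1 - M_p) * x                              # background
    B = M_p * x                                    # person region
    C = M_c * x                                    # erased T-shirt pixels
    return t_env(A + t(B - C + patch))
```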
+
+Different from prior works, e.g., [28,13], which established a non-printability score (NPS) to measure the distance between the designed perturbation vector and a library of printable colors, we propose to model the color transformer $t_{\mathrm{color}}$ using a quadratic polynomial regression. The detailed color mapping is shown in Appendix B.
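
One possible way to fit such a quadratic color mapper per channel, given paired digital/printed calibration swatches (the data and function names below are hypothetical; the paper's exact calibration procedure is in its Appendix B):

```python
import numpy as np

def fit_color_mapper(digital, printed):
    """Fit a per-channel quadratic polynomial so that printed ~ t_color(digital).

    digital, printed: (m, 3) arrays of paired RGB color swatches in [0, 1],
    e.g., from photographing a printed calibration chart."""
    coeffs = [np.polyfit(digital[:, k], printed[:, k], deg=2) for k in range(3)]

    def t_color(img):
        # Apply each channel's quadratic mapping pixel-wise.
        out = np.empty_like(img)
        for k in range(3):
            out[..., k] = np.polyval(coeffs[k], img[..., k])
        return out

    return t_color
```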
+
+With the aid of (5), the EoT formulation to fool a single object detector is cast as
+
+$$
+\underset{\boldsymbol{\delta}}{\text{minimize}} \; \frac{1}{M} \sum_{i = 1}^{M} \mathbb{E}_{t, t_{\mathrm{TPS}}, \mathbf{v}} \left[ f\left(\mathbf{x}_i^{\prime}\right) \right] + \lambda g(\boldsymbol{\delta}) \tag{6}
+$$
+
+where $f$ denotes an attack loss for misdetection, $g$ is the total-variation norm that enhances the smoothness of the perturbation [15], and $\lambda > 0$ is a regularization parameter. We further elaborate on the attack loss $f$ in problem (6). In YOLOv2, a probability score associated with a bounding box indicates whether or not an object is present within this box. Thus, we specify the attack loss as the largest bounding-box probability over all bounding boxes belonging to the 'person' class. For Faster R-CNN, we attack all bounding boxes towards the class 'background'. A more detailed derivation of the attack loss is provided in Appendix C. Fig. 3 presents an overview of our approach to generate adversarial T-shirts.
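
The two ingredients of (6) can be sketched as follows. The per-box 'person' scores are assumed to come from the detector, and the TV form below (sum of absolute neighbor differences) is one common choice; both function names are ours:

```python
import numpy as np

def person_attack_loss(person_scores):
    """Attack loss f in the YOLOv2 case: the largest 'person' probability
    over all candidate bounding boxes of one frame."""
    return float(np.max(person_scores))

def total_variation(delta):
    """Smoothness regularizer g(delta): sum of absolute differences between
    horizontally and vertically adjacent pixels of the perturbation."""
    dh = np.abs(delta[:, 1:] - delta[:, :-1]).sum()
    dv = np.abs(delta[1:, :] - delta[:-1, :]).sum()
    return float(dh + dv)
```

Minimizing `person_attack_loss` pushes every box's 'person' confidence below the detection threshold, while the TV term discourages high-frequency patterns that would not survive printing and capture.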
+
+Min-max optimization for fooling multiple object detectors. Unlike in the digital space, the transferability of adversarial attacks drops considerably in the physical environment; thus, we consider a physical ensemble attack against multiple object detectors. It was recently shown in [31] that the ensemble attack can be designed from the perspective of min-max optimization and yields a much higher worst-case attack success rate than the averaging strategy over multiple models. Given $N$ object detectors associated with attack loss functions $\{f_i\}_{i=1}^N$, the physical ensemble attack is cast as
+
+$$
+\underset{\boldsymbol{\delta} \in \mathcal{C}}{\text{minimize}} \; \underset{\mathbf{w} \in \mathcal{P}}{\text{maximize}} \; \sum_{i = 1}^{N} w_{i} \phi_{i}(\boldsymbol{\delta}) - \frac{\gamma}{2} \left\| \mathbf{w} - \mathbf{1}/N \right\|_{2}^{2} + \lambda g(\boldsymbol{\delta}), \tag{7}
+$$
+
+where $\mathbf{w}$ are known as domain weights that adjust the importance of each object detector during the attack generation, $\mathcal{P}$ is the probabilistic simplex $\mathcal{P} = \{\mathbf{w}\,|\,\mathbf{1}^T\mathbf{w} = 1,\mathbf{w}\geq \mathbf{0}\}$, $\gamma >0$ is a regularization parameter, and $\phi_i(\boldsymbol{\delta})\coloneqq \frac{1}{M}\sum_{j = 1}^{M}\mathbb{E}_{t\in \mathcal{T},t_{\mathrm{TPS}}\in \mathcal{T}_{\mathrm{TPS}}}[f_i(\mathbf{x}_j^{\prime})]$ following (6). In (7), if $\gamma = 0$, then the adversarial perturbation $\boldsymbol{\delta}$ is designed over the maximum attack loss (the worst-case attack scenario), since $\mathrm{maximize}_{\mathbf{w}\in \mathcal{P}}\sum_{i = 1}^{N}w_{i}\phi_{i}(\boldsymbol{\delta}) = \phi_{i^{*}}(\boldsymbol{\delta})$, where $i^{*} = \arg \max_{i}\phi_{i}(\boldsymbol{\delta})$ at a fixed $\boldsymbol{\delta}$. Moreover, if $\gamma \rightarrow \infty$, then the inner maximization of problem (7) implies $\mathbf{w} \to \mathbf{1}/N$, namely, an averaging scheme over the $N$ attack losses. Thus, the regularization parameter $\gamma$ in (7) strikes a balance between the max-strategy and the average-strategy.
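
Because the inner problem in (7) is a concave quadratic in $\mathbf{w}$, its maximizer has a closed form: project the unconstrained maximizer $\mathbf{1}/N + \boldsymbol{\phi}/\gamma$ onto the simplex. A sketch using the standard sort-based simplex projection (function names are ours):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {w : w >= 0, 1^T w = 1}."""
    u = np.sort(v)[::-1]                       # sort in descending order
    css = np.cumsum(u)
    rho = np.max(np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0])
    tau = (css[rho] - 1.0) / (rho + 1.0)       # shift that makes weights sum to 1
    return np.maximum(v - tau, 0.0)

def inner_max_weights(phi, gamma):
    """Closed-form maximizer of the inner problem in (7): the objective equals
    -gamma/2 * ||w - (1/N + phi/gamma)||^2 + const, so the maximizer over the
    simplex P is the projection of the unconstrained maximizer."""
    n = len(phi)
    return project_simplex(np.ones(n) / n + np.asarray(phi, dtype=float) / gamma)
```

Large $\gamma$ indeed drives the weights toward $\mathbf{1}/N$ (the averaging strategy), while small $\gamma$ concentrates all weight on the hardest detector (the worst-case strategy).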
+
+# 4 Experiments
+
+In this section, we demonstrate the effectiveness of our approach (we call it advT-TPS) for the design of adversarial T-shirts by comparing it with two baseline attack methods: a) the adversarial patch to fool YOLOv2 proposed in [30] and its printed version on a T-shirt (we call it advPatch²), and b) the variant of our approach in the absence of TPS transformation, namely, $\mathcal{T}_{\mathrm{TPS}} = \emptyset$ in (5) (we call it advT-Affine). We examine the convergence behavior of the proposed algorithm as well as its Attack Success Rate³ (ASR) in both the digital and physical worlds. We clarify our algorithmic parameter setting in Appendix D.
+
+Prior to detailed illustration, we briefly summarize the attack performance of our proposed adversarial T-shirt. When attacking YOLOv2, our method achieves $74\%$ ASR in the digital world and $57\%$ ASR in the physical world, where the latter is computed by averaging successfully attacked video frames over all different scenarios (i.e., indoor, outdoor and unforeseen scenarios) listed in Table 2. When attacking Faster R-CNN, our method achieves $61\%$ and $47\%$ ASR in the digital and the physical world, respectively. By contrast, the baseline advPatch only achieves around $25\%$ ASR in the best case among all digital and physical scenarios against either YOLOv2 or Faster R-CNN (e.g., $18\%$ against YOLOv2 in the physical case).
+
+# 4.1 Experimental Setup
+
+Data collection. We collect two datasets for learning and testing our proposed attack algorithm in the digital and physical worlds. The training dataset contains 40 videos (2003 video frames) from 4 different scenes: one outdoor and three indoor scenes. Each video lasts 5-10 seconds and captures a moving person wearing a T-shirt with a printed checkerboard. The desired adversarial pattern is then learnt from the training dataset. The test dataset in the digital space contains 10 videos captured under the same scenes as the training dataset. This dataset is used to evaluate the attack performance of the learnt adversarial pattern in the digital world. In the physical world, we customize a T-shirt with the printed adversarial pattern learnt from our algorithm. Another 24 test videos (Section 4.3) are then collected at a different time, capturing two or three persons (one of them wearing the adversarial T-shirt) walking a) side by side or b) at different distances. An additional control experiment, in which actors wearing adversarial T-shirts walk in an exaggerated way, is conducted to introduce large pose changes into the test data. In addition, we also test our adversarial T-shirt in unforeseen scenarios, where the test videos involve different locations and different persons that are not covered in the training dataset. All videos were taken using an iPhone X and resized to $416 \times 416$. In Table A2 of Appendix F, we summarize the collected dataset under all circumstances.
+
+Object detectors. We use two state-of-the-art object detectors, Faster R-CNN [27] and YOLOv2 [26], to evaluate our method. Both detectors are pre-trained on the COCO dataset [22], which contains 80 classes including 'person'. The minimum detection threshold is set to 0.7 for both Faster R-CNN and YOLOv2 by default. A sensitivity analysis of this threshold is presented in Fig. A4 of Appendix D.
+
+# 4.2 Adversarial T-shirt in the digital world
+
+Convergence performance of our proposed attack algorithm. In Fig. 4, we show ASR against the number of epochs used by our proposed algorithm to solve problem (6). Here, the success of our attack at one test frame requires two conditions: a) misdetection of the person who wears the adversarial T-shirt, and b) successful detection of the person who wears normal clothing. As we can see, the proposed attack method converges well for attacking both YOLOv2 and Faster R-CNN. We also note that attacking Faster R-CNN is more difficult than attacking YOLOv2. Furthermore, if TPS is not applied during training, ASR drops by around $30\%$ compared to our approach, which leverages TPS.
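
The frame-level success criterion above can be written down directly (a sketch with hypothetical per-frame detection flags):

```python
def attack_success_rate(frames):
    """ASR over test frames. Each frame is a dict with two booleans:
    'adv_detected'  -- the person in the adversarial T-shirt was detected;
    'ref_detected'  -- the reference person in normal clothing was detected.
    A frame counts as an attack success only when the adversarial person is
    missed AND the reference person is still detected (ruling out frames
    where the detector simply failed on everyone)."""
    hits = sum((not f['adv_detected']) and f['ref_detected'] for f in frames)
    return hits / len(frames)
```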
+
+
+Fig. 4: ASR vs. epoch number against YOLOv2 (left) and Faster R-CNN (right).
+
+
+
+ASR of adversarial T-shirts in various attack settings. We perform a more comprehensive evaluation of our methods by digital simulation. Table 1 compares the ASR of adversarial T-shirts generated with or without TPS transformation in 4 attack settings: a) single-detector attack, where adversarial T-shirts are designed and evaluated using the same object detector; b) transfer single-detector attack, where adversarial T-shirts are designed and evaluated using different object detectors; c) ensemble attack (average), given by (7) but using the average of the attack losses of individual models; and d) ensemble attack (min-max), given by (7). As we can see, it is crucial to incorporate the TPS transformation in the design of adversarial T-shirts: without TPS, the ASR drops from $61\%$ to $34\%$ when attacking Faster R-CNN and from $74\%$ to $48\%$ when attacking YOLOv2 in the single-detector attack setting. We also note that the transferability of the single-detector attack is weak in all settings, and that Faster R-CNN is consistently more robust than YOLOv2, in line with the results in Fig. 4. Compared with our approach and advT-Affine, the baseline advPatch yields the worst ASR when attacking a single detector. Furthermore, we evaluate the effectiveness of the proposed min-max ensemble attack (7). When attacking Faster R-CNN, the min-max ensemble attack significantly outperforms its counterpart using the averaging strategy, leading to a $15\%$ improvement in ASR. This improvement comes at the cost of a $7\%$ degradation when attacking YOLOv2.
+
+Table 1: The ASR (\%) of adversarial T-shirts generated from our approach, advT-Affine and the baseline advPatch in digital-world against Faster R-CNN and YOLOv2.
+
+| method | model | target | transfer | ensemble (average) | ensemble (min-max) |
+| --- | --- | --- | --- | --- | --- |
+| advPatch [30] | Faster R-CNN | 22% | 10% | N/A | N/A |
+| advT-Affine | Faster R-CNN | 34% | 11% | 16% | 32% |
+| advT-TPS (ours) | Faster R-CNN | 61% | 10% | 32% | 47% |
+| advPatch [30] | YOLOv2 | 24% | 10% | N/A | N/A |
+| advT-Affine | YOLOv2 | 48% | 13% | 31% | 27% |
+| advT-TPS (ours) | YOLOv2 | 74% | 13% | 60% | 53% |
+
+# 4.3 Adversarial T-shirt in the physical world
+
+We next evaluate our method in the physical world. First, we generate an adversarial pattern by solving problem (6) against YOLOv2 and Faster R-CNN, following Section 4.2. We then print the adversarial pattern on a white T-shirt, leading to the adversarial T-shirt. For a fair comparison, we also print the adversarial patterns generated by advPatch [30] and advT-Affine in Section 4.2 on white T-shirts of the same style. It is worth noting that, different from evaluations that take static photos of physical adversarial examples, our evaluation is conducted in a more practical and challenging setting: we record videos tracking a moving person wearing adversarial T-shirts, which involves multiple environmental effects such as distance, deformation of the T-shirt, and the poses and angles of the moving person.
+
+In Table 2, we compare our method with advPatch and advT-Affine under 3 specified scenarios, namely, the indoor, outdoor, and unforeseen scenarios, together with the overall case across all scenarios. We observe that our method achieves $64\%$ ASR (against YOLOv2) in the indoor scenario, which is much higher than advT-Affine $(39\%)$ and advPatch $(19\%)$. Compared to the indoor scenario, evading person detectors in the outdoor scenario is more challenging: the ASR of our approach drops to $47\%$, but it still outperforms advT-Affine $(36\%)$ and advPatch $(17\%)$. This is not surprising since the outdoor scenario suffers more environmental variations such as lighting changes. Even in the unforeseen scenario, we find that our adversarial T-shirt is robust to changes of person and location, achieving $48\%$ ASR against Faster R-CNN and $59\%$ ASR against YOLOv2. Compared to the digital results, the ASR of our adversarial T-shirt drops by around $10\%$ in all tested physical-world scenarios; see specific video frames in Fig. A5 in the Appendix.
+
+Table 2: The ASR (\%) of adversarial T-shirts generated from our approach, advT-Affine and advPatch under different physical-world scenes.
+
+| method | model | indoor | outdoor | new scenes | average ASR |
+| --- | --- | --- | --- | --- | --- |
+| advPatch [30] | Faster R-CNN | 15% | 16% | 12% | 14% |
+| advT-Affine | Faster R-CNN | 27% | 25% | 25% | 26% |
+| advT-TPS (ours) | Faster R-CNN | 50% | 42% | 48% | 47% |
+| advPatch [30] | YOLOv2 | 19% | 17% | 17% | 18% |
+| advT-Affine | YOLOv2 | 39% | 36% | 34% | 37% |
+| advT-TPS (ours) | YOLOv2 | 64% | 47% | 59% | 57% |
+
+# 4.4 Ablation Study
+
+In this section, we conduct further experiments to better understand the robustness of our adversarial T-shirt under various conditions, including angles and distances to the camera, camera view, the person's pose, and complex scenes with crowds and occlusion. Since the baseline method (advPatch) performs poorly in most of these scenarios, we focus on comparing our method (advT-TPS) against advT-Affine using YOLOv2. We refer readers to Appendix E for details on the setup of our ablation study.
+
+Angles and distances to camera. In Fig. 5, we present the ASRs of advT-TPS and advT-Affine when the actor who wears the adversarial T-shirt stands at different angles and distances to the camera. As we can see, advT-TPS works well within an angle of $20^{\circ}$ and a distance of $4\,\mathrm{m}$, and it consistently outperforms advT-Affine. We also note that ASR drops significantly at the angle $30^{\circ}$, since this induces occlusion of the adversarial pattern. Further, if the distance is greater than $7\,\mathrm{m}$, the pattern can no longer be clearly seen by the camera.
+
+
+Fig. 5: Average ASR vs. different angles (left) and distances (right).
+
+
+
+Human Pose. In Table 3 (left), we evaluate the effect of pose change on advT-TPS, where videos are taken of an actor with some distinct postures, including crouching, sitting and running in place; see Fig. 6 for specific examples. To alleviate other latent effects, the camera was made to look straight at the person at a fixed distance of about $1 \sim 2\,\mathrm{m}$. As we can see, advT-TPS consistently outperforms advT-Affine. In addition, we study the effect of occlusion on advT-Affine and advT-TPS in Appendix F.
+
+Complex scenes. In Table 3 (right), we test our adversarial T-shirt in several complex scenes with cluttered backgrounds, including a) an office with multiple objects and people moving around; b) a parking lot with vehicles and pedestrians; and c) a crossroad with busy traffic and a crowd. We observe that, compared to advT-Affine, advT-TPS remains reasonably effective in complex scenes without suffering a significant loss of ASR. Compared to other factors such as camera angle and occlusion, a cluttered background, or even a crowd, is probably the least of the concerns for our approach. This is explainable, as our approach works directly on object proposals to suppress the classifier.
+
+Table 3: The ASR (\%) of adversarial T-shirts generated from our approach (advT-TPS) and advT-Affine under different poses (left) and complex scenes (right).
+
+| pose / method | crouching | sitting | running |
+| --- | --- | --- | --- |
+| advT-Affine | 27% | 26% | 52% |
+| advT-TPS | 53% | 32% | 63% |
+
+| scenario / method | office | parking lot | crossroad |
+| --- | --- | --- | --- |
+| advT-Affine | 69% | 53% | 51% |
+| advT-TPS | 73% | 65% | 54% |
+
+# 5 Conclusion
+
+In this paper, we propose the Adversarial $T$-shirt, the first successful adversarial wearable for evading detection of moving persons. Since a T-shirt is a non-rigid ob-
+
+
+Fig. 6: Some video frames of a person wearing the adversarial T-shirts generated by advT-Affine (first row) and advT-TPS (second row) under different poses.
+
+
+Fig. 7: The person wearing our adversarial T-shirt generated by advT-TPS in three complex scenes: an office, a parking lot, and a crossroad.
+
+ject, its deformation induced by a person's pose change is taken into account when generating adversarial perturbations. We also propose a min-max ensemble attack algorithm to fool multiple object detectors simultaneously. We show that our attack against YOLOv2 achieves $74\%$ and $57\%$ attack success rates in the digital and physical worlds, respectively. By contrast, the advPatch method only achieves $24\%$ and $18\%$ ASR. We hope our studies provide insights into how adversarial perturbations can be realized in the physical world.
+
+# Acknowledgement
+
+This work is partly supported by the National Science Foundation CNS-1932351. We would also like to extend our gratitude to MIT-IBM Watson AI Lab.
+
+# References
+
+1. Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. vol. 80, pp. 284-293 (10-15 Jul 2018)
+2. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420 (2018)
+3. Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: International Conference on Machine Learning. pp. 284-293 (2018)
+4. Bookstein, F.L.: Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on pattern analysis and machine intelligence 11(6), 567-585 (1989)
+5. Cao, Y., Xiao, C., Yang, D., Fang, J., Yang, R., Liu, M., Li, B.: Adversarial objects against lidar-based autonomous driving systems. arXiv preprint arXiv:1907.05418 (2019)
+6. Carlini, N., Wagner, D.: Audio adversarial examples: Targeted attacks on speech-to-text. In: 2018 IEEE Security and Privacy Workshops (SPW). pp. 1-7. IEEE (2018)
+7. Chen, S.T., Cornelius, C., Martin, J., Chau, D.H.P.: Shapeshifter: Robust physical adversarial attack on faster r-cnn object detector. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. pp. 52-68. Springer (2018)
+8. Chui, H.: Non-rigid point matching: algorithms, extensions and applications. Citeseer (2001)
+9. Ding, G.W., Lui, K.Y.C., Jin, X., Wang, L., Huang, R.: On the sensitivity of adversarial robustness to input data distributions. In: International Conference on Learning Representations (2019)
+10. Donato, G., Belongie, S.: Approximate thin plate spline mappings. In: European conference on computer vision. pp. 21-31. Springer (2002)
+11. Duan, R., Ma, X., Wang, Y., Bailey, J., Qin, A.K., Yang, Y.: Adversarial camouflage: Hiding physical-world attacks with natural styles. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1000-1008 (2020)
+12. Duchi, J.C., Bartlett, P.L., Wainwright, M.J.: Randomized smoothing for stochastic optimization. SIAM Journal on Optimization 22(2), 674-701 (2012)
+13. Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., Song, D.: Robust physical-world attacks on machine learning models. arXiv preprint arXiv:1707.08945 (2017)
+14. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1625-1634 (2018)
+15. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Tramer, F., Prakash, A., Kohno, T., Song, D.: Physical adversarial examples for object detectors. In: 12th USENIX Workshop on Offensive Technologies (WOOT 18) (2018)
+16. Geiger, A., Moosmann, F., Car, O., Schuster, B.: Automatic camera and range sensor calibration using a single shot. In: 2012 IEEE International Conference on Robotics and Automation. pp. 3936-3943. IEEE (2012)
+
+17. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440-1448 (2015)
+18. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 580-587 (2014)
+19. Jaderberg, M., Simonyan, K., Zisserman, A., et al.: Spatial transformer networks. In: Advances in neural information processing systems. pp. 2017-2025 (2015)
+20. Li, J., Schmidt, F., Kolter, Z.: Adversarial camera stickers: A physical camera-based attack on deep learning systems. In: International Conference on Machine Learning. pp. 3896-3904 (2019)
+21. Lin, J., Gan, C., Han, S.: Defensive quantization: When efficiency meets robustness. In: International Conference on Learning Representations (2019), https://openreview.net/forum?id=ryetZ20ctX
+22. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+23. Liu, H.T.D., Tao, M., Li, C.L., Nowrouzezahrai, D., Jacobson, A.: Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer. In: International Conference on Learning Representations (2019), https://openreview.net/forum?id=SJ12niR9KQ
+24. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision. pp. 21-37. Springer (2016)
+25. Lu, J., Sibai, H., Fabry, E.: Adversarial examples that fool detectors. arXiv preprint arXiv:1712.02494 (2017)
+26. Redmon, J., Farhadi, A.: Yolo9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7263-7271 (2017)
+27. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+28. Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. pp. 1528-1540. ACM (2016)
+29. Sitawarin, C., Bhagoji, A.N., Mosenia, A., Mittal, P., Chiang, M.: Rogue signs: Deceiving traffic sign recognition with malicious ads and logos. arXiv preprint arXiv:1801.02780 (2018)
+30. Thys, S., Van Ranst, W., Goedemé, T.: Fooling automated surveillance cameras: adversarial patches to attack person detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 0-0 (2019)
+31. Wang, J., Zhang, T., Liu, S., Chen, P.Y., Xu, J., Fardad, M., Li, B.: Beyond adversarial training: Min-max optimization in adversarial attack and defense. arXiv preprint arXiv:1906.03563 (2019)
+32. Xu, K., Chen, H., Liu, S., Chen, P.Y., Weng, T.W., Hong, M., Lin, X.: Topology attack and defense for graph neural networks: An optimization perspective. In: International Joint Conference on Artificial Intelligence (IJCAI) (2019)
+33. Xu, K., Liu, S., Zhang, G., Sun, M., Zhao, P., Fan, Q., Gan, C., Lin, X.: Interpreting adversarial examples by activation promotion and suppression. arXiv preprint arXiv:1904.02057 (2019)
+
+34. Xu, K., Liu, S., Zhao, P., Chen, P.Y., Zhang, H., Fan, Q., Erdogmus, D., Wang, Y., Lin, X.: Structured adversarial attack: Towards general implementation and better interpretability. In: International Conference on Learning Representations (2019)
+35. Zhang, Y., Foroosh, H., David, P., Gong, B.: CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild. In: International Conference on Learning Representations (2019), https://openreview.net/forum?id=SJgEl3A5tm
+36. Zhang, Z.: A flexible new technique for camera calibration. IEEE Transactions on pattern analysis and machine intelligence 22 (2000)
+37. Zhao, P., Xu, K., Liu, S., Wang, Y., Lin, X.: Admm attack: an enhanced adversarial attack for deep neural networks with undetectable distortions. In: Proceedings of the 24th Asia and South Pacific Design Automation Conference. pp. 499-505. ACM (2019)
\ No newline at end of file
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/images.zip b/adversarialtshirtevadingpersondetectorsinaphysicalworld/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a92a53a4fbead9a7e7fd950368390050aa99456f
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:664a9ec4bc9e6e81f9ab569c5473e9cf25afeca87c7ea341f35211cc2d01f46d
+size 586670
diff --git a/adversarialtshirtevadingpersondetectorsinaphysicalworld/layout.json b/adversarialtshirtevadingpersondetectorsinaphysicalworld/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fd438b0d3c410f2e407425758186fa7b1d1252c1
--- /dev/null
+++ b/adversarialtshirtevadingpersondetectorsinaphysicalworld/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82d10b954f3be623c7949aa80253590327cdbfffae851759cf33d4ed9db824f7
+size 442993
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_content_list.json b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fd5142fe3f1e4deb8fb9d0cee9e1e172fb14ea6a
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23ea922f4f9b4921f32ade274774132f956a05776f3dea7ce0c6dfe74aacf2ba
+size 80382
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_model.json b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5bbb7df4901c69fdd535ac24346863a7d0027d78
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cbe4159c201114761c4f58813f205177b79d3ef1b0f5270ae81603b3faedae9
+size 96550
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_origin.pdf b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..372f42c49aaccba8b578cb869101541951727570
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/2032e1b7-6bc2-4792-9f86-fcd57ed758d6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b00b24300068acca9789898887dfa2b54e2b416f2e48efbca61c73cd6468d65f
+size 1382294
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/full.md b/advpctransferableadversarialperturbationson3dpointclouds/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8dd89722a4815ee334c89f0c7bab52156d3443a3
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/full.md
@@ -0,0 +1,289 @@
+# AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds
+
+Abdullah Hamdi, Sara Rojas, Ali Thabet, and Bernard Ghanem
+
+King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia {abdullah.hamdi, sara.rojasmartinez, ali.thabet, bernard.ghanem}@kaust.edu.sa
+
+Abstract. Deep neural networks are vulnerable to adversarial attacks, in which imperceptible perturbations to their input lead to erroneous network predictions. This phenomenon has been extensively studied in the image domain, and has only recently been extended to 3D point clouds. In this work, we present novel data-driven adversarial attacks against 3D point cloud networks. We aim to address the following problems in current 3D point cloud adversarial attacks: they do not transfer well between different networks, and they are easy to defend against via simple statistical methods. To this end, we develop a new point cloud attack (dubbed AdvPC) that exploits the input data distribution by adding an adversarial loss, after Auto-Encoder reconstruction, to the objective it optimizes. AdvPC leads to perturbations that are resilient against current defenses, while remaining highly transferable compared to state-of-the-art attacks. We test AdvPC using four popular point cloud networks: PointNet, PointNet++ (MSG and SSG), and DGCNN. Our proposed attack increases the attack success rate by up to $40\%$ for attacks transferred to unseen networks (transferability), while maintaining a high success rate on the attacked network. AdvPC also increases the ability to break defenses by up to $38\%$ as compared to other baselines on the ModelNet40 dataset. The code is available at https://github.com/ajhamdi/AdvPC.
+
+# 1 Introduction
+
+Deep learning has shown impressive results in many perception tasks. Despite its performance, several works show that deep learning algorithms can be susceptible to adversarial attacks. These attacks craft small perturbations to the inputs that push the network to produce incorrect outputs. Significant progress has been made on 2D image adversarial attacks, where extensive work shows diverse ways to attack 2D neural networks [23,6,11,18,4,2,35,8,7]. In contrast, there is little focus on their 3D counterparts [31,38,37,25]. 3D point clouds captured by 3D sensors like LiDAR are now widely processed using deep networks for safety-critical applications, including but not limited to self-driving [3,27]. However, as we show in this paper, 3D deep networks tend to be vulnerable to input perturbations, a fact that increases the risk of using them in such applications. In this paper, we present a novel approach to attack deep learning algorithms applied to 3D point clouds, with a primary focus on attack transferability between networks.
+
+
+Fig. 1: Transferable Adversarial Perturbations on 3D point clouds: Generating adversarial attacks to fool PointNet [21] (PN) by perturbing a Table point cloud. The perturbed 3D object not only forces PointNet to predict an incorrect class, but also induces misclassification on other unseen 3D networks (PointNet++ [22], DGCNN [29]) that are not involved in generating the perturbation. Fooling unseen networks poses a threat to 3D deep vision models.
+
+The concept of attack transferability has been extensively studied in the 2D image domain [17,19,20]. Transferability allows an adversary to fool any network, without access to the network's architecture. Clearly, transferable attacks pose a serious security concern, especially in the context of deep learning model deployment. In this work, the goal is to generate adversarial attacks with network-transferability, i.e. the attack to a given point cloud is generated using a single and accessible victim network, and the perturbed sample is directly applied to an unseen and inaccessible transfer network. Accessibility here refers to whether the parameters and architecture of the network are known, while optimizing the attack (white-box). Fig. 1 illustrates the concept of transferability. The perturbation generated by our method for a 3D point cloud not only flips the class label of a victim network to a wrong class (i.e. it is adversarial), but it also induces a misclassification for the transfer networks that are not involved in generating the perturbation (i.e. it is transferable).
+
+Very few adversarial attacks have been developed for 3D point clouds. The first method was introduced by Xiang et al. [31], and it proposes point perturbation and adversarial point generation as two attack modes. More recently, Tsai et al. [25] proposed to make point cloud attacks smoother and more natural by incorporating a K-Nearest Neighbor (KNN) loss on the points, thus making the attacks physically realizable. We identify two main shortcomings in current 3D adversarial perturbation methods [31,25]. First, their attacks are unsuccessful in the presence of simple defenses, such as Statistical Outlier Removal [38]. Second, they are limited to the victim network and do not transfer well to other networks [31]. In contrast, our work focuses not only on adversarial perturbations that are significantly more resilient against currently available point cloud defenses, but also on those that transfer well between different point cloud networks.
+
+To generate more transferable attacks, we use a point cloud Auto-Encoder (AE), which can effectively reconstruct the unperturbed input after it is perturbed, and then add a data adversarial loss. We optimize the perturbation added to the input to fool the classifier before it passes through the AE (regular adversarial loss) and after it passes through the AE (data adversarial loss). In doing so, the attack tends to be less dependent on the victim network, and generalizes better to different networks. Our attack is dubbed "AdvPC", and our full pipeline is optimized end-to-end from the classifier output to the perturbation. The AE learns the natural distribution of the data to generalize the attack to a broader range of unseen classifiers [26], thus making the attack more dangerous. Our attacks surpass state-of-the-art attacks [31,25] by a large margin (up to $40\%$ ) on point cloud networks operating on the standard ModelNet40 dataset [30] and for the same maximum allowed perturbation norms (norm-budgets).
+
+Contributions. Our contributions are two-fold. (1) We propose a new pipeline and loss function to perform transferable adversarial perturbations on 3D point clouds. By introducing a data adversarial loss targeting the victim network after reconstructing the perturbed input with a point cloud AE, our approach can be successful in both attacking the victim network and transferring to unseen networks. Since the AE is trained to leverage the point cloud data distribution, incorporating it into the attack strategy enables better transferability to unseen networks. To the best of our knowledge, we are the first to introduce network-transferable adversarial perturbations for 3D point clouds. (2) We perform extensive experiments under constrained norm-budgets to validate the transferability of our attacks. We transfer our attacks between four point cloud networks and show superiority against the state-of-the-art. Furthermore, we demonstrate how our attacks outperform others when targeted by currently available point cloud defenses.
+
+# 2 Related Work
+
+# 2.1 Deep Learning for 3D Point Clouds
+
+PointNet [21] paved the way as the first deep learning algorithm to operate directly on 3D point clouds. PointNet computes point features independently and aggregates them using an order-invariant function like max-pooling. An update to this work was PointNet++ [22], where points are aggregated at different 3D scales. Subsequent works focused on how to aggregate more local context [5] or on more complex aggregation strategies like RNNs [9,33]. More recent methods run convolutions across neighbors of points, instead of using point-wise operations [29,15,24,13,12,15,28,14]. Contrary to PointNet and its variants, these works achieve superior recognition results by focusing on local feature representation. In this paper, to evaluate and validate our adversarial attacks, we use three point-wise networks, PointNet [21] and PointNet++ [22] in single-scale (SSG) and multi-scale (MSG) form, and the Dynamic Graph CNN (DGCNN) [29]. We study the sensitivity of each network to adversarial perturbations and show the transferability of AdvPC attacks between the networks.
+
+# 2.2 Adversarial Attacks
+
+Pixel-based Adversarial Attacks. The initial image-based adversarial attack was introduced by Szegedy et al. [23], who cast the attack problem as an optimization in which pixel perturbations are minimized so as to fool a trained classifier into predicting a wrong class label. Since then, the topic of adversarial attacks has attracted much attention [6,11,18,4,16]. More recent works take a learning-based approach to the attack [19,20,36]. They train a neural network (adversary) to perform the attack and then use the trained adversary model to attack unseen samples. These learning approaches [19,20,36] tend to have better transferability properties than the optimization approaches [6,11,18,4,16], while the latter tend to achieve higher success rates on the victim networks. As such, our proposed AdvPC attack is a hybrid approach, in which we leverage an AE to capture properties of the data distribution but still define the attack as an optimization for each sample. In doing so, AdvPC captures the merits of both learning and optimization methods to achieve high success rates on the victim networks as well as better transferability to unseen networks.
+
+Adversarial Attacks in 3D. Several adversarial attacks have moved beyond pixel perturbations to the 3D domain. One line of work focuses on attacking image-based CNNs by changing the 3D parameters of the object in the image, instead of changing the pixels of the image [8,35,2,7,32]. Recently, Xiang et al. [31] developed adversarial perturbations on 3D point clouds, which were successful in attacking PointNet [21]; however, this approach has two main shortcomings. First, it can be easily defended against by simple statistical operations [38]. Second, the attacks are non-transferable and only work on the attacked network [31,38]. In contrast, Zheng et al. [37] proposed dropping points from the point cloud using a saliency map to fool trained 3D deep networks. As compared to [37], our attacks are modeled as an optimization over the additive perturbation variable, with a focus on point perturbations instead of point removal. As compared to [31], our AdvPC attacks are significantly more successful against available defenses and more transferable beyond the victim network, since AdvPC leverages the point cloud data distribution through the AE. Concurrent to our work is that of Tsai et al. [25], in which the attack is crafted with a KNN loss to produce smooth and natural shapes. The motivation of their work is to craft natural attacks on 3D point clouds that can be 3D-printed into real objects. In comparison, our novel AdvPC attack leverages the data distribution of point clouds through an AE to generalize the attack.
+
+Defending Against 3D Point Cloud Attacks. Zhou et al. [38] proposed a Statistical Outlier Removal (SOR) method as a defense against point cloud attacks. SOR uses KNN to identify and remove point outliers. They also propose DUP-Net, which is a combination of their SOR and the point cloud up-sampling network PU-Net [34]. Zhou et al. also proposed removing unnatural points by Simple Random Sampling (SRS), where each point has the same probability of being randomly removed. Adversarial training on the attacked point cloud is also proposed as a mode of defense by [31]. Our attacks surpass state-of-the-art
+
+
+Fig. 2: AdvPC Attack Pipeline: We optimize for the constrained perturbation variable $\pmb{\Delta}$ to generate the perturbed sample $\mathcal{X}' = \mathcal{X} + \pmb{\Delta}$ . The perturbed sample fools a trained classifier $\mathbf{F}$ (i.e. $\mathbf{F}(\mathcal{X}')$ is incorrect), and at the same time, if the perturbed sample is reconstructed by an Auto-Encoder (AE) $\mathbf{G}$ , it too fools the classifier (i.e. $\mathbf{F}(\mathbf{G}(\mathcal{X}'))$ is incorrect). The AdvPC loss for network $\mathbf{F}$ is defined in Eq (6) and has two parts: network adversarial loss (purple) and data adversarial loss (green). Dotted lines are gradients flowing to the perturbation variable $\pmb{\Delta}$ .
+
+attacks [31,25] on point cloud networks by a large margin (up to $38\%$ ) on the standard ModelNet40 dataset [30] against the aforementioned defenses [38].
+
+# 3 Methodology
+
+The pipeline of AdvPC is illustrated in Fig. 2. It consists of an Auto-Encoder (AE) $\mathbf{G}$ , which is trained to reconstruct 3D point clouds and a point cloud classifier $\mathbf{F}$ . We seek to find a perturbation variable $\pmb{\Delta}$ added to the input $\mathcal{X}$ to fool $\mathbf{F}$ before and after it passes through the AE for reconstruction. The setup makes the attack less dependent on the victim network and more dependent on the data. As such, we expect this strategy to generalize to different networks. Next, we describe the main components of our pipeline: 3D point cloud input, AE, and point cloud classifier. Then, we present our attack setup and loss.
+
+# 3.1 AdvPC Attack Pipeline
+
+3D Point Clouds $(\mathcal{X})$ . We define a point cloud $\mathcal{X} \in \mathbb{R}^{N \times 3}$ as a set of $N$ 3D points, where each point $\mathbf{x}_i \in \mathbb{R}^3$ is represented by its 3D coordinates $(x_i, y_i, z_i)$ .
+
+Point Cloud Networks (F). We focus on 3D point cloud classifiers with a feature max pooling layer as detailed in Eq (1), where $h_{\mathrm{mlp}}$ and $h_{\mathrm{conv}}$ are MLP and Convolutional $(1 \times 1$ or edge) layers, respectively. This produces a K-class classifier $\mathbf{F}$ .
+
+$$
+\mathbf{F}(\mathcal{X}) = h_{\mathrm{mlp}}\left(\max_{\mathbf{x}_{i} \in \mathcal{X}}\left\{h_{\mathrm{conv}}\left(\mathbf{x}_{i}\right)\right\}\right) \tag{1}
+$$
+
+Here, $\mathbf{F}:\mathbb{R}^{N\times 3}\to \mathbb{R}^K$ produces the logits layer of the classifier with size $K$ . For our attacks, we take $\mathbf{F}$ to be one of the following widely used networks in the
+
+literature: PointNet [21], PointNet++ [22] in single-scale form (SSG) and multiscale form (MSG), and DGCNN [29]. Section 5.2 delves deep into the differences between them in terms of their sensitivities to adversarial perturbations.
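As a concrete illustration of Eq (1), the max-pooled architecture can be sketched in a few lines of numpy. This is a toy stand-in, not the actual networks: `pointwise_classifier` uses a single random linear layer for $h_{\mathrm{conv}}$ and $h_{\mathrm{mlp}}$ each, where the real models stack several trained layers.

```python
import numpy as np

def pointwise_classifier(points, w_conv, w_mlp):
    """Toy instance of Eq (1): a shared per-point layer h_conv, an
    order-invariant max-pool over the point set, and a head h_mlp
    producing K class logits."""
    feats = np.maximum(points @ w_conv, 0.0)  # h_conv: shared per-point features, shape (N, d)
    global_feat = feats.max(axis=0)           # max-pool over points: permutation-invariant, shape (d,)
    return global_feat @ w_mlp                # h_mlp: K logits

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 3))                # a point cloud with N = 1024 points
w_conv = rng.normal(size=(3, 64))             # placeholder weights, not trained parameters
w_mlp = rng.normal(size=(64, 40))             # K = 40 classes (as in ModelNet40)
logits = pointwise_classifier(X, w_conv, w_mlp)
```

Because features are aggregated with a max over the point axis, permuting the input points leaves the logits unchanged, which is exactly the order invariance described above.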
+
+Point Cloud Auto-Encoder (G). An AE learns a representation of the data and acts as an effective defense against adversarial attacks. It ideally projects a perturbed point cloud onto the natural manifold of inputs. Any point cloud AE architecture can be used, but we select the one in [1] because of its simple structure and effectiveness in recovering from adversarial perturbation. The AE $\mathbf{G}$ consists of an encoding part, $\mathbf{g}_{\mathrm{encode}}: \mathbb{R}^{N \times 3} \to \mathbb{R}^q$ (similar to Eq (1)), and an MLP decoder, $\mathbf{g}_{\mathrm{mlp}}: \mathbb{R}^q \to \mathbb{R}^{N \times 3}$ , that produces a point cloud. It can be described formally as $\mathbf{G}(.) = \mathbf{g}_{\mathrm{mlp}}(\mathbf{g}_{\mathrm{encode}}(.))$ . We train the AE with the Chamfer loss as in [1] on the same data used to train $\mathbf{F}$ , such that it can reliably encode and decode 3D point clouds. We freeze the AE weights during the optimization of the adversarial perturbation on the input. Since the AE learns what naturally occurring point clouds look like, the gradients updating the attack, which is also tasked with fooling the reconstructed sample after the AE, become more dependent on the data and less on the victim network. This enhanced data dependency allows our attacks to succeed on unseen transfer networks in addition to the victim network. As such, the proposed composition allows the crafted attack to successfully fool the victim classifier, as well as transfer classifiers that operate on a similar input data manifold.
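For reference, the Chamfer loss used to train such an AE can be sketched as follows. This `chamfer` helper is a minimal brute-force version written for illustration, not the authors' implementation.

```python
import numpy as np

def chamfer(P, Q):
    """Minimal sketch of the (squared) Chamfer distance between two point
    clouds P (N, 3) and Q (M, 3): the mean nearest-neighbor squared
    distance from P to Q plus the same from Q to P."""
    d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The loss is zero only when every point in each cloud coincides with some point of the other, which is why minimizing it pushes the AE reconstruction toward the input cloud.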
+
+# 3.2 AdvPC Attack Loss
+
+Soft Constraint Loss. In AdvPC attacks, like the ones in Fig. 3, we focus solely on perturbations of the input. We modify each point $\mathbf{x}_i$ by an additive perturbation variable $\delta_i$ . Formally, we define the perturbed point set $\mathcal{X}' = \mathcal{X} + \Delta$ , where $\Delta \in \mathbb{R}^{N \times 3}$ is the perturbation parameter we are optimizing for. Consequently, each pair $(\mathbf{x}_i, \mathbf{x}_i')$ is in correspondence. Adversarial attacks are commonly formulated as in Eq (2), where the goal is to find an input perturbation $\Delta$ that successfully fools $\mathbf{F}$ into predicting an incorrect label $t'$ , while keeping $\mathcal{X}'$ and $\mathcal{X}$ close under a distance metric $\mathcal{D} \colon \mathbb{R}^{N \times 3} \times \mathbb{R}^{N \times 3} \to \mathbb{R}$ .
+
+$$
+\min_{\pmb{\Delta}} \mathcal{D}(\mathcal{X}, \mathcal{X}^{\prime}) \quad \text{s.t.} \quad \left[\underset{i}{\arg\max}\, \mathbf{F}(\mathcal{X}^{\prime})_{i}\right] = t^{\prime} \tag{2}
+$$
+
+The formulation in Eq (2) can describe targeted attacks (if $t'$ is specified before the attack) or untargeted attacks (if $t'$ is any label other than the true label of $\mathcal{X}$ ). We adopt the following choice of $t'$ for untargeted attacks: $t' = [\arg \max_{i \neq \text{true}} \mathbf{F}(\mathcal{X}')_i]$ . Unless stated otherwise, we primarily use untargeted attacks in this paper. As pointed out in [4], it is difficult to directly solve Eq (2). Instead, previous works like [31,25] have used the well-known C&W formulation, giving rise to the commonly known soft constraint attack: $\min_{\Delta} f_{t'}(\mathbf{F}(\mathcal{X}')) + \lambda \mathcal{D}(\mathcal{X}, \mathcal{X}')$ where $f_{t'}(\mathbf{F}(\mathcal{X}'))$ is the adversarial loss function defined on the network $\mathbf{F}$ to move it to label $t'$ as in Eq (3).
+
+$$
+f_{t^{\prime}}\left(\mathbf{F}\left(\mathcal{X}^{\prime}\right)\right) = \max\left(\max_{i \neq t^{\prime}}\left(\mathbf{F}\left(\mathcal{X}^{\prime}\right)_{i}\right) - \mathbf{F}\left(\mathcal{X}^{\prime}\right)_{t^{\prime}} + \kappa, 0\right), \tag{3}
+$$
+
+
+Fig. 3: Examples of AdvPC Attacks: Adversarial attacks are generated for victim networks PointNet, PointNet++ (MSG/SSG) and DGCNN using AdvPC. The unperturbed point clouds are in black (top) while the perturbed examples are in blue (bottom). The network predictions are shown under each point cloud. The wrong prediction of each perturbed point cloud matches the target of the AdvPC attack.
+
+where $\kappa$ is a loss margin. The 3D-Adv attack [31] uses $\ell_2$ for $\mathcal{D}(\mathcal{X},\mathcal{X}')$ , while the KNN Attack [25] uses Chamfer Distance.
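A minimal numpy sketch of the adversarial loss in Eq (3); the attacks apply it to the victim network's logits, and `margin_loss` is an illustrative name rather than the authors' code.

```python
import numpy as np

def margin_loss(logits, target, kappa=30.0):
    """Sketch of Eq (3): pushes the target logit above every other logit
    by a margin kappa; the loss saturates at zero once the input is
    classified as `target` with at least that margin."""
    others = np.delete(logits, target)                    # all logits except the target's
    return max(others.max() - logits[target] + kappa, 0.0)
```

With `kappa = 0`, the loss is zero exactly when the arg max of the logits equals the target label; a larger `kappa` demands a more confident misclassification.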
+
+Hard Constraint Loss. An alternative to Eq (2) is to put $\mathcal{D}(\mathcal{X},\mathcal{X}')$ as a hard constraint, where the objective can be minimized using Projected Gradient Descent (PGD) [11,16] as follows.
+
+$$
+\min_{\boldsymbol{\Delta}} f_{t^{\prime}}\left(\mathbf{F}\left(\mathcal{X}^{\prime}\right)\right) \quad \text{s.t.} \quad \mathcal{D}\left(\mathcal{X}, \mathcal{X}^{\prime}\right) \leq \epsilon \tag{4}
+$$
+
+Using a hard constraint sets a limit on the amount of perturbation added in the attack. This limit is defined by $\epsilon$ in Eq (4), which we call the norm-budget in this work. Having this bound ensures a fair comparison between different attack schemes. We compare these schemes by measuring their attack success rate at different levels of norm-budget. Using PGD, the above optimization in Eq (4) with $\ell_p$ distance $\mathcal{D}_{\ell_p}(\mathcal{X},\mathcal{X}')$ can be solved by iteratively projecting the perturbation $\pmb{\Delta}$ onto the $\ell_p$ sphere of size $\epsilon_{p}$ after each gradient step, such that: $\pmb{\Delta}_{t + 1} = \Pi_p(\pmb {\Delta}_t - \eta \nabla_{\pmb {\Delta}_t}f_{t'}(\mathbf{F}(\mathcal{X}')),\epsilon_p)$ . Here, $\Pi_p(\pmb {\Delta},\epsilon_p)$ projects the perturbation $\pmb{\Delta}$ onto the $\ell_p$ sphere of size $\epsilon_{p}$ , and $\eta$ is a step size. The two most commonly used $\ell_p$ distance metrics in the literature are $\ell_2$ , which measures the energy of the perturbation, and $\ell_{\infty}$ , which measures the maximum point perturbation of each $\delta_i\in \pmb{\Delta}$ . In our experiments, we use the $\ell_{\infty}$ distance defined as $\mathcal{D}_{\ell_{\infty}}(\mathcal{X},\mathcal{X}') = \max_i\| \pmb {\delta}_i\|_{\infty}$ . The projection of $\pmb{\Delta}$ onto the $\ell_{\infty}$ sphere of size $\epsilon_{\infty}$ is $\Pi_{\infty}(\pmb {\Delta},\epsilon_{\infty}) = \mathrm{SAT}_{\epsilon_{\infty}}(\pmb {\delta}_i)$ , $\forall \pmb {\delta}_i\in \pmb{\Delta}$ , where $\mathrm{SAT}_{\epsilon_{\infty}}(\pmb {\delta}_i)$ is the element-wise saturation function that limits every element of the vector $\delta_{i}$ to the range $[- \epsilon_{\infty},\epsilon_{\infty}]$ . The norm-budget $\epsilon_{\infty}$ is used throughout the experiments in this work.
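The PGD update with $\ell_{\infty}$ projection described above reduces to an element-wise clip after every gradient step. A minimal sketch, assuming the gradient of the attack loss is supplied externally (in practice it would come from autograd):

```python
import numpy as np

def project_linf(delta, eps):
    """Pi_inf: element-wise saturation of the perturbation to [-eps, eps],
    i.e. projection onto the l_inf ball of radius eps (the norm-budget)."""
    return np.clip(delta, -eps, eps)

def pgd_step(delta, grad, eta, eps):
    """One PGD iteration: descend the attack loss with step size eta,
    then project the perturbation back onto the l_inf ball."""
    return project_linf(delta - eta * grad, eps)
```

Every coordinate of the perturbation stays within the budget after each step, so the constraint of Eq (4) holds at every iterate, not just at convergence.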
+
+In the supplement, we detail our formulation when $\ell_2$ is used as the distance metric and report a similar advantage over the baselines as in the $\ell_{\infty}$ results. For completeness, we also show in the supplement the effect of using different distance metrics ($\ell_2$, Chamfer, and Earth Mover's Distance) as soft constraints on transferability and attack effectiveness.
+
+Data Adversarial Loss. The objectives in Eq (2, 4) focus solely on the network $\mathbf{F}$ . We also want to add more focus on the data in crafting our attacks. We do so by fooling $\mathbf{F}$ using both the perturbed input $\mathcal{X}'$ and the AE reconstruction $\mathbf{G}(\mathcal{X}')$ (see Fig. 2). Our new objective becomes:
+
+$$
+\min_{\boldsymbol{\Delta}} \mathcal{D}(\mathcal{X}, \mathcal{X}^{\prime}) \quad \text{s.t.} \quad \left[\underset{i}{\arg\max}\, \mathbf{F}(\mathcal{X}^{\prime})_{i}\right] = t^{\prime}; \quad \left[\underset{i}{\arg\max}\, \mathbf{F}(\mathbf{G}(\mathcal{X}^{\prime}))_{i}\right] = t^{\prime\prime} \tag{5}
+$$
+
+Here, $t''$ is any incorrect label $t'' \neq \arg \max_i \mathbf{F}(\mathcal{X})_i$ and $t'$ is chosen as in Eq (2). The second constraint ensures that the prediction of the perturbed sample after the AE differs from the true label of the unperturbed sample. Similar to Eq (2), this objective is hard to optimize, so we follow similar steps as in Eq (4) and optimize the following objective for AdvPC using PGD (with $\ell_{\infty}$ as the distance metric):
+
+$$
+\min_{\pmb{\Delta}} (1-\gamma)\, f_{t'}(\mathbf{F}(\mathcal{X}')) + \gamma\, f_{t''}(\mathbf{F}(\mathbf{G}(\mathcal{X}'))) \quad \text{s.t.} \quad \mathcal{D}_{\ell_{\infty}}(\mathcal{X},\mathcal{X}') \leq \epsilon_{\infty} \tag{6}
+$$
+
+Here, $f$ is as in Eq (3), while $\gamma$ is a hyper-parameter that trades off the attack's success before and after the AE. When $\gamma = 0$ , the formulation in Eq (6) reduces to Eq (4). We use PGD to solve Eq (6) just like Eq (4). We follow the same procedures as in [31] when solving Eq (6): we keep a record of any $\pmb{\Delta}$ that satisfies the constraints in Eq (5) and try different initializations for $\pmb{\Delta}$ .
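A minimal sketch of the combined objective in Eq (6), assuming $f$ is the C&W-style margin loss commonly used for Eq (3) (helper names and the exact margin form are our assumptions):

```python
import numpy as np

def margin_loss(logits, target, kappa=30.0):
    # C&W-style margin f_t: becomes negative (down to -kappa) once the
    # target logit exceeds every other logit (a common choice for Eq (3))
    other = np.max(np.delete(logits, target))
    return max(other - logits[target], -kappa)

def advpc_loss(logits_direct, logits_recon, t_prime, t_dprime, gamma=0.25):
    # Eq (6): weighted sum of the loss on F(X') and the loss on F(G(X'))
    return ((1.0 - gamma) * margin_loss(logits_direct, t_prime)
            + gamma * margin_loss(logits_recon, t_dprime))
```

With $\gamma = 0$ this reduces to the network-only objective of Eq (4); $\gamma = 0.25$ is the value used in the experiments.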
+
+# 4 Experiments
+
+# 4.1 Setup
+
+Dataset and Networks. We use ModelNet40 [30] to train the classifier network $(\mathbf{F})$ and the AE network $(\mathbf{G})$ , as well as test our attacks. ModelNet40 contains 12,311 CAD models from 40 different classes. These models are divided into 9,843 for training and 2,468 for testing. Similar to previous work [38,31,37], we sample 1,024 points from each object. We train the $\mathbf{F}$ victim networks: PointNet[21], PointNet++ in both Single-Scale (SSG) and Multi-scale (MSG) [22] settings, and DGCNN [29]. For a fair comparison, we adopt the subset of ModelNet40 detailed in [31] to perform and evaluate our attacks against their work (we call this the attack set). In the attack set, 250 examples are chosen from 10 ModelNet40 classes. We train the AE using the full ModelNet40 training set with the Chamfer Distance loss and then fix the AE when the attacks are being generated.
+
+Adversarial Attack Methods. We compare AdvPC against the state-of-the-art baselines 3D-Adv [31] and KNN Attack [25]. For all attacks, we use Adam optimizer [10] with learning rate $\eta = 0.01$ , and perform 2 different initializations for the optimization of $\pmb{\Delta}$ (as done in [31]). The number of iterations for the attack optimization for all the networks is 200. We set the loss margin $\kappa = 30$ in Eq (3) for both 3D-Adv [31] and AdvPC and $\kappa = 15$ for KNN Attack [25] (as suggested in their paper). For other hyperparameters of [31,25], we follow what is reported in their papers. We pick $\gamma = 0.25$ in Eq (6) for AdvPC because it
+
+
+Fig. 4: Transferability Across Different Norm-Budgets: Here, the victim network is DGCNN [29] and the attacks are optimized using different $\epsilon_{\infty}$ norm-budgets. We report the attack success on DGCNN and on the transfer networks (PointNet, PointNet ++ MSG, and PointNet++ SSG). We note that our AdvPC transfers better to the other networks across different $\epsilon_{\infty}$ as compared to the baselines 3D-Adv[31] and KNN Attack [25]. Similar plots for the other victim networks are provided in the supplement.
+
+strikes a balance between the success of the attack and its transferability (refer to Section 5.1 for details). In all of the attacks, we follow the same procedure as [31], where the best attack that satisfies the objective during the optimization is reported. We add the hard $\ell_{\infty}$ projection $\Pi_{\infty}(\pmb{\Delta},\epsilon_{\infty})$ described in Section 3 to all the methods to ensure a fair comparison at the same norm-budget $\epsilon_{\infty}$ . We report the best performance of the baselines obtained under this setup.
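The keep-the-best-attack-over-restarts procedure described above can be sketched as follows; the `run_attack` callback is a hypothetical stand-in for one full PGD optimization:

```python
def attack_with_restarts(run_attack, n_restarts=2, n_iters=200):
    # Track the best successful perturbation across random restarts,
    # following the procedure of [31]; run_attack(seed, n_iters) is
    # assumed to return (delta, fooled, distance).
    best_delta, best_dist = None, float("inf")
    for seed in range(n_restarts):
        delta, fooled, dist = run_attack(seed, n_iters)
        if fooled and dist < best_dist:
            best_delta, best_dist = delta, dist
    return best_delta
```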
+
+Transferability. We follow the same setup as [19,20]: we generate attacks under the constrained $\ell_{\infty}$ metric and measure their success rates at different norm-budgets $\epsilon_{\infty}$ in the range $[0, 0.75]$ . This range is chosen because it enables the attacks to reach $100\%$ success on the victim network, while still offering an opportunity for transferability to other networks. We compare AdvPC against the state-of-the-art baselines [31,25] under these norm-budgets (e.g. see Fig. 4 for attacking DGCNN). To measure the success of an attack, we compute the percentage of attacked samples that the victim network misclassifies. We also measure transferability from each victim network to the transfer networks. For each pair of networks, we optimize the attack on one network (victim) and measure the success rate of this optimized attack when applied as input to the other network (transfer). We report these success rates for all network pairs. No defenses are used in the transferability experiment. All the attacks performed in this section are untargeted attacks (following the convention for transferability experiments [31]).
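The untargeted success-rate measurement described above is simply the misclassification rate over the attacked samples; a sketch:

```python
import numpy as np

def attack_success_rate(predictions, true_labels):
    # percentage of attacked samples that the network misclassifies
    predictions = np.asarray(predictions)
    true_labels = np.asarray(true_labels)
    return 100.0 * float(np.mean(predictions != true_labels))
```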
+
+Attacking the Defenses. We also analyze the success of our attacks against point cloud defenses. We compare AdvPC attacks and the baselines [31,25] against several defenses used in the point cloud literature: SOR, SRS, DUP-Net [38], and Adversarial Training [31]. We also add a newly trained AE (different from the one used in the AdvPC attack) to this list of defenses. For SRS, we use a drop rate of $10\%$ , while in SOR, we use the same parameters proposed in [38]. We train DUP-Net on ModelNet40 with an up-sampling rate of 2. For Adversarial Training, all four networks are trained using a mix of the training data of ModelNet40 and adversarial attacks generated by [31]. While these experiments are for untargeted
+
+| Victim Network | Attack | PN (ε∞ = 0.18) | PN++ (MSG) | PN++ (SSG) | DGCNN | PN (ε∞ = 0.45) | PN++ (MSG) | PN++ (SSG) | DGCNN |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| PN | 3D-Adv [31] | 100 | 8.4 | 10.4 | 6.8 | 100 | 8.8 | 9.6 | 8.0 |
+| | KNN [25] | 100 | 9.6 | 10.8 | 6.0 | 100 | 9.6 | 8.4 | 6.4 |
+| | AdvPC (Ours) | 98.8 | **20.4** | **27.6** | **22.4** | 98.8 | **18.0** | **26.8** | **20.4** |
+| PN++ (MSG) | 3D-Adv [31] | 6.8 | 100 | 28.4 | 11.2 | 7.2 | 100 | 29.2 | 11.2 |
+| | KNN [25] | 6.4 | 100 | 22.0 | 8.8 | 6.4 | 100 | 23.2 | 7.6 |
+| | AdvPC (Ours) | **13.2** | 97.2 | **54.8** | **39.6** | **18.4** | 98.0 | **58.0** | **39.2** |
+| PN++ (SSG) | 3D-Adv [31] | 7.6 | 9.6 | 100 | 6.0 | 7.2 | 10.4 | 100 | 7.2 |
+| | KNN [25] | 6.4 | 9.2 | 100 | 6.4 | 6.8 | 7.6 | 100 | 6.0 |
+| | AdvPC (Ours) | **12.0** | **27.2** | 99.2 | **22.8** | **14.0** | **30.8** | 99.2 | **27.6** |
+| DGCNN | 3D-Adv [31] | 9.2 | 11.2 | 31.2 | 100 | 9.6 | 12.8 | 30.4 | 100 |
+| | KNN [25] | 7.2 | 9.6 | 14.0 | 99.6 | 6.8 | 10.0 | 11.2 | 99.6 |
+| | AdvPC (Ours) | **19.6** | **46.0** | **64.4** | 94.8 | **32.8** | **48.8** | **64.4** | 97.2 |
+
+Table 1: Transferability of Attacks: We use norm-budgets (max $\ell_{\infty}$ norm allowed in the perturbation) of $\epsilon_{\infty} = 0.18$ and $\epsilon_{\infty} = 0.45$ . All the reported results are the untargeted Attack Success Rate (higher numbers are better attacks). Bold numbers indicate the most transferable attacks. Our attack consistently achieves better transferability than the other attacks for all networks, especially on DGCNN [29]. For reference, the classification accuracies on unperturbed samples for networks PN, PN++(MSG), PN++(SSG) and DGCNN are $92.8\%$ , $91.5\%$ , $91.5\%$ , and $93.7\%$ , respectively.
+
+attacks, we perform similar experiments under targeted attacks and report the results in supplement for reference and completeness.
+
+# 4.2 Results
+
+We present quantitative results that focus on two main aspects. First, we show the transferable power of AdvPC attacks to different point cloud networks. Second, we highlight the strength of AdvPC under different point cloud defenses.
+
+Transferability. Table 1 reports transferability results for $\epsilon_{\infty} = 0.18$ and $\epsilon_{\infty} = 0.45$ and compares AdvPC with the baselines [31,25]. The value $\epsilon_{\infty} = 0.18$ is chosen because it allows the DGCNN attack to reach maximum success (see Section 5.2), while $\epsilon_{\infty} = 0.45$ is arbitrarily chosen to be midway in the remaining range of $\epsilon_{\infty}$ . AdvPC attacks consistently beat the baselines when transferring between networks (by up to $40\%$ ), with substantial gains in the case of DGCNN. We also report transferability results for a range of $\epsilon_{\infty}$ values in Fig. 4, where the victim network is DGCNN and the attacks are transferred to all other networks. In supplement, we show the same plots when the victim network is PN or $\mathrm{PN}++$ . To represent all these transferability curves compactly, we aggregate their results into a Transferability Matrix. Every entry in this matrix measures the transferability from the victim network (row) to the transfer network (column), and it is computed as the
+
+
+Fig. 5: Transferability Matrix: Visualizing the overall transferability for 3D-Adv [31] (left), KNN Attack [25] (middle), and our AdvPC (right). Elements in the same row correspond to the same victim network used in the attack, while those in the same column correspond to the network that the attack is transferred to. Each matrix element measures the average success rate over the range of $\epsilon_{\infty}$ for the transfer network. We expect the diagonal elements of each transferability matrix (average success rate on the victim network) to have high values, since each attack is optimized on the same network it is transferred to. More importantly, brighter off-diagonal matrix elements indicate better transferability. We observe that our proposed AdvPC attack is more transferable than the other attacks and that DGCNN is a more transferable victim network than the other point cloud networks. The transferability score under each matrix is the average of the off-diagonal matrix values, which summarizes overall transferability for an attack.
+
+average success rate of the attack evaluated on the transfer network across all $\epsilon_{\infty}$ values. This value reflects how good the perturbation is at fooling the transfer network overall. As such, we advocate the use of the transferability matrix as a standard mode of evaluation for future work on network-transferable attacks. In Fig. 5, we show the transferability matrices for our attack and the baselines. AdvPC transfers better overall, since it leads to higher (brighter) off-diagonal values in the matrix. Using the average of off-diagonal elements in this matrix as a single scalar measure of transferability, AdvPC achieves $24.9\%$ average transferability, as compared to $11.5\%$ for 3D-Adv [31] and $8.92\%$ for KNN Attack [25]. We note that DGCNN [29] performs best in terms of transferability and is the hardest network to attack (for AdvPC and the baselines).
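The transferability matrix and its scalar score can be computed directly from per-budget success rates; a sketch with an assumed array layout:

```python
import numpy as np

def transferability_matrix(success_rates):
    # success_rates[v, t, k]: success rate on transfer network t of an
    # attack optimized on victim v, evaluated at the k-th norm-budget;
    # each matrix entry averages over all norm-budgets
    return np.asarray(success_rates, dtype=float).mean(axis=2)

def transferability_score(matrix):
    # average of the off-diagonal entries: one scalar summary per attack
    m = np.asarray(matrix, dtype=float)
    off_diag = m[~np.eye(m.shape[0], dtype=bool)]
    return float(off_diag.mean())
```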
+
+Attacking Defenses. Since DGCNN performs the best in transferability, we use it to evaluate the resilience of our AdvPC attacks under different defenses. We use the five defenses described in Section 4.1 and report their results in Table 2. Our attack is more resilient than the baselines against all defenses. We note that the AE defense is very strong against all attacks compared to other defenses [38], which explains why AdvPC works very well against other defenses and transfers well to unseen networks. We also observe that our attack is strong against simple statistical defenses like SRS (38% improvement over the baselines). We report results for other victim networks (PN and PN++) in the supplement, where AdvPC shows superior performance against the baselines under these defenses.
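For reference, SRS is the simplest of these defenses: it randomly discards a fixed fraction of the input points before classification. A sketch with the 10% drop rate used here (implementation details are our assumptions):

```python
import numpy as np

def srs_defense(points, drop_rate=0.10, rng=None):
    # Simple Random Sampling: keep a random subset of the input points
    rng = np.random.default_rng(0) if rng is None else rng
    n = points.shape[0]
    keep = rng.choice(n, size=int(round(n * (1.0 - drop_rate))), replace=False)
    return points[keep]
```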
+
+| Defense | 3D-Adv [31] (ε∞ = 0.18) | KNN [25] | AdvPC (ours) | 3D-Adv [31] (ε∞ = 0.45) | KNN [25] | AdvPC (ours) |
+| --- | --- | --- | --- | --- | --- | --- |
+| No defense | 100 | 99.6 | 94.8 | 100 | 99.6 | 97.2 |
+| AE (newly trained) | 9.2 | 10.0 | 17.2 | 12.0 | 10.0 | 21.2 |
+| Adv Training [31] | 7.2 | 7.6 | 39.6 | 8.8 | 7.2 | 42.4 |
+| SOR [38] | 18.8 | 17.2 | 36.8 | 19.2 | 19.2 | 32.0 |
+| DUP-Net [38] | 28.0 | 28.8 | 43.6 | 28.0 | 31.2 | 37.2 |
+| SRS [38] | 43.2 | 29.2 | 80.0 | 47.6 | 31.2 | 85.6 |
+
+Table 2: Attacking Point Cloud Defenses: We evaluate untargeted attacks using norm-budgets of $\epsilon_{\infty} = 0.18$ and $\epsilon_{\infty} = 0.45$ with DGCNN [29] as the victim network under different defenses for 3D point clouds. Similar to before, we report attack success rates (higher indicates better attack). AdvPC consistently outperforms the other attacks [31,25] for all defenses. Note that both the attacks and evaluations are performed on DGCNN, which has an accuracy of $93.7\%$ without input perturbations (for reference).
+
+# 5 Analysis
+
+We perform several analytical experiments to further explore the results obtained in Section 4.2. We first study the effect of different factors that play a role in the transferability of our attacks. We also show some interesting insights related to the sensitivity of point cloud networks and the effect of the AE on the attacks.
+
+# 5.1 Ablation Study (hyperparameter $\gamma$ )
+
+Here, we study the effect of $\gamma$ used in Eq (6) on the performance of our attacks. While varying $\gamma$ between 0 and 1, we record the attack success rate on the victim network and report the transferability to all of the other three transfer networks (average success rate on the transfer networks). We present averaged results over all norm-budgets in Fig. 6 for the four victim networks. One observation is that adding the AE loss with $\gamma > 0$ tends to deteriorate the success rate, even though it improves transferability. We pick $\gamma = 0.25$ in our experiments to balance success and transferability.
+
+# 5.2 Network Sensitivity to Point Cloud Attacks
+
+Fig. 7 plots the sensitivity of the various networks when they are subject to input perturbations of varying norm-budgets $\epsilon_{\infty}$ . We measure the classification accuracy of each network under our AdvPC attack ( $\gamma = 0.25$ ), 3D-Adv [31], and KNN Attack [25]. We observe that DGCNN [29] tends to be the most robust to adversarial perturbations in general. This might be explained by the fact that the convolution neighborhoods in DGCNN are dynamically updated across layers and iterations. This dynamic behavior in network structure may hinder the effect of the attack because gradient directions can change significantly from one iteration to another. This leads to failing attacks and higher robustness for DGCNN [29].
+
+
+Fig. 6: Ablation Study: Studying the effect of changing AdvPC hyperparameter $(\gamma)$ on the success rate of the attack (left) and on its transferability (right). The transferability score reported for each victim network is the average success rate on the transfer networks averaged across all different norm-budgets $\epsilon_{\infty}$ . We note that as $\gamma$ increases, the success rate of the attack on the victim network drops, and the transferability varies with $\gamma$ . We pick $\gamma = 0.25$ in all of our experiments.
+
+Fig. 7: Sensitivity of Architectures: We evaluate the sensitivity of each of the four networks for increasing norm-budget. For each network, we plot the classification accuracy under 3D-Adv perturbation [31] (left), KNN Attack [25] (middle), and our AdvPC attack (right). Overall, DGCNN [29] is affected the least by adversarial perturbation.
+
+
+# 5.3 Effect of the Auto-Encoder (AE)
+
+In Fig. 8, we show an example of how AE reconstruction preserves the details of the unperturbed point cloud and does not change the classifier prediction. When a perturbed point cloud passes through the AE, it recovers a natural-looking shape. The AE's ability to reconstruct natural-looking 3D point clouds from various perturbed inputs might explain why it is a strong defense against attacks in Table 2. Another observation from Fig. 8 is that when we fix the target $t'$ and do not enforce a specific incorrect target $t''$ (i.e. untargeted attack setting) for the data adversarial loss on the reconstructed point cloud in the AdvPC attack (Eq (6)), the optimization tends to pick $t''$ to be a class similar to the correct one. For example, a Toilet point cloud perturbed by AdvPC can be transformed into a Chair (similar in appearance to a toilet) when reconstructed by the AE. This effect is not observed for the other attacks [31,25], which do not consider the
+
+| | unperturbed point cloud | 3D-Adv [31] | KNN [25] | AdvPC (ours) |
+| --- | --- | --- | --- | --- |
+| before AE | PN: Toilet | PN: Bed × | PN: Bed × | PN: Bed × |
+| after AE | PN: Toilet | PN: Toilet | PN: Toilet | PN: Chair × |
+
+Fig. 8: Effect of the Auto-Encoder (AE): The AE does not affect the unperturbed point cloud (classified correctly by PN before and after AE). The AE cleans the point cloud perturbed by 3D-Adv and KNN [31,25], which allows PN to predict the correct class label. However, our AdvPC attack can fool PN before and after AE reconstruction. Samples perturbed by AdvPC, if passed through the AE, transform into similar looking objects from different classes (Chair looks similar to Toilet).
+
+data distribution and optimize solely for the network. For completeness, we tried replacing the AE with other 3D generative models from [1] in our AdvPC attack, and we tried to use the learning approach in [19,20] instead of optimization, but the attack success was less than satisfactory in both cases (refer to supplement).
+
+# 6 Conclusions
+
+In this paper, we propose a new adversarial attack for 3D point clouds that utilizes a data adversarial loss to formulate network-transferable perturbations. Our attacks achieve better transferability to four popular point cloud networks than other 3D attacks, and they are more resilient against popular defenses. Future work would extend this attack to other 3D deep learning tasks, such as detection and segmentation, and integrate it into a robust training framework for point cloud networks.
+
+Acknowledgments. This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research under Award No. RGC/3/3570-01-01.
+
+# References
+
+1. Achlioptas, P., Diamanti, O., Mitliagkas, I., Guibas, L.: Learning representations and generative models for 3d point clouds. International Conference on Machine Learning (ICML) (2018)
+2. Alcorn, M.A., Li, Q., Gong, Z., Wang, C., Mai, L., Ku, W.S., Nguyen, A.: Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+
+3. Cao, Y., Xiao, C., Yang, D., Fang, J., Yang, R., Liu, M., Li, B.: Adversarial objects against lidar-based autonomous driving systems. CoRR abs/1907.05418 (2019)
+4. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (SP) (2017)
+5. Engelmann, F., Kontogianni, T., Hermans, A., Leibe, B.: Exploring spatial context for 3d semantic segmentation of point clouds. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). pp. 716-724 (Oct 2017)
+6. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR) (2015)
+7. Hamdi, A., Ghanem, B.: Towards analyzing semantic robustness of deep neural networks. CoRR abs/1904.04621 (2019)
+8. Hamdi, A., Muller, M., Ghanem, B.: SADA: semantic adversarial diagnostic attacks for autonomous applications. In: AAAI Conference on Artificial Intelligence (2020)
+9. Huang, Q., Wang, W., Neumann, U.: Recurrent slice networks for 3d segmentation of point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2626-2635 (2018)
+10. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014)
+11. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial machine learning at scale. CoRR abs/1611.01236 (2016)
+12. Landrieu, L., Boussaha, M.: Point cloud oversegmentation with graph-structured deep metric learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7440-7449 (2019)
+13. Landrieu, L., Simonovsky, M.: Large-scale point cloud semantic segmentation with superpoint graphs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4558-4567 (2018)
+14. Li, J., Chen, B.M., Hee Lee, G.: So-net: Self-organizing network for point cloud analysis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 9397-9406 (2018)
+15. Li, Y., Bu, R., Sun, M., Wu, W., Di, X., Chen, B.: Pointcnn: Convolution on x-transformed points. In: Advances in neural information processing systems (NIPS). pp. 820-830 (2018)
+16. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations (ICLR) (2018)
+17. Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+18. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: A simple and accurate method to fool deep neural networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
+19. Naseer, M.M., Khan, S.H., Khan, M.H., Shahbaz Khan, F., Porikli, F.: Cross-domain transferability of adversarial perturbations. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 12905-12915 (2019)
+20. Poursaeed, O., Katsman, I., Gao, B., Belongie, S.: Generative adversarial perturbations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4422-4431 (2018)
+21. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 652-660 (2017)
+
+22. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems (NIPS). pp. 5099-5108 (2017)
+23. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. CoRR abs/1312.6199 (2013)
+24. Tatarchenko, M., Park, J., Koltun, V., Zhou, Q.Y.: Tangent convolutions for dense prediction in 3d. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3887-3896 (2018)
+25. Tsai, T., Yang, K., Ho, T.Y., Jin, Y.: Robust adversarial objects against deep learning models. In: AAAI Conference on Artificial Intelligence (2020)
+26. Tu, C.C., Ting, P., Chen, P.Y., Liu, S., Zhang, H., Yi, J., Hsieh, C.J., Cheng, S.M.: Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 742-749 (2019)
+27. Tu, J., Ren, M., Manivasagam, S., Liang, M., Yang, B., Du, R., Cheng, F., Urtasun, R.: Physically realizable adversarial examples for lidar object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 13716-13725 (2020)
+28. Wang, W., Yu, R., Huang, Q., Neumann, U.: Sgpn: Similarity group proposal network for 3d point cloud instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2569-2578 (2018)
+29. Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG) (2019)
+30. Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., Xiao, J.: 3d shapenets: A deep representation for volumetric shapes. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1912-1920 (2015)
+31. Xiang, C., Qi, C.R., Li, B.: Generating 3d adversarial point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 9136-9144 (2019)
+32. Xiao, C., Yang, D., Li, B., Deng, J., Liu, M.: Meshadv: Adversarial meshes for visual recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6898-6907 (2019)
+33. Ye, X., Li, J., Huang, H., Du, L., Zhang, X.: 3d recurrent neural networks with context fusion for point cloud semantic segmentation. In: European Conference on Computer Vision (ECCV). pp. 415-430. Springer (2018)
+34. Yu, L., Li, X., Fu, C.W., Cohen-Or, D., Heng, P.A.: Pu-net: Point cloud upsampling network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+35. Zeng, X., Liu, C., Wang, Y.S., Qiu, W., Xie, L., Tai, Y.W., Tang, C.K., Yuille, A.L.: Adversarial attacks beyond the image space. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+36. Zhao, Z., Dua, D., Singh, S.: Generating natural adversarial examples. In: International Conference on Learning Representations (ICLR) (2018)
+37. Zheng, T., Chen, C., Yuan, J., Li, B., Ren, K.: Pointcloud saliency maps. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
+38. Zhou, H., Chen, K., Zhang, W., Fang, H., Zhou, W., Yu, N.: Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
\ No newline at end of file
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/images.zip b/advpctransferableadversarialperturbationson3dpointclouds/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..57b2be3d4c61d2529e661983a778c20aa7fc01da
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f854d222ed9c4ed75defcb5b8b8aa86af1382130e1888e172cd97b547997d24
+size 535282
diff --git a/advpctransferableadversarialperturbationson3dpointclouds/layout.json b/advpctransferableadversarialperturbationson3dpointclouds/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..41b6c4ca3a60490ce8c92838cefed86e071a1cdd
--- /dev/null
+++ b/advpctransferableadversarialperturbationson3dpointclouds/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:928afda1ddaeb551c78cc369ec59c787332b869c6a985d346325063ff9f925f7
+size 451936
diff --git a/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_content_list.json b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b4df6214acc8670df1a00917b67590ba69650eca
--- /dev/null
+++ b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:435f383da4509d565ae24cc614ab71a05399a4351fc47d73eb03f936aad2b963
+size 77391
diff --git a/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_model.json b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b2e8b37a1233552504b31415d3bc1de1eb3dce99
--- /dev/null
+++ b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14e69b9572777f37a7c09b891ce5301f8102020f68eea4d49b34ffd12becf562
+size 97966
diff --git a/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_origin.pdf b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..68f1dfa5bf96c08431d94b5f7eb3bfdc38cb2c0e
--- /dev/null
+++ b/airattentionwithreasoningcapability/ae3c0401-7ac6-4961-b3de-5d031c7777c3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04bc94ce988a052a5778c79499e62191e1c464c056577c565de4be7a6cb2f5c0
+size 1367689
diff --git a/airattentionwithreasoningcapability/full.md b/airattentionwithreasoningcapability/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..020ae1b2767ec0144f766965a696aabc83efa327
--- /dev/null
+++ b/airattentionwithreasoningcapability/full.md
@@ -0,0 +1,289 @@
+# AiR: Attention with Reasoning Capability
+
+Shi Chen\*[0000-0002-3749-4767], Ming Jiang\*[0000-0001-6439-5476], Jinhui Yang[0000-0001-8322-1121], and Qi Zhao[0000-0003-3054-8934]
+
+University of Minnesota, Minneapolis MN 55455, USA
+
+{chen4595,mjiang,yang7004,qzhao}@umn.edu
+
+Abstract. While attention has been an increasingly popular component in deep neural networks to both interpret and boost performance of models, little work has examined how attention progresses to accomplish a task and whether it is reasonable. In this work, we propose an Attention with Reasoning capability (AiR) framework that uses attention to understand and improve the process leading to task outcomes. We first define an evaluation metric based on a sequence of atomic reasoning operations, enabling quantitative measurement of attention that considers the reasoning process. We then collect human eye-tracking and answer correctness data, and analyze various machine and human attentions on their reasoning capability and how they impact task performance. Furthermore, we propose a supervision method to jointly and progressively optimize attention, reasoning, and task performance so that models learn to look at regions of interest by following a reasoning process. We demonstrate the effectiveness of the proposed framework in analyzing and modeling attention with better reasoning capability and task performance. The code and data are available at https://github.com/szzexpoi/AiR.
+
+Keywords: Attention, Reasoning, Eye-Tracking Dataset
+
+# 1 Introduction
+
+Recent progress in deep neural networks (DNNs) has resulted in models with significant performance gains in many tasks. Attention, as an information selection mechanism, has been widely used in various DNN models to improve their ability to localize important parts of the inputs, as well as their task performance. It also enables fine-grained analysis and understanding of black-box DNN models by highlighting important information in their decision-making. Recent studies explored different machine attentions and showed varied degrees of agreement with the regions humans consider important in various vision tasks, such as captioning [15, 34] and visual question answering (VQA) [7].
+
+Similar to humans who look and reason actively and iteratively to perform a visual task, attention and reasoning are two intertwined mechanisms underlying the decision-making process. As shown in Fig. 1, answering the question requires humans or machines to make a sequence of decisions based on the relevant regions
+
+of interest (ROIs) (i.e., to sequentially look for the jeans, the girl wearing the jeans, and the bag to the left of the girl). Guiding attention to explicitly look for these objects following the reasoning process has the potential to improve both interpretability and performance of a computer vision model.
+
+
+Fig. 1: Attention is an essential mechanism that affects task performance in visual question answering. For the question "Is there a bag to the left of the girl that is wearing jeans?", the reasoning process consists of three operations: 1. select(jeans); 2. relate(girl, wearing, jeans); 3. relate(bag, to the left of, girl). Eye fixation maps of humans suggest that people who answer correctly (Answer: yes) look at the most relevant ROIs in the reasoning process (i.e., jeans, girl, and bag), while incorrect answers (Answer: no) are caused by misdirected attention
+
+To understand the roles of visual attention in the visual reasoning context, and leverage it for model development, we propose an integrated Attention with Reasoning capability (AiR) framework. It represents the visual reasoning process as a sequence of atomic operations each with specific ROIs, defines a metric and proposes a supervision method that enables the quantitative evaluation and guidance of attentions based on the intermediate steps of the visual reasoning process. A new eye-tracking dataset is collected to support the understanding of human visual attention during the visual reasoning process, and is also used as a baseline for studying machine attention. This framework is a useful toolkit for research in visual attention and its interaction with visual reasoning.
+
+Our work has three distinctions from previous attention evaluation [7, 18, 19, 27] and supervision [29, 30, 45] methods: (1) We go beyond the existing evaluation methods that are either qualitative or focused only on the alignment with outputs, and propose a measure that encodes the progressive attention and reasoning defined by a set of atomic operations. (2) We emphasize the tight correlation between attention, reasoning, and task performance, conducting fine-grained analyses of the proposed method with various types of attention, and incorporating attention with the reasoning process to enhance model interpretability and performance. (3) Our new dataset with human eye movements and answer correctness enables more accurate evaluation and diagnosis of attention.
+
+To summarize, the proposed framework makes the following contributions:
+
+1. A new quantitative evaluation metric (AiR-E) to measure attention in the reasoning context, based on a set of constructed atomic reasoning operations.
+2. A supervision method (AiR-M) to progressively optimize attention throughout the entire reasoning process.
+3. An eye-tracking dataset (AiR-D) featuring high-quality attention and reasoning labels as well as ground truth answer correctness.
+
+4. Extensive analyses of various machine and human attention with respect to reasoning capability and task performance. Multiple factors of machine attention have been examined and discussed. Experiments show the importance of progressive supervision on both attention and task performance.
+
+# 2 Related Works
+
+This paper is most closely related to prior studies on the evaluation of attention in visual question answering (VQA) [7, 18, 19, 27]. In particular, the pioneering work by Das et al. [7] is the only one that collected human attention data on a VQA dataset and compared it with machine attention, showing considerable discrepancies in the attention maps. Our proposed study highlights several distinctions from related works: (1) Instead of only considering one-step attention and its alignment with a single ground-truth map, we propose to integrate attention with progressive reasoning that involves a sequence of operations, each related to different objects. (2) While most VQA studies assume human answers to be accurate, this is not always the case [39]. We collect ground truth correctness labels to examine the effects of attention and reasoning on task performance. (3) The only available dataset [7], with post-hoc attention annotation collected on blurry images using a "bubble-like" paradigm and crowdsourcing, may not accurately reflect the actual attention of the task performers [33]. Our work addresses these limitations by using on-site eye-tracking data and QA annotations collected from the same participants. (4) Das et al. [7] compared only spatial attention with human attention. Since recent studies [18, 27] suggest that attentions based on object proposals are more semantically meaningful, we conduct the first quantitative and principled evaluation of object-based attentions.
+
+This paper also presents a progressive supervision approach for attention, which is related to recent efforts on improving attention accuracy with explicit supervision. Several studies use different sources of attention ground truth, such as human attention [30], adversarial learning [29], and objects mined from textual descriptions [45], to explicitly supervise the learning of attention. Similar to the evaluation studies introduced above, these attention supervision studies consider attention only as a single-output mechanism, ignoring the progressive nature of the attention process and whether its intermediate steps are reasonable. As a result, they fall short of acquiring sufficient information from intermediate steps. Our work addresses these challenges with joint prediction of the reasoning operations and the desired attentions along the entire decision-making process.
+
+Our work is also related to a collection of datasets for eye tracking and visual reasoning. Eye-tracking data has been collected to study passive exploration [1, 5, 10, 22, 25, 37] as well as task-guided attention [1, 9, 25]. Aside from the less accurate, post-hoc mouse-clicking approximation [7], no eye-tracking data has been recorded from human participants performing VQA tasks. To facilitate the analysis of human attention in VQA tasks, we construct the first dataset of eye-tracking data collected from humans performing the VQA tasks. A number of visual reasoning datasets [3, 13, 18, 20, 32, 44] are collected in the form of VQA.
+
+Some are annotated with human-generated questions and answers [3, 32], while others are developed with synthetic scenes and rule-based templates to remove the subjectiveness of human answers and language biases [13, 18, 20, 44]. The one most closely related to this work is GQA [18], which offers naturalistic images annotated with scene graphs and synthetic question-answer pairs. With balanced questions and answers, it reduces the language bias without compromising generality. These data efforts benefit the development of various visual reasoning models [2, 8, 11, 16, 24, 28, 31, 36, 40-43]. In this work, we use a selection of GQA data and annotations in the development of the proposed framework.
+
+# 3 Method
+
+Real-life vision tasks require looking and reasoning interactively. This section presents a principled framework to study attention in the reasoning context. It consists of three novel components: (1) a quantitative measure to evaluate attention accuracy in the reasoning context, (2) a progressive supervision method for models to learn where to look throughout the reasoning process, and (3) an eye-tracking dataset featuring human eye-tracking and answer correctness data.
+
+# 3.1 Attention with Reasoning Capability
+
+To model attention as a process and examine its reasoning capability, we describe reasoning as a sequence of atomic operations. Following the sequence, an intelligent agent progressively attends to the key ROIs at each step and reasons about what to do next until eventually making a final decision. Successful decision-making relies on accurate attention for the various reasoning operations, so that the most important information is not filtered out but passed along to the final step.
+
+To represent the reasoning process and obtain the corresponding ROIs, we define a vocabulary of atomic operations emphasizing the role of attention. These operations are grounded in the 127 operation types of GQA [18], which completely cover all of its questions. As described in Table 1, some operations require attention to a specific object (query, verify); some require attention to objects of the same category (select), attribute (filter), or relationship (relate);
+
+Table 1: Semantic operations of the reasoning process
+
+| Operation | Semantic |
+| --- | --- |
+| Select | Searching for objects from a specific category. |
+| Filter | Determining the targeted objects by looking for a specific attribute. |
+| Query | Retrieving the value of a specific attribute from the ROIs. |
+| Verify | Examining the targeted objects and checking if they have a given attribute. |
+| Compare | Comparing the values of an attribute between multiple objects. |
+| Relate | Connecting different objects through their relationships. |
+| And/Or | Serving as basic logical operations that combine the results of the previous operation(s). |
+
+
+Fig. 2: AiR-E scores of Correct and Incorrect human attention maps, measuring their alignments with the bounding boxes of the ROIs
+
+and others require attention to any (or) or all (and, compare) ROIs from the previous operations. The ROIs of each operation are jointly determined by the type of operation and the scene information (i.e., object categories, attributes and relationships). Given the operation sequence and annotated scene information, we can traverse the reasoning process, starting with all objects in the scene, and sequentially apply the operations to obtain the ROIs at each step. Details of this method are described in the supplementary materials.
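The traversal described above can be sketched as follows. This is a minimal illustration over a toy scene-graph representation; the data structures and function names (`apply_op`, `traverse`) are ours and only approximate the semantics of a few GQA operations, not the paper's actual implementation.

```python
# Sketch: traverse a reasoning program over an annotated scene to obtain
# the ROIs (object ids) attended at each step. Illustrative, simplified.

def apply_op(scene, op, args, prev_rois):
    """Return the object ids attended by one atomic operation."""
    if op == "select":          # all objects of a given category
        return {i for i, o in scene.items() if o["category"] == args[0]}
    if op == "filter":          # keep previous ROIs carrying an attribute
        return {i for i in prev_rois if args[0] in scene[i]["attributes"]}
    if op == "relate":          # objects of a category related to previous ROIs
        cat, rel = args
        return {i for i, o in scene.items()
                if o["category"] == cat
                and any((rel, j) in o["relations"] for j in prev_rois)}
    return prev_rois            # e.g., query/verify keep the current ROIs

def traverse(scene, program):
    rois, trace = set(scene), []
    for op, args in program:    # start from all objects, apply ops in order
        rois = apply_op(scene, op, args, rois)
        trace.append(rois)
    return trace

# Toy scene for the question in Fig. 1
scene = {
    0: {"category": "jeans", "attributes": [], "relations": []},
    1: {"category": "girl",  "attributes": [], "relations": [("wearing", 0)]},
    2: {"category": "bag",   "attributes": [], "relations": [("to the left of", 1)]},
}
program = [("select", ["jeans"]),
           ("relate", ["girl", "wearing"]),
           ("relate", ["bag", "to the left of"])]
print(traverse(scene, program))  # [{0}, {1}, {2}]
```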
+
+# 3.2 Measuring Attention Accuracy with ROIs
+
+Decomposing the reasoning process into a sequence of operations allows us to evaluate the quality of attention (both machine and human) according to its alignment with the ROIs at each operation. Attention can be represented as a 2D probability map whose values indicate the importance of the corresponding input pixels. To quantitatively evaluate attention accuracy in the reasoning context, we propose the AiR-E metric, which measures the alignment of attention maps with the ROIs relevant to reasoning. As shown in Fig. 2, for humans, a better attention map leading to the correct answer has higher AiR-E scores, while the incorrect attention with lower scores fails to focus on the most important object (i.e., car). This suggests a potential correlation between AiR-E and task performance. The specific definition of AiR-E is introduced as follows:
+
+Inspired by the Normalized Scanpath Saliency [6] (NSS), given an attention map $A(x)$ where each value represents the importance of a pixel $x$, we first standardize the attention map into $A^{*}(x) = (A(x) - \mu) / \sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of the attention values in $A(x)$, respectively. For each ROI, we compute AiR-E as the average of $A^{*}(x)$ inside its bounding box $B$: $\mathrm{AiR\text{-}E}(B) = \sum_{x\in B}A^{*}(x) / |B|$. Finally, we aggregate the AiR-E of all ROIs for each reasoning step:
+
+1. For operations with one set of ROIs (i.e., select, query, verify, and filter), as well as or that requires attention to one of multiple sets of ROIs, an accurate attention map should align well with at least one ROI. Therefore, the aggregated AiR-E score is the maximum AiR-E of all ROIs.
+
+2. For those with multiple sets of ROIs (i.e., relate, compare, and), we compute the aggregated AiR-E for each set, and take the mean across all sets.
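The metric can be sketched in a few lines of NumPy. The aggregation below (maximum over the ROIs of a set, mean of per-set maxima for multi-set operations) follows the two rules above; the function names are illustrative, not from the authors' code.

```python
import numpy as np

# Sketch of AiR-E: standardize the attention map (as in NSS), average it
# inside each ROI's bounding box, then aggregate per reasoning step.

def air_e(attention, box):
    """Mean standardized attention inside a bounding box (x0, y0, x1, y1)."""
    a = (attention - attention.mean()) / attention.std()
    x0, y0, x1, y1 = box
    return a[y0:y1, x0:x1].mean()

def aggregate(attention, roi_sets, multi_set=False):
    """AiR-E of one reasoning step given one or more sets of ROI boxes."""
    per_set = [max(air_e(attention, b) for b in s) for s in roi_sets]
    # single-set ops (select, query, verify, filter) and `or`: max over ROIs;
    # multi-set ops (relate, compare, and): mean of the per-set maxima
    return float(np.mean(per_set)) if multi_set else max(per_set)

att = np.zeros((256, 256))
att[100:140, 100:140] = 1.0                          # attention on one region
on_roi = aggregate(att, [[(100, 100, 140, 140)]])    # aligned ROI
off_roi = aggregate(att, [[(0, 0, 40, 40)]])         # misaligned ROI
assert on_roi > off_roi                              # alignment scores higher
```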
+
+# 3.3 Reasoning-aware Attention Supervision
+
+For models to learn where to look along the reasoning process, we propose a reasoning-aware attention supervision method (AiR-M) to guide models to progressively look at relevant places following each reasoning operation. Different from previous attention supervision methods [29, 30, 45], the AiR-M method considers the attention throughout the reasoning process and jointly supervises the prediction of reasoning operations and ROIs across the sequence of multiple reasoning steps. Integrating attention with reasoning allows models to accurately capture ROIs along the entire reasoning process for deriving the correct answers.
+
+The proposed method has two major distinctions: (1) integrating attention progressively throughout the entire reasoning process and (2) joint supervision on attention, reasoning operations and answer correctness. Specifically, following the reasoning decomposition discussed in Section 3.1, at the $t$ -th reasoning step, the proposed method predicts the reasoning operation $\boldsymbol{r}_t$ , and generates an attention map $\alpha_t$ to predict the ROIs. With the joint prediction, models learn desirable attentions for capturing the ROIs throughout the reasoning process and deriving the answer. The predicted operations and the attentions are supervised together with the prediction of answers:
+
+$$
+L = L_{ans} + \theta \sum_{t} L_{\alpha_t} + \phi \sum_{t} L_{r_t} \tag{1}
+$$
+
+where $\theta$ and $\phi$ are hyperparameters. We use the standard cross-entropy loss $L_{ans}$ and $L_{\pmb{r}_t}$ to supervise the answer and operation prediction, and a Kullback-Leibler divergence loss $L_{\alpha_t}$ to supervise the attention prediction. We aggregate the loss for operation and attention predictions over all reasoning steps.
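A minimal sketch of the combined objective in Eq. (1), using cross-entropy for the answer and operation terms and a KL divergence for each step's attention term. This is a plain NumPy illustration of the loss arithmetic, not the paper's training code:

```python
import numpy as np

def cross_entropy(p_pred, label):
    """Cross-entropy of a predicted distribution against a class label."""
    return -np.log(p_pred[label] + 1e-12)

def kl_div(p_true, p_pred):
    """Kullback-Leibler divergence KL(p_true || p_pred)."""
    eps = 1e-12
    return float(np.sum(p_true * np.log((p_true + eps) / (p_pred + eps))))

def air_m_loss(ans_pred, ans_label, att_preds, att_targets,
               op_preds, op_labels, theta=1.0, phi=1.0):
    """L = L_ans + theta * sum_t L_att_t + phi * sum_t L_op_t (Eq. 1)."""
    l_ans = cross_entropy(ans_pred, ans_label)
    l_att = sum(kl_div(t, p) for t, p in zip(att_targets, att_preds))
    l_op = sum(cross_entropy(p, l) for p, l in zip(op_preds, op_labels))
    return l_ans + theta * l_att + phi * l_op

u = np.ones(4) / 4   # uniform attention over 4 regions (toy target)
perfect = air_m_loss(np.array([0.0, 1.0, 0.0]), 1, [u], [u],
                     [np.array([1.0, 0.0])], [0])
print(perfect)       # essentially 0 when every prediction matches its target
```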
+
+The proposed AiR-M supervision method is general, and can be applied to various models with attention mechanisms. In the supplementary materials, we illustrate the implementation details for integrating AiR-M with different state-of-the-art models used in our experiments.
+
+# 3.4 Evaluation Benchmark and Human Attention Baseline
+
+Previous attention data, collected under passive image viewing [21], approximated with post-hoc mouse clicks [7], or derived from visually grounded answers [19], may not accurately or completely reflect human attention in the reasoning process. These datasets also do not explicitly verify the correctness of human answers. To demonstrate the effectiveness of the proposed evaluation metric and supervision method, and to provide a benchmark for attention evaluation, we construct the first eye-tracking dataset for VQA. For the first time, it enables a step-by-step comparison of how humans and machines allocate attention during visual reasoning.
+
+Specifically, we (1) select images and questions that require humans to actively look and reason; (2) remove ambiguous or ill-formed questions and verify the ground truth answer to be correct and unique; (3) collect eye-tracking data and answers from the same human participants, and evaluate their correctness with the ground-truth answers.
+
+Images and questions. Our images and questions are selected from the balanced validation set of GQA [18]. Since the questions of the GQA dataset are automatically generated from a number of templates based on scene graphs [26], their quality may not be sufficiently high: some questions are too trivial or too ambiguous. Therefore, we perform automated and manual screenings to control question quality. First, to avoid trivial questions, all images and questions are screened with these criteria: (1) image resolution is at least $320 \times 320$ pixels; (2) the image scene graph consists of at least 16 relationships; (3) the total area of question-related objects does not exceed $4\%$ of the image. Next, one of the authors manually selects 987 images and 1,422 questions to ensure that the ground-truth answers are accurate and unique. The selected questions are non-trivial and free of ambiguity, requiring close attention to the scene and an active search for the answer.
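The automated screening step can be sketched as a simple predicate. The thresholds come from the criteria above; the function signature and data layout are illustrative assumptions, not the authors' tooling:

```python
# Sketch of the automated screening criteria applied before manual selection.
# Boxes are (x0, y0, x1, y1) in pixels; the data structures are placeholders.

def passes_screening(image_w, image_h, num_relationships, object_boxes):
    """Keep a question only if the image and its ROIs meet all criteria."""
    if image_w < 320 or image_h < 320:           # resolution >= 320 x 320
        return False
    if num_relationships < 16:                   # scene graph >= 16 relations
        return False
    roi_area = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in object_boxes)
    return roi_area <= 0.04 * image_w * image_h  # ROIs cover <= 4% of image

assert passes_screening(640, 480, 20, [(0, 0, 50, 50)])        # kept
assert not passes_screening(640, 480, 20, [(0, 0, 640, 480)])  # ROIs too large
```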
+
+Eye-tracking experiment. The eye-tracking data are collected from 20 paid participants (16 males and 4 females, aged 18 to 38). They are asked to wear a Vive Pro Eye headset with an integrated eye-tracker and to answer questions about images presented in a customized Unity interface. The questions are randomly grouped into 18 blocks, each shown in a 20-minute session. The eye-tracker is calibrated at the beginning of each session. During each trial, a question is first presented, and the participant is given unlimited time to read and understand it. The participant presses a controller button to start viewing the image, which is presented at the center for 3 seconds and scaled such that both its height and width occupy 30 degrees of visual angle (DVA). After that, the question is shown again and the participant is instructed to provide an answer, which is recorded by the experimenter. The participant presses another button to proceed to the next trial.
+
+Human attention maps and performances. Eye fixations are extracted from the raw data using the Cluster Fix algorithm [23], and a fixation map is computed for each question by aggregating the fixations from all participants. The fixation maps are scaled to $256 \times 256$ pixels, smoothed with a Gaussian kernel ($\sigma = 9$ pixels, $\approx 1$ DVA), and normalized to the range [0, 1]. The overall accuracy of human answers is $77.64 \pm 24.55\%$ (M±SD). A total of 479 questions have consistently correct answers, and 934 have both correct and incorrect answers. The histogram of human answer accuracy is shown in Fig. 3a. We further separate the fixations into two groups based on answer correctness and compute a fixation map for each group. Correct and incorrect answers have comparable numbers of fixations per trial (10.12 vs. 10.27), while the numbers of fixations for the correct answers have a lower standard deviation across trials (0.99 vs. 1.54). Fig. 3b shows the prior distributions of the two groups of fixations, and their high similarity (Pearson's $r = 0.997$) suggests that answer correctness is independent of center bias. The correct and incorrect fixation maps are used as two human attention baselines to compare with machine attentions, and also serve to validate the effectiveness of the proposed AiR-E metric. More illustration is provided in the supplementary video.
+
+Fig. 3: Distributions of answer accuracy and eye fixations of humans. (a) Histogram of human answer accuracy. (b) Center biases of the correct and incorrect attention
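The fixation-map construction described in this subsection can be sketched as follows; `fixation_map` is an illustrative helper, assuming fixation coordinates are already in the 256×256 map space:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: accumulate fixation locations into a 256x256 map, smooth with a
# Gaussian kernel (sigma = 9 pixels, roughly 1 DVA), normalize to [0, 1].

def fixation_map(fixations, size=256, sigma=9):
    fmap = np.zeros((size, size))
    for x, y in fixations:                 # accumulate fixation counts
        fmap[y, x] += 1.0
    fmap = gaussian_filter(fmap, sigma=sigma)
    return fmap / fmap.max()               # normalize to [0, 1]

m = fixation_map([(128, 128), (130, 126), (40, 200)])
```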
+
+# 4 Experiments and Analyses
+
+In this section, we conduct experiments and analyze various attention mechanisms of humans and machines. Our experiments aim to shed light on the following questions that have yet to be answered:
+
+1. Do machines or humans look at places relevant to the reasoning process? How does the attention process influence task performances? (Section 4.1)
+2. How does attention accuracy evolve over time, and how does it correlate with the reasoning process? (Section 4.2)
+3. Does guiding models to look at places progressively following the reasoning process help? (Section 4.3)
+
+# 4.1 Do Machines or Humans Look at Places Important to Reasoning? How Does Attention Influence Task Performances?
+
+First, we measure attention accuracy throughout the reasoning process with the proposed AiR-E metric. Answer correctness is also compared, and its correlation with attention accuracy reveals the joint influence of attention and reasoning operations on task performance. With these experiments, we observe that humans attend more accurately than machines, and that the correlation between attention accuracy and task performance depends on the reasoning operations.
+
+We evaluate four types of attentions that are commonly used in VQA models, including spatial soft attention (S-Soft), spatial Transformer attention (S-Trans), object-based soft attention (O-Soft), and object-based Transformer attention (O-Trans). Spatial and object-based attentions differ in their inputs (i.e., image features or regional features), while soft and Transformer attention methods differ in how attention is computed (i.e., with convolutional layers or matrix multiplication). We use spatial features extracted from ResNet-101 [14] and object-based features from [2] as the two types of inputs, and follow the implementations of [2] and [12] for the soft attention [38] and Transformer attention [35] computation, respectively. We integrate the aforementioned attentions with different state-of-the-art VQA models as backbones. Our observations are general and consistent across various backbones. In the following sections, we use the results on UpDown [2] for illustration (results for the other backbones are provided in the supplementary materials). For human attentions, we denote the fixation maps associated with correct and incorrect answers as H-Cor and H-Inc, and the aggregated fixation map regardless of correctness is denoted as H-Tot. Fig. 4 presents examples of ROIs for different reasoning operations and the compared attention maps.
+
+Fig. 4: Example question-answer pairs (column 1), images (column 2), ROIs at each reasoning step (columns 3-5), and attention maps (columns 6-11). The example questions are: "Do the tank top and the street sign have the same color?" (A: no); "Is the white bowl to the left of the bottle?" (A: yes); "What color is the table the vase is below?" (A: brown); "Is there a bag to the left of the girl that is wearing jeans?" (A: yes); "Are there both a bag and a woman in this scene?" (A: yes)
+
+Attention accuracy and task performance of humans and models. Table 2 quantitatively compares the AiR-E scores and VQA task performance across humans and models with different types of attention. The task performance for models is the classification score of the correct answer, while the task performance for humans is the proportion of correct answers. Three clear gaps can be observed from the table: (1) Humans who answer correctly have significantly higher AiR-E scores than those who answer incorrectly. (2) Humans consistently outperform models in both attention and task performance. (3) Object-based attentions attend much more accurately than spatial attentions. The low AiR-E of spatial attentions confirms the previous conclusion drawn from the VQA-HAT dataset [7]. By constraining the visual inputs to a set of semantically meaningful objects, object-based attention typically increases the probability of attending to the correct ROIs. Between the two object-based attentions, the soft attention slightly outperforms its Transformer counterpart. Since the Transformer attentions explicitly learn the inter-object relationships, they perform better for logical operations (i.e., and, or). However, due to the complexity of the scenes and the fewer parameters used [35], they do not perform as well as soft attention. The ranks of the different attentions are consistent with intuition and the literature, suggesting the effectiveness of the proposed AiR-E metric.
+
+Table 2: Quantitative evaluation of AiR-E scores and task performance
+
+| Metric | Attention | and | compare | filter | or | query | relate | select | verify |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| AiR-E | H-Tot | 2.197 | 2.669 | 2.810 | 2.429 | 3.951 | 3.516 | 2.913 | 3.629 |
+| AiR-E | H-Cor | 2.258 | 2.717 | 2.925 | 2.529 | 4.169 | 3.581 | 2.954 | 3.580 |
+| AiR-E | H-Inc | 1.542 | 1.856 | 1.763 | 1.363 | 2.032 | 2.380 | 1.980 | 2.512 |
+| AiR-E | O-Soft | 1.334 | 1.204 | 1.518 | 1.857 | 3.241 | 2.243 | 1.586 | 2.091 |
+| AiR-E | O-Trans | 1.579 | 1.046 | 1.202 | 1.910 | 3.041 | 1.839 | 1.324 | 2.228 |
+| AiR-E | S-Soft | -0.001 | -0.110 | 0.251 | 0.413 | 0.725 | 0.305 | 0.145 | 0.136 |
+| AiR-E | S-Trans | 0.060 | -0.172 | 0.243 | 0.343 | 0.718 | 0.370 | 0.173 | 0.101 |
+| Accuracy | H-Tot | 0.700 | 0.625 | 0.668 | 0.732 | 0.633 | 0.672 | 0.670 | 0.707 |
+| Accuracy | O-Soft | 0.604 | 0.547 | 0.603 | 0.809 | 0.287 | 0.483 | 0.548 | 0.605 |
+| Accuracy | O-Trans | 0.606 | 0.536 | 0.608 | 0.832 | 0.282 | 0.487 | 0.550 | 0.592 |
+| Accuracy | S-Soft | 0.592 | 0.520 | 0.558 | 0.814 | 0.203 | 0.427 | 0.511 | 0.544 |
+| Accuracy | S-Trans | 0.597 | 0.525 | 0.557 | 0.811 | 0.211 | 0.435 | 0.517 | 0.607 |
+
+Table 3: Pearson's $r$ between attention accuracy (AiR-E) and task performance. Bold numbers indicate significant positive correlations (p<0.05)
+
+| Attention | and | compare | filter | or | query | relate | select | verify |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| H-Tot | 0.205 | 0.329 | 0.051 | 0.176 | 0.282 | 0.210 | 0.134 | 0.270 |
+| O-Soft | 0.167 | 0.217 | -0.022 | 0.059 | 0.331 | 0.058 | 0.003 | 0.121 |
+| O-Trans | 0.168 | 0.205 | 0.090 | 0.174 | 0.298 | 0.041 | 0.063 | -0.027 |
+| S-Soft | 0.177 | 0.237 | -0.084 | 0.082 | -0.017 | -0.170 | -0.084 | 0.066 |
+| S-Trans | 0.171 | 0.210 | -0.152 | 0.086 | -0.024 | -0.139 | -0.100 | 0.270 |
+
+Attention accuracy and task performance among different reasoning operations. Comparing the different operations, Table 2 shows that query is the most challenging operation for models: even with the highest attention accuracy among all operations, the task performance is the lowest. This is probably due to the inferior recognition capability of models compared with humans. For humans, compare is the most challenging in terms of task performance, largely because it often appears in complex questions that require close attention to multiple objects and thus longer processing time. Since models can process multiple input objects in parallel, their performance is not strongly influenced by the number of objects to look at.
+
+Correlation between attention accuracy and task performance. The similar rankings of AiR-E and task performance suggest a correlation between attention accuracy and task performance. To further investigate this correlation on a sample basis, for each attention and operation, we compute the Pearson's $r$ between the attention accuracy and task performance across different questions.
+
+As shown in Table 3, human attention accuracy and task performance are correlated for most of the operations (up to $r = 0.329$). The correlation is higher than for most of the compared machine attentions, suggesting that humans' task performance is more consistent with their attention quality. In contrast, though commonly referred to as an interface for interpreting models' decisions [7, 19, 27], spatial attention maps do not reflect the decision-making process of models: they typically have very low or even negative correlations (e.g., relate, select). By limiting the visual inputs to foreground objects, object-based attentions achieve higher attention-answer correlations.
+
+The differences in correlation between operations are also notable. For the questions requiring focused attention to answer (i.e., those with query and compare operations), the correlations are relatively higher than for the others.
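The per-question correlation analysis behind Table 3 can be sketched with `scipy.stats.pearsonr`; the data below is synthetic and for illustration only:

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of Table 3's analysis: correlate per-question attention accuracy
# (AiR-E) with task performance across questions. Numbers are synthetic.

rng = np.random.default_rng(0)
air_e_scores = rng.normal(2.0, 1.0, size=100)     # AiR-E per question
noise = rng.normal(0.0, 0.5, size=100)
performance = 0.3 * air_e_scores + noise          # correlated outcome

r, p = pearsonr(air_e_scores, performance)
assert p < 0.05  # the synthetic correlation is significant by construction
```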
+
+# 4.2 How Does Attention Accuracy Evolve Throughout the Reasoning Process?
+
+To complement our previous analysis of the spatial allocation of attention, we now analyze its spatiotemporal alignment. Specifically, we analyze the AiR-E scores according to the chronological order of reasoning operations. Fig. 5a shows that the AiR-E scores peak at the $3^{rd}$ or $4^{th}$ step, suggesting that human and machine attentions focus more on the ROIs closely related to the final task outcome than on those of the earlier steps. In the rest of this section, we focus our analysis on the spatiotemporal alignment between multiple attention maps and the ROIs at different reasoning steps. In particular, we study the change of human attention over time and compare it with multi-glimpse machine attentions. Our analysis reveals a significant spatiotemporal discrepancy between human and machine attentions.
+
+Do human attentions follow the reasoning process? First, to analyze the spatiotemporal deployment of human attention in visual reasoning, we group the fixations into three temporal bins (0-1s, 1-2s and 2-3s), and compute AiR-E scores for each fixation map and reasoning step (see Fig. 5b-c). Humans start exploration (0-1s) with relatively low attention accuracy. After the initial exploration, human attention shows improved accuracy across all reasoning steps (1-2s), and particularly focuses on the early-step ROIs. In the final steps (2-3s), depending on the correctness of answers, human attention either shifts to the ROIs at later stages (correct), or becomes less accurate with lowered AiR-E scores (incorrect). Such observations suggest high spatiotemporal alignments between human attention and the sequence of reasoning operations.
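The temporal grouping used here (fixations binned into 0-1s, 1-2s, and 2-3s windows) can be sketched as follows; the fixation tuples and values are illustrative:

```python
# Sketch: bin timestamped fixations into the three 1-second windows used in
# the temporal analysis. Fixation tuples are (time_in_seconds, x, y).

def bin_fixations(fixations, num_bins=3, bin_width=1.0):
    bins = [[] for _ in range(num_bins)]
    for t, x, y in fixations:
        idx = min(int(t // bin_width), num_bins - 1)  # clamp late fixations
        bins[idx].append((x, y))
    return bins

fixations = [(0.2, 10, 20), (0.8, 12, 25), (1.5, 100, 90), (2.9, 200, 180)]
print([len(b) for b in bin_fixations(fixations)])  # [2, 1, 1]
```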
+
+Do machine attentions follow the reasoning process? Similarly, we evaluate multi-glimpse machine attentions. We compare the stacked attention from SAN [40], the compositional attention from MAC [17], and the multi-head attention [11, 43], all of which adopt object-based attention. Fig. 5d-f shows that multi-glimpse attentions do not evolve with the reasoning process. The first glimpse of stacked attention already attends to the ROIs at the $4^{th}$ step, and the other glimpses contribute little to the attention accuracy. Compositional attention and multi-head attention consistently align best with the ROIs at the $3^{rd}$ or $4^{th}$ step, and ignore those at the early steps.
+
+The spatiotemporal correlations indicate that following the correct order of reasoning operations is important for humans to attend and answer correctly.
+
+Fig. 5: Spatiotemporal accuracy of attention throughout the reasoning process. (a) shows the AiR-E of different reasoning steps for human aggregated attentions and single-glimpse machine attentions, (b)-(c) AiR-E scores for decomposed human attentions with correct and incorrect answers, (d)-(f) AiR-E for multi-glimpse machine attentions. For heat maps shown in (b)-(f), the x-axis denotes different reasoning steps while the y-axis corresponds to the indices of attention maps
+
+In contrast, models tend to directly attend to the final ROIs, instead of shifting their attentions progressively.
+
+# 4.3 Does Progressive Attention Supervision Improve Attention and Task Performance?
+
+Experiments in Section 4.1 and Section 4.2 suggest that attention towards ROIs relevant to the reasoning process contributes to task performance, and furthermore, the order of attention matters. Therefore, we propose to guide models to look at places important to reasoning in a progressive manner. Specifically, we propose to supervise machine attention along the reasoning process by jointly optimizing attention, reasoning operations, and task performance (AiR-M, see Section 3.3). Here we investigate the effectiveness of the AiR-M supervision method on three VQA models, i.e., UpDown [2], MUTAN [4], and BAN [24]. We compare AiR-M with a number of state-of-the-art attention supervision methods, including supervision from human-like attention (HAN) [30], attention supervision mining (ASM) [45] and adversarial learning (PAAN) [29]. Note that while the other compared methods are typically limited to supervision on a single attention map, our AiR-M method is generally applicable to various VQA models with single or multiple attention maps (e.g., BAN [24]).
+
+Table 4: Comparative results on GQA test sets (test-dev and test-standard). We report the single-model performance trained on the balanced training set of GQA
+
+| Method | UpDown [2] dev | UpDown [2] standard | MUTAN [4] dev | MUTAN [4] standard | BAN [24] dev | BAN [24] standard |
+| --- | --- | --- | --- | --- | --- | --- |
+| w/o Supervision | 51.31 | 52.31 | 50.78 | 51.16 | 50.14 | 50.38 |
+| PAAN [29] | 48.03 | 48.92 | 46.40 | 47.22 | n/a | n/a |
+| HAN [30] | 49.96 | 50.58 | 48.76 | 48.99 | n/a | n/a |
+| ASM [45] | 52.96 | 53.57 | 51.46 | 52.36 | n/a | n/a |
+| AiR-M | 53.46 | 54.10 | 51.81 | 52.42 | 53.36 | 54.15 |
+
+
+Fig. 6: Qualitative comparison between attention supervision methods, where Baseline refers to UpDown [2]. For each row, from left to right are the questions and the correct answers, input images, and attention maps learned by different methods. The predicted answers associated with the attentions are shown below its respective attention map
+
+According to Table 4, the proposed AiR-M supervision significantly improves the performance of all baselines and consistently outperforms the other attention supervision methods. Two of the compared methods, HAN and PAAN, fail to improve the performance of object-based attention. By supervising attention with objects mined from textual descriptions, ASM [45] is able to consistently improve model performance. However, without considering the intermediate steps of reasoning, it is not as effective as the proposed method.
+
+Fig. 6 shows the qualitative comparison between supervision methods. The proposed AiR-M not only directs attention to the ROIs most related to the answers (i.e., freezer, wheel, chair, purse), but also highlights other important ROIs mentioned in the questions (i.e., keyboard, man), thus reflecting the entire reasoning process, while attentions in other methods fail to localize these ROIs.
+
+Table 5 reports the AiR-E scores across operations. It shows that the AiR-M supervision method significantly improves attention accuracy (attention aggregated across different steps), especially on operations typically positioned in early steps (e.g., select, compare). In addition, the AiR-M supervision method also aligns the multi-glimpse attentions better according to their chronological order in the reasoning process (see Fig. 7 and the supplementary video), showing progressive improvement of attention throughout the entire process.
+
+Table 5: AiR-E scores of the supervised attentions
+
+| Attention | and | compare | filter | or | query | relate | select | verify |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Human | 2.197 | 2.669 | 2.810 | 2.429 | 3.951 | 3.516 | 2.913 | 3.629 |
+| AiR-M | 2.396 | 2.553 | 2.383 | 2.380 | 3.340 | 2.862 | 2.611 | 4.052 |
+| Baseline [2] | 1.859 | 1.375 | 1.717 | 2.271 | 3.651 | 2.448 | 1.796 | 2.719 |
+| ASM | 1.415 | 1.334 | 1.443 | 1.752 | 2.447 | 1.884 | 1.584 | 2.265 |
+| HAN | 0.581 | 0.428 | 0.468 | 0.607 | 1.576 | 0.923 | 0.638 | 0.680 |
+| PAAN | 1.017 | 0.872 | 1.039 | 1.181 | 2.656 | 1.592 | 1.138 | 1.221 |
+
+Fig. 7: Alignment between the proposed attention and reasoning process
+
+# 5 Conclusion
+
+We introduce AiR, a novel framework with a quantitative evaluation metric (AiR-E), a supervision method (AiR-M), and an eye-tracking dataset (AiR-D) for understanding and improving attention in the reasoning context. Our analyses show that accurate attention deployment can lead to improved task performance, and that it is related to both the task outcome and the intermediate reasoning steps. Our experiments also highlight the significant gap between models and humans in the alignment of attention and the reasoning process. With the proposed attention supervision method, we further demonstrate that incorporating the progressive reasoning process into attention can improve task performance by a considerable margin. We hope that this work will be helpful for the future development of visual attention and reasoning methods, and inspire the analysis of model interpretability throughout the decision-making process.
+
+# Acknowledgements
+
+This work is supported by NSF Grants 1908711 and 1849107.
+
+# References
+
+1. Alers, H., Liu, H., Redi, J., Heynderickx, I.: Studying the effect of optimizing the image quality in saliency regions at the expense of background content. In: Image Quality and System Performance VII. vol. 7529, p. 752907. International Society for Optics and Photonics (2010)
+2. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L.: Bottom-up and top-down attention for image captioning and visual question answering. In: CVPR (2018)
+3. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual Question Answering. In: ICCV (2015)
+4. Ben-Younes, H., Cadène, R., Thome, N., Cord, M.: Mutan: Multimodal tucker fusion for visual question answering. ICCV (2017)
+5. Borji, A., Itti, L.: Cat2000: A large scale fixation dataset for boosting saliency research. arXiv preprint arXiv:1505.03581 (2015)
+6. Bylinskii, Z., Judd, T., Oliva, A., Torralba, A., Durand, F.: What do different evaluation metrics tell us about saliency models? IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)
+7. Das, A., Agrawal, H., Zitnick, C.L., Parikh, D., Batra, D.: Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? In: Conference on Empirical Methods in Natural Language Processing (EMNLP) (2016)
+8. Do, T., Do, T.T., Tran, H., Tjiputra, E., Tran, Q.D.: Compact trilinear interaction for visual question answering. In: ICCV (2019)
+9. Ehinger, K.A., Hidalgo-Sotelo, B., Torralba, A., Oliva, A.: Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual cognition 17(6-7), 945-978 (2009)
+10. Fan, S., Shen, Z., Jiang, M., Koenig, B.L., Xu, J., Kankanhalli, M.S., Zhao, Q.: Emotional attention: A study of image sentiment and visual attention. In: Proceedings of the IEEE Conference on computer vision and pattern recognition. pp. 7521-7531 (2018)
+11. Fukui, A., Park, D.H., Yang, D., Rohrbach, A., Darrell, T., Rohrbach, M.: Multimodal compact bilinear pooling for visual question answering and visual grounding. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pp. 457-468 (2016)
+12. Gao, P., Jiang, Z., You, H., Lu, P., Hoi, S.C.H., Wang, X., Li, H.: Dynamic fusion with intra- and inter-modality attention flow for visual question answering. In: CVPR (2019)
+13. Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., Parikh, D.: Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In: CVPR (2017)
+14. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+15. He, S., Tavakoli, H.R., Borji, A., Pugeault, N.: Human attention in image captioning: Dataset and analysis. In: ICCV (2019)
+16. Hu, R., Andreas, J., Rohrbach, M., Darrell, T., Saenko, K.: Learning to reason: End-to-end module networks for visual question answering. In: ICCV (2017)
+17. Hudson, D.A., Manning, C.D.: Compositional attention networks for machine reasoning. In: ICLR (2018)
+
+18. Hudson, D.A., Manning, C.D.: Gqa: A new dataset for real-world visual reasoning and compositional question answering. In: CVPR (2019)
+19. Huk Park, D., Anne Hendricks, L., Akata, Z., Rohrbach, A., Schiele, B., Darrell, T., Rohrbach, M.: Multimodal explanations: Justifying decisions and pointing to the evidence. In: CVPR (2018)
+20. Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., Girshick, R.: Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In: CVPR (2017)
+21. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: ICCV (2009)
+22. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: 2009 IEEE 12th international conference on computer vision. pp. 2106-2113. IEEE (2009)
+23. König, S.D., Buffalo, E.A.: A nonparametric method for detecting fixations and saccades using cluster analysis: Removing the need for arbitrary thresholds. Journal of Neuroscience Methods 227, 121-131 (2014)
+24. Kim, J.H., Jun, J., Zhang, B.T.: Bilinear Attention Networks. In: NeurIPS. pp. 1571-1581 (2018)
+25. Koehler, K., Guo, F., Zhang, S., Eckstein, M.P.: What do saliency models predict? Journal of vision 14(3), 14-14 (2014)
+26. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32-73 (2017)
+27. Li, W., Yuan, Z., Fang, X., Wang, C.: Knowing where to look? analysis on attention of visual question answering system. In: ECCV Workshops (2018)
+28. Mascharka, D., Tran, P., Soklaski, R., Majumdar, A.: Transparency by design: Closing the gap between performance and interpretability in visual reasoning. In: CVPR (2018)
+29. Patro, B.N., Anupriy, Namboodiri, V.P.: Explanation vs attention: A two-player game to obtain attention for vqa. In: AAAI (2020)
+30. Qiao, T., Dong, J., Xu, D.: Exploring human-like attention supervision in visual question answering. In: AAAI (2018)
+31. Selvaraju, R.R., Lee, S., Shen, Y., Jin, H., Ghosh, S., Heck, L., Batra, D., Parikh, D.: Taking a hint: Leveraging explanations to make vision and language models more grounded. In: ICCV (2019)
+32. Tapaswi, M., Zhu, Y., Stiefelhagen, R., Torralba, A., Urtasun, R., Fidler, S.: Movieqa: Understanding stories in movies through question-answering. In: CVPR (2016)
+33. Tavakoli, H.R., Ahmed, F., Borji, A., Laaksonen, J.: Saliency revisited: Analysis of mouse movements versus fixations. In: CVPR (2017)
+34. Tavakoli, H.R., Shetty, R., Borji, A., Laaksonen, J.: Paying attention to descriptions generated by image captioning models. In: ICCV (2017)
+35. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L.u., Polosukhin, I.: Attention is all you need. In: NeurIPS. pp. 5998-6008 (2017)
+36. Wu, J., Mooney, R.: Self-critical reasoning for robust visual question answering. In: NeurIPS (2019)
+37. Xu, J., Jiang, M., Wang, S., Kankanhalli, M.S., Zhao, Q.: Predicting human gaze beyond pixels. Journal of vision 14(1), 28-28 (2014)
+
+38. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: ICML. pp. 2048-2057 (2015)
+39. Yang, C.J., Grauman, K., Gurari, D.: Visual question answer diversity. In: Sixth AAAI Conference on Human Computation and Crowdsourcing (2018)
+40. Yang, Z., He, X., Gao, J., Deng, L., Smola, A.: Stacked attention networks for image question answering. In: CVPR (2016)
+41. Yi, K., Wu, J., Gan, C., Torralba, A., Kohli, P., Tenenbaum, J.: Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In: NeurIPS. pp. 1031-1042 (2018)
+42. Yu, Z., Yu, J., Cui, Y., Tao, D., Tian, Q.: Deep modular co-attention networks for visual question answering. In: CVPR (2019)
+43. Yu, Z., Yu, J., Fan, J., Tao, D.: Multi-modal factorized bilinear pooling with coattention learning for visual question answering. In: ICCV (2017)
+44. Zellers, R., Bisk, Y., Farhadi, A., Choi, Y.: From recognition to cognition: Visual commonsense reasoning. In: CVPR (2019)
+45. Zhang, Y., Niebles, J.C., Soto, A.: Interpretable visual question answering by visual grounding from attention supervision mining. In: WACV. pp. 349-357 (2019)
\ No newline at end of file
diff --git a/airattentionwithreasoningcapability/images.zip b/airattentionwithreasoningcapability/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..91a946eb1e5ca4112356d8798b43ea9ff5461097
--- /dev/null
+++ b/airattentionwithreasoningcapability/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ca0509b3ff547e9ca251b111f1fd38dc4ce0bde3d00e2e5d591de76be20be8f
+size 484216
diff --git a/airattentionwithreasoningcapability/layout.json b/airattentionwithreasoningcapability/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..17f6c33e137e97c7416b0eaa6ed3d09c6c68991e
--- /dev/null
+++ b/airattentionwithreasoningcapability/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc59e048015a36c1446b1c8be72382586318305f37f58f06500da02646d9e9bc
+size 359808
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_content_list.json b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e0cc8de10971cb3042c15c36c0656eda2a1d6c1
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5107280a73eff33dfbe19aeb66f21fa1416d32276fe5d643727d94e545f601f
+size 84655
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_model.json b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e655c9c7550dcd0f76c378a089712dc663eecfc
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cab9cc9e555031d2dcf8c98efbf9e0620e3e9e176edab8a26ccf84e0371f3a65
+size 105962
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_origin.pdf b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..07e32c58067609e6f1a8070d7df939854ed77ee6
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/2e73ad2a-9ddf-4a83-a437-6a30832ee719_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fdea7caed4ebced53be595e885ab075fe4d0f98b52054e5dedb74d12534c668
+size 7043801
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/full.md b/aligningandprojectingimagestoclassconditionalgenerativenetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc2b3a2f67b43e6736384465a5c8bb80e22e81fa
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/full.md
@@ -0,0 +1,323 @@
+# Transforming and Projecting Images into Class-conditional Generative Networks
+
+Minyoung Huh $^{12*}$ Richard Zhang $^{2}$ Jun-Yan Zhu $^{2}$ Sylvain Paris $^{2}$ Aaron Hertzmann $^{2}$
+
+$^{1}$ MIT CSAIL $^{2}$ Adobe Research
+
+Abstract. We present a method for projecting an input image into the space of a class-conditional generative neural network. We propose optimizing for an image transformation during projection to counteract the model biases of generative neural networks. Specifically, we demonstrate that one can solve for image translation, scale, and global color transformation during the projection optimization to address the object-center bias and color bias of a Generative Adversarial Network. This projection process poses a difficult optimization problem, and purely gradient-based optimizations fail to find good solutions. We describe a hybrid optimization strategy that finds good projections by estimating transformations and class parameters. We show the effectiveness of our method on real images and further demonstrate how the corresponding projections lead to better editability of these images. The project page and the code are available at https://minyoungg.github.io/GAN-Transform-and-Project/.
+
+# 1 Introduction
+
+Deep generative models, particularly Generative Adversarial Networks (GANs) [24], can create a diverse set of realistic images, with a number of controls for transforming the output, e.g., [48, 6, 29, 32]. However, most of these methods apply only to synthetic images that are generated by GANs in the first place. In many real-world cases, a user would like to edit their own image. One approach is to train a network for each separate image transformation. However, this would require a combinatorial explosion of training time and model parameters.
+
+Instead, a user could "project" their image to the manifold of images produced by the GAN, by searching for an appropriate latent code [60]. Then, any transformations available within the GAN could be applied to the user's image. This could allow a powerful range of editing operations within a relatively compact representation. However, projection is a challenging problem. Previous methods have focused on class-specific models, for example, for objects [60], faces [46, 9], or specific scenes such as bedrooms and churches [5, 7]. Given the challenges of both the optimization and the generative model's limited capacity, we wish to find a generic method that can fit real images from diverse categories into the same generative model.
+
+
+(Figure panels: Target, Transform + Project, Project, Finetune + Blend, Finetune, Snowbird)
+Fig. 1. Given a pre-trained BigGAN [8] and a target image (left), our method uses gradient-free BasinCMA to transform the image and find a latent vector that closely reconstructs the image. Our method (top) can better fit the input image, compared to the baseline (bottom), which does not model image transformation and uses gradient-based Adam optimization. Finding an accurate solution to the inversion problem allows us to further fine-tune the model weights to match the target image without losing downstream editing capabilities. For example, our method allows for changing the class of the object (top row), compared to the baseline (bottom).
+
+
+
+This paper proposes the first method for projecting images into class-conditional generative models. In particular, we focus on BigGAN [8]. We address the main challenges of this task, namely optimization, object alignment, and class label estimation:
+
+- To help avoid local minima during the optimization process, we systematically study choices of both gradient-based and gradient-free optimizers and show Covariance Matrix Adaptation (CMA) [26] to be more effective than standalone gradient-based optimizers, such as L-BFGS [40] and Adam [33].
+- To better fit a real image into the latent space, we account for the model's center bias by simultaneously estimating both spatial image transformation (translation, scale, and color) and latent variable. Such a transformation can then be inverted back to the input image frame. Our simultaneous transformation and projection method largely expands the scope and diversity of the images that a GAN can reconstruct.
+- Finally, we show that estimating and jointly optimizing the continuous embedding of the class variable leads to better projections. This ultimately leads to more expressive editing by harnessing the representation of the class-conditional generative model.
+
+We evaluate our method against various baselines on projecting real images from ImageNet. We quantitatively and qualitatively demonstrate that it is crucial to simultaneously estimate the correct transformation during the projection step. Furthermore, we show that CMA, a non-parametric gradient-free optimization technique, significantly improves the robustness of the optimization and leads to better solutions. As shown in Figure 1, our method allows us to fine-tune our model to recover the missing details without losing the editing capabilities of the generative model.
+
+
+Fig. 2. Overview: Our method first searches for a transformation to apply to the input target image. We then solve for the latent vector that closely resembles the object in the target image, using our proposed optimization method, also referred to as "projection". The generative model can then be further fine-tuned to reconstruct the missing details that the original model could not generate. Finally, we can edit the image by altering the latent code or the class vector (e.g., changing the border collie to a west highland white terrier), and invert and blend the edited image back into the original image.
+
+
+
+# 2 Related Work
+
+Image editing with generative models. Image editing tools allow a user to manipulate a photograph according to their goal while producing realistic visual content. Seminal work is often built on low-level visual properties, such as patch-based texture synthesis [18, 28, 17, 4], gradient-domain image blending [47], and image matting with locally affine color model [37]. Different from previous handcrafted low-level methods, several recent works [60, 9] proposed to build editing tools based on a deep generative model, with the hope that a generative model can capture high-level information about the image manifold.
+
+Many prior works have investigated using trained generative models as a tool to edit images [60, 9, 5, 6]. The same image prior from deep generative models has also been used for face editing, image inpainting, colorization, and deblurring [46, 56, 2, 25, 51]. Unlike these works, which focus on images of a single, fixed class, our method presents a new way of embedding an image into a class-conditional generative model, which allows the same GAN to be applied to many more "in-the-wild" scenarios.
+
+Inverting networks. Our work is closely related to methods for inverting pre-trained networks. Earlier work proposes to invert CNN classifiers and intermediate features for visualizing recognition networks [41, 15, 43, 44]. More recently, researchers adopted the above methods to invert generative models. The common techniques include: (1) optimization-based methods, which find the latent vector that closely reconstructs the input image using a gradient-based optimizer (e.g., Adam, L-BFGS) [60, 9, 39, 56, 50, 49, 10] or MCMC [20]; (2) encoder-based methods, which learn an encoder to directly predict the latent vector given a real image [12, 16, 60, 46, 9, 13]; (3) hybrid methods [60, 5, 7], which use the encoder to initialize the latent vector and then solve the optimization problem.
+
+Although the optimized latent vector roughly approximates the real input image, many important visual details are missing in the reconstruction [5]. To address the issue, GANPaint [5] generates residual features to adapt to the individual image. Image2StyleGAN [1] optimizes StyleGAN's intermediate representation rather than the input latent vector. Unfortunately, the above techniques still cannot handle images in many scenarios due to the limited model capacity [7], the lack of generalization ability [1], and their single-class assumption. As noted by prior work [1], the reconstruction quality severely degrades under simple image transformations, with translation causing most of the damage. Compared to prior work, we consider two new aspects in the reconstruction pipeline: image transformation and class vector. Together, these two aspects significantly expand the diversity of the images that we can reconstruct and edit.
+
+# 3 Image projection methods
+
+We aim to project an image into a class-conditional generative model (e.g., BigGAN [8]) for the purposes of downstream editing. We first introduce the basic objective function that we slowly build upon. Next, since BigGAN is an object-centric model for most classes, we infer an object mask from the input image and focus on fitting the pixels inside the mask.
+
+Furthermore, to better fit our desired image into the generative model, we propose to optimize for various image transformations (scale, translation, and color) to be applied to the target image. Lastly, we explain how we optimize the resulting objective function.
+
+# 3.1 Basic Loss Function
+
+Class-conditional generative model. A class-conditional generative network can synthesize an image $\hat{\mathbf{y}}\in \mathbb{R}^{H\times W\times 3}$, given a latent code $\mathbf{z}\in \mathbb{R}^Z$ that models intra-class variations and a one-hot class-conditioning vector $\tilde{\mathbf{c}}\in \Delta^{C}$ that chooses among $C$ classes. We focus on the $256\times 256$ BigGAN model [8] specifically, where $Z = 128$ and $C = 1{,}000$ ImageNet classes.
+
+The BigGAN architecture first maps the one-hot $\tilde{\mathbf{c}}$ into a continuous vector $\mathbf{c} \in \mathbb{R}^{128}$ with a linear layer $\mathbf{W} \in \mathbb{R}^{128 \times 1000}$ , before injecting into the main network $G_{\theta}$ , with learned parameters $\theta$ .
+
+$$
+\hat{\mathbf{y}} = G_{\theta}(\mathbf{z}, \mathbf{c}) = G_{\theta}(\mathbf{z}, \mathbf{W}\tilde{\mathbf{c}}). \tag{1}
+$$
+
+Here, a choice must be made whether to optimize over the discrete $\tilde{\mathbf{c}}$ or continuous $\mathbf{c}$ . As optimizing a discrete class vector is non-trivial, we optimize over the continuous embedding.
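For a one-hot $\tilde{\mathbf{c}}$, the product $\mathbf{W}\tilde{\mathbf{c}}$ simply selects one column of $\mathbf{W}$; gradient updates can then move the continuous $\mathbf{c}$ off that column. A minimal NumPy sketch (the random matrix stands in for BigGAN's learned embedding):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 1000))  # stand-in for BigGAN's class-embedding matrix

c_tilde = np.zeros(1000)
c_tilde[207] = 1.0                    # one-hot class vector (class index 207)

c = W @ c_tilde                       # continuous 128-d embedding, equal to column 207 of W
```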
+
+Optimization setup. Given a target image $\mathbf{y}$ , we would like to find a $\mathbf{z}^*$ and $\mathbf{c}^*$ that generates the image.
+
+$$
+\mathbf{z}^{*}, \mathbf{c}^{*} = \underset{\mathbf{z}, \mathbf{c}}{\arg\min}\ \mathcal{L}(G_{\theta}(\mathbf{z}, \mathbf{c}), \mathbf{y}) \quad \text{s.t. } C(\mathbf{z}) \leq C_{\max}. \tag{2}
+$$
+
+During training, the latent code is sampled from a multivariate Gaussian $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ . Interestingly, recent methods [8, 34] find that restricting the distribution at test time produces higher-quality samples. We follow this and constrain our search space to match the sampling distribution from Brock et al. [8]. Specifically, we use $C(\mathbf{z}) = ||\mathbf{z}||_{\infty}$ and $C_{\max} = 2$ . During optimization, elements of $\mathbf{z}$ that fall outside the threshold are clamped to $+2$ , if positive, or $-2$ , if negative. Allowing larger values of $\mathbf{z}$ produces better fits but compromises editing ability.
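The clamping described above is a per-element clip; a one-line sketch (the function name is ours):

```python
import numpy as np

def constrain_z(z, c_max=2.0):
    """Enforce C(z) = ||z||_inf <= c_max by clamping out-of-range elements
    to +c_max or -c_max, as described in the text."""
    return np.clip(z, -c_max, c_max)

z = constrain_z(np.array([3.0, -5.0, 0.7]))  # in-range elements pass through unchanged
```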
+
+Loss function. The loss function $\mathcal{L}$ attempts to capture how close the approximate solution is to the target. A loss function that perfectly corresponds to human perceptual similarity is a longstanding open research problem [54], and evaluating the difference solely on a per-pixel basis leads to blurry results [58]. Distances in the feature space of a pre-trained CNN correspond more closely with human perception [30, 14, 21, 59]. We use the LPIPS metric [59], which calibrates a pre-trained model using human perceptual judgments. Here, we define our basic loss function, which combines per-pixel $\ell_1$ and LPIPS.
+
+$$
+\mathcal{L}_{\text{basic}}(\mathbf{y}, \hat{\mathbf{y}}) = \frac{1}{HW} \|\hat{\mathbf{y}} - \mathbf{y}\|_{1} + \beta \mathcal{L}_{\text{LPIPS}}(\hat{\mathbf{y}}, \mathbf{y}). \tag{3}
+$$
+
+In preliminary experiments, we tried various loss combinations and found $\beta = 10$ to work well. We now expand upon this loss function by leveraging object mask information.
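Equation 3 can be sketched directly; the learned LPIPS distance is passed in as a callable, since we do not reimplement it here (the stub below is an assumption for illustration only):

```python
import numpy as np

def basic_loss(y_hat, y, lpips, beta=10.0):
    """Eq. 3: per-pixel L1 (summed over all entries, normalized by H*W)
    plus a beta-weighted perceptual term. `lpips` stands in for the
    learned LPIPS metric [59]."""
    h, w = y.shape[0], y.shape[1]
    l1 = np.abs(y_hat - y).sum() / (h * w)
    return l1 + beta * lpips(y_hat, y)

# With a zero perceptual stub and a constant 0.5 offset on a 4x4x3 image,
# only the L1 term contributes: 0.5 per channel over 3 channels = 1.5.
y = np.zeros((4, 4, 3))
y_hat = np.full((4, 4, 3), 0.5)
loss = basic_loss(y_hat, y, lambda a, b: 0.0)
```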
+
+# 3.2 Object Localization
+
+Real images are often more complex than the ones generated by BigGAN. For example, objects may be off-center and partially occluded, or multiple objects may appear in an image. Moreover, the GAN may be able to approximate the object in an image but not the background.
+
+Accordingly, we focus on fitting a single foreground object in an image and develop a loss function to emphasize foreground pixels. We automatically produce a foreground rectangular mask $\mathbf{m} \in [0,1]^{H \times W \times 1}$ using the bounding box of an object detector [27]. Here, we opt for bounding boxes for simplicity, but one could also consider using segmentation masks, saliency maps, user-provided masks, etc. The foreground and background values within mask $\mathbf{m}$ are set to 1 and 0.3, respectively. We adjust the objective function to spatially weigh the loss:
+
+$$
+\mathcal{L}_{\text{mask}}(\mathbf{y}, \hat{\mathbf{y}}, \mathbf{m}) = \frac{1}{M} \|\mathbf{m} \odot (\hat{\mathbf{y}} - \mathbf{y})\|_{1} + \beta \mathcal{L}_{\mathrm{mLPIPS}}(\hat{\mathbf{y}}, \mathbf{y}, \mathbf{m}), \tag{4}
+$$
+
+where normalization parameter $M = \| \mathbf{m}\| _1$ and $\odot$ represents element-wise multiplication across the spatial dimensions. Given a mask of all foreground (all ones), the objective function is equivalent to Equation 3. We calculate the masked version of the perceptual loss $\mathcal{L}_{\mathrm{mLPIPS}}(\hat{\mathbf{y}},\mathbf{y},\mathbf{m})$ by bilinearly downsampling the mask at the resolution of the intermediate spatial feature maps within the perceptual loss. The details are described in Appendix B. With the provided mask, we now explore how one can optimize for image transformation to better fit the object in the image.
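A sketch of Equation 4 with the masked perceptual term stubbed out. As noted above, with an all-ones mask the L1 term reduces to that of Equation 3, which we can check numerically:

```python
import numpy as np

def masked_loss(y_hat, y, m, mlpips, beta=10.0):
    """Eq. 4: mask-weighted L1 normalized by M = ||m||_1, plus a masked
    perceptual term. `mlpips` stands in for the masked LPIPS of Appendix B."""
    big_m = m.sum()                              # normalization M = ||m||_1
    l1 = np.abs(m * (y_hat - y)).sum() / big_m   # m broadcasts over channels
    return l1 + beta * mlpips(y_hat, y, m)

# All-foreground mask: M = H*W, so the L1 term matches Eq. 3 (1.5 here).
y = np.zeros((4, 4, 3))
y_hat = np.full((4, 4, 3), 0.5)
m = np.ones((4, 4, 1))
loss = masked_loss(y_hat, y, m, lambda a, b, mm: 0.0)
```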
+
+
+Fig. 3. Object center comparison: We use an object detector to compute the histogram of object locations. Note that ImageNet (left) is biased towards the center but exhibits a long tail. BigGAN (right) is further biased towards the center.
+
+
+Fig. 4. Object size comparison: We use an object detector to compute the distribution of object widths (left) and heights (right). Note that ImageNet (black) has a long tail, whereas BigGAN (blue) accentuates the mode.
+
+# 3.3 Transformation Model and Loss
+
+Generative models may exhibit biases for two reasons: (a) biases inherited from the training distribution and (b) biases introduced by mode collapse [23], where the generative model captures only a portion of the distribution. We mitigate two types of bias, spatial and color, during the image reconstruction process.
+
+Studying spatial biases. To study spatial bias, we first use a pre-trained object detector, MaskRCNN [27], over 10,000 real and generated images to compute the statistics of object locations. We show the statistics regarding the center locations and object sizes in Figures 3 and 4, respectively.
+
+Figure 3 (left) demonstrates that ImageNet images exhibit clear center bias over the location of objects, albeit with a long tail. While the BigGAN learns to mimic this distribution, it further accentuates the bias [7, 29], largely forgoing the long tail to generate high-quality samples in the middle of the image. In Figure 4, we see similar trends with object height and width. Abdal et al. [1] noted that the quality of image reconstruction degrades given a simple translation in the target image. Motivated by this, we propose to incorporate spatial alignment in the inversion process.
+
+Searching over spatial alignments. We propose to transform the generated image using $\mathcal{T}_{\psi}^{\mathrm{spatial}}(\cdot)$ , which shifts and scales the image using parameters $\psi = [s_x, s_y, t_x, t_y]$ . The parameters $\psi$ are used to generate a sampling grid which in turn is used by a grid-sampler to construct a new transformed image [42]. The corresponding inverse parameters are $\psi^{-1} = \left[\frac{1}{s_x}, \frac{1}{s_y}, -\frac{t_x}{s_x}, -\frac{t_y}{s_y}\right]$ .
+
+Transforming the generated image allows for more flexibility in the optimization. For example, if $G$ can perfectly generate the target image, but at different scales or at off-centered locations, this framework allows it to do so.
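The inverse parameters above can be checked by composing the forward and inverse maps; a minimal sketch operating on point coordinates rather than a full grid sampler [42] (function names are ours):

```python
def apply_spatial(psi, x, y):
    """Apply T_psi with psi = [s_x, s_y, t_x, t_y]: scale, then translate."""
    s_x, s_y, t_x, t_y = psi
    return s_x * x + t_x, s_y * y + t_y

def invert_psi(psi):
    """psi^{-1} = [1/s_x, 1/s_y, -t_x/s_x, -t_y/s_y], as given in the text."""
    s_x, s_y, t_x, t_y = psi
    return [1.0 / s_x, 1.0 / s_y, -t_x / s_x, -t_y / s_y]

psi = [2.0, 0.5, 3.0, -1.0]
x2, y2 = apply_spatial(invert_psi(psi), *apply_spatial(psi, 10.0, 4.0))
# the composition round-trips back to the original point (10.0, 4.0)
```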
+
+Searching over color transformations. Furthermore, we show that the same framework allows us to search over color transformations $\mathcal{T}_{\gamma}^{\mathrm{color}}(\cdot)$. We experimented with various color transformations, such as hue, brightness, gamma, saturation, and contrast, and found brightness and contrast to work best. Specifically, we optimize for brightness, which is parameterized by a scalar $\gamma$ with inverse value $\gamma^{-1} = -\gamma$. If the generator can perfectly generate the target image, but slightly darker or brighter, this allows a learned brightness transformation to compensate for the difference.
+
+
+Fig. 5. Initialization from various methods: We show samples drawn from different methods, before the final gradient-based optimization. In "random initialization", seeds are drawn from the normal distribution; the results show higher variation. For the "encoder initialization", we use a trained encoder network to predict the latent vector and apply a minor perturbation. Our method uses CMA to find a good starting distribution. For "Encoder+BasinCMA", we initialize CMA with the output of the encoder. The results are more consistent and better reconstruct the target image.
+
+Final objective. Let the transformation function $\mathcal{T}_{\phi} = \mathcal{T}_{\psi}^{\mathrm{spatial}} \circ \mathcal{T}_{\gamma}^{\mathrm{color}}$ be a composition of the spatial and color transformation functions, where the transformation parameters $\phi$ are the concatenation of the spatial and color parameters $\psi$ and $\gamma$, respectively. The inverse function is $\mathcal{T}_{\phi^{-1}}$. Our final optimization objective, with consideration for (a) the foreground object and (b) spatial and color biases, is to minimize the following loss:
+
+$$
+\underset{\mathbf{z}, \mathbf{c}, \phi}{\arg\min}\ \mathcal{L}_{\mathrm{mask}}\left(\mathcal{T}_{\phi^{-1}}\left(G_{\theta}(\mathbf{z}, \mathbf{c})\right), \mathbf{y}, \mathbf{m}\right) \quad \text{s.t. } C(\mathbf{z}) \leq C_{\max} \tag{5}
+$$
+
+Our optimization algorithm, described next, has a mix of gradient-free and gradient-based updates. Alternatively, instead of inverse transforming the generated image, we can transform the target and mask images during gradient-based updates and compute the following loss: $\mathcal{L}_{\mathrm{mask}}(G_{\theta}(\mathbf{z},\mathbf{c}),\mathcal{T}_{\phi}(\mathbf{y}),\mathcal{T}_{\phi}(\mathbf{m}))$ . We will discuss when to use each variant in the next section.
+
+# 3.4 Optimization Algorithms
+
+Unfortunately, the objective function is highly non-convex. Gradient-based optimization, as used in previous inversion methods, frequently falls into poor local minima. Bau et al. [7] note that recent large-scale GAN models [31, 32] are significantly harder to invert due to their large number of layers, compared to earlier models [48]. Thus, formulating an optimizer that reliably finds good solutions is a significant challenge. We evaluate our method against various baselines and ablations in Section 4. Given the input image $\mathbf{y}$ and the automatically computed foreground rectangular mask $\mathbf{m}$, we present the following algorithm.
+
+Class and transform initialization. We first predict the class of the image with a pre-trained ResNeXt101 classifier [55] and multiply it by $\mathbf{W}$ to obtain our initial class vector $\mathbf{c}_0$.
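
Concretely, the initial class vector is the classifier's top prediction mapped through the class-embedding matrix. A minimal sketch (the random stand-ins for the classifier logits and for BigGAN's $\mathbf{W}$, and all variable names, are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, embed_dim = 1000, 128
W = rng.normal(size=(num_classes, embed_dim))   # stand-in class-embedding matrix

logits = rng.normal(size=num_classes)           # stand-in classifier output
one_hot = np.zeros(num_classes)
one_hot[np.argmax(logits)] = 1.0

c0 = one_hot @ W                                # initial class vector c_0
assert np.allclose(c0, W[np.argmax(logits)])    # equivalent to an embedding lookup
```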
+
+Algorithm 1 Transformation-aware projection algorithm
+
+```
+Input:  image y, initial class vector c0, mask m
+Output: transformation parameter φ*, latent variable z*, class vector c*
+
+ 1: # Optimize for transformation φ
+ 2: Initialize (μ_φ, Σ_φ) ← (φ0, 0.1·I)               ▷ φ0 precomputed in Section 3.3
+ 3: for n iterations do
+ 4:   φ_{1:N} ~ SampleCMA(μ_φ, Σ_φ)                   ▷ draw N samples of φ
+ 5:   z_{1:N} ~ N(0, I), reset c_{1:N} ← c0           ▷ reinitialize z and c
+ 6:   for m iterations do
+ 7:     for i ← 1 to N do                             ▷ this loop is batched
+ 8:       g_i ← L_mask(G_θ(z_i, c_i), T_{φ_i}(y), T_{φ_i}(m))
+ 9:       (z_i, c_i) ← (z_i, c_i) − η·∇_{z,c} g_i     ▷ update each sample z, c
+10:   g_{1:N} ← L_mask(T⁻¹_{φ_{1:N}}(G_θ(z_{1:N}, c_{1:N})), y, m)  ▷ recompute loss with inverse
+11:   (μ_φ, Σ_φ) ← UpdateCMA(φ_{1:N}, g_{1:N}, μ_φ, Σ_φ)            ▷ Section 3.4
+12: Set φ* ← μ_φ
+
+13: # Optimize for latent variables z, c
+14: Initialize (μ_z, Σ_z) ← (0, I)
+15: for p iterations do
+16:   z_{1:M} ~ SampleCMA(μ_z, Σ_z), reset c_{1:M} ← c0  ▷ draw M samples of z
+17:   for q iterations do
+18:     for i ← 1 to M do                             ▷ this loop is batched
+19:       g_i ← L_mask(G_θ(z_i, c_i), T_{φ*}(y), T_{φ*}(m))
+20:       (z_i, c_i) ← (z_i, c_i) − η·∇_{z,c} g_i
+21:   g_{1:M} ← L_mask(T⁻¹_{φ*}(G_θ(z_{1:M}, c_{1:M})), y, m)       ▷ recompute loss with inverse
+22:   (μ_z, Σ_z) ← UpdateCMA(z_{1:M}, g_{1:M}, μ_z, Σ_z)            ▷ Section 3.4
+23: Set (z*, c*) ← arg min_{z,c} g_{1:M}              ▷ choose the best z, c
+```
+
+Next, we initialize the spatial transformation vector $\psi_0 = [s_{x_0}, s_{y_0}, t_{y_0}, t_{x_0}]$ such that the foreground object is well-aligned with the statistics of the BigGAN model. As visualized in Figures 3 and 4, $(\bar{y}, \bar{x}) = (137, 127)$ is the center of BigGAN-generated objects and $(\bar{h}, \bar{w}) = (213, 210)$ is the mode of object sizes. We define $(h_{\mathbf{m}}, w_{\mathbf{m}})$ to be the height and width and $(y_{\mathbf{m}}, x_{\mathbf{m}})$ to be the center of the masked region. We initialize the scale factors as $s_{y_0} = s_{x_0} = \max \left(\frac{h_{\mathbf{m}}}{\bar{h}}, \frac{w_{\mathbf{m}}}{\bar{w}}\right)$ and the translations as $(t_{y_0}, t_{x_0}) = \left(\frac{\bar{y} - y_{\mathbf{m}}}{2}, \frac{\bar{x} - x_{\mathbf{m}}}{2}\right)$. Finally, the brightness parameter is initialized as $\gamma_0 = 1$.
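
This initialization reduces to a few lines of arithmetic. A sketch (NumPy; the function name and the mask-box handling are ours, and the constants are the precomputed statistics for the object center and the mode of object sizes):

```python
import numpy as np

# Precomputed BigGAN statistics (Figures 3 and 4), in pixels:
CENTER_Y, CENTER_X = 137, 127      # center of generated objects
SIZE_H, SIZE_W = 213, 210          # mode of object sizes

def init_spatial_transform(mask):
    """Initialize psi_0 = [s_x0, s_y0, t_y0, t_x0] from a binary mask."""
    ys, xs = np.nonzero(mask)
    h_m = ys.max() - ys.min() + 1          # mask height
    w_m = xs.max() - xs.min() + 1          # mask width
    y_m = (ys.max() + ys.min()) / 2.0      # mask center (y)
    x_m = (xs.max() + xs.min()) / 2.0      # mask center (x)
    s = max(h_m / SIZE_H, w_m / SIZE_W)    # shared scale: s_y0 = s_x0
    t_y = (CENTER_Y - y_m) / 2.0
    t_x = (CENTER_X - x_m) / 2.0
    return s, s, t_y, t_x

mask = np.zeros((256, 256))
mask[100:207, 50:155] = 1.0                # a 107 x 105 foreground box
s_x0, s_y0, t_y0, t_x0 = init_spatial_transform(mask)
gamma_0 = 1.0                              # initial brightness parameter
```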
+
+Choice of optimizer. We find the choice of optimizer critical: BasinCMA [7] provides better results than the optimizers previously used for the GAN inversion problem. Previous work [60, 1] has exclusively used gradient-based optimizers such as LBFGS [40] and ADAM [33]. However, such methods are prone to poor local minima, requiring multiple random initial seeds. Covariance Matrix Adaptation (CMA) [26], a gradient-free optimizer, finds better solutions than gradient-based methods. CMA maintains a Gaussian distribution in parameter space, $\mathbf{z} \sim \mathcal{N}(\mu, \Sigma)$. At each iteration, $N$ samples are drawn, and the Gaussian is updated using the loss; the details of this update are described in Hansen and Ostermeier [26]. A weakness of CMA is that, because it does not use gradients, it is slow to refine results once near a solution. To address this, we use a variant, BasinCMA [53], that alternates between CMA updates and ADAM optimization, where the CMA distribution is updated after taking $M$ gradient steps.
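
To make the alternation concrete, here is a deliberately simplified sketch of the BasinCMA idea on a 1-D multi-modal toy function. An isotropic Gaussian with elitist refitting stands in for the full CMA update, and plain gradient descent stands in for ADAM; all constants and names are ours. Each sampled start point is "polished" with a few gradient steps, and the sampling distribution is refit to the start points whose polished losses were best:

```python
import numpy as np

def f(x):          # 1-D Rastrigin-like landscape: many local minima, global at 0
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def polish(x, steps=25, lr=0.002):
    # inner gradient stage (stand-in for ADAM); works on scalars or arrays
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0                       # initial sampling distribution
for _ in range(30):                        # outer gradient-free loop
    starts = rng.normal(mu, sigma, size=16)
    losses = f(polish(starts))             # score each start after polishing
    elites = starts[np.argsort(losses)[:4]]
    mu, sigma = elites.mean(), max(elites.std(), 0.3)
x_basin = polish(mu, steps=200)

x_gd = polish(5.0, steps=200)              # baseline: gradient descent alone
assert f(x_basin) < f(x_gd)                # escapes the local minimum near 5
```

Gradient descent alone stays trapped in the basin where it starts, while the sampled starts let the outer loop compare basins by their polished losses and drift toward the global one.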
+
+Next, we describe the optimization procedure between the transformation parameters $\phi$ and latent variables $\mathbf{z},\mathbf{c}$ .
+
+Choice of loss function. In Equation 5, we described two variants of our optimization objective. Ideally, we would like to optimize the former variant, $\mathcal{L}_{\mathrm{mask}}(\mathcal{T}_{\phi^{-1}}(G_{\theta}(\mathbf{z},\mathbf{c})),\mathbf{y},\mathbf{m})$, so that the target image $\mathbf{y}$ is consistent throughout optimization; we do so for all CMA updates. For gradient-based optimization, however, we found that back-propagating through a grid-sampler hurts performance, especially for small objects. A potential reason is that when shrinking a generated image, the grid-sampling operation samples the image sparsely; without low-pass filtering, this produces a noisy, aliased result [22, 45]. Therefore, for gradient-based optimization, we optimize the latter variant, $\mathcal{L}_{\mathrm{mask}}(G_{\theta}(\mathbf{z},\mathbf{c}),\mathcal{T}_{\phi}(\mathbf{y}),\mathcal{T}_{\phi}(\mathbf{m}))$.
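
The aliasing issue can be seen with a one-line experiment (an illustration in NumPy, not the actual grid-sampler): shrinking a fine stripe pattern by naive subsampling keeps only one of the two stripe values, while averaging with a crude box filter before sampling preserves the true mean intensity.

```python
import numpy as np

img = np.zeros((8, 8))
img[:, ::2] = 1.0                 # vertical stripes of period 2: mean intensity 0.5

naive = img[:, ::2]               # subsample without low-pass filtering
blurred = 0.5 * (img[:, ::2] + img[:, 1::2])   # 1x2 box filter, then sample

assert naive.mean() == 1.0        # aliased: the dark stripes vanish entirely
assert blurred.mean() == 0.5      # filtered result keeps the true mean
```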
+
+Two-stage approach. Historically, searching over spatial transformations with reconstruction loss as guidance has proven to be a difficult task in computer vision [3]. We find this to be the case in our application as well, and that joint optimization over the transformation $\phi$ , and variables $\mathbf{z}, \mathbf{c}$ is unstable. We use a two-stage approach, as shown in Algorithm 1, where we first search for $\phi^*$ and use $\phi^*$ to optimize for $\mathbf{z}^*$ and $\mathbf{c}^*$ . In both stages, a gradient-free CMA outer loop maintains a distribution over the variable of interest in that stage. In the inner loop, ADAM is used to quickly find the local optimum over latent variables $\mathbf{z}, \mathbf{c}$ .
+
+To optimize for the transformation parameter, we initialize the CMA distribution for $\phi$: the mean $\mu_{\phi}$ is initialized with the pre-computed statistics $\phi_0$, and $\Sigma_{\phi}$ is set to $0.1 \cdot \mathbf{I}$ (Alg. 1, line 2). A set of transformations $\phi_{1:N}$ is drawn from CMA, and the latent variables $\mathbf{z}_{1:N}$ are randomly initialized (Alg. 1, lines 4-5). To evaluate the sampled transformations, we take gradient updates w.r.t. $\mathbf{z}_{1:N}, \mathbf{c}_{1:N}$ for $m = 30$ iterations (Alg. 1, lines 6-9). This inner loop can be interpreted as quickly assessing the viability of a given spatial transform. The final samples of $\mathbf{z}_{1:N}, \mathbf{c}_{1:N}, \phi_{1:N}$ are used to compute the loss for the CMA update (Alg. 1, lines 10-11). This procedure is repeated for $n = 30$ iterations, and the final transformation $\phi^{*}$ is set to the mean of the current CMA estimate (Alg. 1, line 12).
+
+After solving for the transformation $\phi^{*}$, a similar procedure is used to optimize for $\mathbf{z}$. We initialize the CMA distribution for $\mathbf{z}$ with $\mu_{\mathbf{z}} = \mathbf{0}$ and $\Sigma_{\mathbf{z}} = \mathbf{I}$ (Alg. 1, line 14). $M$ samples $\mathbf{z}_{1:M}$ are drawn from the CMA distribution, and $\mathbf{c}_{1:M}$ is set to the initial predicted class vector (Alg. 1, line 16). The drawn samples are evaluated by taking $q = 30$ gradient updates w.r.t. $\mathbf{z}_{1:M}$ and $\mathbf{c}_{1:M}$ (Alg. 1, lines 17-20). The optimized samples are used to compute the loss for the CMA update (Alg. 1, lines 21-22). This procedure is repeated for $p = 30$ iterations; on the final iteration, we instead take 300 gradient updates to obtain the final solution $\mathbf{z}^{*}, \mathbf{c}^{*}$ (Alg. 1, line 23).
+
+# 3.5 Fine-tuning
+
+So far, we have located an approximate match within a generative model. We hypothesize that if a high-quality match is found, fine-tuning to fit the image will preserve the editability of the generative model. On the contrary, if a poor match is found, the fine-tuning will corrupt the network and result in low-quality images after editing. Next, we describe this fine-tuning process.
+
+
+Fig. 6. ImageNet comparisons: Comparison across various methods on inverting ImageNet images without fine-tuning. A rectangular mask centered around the object of interest is provided for all methods using MaskRCNN [27]. The losses are weighted by the mask. BasinCMA+Transform is our full method.
+
+To synthesize the missing details that the generator could not produce, we wish to fine-tune our model after solving for the latent vector $\mathbf{z}$ , the class vector $\mathbf{c}$ , and transformation parameters $\phi$ . Unlike previous work [5], which proposed to produce the residual features using a small, auxiliary network, we update the weights of the original GAN directly. This allows us to perform edits that spatially deform the image. After obtaining the values for $\phi, \mathbf{z}, \mathbf{c}$ in our projection step, we fine-tune the weights of the generative model. During fine-tuning, the full objective function is:
+
+$$
+\underset{\mathbf{z}, \mathbf{c}, \phi, \theta}{\arg\min}\; \mathcal{L}_{\mathrm{mask}}\left(\mathcal{T}_{\phi^{-1}}\left(G_{\theta}(\mathbf{z}, \mathbf{c})\right), \mathbf{y}, \mathbf{m}\right) + \lambda \|\theta - \theta_{0}\|_{2} \quad \text{s.t. } C(\mathbf{z}) \leq C_{\max} \tag{6}
+$$
+
+We put an $\ell_2$ -regularization on the weights, such that the fine-tuned weights do not deviate too much from the original weights $\theta_0$ . In doing so, we can prevent overfitting and preserve the generative model's ability to edit the final image. We use $\lambda = 10^3$ for our results with fine-tuning.
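
For a single quadratic weight, the effect of the $\ell_2$ term has a closed form: minimizing $(\theta - \theta_{\mathrm{fit}})^2 + \lambda(\theta - \theta_0)^2$ gives $\theta^* = (\theta_{\mathrm{fit}} + \lambda\theta_0)/(1 + \lambda)$, so a larger $\lambda$ keeps the fine-tuned weight closer to the original. A tiny numerical check (illustrative only; the real objective is the masked loss of Equation 6, and the values here are made up):

```python
theta_0, theta_fit = 1.0, 3.0     # original weight vs. unregularized optimum

def theta_star(lam):
    # argmin over theta of (theta - theta_fit)^2 + lam * (theta - theta_0)^2
    return (theta_fit + lam * theta_0) / (1.0 + lam)

assert theta_star(0.0) == theta_fit            # no regularization: overfit freely
assert abs(theta_star(1e3) - theta_0) < 0.01   # lambda = 10^3 pins weights near theta_0
```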
+
+# 4 Results
+
+We demonstrate results on images from ImageNet [11], compare against baselines and ablations, examine cases that BigGAN cannot generate, and show failure cases. We further demonstrate the validity of our method on out-of-distribution data such as COCO and conduct perceptual studies on the edited images.
+
+The ImageNet dataset consists of 1.3 million images with 1,000 classes. We construct a test set by using PASCAL [19] classes as super-classes. There are a total of 229 classes from ImageNet that map to 16 out of 20 classes in PASCAL. We select 10 images at random from each super-class to construct a dataset of 160 images. We run off-the-shelf Mask-RCNN [27] and take the highest activating
+
+
+Fig. 7. ImageNet results: Results using our final method without fine-tuning. The final method uses BasinCMA as well as spatial and color transformation. Our generated results are inverted back for visualization. We also provide the ADAM baseline along with the blended result using Poisson blending [47].
+
+class to generate the detection boxes. We use the same bounding box for all baselines, and the optimization hyper-parameters are tuned on a separate set of ImageNet images.
+
+Experimental details. We use a learning rate of 0.05 for $\mathbf{z}$ and 0.0001 for $\mathbf{c}$ . We use AlexNet-LPIPS [36, 59] as our perceptual loss for all our methods. We did observe an improvement using VGG-LPIPS [52, 59] but found it to be 1.5 times slower. In our experiments, we use a total of 18 seeds for each method. After we project and edit the object, we blend the newly edited object with the original background using Poisson blending [47].
+
+For all of our baselines, we optimize both the latent vector $\mathbf{z}$ and class embedding $\mathbf{c}$ . We use the same mask $\mathbf{m}$ , and the same loss function throughout all of our experiments. The optimization details of our method and the baselines are in the Appendix A.
+
+Experiments. We show qualitative comparisons of various optimization methods for ImageNet images in Figure 6. We show results of our final method with blending in Figure 7. We then quantify these results by comparing against each
+
+| Optimizer | Spatial Transform | Color Transform | Encoder | L1 (avg) | L2 (avg) | Alex (avg) | VGG (avg) | L1 (best) | L2 (best) | Alex (best) | VGG (best) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ADAM | | | | 0.98 | 0.62 | 0.41 | 0.58 | 0.83 | 0.47 | 0.33 | 0.51 |
+| L-BFGS | | | | 1.04 | 0.68 | 0.45 | 0.61 | 0.85 | 0.49 | 0.35 | 0.53 |
+| CMA | | | | 0.96 | 0.61 | 0.39 | 0.55 | 0.91 | 0.54 | 0.37 | 0.54 |
+| None | | | ✓ | 1.61 | 1.39 | 0.62 | 0.68 | 1.35 | 1.00 | 0.55 | 0.64 |
+| ADAM | | | ✓ | 0.96 | 0.60 | 0.39 | 0.56 | 0.82 | 0.46 | 0.32 | 0.51 |
+| ADAM | | ✓ | | 0.98 | 0.62 | 0.42 | 0.58 | 0.83 | 0.47 | 0.33 | 0.51 |
+| ADAM | ✓ | | | 0.90 | 0.54 | 0.44 | 0.57 | 0.76 | 0.41 | 0.36 | 0.50 |
+| ADAM | ✓ | ✓ | ✓ | 0.88 | 0.52 | 0.42 | 0.55 | 0.76 | 0.40 | 0.36 | 0.49 |
+| CMA+ADAM | | | | 0.93 | 0.57 | 0.37 | 0.55 | 0.83 | 0.47 | 0.32 | 0.51 |
+| BasinCMA | | | | 0.82 | 0.48 | 0.29 | 0.51 | 0.78 | 0.43 | 0.26 | 0.49 |
+| BasinCMA | | | ✓ | 0.82 | 0.47 | 0.29 | 0.50 | 0.78 | 0.43 | 0.26 | 0.49 |
+| BasinCMA | | ✓ | | 0.81 | 0.46 | 0.29 | 0.50 | 0.77 | 0.42 | 0.25 | 0.49 |
+| BasinCMA | ✓ | | | 0.72 | 0.38 | 0.33 | 0.48 | 0.69 | 0.35 | 0.31 | 0.46 |
+| BasinCMA | ✓ | ✓ | ✓ | 0.71 | 0.37 | 0.32 | 0.47 | 0.68 | 0.34 | 0.31 | 0.46 |
+
+Table 1. ImageNet: We compare various methods for inverting images from ImageNet (lower is better); the last row is our full method. The model is optimized using L1 and AlexNet-LPIPS perceptual loss, and the mask and ground-truth class vector are provided for each method. We report the error under per-pixel (L1, L2) and perceptual [59] (Alex, VGG) metrics, showing the average and the best score among 18 random seeds. Methods that optimized for a transformation are inverted back to the original location, and the loss is computed on the masked region for a fair comparison. None of the results here are fine-tuned.
+
+
+Fig. 8. Failure cases: Our method fails to invert images that are not well represented by BigGAN. The mask is overlaid on the image.
+
+Fig. 9. Projection error by class: The average VGG-perceptual loss with standard error. The ImageNet images are sampled from the PASCAL super-class.
+
+
+method using various metrics in Table 1. For all methods, we do not fine-tune the results, and we compute the loss only inside the mask for a fair comparison. For methods optimized with a transformation, the projected images are inverted back before computing the loss. We further evaluate on the COCO dataset [38] in Table 3 and observe that our findings hold on this out-of-distribution dataset. The success of hybrid optimization over purely gradient-based techniques may indicate that the generative model's latent space is locally smooth but not globally smooth.
+
+Without transforming the object, we observed that the optimization often fails to find an approximate solution, particularly when the object is off-center or the image contains multiple objects. We observed that optimizing over the color transformation does not lead to drastic improvements, possibly because BigGAN can already closely match the color-gamut statistics of ImageNet images. Nonetheless, we found that optimizing for color transformation can slightly improve visual aesthetics. Of the color transformations we experimented with, optimizing for brightness gave the best result, and we use it for the color transformation throughout our
+
+| Class search (best of 18 seeds) | L1 | L2 | Alex | VGG |
+| --- | --- | --- | --- | --- |
+| Random Gaussian | 1.26 | 0.88 | 0.69 | 0.86 |
+| Random Class | 0.88 | 0.51 | 0.40 | 0.59 |
+| Predicted | 0.84 | 0.47 | 0.33 | 0.52 |
+| Ground Truth | 0.83 | 0.47 | 0.33 | 0.51 |
+
+| Method (best of 18 seeds) | L1 | L2 | Alex | VGG |
+| --- | --- | --- | --- | --- |
+| ADAM | 0.96 | 0.57 | 0.32 | 0.56 |
+| ADAM + Transform | 0.81 | 0.45 | 0.39 | 0.52 |
+| BasinCMA | 0.93 | 0.18 | 0.81 | 0.53 |
+| BasinCMA + Transform | 0.78 | 0.42 | 0.36 | 0.49 |
+
+Table 2. Class search: Given a fixed optimization method (ADAM), we compare different methods for initializing the class vector (lower is better). The baselines are: initialization from $\mathcal{N}(\mathbf{0},\mathbf{I})$, a random class, and the ground truth class.
+
+Table 3. Out-of-distribution: We compare different methods on the COCO dataset (lower is better); BigGAN was not trained on COCO images. The class labels are predicted using ResNext-101 and the masks are predicted using MaskRCNN.
+
+experiments. We further experimented with composing multiple color transformations but did not observe additional improvements.
+
+We found that CMA/BasinCMA is robust to initialization and is the better optimization technique regardless of whether the transform was applied. Note that we did not observe any benefit from optimizing the class vector $\mathbf{c}$ with CMA over gradient-based methods, perhaps because interpolation between continuous class vectors is not necessarily meaningful. Qualitatively, we often found class embeddings to be meaningful only in the close vicinity of the original class embedding or as an interpolation between two similar classes, and not beyond. As a result, we use gradient descent to search within the local neighborhood of the initial class embedding.
+
+We also provide an ablation study on how the numbers of CMA and ADAM updates in BasinCMA affect performance, and on how other gradient-free optimizers compare against CMA, in Appendix D. We provide additional qualitative results for our final method in Appendix C.
+
+Class initialization. In downstream editing applications, the user may not know the exact ImageNet class of the image. In Table 2, we compare different strategies for initializing the class vector; here, the classifier makes an incorrect prediction $20\%$ of the time. We found that using the predicted class of an ImageNet classifier performs almost as well as the ground-truth class. Since we optimize the class vector, we can potentially recover from a wrong initial guess if the predicted class is sufficiently close to the ground truth.
+
+Failure cases. Figure 8 shows some typical failure cases. We observed that our method fails to embed images that are not well modeled by BigGAN - outlier modes that may have been dropped. For example, we failed to project images that are unique, complicated, rotated, or heavily occluded. More sophisticated transformations such as rotations and perspective transformation could address many of these failure cases and are left for future work.
+
+Which classes does BigGAN struggle to generate? Given our method, we analyze which classes BigGAN, or our method, has difficulty generating. In Figure 9, we plot the mean and the standard error for each class. The plot is
+
+
+Fig. 10. Fine-tuned edits: Inversion results on various datasets. We use BasinCMA and transformation to optimize for the latent variables. After obtaining the projections, we fine-tune the model weights and perform edits in the latent and class vector space.
+
+from the output of the method optimized with ADAM + CMA + Transform. We observed a general tendency for the model to struggle in generating objects with delicate structures or with large inter-class variance.
+
+Image Edits. A good approximate solution allows us to fine-tune the generative model and recover the details easily. Good approximations require less fine-tuning and therefore preserve the original generative model's editing capabilities. In Figure 10, we embed images from various datasets, including CIFAR [35], LSUN [57], and images in the wild. We then fine-tune and edit the results by changing the latent or class vector. Prior works [48, 29] have found that certain latent vectors consistently control appearance changes in GAN-generated images, such as shifting an image horizontally or zooming in and out; we used the "shift" and "zoom" vectors [29] to modify our images. Additionally, we varied the class vector to a similar class and observed that editability stays consistent. Even for images like those in CIFAR, our method was able to find good solutions that allowed us to edit the image. In cases like LSUN, where there is no corresponding class for the scene, we observed that the edits ended up being meaningless.
+
+# 5 Discussion
+
+Projecting an image into the "space" of a generative model is a crucial step for editing applications. We have systematically explored methods for this projection. We show that using a gradient-free optimizer, CMA, produces higher quality matches. We account for biases in the generative model by enabling spatial and color transformations in the search, and the combination of these techniques finds a closer match and better serves downstream editing pipelines. Future work includes exploring more transformations, such as local geometric changes and global appearance changes, as well as modeling generation of multiple objects or foreground/background.
+
+Acknowledgements. We thank David Bau, Phillip Isola, Lucy Chai, and Erik Härkönen for discussions, and David Bau for encoder training code.
+
+# References
+
+1. Abdal, R., Qin, Y., Wonka, P.: Image2stylegan: How to embed images into the stylegan latent space? In: International Conference on Computer Vision (2019) 4, 6, 8
+2. Asim, M., Shamshad, F., Ahmed, A.: Blind image deconvolution using deep generative priors. In: British Machine Vision Conference (2018) 3
+3. Baker, S., Matthews, I.: Lucas-kanade 20 years on: A unifying framework. International Journal of Computer Vision 56(3), 221-255 (2004) 9
+4. Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.B.: Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics (TOG) (2009) 3
+5. Bau, D., Strobelt, H., Peebles, W., Wulff, J., Zhou, B., Zhu, J.Y., Torralba, A.: Semantic photo manipulation with a generative image prior. ACM Transactions on Graphics (TOG) (2019) 1, 3, 4, 10
+6. Bau, D., Zhu, J.Y., Strobelt, H., Zhou, B., Tenenbaum, J.B., Freeman, W.T., Torralba, A.: Gan dissection: Visualizing and understanding generative adversarial networks. In: International Conference on Learning Representations (2019) 1, 3
+7. Bau, D., Zhu, J.Y., Wulff, J., Peebles, W., Strobelt, H., Zhou, B., Torralba, A.: Seeing what a gan cannot generate. In: International Conference on Computer Vision (2019) 1, 3, 4, 6, 7, 8
+8. Brock, A., Donahue, J., Simonyan, K.: Large scale gan training for high fidelity natural image synthesis. In: International Conference on Learning Representations (2019) 2, 4, 5
+9. Brock, A., Lim, T., Ritchie, J.M., Weston, N.: Neural photo editing with introspective adversarial networks. In: International Conference on Learning Representations (2017) 1, 3
+10. Creswell, A., Bharath, A.A.: Inverting the generator of a generative adversarial network. IEEE Transactions on Neural Networks and Learning Systems 30(7), 1967-1974 (2018) 3
+11. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (2009) 10
+12. Donahue, J., Krähenbühl, P., Darrell, T.: Adversarial feature learning. In: International Conference on Learning Representations (2017) 3
+13. Donahue, J., Simonyan, K.: Large scale adversarial representation learning. In: Advances in Neural Information Processing Systems (2019) 3
+14. Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: Advances in Neural Information Processing Systems (2016) 5
+15. Dosovitskiy, A., Brox, T.: Inverting visual representations with convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2016) 3
+16. Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., Courville, A.: Adversarially learned inference. In: International Conference on Learning Representations (2017) 3
+17. Efros, A.A., Freeman, W.T.: Image quilting for texture synthesis and transfer. In: ACM SIGGRAPH (2001) 3
+18. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: International Conference on Computer Vision (1999) 3
+
+19. Everingham, M., Eslami, S.M.A., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision 111(1), 98-136 (2015) 10
+20. Fang, T., Schwing, A.: Co-generation with gans using ais based hmc. In: Advances in Neural Information Processing Systems (2019) 3
+21. Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2016) 5
+22. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Pearson, 2nd edn. (1992) 9
+23. Goodfellow, I.: NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016) 6
+24. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems (2014) 1
+25. Gu, J., Shen, Y., Zhou, B.: Image processing using multi-code gan prior. In: IEEE Conference on Computer Vision and Pattern Recognition (2020) 3
+26. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation (2001) 2, 8
+27. He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask r-cnn. In: International Conference on Computer Vision (2017) 5, 6, 10
+28. Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: ACM SIGGRAPH (2001) 3
+29. Jahanian, A., Chai, L., Isola, P.: On the "steerability" of generative adversarial networks. In: International Conference on Learning Representations (2020) 1, 6, 14
+30. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision (2016) 5
+31. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of gans for improved quality, stability, and variation. In: International Conference on Learning Representations (2018) 7
+32. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019) 1, 7
+33. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015) 2, 8
+34. Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. In: Advances in Neural Information Processing Systems (2018) 5
+35. Krizhevsky, A.: Learning Multiple Layers of Features from Tiny Images. Master's thesis, University of Toronto (2009) 14
+36. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012) 11
+37. Levin, A., Lischinski, D., Weiss, Y.: A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2), 228-242 (2007) 3
+38. Lin, T., Maire, M., Belongie, S.J., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: common objects in context. In: European Conference on Computer Vision (2014) 12
+39. Lipton, Z.C., Tripathi, S.: Precise recovery of latent vectors from generative adversarial networks. ICLR workshop (2017) 3
+
+40. Liu, D.C., Nocedal, J.: On the limited memory bfgs method for large scale optimization. Mathematical programming 45(1-3), 503-528 (1989) 2, 8
+41. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: IEEE Conference on Computer Vision and Pattern Recognition (2015) 3
+42. Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Advances in Neural Information Processing Systems (2015) 6
+43. Olah, C., Mordvintsev, A., Schubert, L.: Feature visualization. Distill 2(11), e7 (2017) 3
+44. Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., Mordvintsev, A.: The building blocks of interpretability. Distill 3(3), e10 (2018) 3
+45. Oppenheim, A.V., Schafer, R.W., Buck, J.R.: Discrete-Time Signal Processing. Pearson, 2nd edn. (1999) 9
+46. Perarnau, G., Van De Weijer, J., Raducanu, B., Álvarez, J.M.: Invertible conditional gans for image editing. In: NIPS 2016 Workshop on Adversarial Training (2016) 1, 3
+47. Pérez, P., Gangnet, M., Blake, A.: Poisson image editing. ACM Transactions on Graphics (TOG) 22(3), 313-318 (2003) 3, 11
+48. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. In: International Conference on Learning Representations (2016) 1, 7, 14
+49. Raj, Ankit, Y.L., Bresler, Y.: Gan-based projector for faster recovery with convergence guarantees in linear inverse problems. In: International Conference on Computer Vision (2019) 3
+50. Shah, V., Hegde, C.: Solving linear inverse problems using gan priors: An algorithm with provable guarantees. In: International Conference on Acoustics, Speech, and Signal Processing (2018) 3
+51. Shen, Y., Gu, J., Tang, X., Zhou, B.: Interpreting the latent space of gans for semantic face editing. In: IEEE Conference on Computer Vision and Pattern Recognition (2020) 3
+52. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015) 11
+53. Wampler, K., Popovic, Z.: Optimal gait and form for animal locomotion. ACM Transactions on Graphics (TOG) (2009) 8
+54. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P., et al.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600-612 (2004) 5
+55. Xie, S., Girshick, R., Dollár, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431 (2016) 7
+56. Yeh, R.A., Chen, C., Yian Lim, T., Schwing, A.G., Hasegawa-Johnson, M., Do, M.N.: Semantic image inpainting with deep generative models. In: IEEE Conference on Computer Vision and Pattern Recognition (2017) 3
+57. Yu, F., Zhang, Y., Song, S., Seff, A., Xiao, J.: Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) 14
+58. Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: European Conference on Computer Vision (2016) 5
+59. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep networks as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition (2018) 5, 11, 12
+
+60. Zhu, J.Y., Krahenbuhl, P., Shechtman, E., Efros, A.A.: Generative visual manipulation on the natural image manifold. In: European Conference on Computer Vision (2016) 1, 3, 8
\ No newline at end of file
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/images.zip b/aligningandprojectingimagestoclassconditionalgenerativenetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..052a73187e66df22b93652ae8f3d8ad81ad4b477
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca5de30d2e8fb9180f1677f4a01e81dc382dfc2070650a04dded17d61810b2e6
+size 705864
diff --git a/aligningandprojectingimagestoclassconditionalgenerativenetworks/layout.json b/aligningandprojectingimagestoclassconditionalgenerativenetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..17304c1be30405597ae764aa85f86a2def4a5ac2
--- /dev/null
+++ b/aligningandprojectingimagestoclassconditionalgenerativenetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55204ff89bc3810060b39d3d430e752c5226a79c899ad49e28454b52cec1c58c
+size 455230
diff --git a/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_content_list.json b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4a66f46a929c5a3f603562a132d61c92fd317037
--- /dev/null
+++ b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30dd1524c5599dbd089804f3c96dd044d0a25c2a75913219f1f2f14b63da37e9
+size 75597
diff --git a/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_model.json b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b4610083c1eda0d071a8e7d2b7b6a16626333465
--- /dev/null
+++ b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f7097a670078823980e9be27e36c73cc9b5490b48f900634d9c3e0b72dfb327
+size 95218
diff --git a/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_origin.pdf b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3eaee44e5f4407dd760376332921a1d32c12e965
--- /dev/null
+++ b/aligningvideosinspaceandtime/0e1970d7-0f59-4d01-b286-c1268dd840c1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7bdd0fbae0cc2b658efda18b0aae03992c3bf4cc74093d879beb8695109a0d3f
+size 2005959
diff --git a/aligningvideosinspaceandtime/full.md b/aligningvideosinspaceandtime/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4934e1a073e1ea752148989338095028ed6e64e
--- /dev/null
+++ b/aligningvideosinspaceandtime/full.md
@@ -0,0 +1,321 @@
+# Aligning Videos in Space and Time
+
+Senthil Purushwalkam\*, Tian Ye\*, Saurabh Gupta, and Abhinav Gupta\*
+
+1 Carnegie Mellon University
+
+2 University of Illinois at Urbana-Champaign
+
+3 Facebook AI Research
+
+Abstract. In this paper, we focus on the task of extracting visual correspondences across videos. Given a query video clip from an action class, we aim to align it with training videos in space and time. Obtaining training data for such a fine-grained alignment task is challenging and often ambiguous. Hence, we propose a novel alignment procedure that learns such correspondence in space and time via cross video cycle-consistency. During training, given a pair of videos, we compute cycles that connect patches in a given frame in the first video by matching through frames in the second video. Cycles that connect overlapping patches together are encouraged to score higher than cycles that connect non-overlapping patches. Our experiments on the Penn Action and Pouring datasets demonstrate that the proposed method can successfully learn to correspond semantically similar patches across videos, and learns representations that are sensitive to object and action states.
+
+Keywords: understanding via association, video alignment, visual correspondences
+
+# 1 Introduction
+
+Ask not "what is this?", ask "what is this like".
+
+Moshe Bar
+
+What does it mean to understand a video? The most popular answer right now is labeling videos with categories such as "opening bottle". However, action categories hardly tell us anything about the process: they do not tell us where the bottle is or when it was opened, let alone the other states the bottle can exist in and what parts are involved in which transitions. Dense semantic labeling is a non-starter because exhaustive and accurate labels for objects, their states and actions are not easy to gather.
+
+In this paper, we investigate the alternative of understanding via association, i.e. video understanding by extracting visual correspondences between training
+
+
+Fig. 1: Learning Correspondence via Cycle Supervision. Features that allow sequences of matches (cycles) that begin and end at the same patch are desired.
+
+and test videos. Focusing on 'what is a given video like', rather than 'what class it belongs to', side-steps the problem of hand-defining a huge taxonomy and dense labeling. Inspired by this, in this paper, we focus on the task of creating associations or visual correspondences across training and test videos. More specifically, we try to align videos in both space and time. This poses two core and inter-related questions: (a) what is the granularity of visual correspondence? (b) what is the right distance metric or features to extract this correspondence?
+
+Let us focus on the first issue: the granularity, i.e. the level at which we should establish correspondence: pixel-level, patch-level or frame-level. The trade-off here is between discriminability and the amount of data required for good correspondences. While full frames are more discriminative (and easy to match), they are also quite specific. For example, finding a frame that depicts the same relation between the bottle and the cup as shown in Figure 1 would require large amounts of training data before a good full-frame correspondence can be found. Consequently, past work with hand-crafted descriptors focused on establishing visual correspondence by matching interest points [30,47] and image patches [42]. However, given the lack of dense supervision, recent work that revisits these ideas through learning [9] seeks to correspond whole frames, through temporal consistency of frames. While this works well for full-frame correspondence, it does not produce patch-level correspondences, which are both richer and more widely applicable. This motivates our pursuit of a method to obtain dense patch-level correspondences across videos.
+
+The second issue at hand is of how to learn a distance metric (or equivalently an appropriate feature space) for extracting visual correspondences. Classical work focused on using manually-defined features [30,47] with a variety of distance
+
+metrics. However, given the widespread effectiveness of supervised end-to-end learning for computer vision tasks [25] (including visual correspondence [36]), it is natural to ask how to leverage learning for this task, i.e. what is the right objective function and supervision for learning features for obtaining correspondences? The conventional approach would be to reuse generic features from a standard task such as image classification or action recognition. As our experiments will demonstrate, neither features learned for ImageNet classification nor features trained for action recognition generate good correspondences, due to their inability to encode object states. At the same time, direct manual annotation of visual correspondence across videos is challenging and infeasible to scale. This necessitates the design of a self-supervised approach.
+
+Interestingly, some recent efforts pursue this direction, and exploit consistency in correspondences as supervision to learn frame-level correspondence [9], or intra-video correspondence (tracking) [52]. Our proposed method extends these methods to learn patch-level correspondences across videos via cross video cycle-consistency. During training, given a pair of videos, we compute matches for a patch forward in time in the first video, then match to a patch in the second video, match this patch backward in time in the second video and finally match back to a patch in the first video. This sequence of patches is referred to as a 'cycle'. Cycles that start and end at overlapping patches are encouraged to score higher than cycles that connect non-overlapping patches (see Figure 1). This allows our approach to generate finer level correspondence across videos (as SIFT Flow [29] does for images), while also harnessing the capabilities of the modern end-to-end learning approaches. Our experiments show that features learned using our approach are more effective at corresponding objects in the same state across videos, than features trained for ImageNet classification, or for action classification.
+
+# 2 Related Work
+
+Our work learns space-time visual correspondence by use of cycle consistency. In this section, we present a survey of related literature on video understanding (datasets, tasks and techniques), correspondence techniques in videos, and use of self-supervision and cycle consistency for learning features and correspondences.
+
+Video Datasets and Tasks. A number of past efforts have been devoted to collecting new video understanding datasets, and extending static image tasks to videos. Leading efforts in recent times include datasets like Kinetics [22], AvA [16], Charades [40], EPIC Kitchen [5], VLOG [11], MultiTHUMOS [56]. While some of these datasets focus on action classification, a number of them investigate new tasks, such as temporal action localization [56], detection of subjects, verbs and objects [16], classification in first-person videos [5], and analysis of crowd-sourced videos [15, 40]. These works extend video understanding by scaling it up.
+
+Architectures for Action Classification. Researchers have also pursued design of expressive neural network architectures for the task of action classification [4, 41, 45, 46, 48, 54]. Some works investigate architectures to encourage the
+
+modelling of time flow [33, 38], or long-range temporal dependencies [10, 50, 53], or object tracking [13]. While these models often capture useful intuitions, their focus is still on optimizing models for the task of action classification. Hence, even though the model has the right inductive biases, learning is bottlenecked by the low-entropy output space of action class labels.
+
+Beyond Action Recognition. Many efforts have also pursued the task of detailed video understanding in recent times. For example, video prediction tasks [7, 26] have the promise to go beyond action classification, as they force the model to predict much more than what can be effectively annotated. Wang et al. [49] model actions as operators that transform states of objects, and Nagarajan et al. [34] learn about how humans interact with different objects. In contrast, we take a non-parametric approach, and understand videos by understanding what they are like, and corresponding them with other videos in space and time.
+
+Cycle Consistency and Correspondence. Forward-backward consistency and cycle consistency have been used in computer vision for establishing correspondence in an unsupervised manner [21,39]. Zhou et al. [61] use cycle consistency to establish dense correspondence between 3D shapes, Godard et al. [14] use cycle consistency for learning to predict depth, Zhu et al. [62] use cycle consistency to learn how to generate images, and Wang et al. [52] use cycle consistency to learn features for correspondence over time in videos. The work of Wang et al. [52] is a primary motivation for ours, and we investigate the use of cycle consistency to learn cross-video correspondences. To our knowledge, ours is the first work to investigate spatio-temporal alignment across videos with cycle consistency.
+
+Spatial Correspondence. Finding correspondences across video frames is a fundamental problem and has been actively studied for decades. Optical flow [3] seeks to establish correspondences at the pixel level. While numerous effective approaches have been proposed [31, 32, 43, 44], optical flow estimation is still challenging over long time periods, and fails across videos. This issue is partially alleviated by performing correspondence at the patch level. SIFT Flow [29], a seminal work in this domain, uses SIFT descriptors [30] to match patches across scenes. SIFT Flow can be used to transfer labels from training data to test samples in many applications [12, 28, 37, 57]. However, patch correspondence approaches [17, 23, 60] rely on the local appearance of the patches for matching. We use a similar method to obtain spatio-temporal correspondences across videos, but account for the object states and not just the local appearance.
+
+Cross-video Spatio-Temporal Alignment. Past works have studied spatio-temporal alignment in videos. Sermanet et al. [38] learn time-sensitive features in a supervised manner by collecting time-aligned data for an action. Alayrac et al. [2] learn features sensitive to object states by classifying object bounding boxes as before or after an action. Dwibedi et al. [9] focus on learning temporal correspondence by enforcing consistency in nearest neighbors at the frame level. This focus on frame-level modeling ignores spatial alignment. In contrast, we focus on corresponding image patches across videos in time and space. This leads to learning of state-sensitive object representations (as opposed to scene representations). We are not aware of any past work that tackles the problem of establishing
+
+
+Fig. 2: What is a good correspondence? A good correspondence is a match where patches correspond to the same semantic part, and are in the same state with respect to the depicted action.
+
+spatio-temporal correspondences across videos.
+
+Self-supervision. A number of past works employ self-supervised learning to alleviate the need for semantic supervision from humans to acquire generic image representations. Past works have employed images [8, 58], videos [33, 35, 38, 51, 52], and also motor actions [1, 20]. Our alignment of videos in space and time, can also be seen as a way to learn representations in a self-supervised manner. However, we learn features that are sensitive to object state, as opposed to generic image features learned by these past methods.
+
+# 3 Alignment via Cross-Video Cycle Consistency
+
+Our goal is to learn how to spatio-temporally align two videos. We tackle this problem by extracting patch-level visual correspondence across two videos. But what defines a good correspondence? A good spatio-temporal correspondence is one where two patches from different videos are linked when they depict the same objects (or their parts) and are in similar states. For example, the two patches depicting the rims of the cups in Figure 2 are in correspondence because the patches correspond to the same part and the cups are in the same state (tilted for pouring). On the other hand, the other two correspondences are bad because either the patches correspond to different object parts or the states of the objects do not match.
+
+While it is easy to learn features that can correspond the same objects in various states over time by learning to track [51, 52], it is far more challenging to learn features that correspond different objects in the same state. We specifically tackle this problem in our proposed approach. One of the biggest challenges here is supervision: it is difficult to obtain labels for such a dense correspondence task, so we pursue a weakly-supervised approach. Our central idea is to employ cross-video cycle-consistency. Specifically, we create cycles across videos of the same action class that track a patch within a video, match it to a patch in another video, track that patch back in time, and then match back to the original video. Figure 3 illustrates the idea. Cycles that can track back to the same patch are encouraged (green cycle), while cycles that get back to a different
+
+
+Fig. 3: Overview: Given tracks in two videos of the same class (shown by white dotted lines), we learn an embedding to correspond patches across videos. This is done by computing cycles (pairs of cross-video edges) that correctly track a patch back to itself. We compute the best cycle that corresponds a patch to itself (shown in green) and encourage it to have a higher similarity than the best cycle that corresponds a patch to a different patch (shown in red) via a margin loss.
+
+patch in the first video are discouraged (red cycles). Enforcing this objective on a large collection of foreground patches would lead to choosing semantically aligned tracks. However, note that this could lead to some trivial cycles involving very short (or single frame) tracks in the second video. It is important to disregard such solutions in order to focus on cycles where object states vary (we disregard cycles that involve tracks of length 3 or less). We now formally describe the training objective.
+
+# 3.1 Formulation
+
+Let's assume we have a tracker $\mathcal{T}$ , that given a video $V$ , produces a set of tracks on the video. We will use $V_{m:n}^{i}$ to denote the sequence of patches in track $i$ starting from frame $m$ and ending at frame $n$ . The image patch for track $i$ in frame $m$ is denoted as $V_{m}^{i}$ (see Figure 4). In this work, for obtaining tracks, we use the tracker proposed in [52] which is trained in an unsupervised manner. $f_{\theta}$ , realized via convolutional neural networks, denotes the desired feature embedding that establishes visual correspondence across different videos.
+
+Consider the cycle shown in Figure 4: $V_{m}^{i} \to V_{n}^{i} \to W_{q}^{j} \to W_{p}^{j} \to V_{m}^{k}$ . This cycle has the following jumps: forward-tracking in $V$ , matching $V$ to $W$ , backward-tracking in $W$ and matching back from $W$ to $V$ . We represent this cycle as $\{V_{m:n}^{i}, W_{p:q}^{j}, V_{m}^{k}\}$ . The score of this cycle can be expressed as the sum of the patch similarities of the matches involved. However, note that the first and third matches in a cycle are extracted using the off-the-shelf tracker, and therefore do not depend on $f_{\theta}$ and can be assumed to have a constant score. Therefore, the final score of a cycle can be computed using cosine similarity $s$ as:
+
+
+Fig. 4: Formulation: The score of a cycle is sum of the scores of two jumps as per $f_{\theta}$ .
+
+$$
+S\left(\left\{V_{m:n}^{i}, W_{p:q}^{j}, V_{m}^{k}\right\}\right) = \underbrace{s\left(f_{\theta}\left(V_{n}^{i}\right), f_{\theta}\left(W_{q}^{j}\right)\right)}_{\text{jump from }V\text{ (frame }n\text{) to }W\text{ (frame }q\text{)}} + \underbrace{s\left(f_{\theta}\left(W_{p}^{j}\right), f_{\theta}\left(V_{m}^{k}\right)\right)}_{\text{jump from }W\text{ (frame }p\text{) to }V\text{ (frame }m\text{)}} \tag{1}
+$$
+
+Given a starting patch $V_{m}^{i}$ and an ending patch $V_{m}^{k}$ , there can be numerous cycles depending on the length $n$ considered in video $V$ , the segment $(p,q)$ of video $W$ considered and the track $j$ chosen in video $W$ . When the patches $V_{m}^{i}$ and $V_{m}^{k}$ are highly overlapping, we expect the best cycle to have a high score. On the other hand, when these patches do not overlap, we want all the cycles to score low. We formulate this objective to optimize $f_{\theta}$ as a margin loss. First, for the pair of patches $V_{m}^{i}, V_{m}^{k}$ , we compute the score of the best cycle as:
+
+$$
+\kappa \left(V _ {m} ^ {i}, V _ {m} ^ {k}\right) = \max _ {n, p, q, j} S \left(\left\{V _ {m: n} ^ {i}, W _ {p: q} ^ {j}, V _ {m} ^ {k} \right\}\right) \tag {2}
+$$
+
+The margin loss can then be formulated as:
+
+$$
+\max\left[0, \, -\kappa\left(V_{m}^{i}, V_{m}^{i_{+}}\right) + \kappa\left(V_{m}^{i}, V_{m}^{i_{-}}\right) + \delta\right]
+$$
+
+$$
+\forall \, i_{+}, i_{-}: \ \operatorname{IoU}\left(V_{m}^{i}, V_{m}^{i_{+}}\right) \geq 0.5 \ \text{ and } \ \operatorname{IoU}\left(V_{m}^{i}, V_{m}^{i_{-}}\right) < 0.5 \tag{3}
+$$
+
+where $\delta$ is a fixed margin. This objective can be optimized using stochastic gradient descent to learn the function $f_{\theta}$ .
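The cycle score (Eq. 1), the best-cycle search (Eq. 2), and the margin loss (Eq. 3) can be sketched as below. This is a simplified, framework-free illustration with hypothetical function names and an assumed margin value; a real implementation would use differentiable tensor operations so gradients reach $f_{\theta}$, rather than the brute-force NumPy search shown here:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity s(a, b) between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def cycle_score(feat_V_n_i, feat_W_q_j, feat_W_p_j, feat_V_m_k):
    """Eq. 1: score of cycle {V_{m:n}^i, W_{p:q}^j, V_m^k}, the sum of the
    two cross-video jumps (within-video tracking contributes a constant)."""
    return cosine(feat_V_n_i, feat_W_q_j) + cosine(feat_W_p_j, feat_V_m_k)

def best_cycle_score(end_feats_V_i, track_feats_W, start_feat_V_k):
    """Eq. 2: kappa = max over n, p, q, j of the cycle score.
    end_feats_V_i:  features f(V_n^i) for candidate end frames n of track i in V
    track_feats_W:  dict j -> list of features f(W_t^j) along track j in W
    start_feat_V_k: feature f(V_m^k) of the candidate return patch in V
    """
    best = -np.inf
    for f_vn in end_feats_V_i:                    # max over n
        for feats_j in track_feats_W.values():    # max over track j
            for q in range(len(feats_j)):         # max over q
                for p in range(q + 1):            # max over p <= q
                    s_cyc = cycle_score(f_vn, feats_j[q], feats_j[p], start_feat_V_k)
                    best = max(best, s_cyc)
    return best

def margin_loss(kappa_pos, kappa_neg, delta=0.5):
    """Eq. 3: hinge loss encouraging the best positive cycle (IoU >= 0.5)
    to outscore the best negative cycle (IoU < 0.5) by margin delta."""
    return max(0.0, -kappa_pos + kappa_neg + delta)
```

The exhaustive max over $(n, p, q, j)$ shown here corresponds to the hard max in Eq. 2, which the soft version $\Gamma$ of Eq. 4 replaces during training.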
+
+We found that using a soft version of the max function, $\Gamma$ , instead of the hard max in Eq. 2 was important for training. The soft version of the max function is defined as follows:
+
+$$
+\Gamma (\mathbf {x}) = \sum_ {c} \mathbf {x} _ {c} \frac {e ^ {\mathbf {x} _ {c}}}{\sum_ {c ^ {\prime}} e ^ {\mathbf {x} _ {c ^ {\prime}}}} \tag {4}
+$$
+
+Here $c$ indexes cycles and $\mathbf{x}_c$ is the score of cycle $c$. This prevents the model from getting stuck in a local minimum by greedily boosting only the single best cycle. The soft version of max also allows computation of gradients w.r.t. all patches that participate in the score computation, thereby updating the representations of a larger number of samples.
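A minimal sketch of this soft max (Eq. 4); the max-shift is a standard numerical-stability detail not specified in the paper:

```python
import numpy as np

def soft_max_score(x):
    """Soft version of max (Eq. 4): Gamma(x) = sum_c x_c * softmax(x)_c.
    Unlike a hard max, every cycle score x_c receives a nonzero weight,
    so gradients flow to all participating patches."""
    x = np.asarray(x, dtype=float)
    w = np.exp(x - x.max())   # subtract the max for numerical stability
    w /= w.sum()
    return float((x * w).sum())
```

For identical scores it reduces to that score; otherwise it lies between the mean and the hard max, weighted toward the largest entries.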
+
+# 3.2 Using Features for Spatio-Temporal Alignment
+
+The representation $f_{\theta}$ trained using our approach can be used to extract cross-video correspondences at the level of patches, tracks, frames and videos:
+
+Patch Correspondence. $f_{\theta}$ can be used to correspond image patches. As $f_{\theta}$ learns features sensitive to state of the object, it allows us to correspond and retrieve objects that are in the same state. See Section 4 for results.
+
+Track Correspondence. Cycles in our formulation correspond tracks with one another. Given a set of tracks in videos $V$ and $W$ , we correspond each track $i$ in video $V$ , to the track in $W$ that maximizes the score in Eq. 1:
+
+$$
+\arg \max _ {j} \left(\max _ {n, p, q} S \left(\left\{V _ {m: n} ^ {i}, W _ {p: q} ^ {j}, V _ {m} ^ {i} \right\}\right)\right). \tag {5}
+$$
+
+Temporal Alignment. We compute the similarity between a given pair of frames $(V_{m}$ and $W_{p})$ in the two videos $V$ and $W$ by computing the total similarity between corresponding patches in the two frames:
+
+$$
+T (V _ {m}, W _ {p}) = \sum_ {i} \max _ {j} s \left(f _ {\theta} \left(V _ {m} ^ {i}\right), f _ {\theta} \left(W _ {p} ^ {j}\right)\right). \tag {6}
+$$
+
+These frame-level similarities can be used to obtain sub-video alignments. For example, to align $K$ frames of video 1 to $K$ frames of video 2, one can pick the temporally consistent top- $K$ correspondences.
+
+Video Retrieval. $f_{\theta}$ provides a natural metric for retrieving videos. Given a query video $V$ and a set of videos $\mathcal{W}$ , we retrieve the most similar video to $V$ , by maximizing the total frame-level temporal alignment score:
+
+$$
+W = \underset {W \in \mathcal {W}} {\arg \max } \sum_ {m} \underset {p} {\max } T \left(V _ {m}, W _ {p}\right). \tag {7}
+$$
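The frame-similarity and retrieval steps (Eqs. 6 and 7) can be sketched as below; `frame_similarity` and `retrieve_video` are illustrative names, and patch features are assumed to be L2-normalized so that dot products equal cosine similarities:

```python
import numpy as np

def frame_similarity(patches_V_m, patches_W_p):
    """Eq. 6: T(V_m, W_p) = sum_i max_j s(f(V_m^i), f(W_p^j)).
    Inputs are (num_patches, d) arrays of L2-normalized patch features,
    so the matrix product gives pairwise cosine similarities."""
    sim = patches_V_m @ patches_W_p.T        # (num_i, num_j)
    return float(sim.max(axis=1).sum())      # best match per query patch

def retrieve_video(query, candidates):
    """Eq. 7: return the index of the candidate video maximizing
    sum_m max_p T(V_m, W_p). Each video is a list of per-frame patch arrays."""
    def video_score(W):
        return sum(max(frame_similarity(Vm, Wp) for Wp in W) for Vm in query)
    return max(range(len(candidates)), key=lambda k: video_score(candidates[k]))
```

The same `frame_similarity` matrix over all frame pairs is what the temporally consistent top-$K$ alignment described above would operate on.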
+
+# 4 Experiments
+
+Our goal is to demonstrate that we can align videos in space and time by leveraging $f_{\theta}$ learned using cross-video cycle-consistency supervision. Quantitatively
+
+measuring performance of dense spatio-temporal alignment is challenging due to the lack of ground-truth data. Therefore, in order to demonstrate the effectiveness of our approach, our experiments involve factored quantitative evaluations, and qualitative visualizations. More specifically, we study performance of our model at track correspondence, and temporal alignment.
+
+Datasets: We perform alignment experiments on the Penn Action Dataset [59] and the Pouring Dataset [38].
+
+Baselines: We compare our learned features to three alternate popular feature learning paradigms that focus on:
+
+- semantics (image classification, object detection),
+- local patch appearance (object trackers),
+- motion, and therefore object transformations (action classification models).
+
+For models that capture semantics, we compare to layer4 features of an ImageNet-trained ResNet-18 (earlier layers do not improve results significantly) and to a Mask-RCNN [18] object detection model trained on the MS-COCO [27] dataset. These models capture rich object-level semantics. For models that capture local patch appearance, we compare to features obtained by learning to track, from Wang et al. [52]. Lastly, for models that focus on motion, we compare to features obtained by training for action classification on Kinetics [22] (ResNet-3D-18) and for frame-level action classification on the Penn Action dataset. Together, these represent the existing feature learning paradigms; comparing to them helps us understand the extent to which our learned representations capture object state. We also compare to the recent work of Dwibedi et al. [9], which performs only temporal alignment. To demonstrate the need for also modeling spatial alignment, we consider the spatial downstream task of detecting the contact point between the thumb and a cup in the Pouring Dataset (models from [9] are only available for the Pouring Dataset).
+
+# 4.1 Experimental Settings
+
+Tracks: We use an off-the-shelf tracker [52] to obtain tracks on videos for training and testing. Since we wish to focus on the foreground of videos for alignment, pre-processing requires extracting tracks of foreground patches. To show robustness to the patch extraction mechanism, we experiment with the following patch generation schemes (use of more sophisticated schemes is future work). For the Penn Action dataset, we track patches sampled on human detections from a Mask-RCNN detector [18]. For the Pouring dataset, we perform foreground estimation by clustering optical flow. As an ablation, we also experiment with ground-truth tracks of human keypoints in the Penn Action dataset.
+
+Training Details. We use a ResNet-18 [19] pre-trained on the ImageNet dataset [6] as our backbone model, and extract features from the last convolutional layer using RoI pooling. These features are further processed using 2 fully connected layers (and ReLU non-linearities) to obtain a 256-dimensional embedding for the input patch. We optimize the model using the Adam optimizer [24], with a learning rate of 0.0001, and a weight decay of 0.00001. We train the model for 30000 iterations on the Penn Action dataset and 500 iterations
+
+
+Fig. 5: Nearest neighbor patch correspondence. For random patches in query videos (left), we show the nearest neighbor patch across all frames (right) in a video retrieved using our method. We observe that our learned feature space is sensitive to the state of the object. Example in row 2 further highlights this point where our features match similar appearing patches differently based on the state of the person in the query. Row 3 shows an example from the Pouring dataset.
+
+on the Pouring Dataset with each batch consisting of 8 pairs of videos. For computational efficiency, we divide each video into 8 temporal chunks. During training, we randomly sample one frame from each chunk to construct a sequence of 8 frames.
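The chunked frame sampling used during training can be sketched as below; the exact chunk boundaries are not specified in the paper, so the rounding scheme here is an assumption:

```python
import random

def sample_training_frames(num_frames, num_chunks=8, seed=0):
    """Split frame indices 0..num_frames-1 into num_chunks contiguous chunks
    and draw one random frame per chunk, yielding a temporally spread
    sequence of num_chunks frames for a training pass."""
    rng = random.Random(seed)
    bounds = [round(c * num_frames / num_chunks) for c in range(num_chunks + 1)]
    # guard against empty chunks when num_frames < num_chunks
    return [rng.randrange(bounds[c], max(bounds[c] + 1, bounds[c + 1]))
            for c in range(num_chunks)]
```

Sampling one frame per chunk keeps the 8-frame sequence computationally cheap while still covering the whole video.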
+
+# 4.2 Qualitative Results
+
+First we show some qualitative results of correspondences that can be extracted by our approach. Figure 5 shows some examples. We show the query frame on the left, and the corresponding nearest neighbor patch across all frames on the right. We observe that our model matches based on both the appearance and the state of the object. Next, we show that our approach can temporally align videos. Figure 6 visualizes temporal alignment on the pouring task.
+
+Finally, we qualitatively compare correspondences obtained using our features against ImageNet and action classification features. Figure 7 shows spatio-temporal alignment on the Penn Action dataset. Given a query video, we retrieve the most similar video based on spatio-temporal alignment. We use human keypoints to form tracks. The spatial alignment is shown by the shape and color of keypoint markers, and the temporal alignment is shown vertically (frames on
+
+
+Fig. 6: Qualitative Results on Pouring Dataset: We show qualitative examples of retrieval and temporal alignment (query on left, retrieval on right) from the Pouring Dataset, based on the similarity metric learned by our model.
+
+
+
+top and bottom are temporally aligned). As compared to baseline methods, our approach is able to retrieve a more similar video, better align the frames in time, and more accurately correspond tracks with one another.
+
+# 4.3 Quantitative Evaluation
+
+Evaluating Temporal Alignment. Given a query video, we first obtain the closest video and then do temporal alignment as described in Section 3.2. For a given pair of frames $V_{m}$ and $W_{p}$ , we densely sample foreground patches and compute an average similarity using $f_{\theta}$ as the feature extractor. We can then temporally align the frames of videos $V$ and $W$ using the similarity measure in Eq. 6. Starting with 8 frames each, we align 4 frames from the query video to 4 frames in the retrieved video.
+
+We evaluate the quality of the temporal alignment by comparing the pose configuration of the human in the aligned frames (i.e. is the human in the same state in the query and retrieved video?). More specifically, we use the ground truth keypoint annotations to estimate and compare the angles between the surrounding limbs at the left and right knee, left and right elbow, left and right hip, and the neck. We report the average absolute angle difference over all joints (lower is better) in Table 1. We observe that features learned using our proposed cross-video cycle consistency lead to better temporal alignment than features from ImageNet classification, Mask-RCNN [18], frame and video classification, and intra-video correspondence [52].
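A sketch of this evaluation metric; the keypoint triplets defining each joint angle (e.g. hip-knee-ankle) are illustrative assumptions, as is measuring angles in radians:

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (radians) at `joint` between limbs joint->parent and joint->child."""
    u = np.asarray(parent, float) - np.asarray(joint, float)
    v = np.asarray(child, float) - np.asarray(joint, float)
    cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def alignment_error(kps_query, kps_ref, triplets):
    """Mean absolute joint-angle difference between two temporally aligned
    frames. `triplets` lists (parent, joint, child) keypoint index triples."""
    return float(np.mean([abs(joint_angle(kps_query[p], kps_query[j], kps_query[c])
                              - joint_angle(kps_ref[p], kps_ref[j], kps_ref[c]))
                          for p, j, c in triplets]))
```

Averaging this per-joint difference over all aligned frame pairs gives a temporal alignment error in the spirit of Table 1 (lower is better).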
+
+Fig. 7: We show qualitative examples of retrieval and spatio-temporal alignment on the Penn Action Dataset to compare different feature spaces. The top row shows snapshots from the query video, the second row shows the video retrieved by our model (trained on tracks from [52]), the third row shows retrievals using ImageNet features, and the fourth row shows retrievals using features obtained by finetuning on the dataset using the class labels. Each column shows temporally aligned frames, while coloured markers show spatial alignment. For all methods, we use keypoint tracks at inference time in order to showcase spatial alignment.
+
+Evaluating Spatial Alignment with Patches. Our proposed model can also perform spatial alignment. Given temporally aligned video frames, we use the similarity function $s$ with the learned features $f_{\theta}$ to correspond image patches in temporally aligned video frames. We measure the quality of alignment by counting how many of the corresponding keypoints lie in aligned patches. We report the average accuracy using various feature extractors in Table 2.
+
+Evaluating Keypoint Track Correspondence. Given a track in query video $V$ , a spatially aligned track in reference video $W$ can be identified using the same similarity function $s$ with the learned features $f_{\theta}$ . We evaluate this by aligning keypoint tracks provided in the Penn Action dataset. Given a track of a keypoint in video $V$ , we measure the accuracy with which the aligned track corresponds to the same keypoint in video $W$ . We report this accuracy in Table 3. Note that this evaluation uses keypoint tracks only for inference and quantitative evaluation; the model was trained, as before, using tracks from Wang et al. [52] on foreground patches.
+
+Table 1: Temporal Alignment on the Penn Action Dataset [59]: We measure temporal alignment error as the disagreement in keypoint configuration between temporally aligned frames.
+
+| Method | Temporal Alignment Error ↓ |
+|---|---|
+| ImageNet features | 0.509 |
+| Features from Mask-RCNN [18] | 0.504 |
+| Features from cycle-consistency based tracker [52] | 0.501 |
+| Features from Kinetics [22] action classification model | 0.492 |
+| Features from action classification | 0.521 |
+| Our features (using tracks from [52] to train) | 0.448 |
+
+Table 2: Spatial Alignment on the Penn Action Dataset [59]: We measure spatial alignment by how accurately we can match keypoints when corresponding random patches between query and reference videos.
+
+| Method | Spatial Alignment Accuracy ↑ |
+|---|---|
+| ImageNet features | 0.153 |
+| Features from Mask-RCNN [18] | 0.202 |
+| Features from cycle-consistency based tracker [52] | 0.060 |
+| Features from Kinetics [22] action classification model | 0.150 |
+| Features from action classification | 0.157 |
+| Our features (using tracks from [52] to train) | 0.284 |
+
+# 4.4 Ablations
+
+We also compare to three variants of our model to understand the effectiveness of its different parts. We discuss spatial alignment results (measured as accuracy at keypoint track correspondence).
+
+Impact of quality of tracks used during training. We experiment with using tracks derived from ground truth key-point labels during training. We find that this leads to better features, and achieves a keypoint track correspondence accuracy of 0.650 vs. 0.551 when using tracks from Wang et al. [52]. The next ablations also use ground-truth tracks for training.
+
+Not searching for temporal alignment during training. Our formulation
+
+Table 3: Track Correspondence on Penn Action Dataset [59]: We measure spatial alignment by measuring how accurately we can match keypoint tracks across videos. We compare our learned cross-video features with those obtained by pre-training on ImageNet and for action classification on the Penn Action dataset.
+
+| Method | Track Correspondence Accuracy ↑ |
+| --- | --- |
+| ImageNet features | 0.252 |
+| Features from action classification | 0.110 |
+| Our features (using tracks from [52] to train) | 0.551 |
+
+searches over temporal alignment at training time. This is done by searching for frames to jump between the two videos (max over $n$ , $p$ and $q$ in Eq. 2). In this ablation, we learn features without searching for this temporal alignment, i.e., we simply assume that the frames are aligned. The resulting features are worse at spatial alignment (keypoint track correspondence accuracy of 0.584 vs. 0.650).
+
+Importance of reference video retrieval. As a first step of spatio-temporal alignment, we retrieve the best video to align. To ablate the performance of this retrieval step, we measure the average keypoint track correspondence accuracy when aligning all queries to all reference videos. We observe that the accuracy drops by $15\%$ , indicating that the retrieval step is effective at choosing relevant videos.
+
+# 4.5 Comparison on Pouring Dataset
+
+We now show the necessity of learning spatial alignment by considering a spatial downstream task of predicting contact locations. We annotate the Pouring Dataset [38] with locations of the contact point between the human thumb
+
+| Method | Accuracy ↑ |
+| --- | --- |
+| ImageNet features | 27.1% |
+| TCC [9] | 32.7% |
+| Ours | 38.6% |
+
+and the cup. We train a linear $1 \times 1$ convolution layer on the spatial features of various models to predict the probability of the contact point, comparing features from our model, which are sensitive to object locations, against features from Dwibedi et al. [9], which focus only on learning good temporal alignment. We split the data into 210 training and 116 test images and train this linear classifier on top of the different features. The table above reports the Percentage of Correct Keypoints (PCK) [55] metric for localizing the contact point within a $16\mathrm{px} \times 16\mathrm{px}$ neighborhood of the ground truth. Our features outperform both ImageNet features and features from [9]. Thus, features that are sensitive to object locations are essential for a rich understanding of videos.
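As a concrete reading of this metric, the sketch below computes PCK over a set of predicted contact points. It assumes a Euclidean-distance threshold of 16 px; the paper's exact neighborhood criterion for the $16\mathrm{px} \times 16\mathrm{px}$ window may differ, and the coordinates here are made up for illustration.

```python
import numpy as np

def pck(pred, gt, threshold=16.0):
    """Fraction of predicted points within `threshold` pixels of ground truth.

    pred, gt: (N, 2) arrays of (x, y) pixel coordinates.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Euclidean distance between each prediction and its ground-truth point.
    dist = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dist <= threshold))

# Hypothetical example: 3 of 4 predictions fall within the 16 px neighborhood.
pred = np.array([[10, 10], [50, 52], [100, 100], [200, 230]])
gt   = np.array([[12, 11], [50, 50], [110, 108], [200, 200]])
print(pck(pred, gt))  # 0.75
```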
+
+# 5 Discussion
+
+In this work, we address the problem of video understanding in the paradigm of "understanding via associations". More specifically, we address the problem of finding dense spatial and temporal correspondences between two videos. We propose a weakly supervised cycle-consistency loss based approach to learn meaningful representations that can be used to obtain patch, track and frame level correspondences. In our experimental evaluation, we show that the features learned are more effective at encoding the states of the patches or objects involved in the videos compared to existing work. We demonstrate the efficacy of the spatiotemporal alignment through exhaustive qualitative and quantitative experiments conducted on multiple datasets.
+
+# References
+
+1. Agrawal, P., Carreira, J., Malik, J.: Learning to see by moving. In: ICCV (2015) 5
+2. Alayrac, J.B., Sivic, J., Laptev, I., Lacoste-Julien, S.: Joint discovery of object states and manipulation actions. In: ICCV (2017) 5
+3. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17(1-3) (1981) 4
+4. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: CVPR (2017) 4
+5. Damen, D., Doughty, H., Maria Farinella, G., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W., et al.: Scaling egocentric vision: The epic-kitchens dataset. In: ECCV (2018) 4
+6. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR (2009) 8
+7. Denton, E., Fergus, R.: Stochastic video generation with a learned prior. In: ICML (2018) 4
+8. Doersch, C., Gupta, A., Efros, A.A.: Unsupervised visual representation learning by context prediction. In: ICCV (2015) 5
+9. Dwibedi, D., Aytar, Y., Thompson, J., Sermanet, P., Zisserman, A.: Temporal cycle-consistency learning. In: CVPR (2019) 2, 5, 8, 11
+10. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: ICCV (2019) 4
+11. Fouhey, D.F., Kuo, W., Efros, A.A., Malik, J.: From lifestyle vlogs to everyday interactions. In: CVPR (2018) 4
+12. Garro, V., Fusiello, A., Savarese, S.: Label transfer exploiting three-dimensional structure for semantic segmentation. In: Proceedings of the 6th International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications (2013) 4
+13. Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: Video action transformer network. In: CVPR (2019) 4
+14. Godard, C., Mac Aodha, O., Brostow, G.J.: Unsupervised monocular depth estimation with left-right consistency. In: CVPR (2017) 4
+15. Goyal, R., Kahou, S.E., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fruend, I., Yianilos, P., Mueller-Freitag, M., et al.: The "something something" video database for learning and evaluating visual common sense. In: ICCV. vol. 1 (2017) 4
+16. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: CVPR (2018) 4
+17. Ham, B., Cho, M., Schmid, C., Ponce, J.: Proposal flow. In: CVPR (2016) 4
+18. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV (2017) 8, 9, 10
+19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) 8
+20. Jayaraman, D., Grauman, K.: Learning image representations tied to ego-motion. In: ICCV (2015) 5
+21. Kalal, Z., Mikolajczyk, K., Matas, J.: Forward-backward error: Automatic detection of tracking failures. In: ICPR (2010) 4
+22. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017) 3, 8, 9, 10
+
+23. Kim, J., Liu, C., Sha, F., Grauman, K.: Deformable spatial pyramid matching for fast dense correspondences. In: CVPR (2013) 4
+24. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 9
+25. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS (2012) 2
+26. Lee, A.X., Zhang, R., Ebert, F., Abbeel, P., Finn, C., Levine, S.: Stochastic adversarial video prediction (2018) 4
+27. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV (2014) 8
+28. Liu, C., Yuen, J., Torralba, A.: Nonparametric scene parsing: Label transfer via dense scene alignment. In: CVPR (2009) 4
+29. Liu, C., Yuen, J., Torralba, A.: SIFT flow: Dense correspondence across scenes and its applications. TPAMI 33(5) (2010) 3, 4
+30. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60(2) (2004) 2, 4
+31. Lucas, B.D., Kanade, T., et al.: An iterative image registration technique with an application to stereo vision (1981) 4
+32. Mémin, E., Pérez, P.: Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Transactions on Image Processing 7(5) (1998) 4
+33. Misra, I., Zitnick, C.L., Hebert, M.: Shuffle and learn: unsupervised learning using temporal order verification. In: ECCV. Springer (2016) 4, 5
+34. Nagarajan, T., Feichtenhofer, C., Grauman, K.: Grounded human-object interaction hotspots from video. In: ICCV (2019) 4
+35. Pathak, D., Girshick, R., Dollar, P., Darrell, T., Hariharan, B.: Learning features by watching objects move. In: CVPR (2017) 5
+36. Rocco, I., Arandjelovic, R., Sivic, J.: End-to-end weakly-supervised semantic alignment. In: CVPR (2018) 2
+37. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: CVPR (2013) 4
+38. Sermanet, P., Lynch, C., Chebotar, Y., Hsu, J., Jang, E., Schaal, S., Levine, S., Brain, G.: Time-contrastive networks: Self-supervised learning from video. In: ICRA (2018). Pouring dataset licensed under CC BY 4.0. 4, 5, 8, 11
+39. Sethi, I.K., Jain, R.: Finding trajectories of feature points in a monocular image sequence. TPAMI (1) (1987) 4
+40. Sigurdsson, G.A., Varol, G., Wang, X., Farhadi, A., Laptev, I., Gupta, A.: Hollywood in homes: Crowdsourcing data collection for activity understanding. In: ECCV (2016) 4
+41. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: NIPS (2014) 4
+42. Singh, S., Gupta, A., Efros, A.A.: Unsupervised discovery of mid-level discriminative patches. In: ECCV. Springer (2012) 2
+43. Sun, D., Roth, S., Black, M.J.: Secrets of optical flow estimation and their principles. In: CVPR (2010) 4
+44. Sun, D., Yang, X., Liu, M.Y., Kautz, J.: Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In: CVPR (2018) 4
+45. Tran, D., Bourdev, L.D., Fergus, R., Torresani, L., Paluri, M.: C3D: generic features for video analysis. CoRR, abs/1412.0767 2(7) (2014) 4
+46. Varol, G., Laptev, I., Schmid, C.: Long-term temporal convolutions for action recognition. TPAMI 40(6) (2017) 4
+
+47. Wang, H., Ullah, M.M., Klaser, A., Laptev, I., Schmid, C.: Evaluation of local spatio-temporal features for action recognition. In: BMVC (2009) 2
+48. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Van Gool, L.: Temporal segment networks: Towards good practices for deep action recognition. In: ECCV. Springer (2016) 4
+49. Wang, X., Farhadi, A., Gupta, A.: Actions $\sim$ transformations. In: CVPR (2016) 4
+50. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR (2018) 4
+51. Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. In: ICCV (2015) 5
+52. Wang, X., Jabri, A., Efros, A.A.: Learning correspondence from the cycle-consistency of time. In: CVPR (2019) 2, 4, 5, 6, 8, 9, 10, 11, 18
+53. Wu, C.Y., Feichtenhofer, C., Fan, H., He, K., Krahenbuhl, P., Girshick, R.: Long-term feature banks for detailed video understanding. In: CVPR (2019) 4
+54. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: ECCV (2018) 4
+55. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. TPAMI (2012) 11
+56. Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., Fei-Fei, L.: Every moment counts: Dense detailed labeling of actions in complex videos. IJCV 126(2-4) (2018) 4
+57. Zhang, H., Xiao, J., Quan, L.: Supervised label transfer for semantic segmentation of street scenes. In: ECCV. Springer (2010) 4
+58. Zhang, R., Isola, P., Efros, A.A.: Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In: CVPR (2017) 5
+59. Zhang, W., Zhu, M., Derpanis, K.G.: From actemes to action: A strongly-supervised representation for detailed action understanding. In: ICCV (2013) 8, 9, 10, 11
+60. Zhou, T., Jae Lee, Y., Yu, S.X., Efros, A.A.: Flowweb: Joint image set alignment by weaving consistent, pixel-wise correspondences. In: CVPR (2015) 4
+61. Zhou, T., Krahenbuhl, P., Aubry, M., Huang, Q., Efros, A.A.: Learning dense correspondence via 3d-guided cycle consistency. In: CVPR (2016) 4
+62. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV (2017) 4
\ No newline at end of file
diff --git a/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/full.md b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..401f1c92548f2e042309e6dfc5dfdf9aa973cc2a
--- /dev/null
+++ b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/full.md
@@ -0,0 +1,355 @@
+# All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling
+
+Zhixiang Chi $^{1}$ , Rasoul Mohammadi Nasiri $^{1}$ , Zheng Liu $^{1}$ , Juwei Lu $^{1}$ , Jin Tang $^{1}$ , Konstantinos N. Plataniotis $^{2[0000-0003-3647-5473]}$
+
+$^{1}$ Noah's Ark Lab, Huawei Technologies $^{2}$ University of Toronto, Canada {zhixiang.chi, rasoul.nasiri, zheng.liu1, tangjin, juwei.lu}@huawei.com, kostas@ece.utoronto.ca
+
+Abstract. Recent advances in high refresh rate displays, as well as increased interest in high-rate slow motion and frame-rate up-conversion, fuel the demand for efficient and cost-effective multi-frame video interpolation solutions. In that regard, inserting multiple frames between consecutive video frames is of paramount importance for the consumer electronics industry. State-of-the-art methods are iterative solutions that interpolate one frame at a time, introducing temporal inconsistencies and clearly noticeable visual artifacts.
+
+Departing from the state-of-the-art, this work introduces a true multi-frame interpolator. It utilizes a pyramidal-style network in the temporal domain to complete the multi-frame interpolation task in one shot. A novel flow estimation procedure using a relaxed loss function and an advanced, cubic-based motion model are also used to further boost interpolation accuracy when complex motion segments are encountered. Results on the Adobe240 dataset show that the proposed method generates visually pleasing, temporally consistent frames and outperforms the current best off-the-shelf method by 1.57 dB in PSNR, with a model that is 8 times smaller and 7.7 times faster. The proposed method can be easily extended to interpolate a large number of new frames while remaining efficient because of the one-shot mechanism.
+
+# 1 Introduction
+
+Video frame interpolation targets generating new frames for the moments in which no frame is recorded. It is mostly used in slow motion generation [26], adaptive streaming [25], and frame rate up-conversion [5]. Fast innovation in high refresh rate displays and strong interest in higher-rate slow motion and frame-rate up-conversion bring the need for multi-frame interpolation.
+
+Recent efforts focus on the main challenges of interpolation, including occlusion and large motions, but they have not explored temporal consistency as a key factor in video quality, especially for multi-frame interpolation. Almost all existing methods interpolate one frame per execution, so generating multiple frames is addressed either by iteratively generating a middle frame [19, 15, 27] or by independently creating each intermediate frame for the corresponding time stamp [10, 2, 3, 17, 14]. The former approach might cause error
+
+propagation by treating the generated middle frame as input, while the latter may suffer from temporal inconsistency, since each frame is processed independently, causing temporal jittering at playback. These artifacts are further amplified as more frames are interpolated. An important point missed by existing methods is the variable level of difficulty in generating intermediate frames: frames closer to the two input frames are easier to generate, while those at a larger temporal distance are more difficult. Consequently, current methods are not optimized in terms of model size and running time for multi-frame interpolation, which makes them impractical for real-life applications.
+
+On the other hand, most state-of-the-art interpolation methods synthesize the intermediate frames by simply assuming a linear motion transition between the pair of input frames. However, real-world motions reflected in video frames follow a variety of complex non-linear trends [26]. While a quadratic motion prediction model is proposed in [26] to overcome this limitation, its constant-acceleration assumption is still inadequate for modeling real-world scenarios, especially non-rigid bodies. Since the forces that move objects in the real world are not necessarily constant, acceleration itself varies.
+
+To this end, we propose a temporal pyramidal processing structure that efficiently integrates multi-frame generation into one single network. Based on the expected level of difficulty, we adaptively process the easier cases (frames) with shallow parts of the network to guide the generation of harder frames, which are processed by deeper structures. Through joint optimization of all the intermediate frames, higher quality and temporal consistency can be ensured. In addition, we exploit the advantage of multiple input frames, as in [26, 13], to propose a higher-order motion prediction model that captures variation in acceleration. Furthermore, inspired by [27], we develop a technique to boost the quality of motion prediction, as well as the final interpolation results, by introducing a relaxed loss function for the optical flow (O.F.) estimation module. In particular, it gives the flexibility to map pixels to a neighborhood of their ground truth locations in the reference frame, allowing better motion prediction for the intermediate frames. Compared to the current state-of-the-art method [26], we improve interpolation quality by 1.57 dB in PSNR on the Adobe240 dataset, with a model that is 8 times smaller and 7.7 times faster at generating 7 frames.
+
+We summarize our contributions as follows: 1) we propose a temporal pyramidal structure that integrates the multi-frame interpolation task into one single network to generate temporally consistent, high-quality frames; 2) we propose a higher-order motion model that exploits the variation in acceleration involved in real-world motion; 3) we develop a relaxed loss function for the flow estimation task that boosts interpolation quality; 4) we optimize the network size and speed so that it is applicable to real-world applications, especially on mobile devices.
+
+# 2 Related work
+
+Recent efforts on frame interpolation have focused on dealing with the main sources of degradation in interpolation quality, such as large motion and occlusion. Different ideas have been proposed, such as estimating occlusion maps [10, 28], learning an adaptive kernel for each pixel [19, 18], exploring depth information [2], or extracting deep contextual features [17, 3]. As most of these methods interpolate one frame at a time, inserting multiple frames requires iteratively executing the models. As a fundamental issue, this step-wise implementation of multi-frame interpolation does not consider temporal continuity and may cause temporal inconsistency. In contrast, generating multiple frames in one integrated network implicitly enforces the network to generate temporally consistent sequences. The effectiveness of the integrated approach has been verified by Super SloMo [10]; however, that method is not purposely designed for the task of multi-frame interpolation. Specifically, what is missed in [10] is to utilize the error cue from the temporal distance between a middle frame and the input frames and to optimize the whole model accordingly. Adaptive processing based on this difficulty pattern can therefore yield a more optimized solution, which is not considered in the state-of-the-art methods [10, 2, 3, 26, 19].
+
+Given the estimated O.F. among the input frames, one important step in frame interpolation is modeling the traversal of pixels between the two frames. The most common approach is to consider a linear transition and scaling of the O.F. [28, 17, 10, 15, 3, 2]. Recent work [26, 4] applied an acceleration-aware method by also using the neighboring frames of the initial pair. However, in real life the force applied to a moving object is not constant; thus the motion does not follow a linear or quadratic pattern. In this paper, we propose a simple but powerful higher-order model to handle the more complex motions that occur in the real world, especially for non-rigid bodies. On the other hand, [10] imposes accurate estimation of the O.F. through a warping loss. However, [27] reveals that accurate O.F. is not tailored for task-oriented problems. Motivated by this, we apply a flexible O.F. estimation between the initial frames, which gives higher flexibility to model complex motions.
+
+# 3 Proposed method
+
+# 3.1 Algorithm overview
+
+An overview of the proposed method is shown in Fig. 1, where we use four input frames $(I_{-1}, I_0, I_1$ and $I_2)$ to generate 7 frames $(I_{t_i}, t_i = \frac{i}{8}, i \in [1, 2, \dots, 7])$ between $I_0$ and $I_1$ . We first use a two-stage O.F. estimation module to calculate the O.F.s $(f_{0 \to 1}, f_{1 \to 0}, f_{1 \to -1}, f_{0 \to 2})$ and then use these flows with cubic modeling to predict the flow between the input frames and the new frames. Our proposed temporal pyramidal network then refines the predicted O.F.s and generates an initial estimate of the middle frames. Finally, the post-processing network further improves the quality of the interpolated frames $(I_{t_i})$ with a similar temporal pyramid.
+
+
+Fig. 1: An overview of the proposed multi-frame interpolation method.
+
+# 3.2 Cubic flow prediction
+
+In this work, we integrate cubic motion modeling to specifically handle acceleration variation in motions. Denoting the motion from $I_0$ to a middle time stamp $t_i$ as $f_{0\rightarrow t_i}$ , we model object motion with the cubic model as:
+
+$$
+f _ {0 \rightarrow t _ {i}} = v _ {0} \times t _ {i} + \frac {a _ {0}}{2} \times t _ {i} ^ {2} + \frac {\Delta a _ {0}}{6} \times t _ {i} ^ {3}, \tag {1}
+$$
+
+where $v_{0}$ , $a_{0}$ , and $\Delta a_{0}$ are the velocity, acceleration, and acceleration change rate estimated at $I_{0}$ , respectively. The acceleration terms can be computed as:
+
+$$
+\Delta a _ {0} = a _ {1} - a _ {0}, a _ {0} = f _ {0 \rightarrow 1} + f _ {0 \rightarrow - 1}, a _ {1} = f _ {1 \rightarrow 2} + f _ {1 \rightarrow 0}. \tag {2}
+$$
+
+where $a_0$ and $a_1$ are calculated for pixels at $I_0$ and $I_1$ , respectively. However, $\Delta a_0$ should be calculated for pixels that correspond to the same real-world point rather than pixels sharing the same coordinates in the two frames. Therefore, we reformulate $a_1$ to calculate $\Delta a_0$ with respect to pixel locations in $I_0$ as:
+
+$$
+a _ {1} = f _ {0 \rightarrow 2} - 2 \times f _ {0 \rightarrow 1}. \tag {3}
+$$
+
+To calculate $v_{0}$ in (1), note that the calculation in [26] does not hold when the acceleration is variable; instead, we evaluate (1) at $t_{i} = 1$ and solve for $v_{0}$ using only the quantities computed above:
+
+$$
+v _ {0} = f _ {0 \rightarrow 1} - \frac {a _ {0}}{2} - \frac {a _ {1} - a _ {0}}{6}. \tag {4}
+$$
+
+Finally, $f_{0 \to t_i}$ for any $t_i \in [0,1]$ can be expressed based on only O.F. between input frames by
+
+$$
+f _ {0 \rightarrow t _ {i}} = f _ {0 \rightarrow 1} \times t _ {i} + \frac {a _ {0}}{2} \times \left(t _ {i} ^ {2} - t _ {i}\right) + \frac {a _ {1} - a _ {0}}{6} \times \left(t _ {i} ^ {3} - t _ {i}\right). \tag {5}
+$$
+
+$f_{1\rightarrow t_i}$ can be computed in the same manner. The detailed derivation and proof of all the above equations will be provided in the supplementary document.
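Since Eq. (5) depends only on flows measured between the input frames, it is easy to sketch. The NumPy snippet below (a minimal illustration with hypothetical array shapes, not the authors' implementation) implements Eqs. (2), (3), and (5) and checks that the model exactly recovers a true cubic 1-D trajectory from flows sampled at $t = -1, 1, 2$:

```python
import numpy as np

def cubic_flow(f01, f0m1, f02, t):
    """Predict f_{0->t} from flows between input frames (Eqs. 2, 3, 5).

    f01 : flow I0 -> I1;  f0m1 : flow I0 -> I-1;  f02 : flow I0 -> I2,
    each an (H, W, 2) array; t in [0, 1].
    """
    a0 = f01 + f0m1       # acceleration at I0 (Eq. 2)
    a1 = f02 - 2.0 * f01  # acceleration at I1, referenced at I0 (Eq. 3)
    return (f01 * t
            + 0.5 * a0 * (t**2 - t)
            + (a1 - a0) / 6.0 * (t**3 - t))  # Eq. 5

# Sanity check on a cubic trajectory x(t) = v t + (a/2) t^2 + (da/6) t^3.
v, a, da = 2.0, 0.6, 0.3
x = lambda t: v*t + a/2*t**2 + da/6*t**3
f01  = np.full((1, 1, 2), x(1.0))
f0m1 = np.full((1, 1, 2), x(-1.0))
f02  = np.full((1, 1, 2), x(2.0))
pred = cubic_flow(f01, f0m1, f02, 0.5)
print(np.allclose(pred, x(0.5)))  # True
```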
+
+In Fig. 2, we simulate three different 1-D motions (constant velocity, constant acceleration, and variable acceleration), shown as three path lines. For each motion, the object position at four time stamps $[t_0, t_1, t_2, t_3]$ is given, shown by gray circles. We apply three predictive models, linear, quadratic [26], and our cubic model, to blindly estimate the object's location at time stamp $t_{1.5}$ (without access to the parameters of the simulated motions). The prediction results show that our cubic model is more robust across different orders of motion.
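This toy comparison can be reproduced numerically. The sketch below uses a made-up variable-acceleration trajectory (not the exact curves of Fig. 2) and, mapping the four time stamps to $t = -1, 0, 1, 2$ with the reference at $t = 0$, predicts the midpoint of the inner pair with all three models:

```python
def predict_models(p_m1, p0, p1, p2, t):
    """Predict the 1-D position at time t (reference frame at t = 0) with
    the linear, quadratic [26], and cubic (Eq. 5) motion models.
    Inputs are scalar positions at t = -1, 0, 1, 2."""
    f01, f0m1, f02 = p1 - p0, p_m1 - p0, p2 - p0
    a0 = f01 + f0m1       # Eq. 2
    a1 = f02 - 2 * f01    # Eq. 3
    lin = p0 + f01 * t
    quad = lin + 0.5 * a0 * (t**2 - t)
    cub = quad + (a1 - a0) / 6 * (t**3 - t)
    return lin, quad, cub

# Variable-acceleration motion x(t) = 0.5 t + 0.2 t^2 + 0.1 t^3 (arbitrary).
x = lambda t: 0.5*t + 0.2*t**2 + 0.1*t**3
lin, quad, cub = predict_models(x(-1), x(0), x(1), x(2), 0.5)
print(abs(lin - x(0.5)), abs(quad - x(0.5)), abs(cub - x(0.5)))
# linear and quadratic miss by ~0.0875 and ~0.0375; the cubic error is ~0
```

Only the cubic model can absorb the third-order term, so its error vanishes while the lower-order models are biased.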
+
+
+Fig. 2: A toy example illustrating the performance of three models (linear, quadratic, and cubic) in predicting three motion patterns (constant velocity, constant acceleration, and variable acceleration).
+
+# 3.3 Motion estimation
+
+Flow estimation module. To estimate the O.F. among the input frames, existing frame interpolation methods commonly adopt off-the-shelf networks [26, 17, 3, 2, 24, 6, 8]. However, existing flow networks are not efficiently designed for multi-frame input, and some are limited to one-directional flow estimation. To this end, following the three-scale coarse-to-fine architecture of SPyNet [22], we design a customized two-stage flow estimation that involves the neighboring frames to better estimate the O.F. between $I_0$ and $I_1$ . Both stages follow a similar three-scale architecture and share the weights of the two coarser levels. The first-stage network computes the O.F. between two consecutive frames; we use it to estimate $f_{0\rightarrow -1}$ and $f_{1\rightarrow 2}$ . In the finest level of the second-stage network, we use $I_0$ and $I_1$ concatenated with $-f_{0\rightarrow -1}$ and $-f_{1\rightarrow 2}$ as initial estimates to compute $f_{0\rightarrow 1}$ and $f_{1\rightarrow 0}$ . In parallel, we also calculate estimates of $f_{0\rightarrow 2}$ and $f_{1\rightarrow -1}$ in the first stage, which are used in our cubic motion modeling in later steps.
+
+Motion estimation constraint relaxation. Common O.F. estimation methods try to map each pixel of the first frame to its exact corresponding location in the second frame. However, TOFlow [27] reveals that accurate O.F., as part of a higher-level task like frame interpolation, does not lead to the optimal solution of that task, especially under occlusion. Similarly, we observe that a strong constraint on O.F. estimation among the input frames can degrade motion prediction for the middle frames, especially for complex motion. In contrast, allowing some flexibility in flow estimation provides a closer estimate of the ground truth motion between frames. The advantage of this flexibility is illustrated in the following examples.
+
+Consider the two toy examples shown in Fig. 3, where a pixel moves along the blue curve in consecutive frames and $(\mathbf{x},\mathbf{y})$ is the pixel coordinate in frame space. The pixel position is given in four consecutive frames as $P_{-1}, P_0, P_1$ and $P_2$ , and the aim is to find its locations at seven moments between $P_0$ and $P_1$ , indicated by blue stars. We consider $P_0$ as the reference point in motion prediction. The green lines represent the ground truth O.F. between $P_0$ and the other points. We predict the middle points (green stars) with the quadratic [26] and cubic models in (5), as shown in Fig. 3. The predicted locations are far from the ground truths (blue stars). However, instead of estimating the exact O.F., giving it the flexibility of
+
+
+(a) Quadratic prediction.
+
+
+(b) Cubic prediction.
+Fig. 3: An example of an object motion path (blue curve) and the motion prediction (with and without relaxation) by Quadratic (a) and Cubic (b) model.
+
+mapping $P_0$ to the neighborhood of the other points, denoted $P_{-1}^{\prime}, P_1^{\prime}, P_2^{\prime}$ , yields a much better prediction of the seven middle locations, shown by the red stars. It also reduces the mean squared error (MSE) significantly. The idea is analogous to introducing a certain error into the flow estimation process.
+
+To apply the idea of relaxation, we employ the same unsupervised learning for O.F. estimation as [10], but with a relaxed warping loss. For example, the loss for estimating $f_{0\rightarrow 1}$ is defined as:
+
+$$
+\mathcal{L}_{w_{\text{relax}}}^{f_{0 \rightarrow 1}} = \sum_{i=0}^{h-1} \sum_{j=0}^{z-1} \min_{m, n} \left\| I_{0}^{w \rightarrow 1}(i, j) - I_{1}(i + m, j + n) \right\|_{1}, \quad \text{for } m, n \in [-d, +d], \tag{6}
+$$
+
+where $I_0^{w\to 1}$ denotes $I_0$ warped by $f_{0\rightarrow 1}$ to the reference frame $I_{1}$ , $d$ determines the neighborhood range, and $h, z$ are the image height and width. We use $\mathcal{L}_{w_{relax}}$ for both stages of O.F. estimation. We evaluate the trade-off between the performance of flow estimation and the final results in Section 4.4.
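A minimal single-channel sketch of this relaxed loss (an illustration under the assumption of grayscale images and edge padding at the borders, not the authors' training code) could look like:

```python
import numpy as np

def relaxed_warp_loss(warped, target, d=1):
    """Relaxed L1 warping loss in the spirit of Eq. (6): each warped pixel is
    compared with the best-matching target pixel in a (2d+1) x (2d+1)
    neighborhood, and the per-pixel minima are summed.

    warped, target: (H, W) grayscale images.
    """
    h, w = warped.shape
    padded = np.pad(target, d, mode='edge')
    # Stack every shifted view of the target, then take the per-pixel minimum.
    diffs = [np.abs(warped - padded[m:m + h, n:n + w])
             for m in range(2 * d + 1) for n in range(2 * d + 1)]
    return float(np.min(np.stack(diffs), axis=0).sum())
```

With $d = 0$ this reduces to the standard L1 warping loss; larger $d$ grants each pixel the slack described above, so the relaxed loss is never larger than the strict one.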
+
+# 3.4 Temporal pyramidal network
+
+The similarity between consecutive frames, together with the difficulty pattern of this task, leads to the idea of adaptive joint processing, which we realize through the proposed temporal pyramidal models.
+
+Temporal pyramidal network for O.F. refinement. The bidirectional O.F.s $f_{0 \to t_i}$ and $f_{1 \to t_i}$ predicted by (5) are based on the O.F.s computed among the input frames. This initial prediction may inherit errors from the flow estimation and cubic motion modeling, notably at motion boundaries [10]. To effectively improve $f_{0 \to t_i}$ and $f_{1 \to t_i}$ , unlike the existing methods [2, 3, 15, 10, 17, 28, 20, 14], we consider the relationship among the intermediate frames and process them all in one forward pass. To this end, we propose a temporal pyramidal O.F. refinement network, which enforces a strong bond between the intermediate frames, as shown in Fig. 4a. The network takes the concatenation of the seven pairs of predicted O.F.s as input and adaptively refines them based on the expected interpolation quality, which corresponds to the temporal distance to $I_0$ and $I_1$ . In fact, the closest
+
+
+
+
+(a) O.F. refinement network
+(b) Post processing network
+Fig. 4: The pyramidal network model designed for O.F. refinement (a) and adaptive pyramidal structure in post processing (b).
+
+ones, $I_{t_1}$ and $I_{t_7}$, are processed by only one pyramid level, as they are more likely to achieve higher quality. Following the same pattern, $(I_{t_2}, I_{t_6})$ are processed by two levels, $(I_{t_3}, I_{t_5})$ by three levels, and finally $I_{t_4}$ by all four levels of the network, as it is expected to achieve the lowest interpolation quality.
+
+To fully utilize the refined O.F.s, we warp $I_0$ and $I_1$ by the refined O.F. at each level, as $I_0^{w\rightarrow t_i}$ and $I_1^{w\rightarrow t_i}$, and feed them to the next level. This helps the next level achieve better results, since the warped frames are one step closer in the time domain to the target frame of that level than $I_0$ and $I_1$ are. Thus, the motion between $I_0$ and $I_1$ is composed of step-wise motions, each measured within a short temporal interval.
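
The level scheduling described above (edge frames through one level, the middle frame through all four) can be summarised in a small helper. This function is illustrative only; the mapping follows the text, but the helper itself is not part of the paper:

```python
def pyramid_levels(num_frames=7):
    """Map each intermediate frame index (1-based t_1..t_7) to the number
    of pyramid levels that process it: frames closer to the temporal
    middle pass through more refinement levels."""
    return {i: min(i, num_frames + 1 - i) for i in range(1, num_frames + 1)}
```

For seven frames this yields one level for $t_1, t_7$, two for $t_2, t_6$, three for $t_3, t_5$, and four for $t_4$, matching the schedule in the text.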
+
+In addition to the refined O.F. at each level, a blending mask $b_{t_i}$ [28] is also generated. Therefore, the intermediate frames can be synthesized as in [28] by
+
+$$
+I_{t_i} = b_{t_i} \odot g\left(I_0, \hat{f}_{0 \rightarrow t_i}\right) + \left(1 - b_{t_i}\right) \odot g\left(I_1, \hat{f}_{1 \rightarrow t_i}\right), \tag{7}
+$$
+
+where $\hat{f}_{0\rightarrow t_i}$ and $\hat{f}_{1\rightarrow t_i}$ are refined bidirectional O.F. at $t_i$ , $\odot$ denotes elementwise multiplication, and $g(\cdot ,\cdot)$ is the bilinear warping function from [28, 9].
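
Given the two warped frames and the blending mask, Eq. (7) reduces to an element-wise blend. A minimal numpy sketch (the bilinear warping $g$ is assumed to have been applied elsewhere; the function name is ours):

```python
import numpy as np

def blend_frame(warp0, warp1, mask):
    """Synthesis step of Eq. (7): element-wise blend of the two warped
    frames g(I_0, f_hat) and g(I_1, f_hat) with a mask b in [0, 1]."""
    return mask * warp0 + (1.0 - mask) * warp1
```

A mask of all ones selects the frame warped from $I_0$, all zeros selects the one warped from $I_1$, and intermediate values mix the two per pixel.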
+
+Temporal pyramidal network for post processing. The intermediate frames synthesized by (7) may still contain artifacts due to the inaccurate O.F., blending masks, or synthesis process. Therefore, we introduce a post processing network
+
+following a similar idea to the O.F. refinement network to adaptively refine the interpolated frames $I_{t_i}$. However, as the generated frames are not aligned, feeding all the frames into the first level cannot properly enhance the quality. Instead, we input the generated frames separately at different levels of the network according to their temporal distance, as shown in Fig. 4b. At each time stamp $t_i$, we also feed the warped inputs $I_0^{w\rightarrow t_i}$ and $I_1^{w\rightarrow t_i}$ to reduce the error caused by inaccurate blending masks. Similar to the O.F. refinement network, the refined frames $\hat{I}_{t_i}$ are also fed to the next level as guidance.
+
+For both pyramidal networks, we employ the same sub-network for each level of the pyramid and adopt residual learning to learn the O.F. and frame residuals. The sub-network is composed of two residual blocks proposed by [16], with one convolutional layer at the input and another at the output. We set the number of channels in decreasing order for the O.F. refinement pyramid, as fewer frames are handled when moving toward the middle time step. In contrast, we keep the same number of channels for all the levels of the post processing module.
+
+# 3.5 Loss functions
+
+The proposed integrated network for multi-frame interpolation targets temporal consistency by jointly optimizing all frames. To further impose consistency between frames, we apply the generative adversarial learning scheme of [29] and the two-player min-max game of [7] to train a discriminator network $D$, which optimizes the following problem:
+
+$$
+\min_G \max_D \; \mathbb{E}_{\mathbf{g} \sim p(I_{t_i}^{gt})} [\log D(\mathbf{g})] + \mathbb{E}_{\mathbf{x} \sim p(I)} [\log(1 - D(G(\mathbf{x})))], \tag{8}
+$$
+
+where $\mathbf{g} = [I_{t_1}^{gt},\dots, I_{t_7}^{gt}]$ are the seven ground truth frames and $\mathbf{x} = [I_{-1},I_0,I_1,I_2]$ are the four input frames. We add the following generative component of the GAN as the temporal loss [29, 12]:
+
+$$
+\mathcal{L}_{\text{temp}} = \sum_{n=1}^{N} -\log D(G(\mathbf{x})). \tag{9}
+$$
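
On the generator side, Eq. (9) is just the negative log of the discriminator's scores on generated samples, summed over the batch. A numpy sketch (here `d_scores` stands for precomputed outputs $D(G(\mathbf{x}))$, an assumption of this sketch):

```python
import numpy as np

def temporal_gan_loss(d_scores):
    """Generator-side temporal loss of Eq. (9): -log D(G(x)) summed over
    the batch; d_scores are discriminator outputs in (0, 1)."""
    d = np.asarray(d_scores, dtype=float)
    return float(-np.log(d).sum())
```

The loss is zero when the discriminator is fully fooled (scores of 1) and grows as the discriminator confidently rejects the generated frames.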
+
+The proposed framework in Fig. 1 serves as the generator and is trained alternately with the discriminator. To optimize the O.F. refinement and post processing networks, we apply the $\ell_1$ loss. The whole architecture is trained by combining all the loss functions:
+
+$$
+\mathcal{L} = \sum_{i=1}^{7} \left( \left\| \hat{I}_{t_i} - I_{t_i}^{gt} \right\|_1 + \left\| I_{t_i} - I_{t_i}^{gt} \right\|_1 \right) + \mathcal{L}_{w_{\text{relax}}} + \lambda \mathcal{L}_{\text{temp}}, \tag{10}
+$$
+
+where $\lambda$ is a weighting coefficient set to 0.001.
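
The combined objective of Eq. (10) can be assembled as below; this is a hedged numpy sketch in which the relaxed warping and temporal terms are passed in precomputed, and the helper name and interface are illustrative rather than from the paper:

```python
import numpy as np

def total_loss(refined, synthesized, gt, warp_relax, temp, lam=0.001):
    """Training objective of Eq. (10): per-frame l1 terms on the refined
    and synthesized frames, plus the relaxed warping and temporal terms.
    `refined`/`synthesized`/`gt` are lists of the 7 frames."""
    l1 = sum(np.abs(r - g).sum() + np.abs(s - g).sum()
             for r, s, g in zip(refined, synthesized, gt))
    return float(l1 + warp_relax + lam * temp)
```

With perfect frames the loss collapses to the auxiliary terms, and the small $\lambda$ keeps the adversarial term from dominating the pixel-wise losses.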
+
+# 4 Experiments
+
+In this section, we provide the implementation details and analyze the results of the proposed method in comparison to other methods, along with several ablation studies.
+
+# 4.1 Implementation details
+
+To train our network, we collected a dataset of 903 short video clips (2 to 10 seconds) with a frame rate of 240fps and a resolution of $720 \times 1280$ from YouTube. The videos cover various scenes, and we randomly select 50 videos for validation. From these videos, we created 8463 training samples of 25 consecutive frames as in [26]. Our model takes the $1^{st}$, $9^{th}$, $17^{th}$, and $25^{th}$ frames as inputs to generate the seven frames between the $9^{th}$ and $17^{th}$ frames, with the $10^{th}$ to $16^{th}$ frames as ground truths. We randomly crop $352 \times 352$ patches and apply horizontal, vertical, and temporal flips for data augmentation during training.
+
+To improve convergence speed, a stage-wise training strategy is adopted [30]. We first train each module except the discriminator independently using the $\ell_1$ loss for 15 epochs with a learning rate of $10^{-4}$, while freezing the other modules. The whole network is then jointly trained using (10) with a learning rate of $10^{-5}$ for 100 epochs. We use the Adam optimizer [11] and empirically set the neighborhood range $d$ in (6) to 9. During training, the pixel values of all images are scaled to [-1, 1]. All experiments are conducted on an Nvidia V100 GPU. A more detailed network architecture is provided in the supplementary material.
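
The two-stage schedule above can be captured as a small configuration helper. The values are taken from the text; the helper itself is illustrative and not from any released code:

```python
def stage_config(stage):
    """Stage-wise training schedule: stage 1 trains each module (except
    the discriminator) independently; stage 2 trains everything jointly."""
    schedule = {
        1: {"epochs": 15, "lr": 1e-4, "loss": "per-module l1"},
        2: {"epochs": 100, "lr": 1e-5, "loss": "joint, Eq. (10)"},
    }
    return schedule[stage]
```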
+
+# 4.2 Evaluation datasets
+
+We evaluate the performance of the proposed method on widely used datasets, including two multi-frame interpolation datasets (Adobe240 [23] and GOPRO [16]) and two single-frame interpolation datasets (Vimeo90K [27] and DAVIS [21]). Adobe240 and GOPRO were initially designed for deblurring tasks, with a frame rate of 240fps and a resolution of $720 \times 1280$. Both are captured by hand-held high-speed cameras and contain a combination of object and camera motion at different levels, which makes them challenging for the frame interpolation task. We follow the same setting as Sec. 4.1 to extract 4276 and 1393 frame-patch samples for Adobe240 and GOPRO, respectively. The DAVIS dataset is designed for video segmentation and normally contains large motions. It has 90 videos, from which we extract 2637 samples of 7 frames. As for Vimeo90K, since the interpolation subset only contains triplets, it is not applicable to our method, which needs more frames for cubic motion modeling. Instead, we use the super-resolution test set, which contains 7824 samples of 7 consecutive frames. We interpolate 7 frames for Adobe240 and GOPRO, and interpolate the $4^{th}$ (middle) frame for DAVIS and Vimeo90K by using the $1^{st}$, $3^{rd}$, $5^{th}$ and $7^{th}$ frames as inputs.
+
+# 4.3 Comparison with the state-of-the-arts
+
+We compare our method with four state-of-the-art frame interpolation methods: Super SloMo [10], Quadratic [26], DAIN [2], and SepConv [19]; we train [10] and [26] on our training data and use the models released by the authors for the last two. We use PSNR, SSIM and interpolation error (IE) [1] as evaluation metrics. For multi-frame interpolation on GOPRO and Adobe240, we borrow
+
+Table 1: Performance evaluation of the proposed method compared to the state-of-the-art methods in different datasets.
+
+| Methods | Adobe240 PSNR | Adobe240 SSIM | Adobe240 TCC | GoPro PSNR | GoPro SSIM | GoPro TCC | Vimeo90K PSNR | Vimeo90K SSIM | Vimeo90K IE | DAVIS PSNR | DAVIS SSIM | DAVIS IE |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SepConv | 32.38 | 0.938 | 0.832 | 30.82 | 0.910 | 0.789 | 33.60 | 0.944 | 5.30 | 26.30 | 0.789 | 15.61 |
+| Super SloMo | 31.63 | 0.927 | 0.809 | 30.50 | 0.904 | 0.784 | 33.38 | 0.938 | 5.41 | 26.00 | 0.770 | 16.19 |
+| DAIN | 31.36 | 0.932 | 0.808 | 29.74 | 0.900 | 0.759 | 34.54 | 0.950 | 4.76 | 27.25 | 0.820 | 13.17 |
+| Quadratic | 32.80 | 0.949 | 0.842 | 32.01 | 0.936 | 0.822 | 33.62 | 0.946 | 5.22 | 27.38 | 0.834 | 12.46 |
+| Ours | 34.37 | 0.959 | 0.860 | 32.91 | 0.943 | 0.837 | 34.93 | 0.951 | 4.70 | 27.91 | 0.837 | 12.40 |
+
+
+
+
+(Figure panels: Input, SepConv, Super SloMo, DAIN, Quadratic, Ours)
+Fig. 5: An example from Adobe240 to visualize the temporal consistency. The top row shows the middle frames generated by different methods, and the bottom row shows the interpolation error. Our method experiences less shifting in the temporal domain.
+
+the concept of Temporal Change Consistency (TCC) [29], which compares the generated frames and the ground truth in terms of the changes between adjacent frames:
+
+$$
+\mathrm{TCC}(F, G) = \frac{\sum_{i=1}^{6} \operatorname{SSIM}\left( \operatorname{abs}\left(f^i - f^{i+1}\right), \operatorname{abs}\left(g^i - g^{i+1}\right) \right)}{6}, \tag{11}
+$$
+
+where $F = (f^{1},\dots ,f^{7})$ and $G = (g^{1},\dots ,g^{7})$ are the 7 interpolated and ground truth frames, respectively. For the multi-frame interpolation task, we report the average of the metrics over the 7 interpolated frames. The results reported in Table 1 show that our proposed method consistently performs better than the existing methods in both single and multi-frame interpolation scenarios. Notably, on the multi-frame interpolation datasets (Adobe240 and GOPRO), our method significantly outperforms the best existing method [26] by 1.57dB and 0.9dB, respectively. The proposed method also achieves the highest temporal consistency as measured by TCC, thanks to the temporal pyramid structure and joint optimization of the middle frames, which exploits the temporal relation among them.
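
The TCC metric of Eq. (11) can be sketched as follows. The `ssim` argument is meant to be a real SSIM implementation (e.g. `skimage.metrics.structural_similarity`); the built-in fallback here is only a placeholder similarity for illustration and is NOT SSIM:

```python
import numpy as np

def tcc(pred, gt, ssim=None):
    """Temporal Change Consistency (sketch of Eq. 11): compare the
    frame-to-frame changes of the predictions and the ground truth.
    `pred`/`gt` are lists of 7 frames; `ssim` is any similarity in [0, 1]."""
    if ssim is None:
        # placeholder similarity, not the real SSIM: 1 for identical maps
        ssim = lambda a, b: 1.0 / (1.0 + np.abs(a - b).mean())
    scores = [ssim(np.abs(pred[i] - pred[i + 1]), np.abs(gt[i] - gt[i + 1]))
              for i in range(len(pred) - 1)]
    return sum(scores) / len(scores)
```

Because the metric compares *differences* between adjacent frames rather than the frames themselves, a sequence that is sharp per-frame but temporally jittery scores lower than one with consistent motion.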
+
+In addition to the TCC, to better show the power of the proposed method in preserving temporal consistency between frames, Fig. 5 reports $\hat{I}_{t_4}$ and the IE produced by different methods on an example from Adobe240. As shown in Fig. 5, the generated
+
+Table 2: Ablation studies on the network components on Adobe240 and GOPRO.
+
+| Methods | Adobe240 PSNR | Adobe240 SSIM | Adobe240 IE | Adobe240 TCC | GOPRO PSNR | GOPRO SSIM | GOPRO IE | GOPRO TCC |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| w/o post pro. | 33.87 | 0.954 | 6.21 | 0.848 | 32.63 | 0.942 | 6.80 | 0.831 |
+| w/o adv. loss | 34.35 | 0.958 | 5.89 | 0.850 | 32.86 | 0.942 | 6.77 | 0.830 |
+| w/o 2nd O.F. | 34.24 | 0.957 | 5.97 | 0.854 | 32.73 | 0.940 | 6.91 | 0.832 |
+| w/o O.F. relax. | 33.92 | 0.955 | 6.14 | 0.851 | 32.45 | 0.936 | 7.09 | 0.828 |
+| w/o pyr. | 33.92 | 0.954 | 6.33 | 0.845 | 32.37 | 0.935 | 7.30 | 0.820 |
+| Full model | 34.37 | 0.959 | 5.89 | 0.860 | 32.91 | 0.943 | 6.74 | 0.837 |
+
+
+Fig. 6: Visualization of the seven intermediate frames of $I_{t_1}$ to $I_{t_7}$ generated by our method compared to Quadratic [26] and Super SloMo [10] from GOPRO.
+
+middle frames produced by the different methods are visually very similar to the ground truth. However, a comparison of the IE reveals significant errors near the edges of moving objects, caused by temporal inconsistency between the generated frames and the ground truth. In contrast, our method generates a high-quality frame that is consistent with the ground truth in both the spatial and temporal domains.
+
+Another example from GOPRO in Fig. 6 shows the results of the proposed method in comparison with Super SloMo [10] and Quadratic [26], neither of which applies adaptive processing for frame interpolation. As can be seen in Fig. 6, at $t_1$ and $t_7$, which are closer to the input frames, all the methods generate comparable results. However, approaching the middle frame, as the temporal distance from the inputs increases, the quality of the frames generated by Super SloMo and Quadratic starts to degrade, while our method experiences less degradation and higher quality. Especially for $I_{t_4}$, our improvement is significant, as also shown by the PSNR values at each time stamp $t_i$ in Fig. 9c.
+
+Our method also works better on DAVIS and Vimeo90K, as reported in Table 1. Fig. 7 shows an example of a challenging scenario that involves both translational and rotational motion. The acceleration-aware Quadratic can better estimate the motion, while the other methods undergo severe degradation. However, undesired artifacts are still generated by Quadratic near the motion boundary. In contrast, our method exploits cubic motion modeling and temporal pyramidal processing, which better capture this complex motion and generate results comparable to the ground truth.
+
+Table 3: Comparison between linear, quadratic and cubic motion models.
+
+| Models | Adobe240 PSNR | Adobe240 SSIM | Adobe240 IE | GOPRO PSNR | GOPRO SSIM | GOPRO IE |
+| --- | --- | --- | --- | --- | --- | --- |
+| Linear | 33.97 | 0.955 | 6.13 | 32.40 | 0.936 | 7.09 |
+| Quad. | 34.24 | 0.957 | 5.95 | 32.70 | 0.941 | 6.85 |
+| Cubic | 34.37 | 0.959 | 5.89 | 32.91 | 0.943 | 6.74 |
+
+Table 4: Comparison between models generating different number of frames.
+
+| Methods | DAVIS PSNR | DAVIS SSIM | Vimeo90K PSNR | Vimeo90K SSIM |
+| --- | --- | --- | --- | --- |
+| 1 frame | 27.07 | 0.819 | 32.02 | 0.944 |
+| 3 frames | 27.44 | 0.816 | 34.67 | 0.950 |
+| 7 frames (no pyr.) | 27.25 | 0.815 | 34.56 | 0.950 |
+| 7 frames | 27.91 | 0.837 | 34.93 | 0.951 |
+
+
+(Figure panels: Inputs, SepConv, Super SloMo, DAIN, Quadratic, Ours, GT)
+Fig. 7: Sample results for interpolating the middle frame for a complex motion example from the DAVIS dataset.
+
+# 4.4 Ablation studies
+
+Analysis of the model. To explore the impact of different components of the proposed model, we investigate the performance of our solution under different variations: 1) w/o post pro.: removing post processing; 2) w/o adv. loss: removing the adversarial loss; 3) w/o $2^{nd}$ O.F.: replacing the second-stage flow estimation with the exact same network as the first stage; 4) w/o O.F. relax.: replacing $\mathcal{L}_{w_{relax}}$ with $\mathcal{L}_{\ell_1}$; 5) w/o pyr.: in both pyramidal modules, feeding all the inputs to the first level of the network and taking all the outputs from the last level. The performance of the above variations, evaluated on the Adobe240 and GOPRO datasets in Table 2, reveals that all the listed modifications lead to performance degradation. As expected, motion relaxation and the pyramidal structure are important, as they provide more accurate motion prediction and enforce temporal consistency among the interpolated frames, as reflected in the TCC. Post processing, whose removal also brings a large degradation, is a crucial component that compensates for inaccurate O.F. and blending. It is worth noting that even though the quantitative improvement in PSNR and SSIM from the adversarial loss is small, it is effective in preserving temporal consistency, as reported by the TCC values.
+
+Motion models. To investigate the impact of different motion models, we also trained our method with linear and quadratic [26] motion prediction. The average quality reported in Table 3 shows that cubic modeling is dominant on both GOPRO and Adobe240. Importantly, the improvement of quadratic over linear in the model proposed in [26] is reported to be more than 1dB; however, we observe only 0.27dB and 0.3dB on the Adobe240 and GOPRO datasets. We attribute this gap to the proposed temporal pyramidal processing and motion relaxation. In comparison with the impact of quadratic over
+
+Table 5: Motion relaxation evaluation for warping, prediction and final results.
+
+| Datasets | PSNR$(I_1^{w\to 0}, I_0)$, $\mathcal{L}_{\ell_1}$ | PSNR$(I_1^{w\to 0}, I_0)$, $\mathcal{L}_{w_{relax}}$ | PSNR$(I_1^{w\to t_4}, I_{t_4}^{gt})$, $\mathcal{L}_{\ell_1}$ | PSNR$(I_1^{w\to t_4}, I_{t_4}^{gt})$, $\mathcal{L}_{w_{relax}}$ | PSNR$(\hat{I}_{t_4}, I_{t_4}^{gt})$, $\mathcal{L}_{\ell_1}$ | PSNR$(\hat{I}_{t_4}, I_{t_4}^{gt})$, $\mathcal{L}_{w_{relax}}$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| DAVIS | 30.13 | 23.37 | 25.13 | 25.43 | 27.15 | 27.91 |
+
+
+
+
+(Figure panels: Inputs & GT, Error $(I_1^{w\to 0})$, Error $(I_1^{w\rightarrow t_4})$, Error $(\hat{I}_{t_4})$, $\hat{I}_{t_4}$)
+Fig. 8: Sample results from Vimeo90K showing the comparison between O.F. estimation with (bottom row) and without (top row) relaxation, in terms of the interpolation error for motion prediction and the final interpolation result.
+
+linear, our cubic modeling adds another 0.13dB and 0.21dB of improvement on Adobe240 and GOPRO, respectively, which shows the necessity of cubic modeling given the complexity of motion present in different videos.
+
+Constraints relaxation in motion estimation. To investigate the impact of motion estimation relaxation in our architecture, we train two versions of the entire solution, with relaxation ($\mathcal{L}_{w_{relax}}$) and without ($\mathcal{L}_{\ell_1}$). For each case we perform three comparisons: first, $I_{1}$ warped by $f_{1\rightarrow 0}$, denoted $I_1^{w\to 0}$, is compared to $I_0$; second, $I_{1}$ warped by the predicted $f_{1\rightarrow t_4}$ (before refinement), denoted $I_1^{w\rightarrow t_4}$, is compared to $I_{t_4}^{gt}$; finally, the final output of the network is compared with $I_{t_4}^{gt}$. Table 5 reports the evaluation results on DAVIS and Fig. 8 shows the IE for an example from Vimeo90K. Both show that although the relaxation makes the O.F. estimation between the two input frames poorer, it yields better initial motion prediction for the middle frame as well as a better final interpolation result.
+
+Temporal pyramidal structure. The effectiveness of the temporal pyramidal structure in interpolating multiple frames has already been verified in Table 2. To further investigate this impact while also considering the number of generated frames, we trained three more model variants: predicting all 7 frames without the pyramidal structure, predicting 3 frames ($i = 2,4,6$), and predicting only the middle frame ($i = 4$) with the pyramidal model. Table 4 reports the interpolation quality of the middle frame on DAVIS and Vimeo90K for all these cases. The results in Table 4 demonstrate that the interpolation of the middle frame benefits from joint optimization with the other frames.
+
+
+(Figure panels: (a) Model size vs. PSNR; (b) Inference speed; (c) PSNR for 7 frames)
+Fig. 9: Efficiency of the proposed method compared to state-of-the-art methods from the perspective of performance and model size (a), inference speed (b), and performance trend in multiple frame interpolation (c).
+
+# 4.5 Efficiency analysis
+
+Considering the wide range of applications for frame interpolation, especially on mobile and embedded devices, investigating the efficiency of the solution is crucial. We report the efficiency of the proposed method in terms of model size, interpolation quality, and inference time. Fig. 9a reports PSNR values evaluated on Adobe240 in relation to model size. The proposed method outperforms all the other methods in the quality of the results by a large margin while having a significantly smaller model size. In particular, our method outperforms Quadratic [26] by 1.57dB using only $12.5\%$ of its parameters. We also show the inference times for interpolating different numbers of frames in Fig. 9b. To interpolate more than 8 frames, our method can be extended by simply adding more levels to the pyramid. However, higher frame rate videos are hard to obtain for training; thus, we adopt an iterative interpolation approach (running the 8x model multiple times and dropping the redundant frames). As reported in Fig. 9b, our method is around 7 times faster than [26] when interpolating more than 8 frames. Our method is the fastest and has the smallest size while maintaining high-quality results for multi-frame interpolation tasks, which makes it applicable to low-power devices.
+
+# 5 Conclusions
+
+In this work, we proposed a powerful and efficient multi-frame interpolation solution that takes into account prior information and the challenges particular to this task. The prior information about the difficulty levels among the intermediate frames guides the design of a temporal pyramidal processing structure. To handle the challenges of complex real-world motion, our method benefits from the proposed advanced motion modeling, including cubic motion prediction and a relaxed loss function for flow estimation. Together, these components integrate multi-frame generation into a single optimized and efficient network that maximizes both the temporal consistency and the spatial quality of the frames, beating the state-of-the-art solutions.
+
+# References
+
+1. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International journal of computer vision 92(1), 1-31 (2011)
+2. Bao, W., Lai, W.S., Ma, C., Zhang, X., Gao, Z., Yang, M.H.: Depth-aware video frame interpolation. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+3. Bao, W., Lai, W.S., Zhang, X., Gao, Z., Yang, M.H.: Memc-net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement. arXiv preprint arXiv:1810.08768 (2018)
+4. Bao, W., Zhang, X., Chen, L., Ding, L., Gao, Z.: High-order model and dynamic filtering for frame rate up-conversion. IEEE Transactions on Image Processing 27(8), 3813-3826 (2018)
+5. Castagno, R., Haavisto, P., Ramponi, G.: A method for motion adaptive frame rate up-conversion. IEEE Transactions on circuits and Systems for Video Technology 6(5), 436-446 (1996)
+6. Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D., Brox, T.: Flownet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2758-2766 (2015)
+7. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672-2680 (2014)
+8. Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., Brox, T.: Flownet 2.0: Evolution of optical flow estimation with deep networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2462-2470 (2017)
+9. Jaderberg, M., Simonyan, K., Zisserman, A., et al.: Spatial transformer networks. In: Advances in neural information processing systems. pp. 2017-2025 (2015)
+10. Jiang, H., Sun, D., Jampani, V., Yang, M.H., Learned-Miller, E., Kautz, J.: Super slomo: High quality estimation of multiple intermediate frames for video interpolation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9000-9008 (2018)
+11. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+12. Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image superresolution using a generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4681-4690 (2017)
+13. Lee, W.H., Choi, K., Ra, J.B.: Frame rate up conversion based on variational image fusion. IEEE Transactions on Image Processing 23(1), 399-412 (2013)
+14. Liu, Y.L., Liao, Y.T., Lin, Y.Y., Chuang, Y.Y.: Deep video frame interpolation using cyclic frame generation. In: AAAI Conference on Artificial Intelligence (2019)
+15. Liu, Z., Yeh, R.A., Tang, X., Liu, Y., Agarwala, A.: Video frame synthesis using deep voxel flow. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4463-4471 (2017)
+16. Nah, S., Hyun Kim, T., Mu Lee, K.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3883-3891 (2017)
+
+17. Niklaus, S., Liu, F.: Context-aware synthesis for video frame interpolation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1701-1710 (2018)
+18. Niklaus, S., Mai, L., Liu, F.: Video frame interpolation via adaptive convolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 670-679 (2017)
+19. Niklaus, S., Mai, L., Liu, F.: Video frame interpolation via adaptive separable convolution. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 261-270 (2017)
+20. Peleg, T., Szekely, P., Sabo, D., Sendik, O.: Im-net for high resolution video frame interpolation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2398-2407 (2019)
+21. Perazzi, F., Pont-Tuset, J., McWilliams, B., Van Gool, L., Gross, M., Sorkine-Hornung, A.: A benchmark dataset and evaluation methodology for video object segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 724-732 (2016)
+22. Ranjan, A., Black, M.J.: Optical flow estimation using a spatial pyramid network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4161-4170 (2017)
+23. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1279-1288 (2017)
+24. Sun, D., Yang, X., Liu, M.Y., Kautz, J.: Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8934-8943 (2018)
+25. Wu, J., Yuen, C., Cheung, N.M., Chen, J., Chen, C.W.: Modeling and optimization of high frame rate video transmission over wireless networks. IEEE Transactions on Wireless Communications 15(4), 2713-2726 (2015)
+26. Xu, X., Siyao, L., Sun, W., Yin, Q., Yang, M.H.: Quadratic video interpolation. In: Advances in Neural Information Processing Systems. pp. 1645-1654 (2019)
+27. Xue, T., Chen, B., Wu, J., Wei, D., Freeman, W.T.: Video enhancement with task-oriented flow. International Journal of Computer Vision (IJCV) 127(8), 1106-1125 (2019)
+28. Yuan, L., Chen, Y., Liu, H., Kong, T., Shi, J.: Zoom-in-to-check: Boosting video interpolation via instance-level discrimination. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12183-12191 (2019)
+29. Zhang, H., Shen, C., Li, Y., Cao, Y., Liu, Y., Yan, Y.: Exploiting temporal consistency for real-time video depth estimation. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1725-1734 (2019)
+30. Zhang, H., Patel, V.M.: Densely connected pyramid dehazing network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3194-3203 (2018)
\ No newline at end of file
diff --git a/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/images.zip b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..91cdd98383fcf36d19b353e18e0af9a540f02965
--- /dev/null
+++ b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2a017e1ea83d111e0bb305f254c8c9477a37ddbe440fc33619e03e0fa753ef0
+size 575212
diff --git a/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/layout.json b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9a03749b19614e37992263ecfeb04713ea628bd3
--- /dev/null
+++ b/allatoncetemporallyadaptivemultiframeinterpolationwithadvancedmotionmodeling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fca1806ff3559f08dbedfec5e7a948653121b0048e58c690ad4d31a6418e13b6
+size 488640
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_content_list.json b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9c913049ba1d431b38e78e05b9db1fb05d20706c
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e54c72f980cecd1a06b5d34e2875e807388145bc167dfdf11046065e8eb15553
+size 81136
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_model.json b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..504a83e7485a3ebd179caff8a1e31a38b0d881e1
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b28e23884dfa40f5c176db4fa318b9389befe866bf303fb0488d3708b44c2ee4
+size 99043
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_origin.pdf b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..25078e780970145c67819c27cd4f3aacdf03b9fd
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/8b231a42-4d97-4a01-a9c2-46d4a7ec2f39_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67252bd63a51c4b8214d480678edca7a3756f16f76e726a1185c739d30d17b8c
+size 11175175
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/full.md b/amplifyingkeycuesforhumanobjectinteractiondetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3d47196b608a48f9c40d7bdc1e59484758e3d6ca
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/full.md
@@ -0,0 +1,276 @@
+# Amplifying Key Cues for Human-Object-Interaction Detection
+
+Yang Liu $^{1}$ [0000-0002-4259-3882], Qingchao Chen $^{2}$ [0000-0002-1216-5609], and Andrew Zisserman $^{1}$ [0000-0002-8945-8573]
+
+$^{1}$ Visual Geometry Group, Department of Engineering Science, Oxford, UK
+ $^{2}$ Department of Engineering Science, Oxford, UK
+
+Abstract. Human-object interaction (HOI) detection aims to detect and recognise how people interact with the objects that surround them. This is challenging as different interaction categories are often distinguished only by very subtle visual differences in the scene. In this paper we introduce two methods to amplify key cues in the image, and also a method to combine these and other cues when considering the interaction between a human and an object. First, we introduce an encoding mechanism for representing the fine-grained spatial layout of the human and object (a subtle cue) and also semantic context (a cue, represented by text embeddings of surrounding objects). Second, we use plausible future movements of humans and objects as a cue to constrain the space of possible interactions. Third, we use a gate and memory architecture as a fusion module to combine the cues. We demonstrate that these three improvements lead to a performance which exceeds prior HOI methods across standard benchmarks by a considerable margin.
+
+# 1 Introduction
+
+Human-Object Interaction (HOI) detection—which focuses specifically on relations involving humans—requires not only retrieving human and object locations but also inferring the relations between them. Thus, for a given image, the objective of HOI is to identify all triplets of the form $\langle \text{human}, \text{verb}, \text{object} \rangle$ . The ability to predict such triplets robustly is central to enabling applications in robotic manipulation [15] and surveillance event detection [1].
+
+Driven by impressive progress on instance detection and recognition, there has been growing interest in the HOI detection problem. However, the majority of existing methods [9, 30, 38] first detect all human and object instances and then infer their pairwise relations using the appearance feature of the detected instances and their coarse layout (position of human and object boxes). Despite their general efficacy, the performance of prior work may still be limited by some particular design choices, which we discuss next.
+
+First, although recent works have sought to introduce some fine-grained spatial configuration descriptors and context cues from the whole image into the HOI detection, the encoding mechanisms have limitations. Specifically, (1) Some
+
+Fig.1: (Left): Interactions with similar spatial layouts can be resolved through fine-grained spatial information. (Centre): Global and local context encode the scene and other local objects to provide strong clues for the interaction taking place; (Right): Plausible motion estimation distinguishes between interactions for which dynamics play an important role.
+
+approaches [24, 14, 38, 45] use human pose to distinguish the fine-grained relation (going beyond the standard human and object coarse boxes). However, these approaches encode human pose via key-point estimation (plus post-processing), which is problematic as it loses boundary and shape information. For example in Figure 1(left), encoding the fine-grained spatial information as key points exhibits ambiguity when distinguishing the differences between 'riding a bicycle' and 'walking with a bicycle'. We argue that the boundaries of both human parts and objects should be encoded explicitly, due to their critical ability to reveal interaction boundaries and support inference of relations. Thus we improve the encoding mechanism for the fine-grained information by leveraging the fine-grained human parsing and object semantic segmentation masks, to better capture the geometric relations between them. (2) Some approaches use the visual appearance feature from other image regions or the whole image as the auxiliary context information. However, there are a limited number of triplets in existing HOI datasets—this is insufficient to capture the full intra-class appearance variations of relationships (making it harder to generalise). We draw inspiration from classical recognition techniques (e.g. the use of context for detection [7]) and argue that the semantic categories of other objects present in the surrounding neighbourhood of the candidate instance pair (local context) and the scene category (global context) provide valuable cues for distinguishing between different interactions, but the detailed visual appearance of them is often not crucial for HOI detection. For instance, as shown in Fig. 1(middle), the surrounding neighbourhood in the 'eating a cake' category will likely comprise a spoon-like tool, whereas for the 'cutting a cake' category, it is a knife-like tool. But the colour and design of the spoon/knife do not provide useful cues when inferring its relation with the human. Instead of using visual appearance features directly, we encode categories via a semantic embedding (word2vec). This enables the model to leverage language priors to capture possible co-occurrence
+
+and affordance relations between objects and predicates. As we show through careful ablation studies in Sec. 4, these mechanisms for encoding fine-grained spatial configuration and contextual cues bring consistent improvements to HOI detection performance, highlighting the importance of studying these choices.
+
+Second, plausible motion—the set of probable movements most likely to follow a static image—is not currently accounted for when detecting human object interactions. Nevertheless, humans can trivially enumerate plausible future movements (what may happen next in an image) and characterise their relative likelihood.
+
+The inference of plausible motions brings two benefits: the first is saliency—it provides a natural attention over the key object and human body parts present in an image; the second is that it constrains the space of relations to the subset that is consistent with these dynamics. For example in Fig. 1(right), it is clear that the object and arm highlighted by motion are concerned with throwing or catching the frisbee, not concerned with eating or writing on it. Furthermore, if the estimation of the direction is also correct then that would distinguish whether the person is throwing or catching the frisbee. Note that while the plausible motion can elucidate a salient signal for human object interaction detection, it can be difficult to learn directly from the image alone. We benefit here from recent work on motion hallucination [10], that has learnt to predict local optical flow from a static image to reveal plausible future movement by identifying which regions of pixels will move (together with their velocity) in the instant following image capture. To the best of our knowledge, this work represents the first study of the utility of plausible motion as an additional cue for HOI detection.
+
+In this paper, we aim to tackle the challenges described above with a unified framework that amplifies important cues for HOI detection (as shown in Fig. 2). We design a novel multi-expert fusion module, where different features (i.e., plausible motion, enhanced fine-grained spatial configuration and context cues) are viewed as cooperative experts to infer the human object interaction. As different cues and their relationships make different contributions when detecting the human object interaction, we use a gate and memory mechanism to fuse the available cues sequentially, selecting the discriminative information and building up the representation of the whole scene step by step. By doing so, the final representation is more discriminative than those from existing methods that lack a reasoning mechanism, and this leads to better HOI detection performance.
+
+The contributions of this work are summarised as follows:
+
+(1) We propose a mechanism for amplifying fine-grained spatial layout and contextual cues, to better capture the geometric relations and distinguish the subtle difference between relation categories.
+(2) We are the first to explore the utility of the plausible motion estimation (which regions of pixels will move) as an additional cue for HOI detection.
+(3) We propose a gate and memory mechanism to perform sequential fusion on these available cues to attain a more discriminative representation.
+(4) Our approach achieves state-of-the-art performance on two popular HOI detection benchmarks: V-COCO and HICO-DET.
+
+# 2 Related Work
+
+Visual Relationship Detection. Visual Relationship Detection (VRD) aims to detect objects and simultaneously predict the relationships between them. This topic has attracted considerable attention, supported by the recent development of large-scale relationship datasets such as VRD [26], Visual Genome [21] and Open Images [22]. However, detecting subtle differences between visual relationships remains difficult, and the task is made yet more challenging by the distribution of visual relations, which is extremely long-tailed. Several recent works have proposed various mechanisms to address this problem [26, 43, 20, 5, 42, 23]. Our work focuses on one particular class of visual relationship detection problem: detecting human object interaction (HOI). HOI detection poses additional challenges over VRD: a human can perform multiple actions with one or more objects simultaneously, and the range of human actions we are interested in is typically more fine-grained and diverse than for other generic objects.
+
+Human Object Interaction (HOI) Detection. HOI detection aims to detect and recognise how each person interacts with the objects that surround them—it provides the fundamental basis for understanding human behaviour in a complex scene. Recently, driven by the release of relevant benchmarks, HOI detection has attracted significant attention [9, 12, 24, 14, 38, 45, 40, 33, 41].
+
+The earlier works typically focused on tackling HOI by utilizing human and object visual appearance features or by capturing their spatial relationship through their coarse layout (box locations) [9, 12]. Recently, several methods [24, 14] have been developed that use human pose configuration maps to distinguish fine-grained relations. Wan et al. [38] and Fang et al. [6] use human pose cues to zoom into the relevant regions of the image via an attention mechanism. Zhou et al. [45] encode human pose through graph neural networks and message passing. Note however, that each of these approaches encodes human pose via key-point estimation (either drawing rectangles around the key points or linking the key points into a skeleton), which removes detailed information about the boundary and shape of the human, in particular, about human parts. By contrast, we argue that the boundaries of both human parts and objects are crucial. We are the first to leverage fine-grained human parsing and object semantic segmentation masks to better capture the geometric relations between them—such cues enable discrimination between subtly different relation categories.
+
+More recently, although some approaches [28, 12, 31, 9, 39] have sought to use contextual cues from the whole image (global context), they do so by learning a spatial attention map in pixel space based on instance visual appearance to highlight image regions, making optimisation challenging when training data is limited. By contrast, we draw inspiration from classical recognition techniques [34, 29] and argue that the semantic categories of other objects present in the surrounding neighbourhood of the candidate instance pair (local context) and the scene information (global context) provide valuable cues for resolving ambiguity between different interactions, but the detailed visual appearance of them is often not crucial. Instead of using visual appearance features, we are the first to encode context information via a semantic embedding, i.e., word2vec, that
+
+enables us to leverage the language priors to capture which objects might afford or co-occur with particular predicates.
+
+The prediction of plausible motion has received limited attention for HOI detection. Nevertheless, estimation of current movement—when coupled with an understanding of the dynamics of an interaction—provides a cue for assessing the degree to which the configuration of humans and objects is probable for that interaction. We are the first to leverage flow prediction from a static image to infer the motion most plausible for a given image (i.e., which regions of pixels will move, together with their velocity, in the instant following image capture) as an auxiliary cue for HOI detection. Our approach is related to a wide body of work on visual future prediction [36, 8, 35] and draws on techniques for flow prediction from static images [37, 10]. Differently from prior work, our objective is to infer plausible motion as an auxiliary cue for HOI detection in static images. In this sense, our method bridges static image HOI detection with video-level action understanding by transferring a motion prior learned from videos to images.
+
+Current approaches predominantly use either a late or an early fusion strategy to combine multi-stream features when inferring relations, while the relationship between different streams is often overlooked. Specifically, late fusion is adopted by [12, 19, 33, 41], where interaction predictions are made on each stream independently and then summed at inference time. A wide body of work, e.g., [9, 14, 24, 30, 40], uses the early fusion strategy, where multi-stream features are first concatenated and then used to predict the score (sometimes with an attention mechanism, as in [38, 39, 31, 45]). In this work, we are the first to use a gate and memory mechanism to fuse the available cues for HOI detection, i.e., to select the discriminative information and build up the representation of the whole scene step by step.
+
+# 3 Method
+
+We now introduce our proposed Fine-grained layout-Context-Motion Network (FCMNet), for localising and recognising all human-object interaction instances in an image. We first provide a high-level overview of FCMNet in Sec. 3.1, followed by a detailed description of the model architecture in Sec. 3.2. Finally, Sec. 3.3 describes the training and inference procedures.
+
+# 3.1 Overview
+
+Our approach to human-object interaction detection comprises two main stages: (1) human and object detection and (2) interaction prediction. First, given an image $I$ , we use Faster R-CNN [32] (Detectron implementation [11]) to detect all person instances $p = (p^1, p^2, \dots, p^m)$ and object instances $o = (o^1, o^2, \dots, o^n)$ , generating a set of bounding boxes $b = (b^1, b^2, \dots, b^{m+n})$ , where $m$ denotes the total number of detected persons and $n$ the number of detected objects. We use $b_H \in \mathbb{R}^4$ and $b_O \in \mathbb{R}^4$ to denote the human and object bounding boxes. HOI proposals are then generated by enumerating all pairs of candidate human
+
+
+Fig. 2: The proposed FCMNet framework. The backbone module first detects all human and object boxes and encodes their representations; for each candidate pair of human-object boxes $< b_{H}, b_{O} >$ , the spatial encoder encodes coarse and fine-grained layouts to capture the spatial configuration between them; the context encoder accumulates other readily available auxiliary information in other regions of the image, including local and global context; the motion encoder infers the likely movement of humans and objects (i.e. which regions of pixels will move, together with their velocity, in the instant following image capture) via flow prediction from the static image as an approximation to plausible motion. Finally, the fusion module combines all available knowledge about the candidate pair $< b_{H}, b_{O} >$ to predict the final interaction score for the candidate pair and outputs the detected triplet $< \text{human}, \text{verb}, \text{object} >$ .
+
+and object bounding boxes. Next, we process each human-object bounding box pair $< b_{H}, b_{O} >$ with FCMNet to predict an interaction score $S_{H,O} \in \mathbb{R}^{A}$ , where $A$ represents the number of interaction classes. FCMNet encodes three image cues independently via the spatial encoder, context encoder, and motion encoder. Finally, a fusion module combines the outputs of these encoders and generates a robust HOI detection. Fig. 2 shows an overview of the model.
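
To make the proposal stage concrete, pairing every detected human box with every detected object box is a simple Cartesian product; a minimal sketch (the function name and tuple-based box format are our own illustrative choices, not from the paper):

```python
from itertools import product

def enumerate_hoi_proposals(human_boxes, object_boxes):
    """Enumerate all candidate <b_H, b_O> pairs from the detections.

    human_boxes / object_boxes: lists of (x1, y1, x2, y2) tuples.
    Returns every (human box, object box) pair as an HOI proposal.
    """
    return list(product(human_boxes, object_boxes))

# One human and two objects yield two candidate pairs.
pairs = enumerate_hoi_proposals([(0, 0, 10, 20)], [(5, 5, 8, 8), (12, 0, 20, 10)])
```

Each pair is then scored by the network described below.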
+
+# 3.2 Model Architecture
+
+FCMNet contains the following five modules: (1) a backbone module that detects human and object boxes and encodes their representations; (2) a spatial encoder that leverages both coarse and fine-grained layout information; (3) a context encoder that accumulates auxiliary information in other regions of the image; (4) a motion encoder that predicts which regions of pixels would move in the instant following image capture via flow prediction; (5) a fusion module that combines all cues to predict the final interaction score and outputs the detected triplet $\langle \text{human}, \text{verb}, \text{object} \rangle$ . We next provide details of each component.
+
+(1) Backbone module: We adopt Faster R-CNN (with a ResNet-50 trunk) to detect all human and object instances. To encode a detected human box $b_{H}$ , we
+
+extract an instance-level visual appearance feature $f_{H}$ using standard techniques: we apply ROI pooling, pass the result through a residual block and then perform global average pooling. For the object box $b_{O}$ , we do not keep the visual appearance feature but instead use word2vec to encode the semantic knowledge of the object category, denoted by $f_{O}$ . The motivation for this design choice is that the intra-class variation for each object category can be considerable, but detailed visual appearance of the object is often not crucial for the interaction category: for example, the colour and design of the "frisbee" do not provide useful cues when inferring its relation with the human. Using the word2vec semantic embedding for the object enables the model to leverage language priors to capture which objects might afford similar predicates and therefore generalise to previously unseen or rare HOI instances [41].
+
+(2) Spatial Encoder: The spatial encoder is designed to encode the spatial relationship between the human and object instance at both a coarse and fine-grained level. The encoding methods for each granularity are described next.
+
+Coarse Layout representation: We use the two-channel binary image representation advocated by [9] to describe the interaction pattern at a coarse level. Specifically, we take the union of the human and object instance boxes as the reference box and construct two binary images (as separate channels) within it: the first channel has value 1 within the human bounding box and 0 elsewhere; the second channel has value 1 within the object box and 0 elsewhere.
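
The coarse two-channel encoding described above can be sketched as follows (a minimal numpy sketch; the 64x64 map resolution and the function names are illustrative choices, not specified in the text):

```python
import numpy as np

def coarse_layout(b_h, b_o, size=64):
    """Two-channel binary interaction pattern in the style of [9].

    b_h, b_o: (x1, y1, x2, y2) boxes in image coordinates.
    The union of the two boxes is the reference box; each channel is a
    size x size map with value 1 inside the corresponding box, 0 elsewhere.
    """
    ux1, uy1 = min(b_h[0], b_o[0]), min(b_h[1], b_o[1])
    ux2, uy2 = max(b_h[2], b_o[2]), max(b_h[3], b_o[3])
    sx, sy = size / (ux2 - ux1), size / (uy2 - uy1)

    def rasterise(b):
        m = np.zeros((size, size), dtype=np.float32)
        x1, y1 = int((b[0] - ux1) * sx), int((b[1] - uy1) * sy)
        x2, y2 = int((b[2] - ux1) * sx), int((b[3] - uy1) * sy)
        m[y1:y2, x1:x2] = 1.0
        return m

    # Channel 0: human box; channel 1: object box.
    return np.stack([rasterise(b_h), rasterise(b_o)])
```
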
+
+Fine-grained Layout representation: Instead of encoding the fine-grained information via key point estimation, we compute a human parse (segmentation masks for the human parts) of the detected human and a segmentation mask for the detected object instance to provide fine-grained layout information. The primary hypothesis underpinning our design is that the shape and boundary of both the human and object can greatly help disambiguate different actions with a similar coarse spatial layout. The reason is that these fine-grained cues can reveal the interaction boundary explicitly when inferring the relations. Moreover, fine-grained human parsing both reflects the human's pose and keeps much of the valuable information of their visual appearance. In our work, we use Mask-RCNN [16] to extract the segmentation mask of the object, and use MMAN [27] to extract the segmentation mask of all visible human parts. These two masks are stacked to form the fine-grained layout representation. Specifically, the first channel contains a set of discrete intensity values uniformly ranging from 0.05 to 0.95 to indicate different human parts; the second channel is a binary map that has a value of 1 within the object mask area. Examples of the information conveyed by each channel can be found in Fig. 2.
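
Stacking the two fine-grained channels can be sketched as follows (a minimal numpy sketch; the number of part labels is an illustrative assumption, and in the real system the inputs would come from the MMAN parser and Mask-RCNN):

```python
import numpy as np

def fine_grained_layout(part_labels, object_mask, num_parts=19):
    """Build the two-channel fine-grained layout representation.

    part_labels: (H, W) int array, 0 = background, 1..num_parts = human parts
                 (num_parts=19 is an illustrative choice).
    object_mask: (H, W) binary array for the detected object instance.
    Channel 0 maps part k to a discrete intensity spaced uniformly in
    [0.05, 0.95]; channel 1 is the binary object mask.
    """
    intensities = np.linspace(0.05, 0.95, num_parts)
    part_channel = np.zeros(part_labels.shape, dtype=np.float32)
    for k in range(1, num_parts + 1):
        part_channel[part_labels == k] = intensities[k - 1]
    return np.stack([part_channel, object_mask.astype(np.float32)])
```
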
+
+We show in our ablation experiments (in Sec. 4.2) that each of the channels forming the coarse and fine-grained layout representations yields an improvement in HOI detection performance, and that our encoding mechanism for the fine-grained information outperforms the key-point-based encodings used in the literature. In all other experiments, we stack the coarse (two channels) and fine-grained (two channels) layout representations together as the input to the spatial encoder unless otherwise specified. The spatial encoder extracts the instance
+
+spatial relationship $f_{SP}$ via a small CNN. The CNN comprises two convolutional layers (the first of which uses an SE block [17] to learn more powerful features) and is trained end-to-end.
+
+(3) Context Encoder: The context encoder is designed to uncover complementary cues conveyed in other regions of the image. Instead of using visual appearance to encode the context information directly, we change the encoding mechanism to use semantic categories. It comprises a global context encoding, i.e., the estimated scene, and a local context encoding, namely the semantic categories of other objects present in the surrounding neighbourhood of the candidate instance pair.
+
+Global Context: For the global context representation $f_{G}$ , we use a DenseNet-161 model [18] pretrained on Places365 [44] to extract scene features. We then encode the scene class with the largest likelihood using word2vec. The global context embedding $f_{G}$ (scene feature) is therefore a 300-dimensional vector.
+
+Local Context: During inference of the relationship between the candidate pair containing object $o^i$ with human $h$ , all the other detected objects $o^j$ where $j \neq i$ in the neighbourhood can be considered to provide local context. In particular, their semantic category and position relative to the candidate object $o^i$ are both valuable cues for distinguishing between different interactions. For example, the objects present in the neighbourhood of an 'eating a cake' interaction will likely comprise a spoon-like tool, whereas for the 'cutting a cake' interaction, it is a knife-like tool. The distance between the knife and the cake is also important (if it is far away from the cake, it is very unlikely that the knife is being used to cut the cake).
+
+Motivated by this observation, we first use word2vec to encode the semantic knowledge of each object $o^j$ in the neighbourhood, and then concatenate the resulting embedding with its spatial relationship $f_{R}$ (computed with respect to the candidate object $o^i$ ). Following prior work [46] on visual relationship detection, the spatial geometric relationship feature $f_{R}$ between candidate object $o^i$ and its neighbour $o^j$ is encoded as follows:
+
+$$
+f _ {R} = \left[ \left(\frac {x _ {1} ^ {i} - x _ {1} ^ {j}}{x _ {2} ^ {j} - x _ {1} ^ {j}}\right), \left(\frac {y _ {1} ^ {i} - y _ {1} ^ {j}}{y _ {2} ^ {j} - y _ {1} ^ {j}}\right), \log \left(\frac {x _ {2} ^ {i} - x _ {1} ^ {i}}{x _ {2} ^ {j} - x _ {1} ^ {j}}\right), \log \left(\frac {y _ {2} ^ {i} - y _ {1} ^ {i}}{y _ {2} ^ {j} - y _ {1} ^ {j}}\right) \right], \tag {1}
+$$
+
+where $(x_1^i, x_2^i, y_1^i, y_2^i)$ and $(x_1^j, x_2^j, y_1^j, y_2^j)$ are the normalised box coordinates of the candidate object $o^i$ and its neighbour $o^j$ . This geometric feature $f_R$ is a measure of the scales and relative positioning of the two object entities.
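
Eq. (1) can be computed directly from the two normalised boxes; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def geometric_feature(box_i, box_j):
    """Spatial relationship f_R of Eq. (1) between candidate object o^i
    and its neighbour o^j; boxes are normalised (x1, y1, x2, y2) tuples."""
    x1i, y1i, x2i, y2i = box_i
    x1j, y1j, x2j, y2j = box_j
    wj, hj = x2j - x1j, y2j - y1j
    return np.array([
        (x1i - x1j) / wj,          # relative x offset, scaled by neighbour width
        (y1i - y1j) / hj,          # relative y offset, scaled by neighbour height
        np.log((x2i - x1i) / wj),  # log width ratio
        np.log((y2i - y1i) / hj),  # log height ratio
    ])
```
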
+
+To account for the variable number of objects present in the neighbourhood of an interaction, we use NetVLAD [2] to aggregate all object representations when forming the local context embedding $f_{L}$ . During the end-to-end training, the NetVLAD aggregation module can learn to discriminatively select which information should be promoted (or demoted). Finally, the output of the Context Encoder, $f_{C}$ , is generated by concatenating the global context $f_{G}$ and local context $f_{L}$ embeddings.
+
+(4) Motion Encoder: The Motion Encoder aims to infer the likely movements of humans and objects in a given image, and provide cues to detect and recognise
+
+their interactions. We draw inspiration from the work of [36] which sought to learn models from video that were capable of synthesising plausible futures from a single static image and in particular, the more recent work of [10] for static image flow prediction. In our work, we focus on the latter task of predicting local optical flow as a proxy for plausible future movements of objects. We adopt the Im2Flow model [10] to predict flow from the static image to encode plausible scene motion. The flow describes which pixel regions will move (together with their direction and velocity) in the instant following image capture. Concretely, we first predict the flow information of the image and then use the plausible motion encoder (a CNN with learnable parameters) to extract plausible motion features $f_{M}$ . Qualitative examples of the predicted flow can be seen in Fig. 1 (right) and Fig. 2.
+
+(5) Fusion Module: The fusion module combines the outputs of the backbone, spatial, context and motion encoders into a single feature embedding and uses it to predict a score for the interaction $s_{H,O}$ of the candidate instance pair $< b_{H},b_{O}>$ . The design of the fusion module is shown in Fig. 2. Specifically, we perform fusion by feeding the sequence of available features $f^{*} = \{f_{1},\dots,f_{k}\} = \{f_{H},f_{O},f_{SP},f_{C},f_{M}\}$ one by one into a GRU [4] $^3$ . The description of the whole scene gradually accumulates and updates in the memory cell $m_{k}$ (hidden state), where the index $k$ is the number of reasoning steps. At each step $k$ , we first calculate the update gate $z_{k}$ as:
+
+$$
+z_{k} = \sigma_{z}\left(W_{z} f_{k} + U_{z} m_{k-1} + b_{z}\right), \tag{2}
+$$
+
+where $W_{z}, U_{z}$ and $b_{z}$ are weights and biases and $\sigma_{z}$ is the sigmoid activation function. The update gate $z_{k}$ analyzes the description of the whole scene at the last step, $m_{k-1}$ , and the current input feature $f_{k}$ to decide how much the current step updates its memory cell. The new content $\hat{m}_{k}$ added at step $k$ to grow the description of the whole scene is computed as follows:
+
+$$
+\hat{m}_{k} = \sigma_{m}\left(W_{m} f_{k} + U_{m}\left(r_{k} \circ m_{k-1}\right) + b_{m}\right), \tag{3}
+$$
+
+where $W_{m}, U_{m}$ and $b_{m}$ are weights and biases and $\sigma_{m}$ is the tanh activation function; $\circ$ denotes element-wise multiplication. The reset gate $r_{k}$ , which decides what content to forget based on the interaction between $m_{k-1}$ and $f_{k}$ , is computed as
+
+$$
+r_{k} = \sigma_{r}\left(W_{r} f_{k} + U_{r} m_{k-1} + b_{r}\right), \tag{4}
+$$
+
+where $W_{r}, U_{r}$ and $b_{r}$ are weights and biases and $\sigma_{r}$ is the sigmoid activation function. The description of the whole scene $m_{k}$ at the current step is then a linear interpolation, controlled by the update gate $z_{k}$ , between the previous description $m_{k-1}$ and the newly added content $\hat{m}_{k}$ :
+
+$$
+m _ {k} = \left(1 - z _ {k}\right) \circ m _ {k - 1} + z _ {k} \circ \hat {m} _ {k}, \tag {5}
+$$
+
+Lastly, we take the memory cell $m_{k}$ at the end of the sequence $f^{*}$ as the final representation to predict the relation category: the output of the fusion module is the interaction score $s_{H,O}$ for the candidate pair $< b_{H},b_{O}>$ . The proposed gate and memory mechanism allows the model to dynamically select which information should be highlighted or suppressed in the final representation, rendering it more discriminative for the final objective.
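
Eqs. (2)-(5) amount to a step-by-step fusion loop over the cue sequence; a minimal numpy sketch (the projection of all cues to a common dimension $d$ and the parameter layout are assumptions made here for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_fusion(features, params, d):
    """Sequential gate-and-memory fusion of Eqs. (2)-(5).

    features: list of cue vectors [f_H, f_O, f_SP, f_C, f_M], each assumed
              already projected to dimension d.
    params:   dict with (d, d) matrices W_z, U_z, W_r, U_r, W_m, U_m and
              (d,) biases b_z, b_r, b_m.
    Returns the final memory m_k used to score the interaction.
    """
    m = np.zeros(d)
    for f in features:
        z = sigmoid(params["W_z"] @ f + params["U_z"] @ m + params["b_z"])           # Eq. (2)
        r = sigmoid(params["W_r"] @ f + params["U_r"] @ m + params["b_r"])           # Eq. (4)
        m_hat = np.tanh(params["W_m"] @ f + params["U_m"] @ (r * m) + params["b_m"]) # Eq. (3)
        m = (1 - z) * m + z * m_hat                                                  # Eq. (5)
    return m
```

Because each update interpolates towards a tanh-bounded candidate, the memory stays in $(-1, 1)$ componentwise.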
+
+# 3.3 Training and Inference
+
+Since a human can concurrently perform different actions and interact with one or more objects, e.g. "eating a cake and reading a book while sitting on the couch", HOI detection represents a multi-label classification problem in which each predicate is independent and not mutually exclusive. During training, a binary sigmoid classifier is used for each predicate that minimises the cross-entropy between the prediction score $s_{H,O}$ and the ground truth label.
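
The multi-label objective above is a per-predicate sigmoid cross-entropy; a minimal numpy sketch for a single candidate pair (the function name is illustrative):

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Per-predicate sigmoid cross-entropy for multi-label HOI training.

    logits:  (A,) raw scores, one per predicate class.
    targets: (A,) binary ground-truth labels (classes are not exclusive).
    """
    p = 1.0 / (1.0 + np.exp(-logits))
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))
```
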
+
+During inference, all human and object instances are first detected in each image. Each human and object box pair $< b_{H}, b_{O} >$ is then passed through the network to produce a score $s_{H,O}^{a}$ for each predicate class $a \in \{1, \dots, A\}$ , where $A$ denotes the total number of predicate classes. The final relation score is then combined with the detection scores of the human and object instances $s_{H}$ and $s_{O}$ that represent the detection quality of the instance boxes $b_{H}$ and $b_{O}$ . The final HOI score $S_{H,O}$ for the candidate box pair $< b_{H}, b_{O} >$ is then calculated as:
+
+$$
+S _ {H, O} = s _ {H, O} \cdot s _ {H} \cdot s _ {O} \tag {6}
+$$
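
Eq. (6) simply rescales each predicate score by the two detection confidences; a minimal sketch (the function name is illustrative):

```python
def final_hoi_scores(s_ho, s_h, s_o):
    """Eq. (6): scale every per-predicate interaction score for the pair
    <b_H, b_O> by the human and object detection confidences.

    s_ho: list of A per-predicate relation scores.
    s_h, s_o: detection confidences of the human and object boxes.
    """
    return [s * s_h * s_o for s in s_ho]
```

A low-confidence detection therefore suppresses all predicate scores for that pair.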
+
+# 4 Experiments
+
+We first introduce the datasets used, evaluation metric and implementation details in Sec. 4.1. Next, we conduct a detailed ablation study in Sec. 4.2 to verify the effectiveness of each proposed component and present some qualitative results to demonstrate the strengths and failure cases of our approach. Finally we report our HOI detection results quantitatively and compare with state-of-the-art approaches on two benchmarks: V-COCO [13] and HICO-DET [3].
+
+# 4.1 Experimental Setup
+
+Datasets: We evaluate our proposed approach on two HOI detection benchmarks: V-COCO [13] and HICO-DET [3]. V-COCO contains 5400 images in the training and validation split and 4946 images in the test set. It is annotated with 26 common action category labels and the bounding boxes for human and object instances. HICO-DET comprises 47,776 images (38,118 images in the training set and 9,658 in the test set) with more than 150K triplets. It is annotated with 600 HOI categories over 80 object classes and 117 unique verbs. HICO-DET is both larger and more diverse than V-COCO.
+
+Evaluation Metric: We follow the standard evaluation setting in [9] and use mean average precision to measure HOI detection performance. Formally, a
+
+triplet of $\langle \text{human}, \text{verb}, \text{object} \rangle$ is considered a true positive if and only if: (1) the predicted bounding boxes of both the human and object instance overlap with the ground truth bounding boxes with an IoU greater than 0.5, and (2) the predicted HOI matches the ground truth HOI category. For the V-COCO dataset, we evaluate $mAP_{role}$ following [9]. For the HICO-DET dataset, mAP performance is reported for three different HOI category sets: (1) all 600 HOI categories (Full), (2) 138 categories with fewer than 10 training samples (Rare) and (3) the remaining 462 categories with at least 10 training samples (Non-Rare).
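
The true-positive criterion can be sketched as follows (a minimal sketch; the dictionary-based triplet format is our own illustrative choice):

```python
def _area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    return inter / (_area(a) + _area(b) - inter)

def is_true_positive(pred, gt, thr=0.5):
    """A predicted triplet matches a ground-truth triplet iff both boxes
    overlap with IoU greater than thr and the HOI category agrees."""
    return (pred["hoi"] == gt["hoi"]
            and iou(pred["h_box"], gt["h_box"]) > thr
            and iou(pred["o_box"], gt["o_box"]) > thr)
```
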
+
+Implementation details: Following [9], we build our approach on the Faster-RCNN [32] object detector (without FPN) with the ResNet-50 backbone to enable a fair comparison with other prior approaches. The human parse masks are obtained with a Macro-Micro Adversarial Net [27], trained on the LIP dataset [27]. Object masks are obtained with Mask-RCNN [16] pretrained on MSCOCO [25]. Global scene features are extracted using a DenseNet-161 model [18] pretrained on Places365 [44]. The object and scene semantic embeddings are obtained from Google News trained word2vec embeddings. We adopt the Im2Flow model [10] to predict flow from the static image to encode plausible motion. All pretrained models described above are kept frozen during the training in this paper. More implementation details can be found in the extended arXiv version of this paper.
+
+# 4.2 Ablation Studies
+
+In this section, we empirically investigate the sensitivity of the proposed method to different design choices. As the HICO-DET dataset is both larger and more diverse than V-COCO, we perform all ablation studies on HICO-DET. We study four aspects of FCMNet: the contribution of the network components, fine-grained spatial encodings, context features and fusion strategies. Finally, we present some qualitative examples to illustrate the challenging and failure cases.
+
+Architectural variants: We conduct an ablation study by examining the effectiveness of each proposed component in our network structure. We use combinations of human and object embeddings and the coarse layout spatial configuration (instance boxes) as our baseline (Base). As shown in Table 1, each proposed module yields improved performance under all three HICO-DET settings. Looking at the first four rows of Table 1, we observe that the fine-grained spatial configuration information contributes the most significant performance gain as an individual module, suggesting that the fine-grained shape and boundary of the human parts and object can greatly help disambiguate different actions with a similar coarse spatial layout. The full FCMNet, which includes all proposed modules (human and object features, spatial encoder with coarse and fine-grained layout, context encoder, motion encoder and fusion), achieves the best performance, verifying the effectiveness of the proposed components. More ablation studies can be found in the extended arXiv version of this paper.
+
+Fine-grained spatial configuration encoding: We compare the performance of using different forms of fine-grained spatial information in Table 2. To
+
+Table 1: Ablation Study on Different Network Components
+
+| Methods | Full | Rare | Non-Rare |
| --- | --- | --- | --- |
| Baseline (Base) | 14.8 | 12.3 | 15.7 |
| Base+ Fine | 17.6 | 15.4 | 18.3 |
| Base + Context | 16.2 | 14.1 | 16.9 |
| Base + Motion | 16.5 | 14.4 | 17.2 |
| FCMNet (ours) | 20.4 | 17.3 | 21.6 |
+
+Table 2: Ablation Study on Different Fine-grained Information
+
+| Methods | Full | Rare | Non-Rare |
| --- | --- | --- | --- |
| Baseline (Base) | 14.8 | 12.3 | 15.7 |
| Base +HS | 15.8 | 12.9 | 16.8 |
| Base +HP | 17.1 | 14.7 | 18.0 |
| Base +OM | 16.2 | 14.0 | 17.1 |
| Base +HP+OM | 17.6 | 15.4 | 18.3 |
+
+Table 3: Ablation Study on Different Context Information
+
+| Methods | Full | Rare | Non-Rare |
| --- | --- | --- | --- |
| Baseline (Base) | 14.8 | 12.3 | 15.7 |
| Base+Local(w2v) | 15.7 | 13.7 | 16.4 |
| Base+Global(w2v) | 15.1 | 13.0 | 16.2 |
| Base+Both(visual) | 15.2 | 12.8 | 15.9 |
| Base+Both(w2v) | 16.2 | 14.1 | 16.9 |
+
+Table 4: Ablation Study on Different Fusion Strategy
+
+| Methods | Full | Rare | Non-Rare |
| --- | --- | --- | --- |
| Baseline | 14.8 | 12.3 | 15.7 |
| Late Fusion | 18.1 | 15.3 | 19.0 |
| Concatenation | 17.9 | 14.9 | 20.0 |
| Fusion (Attention) | 19.7 | 16.6 | 20.1 |
| FCMNet (ours) | 20.4 | 17.3 | 21.6 |
+
+enable a fair comparison, we use the same model architecture, i.e., with only human and object features and a spatial encoder. We observe that each of the investigated fine-grained spatial encodings, i.e., Human Parts (HP) and Object Mask (OM), improves performance. Since prior work [38, 14] has shown that encoding fine-grained information via human key-point estimation is also useful for conveying pose, we also compare with the Human Skeleton (HS) configuration (following [38, 14]) in this table. Using human parts information outperforms human skeletons. Using both proposed fine-grained spatial encodings, HP and OM, concurrently outperforms the baseline by $2.8\mathrm{mAP}$ in the Full setting, which demonstrates the effectiveness of the proposed encoding mechanism for fine-grained spatial configuration.
+
+Context features: We compare the performance of using different contextual information in Table 3. It can be seen that both local context (features of the objects in the neighbourhood) and global context (the scene) contribute to improved performance. Encoding the context via word2vec outperforms encoding it via the visual appearance directly.
+
+Fusion Mechanism: We compare the performance of different fusion mechanisms in Table 4. The proposed fusion design strongly outperforms late fusion, simple concatenation and attention-based fusion, demonstrating the utility of providing the model with a gate and memory mechanism that gradually filters its representation of the candidate pair using all available information. Nevertheless, late fusion, concatenation, attention-based fusion and the proposed fusion module all boost performance, verifying that the different encoders capture valuable complementary information for HOI detection.
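To make the gate-and-memory idea concrete, the fusion can be pictured as a recurrent cell that folds one encoder output in at a time, deciding at each step how much of the accumulated pair representation to overwrite. The NumPy sketch below is a hypothetical GRU-style illustration (in the spirit of [4]); the weight names, dimensions and initialization are our own assumptions, not FCMNet's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedFusion:
    """Fuse a sequence of encoder features with a GRU-style gate/memory cell.

    Illustrative sketch: each cue (appearance, spatial, context, motion)
    updates a shared memory h, so later cues can refine or suppress what
    earlier cues wrote.  All parameters here are made-up placeholders.
    """
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(dim)
        # update gate z, reset gate r, candidate memory c
        self.Wz, self.Uz = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))
        self.Wr, self.Ur = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))
        self.Wc, self.Uc = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))

    def fuse(self, features):
        h = np.zeros(features[0].shape[0])
        for x in features:                           # one encoder output per step
            z = sigmoid(self.Wz @ x + self.Uz @ h)   # how much to rewrite memory
            r = sigmoid(self.Wr @ x + self.Ur @ h)   # how much old memory to expose
            c = np.tanh(self.Wc @ x + self.Uc @ (r * h))
            h = (1 - z) * h + z * c                  # gated memory update
        return h
```

Because the update is convex between the old memory and a tanh-bounded candidate, the fused representation stays bounded regardless of how many cues are folded in.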
+
+
+Fig. 3: Samples of human object interactions detected by FCMNet. The first four illustrate correct detections; the last two show failure cases.
+
+Table 5: Performance comparison on the V-COCO dataset. The scores are reported in mAP(role) as in the standard evaluation metric and the best score is marked in bold. Our approach sets a new state-of-the-art on this dataset.
+
+| Method | Feature Backbone | $AP_{role}$ |
| --- | --- | --- |
| InteractNet [12] | ResNet-50-FPN | 40.0 |
| BAR-CNN [19] | Inception-ResNet | 41.1 |
| GPNN [31] | ResNet-152 | 44.0 |
| iHOI [40] | ResNet-50-FPN | 45.8 |
| Xu et al. [41] | ResNet-50 | 45.9 |
| iCAN [9] | ResNet-50 | 44.7 |
| Wang et al. [39] | ResNet-50 | 47.3 |
| RPNN [45] | ResNet-50 | 47.5 |
| Li et al. [24] | ResNet-50 | 47.8 |
| PMFNet [38] | ResNet-50-FPN | 52.0 |
| Baseline (Ours) | ResNet-50 | 45.3 |
| FCMNet (Ours) | ResNet-50 | 53.1 |
+
+Qualitative Visual Examples: In Fig. 3, we present qualitative examples to illustrate strengths and failure cases on the HICO-DET dataset. We highlight the detected human and object with red and blue bounding boxes, respectively. The first four samples are challenging cases where our proposed approach produces correct detections, indicating that our model can distinguish subtle visual differences between interactions (first two) and detect co-occurrence relations (third and fourth). The last two show failure cases.
+
+# 4.3 Results and Comparisons
+
+In this section, we compare our FCMNet with several existing approaches. We use combinations of human and object embeddings and the coarse spatial layout configuration (instance boxes) as our baseline; the final FCMNet model integrates all the proposed modules.
+
+For the V-COCO dataset, we present the quantitative results in terms of $AP_{role}$ in Table 5. Our baseline achieves competitive performance with an $AP_{role}$ of 45.3, placing it above approximately half of the listed prior work. Different from those approaches, we use word2vec embeddings to represent the object rather than the visual embedding from ROI pooling, which turns out to be very effective for HOI detection on small datasets like V-COCO. Using the word2vec
+
+Table 6: Performance comparison on the HICO-DET dataset. Mean average precision (mAP) is reported for the default and Known object setting. The best score is marked in bold. Our approach sets a new state-of-the-art on this dataset.
+
+| Method | Feature Backbone | Default Full | Default Rare | Default Non-Rare | Known Obj. Full | Known Obj. Rare | Known Obj. Non-Rare |
| --- | --- | --- | --- | --- | --- | --- |
| InteractNet [12] | ResNet-50-FPN | 9.94 | 7.16 | 10.77 | - | - | - |
| GPNN [31] | ResNet-152 | 13.11 | 9.34 | 14.23 | - | - | - |
| iHOI [40] | ResNet-50-FPN | 13.39 | 9.51 | 14.55 | - | - | - |
| Xu et al. [41] | ResNet-50 | 14.70 | 13.26 | 15.13 | - | - | - |
| iCAN [9] | ResNet-50 | 14.84 | 10.45 | 16.15 | 16.43 | 12.01 | 17.75 |
| Wang et al. [39] | ResNet-50 | 16.24 | 11.16 | 17.75 | - | - | - |
| Li et al. [24] | ResNet-50 | 17.03 | 13.42 | 18.11 | 19.17 | 15.51 | 20.26 |
| Gupta et al [14] | ResNet-152 | 17.18 | 12.17 | 18.68 | - | - | - |
| RPNN [45] | ResNet-50 | 17.35 | 12.78 | 18.71 | - | - | - |
| PMFNet [38] | ResNet-50-FPN | 17.46 | 15.65 | 18.00 | 20.34 | 17.47 | 21.20 |
| Baseline (Ours) | ResNet-50 | 14.77 | 12.27 | 15.65 | 16.07 | 13.97 | 16.82 |
| FCMNet (Ours) | ResNet-50 | 20.41 | 17.34 | 21.56 | 22.04 | 18.97 | 23.12 |
+
+semantic embedding for the object representation enables us to leverage language priors to capture which objects might afford similar actions when the training data is limited. Our full FCMNet model (with all components proposed in Section 3) achieves $53.1\mathrm{mAP}$ , which outperforms prior approaches by a considerable margin.
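As a toy illustration of why semantic embeddings help under limited training data, consider cosine similarity between object vectors: objects that afford similar actions lie close in embedding space, so action evidence can transfer between them. The 4-d vectors below are made up for illustration only; they are not real word2vec embeddings.

```python
import numpy as np

# Made-up embeddings: "bicycle" and "motorcycle" are ridable, "banana" is not,
# so the first two are placed close together in this toy space.
emb = {
    "bicycle":    np.array([0.9, 0.1, 0.8, 0.0]),
    "motorcycle": np.array([0.8, 0.2, 0.9, 0.1]),
    "banana":     np.array([0.0, 0.9, 0.1, 0.8]),
}

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

With such a prior, a model that has seen "ride motorcycle" can assign plausible scores to the rarely seen "ride bicycle", which is the effect exploited on the Rare split.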
+
+For the HICO-DET dataset, we present quantitative results in terms of mAP in Table 6. We report results under the two settings, Default and Known Object. Note that our baseline still performs well and surpasses nearly half of the existing approaches. FCMNet improves upon our baseline by $5.64\mathrm{mAP}$ in the Default setting (Full split) and sets a new state-of-the-art on this dataset for both the Default and Known Object settings.
+
+# 5 Conclusions
+
+We have presented FCMNet, a novel framework for human object interaction detection. We illustrated the importance of the encoding mechanisms for fine-grained spatial layouts and semantic contexts, which enable the model to distinguish subtle differences among interactions. We also showed that predicting plausible motion greatly helps to constrain the space of candidate interactions and boosts performance. By combining these cues via a gate and memory mechanism, FCMNet outperforms state-of-the-art methods on standard human object interaction benchmarks by a considerable margin.
+
+Acknowledgements. The authors gratefully acknowledge the support of the EPSRC Programme Grant Seebibyte EP/M013774/1 and EPSRC Programme Grant CALOPUS EP/R013853/1. The authors would also like to thank Samuel Albanie and Sophia Koepke for helpful suggestions.
+
+# References
+
+1. Adam, A., Rivlin, E., Shimshoni, I., Reinitz, D.: Robust real-time unusual event detection using multiple fixed-location monitors. IEEE transactions on pattern analysis and machine intelligence 30(3), 555-560 (2008)
+2. Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: Netvlad: Cnn architecture for weakly supervised place recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 5297-5307 (2016)
+3. Chao, Y.W., Wang, Z., He, Y., Wang, J., Deng, J.: Hico: A benchmark for recognizing human-object interactions in images. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1017-1025 (2015)
+4. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)
+5. Dai, B., Zhang, Y., Lin, D.: Detecting visual relationships with deep relational networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3076-3086 (2017)
+6. Fang, H.S., Cao, J., Tai, Y.W., Lu, C.: Pairwise body-part attention for recognizing human-object interactions. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 51-67 (2018)
+7. Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(9), 1627-1645 (Sep 2010)
+8. Fouhey, D.F., Zitnick, C.L.: Predicting object dynamics in scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2019-2026 (2014)
+9. Gao, C., Zou, Y., Huang, J.B.: ican: Instance-centric attention network for human-object interaction detection. In: British Machine Vision Conference (2018)
+10. Gao, R., Xiong, B., Grauman, K.: Im2flow: Motion hallucination from static images for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5937-5947 (2018)
+11. Girshick, R., Radosavovic, I., Gkioxari, G., Dollár, P., He, K.: Detectron. https://github.com/facebookresearch/detectron (2018)
+12. Gkioxari, G., Girshick, R., Dollár, P., He, K.: Detecting and recognizing human-object interactions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8359-8367 (2018)
+13. Gupta, S., Malik, J.: Visual semantic role labeling. arXiv preprint arXiv:1505.04474 (2015)
+14. Gupta, T., Schwing, A., Hoiem, D.: No-frills human-object interaction detection: Factorization, appearance and layout encodings, and training techniques. arXiv preprint arXiv:1811.05967 (2018)
+15. Hayes, B., Shah, J.A.: Interpretable models for fast activity recognition and anomaly explanation during collaborative robotics tasks. In: 2017 IEEE International Conference on Robotics and Automation (ICRA). pp. 6586-6593. IEEE (2017)
+16. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 2961-2969 (2017)
+17. Hu, J., Shen, L., Albanie, S., Sun, G., Wu, E.: Squeeze-and-excitation networks. IEEE transactions on pattern analysis and machine intelligence (2019)
+18. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4700-4708 (2017)
+
+19. Kolesnikov, A., Kuznetsova, A., Lampert, C., Ferrari, V.: Detecting visual relationships using box attention. In: Proceedings of the IEEE International Conference on Computer Vision Workshops (2019)
+20. Krishna, R., Chami, I., Bernstein, M., Fei-Fei, L.: Referring relationships. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6867-6876 (2018)
+21. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32-73 (2017)
+22. Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Duerig, T., et al.: The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982 (2018)
+23. Li, Y., Ouyang, W., Zhou, B., Wang, K., Wang, X.: Scene graph generation from objects, phrases and region captions. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1261-1270 (2017)
+24. Li, Y.L., Zhou, S., Huang, X., Xu, L., Ma, Z., Fang, H.S., Wang, Y., Lu, C.: Transferable interactiveness knowledge for human-object interaction detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3585-3594 (2019)
+25. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+26. Lu, C., Krishna, R., Bernstein, M., Fei-Fei, L.: Visual relationship detection with language priors. In: European Conference on Computer Vision. pp. 852-869. Springer (2016)
+27. Luo, Y., Zheng, Z., Zheng, L., Guan, T., Yu, J., Yang, Y.: Macro-micro adversarial network for human parsing. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 418-434 (2018)
+28. Mallya, A., Lazebnik, S.: Learning models for actions and person-object interactions with transfer to question answering. In: European Conference on Computer Vision. pp. 414-428. Springer (2016)
+29. Murphy, K.P., Torralba, A., Freeman, W.T.: Using the forest to see the trees: A graphical model relating features, objects, and scenes. In: Advances in neural information processing systems. pp. 1499-1506 (2004)
+30. Peyre, J., Laptev, I., Schmid, C., Sivic, J.: Detecting rare visual relations using analogies. arXiv preprint arXiv:1812.05736 (2018)
+31. Qi, S., Wang, W., Jia, B., Shen, J., Zhu, S.C.: Learning human-object interactions by graph parsing neural networks. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 401-417 (2018)
+32. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+33. Shen, L., Yeung, S., Hoffman, J., Mori, G., Fei-Fei, L.: Scaling human-object interaction recognition through zero-shot learning. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1568-1576. IEEE (2018)
+34. Torralba, A., Murphy, K.P., Freeman, W.T., Rubin, M.A.: Context-based vision system for place and object recognition (2003)
+35. Villegas, R., Yang, J., Hong, S., Lin, X., Lee, H.: Decomposing motion and content for natural video sequence prediction. arXiv preprint arXiv:1706.08033 (2017)
+
+36. Vondrick, C., Pirsiavash, H., Torralba, A.: Generating videos with scene dynamics. In: Advances In Neural Information Processing Systems. pp. 613-621 (2016)
+37. Walker, J., Gupta, A., Hebert, M.: Dense optical flow prediction from a static image. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2443-2451 (2015)
+38. Wan, B., Zhou, D., Liu, Y., Li, R., He, X.: Pose-aware multi-level feature network for human object interaction detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9469-9478 (2019)
+39. Wang, T., Anwer, R.M., Khan, M.H., Khan, F.S., Pang, Y., Shao, L., Laaksonen, J.: Deep contextual attention for human-object interaction detection. arXiv preprint arXiv:1910.07721 (2019)
+40. Xu, B., Li, J., Wong, Y., Zhao, Q., Kankanhalli, M.S.: Interact as you intend: Intention-driven human-object interaction detection. IEEE Transactions on Multimedia (2019)
+41. Xu, B., Wong, Y., Li, J., Zhao, Q., Kankanhalli, M.S.: Learning to detect human-object interactions with knowledge. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
+42. Xu, D., Zhu, Y., Choy, C.B., Fei-Fei, L.: Scene graph generation by iterative message passing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5410-5419 (2017)
+43. Zhang, H., Kyaw, Z., Chang, S.F., Chua, T.S.: Visual translation embedding network for visual relation detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 5532-5540 (2017)
+44. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1452-1464 (2017)
+45. Zhou, P., Chi, M.: Relation parsing neural network for human-object interaction detection. In: Proceedings of the IEEE international conference on computer vision (2019)
+46. Zhuang, B., Liu, L., Shen, C., Reid, I.: Towards context-aware interaction recognition for visual relationship detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 589-598 (2017)
\ No newline at end of file
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/images.zip b/amplifyingkeycuesforhumanobjectinteractiondetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3bb74b62898519d33db1ed0056f67ec1774a5482
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:683d12de5eeedfeb99753d62843aa42cdf1c9b2a297205f2d8073ccb5e62a3c3
+size 377616
diff --git a/amplifyingkeycuesforhumanobjectinteractiondetection/layout.json b/amplifyingkeycuesforhumanobjectinteractiondetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d9548625d4fb90eaea81a05294a2c4db29fc5a4f
--- /dev/null
+++ b/amplifyingkeycuesforhumanobjectinteractiondetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb68077759b2425645d479faa99ebb61e466f09e79ac6ab529c52106cbd756ac
+size 391625
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_content_list.json b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7444b4a1bdb4ae3e280cf29cbcf40df94318638a
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cf350aff25c752fc1dc53e77d44c78c02494e857125492ba2a6abe44328948a
+size 93742
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_model.json b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9a3fdb266fe3f2288b9f39ff47385d1d5e28ae78
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcb6e74479a4d68b6e243d7e31a08464f8facb792f7e80025f99cb44ec099805
+size 117610
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_origin.pdf b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8de003bad8537c3a1761fef33d380bad148688f3
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/b783e8f1-7c96-49a4-995b-8d79b6ccf0a0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb45b5d4db2fb8452064ff81b7387e2a2d469eea1738076fa4c4bf1c0227ee37
+size 3444766
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/full.md b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6691e8c0e3a8c8228d5f4411d4fd9ac3b187c4ea
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/full.md
@@ -0,0 +1,424 @@
+# An Analysis of Sketched IRLS for Accelerated Sparse Residual Regression
+
+Daichi Iwata, Michael Waechter, Wen-Yan Lin, and Yasuyuki Matsushita
+
+Graduate School of Information Science and Technology, Osaka University {iwata.daichi,waechter.michael,lin.daniel,yasumat}@ist.osaka-u.ac.jp
+
+Abstract. This paper studies the problem of sparse residual regression, i.e., learning a linear model using a norm that favors solutions in which the residuals are sparsely distributed. This is a common problem in a wide range of computer vision applications where a linear system has a lot more equations than unknowns and we wish to find the maximum feasible set of equations by discarding unreliable ones. We show that one of the most popular solution methods, iteratively reweighted least squares (IRLS), can be significantly accelerated by the use of matrix sketching. We analyze the convergence behavior of the proposed method and show its efficiency on a range of computer vision applications. The source code for this project can be found at https://github.com/Diwata0909/Sketched_IRLS.
+
+Keywords: sparse residual regression, $\ell_1$ minimization, randomized algorithm, matrix sketching
+
+# 1 Introduction
+
+We consider the problem of residual minimization, where we wish to learn a linear model that minimizes the residuals that deviate from the model in some distance metric. For a linear model we have a matrix $\mathbf{A} \in \mathbb{R}^{n \times d}$ (we consider tall matrices, i.e., the strongly over-determined case with $n \gg d$ ), a vector $\mathbf{b} \in \mathbb{R}^{n \times 1}$ , $\mathbf{b} \notin \mathcal{R}(\mathbf{A})$ (the range of $\mathbf{A}$ ), and we seek to find
+
+$$
+\mathbf {x} ^ {*} = \underset {\mathbf {x}} {\operatorname {a r g m i n}} \| \mathbf {A x} - \mathbf {b} \| _ {p} ^ {p} \tag {1}
+$$
+
+for some $p$ -norm. In this paper, we consider this linear model and call it an $\ell_p$ (residual) minimization problem. For the more general case including non-linear residual minimization we refer the reader to, e.g., Aftab and Hartley [1] or Kiani and Drummond [36].
+
+One of the most popular methods for $\ell_p$ residual minimization with $1 \leq p < 2$ is iteratively reweighted least squares (IRLS), in which weighted $\ell_2$ minimization is repeatedly computed, eventually converging to the $\ell_p$ solution. It is generally understood that $p = 2$ is optimal for Gaussian distributed errors whereas the 1-norm leads to solutions with sparser residuals (or Laplacian distributed errors). For sparse residual regression, the $\ell_0$ pseudo-norm is appropriate to consider; however, due to its computational complexity, a convex $\ell_1$ relaxation is
+
+often employed. Because of its capability of ignoring large outliers and reaching approximate solutions that yield sparse residuals, $\ell_1$ residual minimization has become an important tool for various computer vision tasks. Arguably, the core reason why $\ell_2$ is still widely preferred over $\ell_1$ , despite $\ell_1$ 's better suitability for many applications, is simply that $\ell_2$ residual minimization can be solved efficiently in closed form while $\ell_1$ residual minimization cannot. Therefore, accelerating $\ell_1$ residual minimization is desirable, especially when the computing budget is limited, e.g., in end-user applications on mobile devices.
+
+In recent years, randomized algorithms have been gaining attention because they offer immense speed-up potential with small failure probability in various applications. In linear algebra, matrix operations can be made efficient with randomized matrix sketching [39,31,58], which is based on the idea of approximating an input matrix by multiplying it with a randomly generated sketching matrix to obtain a much smaller matrix that still preserves important properties of the input matrix. In particular, certain sketching matrices $\mathbf{S}$ fulfill the subspace embedding property [11, Sec. 1.2]
+
+
+Fig. 1: Illustration of sketched IRLS: Unlike canonical IRLS, we suggest performing an approximate computation in sketched lower dimensions, yielding significant speed-up while retaining accuracy.
+
+$$
+\frac {1}{\gamma} \left\| \mathbf {A x} \right\| _ {2} ^ {2} \leq \left\| \mathbf {S A x} \right\| _ {2} ^ {2} \leq \gamma \left\| \mathbf {A x} \right\| _ {2} ^ {2} \tag {2}
+$$
+
+for some $\gamma > 1$ with high probability, meaning that the $\ell_2$ subspace embedding $\mathbf{S}$ preserves the lengths of all vectors $\mathbf{A}\mathbf{x}$ within the bounds specified by $\gamma$ , indicating that we can preserve $\mathbf{A}$ 's range via sketching even though the sketched matrix $\mathbf{SA}$ lives in lower dimensions. This allows us to accelerate $\ell_2$ regression.
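The subspace embedding property of Eq. (2) is easy to check empirically. The snippet below sketches a tall random system with a dense Gaussian sketching matrix and measures the worst observed distortion $\gamma$ over random directions $\mathbf{x}$; the problem sizes are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 5000, 10, 400                    # tall system, sketch size s >> d
A = rng.normal(size=(n, d))
S = rng.normal(scale=1.0 / np.sqrt(s), size=(s, n))   # dense Gaussian sketch

# Measure ||SAx||^2 / ||Ax||^2 over random directions x, as in Eq. (2).
ratios = np.array([
    np.linalg.norm(S @ (A @ x)) ** 2 / np.linalg.norm(A @ x) ** 2
    for x in rng.normal(size=(100, d))
])
gamma = max(ratios.max(), 1.0 / ratios.min())  # worst observed distortion
print(f"empirical distortion gamma ~= {gamma:.3f}")
```

For $s$ much larger than $d$, the observed $\gamma$ stays close to 1, i.e., the sketch preserves the lengths of vectors in $\mathbf{A}$'s range even though $\mathbf{SA}$ has only $s$ rows.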
+
+In this paper, we show that $\ell_1$ residual regression with IRLS can be significantly accelerated by matrix sketching, making it much more useful in practical applications. We call the new method sketched IRLS. The key idea is to speed up the internal computation block by projecting the linear system to lower dimensions using matrix sketching as illustrated in Fig. 1. Through comparisons with other robust techniques such as RANSAC, we also show that our method is versatile. This paper's contributions are summarized as follows:
+
+- We propose accelerating IRLS for $\ell_1$ minimization with matrix sketching,
+- we analyze the error bound of sketched IRLS compared to canonical IRLS,
+- and we provide an analysis of our proposed method's effectiveness on common computer vision problems using both synthetic and real-world data.
+
+The proposed method yields the important benefit that the fundamental computer vision task of regression can be executed in an outlier-robust manner with
+
+the flexible tool of IRLS without excessive computational burden, allowing it to be used in diverse applications with large datasets.
+
+# 2 Related works
+
+The problem we study in this paper is related to sparse regression and matrix sketching. Here we briefly review these two subject areas.
+
+Sparse regression The field of sparse regression has recently developed rapidly, and its usefulness due to its outlier and noise robustness has been recognized in many tasks such as face recognition [59,38], image denoising [23], general signal processing [40], and background estimation [20]. Elad [22] and Zhang [61] give technical overviews on sparse regression. The goal of sparse regression is to obtain solutions where many of the unknowns become zero. A typical setting aims at minimizing the $\ell_0$ norm of the solution with an under-determined constraint as
+
+$$
+\min _ {\mathbf {x}} \| \mathbf {x} \| _ {0} \quad \text {s . t .} \quad \mathbf {A x} = \mathbf {b}. \tag {3}
+$$
+
+Similar to this problem is the problem of minimizing the $\ell_0$ norm of residuals (called robust sensing in [35]) with over-determined $\mathbf{A}\mathbf{x} \simeq \mathbf{b}$ , expressed as
+
+$$
+\min _ {\mathbf {r}} \| \mathbf {r} \| _ {0} \quad \text {s . t .} \quad \mathbf {r} = \mathbf {b} - \mathbf {A x}. \tag {4}
+$$
+
+Both of these look for sparse unknowns via minimization of the $\ell_0$ -norm of a vector, which is NP-hard. To address this issue, various optimization strategies have been developed. One of them is based on greedy pursuit and tries to find an approximate solution for $\ell_0$ minimization directly in an iterative manner [41,47,54,15,43]. Greedy methods are simple and easy to implement, but they are only guaranteed to work under very restricted conditions [52].
+
+On the other hand, relaxation methods cast the NP-hard $\ell_0$ minimization into, for example, $\ell_1$ minimization, which is a convex surrogate for $\ell_0$ minimization. Gorodnitsky and Rao proposed the FOCUSS algorithm [28], in which they approximate $\ell_1$ minimization with the 2-norm using IRLS. It is also understood that $\ell_1$ minimization can be solved by linear programming (LP) [26]. Several methods have been proposed to solve the $\ell_1$ minimization problem, for example, one based on the interior point method [33], Least Angle Regression (LARS) [21] which is a homotopy algorithm, and the Iterative Shrinkage Thresholding Algorithm (ISTA) [13], to name a few. An excellent summary of recent $\ell_1$ minimization methods is given by Yang et al. [60].
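To make the flavor of such relaxation methods concrete, here is a minimal ISTA sketch for the lasso form of the $\ell_1$-relaxed problem, $\min_{\mathbf{x}} \frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_2^2 + \lambda\|\mathbf{x}\|_1$. The step size rule and $\lambda$ are standard textbook choices for illustration, not a tuned reimplementation of any cited method.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Minimal ISTA sketch: gradient step followed by soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = 5.0          # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.5)
```

The soft-thresholding step is what produces exact zeros in the solution, which is why this family of methods yields genuinely sparse estimates rather than merely small coefficients.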
+
+Among the $\ell_1$ minimization methods, IRLS is simple and easily implemented as it only iterates weighted $\ell_2$ minimization, and it is known for its fast convergence, i.e., only requiring a small number of iterations to achieve an accurate solution [46,14,9]. Despite the simplicity of the algorithm, $\ell_1$ minimization with IRLS shows good performance in recent applications [20,38,42]. Also, IRLS can be naturally extended to more general $\ell_p$ minimization and minimization with other robust functions such as Huber and the Pseudo-Huber functions [1]. To
+
+handle nonlinear problems, generalized IRLS has also been proposed and analyzed theoretically [49,44]. Due to this simplicity and versatility, IRLS has been one of the most popular methods for $\ell_1$ minimization and related problems.
+
+Matrix sketching Randomized algorithms are attracting attention recently as methods to speed up fundamental computation. In linear algebra, matrix sketching is a randomized algorithm for matrix operations. Woodruff [58], Mahoney [39], Halko et al. [31], and Drineas et al. [16] give technical overviews on matrix sketching and randomized algorithms. In recent years, randomized algorithms have been widely applied to some specialized linear and algebraic problems, e.g., consistent linear systems [29], symmetric diagonally dominant linear systems [12], and matrix inversion [30]. Matrix sketching is also used for accelerating several types of matrix decomposition, e.g., singular value decomposition [31], QR decomposition [19], and dynamic mode decomposition [24].
+
+As mentioned, matrix sketching is based on the idea that the range of an input matrix can be well approximated with high probability by random projection. Such a sketched matrix can then, for example, be used in regression problems, as it fulfills the subspace embedding property of Eq. (2). One of the earliest sketching matrices (projectors) was the random Gaussian matrix [27]: Let $\mathbf{S} \in \mathbb{R}^{s \times n}$ be a matrix randomly sampled from a Gaussian distribution. Such an $\mathbf{S}$ fulfills the subspace embedding property with $\gamma = 1 + \varepsilon$ and a small $\varepsilon \geq 0$ . This is a consequence of the Johnson-Lindenstrauss lemma [32]. However, with a dense $\mathbf{S}$ , sketching takes $\mathcal{O}(nds)$ time, which is very costly. To overcome this, various methods with other sketching matrices have been proposed. They can roughly be divided into sampling-based and projection-based methods.
+
+Sampling-based methods extract rows of the matrix. The simple intuition is that, in strongly overdetermined systems, the row vectors are mostly linearly dependent and a subset of them is sufficient to maintain their span. Selecting rows of the matrix is equivalent to subsampling the data points. Row sampling could be achieved by premultiplying the input matrix with a binary matrix that picks the desired rows, but for efficiency this is in practice implemented by simply selecting those rows in a streaming fashion. The simplest sampling method is uniform sampling, which selects rows with uniform probability. Drineas et al. [17] devised leverage score sampling, which samples based on precomputed leverage scores that unfortunately require computing the singular value decomposition (SVD), making leverage score sampling somewhat impractical.
+
+Projection-based methods premultiply the input matrix with more general matrices, so that each basis vector of the codomain is a linear combination of several of $\mathbf{A}$'s domain basis vectors rather than a simple selection of them. Ailon and Chazelle [3] proposed the Fast Johnson-Lindenstrauss Transform (FJLT), and Tropp [53] and Drineas et al. [18] proposed the Subsampled Randomized Hadamard Transform (SRHT) as an extension; both take $\mathcal{O}(nd\log s)$ time. Clarkson and Woodruff further proposed the much faster CountSketch method [11], which was originally used in data streaming [8]. CountSketch takes only $\mathcal{O}(\mathrm{nnz}(\mathbf{A}))$ time, where $\mathrm{nnz}(\mathbf{A})$ is the number of non-zero entries of $\mathbf{A}$.
+
+# 3 Proposed method: sketched IRLS
+
+For $\ell_p$ minimization ( $1 \leq p < 2$ ) with linear models, IRLS converges to Eq. (1)'s solution by iteratively minimizing
+
+$$
+\mathbf{x}^{(t+1)} = \underset{\mathbf{y}}{\operatorname{argmin}} \left\| \mathbf{W}^{(t)} \mathbf{A}\mathbf{y} - \mathbf{W}^{(t)} \mathbf{b} \right\|_2^2 \tag{5}
+$$
+
+with a diagonal weight matrix $\mathbf{W}^{(t)}$ at the $(t)^{\mathrm{th}}$ iteration that is initialized with $\mathbf{W}^{(0)} = \mathbf{I}_n$ . This is called the $\ell_p$ Weiszfeld algorithm [56,57,2]. For $\ell_1$ minimization, where $p = 1$ , the weights are updated as
+
+$$
+\mathbf {W} _ {i, i} ^ {(t)} = \left| \mathbf {A} _ {i, *} \mathbf {x} ^ {(t)} - b _ {i} \right| ^ {- \frac {1}{2}},
+$$
+
+where $\mathbf{A}_{i,*}$ is matrix $\mathbf{A}$ 's $i^{\mathrm{th}}$ row, and $\mathbf{W}_{i,i}$ is weight matrix $\mathbf{W}$ 's $i^{\mathrm{th}}$ diagonal element. Solving Eq. (5) requires $\mathcal{O}(nd^2 + d^3)$ arithmetic operations and is thus expensive for large matrices. Further, since it is repeatedly solved with updated $\mathbf{W}$ in IRLS, it is the computational bottleneck for IRLS-based $\ell_p$ minimization.
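+As a concrete reference point, this loop can be written in a few lines of NumPy (our own minimal rendering, with a $\max\{\epsilon,\cdot\}$ guard against zero residuals as an implementation detail):
+
+```python
+import numpy as np
+
+def irls_l1(A, b, T=50, delta=1e-8, eps=1e-8):
+    """IRLS for min_x ||A x - b||_1 via the weighted l2 problem of Eq. (5).
+
+    The weight update is W_ii = |A_{i,*} x - b_i|^(-1/2); eps guards it
+    against division by zero residuals."""
+    x = np.zeros(A.shape[1])
+    w = np.ones(A.shape[0])  # W^(0) = I_n
+    for _ in range(T):
+        # argmin_y ||W A y - W b||_2^2, solved as ordinary least squares
+        x_new = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
+        if np.linalg.norm(x_new - x) < delta:
+            break
+        x = x_new
+        w = np.maximum(np.abs(A @ x - b), eps) ** -0.5
+    return x
+```
+
+Each call to the least-squares solver costs $\mathcal{O}(nd^2 + d^3)$, which is exactly the computational block the sketching in this section targets.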
+
+The key idea of the proposed method is to accelerate this computation block, the weighted $\ell_2$ minimization of Eq. (5), with matrix sketching. Via matrix sketching, we reduce $n$ -dimensional to much smaller $s$ -dimensional vectors so that the computational complexity is significantly reduced. Specifically, Eq. (5)'s weighted $\ell_2$ minimization is modified as
+
+$$
+\min _ {\mathbf {x}} \left\| \mathbf {W A x} - \mathbf {W b} \right\| _ {2} ^ {2} \quad \xrightarrow {\mathrm {s k e t c h}} \quad \min _ {\mathbf {x}} \left\| \widetilde {\mathbf {W A x}} - \widetilde {\mathbf {W b}} \right\| _ {2} ^ {2},
+$$
+
+where $\mathbf{WA} \in \mathbb{R}^{n \times d}$ and $\widetilde{\mathbf{WA}} \in \mathbb{R}^{s \times d}$, $s \ll n$, while retaining the solution's accuracy. When adopting matrix sketching in IRLS, there are two aspects to consider: (1) when to sketch in the algorithm, and (2) the choice of the sketching method.
+
+# 3.1 When to sketch?
+
+There are two possible points in time for applying matrix sketching in IRLS: (1) sketching only once before all IRLS iterations (named sketch-once), and (2) sketching in every iteration of the weighted $\ell_2$ minimization (named sketch-iteratively). We analyze the behavior of these two strategies in this paper.
+
+Sketch-once Not only the $\ell_2$ minimization but also the sketching contributes to the algorithm's overall runtime. One way to reduce the sketching costs is to only perform it once at the beginning of IRLS, outside of IRLS's iteration loop. Algorithm 1 shows pseudo code for IRLS with sketch-once.
+
+Sketch-iteratively Unlike sketch-once, sketch-iteratively performs sketching within the IRLS iteration loop. In each IRLS iteration, we generate an $\mathbf{S} \in \mathbb{R}^{s \times n}$ with $s \ll n$ and instead of Eq. (5) we solve
+
+$$
+\mathbf{x}^{(t+1)} = \underset{\mathbf{x}}{\operatorname{argmin}} \left\| \underbrace{\mathbf{S}\mathbf{W}^{(t)}\mathbf{A}}_{=\widetilde{\mathbf{W}\mathbf{A}}^{(t)}} \mathbf{x} - \underbrace{\mathbf{S}\mathbf{W}^{(t)}\mathbf{b}}_{=\widetilde{\mathbf{W}\mathbf{b}}^{(t)}} \right\|_2^2.
+$$
+
+Algorithm 1 IRLS with sketch-once
+Input: $\mathbf{A}\in \mathbb{R}^{n\times d}$, $\mathbf{b}\in \mathbb{R}^n$, sketching size $s$, #iterations $T$, threshold $\delta$ for termination
+Output: Approximate solution $\mathbf{x}$
+ $\widetilde{\mathbf{W}}^{(0)}\gets$ identity matrix $\mathbf{I}_s$
+ $\mathbf{x}^{(0)}\gets \mathbf{0}$
+ $\widetilde{\mathbf{A}},\widetilde{\mathbf{b}}\gets \mathrm{sketch}(\mathbf{A},\mathbf{b},s)$
+for $t = 0:T$ do
+ $\mathbf{x}^{(t+1)}\gets \mathrm{solve\_linear\_least\_squares}(\widetilde{\mathbf{W}}^{(t)}\widetilde{\mathbf{A}},\widetilde{\mathbf{W}}^{(t)}\widetilde{\mathbf{b}})$
+ if $\|\mathbf{x}^{(t+1)} - \mathbf{x}^{(t)}\|_2 < \delta$ then break end if
+ $\widetilde{\mathbf{W}}_{i,i}^{(t+1)}\gets \bigl(\max\{\epsilon, |\widetilde{\mathbf{A}}_{i,*}\mathbf{x}^{(t+1)} - \widetilde{b}_i|\}\bigr)^{-\frac{1}{2}}$
+end for
+
+Algorithm 2 IRLS with sketch-iteratively
+Input: $\mathbf{A}\in \mathbb{R}^{n\times d}$, $\mathbf{b}\in \mathbb{R}^n$, sketching size $s$, #iterations $T$, threshold $\delta$ for termination
+Output: Approximate solution $\mathbf{x}$
+ $\mathbf{W}^{(0)}\gets$ identity matrix $\mathbf{I}_n$
+ $\mathbf{x}^{(0)}\gets \mathbf{0}$
+for $t = 0:T$ do
+ $\widetilde{\mathbf{W}\mathbf{A}}^{(t)},\widetilde{\mathbf{W}\mathbf{b}}^{(t)}\gets \mathrm{sketch}(\mathbf{W}^{(t)}\mathbf{A},\mathbf{W}^{(t)}\mathbf{b},s)$
+ $\mathbf{x}^{(t+1)}\gets \mathrm{solve\_linear\_least\_squares}(\widetilde{\mathbf{W}\mathbf{A}}^{(t)},\widetilde{\mathbf{W}\mathbf{b}}^{(t)})$
+ if $\|\mathbf{x}^{(t+1)} - \mathbf{x}^{(t)}\|_2 < \delta$ then break end if
+ $\mathbf{W}_{i,i}^{(t+1)}\gets \left(\max\{\epsilon, |\mathbf{A}_{i,*}\mathbf{x}^{(t+1)} - b_i|\}\right)^{-\frac{1}{2}}$
+end for
+
+While sketch-iteratively requires more computation time than sketch-once because of the sketching operation in the loop, it is expected to be more stable because the linear system is sketched differently in each iteration. Pseudo code for IRLS with sketch-iteratively is shown in Algorithm 2.
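+A compact NumPy rendering of the sketch-iteratively loop (our own sketch; uniform row sampling stands in for the generic sketch routine, and other operators can be swapped in):
+
+```python
+import numpy as np
+
+def sketch_uniform(WA, Wb, s, rng):
+    # Uniform sampling: keep s random rows; equivalent to premultiplying
+    # with a binary S that has a single 1 per row.
+    idx = rng.choice(WA.shape[0], size=s, replace=False)
+    return WA[idx], Wb[idx]
+
+def irls_sketch_iteratively(A, b, s, T=50, delta=1e-8, eps=1e-8, seed=0):
+    rng = np.random.default_rng(seed)
+    n, d = A.shape
+    w = np.ones(n)       # W^(0) = I_n, kept at full size n
+    x = np.zeros(d)
+    for _ in range(T):
+        # fresh sketch of the weighted system in every iteration
+        WAs, Wbs = sketch_uniform(w[:, None] * A, w * b, s, rng)
+        x_new = np.linalg.lstsq(WAs, Wbs, rcond=None)[0]
+        if np.linalg.norm(x_new - x) < delta:
+            break
+        x = x_new
+        # weights are updated from the full (unsketched) residuals
+        w = np.maximum(np.abs(A @ x - b), eps) ** -0.5
+    return x
+```
+
+Note that the weight update uses the full $\mathbf{A}$ and $\mathbf{b}$, as in Algorithm 2; only the least-squares solve sees the $s$-row system.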
+
+# 3.2 Sketching method choice
+
+In Sec. 2, we discussed various matrix sketching methods. In this study, we mainly focus on uniform sampling and CountSketch because of their computational efficiency. Both make the sketching matrix sparse, resulting in sketching computation costs of only $\mathcal{O}(\mathrm{nnz}(\mathbf{A}))$ time. In practice, they can be implemented in a streaming fashion without explicitly forming the sketching matrix $\mathbf{S}$.
+
+Uniform sampling samples rows of a tall matrix $[\mathbf{A}|\mathbf{b}]$ or $\mathbf{W}[\mathbf{A}|\mathbf{b}]$ with uniform probability so that the row-dimension can be reduced. Uniform sampling of a massively over-determined system has been employed in the past in practical applications, in a similar manner to the sketch-once approach described in the previous section. In an explicit matrix form, the sketching matrix $\mathbf{S}$ is a sparse matrix in which each row has only one element that is 1 while the rest are zeros. CountSketch [11] is similar to uniform sampling, but instead of subsampling rows of matrix $\mathbf{A}$ , it uses a sketching matrix $\mathbf{S}$ in which each column has a single randomly chosen non-zero entry (typically, 1 or $-1$ ).
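+A minimal streaming-style CountSketch (our own illustration): each input row is hashed to one of $s$ buckets and accumulated with a random sign, which realizes $\mathbf{S}\mathbf{A}$ in $\mathcal{O}(\mathrm{nnz}(\mathbf{A}))$ time without ever forming $\mathbf{S}$.
+
+```python
+import numpy as np
+
+def countsketch(A, s, rng):
+    # Bucket h(i) and sign g(i) per input row; row i of A is accumulated
+    # into bucket h(i) with sign g(i).  This equals S @ A for an S with
+    # exactly one +/-1 entry per column.
+    n = A.shape[0]
+    h = rng.integers(0, s, size=n)
+    g = rng.choice([-1.0, 1.0], size=n)
+    SA = np.zeros((s, A.shape[1]))
+    np.add.at(SA, h, g[:, None] * A)  # unbuffered add handles bucket collisions
+    return SA
+```
+
+Sketching $[\mathbf{A}\,|\,\mathbf{b}]$ jointly applies the same $\mathbf{S}$ to both sides of the linear system.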
+
+# 4 Theoretical analysis
+
+We now analyze the errors in IRLS solutions obtained with and without sketching. To this end, we will show that the proposed sketched IRLS can reliably derive a solution close to the non-sketched case. As mentioned, sketching can approximate a matrix's range with a certain accuracy. Since "sketch-iteratively" uses sketching repeatedly, we consider the errors introduced in each iteration. The goal is to reveal the relationship between $\left\| \mathbf{A}\widetilde{\mathbf{x}} -\mathbf{b}\right\| _1$ and $\left\| \mathbf{A}\mathbf{x}^{*} - \mathbf{b}\right\|_{1}$ for IRLS, where $\mathbf{x}^*$ is the optimal solution, and $\widetilde{\mathbf{x}}$ is the approximate solution obtained with sketched IRLS. We derive it by combining the error bounds of sketching and IRLS's convergence rate. Let $\mathbf{x}^{*(t)}$ and $\widetilde{\mathbf{x}}^{(t)}$ be the solutions of canonical and sketched IRLS, resp., after $t$ iterations, and $\widetilde{\mathbf{x}}^{*(t + 1)}$ be the solution obtained by solving Eq. (5) without sketching and using $\mathbf{W}$ based on $\widetilde{\mathbf{x}}^{(t)}$ .
+
+# 4.1 Condition for the residual to decrease
+
+Before considering error bounds, we first show what condition must be fulfilled so that the residual in sketched IRLS decreases monotonically, i.e., $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t + 1)} - \mathbf{b}\right\| _1\leq \left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b}\right\| _1$ . For matrix sketching with $\ell_2$ regression problems, several error bounds are known [39,48,17]. In a general form, we can write them as
+
+$$
+\left\| \mathbf {A} \tilde {\mathbf {x}} - \mathbf {b} \right\| _ {2} \leq (1 + \varepsilon) \left\| \mathbf {A} \mathbf {x} ^ {*} - \mathbf {b} \right\| _ {2}, \tag {6}
+$$
+
+$$
+\left\| \widetilde {\mathbf {x}} - \mathbf {x} ^ {*} \right\| _ {2} = \left\| \Delta \mathbf {x} \right\| _ {2} \leq \varepsilon_ {x}, \tag {7}
+$$
+
+using $\varDelta \mathbf{x} \coloneqq \widetilde{\mathbf{x}} - \mathbf{x}^{*}$ and small errors $\varepsilon$ and $\varepsilon_{x}$, which depend on the sketching method and sketching size. The error decay rate is known for canonical IRLS, e.g., linear and super-linear convergence rates [49,14]. According to Daubechies et al. [14, Sec. 6.1], for sparse-variable problems (Eq. (3)) we have
+
+$$
+\left\| \widetilde {\mathbf {x}} ^ {* (t + 1)} - \mathbf {x} ^ {*} \right\| _ {1} \leq \mu \left\| \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {x} ^ {*} \right\| _ {1} \tag {8}
+$$
+
+with a constant $\mu \leq 1$ . It is known that sparse variable problems (Eq. (3)) and sparse residual problems (Eq. (4)) can be treated as the same problem under $\mathbf{x} = \mathbf{r}$ [7], and for sparse residual problems we have
+
+$$
+\left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {* (t + 1)} - \mathbf {b}\right) - \left(\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}\right) \right\| _ {1} \leq \mu \left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {b}\right) - \left(\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}\right) \right\| _ {1}. \tag {9}
+$$
+
+From these, the residuals $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t + 1)} - \mathbf{b}\right\| _1$ and $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b}\right\| _1$ satisfy
+
+$$
+\begin{array}{l} \left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {(t + 1)} - \mathbf {b}\right) \right\| _ {1} - \left\| \left(\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}\right) \right\| _ {1} \leq \left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {(t + 1)} - \mathbf {b}\right) - \left(\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}\right) \right\| _ {1} \\ = \left\| (\mathbf {A} \widetilde {\mathbf {x}} ^ {* (t + 1)} - \mathbf {b}) + \mathbf {A} \Delta \mathbf {x} - (\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}) \right\| _ {1} \\ \leq \left\| (\mathbf {A} \widetilde {\mathbf {x}} ^ {* (t + 1)} - \mathbf {b}) - (\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}) \right\| _ {1} + \left\| \mathbf {A} \Delta \mathbf {x} \right\| _ {1} \\ \leq \mu \| (\mathbf {A} \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {b}) - (\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}) \| _ {1} + \| \mathbf {A} \Delta \mathbf {x} \| _ {1} \\ \leq \mu \| (\mathbf {A} \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {b}) \| _ {1} + \mu \| (\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}) \| _ {1} + \| \mathbf {A} \Delta \mathbf {x} \| _ {1}. \tag {10} \\ \end{array}
+$$
+
+From Eq. (10), we finally have
+
+$$
+\left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {(t + 1)} - \mathbf {b}\right) \right\| _ {1} \leq \mu \left\| \left(\mathbf {A} \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {b}\right) \right\| _ {1} + (1 + \mu) \left\| \left(\mathbf {A} \mathbf {x} ^ {*} - \mathbf {b}\right) \right\| _ {1} + \left\| \mathbf {A} \Delta \mathbf {x} \right\| _ {1}, \tag {11}
+$$
+
+and know that when $\left\| \mathbf{A}\Delta \mathbf{x}\right\| _1\leq (1 - \mu)\left\| (\mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b})\right\| _1 - (1 + \mu)\left\| (\mathbf{A}\mathbf{x}^* -\mathbf{b})\right\| _1$ holds, then $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t + 1)} - \mathbf{b}\right\| _1\leq \left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b}\right\| _1$. We remark that $\varDelta\mathbf{x}$ satisfies Eq. (7): when the sketching error $\varDelta\mathbf{x}$ is sufficiently close to $\mathbf{0}$, sketched IRLS decreases the objective monotonically.
+
+# 4.2 Worst-case residual bound
+
+In this section, we show the relationship between the residuals of canonical IRLS $\left\| \mathbf{A}\mathbf{x}^{*(t)} - \mathbf{b}\right\| _1$ and sketched IRLS $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b}\right\| _1$ after $t$ IRLS iterations using norm relations. For canonical IRLS, after $t$ iterations we have
+
+$$
+\eta \left\| \mathbf {A} \mathbf {x} ^ {* (0)} - \mathbf {b} \right\| _ {1} = \left\| \mathbf {A} \mathbf {x} ^ {* (t)} - \mathbf {b} \right\| _ {1}, \tag {12}
+$$
+
+where, assuming that $t$ is big enough for $\mathbf{x}^{*(t)}$ to converge to $\mathbf{x}^*$ , $\eta \in [0,1]$ is the ratio between the $\ell_1$ residuals of the solutions for Eq. (1) with $p = 1$ and $p = 2$ . For sketched IRLS, under Section 4.1's condition, after $t$ iterations we have
+
+$$
+\left\| \mathbf {A} \widetilde {\mathbf {x}} ^ {(t)} - \mathbf {b} \right\| _ {1} \leq \left\| \mathbf {A} \widetilde {\mathbf {x}} ^ {(0)} - \mathbf {b} \right\| _ {1}. \tag {13}
+$$
+
+Further, we have the norm relations
+
+$$
+\| \mathbf {x} \| _ {2} \leq \| \mathbf {x} \| _ {1} \leq \sqrt {n} \| \mathbf {x} \| _ {2}, \tag {14}
+$$
+
+where $n$ is the number of dimensions of $\mathbf{x}$ . We can now derive a bound on the solution error due to the sketching after $t$ IRLS iterations:
+
+$$
+\begin{array}{l} \left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b} \right\|_1 \stackrel{\text{Eq. (13)}}{\leq} \left\| \mathbf{A}\widetilde{\mathbf{x}}^{(0)} - \mathbf{b} \right\|_1 \stackrel{\text{Eq. (14)}}{\leq} \sqrt{n} \left\| \mathbf{A}\widetilde{\mathbf{x}}^{(0)} - \mathbf{b} \right\|_2 \stackrel{\text{Eq. (6)}}{\leq} \sqrt{n}\,(1 + \varepsilon) \left\| \mathbf{A}\mathbf{x}^{*(0)} - \mathbf{b} \right\|_2 \\ \stackrel{\text{Eq. (14)}}{\leq} \sqrt{n}\,(1 + \varepsilon) \left\| \mathbf{A}\mathbf{x}^{*(0)} - \mathbf{b} \right\|_1 \stackrel{\text{Eq. (12)}}{=} \sqrt{n}\,(1 + \varepsilon)\,\eta^{-1} \left\| \mathbf{A}\mathbf{x}^{*(t)} - \mathbf{b} \right\|_1. \tag{15} \end{array}
+$$
+
+This bound may not be very tight if $n$ is large; however, we next show that a tighter bound can be expected in practice.
+
+# 4.3 Expected residual bound
+
+The bound shown in Eq. (15) looks somewhat loose mainly because of the right-hand side of Eq. (14). The bound expresses the worst case, and if that case rarely occurs, the bound does not have strong implications. It is therefore reasonable to consider the expected value of the $\ell_1$-norm. Let $\mathbf{x} \in \mathbb{R}^n$ be an arbitrary vector with $\| \mathbf{x} \|_2 = r$. The expectation of the $\ell_1$-norm of $\mathbf{x}$ is $\mathbb{E}[\|\mathbf{x}\|_1] = \mathbb{E}[|x_1|] + \ldots + \mathbb{E}[|x_n|]$. In polar coordinates, $x_1, \ldots, x_n$ are
+
+$$
+\left\{ \begin{array}{r c l} x _ {1} & = & r \cos \theta_ {1}, \\ x _ {2} & = & r \sin \theta_ {1} \cos \theta_ {2}, \\ & \dots & \\ x _ {n - 1} & = & r \sin \theta_ {1} \sin \theta_ {2} \ldots \sin \theta_ {n - 2} \cos \theta_ {n - 1}, \\ x _ {n} & = & r \sin \theta_ {1} \sin \theta_ {2} \ldots \sin \theta_ {n - 2} \sin \theta_ {n - 1}. \end{array} \right.
+$$
+
+Here, we assume that arbitrary random vectors are generated by uniformly distributed angles $\theta$, i.e., the probability density function $p(\theta)$ is constant. In this case, $x_1$ follows the arcsine distribution: for $-r \leq x_{1} \leq r$, the probability density function is $p(x_{1}) = \frac{1}{\pi \sqrt{r^{2} - x_{1}^{2}}}$. Therefore, the expectation $\mathbb{E}[|x_1|]$ becomes
+
+$$
+\begin{array}{l} \mathbb {E} [ | x _ {1} | ] = \int_ {- r} ^ {r} | x _ {1} | p (x _ {1}) \mathrm {d} x _ {1} = \int_ {- r} ^ {r} | x _ {1} | \frac {1}{\pi \sqrt {r ^ {2} - x _ {1} ^ {2}}} \mathrm {d} x _ {1} = \frac {2}{\pi} \int_ {0} ^ {r} \frac {x _ {1}}{\sqrt {r ^ {2} - x _ {1} ^ {2}}} \mathrm {d} x _ {1} \\ = \frac {2}{\pi} \left[ - \sqrt {r ^ {2} - x _ {1} ^ {2}} \right] _ {0} ^ {r} = \frac {2}{\pi} r. \tag {16} \\ \end{array}
+$$
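+Under the stated uniform-$\theta$ assumption, the value $\frac{2}{\pi}r$ can be sanity-checked by Monte Carlo (our own check, with $x_1 = r\cos\theta_1$):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+r = 3.0
+theta1 = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
+estimate = np.mean(np.abs(r * np.cos(theta1)))  # Monte Carlo E[|x_1|]
+print(estimate, 2.0 / np.pi * r)                # both approximately 1.9099
+```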
+
+We can write $x_{2}$ as $x_{2} = r_{1}\cos \theta_{2}$ with $r_1 = r\sin \theta_1$ , and we can obtain $\mathbb{E}[|x_2|]$ by using Eq. (16) recursively, i.e., $\mathbb{E}[|x_2|] = \int_{-\mathbb{E}[r_1]}^{\mathbb{E}[r_1]}|x_2|p(x_2)\mathrm{d}x_2 = \frac{2}{\pi}\mathbb{E}[r_1] = (\frac{2}{\pi})^2 r$ . Finally, the expected value $\mathbb{E}[||\mathbf{x}||_1]$ becomes
+
+$$
+\begin{array}{l} \mathbb {E} [ \| \mathbf {x} \| _ {1} ] = \mathbb {E} [ | x _ {1} | ] + \mathbb {E} [ | x _ {2} | ] + \dots + \mathbb {E} [ | x _ {n - 1} | ] + \mathbb {E} [ | x _ {n} | ] \\ = \frac {2}{\pi} r + \left(\frac {2}{\pi}\right) ^ {2} r + \dots + \left(\frac {2}{\pi}\right) ^ {n - 1} r + \left(\frac {2}{\pi}\right) ^ {n - 1} r \\ = \sum_ {i = 1} ^ {n - 1} \left(\frac {2}{\pi}\right) ^ {i} r + \left(\frac {2}{\pi}\right) ^ {n - 1} r = \frac {\frac {2}{\pi} + \left(\frac {2}{\pi}\right) ^ {n - 1} - 2 \left(\frac {2}{\pi}\right) ^ {n}}{1 - \frac {2}{\pi}} r. \\ \end{array}
+$$
+
+We thus have an expected bound of $\left\| \mathbf{A}\widetilde{\mathbf{x}}^{(t)} - \mathbf{b}\right\|_1 \leq \frac{\frac{2}{\pi} + \left(\frac{2}{\pi}\right)^{n-1} - 2\left(\frac{2}{\pi}\right)^{n}}{1 - \frac{2}{\pi}} (1 + \varepsilon)\eta^{-1}\big\| \mathbf{A}\mathbf{x}^{*(t)} - \mathbf{b}\big\|_1$. With $n\to \infty$, the fraction converges to $\frac{2}{\pi - 2}\simeq 1.75$. This expected value is considerably smaller than the worst-case value $\sqrt{n}$, indicating that when $n$ is large, Sec. 4.2's worst-case bound is hardly relevant. Although this is still far from a strict bound due to the dependency on $\eta$, as we will see in the following sections, sketched IRLS yields highly accurate approximations in practice in various settings.
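+The expected factor can be compared numerically against the worst-case $\sqrt{n}$ (our own check; `expected_factor` evaluates the closed-form fraction derived above):
+
+```python
+import numpy as np
+
+def expected_factor(n):
+    # (2/pi + (2/pi)^(n-1) - 2 (2/pi)^n) / (1 - 2/pi)
+    q = 2.0 / np.pi
+    return (q + q ** (n - 1) - 2.0 * q ** n) / (1.0 - q)
+
+for n in [10, 100, 10_000]:
+    print(n, expected_factor(n), np.sqrt(n))  # factor stays below 1.76
+print(2.0 / (np.pi - 2.0))  # limit for n -> infinity, approximately 1.7519
+```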
+
+# 5 Performance evaluation
+
+We first evaluate the proposed method's performance in comparison to canonical IRLS using synthetic data in common problems, namely, in residual minimization and low-rank approximation.
+
+# 5.1 Residual minimization
+
+In $\ell_1$ residual minimization, given $\mathbf{A} \in \mathbb{R}^{n \times d}$ , $n > d$ and $\mathbf{b} \in \mathbb{R}^n$ , we wish to solve
+
+$$
+\min _ {\mathbf {x}} \left\| \mathbf {A} \mathbf {x} - \mathbf {b} \right\| _ {1}
+$$
+
+for $\mathbf{x} \in \mathbb{R}^d$. To form a highly over-determined linear system, we set the size of matrix $\mathbf{A}$ to $(n,d) = (10^6, 40)$. To assess the performance variation with respect to the outlier distribution, we formed matrix $\mathbf{A}$ in three distinct ways: (a) $\mathbf{A}$ drawn from a uniform distribution in $[0, 10]$ with the signs of $20\%$ of the elements flipped to create outliers (called uniform data $20\%$ hereafter), (b) $\mathbf{A}$ created like (a) but with $60\%$ outliers (called uniform data $60\%$ hereafter), and (c) $\mathbf{A}$ created like (a) but further corrupted by adding a large value ($= 10^3$) to a randomly selected $0.1\%$ of the rows (called biased data hereafter). The ground truth $\mathbf{x}^*$ is created, and based on $\mathbf{x}^*$, $\mathbf{b}$ is pre-computed before adding outliers. We evaluate the error defined as $\frac{1}{d} \| \mathbf{x}^* - \mathbf{x}^{(t)} \|_2$, where $\mathbf{x}^{(t)}$ is the solution after the $t^{\text{th}}$ iteration. We also compare the accuracy with RANSAC [25] to assess the robustness of $\ell_1$ minimization against outliers. Unlike $\ell_1$ minimization, RANSAC requires adjusting a few parameters. We varied the number of samplings, RANSAC's most important parameter, and chose the best result from $\{d + 1, d \times 10, d \times 10^2\}$.
+
+Fig. 2: Averages of the error $\frac{1}{d}\| \mathbf{x}^{*} - \mathbf{x}^{(t)}\|_{2}$ and the standard deviations over time in residual minimization with various matrix sketching methods and RANSAC on (a) uniform synthetic data with a lower outlier rate (left), (b) uniform data with a higher outlier rate (center), and (c) biased synthetic data (right).
+
+Figures 2a, 2b and 2c show the results for uniform data $20\%$, uniform data $60\%$, and biased data, resp. All results are averages of 10 trials with different random seeds. The plots show averages of the error; error bars indicate standard deviations. From Figs. 2a and 2b we observe that $\ell_1$ minimization works well for problems with relatively few outliers, while RANSAC shows slow convergence. Both methods need to solve least-squares problems many times, but whereas IRLS can be expected to improve the solution in each loop, RANSAC does not necessarily do so. RANSAC is further known to fail on high-dimensional problems. Regarding sketching methods, while sampling-based methods with sketch-once work well for uniform data as shown in Fig. 2a, their accuracies become lower and their variances larger on the biased data, as shown in Fig. 2c. Projection-based methods work well for both kinds of data with low variances, especially CountSketch with sketch-iteratively. For leverage score sampling, the comparison was done using the known leverage scores, although they are usually unknown a priori. From here on out, we will focus only on uniform sampling and CountSketch as the chosen sketching methods because of their efficiency, as discussed in Sec. 3.2.
+
+
+Fig. 3: Error over time in $\ell_1$ singular value decomposition on synthetic data with a lower outlier rate (left) and a higher outlier rate (right).
+
+Fig. 4: Comparison between sketch-iteratively and sketch-once. Colors indicate faster convergence for sketch-iteratively or sketch-once.
+
+
+
+# 5.2 Low-rank approximation
+
+Next, we evaluate the proposed method on low-rank approximation with $\ell_1$ singular value decomposition. Given a matrix $\mathbf{M} \in \mathbb{R}^{m \times n}$ , its rank- $r$ approximation $(r < \min(m,n))$ with $\ell_1$ -norm can be written as
+
+$$
+\min _ {\mathbf {U}, \mathbf {V}} \| \mathbf {M} - \mathbf {U V} ^ {\top} \| _ {1},
+$$
+
+where $\mathbf{U}$ and $\mathbf{V}^{\top}$ are $m\times r$ and $r\times n$ matrices, respectively. The solution method involves $\ell_1$ minimization subproblems that are computed iteratively [34] as
+
+$$
+\mathbf{U} \leftarrow \underset{\mathbf{U}}{\operatorname{argmin}} \|\mathbf{M} - \mathbf{U}\mathbf{V}^{\top}\|_1, \quad \mathbf{V} \leftarrow \underset{\mathbf{V}}{\operatorname{argmin}} \|\mathbf{M} - \mathbf{U}\mathbf{V}^{\top}\|_1,
+$$
+
+starting from a random initialization of $\mathbf{V}$ .
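+A hedged NumPy sketch of this alternating scheme on a small scale, with each row/column subproblem solved by a short inner IRLS (`irls_l1` and `l1_lowrank` are our own helper names, not the paper's implementation):
+
+```python
+import numpy as np
+
+def irls_l1(A, b, iters=20, eps=1e-8):
+    # inner IRLS for min_x ||A x - b||_1 with |r|^(-1/2) weights
+    x = np.linalg.lstsq(A, b, rcond=None)[0]
+    for _ in range(iters):
+        w = np.maximum(np.abs(A @ x - b), eps) ** -0.5
+        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
+    return x
+
+def l1_lowrank(M, r, outer=10, seed=0):
+    # alternating minimization of ||M - U V^T||_1, starting from random V
+    m, n = M.shape
+    V = np.random.default_rng(seed).standard_normal((n, r))
+    for _ in range(outer):
+        U = np.stack([irls_l1(V, M[i, :]) for i in range(m)])  # row subproblems
+        V = np.stack([irls_l1(U, M[:, j]) for j in range(n)])  # column subproblems
+    return U, V
+```
+
+Each subproblem is itself a tall $\ell_1$ regression, so the sketched IRLS of Sec. 3 can replace the inner solver at scale.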
+
+We generated an $\mathbf{M} \in \mathbb{R}^{10^4 \times 10^4}$ with $\mathrm{rank}(\mathbf{M}) = 40$ and flipped the signs of $10\%$ and $40\%$ of the elements, resp. We set the sketching size $s$ to 1,600 and 3,200. We also compare the accuracy with robust principal component analysis (R-PCA) [6]. R-PCA decomposes a corrupted matrix $\mathbf{M}$ into a low-rank matrix $\mathbf{A}$ and a sparse corruption matrix $\mathbf{E}$, i.e., $\mathbf{M} = \mathbf{A} + \mathbf{E}$. To solve R-PCA, the Augmented Lagrange Multiplier method (ALM) [37] is a known effective approach, and a previous study accelerated ALM by fast randomized singular value thresholding (FRSVT) [45]. We used native ALM and ALM with FRSVT as comparisons. These methods also require tuning a hyper-parameter $\lambda$, and we chose the best result from $\lambda \in \{10^{-1}, 10^{-2}, 10^{-3}\}$.
+
+To assess the accuracy, we define the error as $\frac{1}{mn} \| \mathbf{M}^* - \mathbf{M}^{(t)} \|_F$, where $\mathbf{M}^*$ is the uncorrupted original matrix and $\mathbf{M}^{(t)}$ is the estimated low-rank matrix after the $t^{\text{th}}$ iteration. Figures 3a and 3b show the results. In this setting, $\ell_1$ minimization achieves high accuracy on both datasets. R-PCA converges quickly but does not work well for the dataset with many outliers. The sketch-once strategy shows about 3 times faster convergence compared to canonical IRLS in Fig. 3a, and the uniform sampling strategy also shows fast convergence in Fig. 3b.
+
+
+Fig. 5: Performance of image stitching with $\ell_1$ and $\ell_2$ homography estimation. (a) Left: input images. Right: stitching results of $\ell_1$ minimization with sketched IRLS and of $\ell_2$ minimization. (b) $\ell_1$ residual variation over time.
+
+# 5.3 When to sketch?
+
+In the following, we show an experiment that serves to give a first intuition for when a user should pick the sketch-once or the sketch-iteratively regime. We generated $\mathbb{R}^{10^4\times 10^4}$ matrices with $10\%$ noise, picked the ranks from $\{10,20,40,80\}$ and the sketching sizes from $\{\mathrm{rank}\times 5,\times 10,\times 20,\times 40,\times 80\}$, and conducted a low-rank approximation experiment. We measured the times $t_{SO}$ and $t_{SI}$ until sketch-once and sketch-iteratively achieved an accuracy of $10^{-5}$. Figure 4 shows the values of $\ln (t_{SO} / t_{SI})$. For each rank, the larger the sketching size, the faster sketch-once converges. As the matrix rank increases, sketch-iteratively shows faster convergence at larger sketching sizes. The sketching size required for ideal approximations increases faster than $O(r)$. When the sketching size is not big enough, the risk of not obtaining ideal approximations becomes higher. Especially at small sketching sizes, sketch-once is susceptible to making bad approximations, whereas sketch-iteratively alleviates this effect by repeated sketching and shows faster convergence for smaller sketching sizes.
+
+# 6 Applications
+
+Countless computer vision tasks are built upon robust regression. In practical scenarios, the input is typically quite large and noisy, either due to a noisy capturing process or a noisy precomputation step, e.g., incorrect feature matches. In this section, we demonstrate the proposed sketched IRLS's effectiveness on such real-world applications. We adapted our method to two problems with overdetermined systems: homography estimation and point cloud alignment.
+
+# 6.1 Homography estimation
+
+Homography estimation is an important building block for image stitching and plane-to-plane alignment. Given correspondences in two images, the image transformation can be written by a $3\times 3$ homography $\mathbf{H}$ with 8 DoF as
+
+$$
+\lambda \left[ x ^ {\prime}, y ^ {\prime}, 1 \right] ^ {\top} = \mathbf {H} \left[ x, y, 1 \right] ^ {\top},
+$$
+
+where $\lambda$ is a scaling factor, and $(x,y)$ and $(x',y')$ are corresponding points in the left and right images, respectively [50]. Given a set of correspondences, one can estimate $\mathbf{H}$ by solving a homogeneous system [51, Chap. 6.1]. The set of correspondences may contain many wrong matches from erroneous feature matching. Therefore, conventional methods use robust estimation techniques such as RANSAC or $\ell_1$ regression.
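+For illustration, the homogeneous system can be assembled from the standard DLT rows and solved in the $\ell_2$ sense via SVD (a sketch without the usual coordinate normalization; `homography_system` and `homography_l2` are our own helper names — a robust $\ell_1$ variant would reweight these rows with IRLS):
+
+```python
+import numpy as np
+
+def homography_system(pts1, pts2):
+    # Each correspondence (x,y) -> (x',y') contributes two rows of the
+    # 2k x 9 homogeneous system A h = 0, with h = vec(H) row-major.
+    rows = []
+    for (x, y), (xp, yp) in zip(pts1, pts2):
+        rows.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
+        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
+    return np.asarray(rows)
+
+def homography_l2(pts1, pts2):
+    A = homography_system(pts1, pts2)
+    # l2 solution: right singular vector of the smallest singular value
+    return np.linalg.svd(A)[2][-1].reshape(3, 3)
+```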
+
+Result We obtained point correspondences by matching AKAZE [4] features in the two images shown in the left column of Fig. 5a. From 17,076 obtained point correspondences we estimated the homography. The second column of Fig. 5a shows the stitching results of $\ell_1$ and $\ell_2$ minimization, respectively. $\ell_1$ minimization successfully joined the two images whereas $\ell_2$ minimization produced a strongly erroneous result due to feature mismatches. In this experiment, we sketched the matrix to $17,076 \times 2 \times 1\% \simeq 341$ equations (note that each point pair gives two equations), and confirmed that the solution did not differ from the direct $\ell_1$ minimization without sketching. In Fig. 5b, we can see that the proposed method (uniform sampling + sketch-once) converges about $5 \times$ faster than canonical IRLS while maintaining the same accuracy.
+
+# 6.2 Point cloud alignment
+
+Consider the two point sets $\mathbf{P} = [\mathbf{p}_1, \ldots, \mathbf{p}_l]$ and $\mathbf{Q} = [\mathbf{q}_1, \ldots, \mathbf{q}_l]$ captured from different viewpoints around an object. For each $i$ , the points $\mathbf{p}_i \in \mathbb{R}^3$ and $\mathbf{q}_i \in \mathbb{R}^3$ approximately correspond to the same surface point on the 3D object. Since $\mathbf{P}$ may be positioned and scaled differently in space than $\mathbf{Q}$ , we search for a similarity transform $\mathbf{T}$ that, applied to all points in $\mathbf{P}$ , makes $\mathbf{P}$ and $\mathbf{Q}$ roughly coincide. We therefore optimize
+
+$$
+\min _ {\mathbf {T}} \sum_ {i} \| \mathbf {T} \tilde {\mathbf {p}} _ {i} - \tilde {\mathbf {q}} _ {i} \|, \tag {17}
+$$
+
+with the tilde denoting homogeneous representations. This problem is commonly solved with $\ell_2$ minimization, but with large outliers, e.g., due to wrong point correspondences, $\ell_1$ minimization may be superior. In the veteran but still frequently used Iterative Closest Point (ICP) algorithm [10,5], the point sets are first manually pre-aligned; the algorithm then searches for each point in $\mathbf{P}$ its nearest neighbor in $\mathbf{Q}$, optimizes Eq. (17), and iterates.
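+The transform fit inside each ICP iteration can be sketched as a stacked linear system solved with IRLS weights (a simplification for illustration: we fit an unconstrained $3\times 4$ affine $\mathbf{T}$ rather than a strict similarity transform, and treat Eq. (17) coordinate-wise; `fit_transform_l1` is our own helper name):
+
+```python
+import numpy as np
+
+def fit_transform_l1(P, Q, iters=20, eps=1e-8):
+    # Each correspondence p_i -> q_i yields three equations that are
+    # linear in the 12 entries of a 3x4 transform T (affine stand-in).
+    l = P.shape[0]
+    Ph = np.hstack([P, np.ones((l, 1))])  # homogeneous points, l x 4
+    A = np.zeros((3 * l, 12))
+    for c in range(3):                    # rows 3i+c <-> coordinate c of point i
+        A[c::3, 4 * c:4 * c + 4] = Ph
+    b = Q.reshape(-1)                     # interleaved (x, y, z) targets
+    t = np.linalg.lstsq(A, b, rcond=None)[0]
+    for _ in range(iters):                # IRLS with |r|^(-1/2) weights
+        w = np.maximum(np.abs(A @ t - b), eps) ** -0.5
+        t = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
+    return t.reshape(3, 4)
+```
+
+With $10^5$ points this system has $3\cdot 10^5$ rows, which is the tall-and-skinny shape that the row sketching of Sec. 3 exploits.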
+
+Result In this task, we used the Stanford Bunny [55]. The two input point clouds are shown in Fig. 6a (left). Each set consists of $10^{5}$ points, viewed from different viewpoints in different scalings. We applied the proposed sketched IRLS to perform ICP with $\ell_{1}$ residual minimization. The second and third figures of Fig. 6a show the results of $\ell_{1}$ minimization with sketched IRLS and conventional $\ell_{2}$ minimization. It is observed that $\ell_{1}$ minimization results in an accurate alignment of the two point clouds, unaffected by inaccurate correspondences.
+
+
+Fig. 6: Performance of point cloud alignment via ICP with $\ell_1$ and $\ell_2$ similarity transform estimation. (a) Input point clouds and alignment results. (b) Error variation over time.
+For this experiment we set the sketching size $s$ to $10\%$ of the original problem and gradually transformed the point set $\mathbf{Q}$ with fixed $\mathbf{P}$. Since we have the ground truth here, we evaluate the error as $\frac{1}{l} \| \mathbf{Q}^* - \mathbf{Q}^{(t)} \|_F$, where $\mathbf{Q}^{(t)}$ is the result after the $t^{\text{th}}$ iteration. The error variation w.r.t. the time spent only on computing the transformation (excluding the matching step) is summarized in Fig. 6b. While CountSketch + sketch-once did not show good convergence, the other sketching methods find a good alignment with a significant speed-up compared to the conventional method.
+
+# 7 Discussion
+
+Our experiments showed that sketching is effective in reducing IRLS's runtime and that $\ell_1$ minimization with IRLS works well on a wide range of computer vision problems. Other robust methods such as RANSAC and R-PCA are certainly good at specific problems, but IRLS $\ell_1$ minimization is versatile without requiring parameter tuning, and its convergence is demonstrably superior in some tasks.
+
+Regarding when to sketch, sketch-once is superior if the rank $r$ of the design matrix $\mathbf{A}$ is very small and the sketching is not aggressive, i.e., $s \gg r$ . However, if the rank is high or we sketch aggressively, e.g., $s < 5r$ , then it is likely that $\mathbf{A}$ 's range will not be preserved, and we need to perform sketch-iteratively to be able to recover from badly chosen samples.
+
+Naively subsampling the input data is equivalent to sketch-once: IRLS is treated as a black box that is never touched again after the data has initially been subsampled. Our experiments showed that in applications where sketch-iteratively performs better, we want to open that black box and subsample anew in every iteration.
+
+Acknowledgments: This work is supported by JSPS CREST Grant Number JPMJCR1764, Japan. Michael Waechter was supported through a postdoctoral fellowship by the Japan Society for the Promotion of Science (JP17F17350).
+
+# References
+
+1. Aftab, K., Hartley, R.: Convergence of iteratively re-weighted least squares to robust M-estimators. In: Winter Conference on Applications of Computer Vision (WACV) (2015) 1, 3
+2. Aftab, K., Hartley, R., Trumpf, J.: Generalized Weiszfeld algorithms for Lq optimization. Transactions on Pattern Analysis and Machine Intelligence (PAMI) 37(4), 728-745 (2015) 5
+3. Ailon, N., Chazelle, B.: Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In: Symposium on Theory of Computing (2006) 4
+4. Alcantarilla, P.F., Nuevo, J., Bartoli, A.: Fast explicit diffusion for accelerated features in nonlinear scale spaces. In: British Machine Vision Conference (BMVC) (2013) 13
+5. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. Transactions on Pattern Analysis and Machine Intelligence (PAMI) 14(2), 239-256 (1992) 13
+6. Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? Journal of ACM 58(3), 11:1-11:37 (2011) 11
+7. Candes, E.J., Tao, T.: Decoding by linear programming. IEEE Transactions on Information Theory 51(12), 4203-4215 (2005) 7
+8. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. In: International Colloquium on Automata, Languages and Programming (2002) 4
+9. Chen, C., He, L., Li, H., Huang, J.: Fast iteratively reweighted least squares algorithms for analysis-based sparse reconstruction. Medical Image Analysis 49, 141-152 (2018) 3
+10. Chen, Y., Medioni, G.: Object modeling by registration of multiple range images. In: International Conference on Robotics and Automation (ICRA) (1991) 13
+11. Clarkson, K.L., Woodruff, D.P.: Low rank approximation and regression in input sparsity time. In: Symposium on Theory of Computing (2013) 2, 4, 6
+12. Cohen, M.B., Kyng, R., Miller, G.L., Pachocki, J.W., Peng, R., Rao, A.B., Xu, S.C.: Solving SDD linear systems in nearly $m \log^{1/2} n$ time. In: Symposium on Theory of Computing (2014) 4
+13. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics 57(11), 1413-1457 (2004) 3
+14. Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.S.: Iteratively reweighted least squares minimization for sparse recovery. Communications on Pure and Applied Mathematics 63(1), 1-38 (2010) 3, 7
+15. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.L.: Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. Transactions on Information Theory 58(2), 1094-1121 (2012) 3
+16. Drineas, P., Mahoney, M.W.: RandNLA: Randomized numerical linear algebra. Communications of the ACM 59(6), 80-90 (2016) 4
+17. Drineas, P., Mahoney, M.W., Muthukrishnan, S.: Sampling algorithms for $\ell_2$ regression and applications. In: ACM-SIAM Symposium on Discrete Algorithms (2006) 4, 7
+18. Drineas, P., Mahoney, M.W., Muthukrishnan, S., Sarlós, T.: Faster least squares approximation. Numerische Mathematik 117(2), 219-249 (2011) 4
+19. Duersch, J., Gu, M.: Randomized QR with column pivoting. SIAM Journal on Scientific Computing 39(4), 263-291 (2017) 4
+
+20. Dutta, A., Richtárik, P.: Online and batch supervised background estimation via l1 regression. In: Winter Conference on Applications of Computer Vision (WACV) (2019) 3
+21. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Annals of Statistics 32(2), 407-499 (2004) 3
+22. Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, 1st edn. (2010) 3
+23. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. Transactions on Image Processing 15(12), 3736-3745 (2006) 3
+24. Erichson, N., Mathelin, L., Brunton, S., Kutz, J.: Randomized dynamic mode decomposition. SIAM Journal on Applied Dynamical Systems 18(4), 1867-1891 (2017) 4
+25. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381-395 (1981) 10
+26. Gentle, J.E.: Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer (2007) 3
+27. Goldstine, H.H., von Neumann, J.: Numerical inverting of matrices of high order. II. Proceedings of the American Mathematical Society (1951) 4
+28. Gorodnitsky, I.F., Rao, B.D.: Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. Transactions on Signal Processing 45(3), 600-616 (1997) 3
+29. Gower, R., Richtárik, P.: Randomized iterative methods for linear systems. SIAM Journal on Matrix Analysis and Applications 36(4), 1660-1690 (2015) 4
+30. Gower, R., Richtárik, P.: Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. SIAM Journal on Matrix Analysis and Applications 38(4), 1380-1409 (2016) 4
+31. Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53(2), 217-288 (2011) 2, 4
+32. Johnson, W.B., Lindenstrauss, J.: Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics 26, 189-206 (1984) 4
+33. Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373-395 (1984) 3
+34. Ke, Q., Kanade, T.: Robust $L_{1}$ norm factorization in the presence of outliers and missing data by alternative convex programming. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2005) 11
+35. Kekatos, V., Giannakis, G.B.: From sparse signals to sparse residuals for robust sensing. IEEE Transactions on Signal Processing 59(7), 3355-3368 (2011) 3
+36. Kiani, K.A., Drummond, T.: Solving robust regularization problems using iteratively re-weighted least squares. In: Winter Conference on Applications of Computer Vision (WACV) (2017) 1
+37. Lin, Z., Chen, M., Wu, L., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Tech. Rep. UIU-ENG-09-2215, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign (2010) 11
+38. Lu, C., Lin, Z., Yan, S.: Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization. Transactions on Image Processing 24(2), 646-654 (2015) 3
+
+39. Mahoney, M.W.: Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning 3(2), 123-224 (2011) 2, 4, 7
+40. Mallat, S.G.: A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way. Academic Press, Inc., 3rd edn. (2008) 3
+41. Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries. Transactions on Signal Processing 41(12), 3397-3415 (1993) 3
+42. Millikan, B., Dutta, A., Rahnavard, N., Sun, Q., Foroosh, H.: Initialized iterative reweighted least squares for automatic target recognition. In: Military Communications Conference (2015) 3
+43. Needell, D., Tropp, J.A.: CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis 26(3), 301-321 (2009) 3
+44. Ochs, P., Dosovitskiy, A., Brox, T., Pock, T.: On iteratively reweighted algorithms for non-smooth non-convex optimization in computer vision. SIAM Journal on Imaging Sciences 8(1), 331-372 (2015) 4
+45. Oh, T.H., Matsushita, Y., Tai, Y.W., Kweon, I.S.: Fast randomized singular value thresholding for low-rank optimization. Transactions on Pattern Analysis and Machine Intelligence (PAMI) 40(2), 376-391 (2018) 11
+46. Osborne, M.R.: Finite Algorithms in Optimization and Data Analysis. John Wiley & Sons, Inc. (1985) 3
+47. Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S.: Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In: Asilomar Conference on Signals, Systems, and Computers (1993) 3
+48. Sarlós, T.: Improved approximation algorithms for large matrices via random projections. In: Symposium on Foundations of Computer Science. pp. 143-152 (2006) 7
+49. Sigl, J.: Nonlinear residual minimization by iteratively reweighted least squares. Computational Optimization and Applications 64(3), 755-792 (2016) 4, 7
+50. Szeliski, R.: Video mosaics for virtual environments. Computer Graphics and Applications 16(2), 22-30 (1996) 13
+51. Szeliski, R.: Computer Vision - Algorithms and Applications. Texts in Computer Science, Springer (2011) 13
+52. Tropp, J.A.: Greed is good: Algorithmic results for sparse approximation. Transactions on Information Theory 50(10), 2231-2242 (2004) 3
+53. Tropp, J.A.: Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis 3, 115-126 (2011) 4
+54. Tropp, J.A., Gilbert, A.C., Strauss, M.J.: Algorithms for simultaneous sparse approximation. Part I. Signal Process. 86(3), 572-588 (2006) 3
+55. Turk, G., Levoy, M.: Zippered polygon meshes from range images. In: SIGGRAPH (1994) 13
+56. Weiszfeld, E.: Sur le point pour lequel la somme des distances de $n$ points donnés est minimum. Tohoku Mathematical Journal (1937) 5
+57. Weiszfeld, E., Plastria, F.: On the point for which the sum of the distances to n given points is minimum. Annals of Operations Research (2009) 5
+58. Woodruff, D.P.: Sketching as a tool for numerical linear algebra. Foundations and Trends® in Theoretical Computer Science 10(1-2), 1-157 (2014) 2, 4
+59. Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. Transactions on Pattern Analysis and Machine Intelligence (PAMI) 31(2), 210-227 (2009) 3
+
+60. Yang, A., Zhou, Z., Ganesh Balasubramanian, A., Sastry, S., Ma, Y.: Fast L1-minimization algorithms for robust face recognition. Transactions on Image Processing 22(8), 3234-3246 (2013) 3
+61. Zhang, Z., Xu, Y., Yang, J., Li, X., Zhang, D.: A survey of sparse representation: Algorithms and applications. IEEE Access 3, 490-530 (2015) 3
\ No newline at end of file
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/images.zip b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0e15ccca025d931379f46e3edb4629dc7c90bac8
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35e2934ceaf4802170328251b2bd2795770906dbc9932afd09c46a02eb9b3a20
+size 383545
diff --git a/ananalysisofsketchedirlsforacceleratedsparseresidualregression/layout.json b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..02453b7e77b7e955c7988ed707858f3c7545d539
--- /dev/null
+++ b/ananalysisofsketchedirlsforacceleratedsparseresidualregression/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0313a592f7c5cd5a052798153a6fecd7dcfb5fb024f265586b2e9687ae835ac1
+size 671176
diff --git a/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_content_list.json b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b3ec94c6ee217c8807f749e07b3ecba0f8d3e9c
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7381faf25d6caa1936ab1e4fac50ed9dbeb5736be5d5f428744727a4b8e7c905
+size 75159
diff --git a/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_model.json b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e78c0b7e19c43caee2ca3daa425407c2f27c6e28
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08d14436de99b95e7435455ffca456bc1e5afc564e6fdbe683a02e1beb311bfc
+size 90427
diff --git a/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_origin.pdf b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7da683e36d00bb77f718dddfac53228c970c6044
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/8d92e729-ad64-4932-ac40-402b514e4b46_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29ec5156f76b89cf5e623d9848184d16d2c686deed1e24c31117f5de44a8d517
+size 2739922
diff --git a/anasymmetricmodelingforactionassessment/full.md b/anasymmetricmodelingforactionassessment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..19a9a174bf9ddcb7c401baa4f996e30ef8ee00e0
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/full.md
@@ -0,0 +1,287 @@
+# An Asymmetric Modeling for Action Assessment
+
+Jibin Gao $^{1,4}$ , Wei-Shi Zheng $^{1,2,5*}$ , Jia-Hui Pan $^{1}$ , Chengying Gao $^{1*}$ , Yaowei Wang $^{2}$ , Wei Zeng $^{3}$ , and Jianhuang Lai $^{1}$
+
+$^{1}$ School of Data and Computer Science, Sun Yat-sen University, China
+$^{2}$ Peng Cheng Laboratory, Shenzhen 518005, China
+$^{3}$ School of Electronics Engineering and Computer Science, Peking University, China
+$^{4}$ Pazhou Lab
+$^{5}$ Key Laboratory of Machine Intelligence and Advanced Computing, MOE, China
+{gaojb5, panjh7}@mail2.sysu.edu.cn; {zhwshi, mcsgcy, stsljh}@mail.sysu.edu.cn; wangyw@pcl.ac.cn; weizeng@pku.edu.cn
+
+Abstract. Action assessment is the task of assessing the performance of an action. It is widely applicable to many real-world scenarios such as medical treatment and sporting events. However, existing methods for action assessment are mostly limited to individual actions and, in particular, lack modeling of the asymmetric relations among agents (e.g., between persons and objects); this limitation undermines their ability to assess actions containing asymmetrically interactive motion patterns, since subordination between agents exists in many interactive actions. In this work, we model the asymmetric interactions among agents for action assessment. In particular, we propose an asymmetric interaction module (AIM) to explicitly model asymmetric interactions between intelligent agents within an action, where we group these agents into a primary one (e.g., human) and secondary ones (e.g., objects). We perform experiments on the JIGSAWS dataset containing surgical actions, and additionally collect a new dataset, TASD-2, for interactive sporting actions. The experimental results on the two interactive action datasets show the effectiveness of our model, and our method achieves state-of-the-art performance. An extended experiment on the AQA-7 dataset also demonstrates the generalization capability of our framework to conventional action assessment.
+
+# 1 Introduction
+
+Action assessment [4, 13, 9, 18, 1] has attracted much attention in recent years. It is widely applicable to many practical scenarios. For instance, action assessment models can be applied in sports events to assist the referee in scoring, as well as to assist athletes in training [18, 20, 1, 16]. Athletes can make reasonable corrections to their motions according to the feedback from the assessment model to achieve better training effects. In modern medical treatment, rehabilitation treatment has received increasing attention. Action assessment can be applied to the rehabilitation training of patients [13, 22, 28]. By monitoring and assessing the daily rehabilitation training of patients, doctors can give follow-up rehabilitation treatment suggestions according to the assessment report, aiming to achieve efficient treatment [31, 32, 7, 14].
+
+Fig. 1. Interactive action assessment with asymmetric interaction. Our asymmetric interaction module is designed to assess action performance. For egocentric surgical videos, we regard motions of the master tool-tips as the primary (in red), and those of the slave tool-tips and handles, which are relatively inactive, as the secondary (in blue). Best viewed in colour.
+
+However, existing action assessment methods [20, 15, 17] are mostly designed for individual actions, such as diving and vaulting. In real-world applications there are many non-individual actions that are defined by interaction; in particular, there is often subordination between the agents in an interaction. For example, in egocentric surgical videos only the motions of two tool-tips are captured. Accordingly, the interactions between the human (featured as the motion of the tool-tips) and the two tool-tips and handles should be explored explicitly for the assessment. More importantly, such an interaction is asymmetric: the roles in these asymmetrically interactive actions can semantically be categorized into a primary agent and secondary ones. While some works such as [15] can be applied to interactive action assessment, they treat all agents equally and thus cannot model the subordination between agents (e.g., between human and objects).
+
+In this work, we propose a new framework for addressing asymmetrically interactive action assessment with an asymmetric interaction module (AIM) that provides a general and novel treatment of action interaction among multiple people or parts. In this module, it is assumed that there is a primary agent that is dominant relative to the others it interacts with; correspondingly, the others are viewed as the secondary agents in interactive actions. This assumption makes sense, since the multiple parts involved in interactive actions can always be naturally and semantically categorized into two groups, a primary part (e.g., human) and secondary parts (e.g., objects). In AIM, we exploit the difference between the primary and the secondary in the same latent space, and utilize the primary equipped with this difference to learn the interaction in the temporal domain, since the primary is dominant relative to the secondary in representing an action. With this module, our framework can explicitly learn the latent criterion of interactive action assessment. Afterwards, we construct an attention fusion module inspired by the attention mechanism [24] to pay different amounts of attention to the whole-scene feature and the AIM feature.
+
+Moreover, apart from assessing interactive actions with strong asymmetric relations among their parts, our method can also be applied to interactive actions whose agents are in a weak asymmetric or equal relation, such as synchronous sports. We therefore generalize our model via multi-task learning to perform general interactive action assessment.
+
+To the best of our knowledge, AQA-7 [18] and MTL-AQA [16] are the only two available datasets that contain events involving two players; however, these events are captured from the side view, making them unsuitable for investigating the interaction between players, as the players overlap severely most of the time. Therefore, we have additionally collected a new dataset, named the Two-person Action Synchronized Diving dataset (TASD-2), for evaluating interactive action assessment.
+
+In summary, our contributions are three-fold: (1) a novel module, the asymmetric interaction module (AIM), is constructed to reasonably extract the interactive relation in asymmetric interactions; (2) a general framework for interactive action assessment is proposed that can be easily generalized to different kinds of action assessment tasks; (3) a new dataset, TASD-2, containing two-person interactive actions captured from the front view, is collected in our work. We report experiments validating the effectiveness of the proposed method. Project homepage: https://www.isee-ai.cn/~gaojibin/ProjectAIM.html.
+
+# 2 Related Work
+
+Action Quality Assessment. Action assessment is the evaluation of how well an action is performed. Existing works mainly model the problem in three manners: 1) casting it as a classification task that classifies the action performance as expert or novice [33, 32]; 2) casting it as a regression problem that fits the scores of multiple action performances [20, 26, 17, 18, 30]; 3) casting it as a pair-wise ranking task [4, 1, 5, 29]. Our work follows the second branch. However, few works have assessed action quality by explicitly exploring the interaction within actions, and modeling of asymmetric interaction for assessment is especially lacking. To learn the relation among the joints of performers' skeletons, Pan et al. [15] computed action quality based on joint relation modeling with a GNN [21]. Compared to [15], our asymmetric interaction module is a non-symmetric modeling (i.e., we treat the primary and the secondary non-equally), whereas the existing method [15] treats them equally; such methods can model joint-based interaction, but they ignore the subordination in asymmetric interactions.
+
+Interactive Models. Recently, an increasing number of interactive models [10, 25, 15, 17] have been proposed. Wang et al. [25] constructed a channel-wise interaction learning method to evaluate the interaction of each part of an image, preserving information with a binary feature map through prior knowledge graphs. Li and Cai [10] paid different amounts of attention to the interaction of each individual part extracted from the images. In action assessment, many related works [17, 20] fed key-point features into an LSTM [8] framework to explore the interaction in the temporal domain, while other works [15, 27] have exploited the interactive relations among the human skeleton through GNNs. However, these methods either yield poor interpretability of the interaction process or are limited by the connection of nodes, resulting in poor generalization to various assessment tasks.
+
+Fig. 2. An overview of our proposal. We uniformly divide an input video into $T$ time steps and illustrate the process of the asymmetric interaction module (AIM) at time step $t$ . The kinetic information of mobile objects is extracted, including the primary and the secondary ones. We perform asymmetric interaction between the primary and the secondary and obtain the AIM feature. Afterwards, we perform attentive contextual interaction between the whole-scene feature, which is extracted via I3D [2], and the AIM feature with an attention fusion. Finally, a regression module is utilized to learn regressing the action quality.
+
+# 3 Approach
+
+In this section, we introduce our model for asymmetrically interactive action assessment in detail. The overall structure of our model is shown in Fig. 2. In this framework, the asymmetric interaction module is particularly designed to explore the asymmetric interaction between the primary agent and the secondary agents. An attentive contextual interaction with an attention fusion module is developed to further fuse the AIM feature and the whole-scene feature. Finally, a regression module is used to learn regressing the action quality.
+
+# 3.1 Asymmetric Interaction Module
+
+It is common that the interactions among multiple people or agents, in particular the asymmetric interaction between humans and objects, play important roles. In order to reduce noise interference, we extract subtle but informative features at an abstraction level, denoted as $A_{a}$ ; that is, only the kinetic information indispensable for describing an action is considered, such as human pose. For example, for surgery tasks in egocentric views, we assign $A_{a}$ the kinetic information of the tool-tips, which contains the object information (e.g., tool orientations) and speed information; for most other types of action performance, the entire human body should be considered, so $A_{a}$ is the pose information provided by a pose estimator.
+
+Fig. 3. Examples of the primary information and secondary information partitioning. The clock icon indicates the motion of that part. Notably, there exist actions in which the two performers are in a weak asymmetric or equal relation; in these cases, either performer can be assigned as the primary and the other as the secondary.
+
+Before performing interaction, we divide $A_{a}$ into two parts according to their semantics: the primary information, denoted as $A_{p}$ , and the secondary information, denoted as $A_{s}$ . An example diagram is shown in Fig. 3. For egocentric surgical videos, where only the motions of two tool-tips are captured, the division is intuitive since each tool consists of a master part and a slave one. For semantic consistency, we regard motions of the master tool-tips as the primary, and those of the slave tool-tips and handles, which are relatively inactive, as the secondary.
+
+To explicitly explore the asymmetric relations between the primary and secondary information, we design the asymmetric interaction module (AIM), as shown in Fig. 2. Carrying different semantics, the primary and secondary information naturally come from different domains. Thus, to explore the potential relation and asymmetric interaction between them, we first pass the secondary information through a transformation module that maps it into the same latent space as the primary. When the primary and secondary are from the same domain, the transformation tends to learn an identity function [3]. Afterwards, we determine the difference between the primary and the transformed secondary, since the difference operation is effective for exploring relations between visual instances [15]; the process can be formed as
+
+$$
+I _ {d} ^ {(t)} = \mathcal {D} \left(A _ {p} ^ {(t)}, \mathcal {T} \left(A _ {s} ^ {(t)}\right)\right), \tag {1}
+$$
+
+where $A_{*}^{(t)}$ denotes a certain feature $A_{*}$ at time step $t$ in Fig. 2, $I_{d}^{(t)} \in \mathbb{R}^{N}$ , $\mathcal{T}(\cdot)$ is a function conducting the transformation operation, and $\mathcal{D}(\cdot)$ is a function determining the difference between the primary and the secondary. Here, $N$ denotes the dimension of $I_{d}^{(t)}$ and $A_{p}^{(t)}$ .
+
+According to the discussion above, the primary information is dominant relative to the secondary in its capability to represent interactive actions. To exploit this superiority of the primary information, we concatenate the difference feature and the primary information, yielding the primary-secondary information, denoted as $M_{ps}$ . We present it as
+
+$$
+M _ {p s} ^ {(t)} = A _ {p} ^ {(t)} \oplus I _ {d} ^ {(t)}, \tag {2}
+$$
+
+where $M_{ps}^{(t)}\in \mathbb{R}^{2N}$ and $\oplus$ represents the concatenating operator.
+
+The process mentioned above can be regarded as the interaction in the spatial domain. Moreover, since the interactions occur over time, temporal relations are essential for asymmetrically interactive action assessment. We then use a temporal network to learn the temporal interaction and obtain the complete AIM feature, which can be expressed as
+
+$$
+Y _ {p s i} ^ {(t)} = \mathcal {P} (M _ {p s} ^ {(t)}), \tag {3}
+$$
+
+where $Y_{psi}^{(t)} \in \mathbb{R}^d$ and $\mathcal{P}(\cdot)$ is a temporal network, for which we use LSTM in this work, and $d$ is the dimension of the hidden layer in the LSTM model.
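Equations (1)-(3) can be traced with a small NumPy sketch. This is an illustration only, not the authors' implementation: the transformation $\mathcal{T}$ is taken as a single linear map, the difference $\mathcal{D}$ as plain subtraction, and a tanh recurrence stands in for the LSTM $\mathcal{P}$; all dimensions and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T_steps, N, d = 8, 16, 32                    # time steps, agent dim, hidden dim

W_t = rng.normal(size=(N, N)) * 0.1          # transformation T(.) (hypothetical)
W_h = rng.normal(size=(d, d)) * 0.1          # recurrent weights of P(.)
W_m = rng.normal(size=(d, 2 * N)) * 0.1      # input weights of P(.)

A_p = rng.normal(size=(T_steps, N))          # primary kinetic information
A_s = rng.normal(size=(T_steps, N))          # secondary kinetic information

h = np.zeros(d)
Y_psi = []
for t in range(T_steps):
    I_d = A_p[t] - W_t @ A_s[t]              # Eq. (1): difference after transform
    M_ps = np.concatenate([A_p[t], I_d])     # Eq. (2): concatenation, in R^{2N}
    h = np.tanh(W_h @ h + W_m @ M_ps)        # Eq. (3): temporal interaction
    Y_psi.append(h)
Y_psi = np.stack(Y_psi)                      # AIM feature, one vector per step
```

When $A_p$ and $A_s$ come from the same domain, a learned $W_t$ would tend toward the identity, reducing $I_d$ to a plain difference.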
+
+# 3.2 Attentive Contextual Interaction
+
+To assist the learned AIM feature, we further employ I3D [2] to extract the whole-scene feature of videos, denoted as $F_{wl}$ . To some extent, the whole-scene feature contains extra information complementary to our AIM feature, even though noise exists. We thus obtain two-stream outputs: the whole-scene one, $F_{wl}$ , and the AIM one, $Y_{psi}$ . Before fusing these outputs, we pass $F_{wl}$ through an encoder that maps it into the same latent space as the AIM feature $Y_{psi}$ . This yields $X_{wl}$ , where $X_{wl}^{(t)} \in \mathbb{R}^d$ and $d$ is the dimension of the encoder feature.
+
+In our fusion modeling, we perform attentive contextual interaction between the whole-scene feature and the AIM feature; that is, the whole-scene feature $F_{wl}$ is utilized to learn a key map as attention for the fusion of the whole-scene feature and our AIM feature, because it contains the whole-scene context. Inspired by self-attention [24], we regard $(X_{wl}^{(t)} \oplus Y_{psi}^{(t)})$ as the queries and values of the attention mechanism. In detail, we form the fusion process as follows:
+
+$$
+Z _ {a t t} ^ {(t)} = W ^ {(t)} \circ \left(X _ {w l} ^ {(t)} \oplus Y _ {p s i} ^ {(t)}\right) ^ {\prime}, \tag {4}
+$$
+
+$$
+W ^ {(t)} = \operatorname {s o f t m a x} \left(\left(X _ {w l} ^ {(t)} \oplus Y _ {p s i} ^ {(t)}\right) ^ {\prime} \circ O _ {k e y} ^ {(t)}\right), O _ {k e y} ^ {(t)} = \mathcal {F C} _ {k e y} \left(F _ {w l} ^ {(t)}\right), \tag {5}
+$$
+
+where $\circ$ represents the matrix multiplication, $\oplus$ represents the concatenating operator, and $softmax(\cdot)$ is the softmax function, and $\mathcal{FC}_{key}(\cdot)$ is a fully connected layer to learn the key mapping. Here, $X_{wl}^{(t)}, Y_{psi}^{(t)}, Z_{att}^{(t)}, O_{key}^{(t)} \in \mathbb{R}^d$ , and $A'$ denotes the transpose of matrix $A$ .
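The shapes in Eqs. (4)-(5) are left partly implicit (the concatenation lives in $\mathbb{R}^{2d}$ while $O_{key}^{(t)}, Z_{att}^{(t)} \in \mathbb{R}^d$). Below is one shape-consistent reading for a single time step, with $(X_{wl}^{(t)} \oplus Y_{psi}^{(t)})' \circ O_{key}^{(t)}$ read as an outer product and the softmax taken over its first axis; the weights stand in for the learned encoder and $\mathcal{FC}_{key}$ and are hypothetical.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
d, d_wl = 32, 64                           # latent dim, raw whole-scene dim

W_key = rng.normal(size=(d, d_wl)) * 0.1   # FC_key (hypothetical weights)

F_wl = rng.normal(size=d_wl)               # whole-scene feature at step t
X_wl = rng.normal(size=d)                  # encoded whole-scene feature
Y_psi = rng.normal(size=d)                 # AIM feature

q = np.concatenate([X_wl, Y_psi])          # queries/values, in R^{2d}
O_key = W_key @ F_wl                       # Eq. (5): key from whole-scene context
S = np.outer(q, O_key)                     # pairwise scores, shape (2d, d)
W_att = softmax(S, axis=0)                 # attention weights, columns sum to 1
Z_att = W_att.T @ q                        # Eq. (4): fused feature, in R^d
```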
+
+# 3.3 Scoring for Action Assessment
+
+In the final step, our method gives a final score for the action performance through the regression module shown in Fig. 2. The overall assessment result is presented as a score given by
+
+$$
+S = \sum_{t=1}^{T} \mathcal{R}\left(Z_{att}^{(t)}\right), \tag{6}
+$$
+
+where $S$ denotes the predicted score for the action performance, $Z_{att}^{(t)}$ is the output of attentive contextual interaction, $T$ is the number of time steps in the video, and $\mathcal{R}(\cdot)$ represents the regression module implemented with two FCs.
+
+In the training stage, we use the Mean-Squared Error (MSE) as the loss function for model optimization, defined as $\delta = \frac{1}{C}\sum_{i=1}^{C}(y_{i} - \hat{y}_{i})^{2}$ , where $y$ and $\hat{y}$ represent the ground truth and the predicted value, respectively, and $C$ denotes the number of samples.
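A toy NumPy rendering of Eq. (6) and the MSE loss follows; the two-FC regressor $\mathcal{R}$ is approximated by two random linear layers with a ReLU between them (the paper does not specify a nonlinearity, and all weights and scores here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, h_dim = 8, 32, 16
W1 = rng.normal(size=(h_dim, d)) * 0.1     # first FC of the regressor
W2 = rng.normal(size=(1, h_dim)) * 0.1     # second FC, outputs a scalar

Z_att = rng.normal(size=(T, d))            # fused features, one per time step

def regressor(z):
    return (W2 @ np.maximum(W1 @ z, 0.0)).item()   # two FCs with a ReLU

S = sum(regressor(Z_att[t]) for t in range(T))     # Eq. (6): sum over time steps

# Training loss: MSE between predicted and ground-truth scores (C = 1 here).
y, y_hat = np.array([85.0]), np.array([S])
delta = float(np.mean((y - y_hat) ** 2))
```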
+
+# 4 Extension to General Interactive Action Assessment: A Multi-task Training
+
+The asymmetric interaction module can be generalized to general interactive action assessment even when there are no explicit primary and secondary roles between performers. The second row in Fig. 3 does not show a strong asymmetric relation between the two performers; in this case, we choose either of them as the primary and the other as the secondary.
+
+In detail, we generalize our model by multi-task training. The multiple tasks naturally align with the two-stream features in reasonable semantics: the whole-scene feature can be utilized for learning action assessment on overall performance, and the AIM feature can be designed for learning action assessment on interactive actions. For instance, in synchronized diving, referees give an execution score and a synchronization score when scoring the entire action performance. We can assess the execution of the action using the whole-scene feature, as several existing methods [17, 18] have done, while the feature extracted by AIM can reasonably be utilized for learning the synchronization of the action, since AIM mainly explores the interaction between the two players. Thus, we use the whole-scene feature $X_{wl}$ to learn scoring for the execution and $Y_{psi}$ for the synchronization of the action. Their assessment results are given by
+
+$$
+S_{ex} = \sum_{t=1}^{T} \mathcal{R}_{A}\left(X_{wl}^{(t)}\right), \quad S_{sn} = \sum_{t=1}^{T} \mathcal{R}_{B}\left(Y_{psi}^{(t)}\right), \tag{7}
+$$
+
+where $S_{ex}$ and $S_{sn}$ denote the predicted execution score and synchronization score, respectively. $\mathcal{R}_{*}(\cdot)$ represents the regression module implemented with two fully connected layers.
+
+
+Fig. 4. Samples of the TASD-2 dataset.
+
+
+
+The loss function in this setting could be formulated as
+
+$$
+\mathcal{L} = \mathcal{L}_{fn} + \theta \cdot \mathcal{L}_{ex} + (1 - \theta) \cdot \mathcal{L}_{sn}, \tag{8}
+$$
+
+where $\mathcal{L}_{fn}$, $\mathcal{L}_{ex}$ and $\mathcal{L}_{sn}$ denote the regression losses for the final scores, execution scores and synchronization scores, respectively, and $\theta$ denotes a trade-off weight. As before, each loss is the Mean-Squared Error (MSE).
+
+The overall loss function in Eq. (8) is meaningful because, in synchronized diving, a great performance must be excellent in both synchronization and execution. Therefore, apart from the final score, the execution score and synchronization score are also utilized to perform multi-task training.
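A minimal sketch of the combined objective in Eq. (8), with each term an MSE loss (function and argument names are ours):

```python
import torch
import torch.nn.functional as F

def multitask_loss(pred_fn, y_fn, pred_ex, y_ex, pred_sn, y_sn, theta=0.4):
    """Eq. (8): theta trades off execution vs. synchronization supervision
    (the paper sets theta = 0.4); each term is a Mean-Squared Error."""
    l_fn = F.mse_loss(pred_fn, y_fn)
    l_ex = F.mse_loss(pred_ex, y_ex)
    l_sn = F.mse_loss(pred_sn, y_sn)
    return l_fn + theta * l_ex + (1.0 - theta) * l_sn

# Perfect predictions on all three tasks give zero loss
zero = multitask_loss(*(torch.ones(3),) * 6)
```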
+
+# 4.1 TASD-2 dataset
+
+For assessing general interactive actions, we also collect a new dataset. AQA-7 [18] contains two events involving two performers, namely the synchronized 3-m springboard and 10-m platform, but these events were captured from the side view, making it hard to investigate the interaction between the two performers, as they heavily overlap most of the time. Therefore, we collected a new diving dataset whose videos provide a better viewpoint for capturing the interaction between two performers in synchronized diving. The construction details of our TASD-2 dataset can be found in the supplementary materials.
+
+# 5 Experiments
+
+We mainly conducted experiments on the assessment of interactive actions on JIGSAWS and TASD-2. In addition, conventional action assessment for a single person can be regarded as a special extension of our method, and we also evaluated it on AQA-7 [18].
+
+Table 1. Details of the TASD-2 dataset
+
+| Sport | SyncDiving-3m | SyncDiving-10m |
+| --- | --- | --- |
+| #Frames of a sample | 102 | 102 |
+| #Samples | 119 | 184 |
+| #Augmented samples | 238 | 368 |
+| #Training set | 188 | 293 |
+| #Testing set | 50 | 75 |
+
+
+Fig. 5. Frames of samples in JIGSAWS (Knot Tying, Needle Passing, Suturing).
+
+- Dataset introduction. We conduct experiments on JIGSAWS [6] and TASD-2. JIGSAWS contains egocentric videos of three surgical tasks: suturing, needle passing and knot tying. There are 206 videos in this dataset, of which 78 are for suturing, 56 for needle passing, and 72 for knot tying. Samples are shown in Fig. 5. The videos are captured as stereo recordings with left and right views using two cameras. All videos are used in our experiments. For each video in JIGSAWS, 3D kinetic information of the master tool manipulators and patient-side manipulators is provided. The details of TASD-2 can be found in Section 4.1.
+
+# 5.1 Implementation Details
+
+- Model training setting. Our model is implemented in PyTorch. Unless otherwise specified, our model uses the Adam optimizer with a weight decay rate of 0.5. In the training process, the batch size is 64. We use cyclic learning rates of $\{1\mathrm{e} - 4,1\mathrm{e} - 5,1\mathrm{e} - 6\}$, changing at the $\{20,50,100\}$th epoch within every 100 epochs. For each task, we train our model for 3000 epochs. $T$ is set to 10. The encoder is implemented with a fully connected layer of shape $400\times 512$ with ReLU activation, and the LSTM is a single layer with a 512-dimensional output. In AIM, we use fully connected layers with input and output of the same dimension as the learnable transformation operation; the difference operation is vector subtraction. In the regression module, two FC layers are used: the first has a shape of $512\times 128$ with ReLU activation, and the second has a shape of $128\times 1$ without an activation function, to avoid dead ReLU during score regression. The dropout parameter is set to 0.2 and $\theta$ is 0.4.
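One possible reading of this cyclic schedule as a PyTorch `LambdaLR` (the exact epoch boundaries are our assumption):

```python
import torch

def cyclic_lr(epoch):
    # Assumed reading of the schedule: within every 100-epoch cycle,
    # use 1e-4 before epoch 20, 1e-5 before epoch 50, then 1e-6.
    e = epoch % 100
    if e < 20:
        return 1e-4
    if e < 50:
        return 1e-5
    return 1e-6

model = torch.nn.Linear(512, 1)
# Base lr is 1.0 so the LambdaLR multiplier equals the absolute lr.
opt = torch.optim.Adam(model.parameters(), lr=1.0, weight_decay=0.5)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=cyclic_lr)
# Call sched.step() once per epoch during training.
```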
+
+- Evaluation Metric. For comparison with previous works [15, 18, 17, 20], we use Spearman's rank correlation as the evaluation metric of our model. It is defined as $\rho = \frac{\sum_{i}(p_i - \bar{p})(q_i - \bar{q})}{\sqrt{\sum_{i}(p_i - \bar{p})^2 \sum_{i}(q_i - \bar{q})^2}}$, where $p$ and $q$ represent the rankings of the two sequences and $-1 \leq \rho \leq 1$. The higher the Spearman's rank correlation, the stronger the positive ranking relation between the two sequences. It is used to evaluate the ranking relation between the predicted and ground-truth assessment results of our model. To better reflect the performance of our method, we run the model 10 times and report the average as the final model performance. Moreover, for multiple actions in a dataset, we compute the average Spearman's rank correlation across actions from the individual action correlations by using Fisher's z-value, as in [18].
+
+Table 2. Results (%) of our proposal compared with the state-of-the-art methods and our baseline on JIGSAWS.
+
+| Method | Suturing | Needle Passing | Knot Tying | Avg. Corr. |
+| --- | --- | --- | --- | --- |
+| ST-GCN [27] | 31 | 39 | 58 | 43 |
+| TSN [4] | 34 | 23 | 72 | 46 |
+| JR-GCN [15] | 36 | 51 | 75 | 57 |
+| Baseline | 5 | 9 | 11 | 8 |
+| Baseline+Kinetic | 17 | 37 | 73 | 46 |
+| Ours | 63 | 65 | 82 | 71 |
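As an illustration, the metric and the Fisher z-averaging can be sketched with NumPy (a minimal sketch; the function names are ours, and the rank computation assumes no tied scores):

```python
import numpy as np

def spearman(p, q):
    """Spearman's rank correlation (no ties assumed): the Pearson
    correlation between the rank sequences of p and q."""
    rp = np.argsort(np.argsort(p)).astype(float)
    rq = np.argsort(np.argsort(q)).astype(float)
    rp -= rp.mean()
    rq -= rq.mean()
    return float((rp * rq).sum() / np.sqrt((rp**2).sum() * (rq**2).sum()))

def average_corr(per_action_rhos):
    """Fisher z-average across actions, as in [18]: arctanh each rho,
    average in z-space, then map back with tanh."""
    z = np.arctanh(np.asarray(per_action_rhos, dtype=float))
    return float(np.tanh(z.mean()))

rho = spearman([10, 30, 20, 40], [12, 25, 29, 41])  # per-action correlation
avg = average_corr([0.63, 0.65, 0.82])              # dataset-level average
```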
+
+# 5.2 Comparison
+
+Experiments on interactive actions. We first evaluate our model on JIGSAWS, with results shown in Table 2, comparing against the state-of-the-art methods and our baseline. To the best of our knowledge, the methods proposed in [15, 4, 27] achieved state-of-the-art performance for skill action assessment on JIGSAWS, and we used 4-fold cross validation on JIGSAWS following [15]. The results show that our model outperforms the previous state-of-the-art methods and achieves the best results, with an improvement of $14\%$ on average. Given the structure of our model, its strong performance partially benefits from the well-performing I3D [2] backbone and partially from the asymmetric interaction. Thus, we remove the AIM part in Fig. 2 and evaluate our baseline using only the I3D feature. We also concatenate the I3D feature and the kinetic feature as a stronger baseline. The results in Table 2 (the last three rows) indicate that the asymmetric interaction is crucial in our model, confirming the effectiveness of AIM. Moreover, the ablation study in Section 5.3 demonstrates that the roles of primary and secondary cannot be exchanged, owing to their asymmetric relation on JIGSAWS.
+
+We also compared our method with the best non-deep-learning approach reported in [30] using leave-one-user-out (LOUO) in Table 3. As shown, both JR-GCN [15] and our method have their own strengths. However, since the LOUO setting demands strong generalization ability from the model, our model is better and less specialized than [15], in which each joint is modelled in a specialized manner.
+
+To confirm the generalization of our framework to actions with weak relations between the primary and the secondary, we perform experiments on TASD-2, with results shown in Table 4. Since TASD-2 is brand new, we include a naive model (RANDOM) that predicts action performance scores uniformly at random in the range of [0, 100]. The results illustrate that the distribution of samples in TASD-2 is relatively reasonable. We also evaluate C3D-LSTM [17] on TASD-2, but it did not work under the experimental setting in [18, 17]. We then apply I3D [2] with SVR using different kernels, including linear, polynomial and RBF kernels, on TASD-2. The results show that the I3D-SVR models perform strongly, which reflects the strong ability of I3D to some extent. With the multi-task training in our model, our proposal achieves state-of-the-art performance on TASD-2, with more than a $3\%$ improvement on average.
+
+Table 3. Evaluation (%) on JIGSAWS with LOUO.
+
+| Method | Suturing | Needle Passing | Knot Tying | Avg. Corr. |
+| --- | --- | --- | --- | --- |
+| DTC+DFT+ApEn [30] | 37 | 25 | 60 | 41 |
+| JR-GCN [15] | 35 | 67 | 19 | 40 |
+| Ours | 45 | 34 | 61 | 47 |
+
+Table 4. Results (%) of our model on TASD-2.
+
+| Method | SyncDiving-3m | SyncDiving-10m | Avg. Corr. |
+| --- | --- | --- | --- |
+| RANDOM | -3 | 3 | 0 |
+| C3D-LSTM [17] | -14 | 1 | -7 |
+| I3D [2]-SVR-L | 77 | 73 | 75 |
+| I3D [2]-SVR-P | 84 | 83 | 83 |
+| I3D [2]-SVR-RBF | 71 | 77 | 74 |
+| JR-GCN [15] | 89 | 81 | 86 |
+| Baseline | 84 | 79 | 82 |
+| Baseline+Pose | 88 | 80 | 84 |
+| Ours (Single-task) | 89 | 85 | 87 |
+| Ours (Multi-task) | 92 | 85 | 89 |
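A minimal sketch of such an I3D-SVR baseline, assuming pooled I3D clip features have already been extracted (random stand-ins here) and using scikit-learn's `SVR` with the three kernels:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 1024))    # stand-in for pooled I3D features
scores = rng.uniform(0, 100, size=40)  # stand-in for judge scores

# One SVR per kernel, mirroring the I3D-SVR-L/-P/-RBF baselines.
models = {
    "linear": SVR(kernel="linear"),
    "poly": SVR(kernel="poly"),
    "rbf": SVR(kernel="rbf"),
}
for name, svr in models.items():
    svr.fit(feats, scores)
preds = models["rbf"].predict(feats)   # predicted performance scores
```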
+
+# 5.3 Ablation study
+
+Table 5 shows the results of an ablation study on our model. To explore the contribution of each main module, we conduct experiments by removing one component at a time from our full model, including the attention fusion module and AIM. When the attention fusion module is replaced with a simple fusion of the two halves, the model performance decreases by $4\%$ on average. This result implies that paying different amounts of attention to the whole-scene feature and the AIM feature indeed makes a positive difference. Removing the transformation or the difference module causes a $3\%$ reduction in each case, indicating that both modelings are necessary. Moreover, when the AIM part is removed entirely, performance decreases by $31\%$. These results indicate the significance of the asymmetric interaction and the effectiveness of the AIM structure.
+
+We also exchanged the primary and the secondary when performing model training and evaluation. The resulting performance reduction of $4\%$ implies that the primary and the secondary really play their semantic roles through the asymmetric interaction. Moreover, from the last two rows of Table 4, we find that our proposal with multi-task training improves model performance by more than $2\%$ on average compared to the single-task setting. Thus, the results indicate that the multi-task training is effective.
+
+Table 5. Ablation study (%) for exploring the effectiveness of each main module of our model on JIGSAWS.
+
+| Model variant | Suturing | Needle Passing | Knot Tying | Avg. Corr. |
+| --- | --- | --- | --- | --- |
+| Full model | 63 | 65 | 82 | 71 |
+| w/o AIM | 7 | 41 | 64 | 40 (-31) |
+| w/o attention fusion module | 61 | 55 | 80 | 67 (-4) |
+| Exchange primary and secondary | 55 | 62 | 80 | 67 (-4) |
+| w/o transformation module | 61 | 62 | 79 | 68 (-3) |
+| w/o difference module | 60 | 61 | 80 | 68 (-3) |
+| Whole-scene (Baseline) | 5 | 9 | 11 | 8 (-63) |
+
+Fig. 6. The action assessment results of our model on a suturing case. The assessment results indicate good (in green) and bad (in red) action performance at each time step. Best viewed in colour.
+
+Moreover, we exchanged the primary and the secondary in our modeling when evaluating on TASD-2. The results are shown in Table 6; there is little difference compared to the performance without the exchange. This indicates that our proposal adapts to interactive actions with a weak asymmetric relation in semantics, such as synchronized diving.
+
+Table 6. Results (%) of exchanging the primary and the secondary on TASD-2.
+
+| Setting | SyncDiving-3m | SyncDiving-10m |
+| --- | --- | --- |
+| Before exchanging primary and secondary | 91.50 | 85.13 |
+| After exchanging primary and secondary | 91.75 | 85.10 |
+
+# 5.4 Visualization of the assessment process
+
+In order to view the process of assessment, we output the predicted sub-scores defined in Eq. (6). Fig. 6 shows an example of scoring at each time step. We find that our model gives a reasonable score at each time step. Before the first passing of the line used for suturing is accomplished, it is difficult to control the surgical line expertly with the tool-tips; thus, it is not suitable to clearly judge a good or bad performance at this stage. Accordingly, we observe that the proposed model gives relatively neutral judgements in the first few time steps in Fig. 6. However, in the middle stage of the suturing case, the two tool-tips performed relatively abnormally, causing the surgical line to be staggered in the air; thus, bad judgements were produced during this time. Correspondingly, when approaching the end of the suturing task, our model gave positive judgements for the great performance in this stage. Therefore, the visualization also confirms that our framework is effective and interpretable.
+
+Fig. 7. Visualization of the attention fusion on different actions, including the synchronized 3-m springboard and knot tying. "Sample No." indexes three randomly selected samples, and "Time step No." indexes the ten time steps of each video sample. The results indicate that our attention fusion pays different amounts of attention at different time steps.
+
+In addition, we also visualize the attention fusion by observing the computed results of Eq. (4) in Fig. 7. For the synchronized 3-m springboard, the attention fusion module pays different amounts of attention to different time steps within a sample. The AIM feature becomes more important after time step 8 for SyncDiving-3m, because the interaction between the two divers as they approach entry matters more for synchronized diving assessment. Comparing across actions, it is also clear that our attention fusion behaves differently for different actions. This indicates that our attentive contextual interaction with attention fusion is effective.
+
+# 5.5 Extended experiment on single-person actions
+
+The secondary information is relatively difficult to determine for single-person actions, since there is semantically only one motion in the videos. For generalization, we define a convention: if the secondary information is ambiguous, we use the motion of the camera capturing the action performance as a replacement, as shown in the third row of Fig. 3. Under this assumption, we additionally evaluated our framework on AQA-7 [18]; this dataset is collected from the summer and winter Olympics and contains 1,106 videos in total, comprising six actions. As discussed in Section 4.1, AQA-7 [18] contains two-person actions, but they are captured only from the side view, so the performers are not visually separable. Thus, visually there is only one agent in the videos, and we regard it as the primary without other choices. We then extract the motion feature of the camera as the secondary, by computing the optical flow (using the TV-L1 algorithm [19]) in the region near the edges of the images. In this task, we fix the weight decay rate of the Adam optimizer to 0.8. For consistency, the results reported in Table 7 follow the experimental setting of [15]. They demonstrate that our method is competitive with current state-of-the-art methods, achieving the best performance on sync. 3m action assessment. Our proposal outperforms most state-of-the-art methods except JR-GCN [15], and its average performance is only $0.6\%$ lower than that of JR-GCN [15]. Therefore, the extended experiment demonstrates that our framework is able to generalize effectively to common action assessment tasks.
+
+Table 7. Results (%) of our model applied to AQA-7. To illustrate the competitive results, the average rank among existing methods is also reported.
+
+| Method | Diving | Gym vault | Skiing | Snowboard | Sync. 3m | Sync. 10m | Avg. Corr. | Avg. Rank |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Pose+DCT [20] | 53.00 (5) | - | - | - | - | - | - | 5 |
+| ST-GCN [27] | 32.86 (6) | 57.70 (4) | 16.81 (5) | 12.34 (5) | 66.00 (4) | 64.83 (5) | 44.33 (5) | 4.9 |
+| C3D-LSTM [17] | 60.47 (4) | 56.36 (5) | 45.93 (4) | 50.29 (2) | 79.12 (3) | 69.27 (4) | 61.65 (4) | 3.7 |
+| C3D-SVR [17] | 79.02 (1) | 68.24 (3) | 52.09 (3) | 40.06 (4) | 59.37 (5) | 91.20 (2) | 69.37 (3) | 3 |
+| JR-GCN [15] | 76.30 (2) | 73.58 (1) | 60.06 (1) | 54.05 (1) | 90.13 (2) | 92.54 (1) | 78.49 (1) | 1.3 |
+| Ours | 74.19 (3) | 72.96 (2) | 58.90 (2) | 49.60 (3) | 92.98 (1) | 90.43 (3) | 77.89 (2) | 2.3 |
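The camera-motion extraction can be sketched as follows; `border_motion` is a hypothetical helper that assumes a dense flow field (e.g., TV-L1 output) has already been computed:

```python
import numpy as np

def border_motion(flow, band=16):
    """Summarize camera motion from a dense optical flow field of shape
    (H, W, 2) by averaging flow vectors in a band near the image border,
    where the foreground performer is unlikely to appear."""
    h, w, _ = flow.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[:band, :] = True
    mask[-band:, :] = True
    mask[:, :band] = True
    mask[:, -band:] = True
    return flow[mask].mean(axis=0)  # (dx, dy) camera-motion estimate

# A pure rightward camera shift yields flow (1, 0) everywhere.
flow = np.zeros((240, 320, 2))
flow[..., 0] = 1.0
dx, dy = border_motion(flow)
```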
+
+# 6 Conclusion
+
+In this work, we proposed a novel asymmetric interaction model for asymmetrically interactive action assessment. In our model, we categorize the roles in an asymmetrically interactive action into a primary agent and secondary ones. With the asymmetric interaction, we can model interactive actions with a strong asymmetric relation. We evaluated our model on JIGSAWS [6], where it achieved state-of-the-art performance. Moreover, experimental results on TASD-2, a new dataset (to be released) collected in this work, demonstrated that our method generalizes to general interactive actions with a weak asymmetric relation. The extra experiments on AQA-7 [18] also indicated that our model can be adapted to conventional action assessment. In the future, our method can be extended to actions involving more than two people with the help of important people detectors [23, 11, 12]; we will explore this along with constructing relevant datasets.
+
+# Acknowledgement
+
+This work was supported partially by the National Key Research and Development Program of China (2018YFB1004903), NSFC(U1911401,U1811461), Guangdong Province Science and Technology Innovation Leading Talents (2016TX03X157), Guangdong NSF Project (No. 2018B030312002), Guangzhou Research Project (201902010037), and Research Projects of Zhejiang Lab (No. 2019KD0AB03).
+
+# References
+
+1. Bertasius, G., Soo Park, H., Yu, S.X., Shi, J.: Am i a baller? basketball performance assessment from first-person videos. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2177-2185 (2017)
+2. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6299-6308 (2017)
+3. Chen, J., Wang, Y., Qin, J., Liu, L., Shao, L.: Fast person re-identification via cross-camera semantic binary transformation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017)
+4. Doughty, H., Damen, D., Mayol-Cuevas, W.: Who's better, who's best: Skill determination in video using deep ranking. proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
+5. Doughty, H., Mayol-Cuevas, W., Damen, D.: The pros and cons: Rank-aware temporal attention for skill determination in long videos (June 2019)
+6. Gao, Y., Vedula, S.S., Reiley, C.E., Ahmidi, N., Varadarajan, B., Lin, H.C., Tao, L., Zappella, L., Bejar, B., Yuh, D.D., et al.: Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling. In: MICCAI Workshop: M2CAI. vol. 3, p. 3 (2014)
+7. Gattupalli, S., Ebert, D., Papakostas, M., Makedon, F., Athitsos, V.: Cognilearn: A deep learning-based interface for cognitive behavior assessment. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces. pp. 577-587. ACM (2017)
+8. Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to forget: Continual prediction with LSTM. IET Conference Proceedings pp. 850-855(5) (January 1999)
+9. Ilg, W., Mezger, J., Giese, M.: Estimation of skill levels in sports based on hierarchical spatio-temporal correspondences. In: Joint Pattern Recognition Symposium. pp. 523-531. Springer (2003)
+10. Li, H., Cai, Y., Zheng, W.S.: Deep dual relation modeling for egocentric interaction recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+11. Li, W.H., Hong, F.T., Zheng, W.S.: Learning to learn relation for important people detection in still images. In: Computer Vision and Pattern Recognition (2019)
+12. Li, W.H., Li, B., Zheng, W.S.: Personrank: detecting important people in images. In: International Conference on Automatic Face & Gesture Recognition (FG 2018) (2018)
+13. Malpani, A., Vedula, S.S., Chen, C.C.G., Hager, G.D.: Pairwise comparison-based objective score for automated skill assessment of segments in a surgical task. In: International Conference on Information Processing in Computer-Assisted Interventions. pp. 138-147. Springer (2014)
+14. Paiement, A., Tao, L., Hannuna, S., Camplani, M., Damen, D., Mirmehdi, M.: Online quality assessment of human movement from skeleton data. In: British Machine Vision Conference. pp. 153-166. BMVA press (2014)
+15. Pan, J.H., Gao, J., Zheng, W.S.: Action assessment by joint relation graphs. In: The IEEE International Conference on Computer Vision (ICCV) (October 2019)
+16. Parmar, P., Morris, B.T.: What and how well you performed? a multitask learning approach to action quality assessment. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+
+17. Parmar, P., Tran Morris, B.: Learning to score olympic events. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 20-28 (2017)
+18. Parmar, P., Tran Morris, B.: Action quality assessment across multiple actions. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1468-1476 (Jan 2019). https://doi.org/10.1109/WACV.2019.00161
+19. Pérez, J.S., Meinhardt-Llopis, E., Facciolo, G.: Tv-l1 optical flow estimation. Image Processing On Line pp. 137-150 (2013)
+20. Pirsiavash, H., Vondrick, C., Torralba, A.: Assessing the quality of actions. In: European Conference on Computer Vision. pp. 556-571. Springer (2014)
+21. Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Transactions on Neural Networks 20(1), 61-80 (2009)
+22. Sharma, Y., Bettadapura, V., Plötz, T., Hammerla, N., Mellor, S., McNaney, R., Olivier, P., Deshmukh, S., McCaskie, A., Essa, I.: Video based assessment of osats using sequential motion textures. Georgia Institute of Technology (2014)
+23. Solomon Mathialagan, C., Gallagher, A.C., Batra, D.: Vip: Finding important people in images. In: Computer Vision and Pattern Recognition (2015)
+24. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L.u., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc. (2017), http:// papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
+25. Wang, Z., Lu, J., Tao, C., Zhou, J., Tian, Q.: Learning channel-wise interactions for binary convolutional neural networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+26. Xu, C., Fu, Y., Zhang, B., Chen, Z., Jiang, Y.G., Xue, X.: Learning to score the figure skating sports videos. arXiv preprint arXiv:1802.02774 (2018)
+27. Yan, S., Xiong, Y., Lin, D.: Spatial temporal graph convolutional networks for skeleton-based action recognition. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
+28. Zhang, Q., Li, B.: Video-based motion expertise analysis in simulation-based surgical training using hierarchical dirichlet process hidden markov model. In: Proceedings of the 2011 international ACM workshop on Medical multimedia analysis and retrieval. pp. 19-24. ACM (2011)
+29. Zhang, Q., Li, B.: Relative hidden markov models for video-based evaluation of motion skills in surgical training. IEEE transactions on pattern analysis and machine intelligence 37(6), 1206-1218 (2015)
+30. Zia, A., Essa, I.: Automated surgical skill assessment in rmis training. Int J CARS 13, 731-739 (2018)
+31. Zia, A., Sharma, Y., Bettadapura, V., Sarin, E.L., Clements, M.A., Essa, I.: Automated assessment of surgical skills using frequency analysis. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 430-438. Springer (2015)
+32. Zia, A., Sharma, Y., Bettadapura, V., Sarin, E.L., Essa, I.: Video and accelerometer-based motion analysis for automated surgical skills assessment. International journal of computer assisted radiology and surgery 13(3), 443-455 (2018)
+33. Zia, A., Sharma, Y., Bettadapura, V., Sarin, E.L., Ploetz, T., Clements, M.A., Essa, I.: Automated video-based assessment of surgical skills for training and evaluation in medical schools. International journal of computer assisted radiology and surgery 11(9), 1623-1636 (2016)
\ No newline at end of file
diff --git a/anasymmetricmodelingforactionassessment/images.zip b/anasymmetricmodelingforactionassessment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e9717df8d44dbe082d1e8cbfb75053f2d51d2b35
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:192b200e18eaa879d42fefd7de48bf62acf09b49642e6a74b1a4865f0d61b96c
+size 403518
diff --git a/anasymmetricmodelingforactionassessment/layout.json b/anasymmetricmodelingforactionassessment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..df069dcaca72eba73fa37847ff40ae3e869fd3d9
--- /dev/null
+++ b/anasymmetricmodelingforactionassessment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:681d7ba18e16d36b435aa58e4e601e2e9d6baf2a6fcaa750b75b184db2aa57f8
+size 375991
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_content_list.json b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cccd5120d7bcfe4daec6618cb42a40f4da721166
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fca12ea6673a0afab83d2ac6b6e949b4786f1b904bcbca8dabe97be4c41375a
+size 73207
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_model.json b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e52310075a10fbf83d0ab5690c10ca8f2d62b5e0
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ef174a581d195efb89d8368443060948b9d878ccb2b66be1fd21e20fec788b3
+size 90335
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_origin.pdf b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cdd8e7dbaf220a0c9840d08b696707031417fec1
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/a02ca8d3-8293-4a90-b9c7-08bf4745e5b5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09915bbac1fe43d98f9168f245c14d10b6fb3cb7fd31fe7e16951fec6755685a
+size 6577786
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/full.md b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a189080b49121019af83ae7bd906073ee73f15f3
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/full.md
@@ -0,0 +1,248 @@
+# Anatomy-Aware Siamese Network: Exploiting Semantic Asymmetry for Accurate Pelvic Fracture Detection in X-ray Images
+
+Haomin Chen\*,2, Yirui Wang\*,1, Kang Zheng\*,1, Weijian Li\*,3, Chi-Tung Chang\*, Adam P. Harrison\*, Jing Xiao\*,4, Gregory D. Hager\*,2, Le Lu\*,1, Chien-Hung Liao\*, and Shun Miao\*
+
+1 PAII Inc., Bethesda, MD, USA
+
+$^{2}$ Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
+
+$^{3}$ Department of Computer Science, University of Rochester, NY, USA
+
+4 Ping An Technology, Shenzhen, China
+
+5 Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
+
+Abstract. Visual cues that treat bilaterally symmetric anatomies as normal findings are widely used in clinical practice to disambiguate subtle abnormalities in medical images. So far, effectively emulating this practice in computer-aided diagnosis (CAD) methods has received inadequate research attention. In this work, we exploit semantic anatomical symmetry or asymmetry analysis in a complex CAD scenario, i.e., anterior pelvic fracture detection in trauma pelvic X-rays (PXRs), where both semantically pathological (referred to as fracture) and non-pathological (e.g., pose) asymmetries occur. Visually subtle yet pathologically critical fracture sites can be missed even by experienced clinicians when limited diagnosis time is permitted in emergency care. We propose a novel fracture detection framework that builds upon a Siamese network enhanced with a spatial transformer layer to holistically analyze symmetric image features. Image features are spatially formatted to encode bilaterally symmetric anatomies. A new contrastive feature learning component in our Siamese network is designed to make the deep image features more salient with respect to the underlying semantic asymmetries (caused by pelvic fracture occurrences). Our proposed method has been extensively evaluated on 2,359 PXRs from unique patients (the largest study to date), reporting an area under the ROC curve of 0.9771, the highest among state-of-the-art fracture detection methods, with improved clinical indications.
+
+Keywords: Anatomy-Aware Siamese Network, Semantic Asymmetry, Fracture Detection, X-ray Images
+
+# 1 Introduction
+
+The computer-aided diagnosis (CAD) of abnormalities in medical images is among the most promising applications of computer vision in healthcare. In particular, X-ray CAD represents an important research focus [5,34,28,20,15,4,25]. However, the high variation of abnormalities in medical imagery poses nontrivial challenges in differentiating pathological abnormalities from radiological patterns caused by normal anatomical and imaging-condition differences. At the same time, many anatomical structures are bilaterally symmetric (e.g., the brain, skeleton and breast), which suggests that the detection of abnormal radiological findings can exploit semantically symmetric anatomical regions (Figure 1). Indeed, using bilaterally symmetric visual cues to confirm suspicious findings is a strongly recommended and widely adopted clinical practice [7]. Our aim is to emulate this practice in CAD and apply it to the problem of effectively detecting subtle but critical anterior pelvic fractures in trauma pelvic X-rays (PXRs).
+
+Fig. 1. Example medical images where anatomical symmetry helps to detect abnormalities. The top 3 images represent infiltration in chest X-rays, stroke in brain CT, and osteoarthritis in knee X-rays. The bottom 2 images represent masses in mammography and fractures in PXRs. These abnormalities can be better differentiated when the anatomically symmetric body parts are compared.
+
+Several studies have investigated the use of symmetry cues for CAD, aiming to find abnormalities in brain structures in neuro-imaging [32,18,22], breasts in mammograms [24], and stroke in CT [1]. All of these works directly employ symmetry defined on the image or shape space. However, under less constrained scenarios, especially the ones using projection-based imaging modalities in an emergency room setting, e.g., PXRs, image asymmetries do not always indicate positive clinical findings, as they are often caused by other non-pathological factors like patient pose, bowel gas patterns, and clothing. For these settings, a workflow better mirroring the clinical practice, i.e. robust analysis across semantic anatomical symmetries, is needed. Using semantic anatomical symmetry to facilitate CAD in such complex scenarios has yet to be explored.
+
+To bridge this gap, we propose an anatomy-aware Siamese network (AASN) to effectively exploit semantic anatomical symmetry in complex imaging scenarios. Our motivation comes from the detection of pelvic fractures in emergency-room PXRs. Pelvic fractures are among the most dangerous and lethal traumas, due to their high association with massive internal bleeding. Non-displaced fractures, i.e., fractures that cause no displacement of the bone structures, can be extraordinarily difficult to detect, even for experienced clinicians. This difficulty, coupled with the extreme and highly consequential demands on detection performance, motivates further progress: using anatomical symmetry to push performance even higher is a critical gap to fill.
+
+In AASNs, we employ fully convolutional Siamese networks [11] as the backbone of our method. First, we exploit symmetry cues by anatomically reparameterizing the image using a powerful graph-based landmark detection [21]. This allows us to create an anatomically-grounded warp from one side of the pelvis to the other. While previous symmetry modeling methods rely on image-based spatial alignment before encoding [24], we take a different approach and perform feature alignment after encoding using a spatial transformer layer. This is motivated by the observation that image asymmetry in PXRs can be caused by many factors, including imaging angle and patient pose. Thus, directly warping images is prone to introducing artifacts, which can alter pathological image patterns and make them harder to detect. Since image asymmetry can be semantically pathological, i.e., caused by fractures, or non-pathological, e.g., caused by imaging angle and patient pose, we propose a new contrastive learning component in the Siamese network that optimizes the deep image features to be more salient to the underlying semantic asymmetries (those caused by fracture). Crucially, this mitigates the impact of distracting asymmetries that may mislead the model. With a sensible embedding in place, corresponding anatomical regions are jointly decoded for fracture detection, allowing the decoder to reliably discover fracture-causing discrepancies.
+
+In summary, our main contributions are fourfold.
+
+- We present a clinically-inspired (or reader-inspired) and computationally principled framework, named AASN, which is capable of effectively exploiting anatomical landmarks for semantic asymmetry analysis from encoded deep image features. This facilitates a high-performance CAD system for detecting both visually evident and subtle pelvic fractures in PXRs.
+- We systematically explore plausible means for fusing image-based anatomical symmetry information. A novel Siamese feature alignment via a spatial transformer layer is proposed to address the potential image-distortion drawback of the prior work [24].
+- We describe and employ a new contrastive learning component to improve the representation and saliency of deep image features with respect to semantically pathological asymmetries. This better disambiguates them from visual asymmetries caused by non-pathological factors.
+- Extensive evaluation on a real clinical dataset of 2,359 PXRs from unique patients is conducted. Our results show that AASN simultaneously increases
+
+the AUC and the average precision from $96.52\%$ to $97.71\%$ and from $94.52\%$ to $96.50\%$, respectively, compared to a strong baseline model that does not exploit symmetry or asymmetry. More significantly, the pelvic fracture detection sensitivity (recall) is boosted from $70.79\%$ to $77.87\%$ when controlling the false positive (FP) rate at $1\%$.
+
+# 2 Related Work
+
+Computer-Aided Detection and Diagnosis in Medical Imaging. In recent years, motivated by the availability of public X-ray datasets, X-ray CAD has received extensive research attention. Many works have studied abnormality detection in chest X-rays (CXRs) [5,34,28,20]. CAD of fractures in musculoskeletal radiographs is another well studied field [6,8,35]. Since many public X-ray datasets only have image-level labels, many methods formulate abnormality detection as an image classification problem and use class activation maps [38] for localization [28,34]. While abnormalities that involve a large image area (e.g., atelectasis, cardiomegaly) may be suitable for detection via image classification, more localized abnormalities like masses and fractures are in general more difficult to detect without localization annotations. While methods avoiding such annotations have been developed [20,35], we take a different approach and use point-based localizations for annotations, which are minimally laborious and a natural fit for ill-defined fractures. Another complementary strategy to improve abnormality detection is to use anatomical and pathological knowledge and heuristics to help draw diagnostic inferences [23]. This is also an approach we take, exploiting the bilateral symmetry priors of anatomical structures to push forward classification performance.
+
+Image-Based Symmetry Modeling for CAD. Because many human anatomies are left-right symmetric (e.g., brain, breast, bone), anatomical symmetry has been studied for CAD. The shape asymmetry of subcortical brain structures is known to be associated with Alzheimer's disease and has been measured using both analytical shape analysis [32,18] and machine learning techniques [22]. A few attempts have explored using symmetric body parts for CAD [1,24]. For instance, Siamese networks [11] have been used to combine features of the left and right halves of brain CTs for detecting strokes. A Siamese Faster-RCNN approach was also proposed to detect masses from mammograms by jointly analyzing left and right breasts [24]. Yet, existing methods directly associate asymmetries in the image space with pathological abnormalities. While this assumption may hold in strictly controlled imaging scenarios, like brain CT/MRIs and mammograms, it rarely holds in PXRs, where additional asymmetry-causing factors are legion, motivating the more anatomically-derived approach to symmetry that we take.
+
+Siamese Network and Contrastive Learning. Siamese networks are an effective method for contrastive learning, using a contrastive loss to embed semantically similar samples closer together and dissimilar samples farther apart [11]. Local similarities have also been learned using Siamese networks [37]
+
+
+Fig. 2. Illustration of ROI and warp generation steps.
+
+and applied to achieve image matching/registration [26,29]. The embedding learned by Siamese networks has also been applied to one-shot image recognition [17] and human re-identification [31,30]. Fully convolutional Siamese networks have also been proposed to produce dense and efficient sliding-window embeddings, with notable success on visual object tracking tasks [9,2,10]. Another popular technique for contrastive learning is triplet networks [12]. We also use Siamese networks to learn embeddings; however, we propose a process to learn embeddings that are invariant to spurious asymmetries, while being sensitive to pathology-inducing ones.
+
+# 3 Method
+
+# 3.1 Problem Setup
+
+Given a PXR, denoted as $I$, we aim to detect sites of anterior pelvic fractures. Following the approach widely adopted by CAD methods [20,33,34], our model produces image-level binary classifications of fracture and heatmaps for fracture localization. Using heatmaps to represent localization (instead of bounding boxes or segmentations) stems from the inherent ambiguity in the definition of the instance and boundary of pathological abnormalities in medical images. For instance, a fracture can be comminuted, i.e., the bone breaks into multiple pieces, resulting in ambiguity in defining the number of fractures. Our model takes a cost-effective and flexible annotation format, a point at the center of each fracture site, allowing ambiguous fracture conditions to be flexibly represented as one point or multiple points. We dilate the annotation points by an empirically-defined radius (2 cm in our experiment) to produce a mask for the PXR, which is the training target of our method, denoted as $M$. In this way, we execute heatmap regression, similar to landmark detection [36], except for center-points of abnormalities with ambiguous extents.
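The target-mask generation described above can be sketched as follows; the point format, function name, and pixel radius are illustrative assumptions (the paper specifies a 2 cm physical radius):

```python
import numpy as np

def points_to_mask(points, shape, radius):
    """Dilate center-point fracture annotations into the binary target mask M.

    points : iterable of (row, col) pixel coordinates (assumed format)
    shape  : (H, W) of the ROI
    radius : dilation radius in pixels (the paper uses a 2 cm physical radius)
    """
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for r, c in points:
        # Union of discs of the given radius around each annotation point.
        mask |= (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
    return mask.astype(np.float32)
```

At training time, $M$ would be this mask, optionally downsampled to the output resolution of the heatmap branch.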
+
+
+Fig. 3. System overview of the proposed AASN. The Siamese encoding module takes two pre-processed ROIs as input and encodes them using dense blocks with shared weights. After warping and alignment, the encoded feature maps are further processed by a Siamese feature fusion module and a Siamese contrastive learning module to produce a fracture probability map and a feature distance map, respectively.
+
+
+
+
+
+# 3.2 Anatomy-Grounded Symmetric Warping
+
+Given the input PXR image, our method first produces a region of interest (ROI) of the anterior pelvis and an anatomically-grounded warp to reveal the bilateral symmetry of the anatomy. The steps of ROI and warp generation are illustrated in Figure 2. First, a powerful graph-based landmark detection [19] is applied to detect 16 skeletal landmarks, including 7 pairs of bilaterally symmetric landmarks and 2 points on the pubic symphysis. From the landmarks, a line of bilateral symmetry is regressed, and the image is flipped with respect to it. Since we focus on detecting anterior pelvic fractures, where the danger of massive bleeding is high and fractures are hard to detect, we extract ROIs of the anterior pelvis from the two images as a bounding box of landmarks on the pubis and ischium, which are referred to as $I$ and $I_{f}$. A pixel-to-pixel warp from $I_{f}$ to $I$ is generated from the corresponding landmarks in $I_{f}$ and $I$ using the classic thin-plate spline (TPS) warp [3], denoted as $T$. Note, the warp $T$ is not directly used to align the images. Instead, it is used in our Siamese network via a spatial transformer layer to align the features.
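The symmetry-line regression step can be sketched as follows: midpoints of the bilateral landmark pairs and the pubic symphysis points should all lie near the line of symmetry, so a simple least-squares fit over those points suffices. The function names, the near-vertical-line assumption, and the fitting choice are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fit_symmetry_line(pairs, midline_points):
    """Regress the line of bilateral symmetry from detected landmarks.

    pairs          : (N, 2, 2) array of left/right landmark pairs, as (x, y)
    midline_points : (M, 2) landmarks on the pubic symphysis
    Returns (p0, d): a point on the line and a unit direction vector.
    """
    # Midpoints of symmetric pairs and the midline landmarks should all lie
    # near the symmetry line; fit x as a function of y (the line is roughly
    # vertical in a PXR) by least squares.
    mids = np.vstack([pairs.mean(axis=1), midline_points])
    b, a = np.polyfit(mids[:, 1], mids[:, 0], 1)   # x = b*y + a
    p0 = np.array([a, 0.0])
    d = np.array([b, 1.0]) / np.hypot(b, 1.0)
    return p0, d

def reflect(points, p0, d):
    """Mirror points across the line through p0 with unit direction d."""
    v = points - p0
    proj = np.outer(v @ d, d)      # component of v along the line
    return p0 + 2 * proj - v       # keep the parallel part, flip the rest
```

Flipping the image with respect to the regressed line amounts to applying `reflect` to every pixel coordinate (or, in practice, resampling with the corresponding inverse map).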
+
+# 3.3 Anatomy-Aware Siamese Network
+
+The architecture of AASN is shown in Figure 3. AASN contains a fully convolutional Siamese network with a DenseNet-121 [13] backbone. The dense blocks are split into two parts, an encoding part and a decoding part. It is worth noting that AASN allows the backbone network to be split flexibly at any block. For our application, we split at a middle level after the 3rd dense block, where the
+
+
+Fig. 4. Transition layer modification options for feature map fusion. (a) Feature map fusion before transition. (b) Feature map fusion after transition. (c) Feature map fusion inside transition.
+
+features are deep enough to encode the local skeletal pattern, but have not been pooled so heavily that the textural information of small fractures is lost.
+
+The encoding layers follow a Siamese structure, with two streams of weight-shared encoding layers taking the two images $I$ and $I_{f}$ as inputs. The encoder outputs, denoted as $F$ and $F_{f}$, provide feature representations of the original image and the flipped image, respectively. The spatial alignment transform $T$ is applied on $F_{f}$, resulting in $F_{f}^{\prime}$, making corresponding pixels in $F$ and $F_{f}^{\prime}$ represent corresponding anatomies. The two aligned feature maps are then fused and decoded to produce a fracture probability map, denoted as $Y$. Details of feature map fusion and decoding are described in Sec. 3.4. We produce the probability heatmap as the fracture detection result to alert the clinician to the presence of a fracture and also to guide his or her attention (as shown in Figure 6). Since pelvic fractures can be very difficult to detect, even when there is a known fracture, this localization is a key feature over-and-above image-level predictions.
+
+The model is trained using two losses. The first loss is the pixel-wise binary cross entropy (BCE) between the predicted heatmap $Y$ and the ground truth $M$ , denoted as $L_{b}$ . The second loss is the pixel-wise contrastive loss between the two feature maps, $F$ and $F_{f}^{\prime}$ , denoted as $L_{c}$ . Details of the contrastive loss will be discussed in Sec. 3.5. The total loss can be written as
+
+$$
+L = L _ {b} + \lambda L _ {c}, \tag {1}
+$$
+
+where $\lambda$ is a weight balancing the two losses.
+
+# 3.4 Siamese Feature Fusion
+
+The purpose of encoding the flipped image is to provide a reference of the symmetric counterpart, $F_{f}$ , which can be incorporated with the feature $F$ to facilitate fracture detection. To provide a meaningful reference, $F_{f}$ needs to be
+
+spatially aligned with $F$ , so that features with the same index/coordinate in the two feature maps encode the same, but symmetric, anatomies of the patient. Previous methods have aligned the bilateral images $I$ and $I_{f}$ directly before encoding [24]. However, when large imaging angle and patient pose variations are present, image alignment is prone to introducing artifacts, which can increase the difficulty of fracture detection. Therefore, instead of aligning the bilateral images directly, we apply a spatial transformer layer on the feature map $F_{f}$ to align it with $F$ , resulting in $F_{f}^{\prime}$ . The aligned feature maps $F$ and $F_{f}^{\prime}$ are fused to produce a bilaterally combined feature map, where every feature vector encodes the visual patterns from symmetrical anatomies. This allows the decoder to directly incorporate symmetry analysis into fracture detection.
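In PyTorch terms, the spatial transformer step amounts to resampling the encoded feature map with a sampling grid derived from the TPS warp $T$; this is a minimal sketch, with the grid construction from the TPS landmark correspondences assumed to happen offline:

```python
import torch
import torch.nn.functional as F

def align_features(f_f, grid):
    """Spatial-transformer resampling of the flipped-side features (a sketch).

    f_f  : (B, C, H, W) encoded feature map of the flipped ROI
    grid : (B, H, W, 2) normalized sampling grid derived from the TPS
           landmark correspondences (assumed precomputed)
    """
    # grid_sample performs the differentiable bilinear resampling step of a
    # spatial transformer; gradients flow back into the encoder features.
    return F.grid_sample(f_f, grid, mode="bilinear", align_corners=False)
```

An identity grid reproduces the input feature map, which makes the layer easy to sanity-check; the actual grid would encode $T$ at the feature-map resolution.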
+
+We fuse the feature maps by concatenation. Implementation of the concatenation involves modification to the transition module between the dense blocks, where multiple options exist, including concatenation before, after, or inside the transition module (as shown in Figure 4). A transition module in DenseNet consists of sequential BatchNorm, ReLU, Conv and AvgPool operations. We perform the concatenation inside the transition module after the ReLU layer, because it causes minimal structural changes to the DenseNet model. Specifically, the only layer affected in the DenseNet is the $1 \times 1$ Conv layer after concatenation, whose input channels are doubled. All other layers remain the same, allowing us to leverage the ImageNet pre-trained weights.
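The fusion-inside-transition option (Figure 4c) can be sketched as a modified DenseNet transition layer in which only the $1\times1$ convolution's input channels are doubled; channel sizes and the module name are illustrative:

```python
import torch
import torch.nn as nn

class FusionTransition(nn.Module):
    """DenseNet transition layer with Siamese fusion after the ReLU
    (Figure 4c); a sketch with illustrative channel sizes."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_ch)
        self.relu = nn.ReLU(inplace=True)
        # The only modified layer: the 1x1 conv's input channels are doubled.
        self.conv = nn.Conv2d(2 * in_ch, out_ch, kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, f, f_aligned):
        # Shared BN/ReLU applied to both streams, then bilateral concatenation.
        f = self.relu(self.norm(f))
        f_aligned = self.relu(self.norm(f_aligned))
        return self.pool(self.conv(torch.cat([f, f_aligned], dim=1)))
```

Because every layer except the widened $1\times1$ convolution keeps its original shape, the ImageNet pre-trained DenseNet weights can still initialize the rest of the network.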
+
+# 3.5 Siamese Contrastive Learning
+
+While the above feature fusion provides a principled way to perform symmetric analysis, further advancements can be made. We are motivated by a key insight that image asymmetry can be caused by pathological abnormalities, i.e., fracture, or spurious non-pathological factors, e.g., soft tissue shadows, bowel gas patterns, clothing and foreign bodies. These non-pathological factors can be visually confusing, causing false positives. We aim to optimize the deep features to be more salient to the semantically pathological asymmetries, while mitigating the impact of distracting non-pathological asymmetries. To this end, our model employs a new contrastive learning component to minimize the pixel-wise distance between $F$ and $F_{f}^{\prime}$ in areas without fracture, making the features insensitive to non-semantic asymmetries and thus less prone to false positives. On the other hand, our contrastive learning component encourages a larger distance between $F$ and $F_{f}^{\prime}$ in areas with fractures, making the features more sensitive to semantic asymmetries.
+
+The above idea is implemented using pixel-wise margin loss between $F$ and $F_{f}^{\prime}$ after a non-linear projection $g$ :
+
+$$
+L_{c} = \sum_{\boldsymbol{x}} \begin{cases} \left\| g(F(\boldsymbol{x})) - g\left(F_{f}^{\prime}(\boldsymbol{x})\right) \right\|^{2} & \text{if } \boldsymbol{x} \notin \hat{M} \\ \max\left(0, m - \left\| g(F(\boldsymbol{x})) - g\left(F_{f}^{\prime}(\boldsymbol{x})\right) \right\|^{2}\right) & \text{if } \boldsymbol{x} \in \hat{M} \end{cases} \tag{2}
+$$
+
+where $\pmb{x}$ denotes the pixel coordinate, $\hat{M}$ denotes the mask indicating areas affected by fractures, and $m$ is a margin governing the dissimilarity of semantic
+
+asymmetries. The mask $\hat{M}$ is calculated as $\hat{M} = M \cup T \circ M_{f}$, where $T \circ M_{f}$ is the flipped and warped $M$.
+
+We employ a non-linear projection $g$ to transform the features before calculating the distance, which improves the quality of the learned features $F$ and $F_{f}^{\prime}$. In our experiment, the non-linear projection consists of a linear layer followed by BatchNorm and ReLU. We posit that directly performing contrastive learning on the features used for fracture detection could induce information loss and limit the modeling power. For example, bone curvature asymmetries in X-ray images are often non-pathological (e.g., caused by pose). However, they also provide visual cues to detect certain types of fractures. Using the non-linear projection, such useful information can be excluded from the contrastive learning so that it is preserved in the features for the downstream fracture detection task.
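Equation (2) combined with the projection head can be sketched as a pixel-wise margin loss; the (B, C, H, W) tensor convention and function names are assumptions:

```python
import torch

def contrastive_margin_loss(f, f_aligned, proj, mask, m=0.5):
    """Pixel-wise margin loss of Eq. (2); a sketch.

    f, f_aligned : (B, C, H, W) Siamese feature maps F and F'_f
    proj         : non-linear projection head g (per-pixel linear layer,
                   i.e. a 1x1 conv, followed by BN and ReLU in the paper)
    mask         : (B, 1, H, W) binary fracture mask M-hat at feature resolution
    m            : margin governing the dissimilarity of semantic asymmetries
    """
    # Squared feature distance per pixel, computed after projection.
    d2 = ((proj(f) - proj(f_aligned)) ** 2).sum(dim=1, keepdim=True)
    pull = d2                              # x not in M-hat: pull features together
    push = torch.clamp(m - d2, min=0.0)    # x in M-hat: push features apart up to m
    return (mask * push + (1.0 - mask) * pull).sum()
```

With a trained projection head, only the pathology-relevant part of the asymmetry is forced apart, matching the motivation above.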
+
+While the margin loss has been adopted for CAD in a previous method [22], it was employed as a metric learning tool to learn a distance metric that directly represents the image asymmetry. We stress that our targeted CAD is more complex and clinically relevant, where image asymmetry can be semantically non-pathological (caused by pose, imaging conditions, etc.), but we are only interested in detecting the pathological (fracture-caused) asymmetries. We employ the margin loss in our contrastive learning component to learn features with optimal properties. For this purpose, extra measures are taken in our method, including 1) conducting multi-task training with the margin loss calculated on a middle-level feature, and 2) employing a non-linear projection head to transform the feature before calculating the margin loss.
+
+# 4 Experiments
+
+We demonstrate that our proposed AASN can significantly improve the performance of pelvic fracture detection by exploiting the semantic symmetry of anatomies. We focus on detecting fractures on the anterior pelvis, including the pubis and ischium, an anatomically symmetric region with a high rate of diagnostic errors and life-threatening complications in clinical practice.
+
+# 4.1 Experimental Settings
+
+Dataset: We evaluate AASN on a real-world clinical dataset collected from the Picture Archiving and Communication System (PACS) of a hospital's trauma emergency department. The images have large variations in imaging conditions, including viewing angle, patient pose, and foreign bodies visible in the images. Fracture sites in these images are labeled by experienced clinicians, combining multiple sources of information for confirmation, including clinical records and computed tomography scans. The annotations are provided in the form of points, due to the inherent ambiguity in defining fractures as objects. In total, there are 2,359 PXRs, and 759 of them have at least one anterior pelvic fracture site. All our experiments are conducted with five-fold cross-validation with a $70\%/10\%/20\%$ training, validation, and testing split, respectively.
+
+Fig. 5. Comparison of ROC curves and PR curves with the baselines. (a) is the ROC curve and (b) is the PR curve. *Methods trained using image-level labels.
+
+Implementation Details: The ROIs of the anterior pelvis are resized to $256 \times 512$ and stacked into a 3-channel pseudo-color image. We produce the supervision mask for the heatmap prediction branch by dilating the annotation points into circle masks with a radius of 50 pixels (about $2\mathrm{cm}$). We implement all models using PyTorch [27]. Severe over-fitting is observed when training the networks from scratch, so we initialize them with ImageNet pre-trained weights. We empirically select DenseNet-121 as the backbone, which yields the best performance compared with other ResNet and DenseNet settings. All models are optimized by Adam [16] with a learning rate of $10^{-5}$. For the pixel-wise contrastive loss, we use the hyperparameter $m = 0.5$ as the margin, and $\lambda = 0.5$ to balance the total loss.
+
+Evaluation Metrics: We first assess the model's performance as an image-level classifier, which is a widely adopted evaluation approach for CAD systems [20,33,34]. The image-level abnormality reporting is of utmost importance in clinical workflow because it directly affects the clinical decision. We take the maximum value of the output heatmap as the classification output, and use Area under ROC Curve (AUC) and Average Precision (AP) to evaluate the classification performance.
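The image-level evaluation reduces each heatmap to a scalar score via its maximum, after which AUC can be computed with the standard rank-sum (Mann-Whitney) identity; a minimal numpy sketch with illustrative function names:

```python
import numpy as np

def image_score(heatmap):
    """Image-level classification score: the maximum heatmap activation."""
    return float(heatmap.max())

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of (positive, negative) pairs ranked correctly."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))
```

In practice a library routine (e.g. scikit-learn's ROC/AP utilities) would be used; the sketch only makes the metric's definition concrete.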
+
+We also evaluate the model's fracture localization performance. Since our model produces heatmaps as fracture localization, standard object detection metrics do not apply. A modified free-response ROC (FROC) is reported to measure localization performance. Specifically, unlike FROC, where object recall is reported against the number of false positives per image, we report fracture recall against the ratio of false positive area per image. A fracture is considered recalled if the heatmap activation value at its location is above the threshold. Areas more than 2 cm away from all fracture annotation points are considered negative, on which the false positive ratio is calculated. Areas within 2 cm from any annotation
+
+Table 1. Fracture classification and localization performance comparison with state-of-the-art models. Classifier AUC and AP are reported for classification performance. Fracture recalls at given false positive ratios are reported for localization performance. *Methods trained using image-level labels; localization performance is not evaluated for these methods.
+
+| Method | AUC | AP | Recall (FP=1%) | Recall (FP=10%) |
+| --- | --- | --- | --- | --- |
+| CheXNet* [28] | 93.42% | 86.33% | - | - |
+| Wang et al.* [34] | 95.43% | 93.31% | - | - |
+| Wang et al.* [35] | 96.06% | 93.90% | - | - |
+| Liu et al. [22] | 96.84% | 94.29% | 2.78% | 24.19% |
+| DeepSymNet [1] | 96.29% | 94.45% | 69.66% | 90.07% |
+| CBN [24] | 97.00% | 94.92% | 73.93% | 90.90% |
+| AASN | 97.71% | 96.50% | 77.87% | 92.71% |
+
+point are considered ambiguous extents of the fracture. Since both positive and negative responses in these ambiguous areas are clinically acceptable, they are excluded from the modified FROC calculation.
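One operating point of the modified FROC described above can be sketched as follows, with the heatmap, annotation points, and the 2 cm radius expressed in pixels; names and conventions are assumptions:

```python
import numpy as np

def modified_froc_point(heatmap, points, radius, threshold):
    """One operating point of the modified FROC (a sketch).

    A fracture is recalled if the heatmap value at its annotation point
    exceeds the threshold. Pixels within `radius` of any annotation are
    ambiguous and excluded; the false-positive ratio is the fraction of the
    remaining (negative) pixels that fire above the threshold.
    """
    rr, cc = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    ambiguous = np.zeros(heatmap.shape, dtype=bool)
    recalled = 0
    for r, c in points:
        ambiguous |= (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
        recalled += heatmap[r, c] > threshold
    negative = ~ambiguous
    fp_ratio = float((heatmap[negative] > threshold).mean()) if negative.any() else 0.0
    recall = recalled / len(points) if points else 1.0
    return recall, fp_ratio
```

Sweeping the threshold and aggregating over the test set traces out the modified FROC curve.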
+
+Compared Methods: We first compare AASN with three state-of-the-art CAD methods, i.e., CheXNet [28], Wang et al. [34], and Wang et al. [35], all using image-level labels for training. They classify abnormality at the image level and output heatmaps for localization visualization. CheXNet [28] employs global average pooling followed by a fully connected layer to produce the final prediction. Wang et al. [34] use Log-Sum-Exp (LSE) pooling. Wang et al. [35] employ a two-stage classification mechanism and report the state-of-the-art performance on hip/pelvic fracture classification.
+
+We also compare with three methods modeling symmetry for CAD, i.e., Liu et al. [22], CBN [24] and DeepSymNet [1]. All three methods perform alignment on the flipped image. Liu et al. [22] perform metric learning to learn a distance metric between symmetric body parts and use it directly as an indicator of abnormalities. DeepSymNet [1] and CBN [24] fuse the Siamese encodings for abnormality detection, using subtraction and concatenation with gating, respectively. All evaluated methods use a DenseNet-121 backbone, are trained using the same experimental settings, and are tested with five-fold cross-validation.
+
+# 4.2 Classification Performance
+
+Evaluation metrics of fracture classification performance are summarized in Table 1. ROC and PR curves are shown in Figure 5. The methods trained using only image-level labels result in overall lower performance than methods trained using fracture site annotations. AASN outperforms all other methods, including the ones using symmetry and fracture site annotations, by substantial margins in all evaluation metrics. The improvements are also reflected in the ROC and
+
+PR curves in Figure 5. Specifically, compared to the 2nd-highest values among all methods, AASN improves AUC and AP by $0.71\%$ and $1.58\%$, from $97.00\%$ and $94.92\%$ to $97.71\%$ and $96.50\%$, respectively. We stress that in this high AUC and AP range (i.e., above $95\%$), the improvements brought by AASN are significant. For instance, when recall is increased from $95\%$ to $96\%$, the number of missed fractures is reduced by $20\%$.
+
+Figure 6 provides visualizations of fracture heatmaps produced using different methods. Non-displaced fractures that do not largely disrupt the bone structures are visually ambiguous and often missed by the vanilla DenseNet-121, which does not consider symmetry. Comparison between the fracture site and its symmetric bone reveals that the suspicious pattern occurs on only one side and is thus likely to be a fracture. This intuition is in line with the results: by incorporating symmetric features, some of the ambiguous fractures can be detected. By employing the feature comparison module, AASN is able to detect more fractures, hypothetically owing to the better feature characteristics learned via feature comparison.
+
+# 4.3 Localization Performance
+
+We also evaluate AASN's fracture localization performance. The three symmetry modeling baselines and our four ablation study methods are also evaluated for comparison. As summarized in Table 1, AASN achieves the best fracture site recall among all evaluated methods, resulting in $\mathrm{Recall}_{\mathrm{FP}=1\%} = 77.87\%$ and $\mathrm{Recall}_{\mathrm{FP}=10\%} = 92.71\%$, respectively. It outperforms baseline methods by substantial margins.
+
+Among the baseline methods, directly using the learned distance metric as an indicator of fracture (Liu et al. [22]) results in the lowest localization performance, because the image asymmetry indicated by the distance metric can be caused by non-pathological factors other than fractures. This comparison justifies the importance of our proposed contrastive learning component, which exploits image asymmetry to optimize deep features for downstream fracture detection, instead of directly using it as a fracture indicator. CBN [24] achieves the best performance among the three baselines, hypothetically owing to the Siamese feature fusion. With our feature alignment and contrastive learning components, AASN significantly improves fracture site $\mathrm{Recall}_{\mathrm{FP}=1\%}$ over CBN [24] by $3.94\%$.
+
+# 4.4 Ablation Study
+
+We conduct an ablation study of AASN to analyze the contributions of its novel components, summarized in Table 2. The components include: 1) symmetric feature fusion (referred to as $FF$), 2) feature alignment (referred to as $FA$) and 3) feature contrastive learning (referred to as $CL$). We add these components individually to the vanilla DenseNet-121 to analyze their effects. We also analyze the effect of the non-linear projection head $g$ by evaluating a variant of contrastive learning without it.
+
+
+Fig. 6. Prediction results for different models. (a) pubis ROI in the PXR. Fracture probability heatmaps produced by (b) Vanilla DenseNet-121 [14], (c) CBN [24] and (d) AASN. (e) the distance map between Siamese features in AASN. The last row shows an example of a failed case.
+
+Symmetric Feature Fusion: The effect of feature fusion is reflected in the comparisons: baseline vs. $FF$ and baseline vs. $FF - FA$ . Both $FF$ and $FF - FA$ employ symmetric feature fusion and are able to outperform Vanilla, although by a different margin due to the different alignment methods used. In particular, $FF - FA$ significantly improves the $\mathrm{Recall}_{\mathrm{FP} = 1\%}$ by $5.89\%$ . These improvements are hypothetically owing to the incorporation of the visual patterns from symmetric body parts, which provides reference for differentiating visually ambiguous fractures.
+
+Feature Alignment: The effect of feature warping and alignment is reflected in the comparisons: $FF$ vs. $FF-FA$ and $FF-CL$ vs. $FF-FA-CL$. The ablation study shows that, by using feature warping and alignment, the performances of both $FF$ and $FF-CL$ are significantly improved. In particular, the $\mathrm{Recall}_{\mathrm{FP}=1\%}$ is improved by $3.46\%$ and $1.60\%$ in $FF-FA$ and $FF-FA-CL$, respectively. It is also demonstrated that the contributions of feature warping and alignment are consistent with and without Siamese feature comparison. We posit that the performance improvements are owing to the preservation of the original image pattern by performing warping and alignment at the feature level.
+
+Contrastive Learning: The effect of Siamese feature comparison is reflected in the comparisons: $FF$ vs. $FF-CL$ and $FF-FA$ vs. $FF-FA-CL$. The ablation study shows a measurable contribution of the Siamese feature comparison module. By using Siamese feature fusion, $FF$ and $FF-FA$ already show significant improvements compared to the baseline. By adding Siamese feature comparison to $FF$
+
+Table 2. Ablation study of AASN. The baseline model is vanilla DenseNet-121 trained without the symmetry modeling components. "FF" indicates using feature fusion. "FA" indicates using feature alignment (otherwise image alignment is used). "CL" indicates using contrastive learning. "no proj." indicates that contrastive learning is performed without the non-linear projection head.
+
+| FF | FA | CL | AUC | AP | Recall (FP=1%) | Recall (FP=10%) |
+| --- | --- | --- | --- | --- | --- | --- |
+|  |  |  | 96.52% | 94.52% | 70.79% | 89.46% |
+| ✓ |  |  | 96.93% (+0.41%) | 94.77% (+0.25%) | 73.22% (+2.43%) | 89.93% (+0.47%) |
+| ✓ | ✓ |  | 97.20% (+0.68%) | 95.68% (+1.16%) | 76.68% (+5.89%) | 91.51% (+2.05%) |
+| ✓ |  | ✓ | 97.46% (+0.94%) | 95.36% (+0.84%) | 76.27% (+5.48%) | 91.09% (+1.63%) |
+| ✓ | ✓ | ✓ (no proj.) | 97.31% (+0.79%) | 96.15% (+1.63%) | 77.26% (+6.47%) | 92.70% (+3.24%) |
+| ✓ | ✓ | ✓ | 97.71% (+1.19%) | 96.50% (+1.98%) | 77.87% (+7.08%) | 92.71% (+3.25%) |
+
+and $FF-FA$, $\mathrm{Recall}_{\mathrm{FP}=1\%}$ is improved by $3.05\%$ and $1.19\%$, respectively. The improvements are in line with our motivation and hypothesis that, by maximizing/minimizing Siamese feature distances in areas with/without fractures, the network can learn features that are more sensitive to fractures and less sensitive to other distracting factors. Compared to the AASN variant that directly performs contrastive learning on the symmetric features (no proj.), employing the non-linear projection head further improves $\mathrm{Recall}_{\mathrm{FP}=1\%}$ by $0.61\%$.
+
+# 5 Conclusion
+
+In this paper, we systematically and thoroughly study exploiting prior knowledge of anatomical symmetry to facilitate CAD, in particular anterior pelvic fracture detection in PXRs. We introduce a deep neural network technique, termed the Anatomy-Aware Siamese Network (AASN), to incorporate semantic symmetry analysis into abnormality (i.e., fracture) detection. Through a comprehensive ablation study, we demonstrate that: 1) employing symmetric feature fusion can effectively exploit symmetry information to facilitate fracture detection; 2) performing spatial alignment at the feature level for symmetric feature fusion leads to a substantial performance gain; 3) using contrastive learning, the Siamese encoder learns a more sensible embedding, leading to further performance improvement. By comparing with the state-of-the-art methods, including the latest ones modeling symmetry, we demonstrate that AASN is by far the most effective method exploiting symmetry, with substantially improved performance on both classification and localization tasks.
+
+# References
+
+1. Barman, A., Inam, M.E., Lee, S., Savitz, S., Sheth, S., Giancardo, L.: Determining ischemic stroke from ct-angiography imaging using symmetry-sensitive convolutional networks. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). pp. 1873–1877 (April 2019). https://doi.org/10.1109/ISBI.2019.8759475
+2. Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., Torr, P.H.: Fully convolutional siamese networks for object tracking. In: European conference on computer vision. pp. 850-865. Springer (2016)
+3. Bookstein, F.L.: Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(6), 567-585 (June 1989). https://doi.org/10.1109/34.24792
+4. Bustos, A., Pertusa, A., Salinas, J.M., de la Iglesia-Vayá, M.: Padchest: A large chest x-ray image dataset with multi-label annotated reports (2019)
+5. Chen, H., Miao, S., Xu, D., Hager, G.D., Harrison, A.P.: Deep hierarchical multi-label classification of chest x-ray images. In: Cardoso, M.J., Feragen, A., Glocker, B., Konukoglu, E., Oguz, I., Unal, G., Vercauteren, T. (eds.) Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning. Proceedings of Machine Learning Research, vol. 102, pp. 109-120. PMLR, London, United Kingdom (08-10 Jul 2019), http://proceedings.mlr.press/v102/chen19a.html
+6. Cheng, C.T., Ho, T.Y., Lee, T.Y., Chang, C.C., Chou, C.C., Chen, C.C., Chung, I.F., Liao, C.H.: Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. European radiology pp. 1-9 (2019)
+7. Clohisy, J.C., Carlisle, J.C., Beaulé, P.E., Kim, Y.J., Trousdale, R.T., Sierra, R.J., Leunig, M., Schoenecker, P.L., Millis, M.B.: A systematic approach to the plain radiographic evaluation of the young adult hip. The Journal of Bone and Joint Surgery. American volume. 90(Suppl 4), 47 (2008)
+8. Gale, W., Oakden-Rayner, L., Carneiro, G., Bradley, A.P., Palmer, L.J.: Detecting hip fractures with radiologist-level performance using deep neural networks. CoRR abs/1711.06504 (2017), http://arxiv.org/abs/1711.06504
+9. Guo, Q., Feng, W., Zhou, C., Huang, R., Wan, L., Wang, S.: Learning dynamic siamese network for visual object tracking. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1763-1771 (2017)
+10. Guo, Q., Feng, W., Zhou, C., Huang, R., Wan, L., Wang, S.: Learning dynamic siamese network for visual object tracking. In: The IEEE International Conference on Computer Vision (ICCV) (Oct 2017)
+11. Hadsell, R., Chopra, S., LeCun, Y.: Dimensionality reduction by learning an invariant mapping. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). vol. 2, pp. 1735-1742 (June 2006). https://doi.org/10.1109/CVPR.2006.100
+12. Hoffer, E., Ailon, N.: Deep metric learning using triplet network. In: Feragen, A., Pelillo, M., Loog, M. (eds.) Similarity-Based Pattern Recognition. pp. 84-92. Springer International Publishing, Cham (2015)
+13. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.: Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 (2016)
+14. Huang, G., Liu, Z., Weinberger, K.Q.: Densely connected convolutional networks. CoRR abs/1608.06993 (2016), http://arxiv.org/abs/1608.06993
+
+15. Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R.L., Shpanskaya, K., Seekins, J., Mong, D.A., Halabi, S.S., Sandberg, J.K., Jones, R., Larson, D.B., Langlotz, C.P., Patel, B.N., Lungren, M.P., Ng, A.Y.: Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. CoRR abs/1901.07031 (2019)
+16. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
+17. Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML deep learning workshop. vol. 2 (2015)
+18. Konukoglu, E., Glocker, B., Criminisi, A., Pohl, K.M.: WESD - weighted spectral distance for measuring shape dissimilarity. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(9), 2284-2297 (2012)
+19. Li, W., Lu, Y., Zheng, K., Liao, H., Lin, C., Luo, J., Cheng, C.T., Xiao, J., Lu, L., Kuo, C.F., Miao, S.: Structured landmark detection via topology-adapting deep graph learning (2020)
+20. Li, Z., Wang, C., Han, M., Xue, Y., Wei, W., Li, L.J., Fei-Fei, L.: Thoracic Disease Identification and Localization with Limited Supervision. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8290-8299. IEEE, Salt Lake City, UT (Jun 2018). https://doi.org/10.1109/CVPR.2018.00865
+21. Ling, H., Gao, J., Kar, A., Chen, W., Fidler, S.: Fast interactive object annotation with curve-gcn. CoRR abs/1903.06874 (2019), http://arxiv.org/abs/1903.06874
+22. Liu, C.F., Padhy, S., Ramachandran, S., Wang, V.X., Efimov, A., Bernal, A., Shi, L., Vaillant, M., Ratnanather, J.T., Faria, A.V., Caffo, B., Albert, M., Miller, M.I.: Using deep siamese neural networks for detection of brain asymmetries associated with alzheimer's disease and mild cognitive impairment. Magnetic Resonance Imaging (2019). https://doi.org/https://doi.org/10.1016/j.mri.2019.07.003, http://www.sciencedirect.com/science/article/pii/S0730725X19300086
+23. Liu, S.X.: Symmetry and asymmetry analysis and its implications to computer-aided diagnosis: A review of the literature. Journal of biomedical informatics 42(6), 1056-1064 (2009)
+24. Liu, Y., Zhou, Z., Zhang, S., Luo, L., Zhang, Q., Zhang, F., Li, X., Wang, Y., Yu, Y.: From unilateral to bilateral learning: Detecting mammogram masses with contrasted bilateral network. In: Shen, D., Liu, T., Peters, T.M., Staib, L.H., Essert, C., Zhou, S., Yap, P.T., Khan, A. (eds.) Medical Image Computing and Computer Assisted Intervention - MICCAI 2019. pp. 477-485. Springer International Publishing, Cham (2019)
+25. Lu, Y., Li, W., Zheng, K., Wang, Y., Harrison, A.P., Lin, C., Wang, S., Xiao, J., Lu, L., Kuo, C.F., Miao, S.: Learning to segment anatomical structures accurately from one exemplar (2020)
+26. Melekhov, I., Kannala, J., Rahtu, E.: Siamese network features for image matching. In: 2016 23rd International Conference on Pattern Recognition (ICPR). pp. 378-383 (Dec 2016). https://doi.org/10.1109/ICPR.2016.7899663
+27. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch (2017)
+28. Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D.Y., Bagul, A., Langlotz, C., Shpanskaya, K.S., Lungren, M.P., Ng, A.Y.: Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. CoRR abs/1711.05225 (2017), http://arxiv.org/abs/1711.05225
+
+29. Simonovsky, M., Gutiérrez-Becker, B., Mateus, D., Navab, N., Komodakis, N.: A deep metric for multimodal registration. In: International conference on medical image computing and computer-assisted intervention. pp. 10–18. Springer (2016)
+30. Sun, Y., Wang, X., Tang, X.: Deep learning face representation by joint identification-verification. CoRR abs/1406.4773 (2014), http://arxiv.org/abs/1406.4773
+31. Varior, R.R., Haloi, M., Wang, G.: Gated siamese convolutional neural network architecture for human re-identification. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) Computer Vision - ECCV 2016. pp. 791-808. Springer International Publishing, Cham (2016)
+32. Wachinger, C., Golland, P., Kremen, W., Fischl, B., Reuter, M., Initiative, A.D.N., et al.: Brainprint: A discriminative characterization of brain morphology. NeuroImage 109, 232-248 (2015)
+33. Wang, H., Xia, Y.: Chestnet: A deep neural network for classification of thoracic diseases on chest radiography. arXiv preprint arXiv:1807.03058 (2018)
+34. Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017)
+35. Wang, Y., Lu, L., Cheng, C.T., Jin, D., Harrison, A.P., Xiao, J., Liao, C.H., Miao, S.: Weakly supervised universal fracture detection in pelvic x-rays. In: Shen, D., Liu, T., Peters, T.M., Staib, L.H., Essert, C., Zhou, S., Yap, P.T., Khan, A. (eds.) Medical Image Computing and Computer Assisted Intervention - MICCAI 2019. pp. 459-467. Springer International Publishing, Cham (2019)
+36. Xu, Z., Huo, Y., Park, J., Landman, B., Milkowski, A., Grbic, S., Zhou, S.: Less is more: Simultaneous view classification and landmark detection for abdominal ultrasound images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 711-719. Springer (2018)
+37. Zagoruyko, S., Komodakis, N.: Learning to compare image patches via convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4353-4361 (2015)
+38. Zhou, B., Khosla, A., Lapedriza, À., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. CoRR abs/1512.04150 (2015), http://arxiv.org/abs/1512.04150
\ No newline at end of file
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/images.zip b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d95c6a5a1386ca975752338d6802d2b3c62be53c
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee6de41bafe7856ffbd1cfcd5cbd0520dbe87e6ce8e445ce38b4e2716fbcd3c4
+size 414625
diff --git a/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/layout.json b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2832bbe6fcf2e1131185038f91b75a3108253ece
--- /dev/null
+++ b/anatomyawaresiamesenetworkexploitingsemanticasymmetryforaccuratepelvicfracturedetectioninxrayimages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e9ec8eb0b9e235ae181b7bbb424f053dbe492e45dbaa5007a4b44562f36c46e
+size 392830
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_content_list.json b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d2d6a71a4e95e1c0035937037fc8ba84ef268bcd
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ccb3135f94c5a59b84cf2582641ddda112684e35217dd1f2c6c702c293aa910
+size 75028
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_model.json b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..255cd4d740bb1c7f3f2b4ceb821d60e9e6cd35ed
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b7b30008a8a8c7f256d5d4d3b7e72959d4cd52fbb6cbc27e124ff5355908169
+size 95404
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_origin.pdf b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a33cbeaf4e6c9cd490fc3ad91d825e3c40b572a8
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/a23adb3e-f6ea-4156-a22f-87bcd81390aa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0665bb4a17b6b15063e1c37fc73984e9f6c97594e49559b06929bdae177fcd8
+size 1888421
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/full.md b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e66b3ba386902ce41af6c3366038afa5c8e5a15
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/full.md
@@ -0,0 +1,258 @@
+# An Attention-driven Two-stage Clustering Method for Unsupervised Person Re-Identification
+
+Zilong Ji $^{1}$ , Xiaolong Zou $^{2}$ , Xiaohan Lin $^{2}$ , Xiao Liu $^{3}$ , Tiejun Huang $^{2}$ , and Si Wu $^{2,3}$
+
+1 State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China. jizilong@mail.bnu.edu.cn
+$^{2}$ School of Electronics Engineering & Computer Science, Peking University, Beijing, China.
+3 IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.
+
+{xiaolz,Lin.xiaohan,xiaoliu23,tjhuang,siwu}@pku.edu.cn
+
+Abstract. The progressive clustering method and its variants, which iteratively generate pseudo-labels for unlabeled data and perform feature learning, have shown great progress in unsupervised person re-identification (re-id). However, they have an intrinsic problem in modeling the in-camera variability of images: pedestrian features extracted from the same camera tend to be clustered into the same class. This often results in a non-convergent model in real-world applications of clustering-based re-id models, leading to degenerated performance. In the present study, we propose an attention-driven two-stage clustering (ADTC) method to solve this problem. Specifically, our method consists of two strategies. Firstly, we use an unsupervised attention kernel to shift the learned features from the image background to the pedestrian foreground, which results in more informative clusters. Secondly, to aid the learning of the attention-driven clustering model, we separate the clustering process into two stages: we first use kmeans to generate the centroids of clusters (stage 1) and then apply the k-reciprocal Jaccard distance (KRJD) metric to re-assign data points to each cluster (stage 2). By iteratively learning with the two strategies, the attentive regions gradually shift from the background to the foreground and the features become more discriminative. Using the two benchmark datasets Market1501 and DukeMTMC, we demonstrate that our model outperforms other state-of-the-art unsupervised approaches for person re-id.
+
+Keywords: Attention, Clustering, Unsupervised Learning, Person Re-id
+
+
+Fig. 1. Examples of class activation maps (CAMs) of pedestrians extracted from the same camera. From top to bottom: the original images, the CAMs without attention, and the CAMs with attention (the attention mechanism is described in Sec. 3.1). Without attention, the CAMs highlight the background more, so that images from the same camera are likely to be assigned to the same cluster. With attention, the CAMs focus more on the informative features of pedestrians.
+
+# 1 Introduction
+
+The difficulties faced by supervised learning have motivated people to develop unsupervised person re-id models, which are more applicable in real-world settings. One promising approach is the clustering-based method. The idea is to train a clustering model for the unlabeled data points and a feature learning model on the pseudo-labeled dataset in an iterative manner. However, in a real-world re-id system, pedestrian images detected by the same camera often share a similar background. This results in a clustering model that assigns pedestrian features extracted from the same camera to the same cluster. Such a model pays great attention to the image background and fails to capture the in-camera variability of images (Fig. 1). Therefore, it is necessary to shift the foci from the background to the foreground during the implementation of the clustering-based model. Under the supervised person re-id setting, this is often done by introducing an attention kernel that highlights the informative features of pedestrians (e.g., logos on clothes, backpacks) and suppresses uninformative ones (e.g., the background) [23, 38, 41, 14]. However, due to the lack of supervisory signals under the unsupervised person re-id setting, it is hard for the attention model to learn correct attentive regions. An alternative is to use an off-the-shelf pose estimation model to propose hard attentive local regions [34], but this introduces local network branches, which increases the computational complexity of the model.
+
+In the present study, to solve the aforementioned challenges, we propose an Attention-Driven, Two-stage Clustering method, referred to as ADTC hereafter (Fig. 2A), for unsupervised person re-id task. Specifically, we adopt a voxel attention kernel to highlight the features of images that are informative for pedestrian discrimination. This attention mechanism enhances the informative spatial regions for pedestrians and recalibrates the channel-wise feature information adaptively according to the inter-dependencies between channels. As a result, it
+
+
+Fig. 2. The scheme of our method ADTC. (A) Our model consists of two iterative operations, the voxel attention and the two-stage clustering. The gray shadow denotes the manifold of feature representations at the current round, and different colors represent different clusters. (B) The feature extractor of our model. GMP denotes global max-pooling and BN batch normalization. (C) The detail of the attention kernel in the red dotted box in (B).
+
+
+
+enlarges the separations between the negative and positive image pairs with respect to a query. Moreover, this voxel attention kernel has only a small number of trainable parameters, avoiding the overfitting problem during the iterative training. Furthermore, to improve the training of the attention-related parameters under the unsupervised setting, we adopt a two-stage clustering process to generate pseudo-labels for data points. We first use kmeans++ [1] to generate the centroids of clusters and then apply the k-reciprocal Jaccard distance (KRJD) metric [45] to re-assign data points to each cluster. Due to the appealing property of KRJD, data points belonging to the same class are more likely to be aggregated together, and the clustering quality of images is significantly improved, which in turn facilitates the training of the model parameters. Overall, in our model, data clustering (generating pseudo-labels) and model training (optimizing feature representations with attention) are executed iteratively (Fig. 2A), and they promote each other to achieve good performance. Using benchmark datasets, we demonstrate that the proposed model can largely correct the mistakes made by previous clustering-based models (Fig. 4) and outperforms other state-of-the-art unsupervised models for person re-id. The main contributions of this paper include:
+
+- We propose to use an unsupervised voxel attention strategy to correct the mistakes made by the clustering based re-id models.
+- We propose to use a two-stage clustering strategy to generate pseudo-labels for data points, which improves the clustering quality and stabilizes the progressive training.
+- Our model achieves state-of-the-art performance under the unsupervised setting for person re-id on a number of benchmark datasets.
+
+# 2 Related Work
+
+# 2.1 Unsupervised Person Re-ID
+
+Traditional unsupervised person re-id studies have mainly focused on feature engineering [9, 7, 42, 13], creating hand-crafted features from human prior knowledge that can be applied directly to the unsupervised learning paradigm. These methods are efficient for small datasets, but often fail on large ones, since they cannot fully exploit the data distribution to extract appropriate semantic features. Recently, the domain adaptation strategy has been widely used for unsupervised person re-id [24, 18, 25, 33, 32], which attempts to reduce the discrepancy between the source and target data domains. During training, the knowledge learned from the source domain is continuously transferred to the target domain to facilitate the learning process. For example, Lin et al. [20] developed a feature alignment method to align the source and target data in the feature space by jointly optimizing the classification and alignment losses. Deng et al. [5] proposed a SPGAN model to preserve the similarity between two domains and integrate image translation and model learning. However, these approaches rely heavily on the assumption that the two domains have similar distributions. When the discrepancy between two domains is large, there is no guarantee that these methods will work well. Another direction for unsupervised person re-id is the clustering-based method [6, 28, 40, 21, 39, 8], which generates pseudo-labels by clustering data points in the feature space and then uses these pseudo-labels to train the model as in the supervised manner. Fan et al. [6] proposed a progressive clustering method to transfer pre-learned deep representations to an unseen domain, where feature clustering and representation learning are performed iteratively in an EM-style manner. Lin et al. [21] proposed a bottom-up clustering approach to jointly optimize a convolutional neural network and the relationships between individual samples. Recently, Yang et al. [39] introduced the asymmetric co-teaching strategy into the clustering-based method. For a clustering-based unsupervised model, the clustering quality of the data is crucial. Compared to the existing clustering-based models, our method has two differences: 1) we use an attention mechanism to drive the clustering process, and 2) we cluster data points in two stages using a more appropriate distance metric. It turns out that our method improves the clustering quality significantly, which further boosts the model performance (see the details in Sec. 3.1 and Sec. 3.2).
+
+# 2.2 Attention in Person Re-ID
+
+The attention in a person re-id model aims to highlight the informative features of images to avoid the mis-alignments due to pose variance, occlusion, or body parts missing in a bounding box [36, 27, 3, 49, 4]. The attention mechanisms proposed in the literature fall into two main categories: hard attention and soft attention. The former typically uses a pose estimation model to locate coarse regions and then exploits these local features for discrimination [34, 43, 30, 15]. However, these hard region-level attentions rely heavily on pose estimation, which is often inaccurate, and do not consider the pixel-level information within the selected regions that is potentially important for the identification task. A soft-attention mechanism typically inserts trainable layers into the main body of the model to mask the convolutional feature maps, so that the informative regions are highlighted [16, 38, 31, 2]. Two soft-attention mechanisms are widely used: spatial attention and channel attention. The former enables the model to attend to valuable features at different spatial locations, and the latter improves the representational power by performing channel-wise recalibration. There are also works combining the two. For example, Li et al. [17] proposed a Harmonious Attention Convolutional Neural Network (HA-CNN), which combines pixel-level spatial information and scale-level channel information to jointly learn attentive regions and feature representations. Notably, so far the attention mechanism has only been used under the supervised setting; here we apply it under the unsupervised setting, which is much harder to optimize.
+
+# 3 Our Approach
+
+# 3.1 Voxel attention (VA)
+
+We first introduce the voxel attention strategy. Given an input image $\pmb{x}$ in the unlabeled dataset $X$ , denote the output of the backbone model as the corresponding feature map $\pmb{f}^{w\times h\times c}$ , where $w$ , $h$ , and $c$ are the width, the height, and the number of channels, respectively. The attention feature map $\pmb{a}^{w\times h\times c}$ is defined as (for clarity, we omit the superscript hereafter)
+
+$$
+\boldsymbol {a} = \boldsymbol {v} \odot \boldsymbol {f}, \tag {1}
+$$
+
+where $\pmb{v}$ is the voxel attention kernel having the same size as $\pmb{f}$ , and $\odot$ denotes the element-wise product. $\pmb{v}$ is composed of two complementary parts: the spatial and the channel attentions (Fig. 2C). For the spatial attention part, we first calculate the mean intensity of activation at each spatial location along the channel dimension, $I(i,j) = \sum_{l=1}^{c} f(i,j,l)/c$ ; afterwards we apply a softmax to obtain the probability $\pmb{S}(i,j) = e^{I(i,j)} / \left[ \sum_{i,j} e^{I(i,j)} \right]$ . Here, the divisive normalization makes the spatial filters competitive (acting like global inhibition) to highlight the most active (informative) ones. Note that no trainable parameter is introduced in the spatial attention branch. For the channel attention branch, we adopt the idea of [11] and apply a squeeze-and-excitation block to improve the quality of representations. Firstly, we perform global average pooling on $\pmb{f}$ to squeeze the global spatial information into a channel descriptor $\pmb{C}_{in} \in \mathbb{R}^{c}$ , with each element $c_{in}^{l} = \sum_{i=1,j=1}^{h,w} f(i,j,l)/(h\times w)$ aggregating the feature information distributed across the spatial space in channel $l$ . Secondly, to capture the inter-dependencies between different channels in $\pmb{f}$ , we employ a gating function on $\pmb{C}_{in}$ by forming a bottleneck with two fully connected layers, i.e.,
+
+$$
+\boldsymbol {C} = \sigma \left[ W _ {2} R e L U \left(W _ {1} \boldsymbol {C} _ {i n}\right) \right], \tag {2}
+$$
+
+where $\sigma$ represents the sigmoid function, $W_{1} \in \mathbb{R}^{d \times c}$ , $W_{2} \in \mathbb{R}^{c \times d}$ , $\pmb{C} \in \mathbb{R}^{c}$ , with $d \ll c$ . The total number of parameters in the channel attention part is only $2cd$ , which is computationally efficient. Eventually, the voxel attention kernel $\pmb{v}$ can be written as the tensor (outer) product of $\pmb{S}$ and $\pmb{C}$ ,
+
+$$
+\boldsymbol {v} = \boldsymbol {S} \times \boldsymbol {C}, \tag {3}
+$$
+
+i.e., each voxel of $\pmb{v}$ at location $(i,j,l)$ is calculated as $S(i,j)\times C(l)$ (see Fig. 2C).
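The spatial softmax, squeeze-and-excitation gating, and element-wise product of Eqs. (1)-(3) can be sketched in NumPy as follows; the shapes, the max-shift inside the softmax (a numerically equivalent stabilization), and all variable names are our own assumptions rather than the authors' implementation:

```python
import numpy as np

def voxel_attention(f, W1, W2):
    """Sketch of the voxel attention kernel (Eqs. 1-3).
    f : feature map of shape (h, w, c).
    W1: (d, c), W2: (c, d) -- bottleneck weights of the channel gate."""
    h, w, c = f.shape
    # spatial branch: channel-mean intensity, then a softmax over locations
    I = f.mean(axis=2)                         # (h, w)
    S = np.exp(I - I.max())                    # shift for numerical stability
    S = S / S.sum()                            # sums to 1 over all (i, j)
    # channel branch: squeeze (global average pool) and excite (gating)
    C_in = f.mean(axis=(0, 1))                 # (c,)
    hid = np.maximum(W1 @ C_in, 0.0)           # ReLU
    C = 1.0 / (1.0 + np.exp(-(W2 @ hid)))      # sigmoid, shape (c,)
    # voxel kernel v(i,j,l) = S(i,j) * C(l), applied element-wise to f
    v = S[:, :, None] * C[None, None, :]
    return v * f                               # attention feature map a

rng = np.random.default_rng(0)
h, w, c, d = 4, 3, 8, 2
f = rng.standard_normal((h, w, c))
a = voxel_attention(f, rng.standard_normal((d, c)), rng.standard_normal((c, d)))
print(a.shape)  # (4, 3, 8)
```

Since every softmax weight is below one and the sigmoid gate lies in (0, 1), each attended response is attenuated relative to the raw feature, with the most informative locations and channels attenuated least.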
+
+The above voxel attention kernel can be regarded as a self-attention function, which not only enhances the quality of spatial encoding by attending to active spatial locations in the feature map $\pmb{f}$ , but also recalibrates the channel-wise feature responses adaptively by capturing the inter-dependencies between channels. Compared to the harmonious attention (HA) [17], the voxel attention has a few differences: 1) it has a much simpler form with far fewer trainable parameters; 2) it is applied only after the backbone model, while HA is inserted between several building blocks; 3) it includes a normalization operation in the spatial attention to highlight the informative spatial locations. It turns out that these differences contribute significantly to improving the model performance (see Sec. 4.3, 4.7).
+
+# 3.2 Two-stage clustering (TC)
+
+We now introduce the two-stage clustering strategy. The choice of the distance metric is crucial for clustering. Although an off-the-shelf clustering algorithm operating in the feature space, rather than in the raw pixel space, can alleviate the problem of "curse of dimensionality" [29] to some extent, it may still lead to an unsatisfactory clustering quality. Here we adopt a two-stage procedure to improve the clustering performance. Firstly, we use the conventional kmeans++ to get the centroids of clusters, denoted as $\{c_m\}_{m=1}^M$ , with $M$ the predefined number of clusters. Secondly, we re-assign data points to each cluster according to their k-reciprocal Jaccard distances (KRJDs) [45] to the cluster centroids. The k-reciprocal nearest neighbours of a feature point are defined as,
+
+$$
+R (\boldsymbol {g}, k) = \left\{\boldsymbol {g} _ {j} \mid \left(\boldsymbol {g} _ {j} \in N (\boldsymbol {g}, k)\right) \cap \left(\boldsymbol {g} \in N \left(\boldsymbol {g} _ {j}, k\right)\right) \right\}, \tag {4}
+$$
+
+where $\pmb{g}$ is a feature point for clustering, which is obtained by performing max-pooling and 1-D batch normalization on the re-weighted attention feature map $\pmb{a}$ . $N(\pmb{g},k)$ denotes the $k$ nearest neighbours of $\pmb{g}$ . $R(\pmb{g},k)$ indicates that $\pmb{g}$ and each element in its neighbourhood are the mutually $k$ nearest neighbours of each other. The KRJD distance between two feature points is then defined as
+
+$$
+\boldsymbol {J} (\boldsymbol {g} _ {i}, \boldsymbol {g} _ {j}) = 1 - \frac {\left| R (\boldsymbol {g} _ {i} , k) \cap R (\boldsymbol {g} _ {j} , k) \right|}{\left| R (\boldsymbol {g} _ {i} , k) \cup R (\boldsymbol {g} _ {j} , k) \right|}. \tag {5}
+$$
+
+Compared to the Euclidean distance, KRJD takes into account the reciprocal relationship between data points, and is a stricter rule for measuring whether two feature points match or not (see Fig. 5 and more examples in SI.6). KRJD can also be seen as a refinement of the k-nearest neighbours in the Euclidean space that is more accurate for sorting feature points. We then obtain a refined cluster $\mathcal{C}_m^p$ by selecting the top $p$ closest feature points to $c_m$ under the KRJD metric. Some of the refined clusters may share data points due to noise or variance in the input images, especially when feature points are intertwined with each other during the first few rounds of training. To alleviate this problem, we remove data points having ambiguous pseudo-labels and obtain the final pseudo-labeled training set $\{(x_j,y_j)\}_{j=1}^{N_r}$ , $y_j \in \{1,2,\dots,M\}$ , where $N_r$ is the number of remaining data points.
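Eqs. (4)-(5) can be sketched as follows; this toy NumPy version (function names and data are hypothetical) builds the k-reciprocal sets from a pairwise distance matrix and evaluates the Jaccard distance between two points:

```python
import numpy as np

def k_reciprocal_sets(D, k):
    """R(g_i, k): indices j such that i and j are mutually among each
    other's k nearest neighbours under the distance matrix D (n x n)."""
    n = D.shape[0]
    knn = [set(np.argsort(D[i])[:k]) for i in range(n)]   # N(g_i, k), self included
    return [{j for j in knn[i] if i in knn[j]} for i in range(n)]

def krjd(R, i, j):
    """Jaccard distance between the k-reciprocal sets of points i and j (Eq. 5)."""
    inter = len(R[i] & R[j])
    union = len(R[i] | R[j])
    return 1.0 - inter / union if union else 1.0

# toy example: two tight groups on a line
g = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
D = np.abs(g - g.T)               # pairwise |g_i - g_j|
R = k_reciprocal_sets(D, k=3)
print(krjd(R, 0, 1), krjd(R, 0, 3))  # 0.0 1.0
```

Points 0 and 1 share exactly the same reciprocal neighbourhood (distance 0), while points 0 and 3 share none (distance 1), illustrating why KRJD separates intertwined clusters more strictly than raw Euclidean distance.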
+
+# 3.3 Progressive Training
+
+In our model, the voxel attention (in combination with model training and feature extraction) and the two-stage clustering (generating pseudo-labels) are performed iteratively. At each training round $t$ , we optimize the model parameters using the pseudo-labeled training set. When choosing the loss function, we note that the cluster assignments of two adjacent training rounds can be completely different, even if the same set of training samples is used. We therefore adopt a metric learning loss rather than the softmax loss, as the latter would lead to the failure of model learning. In other words, we only impose that the difference of (dis-)similarities between the positive and negative pairs with respect to a query is larger than a predefined margin, so that the absolute values of the assignments are irrelevant. Specifically, we adopt the triplet loss with in-batch hard example mining [10] to optimize the model parameters, which is written as
+
+$$
+\begin{array}{l} L_{tri}^{m}\left(\boldsymbol{g}, \boldsymbol{g}^{+}, \boldsymbol{g}^{-}; \boldsymbol{\theta}\right) = \max \left(0, \left\| \boldsymbol{g} - \boldsymbol{g}^{+} \right\|_{2}^{2} - \left\| \boldsymbol{g} - \boldsymbol{g}^{-} \right\|_{2}^{2} + m\right), \\ \text{where} \quad \boldsymbol{g}^{+} = \underset{\{\boldsymbol{g}^{p}\}}{\arg\max} \left\| \boldsymbol{g} - \boldsymbol{g}^{p} \right\|_{2}^{2}, \quad \text{and} \quad \boldsymbol{g}^{-} = \underset{\{\boldsymbol{g}^{n}\}}{\arg\min} \left\| \boldsymbol{g} - \boldsymbol{g}^{n} \right\|_{2}^{2}. \tag{6} \end{array}
+$$
+
+Here $\{\pmb{g}^p\}$ and $\{\pmb{g}^n\}$ denote the positive and negative sets with respect to $\pmb{g}$ in the mini-batch, respectively, $m$ is the margin between feature pairs, and $\pmb{\theta}$ denotes the model parameters. In order to avoid overfitting on the current pseudo-labeled set, we train $\mathcal{M}^t$ in each round for only a few gradient update steps to get $\mathcal{M}^{t + 1}$. $\mathcal{M}^T$ denotes the final model when the stopping criterion is reached. The two steps of attention-driven clustering and feature learning are performed iteratively, and they facilitate each other to achieve the final well-performing model. The details of our ADTC method are summarized in Algorithm 1.
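A minimal NumPy sketch of the batch-hard triplet loss in Eq. (6) (an illustrative re-implementation, not the authors' code): for each anchor, the farthest positive and the closest negative in the mini-batch are mined.

```python
import numpy as np

def batch_hard_triplet_loss(g, labels, margin=0.3):
    """Triplet loss with in-batch hard example mining (sketch of Eq. 6)."""
    # squared L2 distances between all pairs in the mini-batch
    d = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(g)):
        pos = d[i][same[i] & (np.arange(len(g)) != i)]  # positives (not self)
        neg = d[i][~same[i]]                             # negatives
        if pos.size and neg.size:
            # hardest positive (farthest), hardest negative (closest)
            losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))
```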
+
+# 4 Experiments
+
+# 4.1 Datasets
+
+Market-1501 is a dataset containing 32668 images with 1501 identities captured from 6 cameras [44]. The dataset is split into three parts: 12936 images with 751
+
+Algorithm 1 Attention-driven Two-stage Clustering (ADTC) method for unsupervised person re-id
+
+Input: The unlabeled dataset $X$ , the model $\mathcal{M}^0$ .
+
+Output: Final model $\mathcal{M}^T$ .
+
+1: $t = 0$
+2: repeat
+3: Attention Step:
+4: Extracting feature point $\pmb{f_i}$ of each data point $x_{i} \in X$ before the global max-pooling layer.
+5: Applying the voxel attention kernel $\pmb{v}_i$ on $\pmb{f}_i$ to get the attention feature point $\pmb{a}_i$ .
+6: Applying global max-pooling and 1-D batch normalization on $\pmb{a}_i$ to get the final feature point $\pmb{g}_i$ .
+7: Clustering Step:
+8: Performing k-means++ clustering on $\{g_i\}_{i = 1}^N$ and obtaining centroids $\{c_m\}_{m = 1}^M$.
+9: For each centroid $c_{m}$ , computing its $p$ -nearest neighbours $\mathcal{C}_m^p$ based on the KRJD metric, and assigning the pseudo-label $m$ to all data points in $\mathcal{C}_m^p$ .
+10: Removing ambiguous data points belonging to more than one cluster and obtaining the pseudo-labelled training set $\{(x_j, y_j)\}_{j=1}^{N_r}$.
+11: Parameter Updating Step:
+12: Training $\mathcal{M}^t$ with the triplet loss on $\{(x_j, y_j)\}_{j=1}^{N_r}$ to get $\mathcal{M}^{t+1}$ .
+13: $t = t + 1$
+14: until $t = T$
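The outer loop of Algorithm 1 can be sketched as a skeleton with each step stubbed out as a pluggable callable (all names here are hypothetical placeholders, not the paper's actual modules):

```python
def adtc_loop(X, extract, attend, cluster, train_step, T=20):
    """Skeleton of Algorithm 1: iterate attention, clustering, and
    parameter-updating steps for T rounds. `state` stands in for M^t."""
    state = None
    for t in range(T):
        f = extract(X, state)               # attention step: backbone features
        g = attend(f, state)                # voxel attention + pooling + BN
        pseudo = cluster(g)                 # two-stage clustering -> (x_j, y_j)
        state = train_step(state, pseudo)   # few triplet-loss gradient steps
    return state                            # final model M^T
```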
+
+identities forming the training data, 19732 images with 750 identities forming the testing gallery, and another 3368 images from the testing gallery forming the query data.
+
+DukeMTMC contains 36411 images with 1812 identities captured from 8 cameras [26]. The dataset is split into three parts: 16522 images with 702 identities forming the training data, 17661 images with 1110 identities forming the testing gallery, and another 2228 images with 702 identities from the testing gallery forming the query data. Note that the evaluation protocols on the two datasets are the same.
+
+# 4.2 Implementation Details
+
+We use a ResNet-50 pretrained on ImageNet as the backbone model. Following [37], we add a batch normalization layer after the global pooling layer to prevent overfitting, and directly use the batch-normalized global pooling features for identity classification (for the performance of the model architecture on supervised datasets, see SI.4). The number of output channels in the voxel attention kernel is set to 800. During clustering, we set the number of clusters $M$ to 1000 (for the effect of $M$, see SI.2) and the neighbour size $p$ to 20. All input images are resized to $256 \times 128$. Apart from random horizontal flipping, no other data augmentation strategy is used. 32 pseudo-classes and 4 examples per class are randomly sampled to form a mini-batch. The margin $m$ between negative and positive pairs is 0.3. The total number of training rounds is set to 20. To prevent overfitting, the model is fine-tuned for only 10 epochs in each round. The Adam optimizer is used with an initial learning rate of 0.0001 that decays exponentially after epoch 5 (for more detailed hyper-parameter settings, see SI.1).
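The learning-rate schedule above can be sketched as follows; the decay factor `gamma` is an assumed value, since the paper defers the exact setting to SI.1:

```python
def lr_at_epoch(epoch, base_lr=1e-4, decay_start=5, gamma=0.9):
    """Constant learning rate up to `decay_start`, then exponential decay.
    `gamma` is an assumed decay factor (not specified in the main text)."""
    return base_lr * (gamma ** max(0, epoch - decay_start))
```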
+
+# 4.3 Model Performances on Benchmark Datasets
+
+We compare our model with other state-of-the-art unsupervised person re-id methods on two benchmark datasets, Market1501 and DukeMTMC. These methods include: 1) two hand-crafted feature methods, LOMO [19] and BoW [44]; 2) four feature alignment methods, MMFA [20], TJ-AIDL [32], ARN [18], and EANet [12]; 3) four GAN-based domain adaptation methods, IPGAN [22], eSPGAN+LMP [5], CamStyle [47], and HHL [46]; 4) two clustering-based methods, PUL [6] and DAR [28]. Note that when training on Market1501, we first initialize our model on DukeMTMC, and vice versa (domain adaptation).
+
+| source to target | DukeMTMC to Market1501 | | | | Market1501 to DukeMTMC | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | mAP | rank1 | rank5 | rank10 | mAP | rank1 | rank5 | rank10 |
+| Directly transfer | 18.8 | 44.0 | 62.1 | 69.4 | 18.2 | 34.0 | 49.1 | 55.9 |
+| LOMO [19] | 8.0 | 27.2 | - | - | 4.8 | 12.3 | - | - |
+| BoW [44] | 14.8 | 35.8 | - | - | 8.3 | 17.2 | - | - |
+| MMFA [20] | 24.7 | 45.3 | 59.8 | 66.3 | 27.4 | 56.7 | 75.0 | 81.8 |
+| TJ-AIDL [32] | 26.5 | 58.2 | 74.8 | 81.1 | 23.0 | 44.3 | 59.6 | 65.0 |
+| ARN [18] | 39.4 | 70.3 | 80.4 | 86.3 | 33.4 | 60.2 | 73.9 | 79.5 |
+| EANet [12] | 51.6 | 78.0 | - | - | 48.0 | 67.7 | - | - |
+| IPGAN [22] | 25.6 | 56.4 | 76.0 | 82.5 | 26.7 | 46.8 | 62.0 | 67.9 |
+| eSPGAN+LMP [5] | 30.4 | 52.6 | 66.3 | 71.7 | 31.7 | 63.6 | 80.1 | 86.1 |
+| CamStyle [47] | 27.4 | 58.8 | 78.2 | 84.3 | 25.1 | 48.4 | 62.5 | 68.9 |
+| HHL [46] | 31.4 | 62.2 | 78.8 | 84.0 | 27.2 | 46.9 | 61.0 | 66.7 |
+| PUL [6] | 20.1 | 44.7 | 59.1 | 65.6 | 16.4 | 30.4 | 44.5 | 50.7 |
+| DAR [28] | 53.7 | 75.8 | 89.5 | 93.2 | 49.0 | 68.4 | 80.1 | 83.5 |
+| SSG [8] | 58.3 | 80.0 | 90.0 | 92.4 | 53.4 | 73.0 | 80.6 | 83.2 |
+| ADTC w/o DA | 38.8 | 59.5 | 71.6 | 76.9 | 37.9 | 59.4 | 70.0 | 74.1 |
+| ADTC (Ours) | 59.7 | 79.3 | 90.8 | 94.1 | 52.5 | 71.9 | 84.1 | 87.5 |
+
+Table 1. Comparison of different unsupervised learning methods. DukeMTMC to Market1501 means the model is initialized on DukeMTMC and trained on Market1501; Market1501 to DukeMTMC means the model is initialized on Market1501 and trained on DukeMTMC. ADTC w/o DA means we train our model directly on the unlabeled dataset without initialization on the source domain dataset. Note that LOMO, BoW, and PUL also do not use the source domain data to initialize their models.
+
+The results are summarized in Table 1. We observe the following. 1) Our model achieves $59.7\% / 79.3\%$ mAP/rank1 accuracy on Market1501 and $52.5\% / 71.9\%$ on DukeMTMC, making it one of the state-of-the-art (SOTA) models. Note that we only initialized the model on the labeled source domain and then trained it without any auxiliary label information in the unlabeled domain, whereas most of the aforementioned methods keep using the auxiliary label information from the source domain during domain transfer learning. 2) Compared to the feature alignment methods, which implicitly assume that the data distributions of the source and target domains are similar, our model learns directly from the unlabeled target dataset and achieves better performance. 3) Compared to the GAN-based models, which aim at translating the style of labeled images from the source domain to the target domain, our model achieves better performance even without the voxel attention or two-stage clustering (see Table 2). 4) Although the clustering-based SSG model achieves slightly better mAP/rank1 performance on DukeMTMC than ours, it uses multiple learning branches and the DBSCAN clustering method, whereas our model consists of a single learning branch and adopts the simple k-means clustering method. Notably, the main concern of this paper is to enhance the in-camera variability so as to improve the accuracy of unsupervised person re-id, rather than to introduce other strategies to boost performance. Overall, our model achieves state-of-the-art performance on the two benchmark datasets. Below, we inspect how different elements of the model contribute to its superior performance.
+
+# 4.4 Contribution of the Voxel Attention
+
+Fig. 3A&B present the class activation maps (CAMs) [48] of a few example images, which display the spatial regions to which the model pays attention. We see that without the voxel attention, the model pays more attention to the background than to the foreground, resulting in wrong cluster assignments. Indeed, such degenerate behavior often occurs in clustering-based methods without attention, since pedestrian images captured by the same camera, especially those from the same location, tend to have less variability than those from different cameras (also see Fig. 1). Consequently, the model assigns clusters based on the overall image appearance rather than the details of the pedestrians, and thus fails to capture the in-camera variability of images crucial for the re-id task. Fig. 3A&B also show that the voxel attention helps to increase the distance of the negative pair $(g, g^{-})$ and decrease the distance of the positive pair $(g, g^{+})$ in a triplet. We calculate the margin difference $\delta = \| g - g^{-} \|_2^2 - \| g - g^{+} \|_2^2$ of 10000 triplets randomly sampled from DukeMTMC, and find that by applying the voxel attention, $\delta$ increases significantly across the whole dataset (Fig. 3C). This implies that images belonging to the same identity aggregate more compactly in the feature space, which makes the retrieval task easier than without the voxel attention (see SI.5).
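Computing the margin difference $\delta$ over a batch of sampled triplets is straightforward in NumPy (an illustrative sketch; array shapes are our assumption):

```python
import numpy as np

def margin_difference(g, g_pos, g_neg):
    """delta = ||g - g^-||^2 - ||g - g^+||^2 for a batch of triplets,
    each argument of shape (batch, dim). Larger delta means positives
    sit closer to the anchor than negatives do."""
    d_neg = ((g - g_neg) ** 2).sum(-1)
    d_pos = ((g - g_pos) ** 2).sum(-1)
    return d_neg - d_pos
```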
+
+To further unveil the role of the voxel attention, we divide the wrongly retrieved rank1 images for a query into in-camera errors (ICE), i.e., those from the same camera as the query, and cross-camera errors (CCE), i.e., those from different cameras than the query. Fig. 4 compares the results of our model with those of the progressive clustering method without attention. It shows that
+
+
+Fig. 3. The voxel attention highlights the informative parts of images and makes them more discriminable. (A-B) Two examples from DukeMTMC with/without the voxel attention. From top to bottom: the raw images, the CAMs without the voxel attention, and the CAMs with the voxel attention. The value in black stands for the Euclidean distance between two feature maps, and the value in red for the margin difference defined in Sec. 4.4. (C) The statistics of the margin difference $\delta$ over 10000 triplets randomly sampled from DukeMTMC.
+
+
+Fig. 4. The voxel attention enhances the in-camera discrimination. From left to right are the results of the initialized model without training (baseline), the model with progressive clustering but no attention, and the model with progressive clustering and attention. The total number of query images is 2228. Blue, red, and orange: the number of query images having the correct rank1, the number of in-camera error (ICE), and the number of cross-camera errors (CCE). DukeMTMC is used.
+
+without attention, the progressive clustering method improves the rank1 accuracy from $34.0\%$ to $60.3\%$ over the baseline (i.e., the result of the model initialized with the labeled source data); our model improves the rank1 accuracy further to $71.9\%$. Notably, this further improvement is mainly attributed to the decrease of ICE, from 588 to 340 out of 2228 queries. This supports our idea that the voxel attention helps to capture the in-camera variability of images, whereas the progressive clustering method without attention lacks this capability and hence makes more in-camera identification mistakes.
+
+# 4.5 Contribution of Two-Stage Clustering
+
+
+Fig. 5. Example clusters with the top 10 nearest neighbours after training with/without two-stage clustering. Market1501 is used. Upper: ranking by the Euclidean distance to the cluster centroid. Lower: ranking by KRJD to the cluster centroid with two-stage clustering. Blue: correctly assigned images; red: wrongly assigned images.
+
+We continue to inspect the contribution of two-stage clustering. Fig. 5 shows that when two-stage clustering is used during training, more positive (correct) examples appear in the neighbourhood of a given cluster centroid, compared to using only the Euclidean-distance-based k-means++ algorithm. This indicates that KRJD indeed serves as a better metric for computing the neighbourhood relationship between feature points, which improves the clustering quality and, in turn, the model performance (see SI.6 for more examples).
+
+# 4.6 Contribution of Progressive Training
+
+We further inspect how the voxel attention and two-stage clustering are executed iteratively to generate good feature representations. To measure the clustering quality, we adopt the normalized mutual information (NMI), which is given by
+
+$$
+\mathrm{NMI}(\mathcal{C}, \mathcal{L}) = \frac{I(\mathcal{C}, \mathcal{L})}{\sqrt{H(\mathcal{C}) H(\mathcal{L})}}, \tag{7}
+$$
+
+where $\mathcal{C} = \{\mathcal{C}_1^p,\mathcal{C}_2^p,\dots,\mathcal{C}_M^p\}$ denote $M$ clusters, $\mathcal{L}$ the corresponding ground truth label set, and $I$ the mutual information between $\mathcal{C}$ and $\mathcal{L}$ . $H(\mathcal{C})$ and $H(\mathcal{L})$ are the entropies of $\mathcal{C}$ and $\mathcal{L}$ , respectively. The value of NMI is between 0 and 1,
+
+
+Fig. 6. (A) The clustering performance (NMI) vs. the training round. (B) The rank1 accuracy vs. the training round.
+
+
+
+with 1 standing for a perfect labeling of the data points. The larger the NMI, the closer the pseudo-labels are to the ground truth. Fig. 6A shows how the clustering performance increases with the training round. Initially, the cluster assignment is unsatisfactory (NMI ≈ 0.77), as data points are intertwined with each other. As training proceeds, data points belonging to the same class are gradually grouped together, and the assigned pseudo-clusters become more similar to the ground truth (NMI ≈ 0.90). Fig. 6B further shows that the rank1 accuracy of the model increases at the same pace as the clustering performance. This suggests that in our model, data clustering and model training promote each other during progressive training, in the sense that the improved assignments from two-stage clustering select more reliable samples to facilitate the learning of the voxel attention, which in return highlights more informative features to further improve the cluster assignments.
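Eq. (7) can be computed from scratch as follows (an illustrative NumPy implementation of the geometric-mean NMI; libraries such as scikit-learn provide equivalent functions):

```python
import numpy as np

def nmi(pred, truth):
    """Normalized mutual information (Eq. 7): I(C, L) / sqrt(H(C) H(L)).
    `pred` and `truth` are integer label sequences of equal length."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    n = len(pred)

    def entropy(x):
        p = np.bincount(x) / n
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    # joint distribution via a contingency table
    joint = np.zeros((pred.max() + 1, truth.max() + 1))
    for a, b in zip(pred, truth):
        joint[a, b] += 1
    joint /= n
    pi = joint.sum(1, keepdims=True)   # marginal over predicted clusters
    pj = joint.sum(0, keepdims=True)   # marginal over true labels
    nz = joint > 0
    mutual_info = (joint[nz] * np.log(joint[nz] / (pi @ pj)[nz])).sum()
    return mutual_info / max(np.sqrt(entropy(pred) * entropy(truth)), 1e-12)
```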
+
+# 4.7 Component Analysis of ADTC
+
+| Source to Target | DukeMTMC to Market1501 | | | | Market1501 to DukeMTMC | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | mAP | rank1 | rank5 | rank10 | mAP | rank1 | rank5 | rank10 |
+| Only TC | 41.1 | 66.2 | 84.2 | 88.9 | 28.2 | 49.8 | 68.2 | 74.2 |
+| Only VA | 35.5 | 61.7 | 74.3 | 79.1 | 32.6 | 52.0 | 65.3 | 69.4 |
+| TC + channel attention | 42.8 | 68.9 | 87.1 | 91.2 | 30.1 | 52.2 | 71.5 | 78.9 |
+| TC + spatial attention | 41.3 | 66.6 | 85.0 | 89.2 | 28.8 | 50.7 | 69.1 | 75.2 |
+| TC + HA | 50.6 | 76.2 | 88.1 | 92.0 | 48.9 | 69.2 | 81.5 | 85.1 |
+| TC + CBAM | 55.2 | 77.3 | 88.8 | 93.5 | 49.1 | 69.8 | 82.0 | 85.9 |
+| Full model | 59.7 | 79.3 | 90.8 | 94.1 | 52.5 | 71.9 | 84.1 | 87.5 |
+
+Table 2. Component analysis of the performance of our model. Except for the ablated component, all other hyper-parameters are fixed.
+
+We carry out a component analysis of our method. Table 2 shows that both the voxel attention and two-stage clustering are indispensable to our model: when either of them is ablated, the model performance degrades. Moreover, within the voxel attention, both the channel attention and the spatial attention are indispensable, as ablating either one likewise degrades performance. We also replace the proposed voxel attention module with the Harmonious Attention (HA) kernel [17] and the CBAM attention kernel [35] (Table 2). The results show that the proposed attention kernel is superior and leads to better performance under the unsupervised setting. Besides, we also carry out a robustness analysis of our model with respect to hyper-parameters, e.g., the number of clusters, the margin $m$, and the number of updating epochs in each training round (see SI.2), as well as the balance level of the original dataset (SI.3). All these results indicate that our model is potentially feasible in real-world applications.
+
+# 5 Conclusion
+
+In this study, we have proposed an Attention-Driven Two-stage Clustering (ADTC) method for learning an unsupervised model for person re-id. It captures the in-camera variability of images and reduces noisy labels during clustering, an issue largely ignored by current unsupervised re-id methods. The method has two indispensable components. First, we use the voxel attention strategy to highlight the informative parts of pedestrian images, which captures the in-camera variability crucial for the re-id task. Second, we adopt a two-stage clustering strategy, which uses the KRJD metric to improve the clustering quality and stabilize the progressive training. Through progressive training, the two strategies facilitate each other and enable our model to outperform other unsupervised approaches for person re-id, achieving state-of-the-art performance on two benchmark datasets. We also empirically show that our model is robust to a number of varying conditions, making it potentially feasible in real-world applications.
+
+# Acknowledgments
+
+ZLJ designed the study and carried out the experiments. XLZ, XHL and XL helped with integrating algorithms and conducting experiments. TJH and SW contributed to the conception and design of the study and its revision. ZLJ and SW wrote the manuscript. This work was supported by Huawei Technologies Co., Ltd. (YBN2019105137) and Guangdong Province (grant No. 2018B030338001, SW). This work was also supported by BMSTC (Beijing Municipal Science and Technology Commission) (grant No. Z161100000216143, SW) and the National Natural Science Foundation of China (No. 61425025, T.J. Huang).
+
+# References
+
+1. Arthur, D., Vassilvitskii, S.: k-means++: The advantages of careful seeding. In: Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms. pp. 1027-1035. Society for Industrial and Applied Mathematics (2007)
+2. Chen, B., Deng, W., Hu, J.: Mixed high-order attention network for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+3. Chen, G., Lin, C., Ren, L., Lu, J., Zhou, J.: Self-critical attention learning for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9637-9646 (2019)
+4. Dai, Z., Chen, M., Gu, X., Zhu, S., Tan, P.: Batch dropblock network for person re-identification and beyond. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3691–3701 (2019)
+5. Deng, W., Zheng, L., Ye, Q., Kang, G., Yang, Y., Jiao, J.: Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 994–1003 (2018)
+6. Fan, H., Zheng, L., Yan, C., Yang, Y.: Unsupervised person re-identification: Clustering and fine-tuning. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 14(4), 83 (2018)
+7. Farenzena, M., Bazzani, L., Perina, A., Murino, V., Cristani, M.: Person re-identification by symmetry-driven accumulation of local features. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 2360-2367. IEEE (2010)
+8. Fu, Y., Wei, Y., Wang, G., Zhou, Y., Shi, H., Huang, T.S.: Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6112-6121 (2019)
+9. Gray, D., Tao, H.: Viewpoint invariant pedestrian recognition with an ensemble of localized features. In: European conference on computer vision. pp. 262-275. Springer (2008)
+10. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737 (2017)
+11. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7132-7141 (2018)
+12. Huang, H., Yang, W., Chen, X., Zhao, X., Huang, K., Lin, J., Huang, G., Du, D.: Eanet: Enhancing alignment for cross-domain person re-identification. arXiv preprint arXiv:1812.11369 (2018)
+13. Kodirov, E., Xiang, T., Gong, S.: Dictionary learning with iterative laplacian regularisation for unsupervised person re-identification. In: BMVC. vol. 3, p. 8 (2015)
+14. Lan, X., Wang, H., Gong, S., Zhu, X.: Deep reinforcement learning attention selection for person re-identification. In: BMVC (2017)
+15. Li, D., Chen, X., Zhang, Z., Huang, K.: Learning deep context-aware features over body and latent parts for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 384-393 (2017)
+16. Li, S., Bak, S., Carr, P., Wang, X.: Diversity regularized spatiotemporal attention for video-based person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 369-378 (2018)
+
+17. Li, W., Zhu, X., Gong, S.: Harmonious attention network for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2285-2294 (2018)
+18. Li, Y.J., Yang, F.E., Liu, Y.C., Yeh, Y.Y., Du, X., Frank Wang, Y.C.: Adaptation and re-identification network: An unsupervised deep transfer learning approach to person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 172-178 (2018)
+19. Liao, S., Hu, Y., Zhu, X., Li, S.Z.: Person re-identification by local maximal occurrence representation and metric learning. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2015)
+20. Lin, S., Li, H., Li, C.T., Kot, A.C.: Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification. arXiv preprint arXiv:1807.01440 (2018)
+21. Lin, Y., Dong, X., Zheng, L., Yan, Y., Yang, Y.: A bottom-up clustering approach to unsupervised person re-identification. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 8738-8745 (2019)
+22. Liu, J.: Identity preserving generative adversarial network for cross-domain person re-identification. arXiv preprint arXiv:1811.11510 (2018)
+23. Liu, X., Zhao, H., Tian, M., Sheng, L., Shao, J., Yi, S., Yan, J., Wang, X.: Hydraplus-net: Attentive deep features for pedestrian analysis. In: Proceedings of the IEEE international conference on computer vision. pp. 350-359 (2017)
+24. Peng, P., Xiang, T., Wang, Y., Pontil, M., Gong, S., Huang, T., Tian, Y.: Unsupervised cross-dataset transfer learning for person re-identification. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)
+25. Peng, P., Xiang, T., Wang, Y., Pontil, M., Gong, S., Huang, T., Tian, Y.: Unsupervised cross-dataset transfer learning for person re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1306-1315 (2016)
+26. Ristani, E., Solera, F., Zou, R., Cucchiara, R., Tomasi, C.: Performance measures and a data set for multi-target, multi-camera tracking. In: European Conference on Computer Vision. pp. 17-35. Springer (2016)
+27. Song, C., Huang, Y., Ouyang, W., Wang, L.: Mask-guided contrastive attention model for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1179-1188 (2018)
+28. Song, L., Wang, C., Zhang, L., Du, B., Zhang, Q., Huang, C., Wang, X.: Unsupervised domain adaptive re-identification: Theory and practice. arXiv preprint arXiv:1807.11334 (2018)
+29. Steinbach, M., Ertöz, L., Kumar, V.: The challenges of clustering high dimensional data. In: New directions in statistical physics, pp. 273-309. Springer (2004)
+30. Su, C., Li, J., Zhang, S., Xing, J., Gao, W., Tian, Q.: Pose-driven deep convolutional model for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3960-3969 (2017)
+31. Wang, H., Fan, Y., Wang, Z., Jiao, L., Schiele, B.: Parameter-free spatial attention network for person re-identification. arXiv preprint arXiv:1811.12150 (2018)
+32. Wang, J., Zhu, X., Gong, S., Li, W.: Transferable joint attribute-identity deep learning for unsupervised person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2275–2284 (2018)
+33. Wei, L., Zhang, S., Gao, W., Tian, Q.: Person transfer gan to bridge domain gap for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 79-88 (2018)
+
+34. Wei, L., Zhang, S., Yao, H., Gao, W., Tian, Q.: Glad: Global-local-alignment descriptor for pedestrian retrieval. In: Proceedings of the 25th ACM international conference on Multimedia. pp. 420-428. ACM (2017)
+35. Woo, S., Park, J., Lee, J.Y., So Kweon, I.: Cbam: Convolutional block attention module. In: Proceedings of the European conference on computer vision (ECCV). pp. 3-19 (2018)
+36. Xia, B.N., Gong, Y., Zhang, Y., Poellabauer, C.: Second-order non-local attention networks for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3760-3769 (2019)
+37. Xiong, F., Xiao, Y., Cao, Z., Gong, K., Fang, Z., Zhou, J.T.: Towards good practices on building effective cnn baseline model for person re-identification. arXiv preprint arXiv:1807.11042 (2018)
+38. Xu, J., Zhao, R., Zhu, F., Wang, H., Ouyang, W.: Attention-aware compositional network for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2119–2128 (2018)
+39. Yang, F., Li, K., Zhong, Z., Luo, Z., Sun, X., Cheng, H., Guo, X., Huang, F., Ji, R., Li, S.: Asymmetric co-teaching for unsupervised cross-domain person re-identification. In: AAAI. pp. 12597–12604 (2020)
+40. Zhang, X., Cao, J., Shen, C., You, M.: Self-training with progressive augmentation for unsupervised cross-domain person re-identification. arXiv preprint arXiv:1907.13315 (2019)
+41. Zhao, H., Tian, M., Sun, S., Shao, J., Yan, J., Yi, S., Wang, X., Tang, X.: Spindle net: Person re-identification with human body region guided feature decomposition and fusion. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1077-1085 (2017)
+42. Zhao, R., Ouyang, W., Wang, X.: Unsupervised salience learning for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3586-3593 (2013)
+43. Zheng, L., Huang, Y., Lu, H., Yang, Y.: Pose invariant embedding for deep person re-identification. IEEE Transactions on Image Processing (2019)
+44. Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., Tian, Q.: Scalable person re-identification: A benchmark. In: The IEEE International Conference on Computer Vision (ICCV) (December 2015)
+45. Zhong, Z., Zheng, L., Cao, D., Li, S.: Re-ranking person re-identification with k-reciprocal encoding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1318-1327 (2017)
+46. Zhong, Z., Zheng, L., Li, S., Yang, Y.: Generalizing a person retrieval model heteroand homogeneously. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 172-188 (2018)
+47. Zhong, Z., Zheng, L., Zheng, Z., Li, S., Yang, Y.: Camstyle: A novel data augmentation method for person re-identification. IEEE Transactions on Image Processing 28(3), 1176-1190 (2018)
+48. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921-2929 (2016)
+49. Zhou, S., Wang, F., Huang, Z., Wang, J.: Discriminative feature learning with consistent attention regularization for person re-identification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 8040-8049 (2019)
\ No newline at end of file
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/images.zip b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8dd052f212bf0c1bd6c64b6e0152a07c5de6bbfb
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e10d1351e295ed45494db841e722b37d29fedeb359626fd6be10a26d0b5e332
+size 404243
diff --git a/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/layout.json b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e71121f7d2dfe4af2a904971b3f0fde64991f2a
--- /dev/null
+++ b/anattentiondriventwostageclusteringmethodforunsupervisedpersonreidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d6d49626830f6cb663f95c08d0fce77a1dedb937880f83c9da3edbc426391cd
+size 400403
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_content_list.json b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a8c8fc86cc491fe14af0c4600d50c4642f79c12f
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8eb07ee3a90a096fc7e38702c48c6d56c3337434a679e2f75b9139176ed7720a
+size 68700
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_model.json b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..81ff0045c167beaaa1e699da0d5572cc787b181b
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ac5525e406d8af69a95a0e873656533677702af47c0210c5d0ae0b0e4f05f3e
+size 83650
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_origin.pdf b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..145ff6c2b627aa87937c72e65b4fef2e37fa946b
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/9c28ad52-8b96-4928-bd12-c27e4d29a78e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:563bcb1b79997bdcc667bc2454bc1c6ae8ee157b6fb1ca100b6a90768922c6e5
+size 428097
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/full.md b/anefficienttrainingframeworkforreversibleneuralarchitectures/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..81d0cab91fba74f64030ae86d2de3d5c7613e5e1
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/full.md
@@ -0,0 +1,326 @@
+# An Efficient Training Framework for Reversible Neural Architectures
+
+Zixuan Jiang $^{1}$ [0000-0001-6180-6487], Keren Zhu $^{1}$ [0000-0003-2698-141X], Mingjie Liu $^{1}$ [0000-0002-3488-9763], Jiaqi Gu $^{1}$ [0000-0001-8535-7698], and David Z. Pan $^{1}$ [0000-0002-5705-2501]
+
+The University of Texas at Austin, Austin Texas 78712, USA {zixuan, keren.zhu, jay_liu, jqgu}@utexas.edu, dpan@ece.utexas.edu https://www.cerc.utexas.edu/utda/
+
+Abstract. As machine learning models and datasets rapidly grow in scale, their huge memory footprint impedes efficient training. Reversible operators can reduce memory consumption by discarding intermediate feature maps during forward computation and recovering them via inverse functions during backpropagation. They save memory at the cost of computation overhead. However, current implementations of reversible layers focus on saving memory, with the computation overhead neglected. In this work, we formulate the decision problem for reversible operators with training time as the objective function and memory usage as the constraint. By solving this problem, we can maximize the training throughput for reversible neural architectures. Our proposed framework fully automates this decision process, empowering researchers to develop and train reversible neural networks more efficiently.
+
+Keywords: reversible neural networks, efficient training, machine learning framework
+
+# 1 Introduction
+
+The backpropagation [20] mechanism is widely used in training neural networks. However, since intermediate results must be saved for backward computations, backpropagation requires a considerable memory footprint. As neural networks become larger and deeper, the increasing memory footprint forces the use of smaller mini-batch sizes. In extreme cases, deep networks have to be trained with a mini-batch size of 1 [25]. This memory consumption issue impedes the exploration of desirable deep learning models.
+
+Researchers have proposed several methods to address the challenge of inflating memory footprint [21]. Chen et al. [6] propose the gradient checkpointing mechanism, which stores only part of the intermediate results; the discarded activations are recovered through recomputation in the backward pass. The memory swapping method [19, 24] moves intermediate activations to other devices to reduce the memory footprint of the current device, at the cost of extra memory transfers. Reversible operators [7] allow recovering the intermediate feature maps in the backward pass through the corresponding inverse functions. All three methods reduce the memory footprint at the cost of extra computation or memory transfer. They do not affect model accuracy, as the training process is numerically unchanged.
+
+In particular, reversible neural architectures have been successfully adopted in computer vision research, e.g., the reversible U-net for volumetric image segmentation [3] and the reversible architecture for 3D high-resolution medical image processing [2]. The lower memory footprint allows deeper models to be trained, yielding more predictive capability and higher accuracy.
+
+Figure 1 shows two extremes in neural network training. Standard backpropagation achieves the extreme of computation efficiency at the expense of the highest memory footprint: it does not contain any redundant computations. At the other extreme, the fully reversible strategy has the lowest memory footprint while imposing the greatest computation overhead. However, the design space between the two extremes is less studied. Existing research on reversible neural networks mainly focuses on saving memory: all reversible layers are executed in the memory-efficient mode, and the computation overhead of their inverse functions is overlooked.
+
+
+Fig. 1: Two extremes when training neural networks. The lower-right extreme stands for the standard backpropagation method, which does not contain any redundant computations. The upper-left extreme achieves the lowest memory footprint by fully leveraging the reversibility of the neural network.
+
+In this paper, we explore the design space by considering the trade-off between computation and memory footprint. We derive the mathematical formulation of the decision problem for reversible neural architectures: the training time is the objective function, with memory usage as an optimization constraint. By showing that it is in essence a standard $0/1$ knapsack problem, we use a dynamic programming algorithm to find the optimal solution. We also discuss the relationship between mini-batch size and training throughput.
+
+Our contributions are highlighted as follows.
+
+- New Perspective. We explore the design space for reversible neural architectures from a novel perspective of joint optimization.
+- Optimality. Our framework guarantees to obtain the maximum training throughput for reversible neural architectures under given memory constraints.
+- Automation. Our framework provides a fully automated solution, enabling more efficient development and training for reversible neural networks.
+
+# 2 Background
+
+In this section, we discuss the background of reversible neural architectures and the scheduling framework for the training process.
+
+# 2.1 Reversible Neural Architectures
+
+
+(a)
+
+
+(b)
+Fig. 2: (a) Non-reversible and (b) reversible neural architectures. For a non-reversible layer, we often need to save its original input $x$ for backward computations. For a reversible layer, the original input $x$ can be calculated via its inverse function $x = f^{-1}(y)$.
+
+Figure 2a demonstrates a conventional non-reversible neural architecture. The layer $y = f(x)$ is non-reversible if and only if there is no inverse computation $x = f^{-1}(y)$ for the original function $f$. For a non-reversible layer, we often need to store its original input $x$ during the forward computation so that we can compute gradients during backpropagation. For example, for a linear layer $y = f(x) = \theta^T x$, where $\theta$ represents the weight vector, the backward computation $\partial y / \partial \theta = x$ depends on the original input $x$.
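
As a tiny numerical illustration of this dependence (the values here are arbitrary examples, not from the paper):

```python
# For y = theta^T x, the weight gradient dL/dtheta_i = (dL/dy) * x_i
# depends on the stored input x, which is why x must survive until backward.
theta = [0.5, -1.0]
x = [2.0, 3.0]
y = sum(t * xi for t, xi in zip(theta, x))   # forward pass: y = -2.0
grad_y = 1.0                                  # upstream gradient dL/dy
grad_theta = [grad_y * xi for xi in x]        # needs x in the backward pass
```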
+
+Traditional neural networks are mostly built from such non-reversible layers. The memory consumed by the feature maps dominates the total memory utilization, especially in deep neural networks [19]. Therefore, the memory footprint can be decreased significantly by discarding those feature maps.
+
+Figure 2b illustrates a reversible operator. When using a reversible layer $y = f(x)$, it is possible to recover $x$ in the backward computation by calling its inverse function $x = f^{-1}(y)$. Therefore, memory can be saved by discarding the intermediate feature map $x$.
+
+Some of the commonly used operators in neural networks are implicitly reversible, such as convolution layers with a stride of 1 [12], and fully connected layers with invertible weight matrix. Inplace Activated Batch Normalization (ABN) [4] leverages the reversibility of the batch normalization [8] and some activation functions (such as leaky ReLU). Neural ordinary differential equations [5] can achieve constant memory usage through reversibility in backpropagation.
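
For instance, a leaky ReLU with positive negative-slope $a$ is bijective, so its input can be recovered exactly instead of stored. A minimal sketch (the slope value is an arbitrary assumption):

```python
def leaky_relu(x, a=0.01):
    # Bijective for a > 0: negative inputs are scaled, not zeroed out.
    return x if x >= 0 else a * x

def leaky_relu_inverse(y, a=0.01):
    # Exact inverse, so the activation need not be kept for the backward pass.
    return y if y >= 0 else y / a

for v in [-2.0, -0.5, 0.0, 1.5]:
    assert abs(leaky_relu_inverse(leaky_relu(v)) - v) < 1e-12
```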
+
+Researchers have also proposed many explicitly reversible neural architectures [9]. The reversible residual architecture [7] operates on a pair of inputs $(x_{1}, x_{2})$, as shown in Equation 1.
+
+$$
+y _ {1} = x _ {1} + F (x _ {2}), y _ {2} = x _ {2} + G (y _ {1}) \tag {1}
+$$
+
+It is reversible since the inputs can be recovered from output pairs as demonstrated in Equation 2.
+
+$$
+x _ {2} = y _ {2} - G \left(y _ {1}\right), x _ {1} = y _ {1} - F \left(x _ {2}\right) \tag {2}
+$$
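
The invertibility of Equations 1 and 2 is easy to check numerically. Below is a minimal scalar sketch with hypothetical residual functions $F$ and $G$; notably, neither needs to be invertible itself for the coupling to be reversible:

```python
import math

# Hypothetical residual functions: any F and G work for the coupling.
def F(v): return math.tanh(v)
def G(v): return 0.5 * v

def forward(x1, x2):
    y1 = x1 + F(x2)        # Equation 1: y1 = x1 + F(x2)
    y2 = x2 + G(y1)        #             y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    x2 = y2 - G(y1)        # Equation 2: x2 = y2 - G(y1)
    x1 = y1 - F(x2)        #             x1 = y1 - F(x2)
    return x1, x2

r1, r2 = inverse(*forward(0.3, -1.2))   # recovers the inputs
```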
+
+This technique can be combined with traditional recurrent neural networks to get reversible RNNs [16]. Kitaev et al. apply the above architecture to the Transformer [22] and obtain Reformer [13] as shown in Equations 3 and 4.
+
+$$
+y _ {1} = x _ {1} + \text {A t t e n t i o n} (x _ {2}), y _ {2} = x _ {2} + \text {F e e d F o r w a r d} (y _ {1}) \tag {3}
+$$
+
+$$
+x _ {2} = y _ {2} - \operatorname {F e e d F o r w a r d} \left(y _ {1}\right), x _ {1} = y _ {1} - \operatorname {A t t e n t i o n} \left(x _ {2}\right) \tag {4}
+$$
+
+Although the computation overhead is considered and discussed, these prior studies mainly focus on memory footprint reduction. They do not explore the space between the two extremes illustrated in Figure 1.
+
+# 2.2 Scheduling for Training
+
+For most developers, the primary concern regarding the training process is how to maximize the training throughput given existing machines, especially GPUs. Specifically, there is a need for a framework to automate the training process to fully utilize the computation capability and memory capacity of specific machines.
+
+Frameworks for the scheduling problem with gradient checkpoints are good examples. The scheduling problem seeks the minimum computation overhead under a memory footprint constraint. Researchers have proposed many algorithms to find optimal solutions for gradient checkpoint selection. Kusumoto et al. provide a dynamic programming algorithm from the perspective of computation graphs [14]. Jain et al. formulate the scheduling problem as a mixed integer linear program and solve it via standard solvers [10]. However, the similar problem for reversible neural architectures has not received much attention. We formulate and solve this problem in this work.
+
+There is also work focused on scheduling for distributed training. Jia et al. optimize how each layer is parallelized in distributed and parallel training [11]. However, they do not consider reversibility. Our framework can be used directly on every single machine in the distributed training scenario.
+
+# 3 Method
+
+In this section, we first describe the two modes for reversible neural architectures, denoted the M-Mode and the C-Mode. We then formulate the decision problem and propose an algorithm and a framework to solve it. We also discuss the problem when the mini-batch size is not fixed.
+
+# 3.1 Memory Centric and Computation Centric Modes
+
+Each reversible layer $y = f(x)$ can be computed in one of two modes during the training process. First, we can leverage its reversibility; we denote this the M-Mode, for memory-centric mode. Precisely, we discard the activation $x$ in the forward computation, then recover it in the backward pass. This mode saves the memory consumed by $x$ at the cost of the inverse computation $x = f^{-1}(y)$. The other mode treats the reversible layer as a conventional non-reversible layer; we denote it the C-Mode, for computation-centric mode. In this mode, we save the feature map $x$ in the forward pass, then use it directly in the backward computation. This mode involves no redundant computations but requires an extra memory footprint. Table 1 summarizes the two modes.
+
+Table 1: Comparisons of two modes.
+
+| mode | forward | backward | computation cost | memory cost |
+| --- | --- | --- | --- | --- |
+| M-Mode | discard $x$ | recover $x$ from $y$ | $x = f^{-1}(y)$ | 0 |
+| C-Mode | save $x$ | use $x$ directly | 0 | size of $x$ |
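
The per-layer trade-off in Table 1 can be written as a small cost model. The sketch below is illustrative only: the timing values are loosely based on the RevNet-104 measurements reported in Section 4.2, and the activation sizes are assumptions:

```python
def iteration_cost(modes, t_f1, t_b1, t_f2, t_b2, m):
    """Total reversible-layer time and extra activation memory for a
    per-layer mode assignment. modes[i] is 'M' or 'C' as in Table 1."""
    time, mem = 0.0, 0
    for i, mode in enumerate(modes):
        if mode == 'M':            # discard x; pay to recover it in backward
            time += t_f1[i] + t_b1[i]
        else:                      # 'C': store x; no recomputation
            time += t_f2[i] + t_b2[i]
            mem += m[i]
    return time, mem

# Rough per-layer times in ms (cf. Section 4.2), repeated for 3 layers,
# and assumed activation sizes in MiB.
t_f1, t_b1 = [10.4] * 3, [29.3] * 3
t_f2, t_b2 = [10.4] * 3, [18.9] * 3
m = [64, 64, 64]
all_M = iteration_cost('MMM', t_f1, t_b1, t_f2, t_b2, m)  # slow, no extra memory
all_C = iteration_cost('CCC', t_f1, t_b1, t_f2, t_b2, m)  # fast, stores all x
```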
+
+# 3.2 Formulation
+
+Let $f$ be a neural network with $(k + n)$ layers, among which there are $n$ reversible layers $\{f_i\}_{i=1}^n$. For each of these $n$ reversible layers, we can decide to do the forward and backward computation following one of the modes above. Let $x \in \{0,1\}^n$ be the decision variable: $x_i = 0$ ($x_i = 1$) means that the reversible layer $f_i$ follows the M-Mode (C-Mode). Thus, for $n$ reversible layers, the $2^{n}$ choices constitute the whole solution space.
+
+The two extremes in Figure 1 can be written as $x = \mathbf{0}$ and $x = \mathbf{1}$. $x = \mathbf{0}$ means we discard all the intermediate results to achieve the lowest memory footprint; we call it baseline-M. We denote the other extreme, without redundant computations ($x = \mathbf{1}$), as baseline-C. Currently, most implementations of reversible neural networks use baseline-M directly.
+
+Let $t_{f1}, t_{b1} \in \mathbb{R}_{++}^{n}$ ($t_{f2}, t_{b2} \in \mathbb{R}_{++}^{n}$) be the execution time vectors of the forward and backward passes in the M-Mode (C-Mode), respectively. Compared with the C-Mode, the extra execution time consumed by the M-Mode is $t_e = (t_{f1} + t_{b1}) - (t_{f2} + t_{b2})$. The total execution time of the forward and backward computations of all these reversible layers $\{f_i\}_{i=1}^n$ is
+
+$$
+\left(\mathbf {1} - x\right) ^ {T} \left(t _ {f 1} + t _ {b 1}\right) + x ^ {T} \left(t _ {f 2} + t _ {b 2}\right) = \mathbf {1} ^ {T} \left(t _ {f 1} + t _ {b 1}\right) - t _ {e} ^ {T} x
+$$
+
+Similarly, let $m \in \mathbb{Z}_{++}^n$ be the extra memory footprint of C-Mode compared with M-Mode, i.e., the size of corresponding intermediate activations. The total extra memory footprint of these feature maps is $m^T x$ .
+
+Finally, the time centric optimization problem can be written as Problem 5.
+
+$$
+\min _ {x} \mathbf {1} ^ {T} (t _ {f 1} + t _ {b 1}) - t _ {e} ^ {T} x
+$$
+
+$$
+\text {s . t .} \quad m ^ {T} x + m _ {o} \leq M \tag {5}
+$$
+
+$$
+x _ {i} \in \{0, 1 \}, i = 1, \dots , n
+$$
+
+where $M$ is the memory capacity of the machine, $m_{o}$ represents the memory allocated for other tensors (such as feature maps of non-reversible layers, and neural network parameters) when we achieve peak memory in a training iteration. Users can also specify the memory capacity $M$ explicitly.
+
+For the other parts of the training process, such as the optimizer and the computation of the non-reversible layers, the execution time is constant and independent of our decisions. Therefore, we can minimize the total training time by minimizing the total wall-clock time of all the reversible layers.
+
+# 3.3 Algorithm and Framework
+
+Problem 5 can be rewritten as Problem 6.
+
+$$
+\max _ {x} t _ {e} ^ {T} x
+$$
+
+$$
+\text {s . t .} \quad m ^ {T} x \leq M - m _ {o} \tag {6}
+$$
+
+$$
+x _ {i} \in \{0, 1 \}, i = 1, \dots , n
+$$
+
+Problem 6 can be interpreted as follows. We take baseline-M $(x = \mathbf{0})$ as the reference. The objective function $t_e^T x$ is the execution time reduction when we apply the decision $x$. The remaining memory capacity for these reversible layers is $M - m_o$.
+
+Problem 6 is in essence a standard $0/1$ knapsack problem [17]. Note that the memory-related variables and parameters $m, M, m_o$ are all positive integers, since all of them are in units of bytes. Therefore, the problem can be solved by dynamic programming, as shown in Algorithm 1.
+
+Algorithm 1 Dynamic programming algorithm for the $0/1$ knapsack problem
+
+Input: $t_e, m, M - m_o, n$. {Indices of vectors $t_e$ and $m$ start from 1.}
+Define saved$[n, M - m_o]$ and initialize all entries to $-1$, which marks an entry as undefined. {The entry saved$[i, j]$ records the maximum saved time when considering the first $i$ items under a total memory limit of $j$.}
+function foo$(i, j)$ {This recursive function calculates saved$[i, j]$.}
+  if $i == 0$ or $j \leq 0$ then return 0 {No time saved under this condition.} end if
+  if saved$[i-1, j] == -1$ then saved$[i-1, j] = $ foo$(i-1, j)$ end if
+  if $m[i] > j$ then
+    saved$[i, j] = $ saved$[i-1, j]$
+  else
+    if saved$[i-1, j-m[i]] == -1$ then saved$[i-1, j-m[i]] = $ foo$(i-1, j-m[i])$ end if
+    saved$[i, j] = \max\{$saved$[i-1, j]$, saved$[i-1, j-m[i]] + t_e[i]\}$
+  end if
+  return saved$[i, j]$
+end function
+saved$[n, M - m_o] = $ foo$(n, M - m_o)$
+Initialize the decision variable $x = \mathbf{0}$. {Backtrack to find the optimal solution.}
+$j = M - m_o$
+for $i = n, n-1, \ldots, 1$ do
+  if saved$[i, j] \neq $ saved$[i-1, j]$ then $x[i] = 1$; $j = j - m[i]$ end if
+end for
+return saved$[n, M - m_o], x$ {Return the optimal value and solution.}
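
Algorithm 1 can be sketched in Python as follows. This is an illustrative bottom-up variant of the same $0/1$ knapsack recurrence, with backtracking to recover the decision vector $x$; the example values of $t_e$, $m$, and the budget are assumptions:

```python
def schedule_reversible_layers(t_e, m, budget):
    """Problem 6: choose C-Mode layers (x[i] = 1) to maximize the saved
    time t_e . x subject to m . x <= budget (= M - m_o).
    t_e: saved time per layer; m: activation sizes as positive integers."""
    n = len(t_e)
    # saved[i][j]: max saved time using the first i layers within memory j
    saved = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(budget + 1):
            saved[i][j] = saved[i - 1][j]              # layer i in M-Mode
            if m[i - 1] <= j:                          # layer i in C-Mode
                cand = saved[i - 1][j - m[i - 1]] + t_e[i - 1]
                saved[i][j] = max(saved[i][j], cand)
    # Backtrack to recover the optimal decision vector x
    x, j = [0] * n, budget
    for i in range(n, 0, -1):
        if saved[i][j] != saved[i - 1][j]:
            x[i - 1] = 1
            j -= m[i - 1]
    return saved[n][budget], x

best, x = schedule_reversible_layers([3.0, 4.0, 5.0], [2, 3, 4], budget=5)
# keeps the first two layers in C-Mode: saves 7.0 within memory budget 5
```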
+
+Based on the algorithm, we propose a framework to automate the decision process. Figure 3 shows the four stages of our framework. Initially, we verify the reversibility of each operator by checking the correctness of the original and inverse functions. In the second stage, we obtain the parameters $t_e$ and $m$ from realistic measurements; our framework is hardware-aware since we use profiling data from the specific machine. Then we use Algorithm 1 to get the optimal solution. Finally, we can train the network with maximum throughput. The dynamic programming algorithm is executed only once to obtain the optimal schedule, which is then reused in all training iterations. Thus, the added complexity is negligible compared with the training process.
+
+Fig. 3: Four stages in our framework
+
+# 3.4 Various Mini-batch Size
+
+The above discussion assumed a fixed mini-batch size. When we have many choices for the mini-batch size (denoted $b$), the optimization problem becomes more complicated.
+
+We assume that the execution time of each layer, reversible or not, is linear in the mini-batch size, i.e., $t(b) = t^{(0)} + bt^{(1)}$. The total execution time of all the non-reversible layers is $t_n^{(0)} + bt_n^{(1)}$. The total execution time of all the reversible layers is
+
+$$
+\mathbf {1} ^ {T} (t _ {f 1} + t _ {b 1}) - t _ {e} ^ {T} x = \mathbf {1} ^ {T} (t _ {f 1} ^ {(0)} + t _ {b 1} ^ {(0)}) + b \mathbf {1} ^ {T} (t _ {f 1} ^ {(1)} + t _ {b 1} ^ {(1)}) - t _ {e} ^ {(0) T} x - b t _ {e} ^ {(1) T} x
+$$
+
+The execution time of the optimizer, scheduler, and control logic is independent of the mini-batch size; we denote it by $t_{o}$. The execution time per sample is
+
+$$
+t _ {n} ^ {(1)} + \mathbf {1} ^ {T} (t _ {f 1} ^ {(1)} + t _ {b 1} ^ {(1)}) - t _ {e} ^ {(1) ^ {T}} x + \frac {t _ {o} + t _ {n} ^ {(0)} + \mathbf {1} ^ {T} (t _ {f 1} ^ {(0)} + t _ {b 1} ^ {(0)}) - t _ {e} ^ {(0) ^ {T}} x}{b}
+$$
+
+The memory footprint is also linear in the mini-batch size: the size of the network parameters is independent of the mini-batch size, while the size of the feature maps of the non-reversible layers is proportional to it. Thus, the memory constraint can be rewritten as $bm^T x + m_o^{(0)} + bm_o^{(1)} \leq M$.
+
+The optimization problem is now
+
+$$
+\min _ {x, b} t _ {n} ^ {(1)} + \mathbf {1} ^ {T} (t _ {f 1} ^ {(1)} + t _ {b 1} ^ {(1)}) - t _ {e} ^ {(1) ^ {T}} x + \frac {t _ {o} + t _ {n} ^ {(0)} + \mathbf {1} ^ {T} (t _ {f 1} ^ {(0)} + t _ {b 1} ^ {(0)}) - t _ {e} ^ {(0) ^ {T}} x}{b}
+$$
+
+$$
+\text {s . t .} \quad b m ^ {T} x + m _ {o} ^ {(0)} + b m _ {o} ^ {(1)} \leq M \tag {7}
+$$
+
+$$
+x _ {i} \in \{0, 1 \}, i = 1, \dots , n
+$$
+
+$$
+b \in [ b _ {l}, b _ {u} ], b \in \mathbb {Z}
+$$
+
+where $b_{l}$ , $b_{u}$ are lower and upper bounds of the mini-batch size.
+
+We rewrite the problem as Problem 8.
+
+$$
+\max _ {x, b} f (x, b) = t _ {e} ^ {(1) ^ {T}} x - \frac {C - t _ {e} ^ {(0) ^ {T}} x}{b}
+$$
+
+$$
+\text {s . t .} \quad b m ^ {T} x + m _ {o} ^ {(0)} + b m _ {o} ^ {(1)} \leq M \tag {8}
+$$
+
+$$
+x _ {i} \in \{0, 1 \}, i = 1, \dots , n
+$$
+
+$$
+b \in [ b _ {l}, b _ {u} ], b \in \mathbb {Z}
+$$
+
+where $C = t_{o} + t_{n}^{(0)} + \mathbf{1}^{T}(t_{f1}^{(0)} + t_{b1}^{(0)})$ is a constant.
+
+Problem 8 is a non-linear integer programming problem, for which the optimal solution is hard to obtain. A simple method is to sweep the mini-batch size over the range $[b_l, b_u]$ with our framework. Empirically, Problem 6 is fast to solve using Algorithm 1, so sweeping the mini-batch size is affordable. We further discuss various mini-batch sizes in Section 4.6, and leave Problem 8 as an open problem for future research.
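
The sweep over $b$ can be sketched as follows. This is illustrative only: the inner decision is solved by brute force (fine for small $n$; in practice one would call the knapsack solver of Algorithm 1 for each candidate $b$), `t_lin` abbreviates the batch-proportional per-sample term $t_n^{(1)} + \mathbf{1}^T(t_{f1}^{(1)} + t_{b1}^{(1)})$, and all numeric values are assumptions:

```python
from itertools import product

def best_tps(t_e0, t_e1, m, C, t_lin, m_o0, m_o1, M, b_l, b_u):
    """Sweep b in [b_l, b_u] and minimize the time per sample of Problem 8.
    Returns (tps, b, x) for the best feasible configuration, or None."""
    n = len(m)
    best = None
    for b in range(b_l, b_u + 1):
        for x in product((0, 1), repeat=n):
            # memory constraint: b*m^T x + m_o^(0) + b*m_o^(1) <= M
            mem = b * sum(mi * xi for mi, xi in zip(m, x)) + m_o0 + b * m_o1
            if mem > M:
                continue
            # time per sample: t_lin - t_e^(1)^T x + (C - t_e^(0)^T x) / b
            tps = (t_lin - sum(t * xi for t, xi in zip(t_e1, x))
                   + (C - sum(t * xi for t, xi in zip(t_e0, x))) / b)
            if best is None or tps < best[0]:
                best = (tps, b, list(x))
    return best
```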
+
+# 4 Experiments
+
+In this section, we first describe the experimental settings and the details of profiling. We then analyze three reversible neural architectures: RevNet-104, ResNeXt-101 with Inplace ABN, and the Reformer. We further discuss the results in terms of various mini-batch sizes.
+
+# 4.1 Settings
+
+We adapt source code from MemCNN $^1$ [15], Inplace ABN $^2$ [4], and Reformer $^3$ [13]. We follow their original settings and hyperparameters, except that we decide which mode each reversible layer uses.
+
+Unless otherwise stated, we use PyTorch [18] 1.4.0. The training process runs on a Linux server with an Intel Core i9-7900X CPU and one NVIDIA TITAN Xp GPU, whose memory capacity is 12,196 MiB. All the tensor operations run on the GPU. We report the mean over 100 training iterations.
+
+# 4.2 Profiling
+
+To ensure hardware awareness, our framework profiles the execution time and memory allocation to obtain $t_e, m, m_o$ from realistic measurements. It is easy to collect the memory-related terms $m, m_o$, since the memory footprint is stable throughout a whole training process.
+
+For the execution time $t_e$, the most accurate way to obtain it is to run the model in the two modes respectively and collect all four corresponding vectors $(t_{f1}, t_{b1}, t_{f2}, t_{b2})$. We can also directly compare the two modes and derive their difference. For the feature maps in the C-Mode, extra time is spent on memory writes in the forward computation and memory reads in the backward pass. In the M-Mode, there is overhead in reading $y$ from memory and in the inverse computation.
+
+Analyzing the memory behaviour in detail is complicated and beyond the scope of this paper. Fortunately, we observe that $t_{f1} \approx t_{f2} \approx t_{b1} - t_{b2}$. For instance, the average execution times of RevNet-104 [7] with a mini-batch size of 64 on ImageNet are $t_{f1} = 10.425\,\mathrm{ms}$, $t_{f2} = 10.404\,\mathrm{ms}$, $t_{b1} = 29.276\,\mathrm{ms}$, and $t_{b2} = 18.865\,\mathrm{ms}$. This observation is prevalent in current machine learning frameworks, since memory accesses are hidden by computations [18, 1]. Thus, we can consider only the computation when analyzing the difference in execution time. In short, $t_e = (t_{f1} + t_{b1}) - (t_{f2} + t_{b2}) \approx t_{f1} \approx t_{f2}$. We verified this assumption in all the following experiments, and use $t_e = t_{f1}$ in the optimization problem directly.
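
A minimal sketch of the timing side of this profiling stage (CPU wall clock only; accurate GPU timing would additionally require device synchronization, e.g. `torch.cuda.synchronize()`, around the timestamps):

```python
import time

def mean_time(fn, repeats=100):
    """Average wall-clock seconds per call of fn, after one warm-up call.
    The warm-up absorbs one-time costs (caches, allocator, JIT)."""
    fn()                                  # warm-up call, not timed
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

t = mean_time(lambda: sum(range(1000)))   # example: time a tiny workload
```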
+
+# 4.3 RevNet
+
+We apply our framework to RevNet-104 [7] for image classification on ImageNet. By sweeping the mini-batch size, we obtain various memory budgets and computation overheads. Figure 4 illustrates our decision for different mini-batch sizes. When the mini-batch size is smaller than 65, the GPU memory capacity is large enough to hold all the intermediate activations, so the optimal decision is to save all of them and achieve maximum training throughput. Starting from a mini-batch size of 65, we have to use the M-Mode in some of the reversible layers due to the limited memory budget; our dynamic programming solver obtains the optimal decision for each setting. If the mini-batch size is larger than 117, we run out of memory even with baseline-M, the most memory-efficient decision. As shown in Figure 4, the optimal decision is non-trivial across different mini-batch sizes.
+
+
+Fig. 4: The heat map of the optimal solutions across different mini-batch sizes on RevNet-104 with 13 reversible layers. The horizontal and vertical axes represent the mini-batch size and the layer index, respectively.
+
+Fig. 5: Training time and speedup comparison of RevNet-104 and ResNeXt-101 with Inplace ABN on ImageNet. (a) Training time per iteration of RevNet-104. (b) Training time per sample and relative speedup of RevNet-104. (c) Training time per iteration of ResNeXt-101 with Inplace ABN. (d) Training time per sample and relative speedup of ResNeXt-101 with Inplace ABN. Training time per iteration is the time of one complete iteration (forward, backward, and optimizer update). Training time per sample is the multiplicative inverse of training throughput. The curves of baseline-C are truncated due to the device memory limitation.
+
+Figure 5a shows the training time per iteration of baseline-M, baseline-C, and our optimal solution. The solid red line and the green dashed line represent the baseline-M and the optimal setting provided by our framework, respectively. The baseline-C curve is confined to the lower-left corner, since the device's memory capacity prevents it from accommodating large mini-batch sizes. Our optimal solution overlaps with baseline-C when baseline-C is feasible, i.e., for mini-batch sizes smaller than 65. When baseline-C is not available, our framework gradually approaches baseline-M: as the mini-batch size grows, the harsh memory constraint pushes us toward the extreme of memory efficiency. The gap between the two curves (baseline-M and optimal) demonstrates the absolute time saved by applying our method.
+
+Figure 5b compares the training time per sample. We use this metric to compare the training throughput (its multiplicative inverse) across different mini-batch sizes. Before applying our framework, the training speed increases as the mini-batch size grows, for two reasons. First, we leverage the parallelism across the batch. Second, the execution time of the optimizer, scheduler, and control is independent of the mini-batch size, so this part of the execution is amortized over a large mini-batch. After applying our framework, the trend is different: the training throughput decreases as the mini-batch size grows, because the computation overhead of the inverse functions outweighs the benefit of a large mini-batch size. We also show the relative speedup of our optimal execution time compared with baseline-M. We achieve up to $1.15\times$ speedup on this benchmark.
+
+# 4.4 Inplace ABN
+
+We follow the settings in the Inplace ABN paper [4] and use our framework to train ResNeXt-101 [23] for image classification on ImageNet. Figures 5c and 5d compare the training time per iteration and per sample across different mini-batch sizes. The results are similar to those of RevNet-104, except for the relative speedup.
+
+The computation overhead of Inplace ABN is relatively low compared with that of RevNet-104 in the previous subsection: the execution time of baseline-C is only $0.8\text{--}2\%$ smaller than that of baseline-M. Therefore, the relative speedup of our method is not as significant as in the RevNet-104 experiments, since the maximum training throughput of our framework is bounded by baseline-C. The advantage of our method is that it finds the optimal point between the two baselines.
+
+# 4.5 Reformer
+
+We also run experiments on the enwik8 task with the Reformer. Specifically, there are 8 heads in our 12-layer model, the maximum sequence length is 4,096, and the number of tokens is 256. In each iteration, we call the optimizer to update the trainable parameters after accumulating gradients for 4 steps. Table 2 shows the training time in the different modes.
+
+Table 2: Results of Reformer on the enwik8 task. TPI and TPS stand for training time per iteration and training time per sample; OOM stands for out of memory. All execution times are in seconds.
+
+| mini-batch size | TPI (baseline-C) | TPI (baseline-M) | TPI (optimal) | TPS (baseline-C) | TPS (baseline-M) | TPS (optimal) | speedup |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 0.951 | 1.321 | 0.949 | 0.951 | 1.321 | 0.949 | 1.392 |
+| 2 | 1.738 | 2.533 | 1.738 | 0.869 | 1.266 | 0.869 | 1.457 |
+| 3 | OOM | 3.603 | 2.752 | OOM | 1.201 | 0.917 | 1.310 |
+| 4 | OOM | 4.792 | 4.175 | OOM | 1.198 | 1.044 | 1.148 |
+| 5 | OOM | 6.020 | 5.236 | OOM | 1.204 | 1.047 | 1.150 |
+| 6 | OOM | 7.210 | 6.692 | OOM | 1.202 | 1.115 | 1.077 |
+| 7 | OOM | 8.420 | 7.670 | OOM | 1.203 | 1.096 | 1.098 |
+| 8 | OOM | 9.490 | 9.044 | OOM | 1.186 | 1.130 | 1.049 |
+| 9 | OOM | 10.603 | 10.123 | OOM | 1.178 | 1.125 | 1.047 |
+| 10 | OOM | 11.873 | 11.295 | OOM | 1.187 | 1.129 | 1.051 |
+
+Due to the large memory footprint, baseline-C can only run with mini-batch sizes of up to 2. Reversibility enables us to train the model with a mini-batch size of up to 10. Our framework provides a smooth transition from baseline-C to baseline-M. We achieve a $1.3\times$ relative speedup when the mini-batch size is 3.
+
+# 4.6 Various mini-batch sizes
+
+In this subsection, we discuss the optimal mini-batch size from the perspective of training throughput. In the above experiments, the lowest training time per sample (TPS) is obtained approximately at the largest mini-batch size for which baseline-C is feasible. For example, the Reformer achieves its lowest TPS of 0.869 s at a mini-batch size of 2. The reason is that the computation overhead of the inverse functions outweighs the benefit of a larger mini-batch size; in other words, we cannot accelerate the training process via reversibility alone. From the perspective of Problem 8, the objective $f(x,b) = t_{e}^{(1)^{T}}x - \frac{C - t_{e}^{(0)^{T}}x}{b}$ is dominated by the first term $t_{e}^{(1)^{T}}x$.
+
+# 5 Conclusions
+
+In this paper, we present a framework to execute reversible neural architectures in their most efficient modes. We formulate the decision problem for reversible operators, with training time as the objective function and memory usage as a constraint. By solving this problem, we maximize the training speed of any reversible neural architecture. Our framework automates this decision process, empowering researchers to develop and train reversible networks more efficiently.
+
+As future directions, we may integrate gradient checkpoints and reversible neural architectures to enlarge the search space, since gradient checkpoints allow non-reversible layers to follow the M-Mode by recomputation. The optimal mini-batch size in terms of training throughput is another critical issue.
+
+# References
+
+1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), software available from tensorflow.org
+2. Blumberg, S.B., Tanno, R., Kokkinos, I., Alexander, D.C.: Deeper image quality transfer: Training low-memory neural networks for 3d images. Lecture Notes in Computer Science p. 118-125 (2018)
+3. Brügger, R., Baumgartner, C.F., Konukoglu, E.: A partially reversible u-net for memory-efficient volumetric image segmentation. Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 p. 429-437 (2019)
+4. Bulò, S.R., Porzi, L., Kontschieder, P.: In-place activated batchnorm for memory-optimized training of dnns. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5639-5647 (June 2018). https://doi.org/10.1109/CVPR.2018.00591
+5. Chen, T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31, pp. 6571-6583. Curran Associates, Inc. (2018)
+6. Chen, T., Xu, B., Zhang, C., Guestrin, C.: Training deep nets with sublinear memory cost (2016)
+7. Gomez, A.N., Ren, M., Urtasun, R., Grosse, R.B.: The reversible residual network: Backpropagation without storing activations. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. p. 2211-2221. NIPS'17, Curran Associates Inc., Red Hook, NY, USA (2017)
+8. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37. p. 448-456. ICML'15, JMLR.org (2015)
+9. Jacobsen, J.H., Smeulders, A.W., Oyallon, E.: i-revnet: Deep invertible networks. In: International Conference on Learning Representations (2018)
+10. Jain, P., Jain, A., Nrusimha, A., Gholami, A., Abbeel, P., Gonzalez, J., Keutzer, K., Stoica, I.: Breaking the memory wall with optimal tensor rematerialization. In: Proceedings of Machine Learning and Systems 2020, pp. 497-511 (2020)
+11. Jia, Z., Lin, S., Qi, C.R., Aiken, A.: Exploring hidden dimensions in accelerating convolutional neural networks. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 2274-2283. PMLR, Stockholm, Sweden (10-15 Jul 2018)
+12. Kingma, D.P., Dhariwal, P.: Glow: Generative flow with invertible 1x1 convolutions. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31, pp. 10215-10224. Curran Associates, Inc. (2018)
+13. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer (2020)
+14. Kusumoto, M., Inoue, T., Watanabe, G., Akiba, T., Koyama, M.: A graph theoretic framework of recomputation algorithms for memory-efficient backpropagation. In: Advances in Neural Information Processing Systems 32, pp. 1161-1170. Curran Associates, Inc. (2019)
+15. Leemput, S.C.v., Teuwen, J., Ginneken, B.v., Manniesing, R.: Memcnn: A python/pytorch package for creating memory-efficient invertible neural networks. Journal of Open Source Software 4(39), 1576 (2019). https://doi.org/10.21105/joss.01576
+16. MacKay, M., Vicol, P., Ba, J., Grosse, R.: Reversible recurrent neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. p. 9043-9054. NIPS'18, Curran Associates Inc., Red Hook, NY, USA (2018)
+17. Martello, S., Toth, P.: Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons, Inc., USA (1990)
+18. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc. (2019)
+19. Rhu, M., Gimelshein, N., Clemons, J., Zulfiqar, A., Keckler, S.W.: Vdnn: Virtualized deep neural networks for scalable, memory-efficient neural network design. In: The 49th Annual IEEE/ACM International Symposium on Microarchitecture. MICRO-49, IEEE Press (2016)
+20. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Neurocomputing: Foundations of research. Nature pp. 696-699 (1988)
+21. Sohoni, N.S., Aberger, C.R., Leszczyński, M., Zhang, J., Ré, C.: Low-memory neural network training: A technical report (2019)
+22. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L.u., Polosukhin, I.: Attention is all you need. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc. (2017)
+23. Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Jul 2017). https://doi.org/10.1109/cvpr.2017.634
+24. Zhang, J., Yeung, S.H., Shu, Y., He, B., Wang, W.: Efficient memory management for gpu-based deep learning systems (2019)
+25. Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: 2017 IEEE International Conference on Computer Vision (ICCV). pp. 2242-2251 (Oct 2017). https://doi.org/10.1109/ICCV.2017.244
\ No newline at end of file
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/images.zip b/anefficienttrainingframeworkforreversibleneuralarchitectures/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..458915f0352f3084d3636649fb04e6296346c47c
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e79de1c6b704cf294b63a103db00b28d40d093646134a64c4c09fee74fb49105
+size 291131
diff --git a/anefficienttrainingframeworkforreversibleneuralarchitectures/layout.json b/anefficienttrainingframeworkforreversibleneuralarchitectures/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..51e014b182a8f958be89ef55b2542f5c6eda9c7f
--- /dev/null
+++ b/anefficienttrainingframeworkforreversibleneuralarchitectures/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71e9b3589db591965b1d0b839f8ca7649d3a5136b451367f254545bd1348fc0e
+size 390243
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_content_list.json b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1b5189dc4bac706c175fa73b660c1c5fb96a47a9
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9818e9ec04664ae234c964c5e6045d1a44d5b626953387b803675ccc3e6d77d2
+size 77089
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_model.json b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..be88556adf8682aad67afc9ef0e8ddbb56aa06d9
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cb8e0d292e6c5dad0016c086240ab4c02c584db497f4ba1d421c728e35dac21
+size 93508
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_origin.pdf b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b122e9b50116b36e53b4b15a606de217bf3ed214
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/f4bbc9cf-df03-4c26-a540-d406051ce4e9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bc3f729c29cf8bb9d2af29254e0d1c8eb7999f8130e40d31c48c40e9123fa02
+size 4973591
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/full.md b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..eadd96421de4a1aa46dba455062777118c3370e6
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/full.md
@@ -0,0 +1,357 @@
+# An End-to-End OCR Text Re-organization Sequence Learning for Rich-text Detail Image Comprehension
+
+Liangcheng Li $^{1,2,3}$ , Feiyu Gao $^{2}$ , Jiajun Bu $^{\star 1,3,4}$ , Yongpan Wang $^{1,2,3}$ , Zhi Yu $^{1,3,4}$ , and Qi Zheng $^{2}$
+
+1 Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, Zhejiang University, Hangzhou, China
+
+2 Alibaba Group, Hangzhou, China
+
+3 Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Hangzhou, China
+
+$^{4}$ Ningbo Research Institute, Zhejiang University, Ningbo, China liangcheng.li@zju.edu.cn, feiyu.gfy@alibaba-inc.com, bjj@zju.edu.cn, yongpan@taobao.com, yuzhirenzhe@zju.edu.cn, yongqi.zq@taobao.com
+
+Abstract. Nowadays the description in detail images helps users learn more about the commodities. With the help of OCR technology, the description text can be detected and recognized as auxiliary information to remove visually impaired users' comprehension barriers. However, for lack of a proper logical structure among these OCR text blocks, it is challenging to comprehend the detail images accurately. To tackle the above problems, we propose a novel end-to-end OCR text re-organizing model. Specifically, we create a Graph Neural Network with an attention map to encode the text blocks with visual layout features, with which an attention-based sequence decoder inspired by the Pointer Network and a Sinkhorn global optimization re-order the OCR text into a proper sequence. Experimental results illustrate that our model outperforms the other baselines, and a real experiment on blind users' experience shows that our model improves their comprehension.
+
+Keywords: OCR Text Re-organization, Graph Neural Network, Pointer Network
+
+# 1 Introduction
+
+The internet era has given rise to the development of E-commerce, and a large number of relevant platforms are springing up, such as Taobao, Jingdong and Amazon. Nowadays people are apt to participate in these websites for communication with online sellers and transactions on diverse commodities. To attract more consumers, these sellers take advantage of rich description text and commodity pictures to synthesize stylistic detail images, which help the consumers know their products as intuitively as possible.
+
+
+Fig. 1. Example of a detail image (a) and the right reading order in (b). The blue boxes are the text blocks provided by OCR technology, and the top-left red corner marks are the indexes of the text blocks. The green arrow lines in (b) show the proper reading route instead of simply reading from left to right and top to bottom.
+
+Nevertheless, most detail images are designed for sighted people who can comprehend both the image and text information directly. They ignore the demands of visually impaired people, such as the blind or the elderly, who account for more than $27\%$ of the world's population. Since most existing screen readers cannot recognize image-format information, an interaction barrier between visually impaired people and the e-commerce world has emerged. As text is an essential tool for humankind's communication, it is an alternative to choose the description text in these detail images for comprehension. Optical Character Recognition (OCR) technology is devoted to mining the text information from images and is fully applied in scene text understanding [34], such as PhotoOCR [4] and DocumentOCR [16]. Most classical and prevalent works on OCR concentrate on text detection [8, 13, 32] and recognition [1, 5, 14, 20]. They extract the characters in images and organize them into several text blocks according to semantic information, which performs well on many scene-text images, and detail images are no exception.
+
+However, the text in detail images has a flexible layout. It uses diverse typography structures to convey the product information, which causes a comprehension problem, as the text blocks from OCR technology are discrete and lack context order without the image structure. So it is often confusing for visually impaired consumers when the screen reader reads the text blocks in an arbitrary order. Figure 1(a) shows an example of a detail image; the blue boxes are the text blocks provided by OCR technology and the top-left red corner marks are the indexes of the text blocks. If the screen reader reads these text blocks from left to right and top to bottom, the visually impaired consumers are doomed to misinterpret, or even fail to comprehend, the detail images. Only
+
+the reading order in Figure 1(b) conveys the same information that the raw detail image expresses.
+
+In this paper, we propose a novel end-to-end OCR text re-organization model for detail image comprehension to tackle the problem mentioned above. Based on the text detection feature extracted by a fully convolutional network (FCN), we use the text blocks to construct a graph structure and cast the problem to a graph-to-sequence model. Specifically, under the assumption that all the detail images are probably laid out regularly [15], we apply a graph convolution network (GCN) model with an attention mask to encode the logical layout information of the text blocks. A sequence decoder based on the Pointer Network (PN) is proposed to obtain the text blocks' final order. We also introduce the Sinkhorn layer to make an optimal global normalization by transforming the decoder predictions into doubly-stochastic matrices. Experiments conducted on real-world detail image datasets show that our method outperforms other sequence-oriented baselines on both local and global sequence evaluations. A real user experience test on blind people is also launched and shows an improvement in their comprehension.
+
+Our contributions are threefold. First, to the best of our knowledge, we are the first to propose the reading order problem for rich-text detail images based on OCR text blocks. Second, we propose an end-to-end graph-to-sequence model to solve the text blocks' re-organization problem using a graph convolution network and a pointer attention mechanism. Last, we design both quantitative sequence evaluations and real user experience tests among blind people to demonstrate our model's rationality and feasibility.
+
+# 2 Related Work
+
+Since the reading order re-organization problem is rarely studied and is most similar to work on sequence modeling, in this section we briefly discuss related works in that field. We also discuss traditional research on document analysis to show the similarities and differences with our work.
+
+# 2.1 Sequence modeling
+
+Sequence modeling has been widely researched in many fields. In computer vision, it aims to learn a proper order for a set of images according to some predefined rules [22]. A typical variation of this task is the jigsaw puzzle problem [18, 24], which needs to recover an image from a set of puzzle tiles. Jigsaw puzzle problems can be abstracted as ordering the image segments based on their shape or texture, especially on the boundaries [11, 19]. It is similar to regarding the OCR text blocks as sub-image regions and reconstructing their order. However, these methods are not suitable here because OCR text blocks are discrete and isolated, with no joint boundaries or continuous texture information.
+
+Meanwhile, in natural language processing, RNN-based [21] Sequence-to-Sequence model (Seq2Seq) [27] and Neural Turing Machines [12] can solve most
+
+generative sequence tasks. However, they cannot solve permutation problems where the output size depends directly on the input. Vinyals et al. propose the Pointer Network [29], which uses an attention mechanism to find the proper units from the input sequence and permute them as output. One of its applications, text summarization, shows similarities to our work, as such methods select key information from the original text for summarization [7, 10]. Recently, methods that dynamically decide whether to generate new words or copy words from the original text, inspired by the pointer mechanism, have become prevalent [23, 33]. However, summarization is not suitable for conveying the complete text information, because the description text is carefully selected by the sellers to show the selling points [6], let alone the word deletion in extractive summarization. Meanwhile, as some mistakes remain during the OCR text detection and recognition process, it is hard to guarantee the accuracy of summarization based on NLP features. Finally, sellers tend to use concise and isolated phrases or words to describe their products, which have no grammar or syntax structure, so summarization will fail to produce whole sentences.
+
+Furthermore, another line of research on sequence modeling has been devoted to converting other complex structures into sequences. Xu et al. [31] propose a graph-to-sequence model (Graph2Seq) with a GCN encoder and an attention Seq2Seq decoder to solve the bAbI artificial intelligence tasks [30]; Vinyals et al. [28] apply attention mechanisms to input sets and propose the set-to-sequence model (Set2Seq) for language modeling and parsing tasks; Eriguchi et al. [9] design a tree-to-sequence (Tree2Seq) structure for extracting syntactic information from sentences. The commonality of these models is that their sequence decoders are all based on the Seq2Seq model, which limits them through their dependence on an output dictionary.
+
+# 2.2 Document analysis
+
+Document analysis mainly includes two steps: document layout analysis and document understanding. The former detects and annotates the physical structure of documents, and the latter has several comprehension applications such as document retrieval, content categorization and text recognition [3]. However, most layout structure extraction and comprehension tasks on traditional documents are cast as classification problems, which differ from text ordering tasks on scene-text images. It is hard to find homogeneous text regions and define semantic categories for OCR text blocks with diverse layouts and open designs. Furthermore, scene texts with unique layouts and designs imply the visual cues and orders for comprehending the whole image, while the document content analysis scheme is not suited for obtaining the order context.
+
+# 3 Re-organization Model Architecture
+
+Since traditional sequence modeling methods cannot be directly applied to the detail image comprehension problem, this section presents an end-to-end model to re-organize the OCR text block image regions for comprehension
+
+based on layout analysis. Specifically, we first define the re-organization task, then introduce the graph-based encoding method with an attention mask to obtain the layout embedding, and finally introduce a pointer-based attention decoder to solve the ordering problem.
+
+# 3.1 Task definition
+
+Given a set of text block images generated by OCR text detection and recognition from an original detail image, we need to generate a proper permutation of these blocks under which the text sequence can be comprehended. Formally, let us define a detail image with its OCR text block set $\mathcal{T} = \{t_1,t_2,\dots ,t_n\}$ where $t_i$ refers to the $i^{th}$ text block. Meanwhile, we define a target permutation $\mathcal{P}^{\mathcal{T}} = < \mathcal{P}_1,\mathcal{P}_2,\dots ,\mathcal{P}_{m(\mathcal{T})}>$ of length $m(\mathcal{T})$, where each $\mathcal{P}_k$ is the index of a unit in the text block set $\mathcal{T}$, between 1 and $n$. We train an ordering model with parameters $w$ by maximizing the conditional probabilities over the training set as follows:
+
+$$
+w ^ {*} = \arg \max _ {w} \sum_ {\mathcal {T}, \mathcal {P} ^ {\mathcal {T}}} \log p (\mathcal {P} ^ {\mathcal {T}} | \mathcal {T}; w) \tag {1}
+$$
+
+where the sum runs over all training examples. In effect, we cast the discrete image block re-organization process to a supervised sequence ordering problem.
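
As a minimal illustration of the objective in Eq. 1 (helper names are hypothetical, not from the paper): the log-probability of a target permutation decomposes into a sum of per-step log-probabilities, and training minimizes its negative over the training set.

```python
import math

def sequence_log_prob(step_probs):
    """Log-probability of one target permutation: the sum of the model's
    log-probabilities of the correct block at each decoding step."""
    return sum(math.log(p) for p in step_probs)

def nll_loss(batch_step_probs):
    """Negative of the Eq. 1 objective, summed over training examples."""
    return -sum(sequence_log_prob(probs) for probs in batch_step_probs)
```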
+
+# 3.2 Graph construction
+
+We model each detail image as a graph of text blocks, in which each independent text block is regarded as a node whose attributes comprise its image features. We also take advantage of the geometric information (e.g. position) of the text blocks and construct edges to represent the original relations among them. Mathematically, we cast a detail image to a directed weighted graph structure $\mathcal{G} = (\mathcal{N},\mathcal{E})$ , where $\mathcal{N} = \{f(t_1),f(t_2),\dots ,f(t_n)\}$ is the set of $n$ text blocks (i.e. nodes) and $f(t_{i})$ stands for the attributes of the $i^{th}$ text block, while $\mathcal{E} = \{r(e_{i,1}),r(e_{i,2}),\dots ,r(e_{i,n - 1})\}$ is the set of edges, where $e_{i,j}$ is the directed edge from node $i$ to node $j$ and $r(e_{i,j})$ stands for its attributes. Initially, we construct a fully connected graph over the text blocks in a detail image.
+
+In order to obtain the attribute $f(t_{i})$ for the $i^{th}$ node, we consider the image feature, which relates to the layout and image semantics, instead of the text feature, because detail images do not have strict morphology and syntax structures. Given a detail image, we apply the Fully Convolutional Network (FCN) [17] model to detect the text regions, then extract its backbone and use the parameters pretrained on text detection to get the feature map of the whole image. Combined with the text region bounding box, we get the text block feature as the node attributes via the bi-linear interpolation technique.
+
+As for the directed edge attributes, we consider the geometric information and take advantage of the position coordinates of the text blocks. Since the rectangular
+
+text regions are of different sizes, we apply the relative position inspired by [16] to represent the edge attribute between nodes $t_i$ and $t_j$ as follows:
+
+$$
+r \left(e _ {i, j}\right) = \left[ \Delta_ {i, j} X, \Delta_ {i, j} Y, \frac {l _ {i}}{h _ {i}}, \frac {l _ {j}}{h _ {i}}, \frac {h _ {j}}{h _ {i}}, \frac {h _ {j}}{l _ {i}}, \frac {l _ {j}}{l _ {i}} \right] \tag {2}
+$$
+
+where $\Delta_{i,j}X$ and $\Delta_{i,j}Y$ stand for the horizontal and vertical Euclidean distances of the two text blocks based on their top-left coordinates, while $l_i$ and $h_i$ stand for the width and height of the $i^{th}$ text block respectively. The third value of the attributes is the aspect ratio of node $t_i$ , and the fourth to seventh values are the relative widths and heights of node $t_j$ . Because the text blocks are not single points and have different region shapes, it is necessary to consider the impact of the shape instead of only using the Euclidean distance between vertexes.
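
Eq. 2 can be sketched in Python as below; whether the offsets are signed differences or absolute distances is not specified in the text, so signed top-left offsets are assumed here, and the box layout is our convention.

```python
def edge_attributes(box_i, box_j):
    """Relative-position edge attribute r(e_ij) of Eq. 2.
    Each box is (x, y, width, height) with (x, y) the top-left corner.
    Signed offsets and this tuple layout are illustrative assumptions."""
    xi, yi, li, hi = box_i
    xj, yj, lj, hj = box_j
    return [
        xi - xj,  # horizontal offset (Delta X)
        yi - yj,  # vertical offset (Delta Y)
        li / hi,  # aspect ratio of block i
        lj / hi,  # width of j relative to height of i
        hj / hi,  # height of j relative to height of i
        hj / li,  # height of j relative to width of i
        lj / li,  # width of j relative to width of i
    ]
```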
+
+To summarize, we construct the graph of text blocks in a detail image with node attributes embedding the image features and edge attributes embedding the geometric features, as Figure 2 depicts.
+
+
+Fig. 2. The framework of graph construction and graph convolutional encoder module
+
+
+Fig. 3. The transformation of the directed weighted graph. The new feature contains the concatenation of two node feature vectors with the edge feature vector of their directed link.
+
+# 3.3 Graph convolutional encoder
+
+Compared to the traditional convolutional network, graph convolution is applied to discrete data structures and learns the embeddings of nodes through the aggregation of their local neighbors. In this paper, we simultaneously perform the convolution operation on both nodes and edges. Because every two nodes are linked by two directed edges, we form a compound feature vector by concatenating the two node feature vectors with the feature vector of the edge that links them, as Figure 3 depicts. That is, for two text blocks' nodes $t_i$ and $t_j$ with the two edges $e_{i,j}$ and $e_{j,i}$ between them, we define a new compound node $c_{i,j}$ with its feature vector $\pmb{h}_{i,j}^{0}$ at the $0^{th}$ layer as follows:
+
+$$
+\boldsymbol {h} _ {i, j} ^ {0} = \operatorname {C O N C A T} \left(f ^ {0} \left(t _ {i}\right), r ^ {0} \left(e _ {i, j}\right), f ^ {0} \left(t _ {j}\right)\right) \tag {3}
+$$
+
+then we can iteratively compute the $l^{th}$ layer feature $\boldsymbol{h}_{i,j}^{l}$ as follows:
+
+$$
+\boldsymbol {h} _ {i, j} ^ {l} = \sigma \left(\left(\boldsymbol {W} _ {v} ^ {l}\right) ^ {T} \cdot \boldsymbol {h} _ {i, j} ^ {l - 1}\right) \tag {4}
+$$
+
+where $\sigma$ refers to the nonlinear activation function, and $W_{v}^{l}$ refers to the node weight parameters of the $l^{th}$ layer. However, to get the hidden representation of node $t_i$ instead of compound node $c_{i,j}$ , we also need to analyze and aggregate the proper local neighbors of the node $t_i$ . Instead of using the traditional aggregator architectures like mean or LSTM aggregators, we use the self-attention mechanism on different hidden layers. Mathematically, the attention output embedding $f^{l}(t_{i})$ for the node $t_i$ at $l^{th}$ layer can be calculated as follows:
+
+$$
+f ^ {l} \left(t _ {i}\right) = \sigma \left(\sum_ {j \in \{k | \forall k \in N B (i) \}} \alpha_ {i, j} ^ {l} \boldsymbol {h} _ {i, j} ^ {l}\right) \tag {5}
+$$
+
+where $\sigma$ is a nonlinear activation function and $NB(i)$ refers to the local neighbors of node $t_i$ ; we mask nodes with very low attention values and do not regard them as proper local neighbors. Likewise, $\alpha_{i,j}^{l}$ refers to the attention coefficient between nodes $t_i$ and $t_j$ . Following [2], the attention coefficient can be defined as follows:
+
+$$
+\alpha_ {i, j} ^ {l} = \frac {\exp \left(\sigma \left(\left(\boldsymbol {w} _ {a} ^ {l}\right) ^ {T} \boldsymbol {h} _ {i , j}\right)\right)}{\sum_ {u \in \{k | \forall k \in N B (i) \}} \exp \left(\sigma \left(\left(\boldsymbol {w} _ {a} ^ {l}\right) ^ {T} \boldsymbol {h} _ {i , u}\right)\right)} \tag {6}
+$$
+
+where $\sigma$ refers to the LeakyReLU activation function, and $w_{a}^{l}$ is an attention weight vector of the $l^{th}$ layer.
+
+Meanwhile, we perform the edge embedding with a simpler operation, as the compound node $c_{i,j}$ already represents the edge link information of the two
+
+nodes, so we define the convolution output embedding $r^l(e_{i,j})$ for the edge $e_{i,j}$ at $l^{th}$ layer as follows:
+
+$$
+r ^ {l} \left(e _ {i, j}\right) = \sigma \left(\left(\boldsymbol {W} _ {e} ^ {l}\right) ^ {T} \cdot \boldsymbol {h} _ {i, j} ^ {l - 1}\right) \tag {7}
+$$
+
+where $\sigma$ is a nonlinear activation function, and $\pmb{W}_{e}^{l}$ refers to the edge weight parameters of the $l^{th}$ layer.
+
+The intermediate outputs $f^{l}(t_{i}), r^{l}(e_{i,j})$ and $f^{l}(t_{j})$ can be sent to the next graph convolution layer as inputs according to Eq. 3. After $K$ graph convolution operations, we obtain the final node embedding feature matrix $Z^{V}$ , composed of $f^{K}(t_{i}), \forall t_{i} \in \mathcal{N}$ , and the edge embedding feature matrix $Z^{E}$ , composed of $r^{K}(e_{i,j}), \forall e_{i,j} \in \mathcal{E}$ . Finally, we perform a mean pooling operation on the node embeddings to obtain the final graph representation $Z^{G}$ as a sequence, which is fed to the downstream pointer-based sequence decoder for the resulting order. Meanwhile, we use a fully-connected neural network to perform a link prediction task to obtain the relation features $Z^{L}$ of the text blocks, which imply the layout constraints for the downstream decoder task. In Section 3.5 we illustrate more about the layout constraints. The right blocks of Figure 2 show the process of the encoder.
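
One layer of this encoder (Eqs. 3-7) can be sketched in a few lines of numpy. The shapes, parameter names, and the ReLU/LeakyReLU choices are our assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def graph_conv_layer(node_feats, edge_feats, W_v, W_e, w_a, neighbors):
    """One layer of the node/edge graph convolution (Eqs. 3-7).
    node_feats: {i: vector}; edge_feats: {(i, j): vector};
    neighbors: {i: [j, ...]}. A numpy sketch under assumed shapes."""
    relu = lambda x: np.maximum(x, 0.0)
    leaky_relu = lambda x: np.where(x > 0, x, 0.01 * x)

    # Eq. 3: compound-node input features from the previous layer.
    h_prev = {(i, j): np.concatenate([node_feats[i], e, node_feats[j]])
              for (i, j), e in edge_feats.items()}
    # Eq. 4: compound-node features at this layer.
    h = {k: relu(W_v.T @ v) for k, v in h_prev.items()}

    new_nodes = {}
    for i, nbrs in neighbors.items():
        # Eq. 6: softmax attention coefficients over the neighbors of i.
        scores = np.array([leaky_relu(w_a @ h[(i, j)]) for j in nbrs])
        alpha = np.exp(scores) / np.exp(scores).sum()
        # Eq. 5: attention-weighted aggregation into the node embedding.
        new_nodes[i] = relu(sum(a * h[(i, j)] for a, j in zip(alpha, nbrs)))
    # Eq. 7: edge embeddings from the compound-node inputs.
    new_edges = {k: relu(W_e.T @ v) for k, v in h_prev.items()}
    return new_nodes, new_edges
```

Stacking $K$ calls of `graph_conv_layer`, feeding each layer's outputs back in as `node_feats` and `edge_feats`, mirrors the $K$ graph convolution operations described above.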
+
+# 3.4 Pointer-based attention decoder
+
+As a sequence problem, the decoding of the text block re-organization task happens sequentially. That is, at each time step $s$ , the decoder outputs the node $t_s$ according to the embeddings of the encoder and the previous outputs $t_{s'}$ with $s' < s$ . In this task, we have no output vocabulary, and the nodes in the output sequence come directly from the inputs. Therefore we apply a pointer-based decoder with a single-head attention mechanism. Figure 4 depicts the decoding process.
+
+The information considered by the decoder at each time step $s$ includes three embeddings: the graph embeddings from the encoder, including node embeddings and layout constraints, and the previous (last) node embedding. Note that at the first step we use a special start label and learn an input placeholder $v^{input}$ . Formally, we define this information as a concatenated context vector $h_c$ computed as follows:
+
+$$
+\boldsymbol {h} _ {\mathrm {c}} = \left\{ \begin{array}{l} \left[ \boldsymbol {Z} ^ {G}, \boldsymbol {Z} ^ {L}, \boldsymbol {h} _ {t _ {s - 1}} \right], s > 1 \\ \left[ \boldsymbol {Z} ^ {G}, \boldsymbol {Z} ^ {L}, \boldsymbol {v} ^ {\text {i n p u t}} \right], s = 1 \end{array} \right. \tag {8}
+$$
+
+where $[\cdot, \cdot, \cdot]$ is the horizontal concatenation. With the context vector, we will decode the corresponding node and use the result to update itself for the next prediction. Under the attention mechanism, we can compute a single query $q_{c}$ from the context vector as follows:
+
+$$
+\boldsymbol {q} _ {c} = W ^ {Q} \boldsymbol {h} _ {c}, \boldsymbol {k} _ {i} = W ^ {K} \boldsymbol {h} _ {i}, \boldsymbol {v} _ {i} = W ^ {V} \boldsymbol {h} _ {i} \tag {9}
+$$
+
+where $W^{Q}, W^{K}, W^{V}$ are learnable parameters and $\pmb{h}_i$ is the node embedding, from which we get its key $\pmb{k}_i$ and value $\pmb{v}_i$ . After that, we can compute the relation score of the query with all nodes, and mask the already visited nodes. The score $a_{c,i}$ is defined as follows:
+
+$$
+a _ {c, i} = \left\{ \begin{array}{l l} \frac {\boldsymbol {q} _ {c} ^ {T} \boldsymbol {k} _ {i}}{\sqrt {d _ {h}}}, & \text {if } i \neq s ^ {\prime}, \forall s ^ {\prime} < s \\ - \infty , & \text {otherwise} \end{array} \right. \tag {10}
+$$
+
+where $d_h$ is the node embedding dimensionality. Then we can compute the output of the softmax probability $p_i$ of node $t_i$ as follows:
+
+$$
+p _ {i} = \frac {\exp \left(a _ {c , i}\right)}{\sum_ {j} \exp \left(a _ {c , j}\right)} \tag {11}
+$$
+
+The decoder then chooses the node with the maximum probability as the output at each time step.
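
The masked pointer attention of Eqs. 9-11 for a single decoding step can be sketched in numpy as below; weight shapes and names are assumptions for illustration.

```python
import numpy as np

def pointer_step(h_c, H, W_Q, W_K, visited):
    """One step of the pointer attention decoder (Eqs. 9-11): score every
    node against the context query, mask visited nodes, and return the
    chosen index with the softmax distribution. A numpy sketch."""
    q = W_Q @ h_c                    # Eq. 9: query from the context vector
    K = H @ W_K.T                    # Eq. 9: one key per node embedding
    d_h = K.shape[1]
    scores = K @ q / np.sqrt(d_h)    # Eq. 10: scaled dot-product scores
    for s in visited:                # Eq. 10: mask already-output nodes
        scores[s] = -np.inf
    e = np.exp(scores - scores.max())
    p = e / e.sum()                  # Eq. 11: pointer softmax
    return int(np.argmax(p)), p
```

A full sequence is decoded by calling `pointer_step` repeatedly, adding each returned index to `visited` and updating the context with the chosen node's embedding.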
+
+
+Fig. 4. The framework of pointer-based attention decoder. The decoder takes the graph embeddings including node embeddings and layout constraints. At each time step $s$ , the decoder takes advantage of the graph embeddings and the last output node embedding where the learned placeholder is used at the first step. Once a node has been output, it will be masked and cannot be considered anymore. The example depicts that the output sequence $< t_3, t_1, t_2, t_4 >$ is decoded sequentially.
+
+
+# 3.5 Sinkhorn global optimization
+
+To improve the efficiency and make the maximum probability more significant, the Sinkhorn normalization algorithm can be applied to the attention matrix. Because each text block has a unique link to the next one, we can cast the attention matrix into a doubly-stochastic matrix with rows and columns summing to one. In Sinkhorn theory, any non-negative square matrix can be transformed into a doubly-stochastic matrix by alternately and iteratively scaling its rows and columns [25, 26]. Consider the attention matrix $A^{n \times n}$ before the final prediction; it can be transformed into a doubly-stochastic matrix by alternately performing row and column normalization until its rows and columns sum to one. The row $R$ and column $C$ normalizing operations are defined as follows:
+
+$$
+R _ {i, j} (A) = \frac {A _ {i , j}}{\sum_ {k = 1} ^ {n} A _ {i , k}}; C _ {i, j} (A) = \frac {A _ {i , j}}{\sum_ {k = 1} ^ {n} A _ {k , j}}. \tag {12}
+$$
+
+And the Sinkhorn normalization $SH$ for the $n$-th iteration is operated recursively by the following rule:
+
+$$
+S H ^ {n} (\mathbf {A}) = \left\{ \begin{array}{l l} \mathbf {A}, & \text {if } n = 0 \\ C (R (S H ^ {n - 1} (\mathbf {A}))), & \text {otherwise} \end{array} \right. \tag {13}
+$$
+
+Then we can add Sinkhorn normalization to obtain a globally optimal maximum probability for the output text block at each time step.
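
The iteration of Eqs. 12-13 is only a few lines in numpy; the number of iterations is a truncation choice we assume here, since exact doubly-stochastic convergence is reached only in the limit.

```python
import numpy as np

def sinkhorn(A, n_iters=50):
    """Sinkhorn normalization (Eqs. 12-13): starting from a non-negative
    square matrix, alternately normalize rows (R) and columns (C) so the
    result approaches a doubly-stochastic matrix."""
    A = np.asarray(A, dtype=float).copy()
    for _ in range(n_iters):
        A = A / A.sum(axis=1, keepdims=True)   # row normalization R
        A = A / A.sum(axis=0, keepdims=True)   # column normalization C
    return A
```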
+
+# 4 Experiments
+
+In this section, we apply our model to real Detail Image (DI) datasets with several types of products and use both global and local sequence evaluation methods to compare our model with other baselines. Furthermore, we launch a real user experience test on blind people and analyze their feedback.
+
+# 4.1 Dataset
+
+Since there is no prior work on re-organizing OCR text blocks into a proper reading order for detail images, we first collect and label detail images from e-commerce platforms to construct the DI dataset. DI consists of about 10k detail images with more than 130k text blocks from several product types such as cosmetics, daily necessities and detergents, and the number of text blocks ranges from 5 to 50 for each detail image. Due to some bad OCR results, redundant information and irrelevant descriptions, we ignore such text blocks during the reordering process to guarantee that each text block's content is valid and necessary for comprehension. The layouts of text blocks in DI include horizontal text, multi-column text, ring, star and single key-value structural text, which imply different logical reading orders. We communicated with real users, including the visually impaired and the designers of the text images, to understand how to comprehend the image only from the texts it contains; we then define the proper text order as one in which all the text blocks from OCR follow the order of visual information acquisition, keeping semantically related text blocks as close as possible in the ordering sequence. For our model, we assign $80\%$ of the dataset for training, $15\%$ for validation and $15\%$ for the test.
+
+# 4.2 Baselines
+
+We compare the performance of our model with the following baselines.
+
+Position-greedy (POS-Greedy) This baseline considers only the positions of the text blocks and scans them in row-major order. Starting from the current block, it selects the nearest remaining text block as the next one in the sequence. Statistically, more than $98\%$ of detail images satisfy the rule that their first text block lies close to the top or left region, so we use this rule to decide the first block of the sequence.
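As a hedged illustration, the POS-Greedy scan can be sketched as follows; the box format, the top-left tie-breaking heuristic, and all names are our assumptions, not the paper's exact implementation:

```python
import math

def pos_greedy_order(boxes):
    """Hypothetical sketch of the POS-Greedy baseline: start from the
    block closest to the top-left region, then repeatedly link the
    nearest remaining block (by center-to-center distance)."""
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    remaining = set(range(len(boxes)))
    # first block: closest to the top-left corner (small x + y)
    cur = min(remaining, key=lambda i: centers[i][0] + centers[i][1])
    order = [cur]
    remaining.remove(cur)
    while remaining:
        # greedily pick the nearest remaining block to the current one
        cur = min(remaining, key=lambda i: math.dist(centers[cur], centers[i]))
        order.append(cur)
        remaining.remove(cur)
    return order
```

As the paper notes, such a purely positional greedy rule breaks down on multi-column and other complex layouts.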
+
+Position-hierarchy (POS-Hier) This baseline iteratively finds the pair of text blocks with the globally minimum distance among all pairs, merges the pair into a new block, and orders the two blocks within the pair by row-major rules.
+
+Position-MLP (POS-MLP) This model considers only the geometric features, using an MLP to predict the partial order of each pair of blocks. It then solves the text block re-organization task from the predicted partial-order pairs.
+
+# 4.3 Evaluation metrics
+
+Since this is a sequence ordering problem, we first use the total order accuracy over detail images as the global sequence evaluation metric. We compare the ground-truth sequence with the model's predicted sequence by matching blocks position by position; if any two blocks are mismatched, the prediction for that detail image fails. The total order accuracy is then computed as the ratio of detail images whose OCR text blocks are perfectly matched.
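Under this definition, the metric reduces to a few lines (a sketch; the function and argument names are ours):

```python
def total_order_accuracy(gt_seqs, pred_seqs):
    """Total order accuracy: a detail image counts as correct only if
    its predicted block sequence matches the ground truth at every
    position; return the fraction of perfectly matched images."""
    perfect = sum(1 for gt, pr in zip(gt_seqs, pred_seqs) if gt == pr)
    return perfect / len(gt_seqs)

# one perfect match out of two detail images -> 0.5
acc = total_order_accuracy([[0, 1, 2], [2, 1, 0]], [[0, 1, 2], [2, 0, 1]])
```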
+
+Beyond the global sequence evaluation, inspired by the evaluation of discrete words in machine translation, we apply the BLEU score to evaluate the local continuous coverage rate of the discrete OCR text blocks. Note that since we re-organize exactly the text blocks given as input, computing one-block coverage (BLEU-1) is meaningless, as it always yields the same value.
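The paper computes BLEU with the NLTK package; as a dependency-free sketch of the core idea, the modified n-gram precision and its geometric mean over block-ID sequences look like this (no brevity penalty, since prediction and ground truth are permutations of the same blocks and thus equal in length):

```python
import math
from collections import Counter

def ngram_precision(ref, hyp, n):
    """Modified n-gram precision over block-ID sequences: text blocks
    play the role of words, as in the BLEU evaluation described above."""
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return overlap / max(sum(hyp_ngrams.values()), 1)

def bleu(ref, hyp, max_n):
    """Geometric mean of the 1..max_n precisions, in [0, 1]."""
    precisions = [ngram_precision(ref, hyp, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    return math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, swapping the last two of four blocks keeps unigram precision at 1 but drops bigram precision to 1/3, so BLEU-2 is $\sqrt{1/3} \approx 0.577$.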
+
+# 4.4 Results and Analysis
+
+We first resize all detail images to $768 \times 768$ resolution as normalized input for feature extraction by the pretrained backbone, feed them into a two-layer graph convolution encoder to obtain the graph embeddings, and let the attention decoder predict the sequence of text blocks. We train each of the last three models ten times for up to 300 epochs on an NVIDIA Tesla P100 until convergence and choose the best run on the validation set. The main results are depicted in Table 1. As shown, our proposed models GCN-PN and GCN-PN-Sinkhorn outperform the baselines on global sequence prediction, which suggests that the image feature from the FCN helps predict a more accurate re-organized sequence: the layout is related to the image feature and can thus help infer the reading order. Meanwhile, the GCN encoder and PN decoder provide a more powerful order relation analysis than the rule-based methods. Besides, adding the Sinkhorn normalization operation to the decoder benefits total order prediction: it considers all links among the text blocks and can weaken potentially wrong links that are only locally optimal.
+
+Furthermore, we analyze the local sub-sequence coverage in depth. Intuitively, we use the BLEU score usually employed for machine translation tasks. Since each text block in the result sequence can be treated as a separate unit, like a word, we compute BLEU-2 and BLEU-4 to evaluate the coverage rate over 2 and 4 subsequent text blocks. Table 2 depicts the results. Note that we use the NLTK package to compute the BLEU score, which normalizes it and maps it into the $[0,1]$ interval. When the
+
+Table 1. Total order accuracy of these models on DI test data
+
+| Method | Total Order Acc |
| POS-Greedy | 0.41 ± 0.008 |
| POS-Hier | 0.70 ± 0.010 |
| POS-MLP | 0.75 ± 0.010 |
| GCN-PN | 0.79 ± 0.009 |
| GCN-PN-Sinkhorn | 0.86 ± 0.005 |
+
+perfect matching happens, the value goes to 1, otherwise it goes towards zero; the larger the value, the higher the coverage rate. From the table we find that our GCN-PN-Sinkhorn model achieves the highest coverage rate on both 2 and 4 subsequent text blocks, which implies that the global order optimization also benefits local order optimization. We also find that POS-Hier gets a high BLEU-2 score but a low BLEU-4 score: because this method merges the two nearest blocks at each ordering step, it pays more attention to 2-neighboring text blocks or text block groups. Furthermore, the plain MLP is more easily confused by wrong sub-links than the attention-based GCN-PN models and is inferior to them on local evaluation. The greedy method shows the worst results on both global and local evaluation because the reading order of many complex layouts does not simply depend on position; for example, multi-column layouts follow the rule of reading each full column in turn.
+
+Table 2. The BLEU scores of these models on DI test data
+
+| Method | BLEU-2 | BLEU-4 |
| POS-Greedy | 0.76 | 0.40 |
| POS-Hier | 0.89 | 0.66 |
| POS-MLP | 0.82 | 0.62 |
| GCN-PN | 0.90 | 0.71 |
| GCN-PN-Sinkhorn | 0.92 | 0.74 |
+
+Fig. 5 and Fig. 6 show more details of the visual results. Fig. 5 shows a multi-column structure example; POS-Hier (5(b)) and our GCN-PN-Sinkhorn model (5(f)) perform as well as the ground truth, which also implies their ability to order local text blocks. The Sinkhorn-based model performs better than GCN-PN (5(e)) and POS-MLP because the global optimization reduces the probability of wrong links. Meanwhile, the greedy method easily makes mistakes and produces many inverse reading-order links because it is highly sensitive to variations in the text blocks' coordinates. Fig. 6 shows a KV-table structure example; POS-Hier (6(b)) cannot handle this structure well because some keys in the table are closer to other keys than to their
+
+
+(a) Ground Truth
+
+
+(b) POS-Greedy
+
+
+(c) POS-Hier
+
+
+(d) POS-MLP
+
+
+(e) GCN-PN
+
+
+(f) GCN-PN-Sinkhorn
+Fig. 5. An example of visualized reading order results. (a) is the ground truth order with orange arrow lines and (b)-(f) are the results of the methods with green arrow lines indicating the reading order.
+
+values, resulting in a wrong merge operation. POS-MLP (6(c)) can order some of the earlier text blocks but fails on the later ones, revealing its weakness on long order sequences. Both of our models show the same good results (6(d)) because the encoder-decoder structure keeps and uses more global layout information to order the sequence.
+
+# 4.5 Real user experience
+
+We also design a real user experience test in which visually impaired participants judge whether a predicted text block sequence can be comprehended fluently. In this test, we use our model to generate the text block sequences of 113 detail images as the test group, and use the untreated text block sequences (ordered by the simple top-to-bottom, left-to-right reading scheme) as the control group. Three blind people, all of whom completed compulsory education and often shop online, were invited to our experiment. Their task is to listen to both sequences and decide which one is better
+
+
+(a) Ground Truth
+
+
+(b) POS-Hier
+
+
+(c) POS-MLP
+Fig. 6. A KV-table structure example of visualized reading order results for analyzing row-major locality. (a) is the ground truth order with orange arrow lines and (b)-(d) are the results of the methods with green arrow lines indicating the reading order.
+
+
+(d) Ours
+
+to comprehend. No other comprehension assistance is provided during the experiment, and none of the three knows beforehand which model produced which sequence. It took them a week to complete the task and submit their choices and feedback. The results show that all subjects believe our model's sequences help them comprehend better on more than $70\%$ of the detail images.
+
+# 5 Conclusion
+
+In this paper, we focus on the OCR text reordering problem and propose, for the first time, an end-to-end re-organization sequence learning structure for the e-commerce scene. With a pretrained text detection network (FCN), we extract the image feature and combine it with the geometric feature to build a weighted directed graph structure. A graph convolution encoder with a self-attention mechanism then obtains the graph embeddings, and a pointer-based attention decoder with Sinkhorn global normalization predicts the permutation. Our model outperforms the baselines on both global and local evaluations and will help obtain a more accurate and thorough comprehension of detail images, especially for the visually impaired.
+
+# 6 Acknowledgement
+
+This work is supported by Alibaba-Zhejiang University Joint Institute of Frontier Technologies, The National Key R&D Program of China (No. 2018YFC2002603, 2018YFB1403202), Zhejiang Provincial Natural Science Foundation of China (No. LZ13F020001), the National Natural Science Foundation of China (No. 61972349, 61173185, 61173186) and the National Key Technology R&D Program of China (No. 2012BAI34B01, 2014BAK15B02).
+
+# References
+
+1. Baek, Y., Lee, B., Han, D., Yun, S., Lee, H.: Character region awareness for text detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9365-9374 (2019)
+2. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
+3. Binmakashen, G.M., Mahmoud, S.A.: Document layout analysis: A comprehensive survey. ACM Computing Surveys (CSUR) 52(6), 1-36 (2019)
+4. Bissacco, A., Cummins, M., Netzer, Y., Neven, H.: Photoocr: Reading text in uncontrolled conditions. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 785-792 (2013)
+5. Busta, M., Neumann, L., Matas, J.: Deep textspotter: An end-to-end trainable scene text localization and recognition framework. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2204-2212 (2017)
+6. Chakraborty, A., Paranjape, B., Kakarla, S., Ganguly, N.: Stop clickbait: Detecting and preventing clickbaits in online news media. In: 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). pp. 9-16. IEEE (2016)
+7. Cheng, J., Lapata, M.: Neural summarization by extracting sentences and words. arXiv preprint arXiv:1603.07252 (2016)
+8. Dai, Y., Huang, Z., Gao, Y., Xu, Y., Chen, K., Guo, J., Qiu, W.: Fused text segmentation networks for multi-oriented scene text detection. In: 2018 24th International Conference on Pattern Recognition (ICPR). pp. 3604-3609. IEEE (2018)
+9. Eriguchi, A., Hashimoto, K., Tsuruoka, Y.: Tree-to-sequence attentional neural machine translation. arXiv preprint arXiv:1603.06075 (2016)
+10. Filippova, K., Alfonseca, E., Colmenares, C.A., Kaiser, L., Vinyals, O.: Sentence compression by deletion with lstms. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pp. 360-368 (2015)
+11. Freeman, H., Gardner, L.: Apictorial jigsaw puzzles: The computer solution of a problem in pattern recognition. IEEE Transactions on Electronic Computers (2), 118-127 (1964)
+12. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. arXiv preprint arXiv:1410.5401 (2014)
+13. Jaderberg, M., Simonyan, K., Vedaldi, A., Zisserman, A.: Reading text in the wild with convolutional neural networks. International Journal of Computer Vision 116(1), 1-20 (2016)
+14. Khare, V., Shivakumara, P., Raveendran, P., Blumenstein, M.: A blind deconvolution model for scene text detection and recognition in video. Pattern Recognition 54, 128-148 (2016)
+15. Kool, W., van Hoof, H., Welling, M.: Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475 (2018)
+16. Liu, X., Gao, F., Zhang, Q., Zhao, H.: Graph convolution for multimodal information extraction from visually rich documents. arXiv preprint arXiv:1903.11279 (2019)
+17. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3431-3440 (2015)
+18. Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: European Conference on Computer Vision. pp. 69-84. Springer (2016)
+
+19. Pomeranz, D., Shemesh, M., Ben-Shahar, O.: A fully automated greedy square jigsaw puzzle solver. In: CVPR 2011. pp. 9-16. IEEE (2011)
+20. Rong, X., Yi, C., Tian, Y.: Unambiguous text localization and retrieval for cluttered scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5494-5502 (2017)
+21. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. Tech. rep., California Univ San Diego La Jolla Inst for Cognitive Science (1985)
+22. Santa Cruz, R., Fernando, B., Cherian, A., Gould, S.: Deeppermnet: Visual permutation learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3949-3957 (2017)
+23. See, A., Liu, P.J., Manning, C.D.: Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368 (2017)
+24. Sholomon, D., David, O., Netanyahu, N.S.: A genetic algorithm-based solver for very large jigsaw puzzles. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1767-1774 (2013)
+25. Sinkhorn, R.: A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics 35(2), 876-879 (1964)
+26. Sinkhorn, R., Knopp, P.: Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics 21(2), 343-348 (1967)
+27. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in neural information processing systems. pp. 3104-3112 (2014)
+28. Vinyals, O., Bengio, S., Kudlur, M.: Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391 (2015)
+29. Vinyals, O., Fortunato, M., Jaitly, N.: Pointer networks. In: Advances in Neural Information Processing Systems. pp. 2692-2700 (2015)
+30. Weston, J., Bordes, A., Chopra, S., Rush, A.M., van Merriënboer, B., Joulin, A., Mikolov, T.: Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 (2015)
+31. Xu, K., Wu, L., Wang, Z., Feng, Y., Witbrock, M., Sheinin, V.: Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823 (2018)
+32. Yin, F., Wu, Y.C., Zhang, X.Y., Liu, C.L.: Scene text recognition with sliding convolutional character models. arXiv preprint arXiv:1709.01727 (2017)
+33. You, Y., Jia, W., Liu, T., Yang, W.: Improving abstractive document summarization with salient information modeling. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp. 2132-2141 (2019)
+34. Zhu, Y., Yao, C., Bai, X.: Scene text detection and recognition: Recent advances and future trends. Frontiers of Computer Science 10(1), 19-36 (2016)
\ No newline at end of file
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/images.zip b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ef404e8d9f9b8af693353bb0655fa48daf4505eb
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:383163d66e7e95b6cb33a0627fa03984c96ea739f4079705d98f6c326e98edcd
+size 423869
diff --git a/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/layout.json b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d9888a9c429ae678b9fa351f58f489c205c4d2a8
--- /dev/null
+++ b/anendtoendocrtextreorganizationsequencelearningforrichtextdetailimagecomprehension/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b97e324447811d98d680e4760a0ba3828767f86b059fda8d4cb510e294f0b563
+size 426406
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_content_list.json b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fde5d62a89559fe24672abdc4ed72fa7b4fde7bb
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8eebaa9636856eefc0a8063f47720cf5f10f56473272de86550a99c5cd44d254
+size 89894
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_model.json b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1962a4dcc24639621c35734eab897bdd120e6db4
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:126ac924393f5b962fb8f2331b4eeadb5402c13aed08b1d4a6360c92c2b7ff76
+size 116563
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_origin.pdf b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..53c7a8879e667d624cc0de79572ad006f218119e
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/480ba356-fd63-459a-9742-d18f6a62230b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31e0eae9d4060c3d7c5b1c594732aa84a2bcacef518bcaa3c2830c6736b9601f
+size 3032124
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/full.md b/anensembleofepochwiseempiricalbayesforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b8a2627b3a444ca55f8cf3dd1d7eae80ca0735b
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/full.md
@@ -0,0 +1,341 @@
+# An Ensemble of Epoch-wise Empirical Bayes for Few-shot Learning
+
+Yaoyao Liu1, Bernt Schiele1, and Qianru Sun2
+{yaoyao.liu, schiele, qsun}@mpi-inf.mpg.de qianrusun@smu.edu.sg
+
+$^{1}$ Max Planck Institute for Informatics, Saarland Informatics Campus $^{2}$ School of Information Systems, Singapore Management University
+
+Abstract. Few-shot learning aims to train efficient predictive models with a few examples. The lack of training data leads to poor models that perform high-variance or low-confidence predictions. In this paper, we propose to meta-learn the ensemble of epoch-wise empirical Bayes models (E $^3$ BM) to achieve robust predictions. "Epoch-wise" means that each training epoch has a Bayes model whose parameters are specifically learned and deployed. "Empirical" means that the hyperparameters, e.g., used for learning and ensembling the epoch-wise models, are generated by hyperprior learners conditional on task-specific data. We introduce four kinds of hyperprior learners by considering inductive vs. transductive, and epoch-dependent vs. epoch-independent, in the paradigm of meta-learning. We conduct extensive experiments for five-class few-shot tasks on three challenging benchmarks: miniImageNet, tieredImageNet, and FC100, and achieve top performance using the epoch-dependent transductive hyperprior learner, which captures the richest information. Our ablation study shows that both "epoch-wise ensemble" and "empirical" encourage high efficiency and robustness in the model performance $^1$ .
+
+# 1 Introduction
+
+The ability of learning new concepts from a handful of examples is well-handled by humans, while in contrast, it remains challenging for machine models whose typical training requires a significant amount of data for good performance [34]. However, in many real-world applications, we have to face the situations of lacking a significant amount of training data, as e.g., in the medical domain. It is thus desirable to improve machine learning models to handle few-shot settings where each new concept has very scarce examples [13, 30, 39, 70].
+
+Meta-learning methods aim to tackle the few-shot learning problem by transferring experience from similar few-shot tasks [7]. There are different meta strategies, among which the gradient descent based methods are particularly promising for today's neural networks [1, 13-15, 20, 25, 38, 70, 74, 81, 83, 84, 86]. These methods follow a unified meta-learning procedure that contains two loops. The
+
+
+(a) MAML [13]
+
+
+(b) SIB [25]
+
+
+(c) $\mathrm{E}^3\mathrm{BM}$ (ours)
+Fig. 1. Conceptual illustrations of the model adaptation on the blue, red and yellow tasks. (a) MAML [13] is the classical inductive method that meta-learns a network initialization $\theta$ that is used to learn a single base-learner on each task, e.g., $\Theta_3^{\alpha}$ in the blue task. (b) SIB [25] is a transductive method that formulates a variational posterior as a function of both labeled training data $\mathcal{T}^{(tr)}$ and unlabeled test data $x^{(te)}$ . It also uses a single base-learner and optimizes the learner by running several synthetic gradient steps on $x^{(te)}$ . (c) Our $\mathrm{E}^3\mathrm{BM}$ is a generic method that learns to combine the epoch-wise base-learners (e.g., $\Theta_1$ , $\Theta_2$ , and $\Theta_3$ ), and to generate task-specific learning rates $\alpha$ and combination weights $v$ that encourage robust adaptation. $\bar{\Theta}_{1:3}$ denotes the ensemble result of three base-learners; $\varPsi_{\alpha}$ and $\varPsi_v$ denote the hyperprior learners learned to generate $\alpha$ and $v$ , respectively. Note that figure (c) is based on $\mathrm{E}^3\mathrm{BM} + \mathrm{MAML}$ , i.e., plug-in our $\mathrm{E}^3\mathrm{BM}$ to MAML baseline. Other plug-in versions are introduced in Sec. 4.4.
+
+inner loop learns a base-learner for each individual task, and the outer loop uses the validation loss of the base-learner to optimize a meta-learner. In previous works [1, 13, 14, 70], the task of the meta-learner is to initialize the base-learner for the fast and efficient adaptation to the few training samples in the new task.
+
+In this work, we aim to address two shortcomings of the previous works. First, the learning process of a base-learner for few-shot tasks is quite unstable [1], and often results in high-variance or low-confidence predictions. An intuitive solution is to train an ensemble of models and use the combined prediction which should be more robust [6,29,54]. However, it is not obvious how to obtain and combine multiple base-learners given the fact that a very limited number of training examples are available. Rather than learning multiple independent base-learners [79], we propose a novel method of utilizing the sequence of epoch-wise base-learners (while training a single base-learner) as the ensemble. Second, it is well-known that the values of hyperparameters, e.g., for initializing and updating models, are critical for best performance, and are particularly important for few-shot learning. In order to explore the optimal hyperparameters, we propose to employ the empirical Bayes method in the paradigm of meta-learning. In specific, we meta-learn hyperprior learners with meta-training tasks, and use them to generate task-specific hyperparameters, e.g., for updating and assembling multiple base-learners. We call the resulting novel approach $\mathbf{E}^3\mathbf{BM}$ , which learns the Ensemble of Epoch-wise Empirical Bayes Models for each few-shot task.
+
+Our "epoch-wise models" are different models, since each of them results from a specific training epoch and is trained with a specific set of hyperparameter values. During test, $\mathrm{E}^3\mathrm{BM}$ combines the ensemble of models' predictions with soft ensembling weights to produce more robust results. In this paper, we argue that during model adaptation to few-shot tasks, the most active adaptation actually happens in the early epochs, while later epochs converge to and may even overfit the training data. Related works use the single base-learner obtained from the last epoch, so their meta-learners learn only partial adaptation experience [13, 14, 25, 70]. In contrast, our $\mathrm{E}^3\mathrm{BM}$ leverages an ensemble modeling strategy that adapts base-learners at different epochs, each with task-specific hyperparameters for updating and ensembling. It thus obtains the optimized combinational adaptation experience. Figure 1 presents the conceptual illustration of $\mathrm{E}^3\mathrm{BM}$ , compared to those of the classical method MAML [13] and the state-of-the-art SIB [25].
+
+Our main contributions are three-fold. (1) A novel few-shot learning approach $\mathrm{E}^3\mathrm{BM}$ that learns to learn and combine an ensemble of epoch-wise Bayes models for more robust few-shot learning. (2) Novel hyperprior learners in $\mathrm{E}^3\mathrm{BM}$ to generate the task-specific hyperparameters for learning and combining epochwise Bayes models. In particular, we introduce four kinds of hyperprior learner by considering inductive [13, 70] and transductive learning methods [25], and each with either epoch-dependent (e.g., LSTM) or epoch-independent (e.g., epochwise FC layer) architectures. (3) Extensive experiments on three challenging few-shot benchmarks, miniImageNet [73], tieredImageNet [58] and Fewshot-CIFAR100 (FC100) [53]. We plug-in our $\mathrm{E}^3\mathrm{BM}$ to the state-of-the-art few-shot learning methods [13, 25, 70] and obtain consistent performance boosts. We conduct extensive model comparison and observe that our $\mathrm{E}^3\mathrm{BM}$ employing an epoch-dependent transductive hyperprior learner achieves the top performance on all benchmarks.
+
+# 2 Related Works
+
+Few-shot learning & meta-learning. Research literature on few-shot learning paradigms exhibits a high diversity, ranging from data augmentation techniques [9, 75, 77] over shared feature representations [2, 76] to meta-learning [18, 72]. In this paper, we focus on the meta-learning paradigm that leverages few-shot learning experiences from similar tasks based on the episodic formulation (see Section 3). Related works can be roughly divided into three categories. (1) Metric learning methods [12, 24, 40, 41, 64, 71, 73, 78, 82] aim to learn a similarity space in which learning is efficient for few-shot examples. The metrics include Euclidean distance [64], cosine distance [8, 73], relation modules [24, 41, 71] and graph-based similarity [45, 62]. Metric-based task-specific feature representation learning has also been presented in many related works [12, 24, 41, 78]. (2) Memory network methods [50, 52, 53] aim to learn training "experience" from seen tasks and then generalize to unseen ones. A model with external memory storage is designed specifically for fast learning in a few iterations, e.g., Meta Networks [52], the Simple Neural Attentive Learner (SNAIL) [50], and Task Dependent Adaptive Metric (TADAM) [53]. (3) Gradient descent based methods [1, 13, 14, 20, 25, 37, 38, 43, 57, 70, 86] usually employ a meta-learner that learns to fast adapt an NN base-learner to a new task within a few optimization steps. For example, Rusu et al. [61] introduced a classifier generator as the meta-learner, which outputs parameters for each specific task. Lee et al. [37] presented a meta-learning approach with convex base-learners for few-shot tasks. Finn et al. [13] designed a meta-learner called MAML, which learns to effectively initialize the parameters of an NN base-learner for a new task. Sun et al. [69, 70] introduced an efficient knowledge transfer operator on deeper neural networks and achieved a significant improvement for few-shot learning models. Hu et al. [25] proposed to update the base-learner with synthetic gradients generated by a variational posterior conditional on unlabeled data. Our approach is closely related to gradient descent based methods [1, 13, 25, 69, 70]. An important difference is that we learn how to combine an ensemble of epoch-wise base-learners and how to generate efficient hyperparameters for them, while other methods such as MAML [13], MAML++ [1], LEO [61], MTL [69, 70], and SIB [25] use a single base-learner.
+
+Hyperparameter optimization. Building a model for a new task is a process of exploration-exploitation. Exploring suitable architectures and hyperparameters are important before training. Traditional methods are model-free, e.g., based on grid search [4,28,42]. They require multiple full training trials and are thus costly. Model-based hyperparameter optimization methods are adaptive but sophisticated, e.g., using random forests [27], Gaussian processes [65] and input warped Gaussian processes [67] or scalable Bayesian optimization [66]. In our approach, we meta-learn a hyperprior learner to output optimal hyperparameters by gradient descent, without additional manual labor. Related methods using gradient descent mostly work for single model learning in an inductive way [3,10,15,44,46-49]. While, our hyperprior learner generates a sequence of hyperparameters for multiple models, in either the inductive or the transductive learning manner.
+
+Ensemble modeling. It is a strategy [26, 85] that uses multiple algorithms to improve machine learning performance, and it has proved effective in reducing problems related to overfitting [35, 68]. Mitchell et al. [51] provided a theoretical explanation for it. Boosting is one classical way to build an ensemble, e.g., AdaBoost [16] and Gradient Tree Boosting [17]. Stacking combines multiple models by learning a combiner and applies to both supervised learning [6, 29, 54] and unsupervised learning [63]. Bootstrap aggregating (i.e., bagging) builds an ensemble of models through parallel training [6], e.g., random forests [22]. The ensemble can also be built on a temporal sequence of models [36]. Some recent works have applied ensemble modeling to few-shot learning. Yoon et al. proposed Bayesian MAML (BMAML), which trains multiple instances of the base model to reduce meta-level overfitting [80]. The most recent work [11] encourages multiple networks to cooperate while keeping predictive diversity. Its networks are trained with carefully designed penalty functions, different from our automated method using empirical Bayes. Besides, it needs to train many more network parameters than ours. Detailed comparisons are given in the experiment section.
+
+# 3 Preliminary
+
+In this section, we introduce the unified episodic formulation of few-shot learning, following [13, 57, 73]. This formulation was first proposed for few-shot classification in [73]. Its problem definition differs from traditional classification in three aspects: (1) the main phases are not training and test but meta-training and meta-test, each of which includes its own training and test; (2) the samples in meta-training and meta-test are not datapoints but episodes, i.e., few-shot classification tasks; and (3) the objective is not to classify unseen datapoints but to fast adapt the meta-learned knowledge to the learning of new tasks.
+
+Given a dataset $\mathcal{D}$ for meta-training, we first sample few-shot episodes (tasks) $\{\mathcal{T}\}$ from a task distribution $p(\mathcal{T})$ such that each episode $\mathcal{T}$ contains a few samples of a few classes, e.g., 5 classes and 1 shot per class. Each episode $\mathcal{T}$ includes a training split $\mathcal{T}^{(tr)}$ to optimize a specific base-learner, and a test split $\mathcal{T}^{(te)}$ to compute a generalization loss to optimize a global meta-learner. For meta-test, given an unseen dataset $\mathcal{D}_{un}$ (i.e., samples are from unseen classes), we sample a test task $\mathcal{T}_{un}$ to have the same-size training/test splits. We first initiate a new model with meta-learned network parameters (output from our hyperprior learner), then train this model on the training split $\mathcal{T}_{un}^{(tr)}$ . We finally evaluate the performance on the test split $\mathcal{T}_{un}^{(te)}$ . If we have multiple tasks, we report average accuracy as the final result.
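For concreteness, sampling one such episode with its $\mathcal{T}^{(tr)}$/$\mathcal{T}^{(te)}$ splits could be sketched as follows (the dict-of-class-samples layout, the 15-query default, and all names are our assumptions for illustration):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15):
    """Hypothetical episodic sampler: draw n_way classes, then k_shot
    training and q_query test samples per class, mirroring the
    training/test splits of one few-shot task described above."""
    classes = random.sample(sorted(dataset), n_way)
    train, test = [], []
    for label, cls in enumerate(classes):
        samples = random.sample(dataset[cls], k_shot + q_query)
        train += [(x, label) for x in samples[:k_shot]]   # T^(tr)
        test += [(x, label) for x in samples[k_shot:]]    # T^(te)
    return train, test

# toy dataset: 10 classes with 20 samples each
data = {f"c{i}": list(range(20)) for i in range(10)}
tr, te = sample_episode(data)  # 5-way 1-shot: 5 train, 75 test samples
```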
+
+# 4 An Ensemble of Epoch-wise Empirical Bayes Models
+
+As shown in Fig. 2, $\mathrm{E}^3\mathrm{BM}$ trains a sequence of epoch-wise base-learners $\{\Theta_m\}$ with the training data $\mathcal{T}^{(tr)}$ and learns to combine their predictions $\{z_m^{(te)}\}$ on the test data $x^{(te)}$ for the best performance. This ensembling strategy yields more robust predictions. The hyperparameters of each base-learner, i.e., the learning rates $\alpha$ and combination weights $v$ , are generated by the hyperprior learners conditioned on task-specific data, e.g., $x^{(tr)}$ and $x^{(te)}$ . This encourages high diversity and informativeness among the ensembled models.
+
+# 4.1 Empirical Bayes method
+
+Our approach can be formulated as an empirical Bayes method that learns two levels of models for a few-shot task. The first level has hyperprior learners that generate hyperparameters for updating and combining the second-level models. More specifically, these second-level models are trained with the loss derived from the combination of their predictions on training data. After that, their loss
+
+
+Fig. 2. The computing flow of the proposed $\mathrm{E}^3\mathrm{BM}$ approach in one meta-training episode. For a meta-test task, the computation ends with the predictions. The hyperprior learner predicts task-specific hyperparameters, i.e., learning rates and multi-model combination weights. When its input contains $x^{(te)}$ , it is transductive, otherwise inductive. Its detailed architecture is given in Fig. 3.
+
+of test data is used to optimize the hyperprior learners. This process is also called meta update; see the dashed arrows in Fig. 2.
+
+Specifically, we sample $K$ episodes $\{\mathcal{T}_k\}_{k=1}^K$ from the meta-training data $\mathcal{D}$ . Let $\Theta$ denote the base-learner and $\psi$ its hyperparameters. An episode $\mathcal{T}_k$ aims to train $\Theta$ to recognize different concepts, so we consider using concept-related (task-specific) data to customize $\Theta$ through a hyperprior $p(\psi_k)$ . To achieve this, we first formulate the empirical Bayes method with the marginal likelihood, according to the hierarchical structure of the data, as follows,
+
+$$
+p (\mathcal {T}) = \prod_ {k = 1} ^ {K} p \left(\mathcal {T} _ {k}\right) = \prod_ {k = 1} ^ {K} \int_ {\psi_ {k}} p \left(\mathcal {T} _ {k} \mid \psi_ {k}\right) p \left(\psi_ {k}\right) d \psi_ {k}. \tag {1}
+$$
+
+Then, we use variational inference [23] to estimate $\{p(\psi_k)\}_{k=1}^K$ . We parametrize distribution $q_{\varphi_k}(\psi_k)$ with $\varphi_k$ for each $p(\psi_k)$ , and update $\varphi_k$ to increase the similarity between $q_{\varphi_k}(\psi_k)$ and $p(\psi_k)$ . As in standard probabilistic modeling, we derive an evidence lower bound on the log version of Eq. (1) to update $\varphi_k$ ,
+
+$$
+\log p (\mathcal {T}) \geqslant \sum_ {k = 1} ^ {K} \left[ \mathbb {E} _ {\psi_ {k} \sim q _ {\varphi_ {k}}} \left[ \log p \left(\mathcal {T} _ {k} \mid \psi_ {k}\right) \right] - D _ {\mathrm {K L}} \left(q _ {\varphi_ {k}} \left(\psi_ {k}\right) | | p \left(\psi_ {k}\right)\right) \right]. \tag {2}
+$$
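For completeness, the bound follows from Jensen's inequality after introducing $q_{\varphi_k}$ into the marginal likelihood of a single task; a one-line sketch of this standard step is

$$
\log p(\mathcal{T}_k) = \log \int_{\psi_k} q_{\varphi_k}(\psi_k)\, \frac{p(\mathcal{T}_k \mid \psi_k)\, p(\psi_k)}{q_{\varphi_k}(\psi_k)}\, d\psi_k \geqslant \mathbb{E}_{\psi_k \sim q_{\varphi_k}}\left[\log p(\mathcal{T}_k \mid \psi_k)\right] - D_{\mathrm{KL}}\left(q_{\varphi_k}(\psi_k) \,||\, p(\psi_k)\right),
$$

and summing over the $K$ tasks yields Eq. (2).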
+
+Therefore, the problem of using $q_{\varphi_k}(\psi_k)$ to approximate the best estimate of $p(\psi_k)$ becomes equivalent to maximizing the evidence lower bound [5, 23, 25] in Eq. (2) with respect to $\{\varphi_k\}_{k=1}^K$ , as follows,
+
+$$
+\min _ {\left\{\varphi_ {k} \right\} _ {k = 1} ^ {K}} \frac {1}{K} \sum_ {k = 1} ^ {K} \left[ \mathbb {E} _ {\psi_ {k} \sim q _ {\varphi_ {k}}} [ - \log p (\mathcal {T} _ {k} | \psi_ {k}) ] + D _ {\mathrm {K L}} \left(q _ {\varphi_ {k}} (\psi_ {k}) | | p (\psi_ {k})\right) \right]. \tag {3}
+$$
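If, purely for illustration, both the variational distribution $q_{\varphi_k}(\psi_k)$ and the hyperprior $p(\psi_k)$ are taken to be univariate Gaussians (an assumption made only for this sketch), the KL penalty in Eq. (3) has a closed form:

```python
import math

def kl_gaussian(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form D_KL(q || p) between two univariate Gaussians."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

# An identical q incurs no penalty; moving q away from the prior increases it.
no_penalty = kl_gaussian(0.0, 1.0, 0.0, 1.0)   # 0.0
shifted = kl_gaussian(1.0, 1.0, 0.0, 1.0)      # 0.5
```

The objective in Eq. (3) trades this penalty off against the expected negative log-likelihood of the task data.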
+
+To improve the robustness of few-shot models, existing methods sample a large number of episodes during meta-training [13, 70]. Letting each episode employ its own hyperprior $p(\psi_k)$ incurs a huge computational burden, making it difficult to solve the aforementioned optimization problem. To tackle this, we leverage a technique called "amortized variational inference" [25, 32, 59]. We parameterize the per-episode variational distributions $\{\varphi_k\}_{k=1}^K$ in Eq. (3) with a unified deep neural network $\Psi(\cdot)$ taking $x_k^{(tr)}$ (inductive learning) or $\{x_k^{(tr)}, x_k^{(te)}\}$ (transductive learning) as inputs, where $x_k^{(tr)}$ and $x_k^{(te)}$ respectively denote the training and test samples in the $k$ -th episode. In this paper, we call $\Psi(\cdot)$ the hyperprior learner. As shown in Fig. 3, we additionally feed $\Psi(\cdot)$ with the training gradients $\nabla \mathcal{L}_\Theta(\mathcal{T}_k^{(tr)})$ to encourage it to "consider" the current state of the training epoch. We mentioned in Sec. 1 that base-learners at different epochs are adapted differently, so we expect the hyperprior learner to "observe" and "utilize" this information to produce effective hyperparameters. By replacing $q_{\varphi_k}$ with $q_{\Psi(\cdot)}$ , Problem (3) can be rewritten as:
+
+$$
+\min _ {\Psi} \frac {1}{K} \sum_ {k = 1} ^ {K} \left[ \mathbb {E} _ {\psi_ {k} \sim q _ {\Psi (\cdot)}} \left[ - \log p \left(\mathcal {T} _ {k} \mid \psi_ {k}\right) \right] + D _ {\mathrm {K L}} \left(q _ {\Psi (\cdot)} \left(\psi_ {k}\right) \mid \mid p \left(\psi_ {k}\right)\right) \right]. \tag {4}
+$$
+
+Then, we solve Problem (4) by optimizing $\varPsi(\cdot)$ with the meta gradient descent method used in classical meta-learning paradigms [13, 25, 70]. We elaborate on the details of learning $\{\Theta_m\}$ and meta-learning $\varPsi(\cdot)$ in the following sections.
+
+
+Fig.3. Two options of hyperprior learner at the $m$ -th base update epoch. In terms of the mapping function, we deploy either FC layers to build epoch-independent hyperprior learners, or LSTM to build an epoch-dependent learner. Values in dashed box were learned from previous tasks.
+
+
+
+
+# 4.2 Learning the ensemble of base-learners
+
+Previous works have shown that training multiple instances of the base-learner helps achieve robust few-shot learning [12, 79]. However, they suffer from the computational burden of optimizing multiple copies of neural networks in parallel, and are not easy to generalize to deeper neural architectures. If the computation of second-order derivatives is included in meta gradient descent [13], this burden becomes even less affordable. In contrast, our approach is free from this problem, because it is built on top of optimization-based meta-learning models, e.g., MAML [13], MTL [70], and SIB [25], which naturally produce a sequence of models along the training epochs in each episode.
+
+Given an episode $\mathcal{T} = \{\mathcal{T}^{(tr)},\mathcal{T}^{(te)}\} = \{\{x^{(tr)},y^{(tr)}\},\{x^{(te)},y^{(te)}\}\}$ , let $\Theta_{m}$ denote the parameters of the base-learner working at epoch $m$ (w.r.t. $m$ -th base-learner or BL- $m$ ), with $m\in \{1,\dots,M\}$ . Basically, we initiate BL-1 with parameters $\theta$ (network weights and bias) and hyperparameters (e.g., learning rate $\alpha$ ), where $\theta$ is meta-optimized as in MAML [13], and $\alpha$ is generated by the proposed hyperprior learner $\Psi_{\alpha}$ . We then adapt BL-1 with normal gradient descent on the training set $\mathcal{T}^{(tr)}$ , and use the adapted weights and bias to initialize BL-2. The general process is thus as follows,
+
+$$
+\Theta_ {0} \leftarrow \theta , \tag {5}
+$$
+
+$$
+\Theta_ {m} \leftarrow \Theta_ {m - 1} - \alpha_ {m} \nabla_ {\Theta} \mathcal {L} _ {m} ^ {(t r)} = \Theta_ {m - 1} - \Psi_ {\alpha} (\tau , \nabla_ {\Theta} \mathcal {L} _ {m} ^ {(t r)}) \nabla_ {\Theta} \mathcal {L} _ {m} ^ {(t r)}, \tag {6}
+$$
+
+where $\alpha_{m}$ is the learning rate output by $\varPsi_{\alpha}$ , and $\nabla_{\Theta}\mathcal{L}_{m}^{(tr)}$ are the derivatives of the training loss, i.e., gradients. $\tau$ represents either $x^{(tr)}$ in the inductive setting, or $\{x^{(tr)},x^{(te)}\}$ in the transductive setting. Note that $\Theta_0$ is introduced to make the notation consistent, and the subscript $m$ is omitted from $\varPsi_{\alpha}$ for conciseness. Let $F(x;\Theta_m)$ denote the prediction scores of input $x$ ; the base-training loss on $\mathcal{T}^{(tr)} = \left\{x^{(tr)},y^{(tr)}\right\}$ can then be unfolded as,
+
+$$
+\mathcal {L} _ {m} ^ {(t r)} = L _ {c e} \left(F \left(x ^ {(t r)}; \Theta_ {m - 1}\right), y ^ {(t r)}\right), \tag {7}
+$$
+
+where $L_{ce}$ is the softmax cross entropy loss. During episode test, each base-learner BL- $m$ infers the prediction scores $z_{m}$ for test samples $x^{(te)}$ ,
+
+$$
+z _ {m} = F \left(x ^ {(t e)}; \Theta_ {m}\right). \tag {8}
+$$
+
+Assume the hyperprior learner $\varPsi_v$ generates the combination weight $v_{m}$ for BL- $m$ . The final prediction score is initialized as $\hat{y}_1^{(te)} = v_1z_1$ . For the $m$ -th base epoch, the prediction $z_{m}$ will be calculated and added to $\hat{y}^{(te)}$ as follows,
+
+$$
+\hat {y} _ {m} ^ {(t e)} \leftarrow v _ {m} z _ {m} + \hat {y} _ {m - 1} ^ {(t e)} = \varPsi_ {v} (\tau , \nabla_ {\Theta} \mathcal {L} _ {m} ^ {(t r)}) F (x ^ {(t e)}; \Theta_ {m}) + \hat {y} _ {m - 1} ^ {(t e)}. \tag {9}
+$$
+
+In this way, we can update prediction scores without storing base-learners or feature maps in the memory.
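The inner loop of Eqs. (5)-(9) can be sketched on a toy logistic-regression base-learner. Here the per-epoch learning rates `alphas` and weights `vs` are fixed constants standing in for the outputs of $\varPsi_{\alpha}$ and $\varPsi_v$ , and the data are synthetic, so this is only a shape-level illustration of the accumulation, not the paper's networks.

```python
import numpy as np

def run_ensemble_episode(x_tr, y_tr, x_te, alphas, vs):
    """Train epoch-wise base-learners and accumulate their weighted test scores."""
    theta = np.zeros(x_tr.shape[1])              # Theta_0 <- theta, Eq. (5)
    y_hat = np.zeros(x_te.shape[0])              # running combination, Eq. (9)
    for alpha, v in zip(alphas, vs):             # one iteration per base epoch m
        p = 1.0 / (1.0 + np.exp(-x_tr @ theta))  # sigmoid predictions on T^(tr)
        grad = x_tr.T @ (p - y_tr) / len(y_tr)   # gradient of the CE loss, Eq. (7)
        theta = theta - alpha * grad             # Theta_m update, Eq. (6)
        z = x_te @ theta                         # BL-m test scores, Eq. (8)
        y_hat += v * z                           # add weighted prediction, Eq. (9)
    return y_hat

rng = np.random.default_rng(0)
x_tr = rng.normal(size=(5, 4))                   # 5 training samples, 4 features
y_tr = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
x_te = rng.normal(size=(15, 4))                  # 15 episode-test samples
scores = run_ensemble_episode(x_tr, y_tr, x_te,
                              alphas=[0.1] * 3, vs=[0.2, 0.5, 0.3])
```

Note that only the running sum $\hat{y}^{(te)}$ is carried across epochs, which is why no per-epoch base-learner copies need to be stored.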
+
+# 4.3 Meta-learning the hyperprior learners
+
+As presented in Fig. 3, we introduce two architectures, i.e., LSTM or individual FC layers, for the hyperprior learner. FC layers at different epochs are independent. Using LSTM to "connect" all epochs is expected to "grasp" more task-specific information from the overall training states of the task. In the following, we elaborate the meta-learning details for both designs.
+
+Assume that before the $k$ -th episode, we have meta-learned the base learning rates $\{\alpha_{m}^{\prime}\}_{m = 1}^{M}$ and combination weights $\{v_m^\prime \}_{m = 1}^M$ . Next, in the $k$ -th episode, specifically at the $m$ -th epoch as shown in Fig. 3, we compute the mean values of $\tau$ and $\nabla_{\Theta_m}\mathcal{L}_m^{(tr)}$ , respectively, over all samples. We then input the concatenated value to the FC or LSTM mapping function as follows,
+
+$$
+\Delta \alpha_{m}, \Delta v_{m} = \mathrm{FC}_{m}\left(\mathrm{concat}\left[\bar{\tau}; \overline{\nabla_{\Theta_{m}} \mathcal{L}_{m}^{(tr)}}\right]\right), \text{ or} \tag{10}
+$$
+
+$$
+[\Delta \alpha_{m}, \Delta v_{m}], h_{m} = \mathrm{LSTM}\left(\mathrm{concat}\left[\bar{\tau}; \overline{\nabla_{\Theta_{m}} \mathcal{L}_{m}^{(tr)}}\right], h_{m-1}\right), \tag{11}
+$$
+
+where $h_m$ and $h_{m-1}$ are the hidden states at epoch $m$ and epoch $m-1$ , respectively. We then use the output values to update hyperparameters as,
+
+$$
+\alpha_{m} = \lambda_{1} \alpha_{m}^{\prime} + (1 - \lambda_{1}) \Delta \alpha_{m}, \quad v_{m} = \lambda_{2} v_{m}^{\prime} + (1 - \lambda_{2}) \Delta v_{m}, \tag{12}
+$$
+
+where $\lambda_{1}$ and $\lambda_{2}$ are fixed fractions in $(0,1)$ . Using learning rate $\alpha_{m}$ , we update BL- $(m-1)$ to be BL- $m$ with Eq. (6). After $M$ epochs, we obtain the combination of predictions $\hat{y}_{M}^{(te)}$ (see Eq. (9)) on test samples. In training tasks, we compute the test loss as,
+
+$$
+\mathcal {L} ^ {(t e)} = L _ {c e} \left(\hat {y} _ {M} ^ {(t e)}, y ^ {(t e)}\right). \tag {13}
+$$
+
+We use this loss to calculate meta gradients to update $\varPsi$ as follows,
+
+$$
+\varPsi_ {\alpha} \leftarrow \varPsi_ {\alpha} - \beta_ {1} \nabla_ {\varPsi_ {\alpha}} \mathcal {L} ^ {(t e)}, \quad \varPsi_ {v} \leftarrow \varPsi_ {v} - \beta_ {2} \nabla_ {\varPsi_ {v}} \mathcal {L} ^ {(t e)}, \tag {14}
+$$
+
+where $\beta_{1}$ and $\beta_{2}$ are meta-learning rates that determine the respective step sizes for updating $\varPsi_{\alpha}$ and $\varPsi_{v}$ . These updates back-propagate the test gradients down to the input layer, unrolling all base training gradients of $\Theta_1\sim \Theta_M$ . The process thus involves a gradient through a gradient [13, 14, 70]. Computationally, it requires an additional backward pass through $\mathcal{L}^{(tr)}$ to compute Hessian-vector products, which is supported by standard numerical computation libraries such as TensorFlow [19] and PyTorch [55].
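A minimal sketch of the epoch-independent (FC) option of Eqs. (10) and (12): the FC parameters `W`, `b` stand in for the meta-learned weights of the hyperprior learner, the inputs are random stand-ins for $\bar{\tau}$ and the mean gradient, and $\lambda_1 = \lambda_2 = 0.9$ is an assumed value for the fixed fractions.

```python
import numpy as np

def fc_hyperprior_step(tau_bar, grad_bar, W, b, alpha_prev, v_prev,
                       lam1=0.9, lam2=0.9):
    """Generate (alpha_m, v_m) from task-specific inputs, Eqs. (10) and (12)."""
    inp = np.concatenate([tau_bar, grad_bar])           # concat[tau_bar; grad_bar]
    d_alpha, d_v = W @ inp + b                          # FC_m output, Eq. (10)
    alpha_m = lam1 * alpha_prev + (1 - lam1) * d_alpha  # blend with previously
    v_m = lam2 * v_prev + (1 - lam2) * d_v              # learned values, Eq. (12)
    return alpha_m, v_m

rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(2, 8))   # FC layer: 8 inputs -> (d_alpha, d_v)
b = np.zeros(2)
alpha_m, v_m = fc_hyperprior_step(rng.normal(size=4), rng.normal(size=4),
                                  W, b, alpha_prev=0.01, v_prev=0.5)
```

With a zero FC output, the update reduces to keeping 90% of the value learned on previous tasks, so the generated hyperparameters move smoothly across tasks.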
+
+# 4.4 Plugging-in $\mathbf{E}^3\mathbf{BM}$ to baseline methods
+
+The optimization of $\varPsi$ relies on the meta gradient descent method, which was first applied to few-shot learning in MAML [13]. Recently, MTL [70] showed more efficiency by implementing that method on deeper pre-trained CNNs (e.g., ResNet-12 [70] and ResNet-25 [69]). SIB [25] was built on an even deeper and wider network (WRN-28-10), and it achieved top performance by synthesizing gradients in transductive learning. These three methods are all optimization-based, and use the single base-learner of the last base-training epoch. In the following, we describe how to learn and combine multiple base-learners in MTL, SIB and MAML, respectively, using our $\mathrm{E}^3\mathrm{BM}$ approach.
+
+According to [25, 70], we pre-train the feature extractor $f$ on a many-shot classification task using the whole set of $\mathcal{D}$ . The meta-learner in MTL is called scaling and shifting weights $\varPhi_{SS}$ , and in SIB is called synthetic information bottleneck network $\phi(\lambda, \xi)$ . Besides, there is a common meta-learner called base-learner initializer $\theta$ , i.e., the same $\theta$ in Fig. 2, in both methods. In MAML, the only base-learner is $\theta$ and there is no pre-training for its feature extractor $f$ .
+
+Given an episode $\mathcal{T}$ , we feed training images $x^{(tr)}$ and test images $x^{(te)}$ to the feature extractor $f\odot \varPhi_{SS}$ in MTL ( $f$ in SIB and MAML), and obtain the embedding $e^{(tr)}$ and $e^{(te)}$ , respectively. Then in MTL, we use $e^{(tr)}$ with labels to train base-learner $\Theta$ for $M$ times to get $\{\Theta_m\}_{m = 1}^M$ with Eq. (6). In SIB, we use its multilayer perceptron (MLP) net to synthesize gradients conditional on $e^{(te)}$ to indirectly update $\{\Theta_m\}_{m = 1}^M$ . During these updates, our hyperprior learner $\varPsi_{\alpha}$ derives the learning rates for all epochs. In episode test, we feed $e^{(te)}$ to $\{\Theta_m\}_{m = 1}^M$ and get the combined prediction $\{z_m\}_{m = 1}^M$ with Eq. (9). Finally, we compute the test loss to meta-update $[\varPsi_{\alpha};\varPsi_{v};\varPhi_{SS};\theta ]$ in MTL, $[\varPsi_{\alpha};\varPsi_{v};\phi (\lambda ,\xi);\theta ]$ in SIB, and $[f;\theta ]$ in MAML. We call the resulting methods MTL+E3BM, SIB+E3BM, and MAML+E3BM, respectively, and demonstrate their improved efficiency over baseline models [13,25,70] in experiments.
+
+# 5 Experiments
+
+We evaluate our approach in terms of its overall performance and the effects of its two components, i.e. ensembling epoch-wise models and meta-learning hyperprior learners. In the following sections, we introduce the datasets and implementation details, compare our best results to the state-of-the-art, and conduct an ablation study.
+
+# 5.1 Datasets and implementation details
+
+Datasets. We conduct few-shot image classification experiments on three benchmarks: miniImageNet [73], tieredImageNet [58] and FC100 [53]. miniImageNet is the most widely used in related works [13, 24, 25, 70, 71]. tieredImageNet and FC100 offer either a larger scale or a more challenging setting with lower image resolution, and have stricter training-test splits.
+
+miniImageNet was proposed in [73] based on ImageNet [60]. There are 100 classes with 600 samples per class. The classes are divided into 64, 16, and 20 classes for sampling meta-training, meta-validation and meta-test tasks, respectively. tieredImageNet was proposed in [58]. It contains a larger subset of ImageNet [60] with 608 classes (779,165 images) grouped into 34 super-class nodes. These nodes are partitioned into 20, 6, and 8 disjoint sets for meta-training, meta-validation and meta-test, respectively. Its super-class based training-test split results in a more challenging and realistic regime, with test tasks that are less similar to the training tasks. FC100 is based on CIFAR100 [33]. The few-shot task splits were proposed in [53]. It contains 100 object classes, each with 600 samples of $32 \times 32$ color images. On these datasets, we consider the (5-class, 1-shot) and (5-class, 5-shot) classification tasks. We use the same task sampling strategy as in related works [1, 13, 25].
+
+Backbone architectures. In $\mathrm{MAML + E^3BM}$ , we use a 4-layer convolution network (4CONV) [1, 13]. In $\mathrm{MTL + E^3BM}$ , we use a 25-layer residual network (ResNet-25) [56, 69, 78]. The convolution layers are followed by an average pooling layer and a fully-connected layer. In $\mathrm{SIB + E^3BM}$ , we use a 28-layer wide residual network (WRN-28-10), following SIB [25].
+
+The configuration of base-learners. In MTL [70] and SIB [25], the base-learner is a single fully-connected layer. In MAML [13], the base-learner is the 4-layer convolution network. In MTL and MAML, the base-learner is randomly initialized and updated during meta-learning. In SIB, the base-learner is initialized with the averaged image features of each class. The number of base-learners $M$ in $\mathrm{MTL + E^3BM}$ and $\mathrm{SIB + E^3BM}$ is 100 and 3, respectively, i.e., the original numbers of training epochs in [70] and [25].
+
+The configuration of hyperprior learners. In Fig. 3, we show two options for the hyperprior learners (i.e., $\varPsi_{\alpha}$ and $\varPsi_v$ ). Fig. 3(a) is the epoch-independent option, where each epoch has two FC layers to produce $\alpha$ and $v$ , respectively. Fig. 3(b) is the epoch-dependent option, which uses an LSTM to generate $\alpha$ and $v$ at all epochs. In terms of learning the hyperprior learners, we have two settings: inductive learning, denoted as "Ind.", and transductive learning, denoted as "Tra.". "Ind." is the supervised learning used in classical few-shot learning methods [13, 37, 64, 70, 73]. "Tra." is semi-supervised learning, based on the assumption that all test images of the episode are available. It has been applied in many recent works [24, 25, 45].
+
+Ablation settings. We conduct a careful ablative study of two components, i.e., "ensembling multiple base-learners" and "meta-learning hyperprior learners". We show their effects indirectly by comparing our results to those of using arbitrary constant or learned values of $v$ and $\alpha$ . In terms of $v$ , we have 5 ablation options: (v1) "E $^3$ BM" is our method generating $v$ from $\varPsi_v$ ; (v2) "learnable" sets $v$ to be updated by meta gradient descent, the same as $\theta$ in [13]; (v3) "optimal" means using the values learned by option (v2) and freezing them during the actual learning; (v4) "equal" is a simple baseline using equal weights; (v5) "last-epoch" uses only the last-epoch base-learner, i.e., $v$ is set to $[0,0,\dots,1]$ . In the experiments of (v1)-(v5), we simply set $\alpha$ as in option (a4) below [13, 25, 70]. In terms of $\alpha$ , we have 4 ablation options: (a1) "E $^3$ BM" is our method generating $\alpha$ from $\varPsi_{\alpha}$ ; (a2) "learnable" sets $\alpha$ to be updated by meta gradient descent, the same as $\theta$ in [13]; (a3) "optimal" means using the values learned by option (a2) and freezing them during the actual learning; (a4) "fixed" is a simple baseline that uses a manually chosen $\alpha$ following [13, 25, 70]. In the experiments of (a1)-(a4), we simply set $v$ as in (v5), the same as the baseline method [70].
+
+| Methods | Backbone | miniImageNet 1-shot | miniImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot | FC100 1-shot | FC100 5-shot |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MatchNets [73] | 4CONV | 43.44 | 55.31 | - | - | - | - |
+| ProtoNets [64] | 4CONV | 49.42 | 68.20 | 53.31 | 72.69 | - | - |
+| MAML$^\diamond$ [13] | 4CONV | 48.70 | 63.11 | 49.0 | 66.5 | 38.1 | 50.4 |
+| MAML++$^\diamond$ [1] | 4CONV | 52.15 | 68.32 | 51.5 | 70.6 | 38.7 | 52.9 |
+| TADAM [53] | ResNet-12 | 58.5 | 76.7 | - | - | 40.1 | 56.1 |
+| MetaOptNet [37] | ResNet-12 | 62.64 | 78.63 | 65.99 | 81.56 | 41.1 | 55.5 |
+| CAN [24] | ResNet-12 | 63.85 | 79.44 | 69.89 | 84.23 | - | - |
+| CTM [40] | ResNet-18 | 64.12 | 80.51 | 68.41 | 84.28 | - | - |
+| MTL [70] | ResNet-12 | 61.2 | 75.5 | - | - | 45.1 | 57.6 |
+| MTL$^\diamond$ [70] | ResNet-25 | 63.4 | 80.1 | 69.1 | 84.2 | 43.7 | 60.1 |
+| LEO [61] | WRN-28-10 | 61.76 | 77.59 | 66.33 | 81.44 | - | - |
+| Robust20-dist$^\ddagger$ [12] | WRN-28-10 | 63.28 | 81.17 | - | - | - | - |
+| MAML+E$^3$BM (+time, +param) | 4CONV | 53.2 (↑4.5) (8.9, 2.2) | 65.1 (↑2.0) (9.7, 2.2) | 52.1 (↑3.1) (10.6, 2.2) | 70.2 (↑3.7) (9.3, 2.2) | 39.9 (↑1.8) (7.8, 2.2) | 52.6 (↑2.2) (12.1, 2.2) |
+| MTL+E$^3$BM (+time, +param) | ResNet-25 | 64.3 (↑0.9) (5.9, 0.7) | 81.0 (↑0.9) (10.2, 0.7) | 70.0 (↑0.9) (6.7, 0.7) | 85.0 (↑0.8) (9.5, 0.7) | 45.0 (↑1.3) (5.7, 0.7) | 60.5 (↑0.4) (7.9, 0.7) |
+
+(a) Inductive Methods
+
+| Methods | Backbone | miniImageNet 1-shot | miniImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot | FC100 1-shot | FC100 5-shot |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| EGNN [31] | ResNet-12 | 64.02 | 77.20 | 65.45 | 82.52 | - | - |
+| CAN+T [24] | ResNet-12 | 67.19 | 80.64 | 73.21 | 84.93 | - | - |
+| SIB$^\ddagger$ [25] | WRN-28-10 | 70.0 | 79.2 | 72.9 | 82.8 | 45.2 | 55.9 |
+| SIB+E$^3$BM$^\ddagger$ (+time, +param) | WRN-28-10 | 71.4 (↑1.4) (2.1, 0.04) | 81.2 (↑2.0) (5.7, 0.04) | 75.6 (↑2.7) (5.2, 0.04) | 84.3 (↑1.5) (4.9, 0.04) | 46.0 (↑0.8) (6.1, 0.04) | 57.1 (↑1.2) (7.3, 0.04) |
+
+$\diamond$ Our implementation on tieredImageNet and FC100. $\ddagger$ Input image size: $80\times 80\times 3$
+
+(b) Transductive Methods
+
+Table 1. The 5-class few-shot classification accuracies $(\%)$ on miniImageNet, tieredImageNet, and FC100. “(+time, +param)” denotes the additional computational time $(\%)$ and parameter size $(\%)$ , respectively, when plugging E³BM into the baselines (MAML, MTL and SIB). “-” means no reported results in the original papers. The best and second-best results are highlighted.
+
+# 5.2 Results and analyses
+
+In Table 1, we compare our best results to the state-of-the-art. In Table 2, we present the results of using different kinds of hyperprior learners, i.e., two architectures (FC and LSTM) and two learning strategies (inductive and transductive). In Fig. 4(a)(b), we show the validation results of our ablative methods and how they change over meta-training iterations. In Fig. 4(c)(d), we plot the generated values of $v$ and $\alpha$ during meta-training.
+
+Comparing to the state-of-the-art. Table 1 shows that the proposed $\mathrm{E}^3\mathrm{BM}$ achieves the best few-shot classification performance in both 1-shot and 5-shot settings on three benchmarks. Please note that [12] reports results for different backbones and input image sizes. For a fair comparison, we choose its results under the same setting as ours, i.e., using WRN-28-10 networks and $80\times 80\times 3$ images. In our approach, plugging $\mathrm{E}^3\mathrm{BM}$ into the state-of-the-art model SIB achieves a $1.6\%$ improvement on average, based on the identical network architecture. The improvement is as large as $2.9\%$ when taking MAML
+
+| No. | Method | Hyperprior | Learning | miniImageNet 1-shot | miniImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot | FC100 1-shot | FC100 5-shot |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | MTL [70] | - | Ind. | 63.4 | 80.1 | 69.1 | 84.2 | 43.7 | 60.1 |
+| 2 | MTL+E$^3$BM | FC | Ind. | 64.3 | 80.9 | 69.8 | 84.6 | 44.8 | 60.5 |
+| 3 | MTL+E$^3$BM | FC | Tra. | 64.7 | 80.7 | 69.7 | 84.9 | 44.7 | 60.6 |
+| 4 | MTL+E$^3$BM | LSTM | Ind. | 64.3 | 81.0 | 70.0 | 85.0 | 45.0 | 60.4 |
+| 5 | MTL+E$^3$BM | LSTM | Tra. | 64.5 | 81.1 | 70.2 | 85.3 | 45.1 | 60.6 |
+| 6 | SIB [25] | - | Tra. | 70.0 | 79.2 | 72.9 | 82.8 | 45.2 | 55.9 |
+| 7 | SIB+E$^3$BM | FC | Tra. | 71.3 | 81.0 | 75.2 | 83.8 | 45.8 | 56.3 |
+| 8 | SIB+E$^3$BM | LSTM | Tra. | 71.4 | 81.2 | 75.6 | 84.3 | 46.0 | 57.1 |
+
+Table 2. The 5-class few-shot classification accuracies (%) of using different hyperprior learners, on miniImageNet, tieredImageNet, and FC100. "Ind." and "Tra." denote the inductive and transductive settings, respectively. The best and second-best results are highlighted.
+
+as the baseline. All these gains are the more impressive considering the tiny overheads of the plug-in. For example, using $\mathrm{E}^3\mathrm{BM}$ adds only $0.04\%$ learnable parameters to the original SIB model, and incurs only a $5.2\%$ average overhead in computational time. It is worth mentioning that the number of learnable parameters in $\mathrm{SIB + E^3BM}$ is around $80\%$ smaller than that of the model in [12], which ensembles 5 deep networks in parallel (and later learns a distillation network).
+
+Hyperprior learners. In Table 2, we can see that using transductive learning clearly outperforms inductive learning, e.g., No. 5 vs. No. 4. This is because the "transduction" leverages additional data, i.e., the episode-test images (no labels), during the base-training. In terms of the network architecture, we observe that LSTM-based learners are slightly better than FC-based (e.g., No. 3 vs. No. 2). LSTM is a sequential model and is indeed able to "observe" more patterns from the adaptation behaviors of models at adjacent epochs.
+
+Ablation study. Fig. 4(a) shows the comparisons among the $\alpha$ -related ablation models. Our $\mathrm{E}^3\mathrm{BM}$ (orange) again performs the best, over the models using any arbitrary $\alpha$ (red or light blue), as well as over the model with $\alpha$ optimized by meta gradient descent (blue) [13]. Fig. 4(b) shows that our approach $\mathrm{E}^3\mathrm{BM}$ works consistently better than the ablation models related to $v$ . We should emphasize that $\mathrm{E}^3\mathrm{BM}$ is clearly more efficient than the model trained with meta-learned $v$ (blue) through meta gradient descent [13]. This is because the $\mathrm{E}^3\mathrm{BM}$ hyperprior learners generate empirical weights conditioned on task-specific data. The LSTM-based learners can leverage even more task-specific information, i.e., the hidden states from previous epochs, to improve the efficiency.
+
+The values of $\alpha$ and $v$ learned by $\mathbf{E}^3\mathbf{BM}$ . Fig. 4(c)(d) shows the values of $\alpha$ and $v$ over the meta-training iterations of our approach. Fig. 4(c) shows that the base-learners working at later training epochs (e.g., BL-100) tend to get smaller values of $\alpha$ . This is similar to the common manual schedule, i.e., monotonically decreasing learning rates, in conventional large-scale network training [21]. The difference is that in our approach, this is "scheduled" in a fully automated way by the hyperprior learners. Another observation is that the
+
+
+Fig. 4. (a) Meta-validation accuracies of the $\alpha$ -related ablation models; (b) meta-validation accuracies of the $v$ -related ablation models; (c) values of $\alpha$ generated by $\varPsi_{\alpha}$ ; (d) values of $v$ generated by $\varPsi_v$ . The legends are explained in (a1)-(a4) and (v1)-(v5) in Sec. 5.1, Ablation settings. All curves are smoothed with a rate of 0.9 for better visualization. The setting is $\mathrm{MTL + E^3BM}$ , ResNet-25, on miniImageNet, 1-shot.
+
+highest learning rate is applied to BL-1. This encourages BL-1 to make as significant an influence as possible, which is very helpful for reducing meta-gradient vanishing when unrolling and back-propagating gradients through many base-learning epochs (e.g., 100 epochs in MTL). Fig. 4(d) shows that BL-1, working at the initial epoch, has the lowest values of $v$ . In other words, BL-1 is almost disabled in episode-test prediction. Intriguingly, BL-25 rather than BL-100 gains the highest $v$ values. Our explanation is that during base-learning, base-learners at later epochs get more overfitted to the few training samples, so their functionality is suppressed. Note that our empirical results revealed that including these overfitted base-learners still slightly improves the generalization capability of the approach.
+
+# 6 Conclusions
+
+We propose a novel $\mathrm{E}^3\mathrm{BM}$ approach that tackles the few-shot problem with an ensemble of epoch-wise base-learners that are trained and combined with task-specific hyperparameters. Specifically, $\mathrm{E}^3\mathrm{BM}$ meta-learns the hyperprior learners to generate such hyperparameters conditioned on the images as well as the training states of each episode. The resulting model makes use of multiple base-learners for more robust predictions. It does not change the basic training paradigm of episodic few-shot learning, and is thus generic and easy to plug-and-play with existing methods. By applying $\mathrm{E}^3\mathrm{BM}$ to multiple baseline methods, e.g., MAML, MTL and SIB, we achieved top performance on three challenging few-shot image classification benchmarks, with little computation or parameterization overhead.
+
+Acknowledgments. This research was supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant. We thank all reviewers and area chairs for their constructive suggestions.
+
+# References
+
+1. Antoniou, A., Edwards, H., Storkey, A.: How to train your maml. In: ICLR (2019) 1, 2, 4, 11, 12
+2. Bart, E., Ullman, S.: Cross-generalization: Learning novel classes from a single example by feature replacement. In: CVPR. pp. 672-679 (2005) 3
+3. Bengio, Y.: Gradient-based optimization of hyperparameters. Neural Computation 12(8), 1889-1900 (2000) 4
+4. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. Journal of Machine Learning Research 13, 281-305 (2012) 4
+5. Blei, D.M., Kucukelbir, A., McAuliffe, J.D.: Variational inference: A review for statisticians. Journal of the American statistical Association 112(518), 859-877 (2017) 7
+6. Breiman, L.: Stacked regressions. Machine Learning 24(1), 49-64 (1996) 2, 4
+7. Caruana, R.: Learning many related tasks at the same time with backpropagation. In: NIPS. pp. 657-664 (1995) 1
+8. Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y.C., Huang, J.B.: A closer look at few-shot classification. In: ICLR (2019) 3
+9. Chen, Z., Fu, Y., Zhang, Y., Jiang, Y., Xue, X., Sigal, L.: Multi-level semantic feature augmentation for one-shot learning. IEEE Transactions Image Processing 28(9), 4594-4605 (2019) 3
+10. Domke, J.: Generic methods for optimization-based modeling. In: AISTATS. pp. 318-326 (2012) 4
+11. Dvornik, N., Schmid, C., Julien, M.: f-VAEGAN-D2: A feature generating framework for any-shot learning. In: ICCV. pp. 10275-10284 (2019) 4
+12. Dvornik, N., Schmid, C., Mairal, J.: Diversity with cooperation: Ensemble methods for few-shot classification. In: ICCV. pp. 3722-3730 (2019) 3, 8, 12, 13
+13. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML. pp. 1126-1135 (2017) 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13
+14. Finn, C., Xu, K., Levine, S.: Probabilistic model-agnostic meta-learning. In: NeurIPS. pp. 9537-9548 (2018) 1, 2, 3, 4, 9
+15. Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., Pontil, M.: Bilevel programming for hyperparameter optimization and meta-learning. In: ICML. pp. 1563-1572 (2018) 1, 4
+16. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119-139 (1997) 4
+17. Friedman, J.H.: Stochastic gradient boosting. Computational Statistics & Data Analysis 38(4), 367-378 (2002) 4
+18. Geoffrey, H.E., David, P.C.: Using fast weights to deblur old memories. In: CogSci. pp. 177-186 (1987) 3
+19. Girija, S.S.: Tensorflow: Large-scale machine learning on heterogeneous distributed systems. Software available from tensorflow.org 39 (2016) 9
+20. Grant, E., Finn, C., Levine, S., Darrell, T., Griffiths, T.L.: Recasting gradient-based meta-learning as hierarchical bayes. In: ICLR (2018) 1, 4
+21. He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., Li, M.: Bag of tricks for image classification with convolutional neural networks. In: CVPR. pp. 558-567 (2019) 13
+22. Ho, T.K.: Random decision forests. In: ICDAR. vol. 1, pp. 278-282 (1995) 4
+
+23. Hoffman, M.D., Blei, D.M., Wang, C., Paisley, J.: Stochastic variational inference. The Journal of Machine Learning Research 14(1), 1303-1347 (2013) 6, 7
+24. Hou, R., Chang, H., Bingpeng, M., Shan, S., Chen, X.: Cross attention network for few-shot classification. In: NeurIPS. pp. 4005-4016 (2019) 3, 10, 11, 12
+25. Hu, S.X., Moreno, P.G., Xiao, X.S.Y., Lawrence, N.D., Obozinski, G., Damianou, A.: Empirical bayes meta-learning with synthetic gradients. In: ICLR (2020) 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13
+26. Huang, G., Li, Y., Pleiss, G., Liu, Z., Hopcroft, J.E., Weinberger, K.Q.: Snapshot ensembles: Train 1, get m for free. In: ICLR (2017) 4
+27. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: LION. pp. 507-523 (2011) 4
+28. Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W.M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., Fernando, C., Kavukcuoglu, K.: Population based training of neural networks. arXiv 1711.09846 (2017) 4
+29. Ju, C., Bibaut, A., van der Laan, M.: The relative performance of ensemble methods with deep convolutional neural networks for image classification. Journal of Applied Statistics 45(15), 2800-2818 (2018) 2, 4
+30. Jung, H.G., Lee, S.W.: Few-shot learning with geometric constraints. IEEE Transactions on Neural Networks and Learning Systems (2020) 1
+31. Kim, J., Kim, T., Kim, S., Yoo, C.D.: Edge-labeling graph neural network for few-shot learning. In: CVPR. pp. 11-20 (2019) 12
+32. Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: ICLR (2014) 7
+33. Krizhevsky, A.: Learning multiple layers of features from tiny images. University of Toronto (2009) 11
+34. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NIPS. pp. 1097-1105 (2012) 1
+35. Kuncheva, L.I., Whitaker, C.J.: Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51(2), 181-207 (2003) 4
+36. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. In: ICLR (2017) 4
+37. Lee, K., Maji, S., Ravichandran, A., Soatto, S.: Meta-learning with differentiable convex optimization. In: CVPR. pp. 10657-10665 (2019) 4, 11, 12
+38. Lee, Y., Choi, S.: Gradient-based meta-learning with learned layerwise metric and subspace. In: ICML. pp. 2933-2942 (2018) 1, 4
+39. Li, F., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(4), 594-611 (2006) 1
+40. Li, H., Eigen, D., Dodge, S., Zeiler, M., Wang, X.: Finding task-relevant features for few-shot learning by category traversal. In: CVPR. pp. 1-10 (2019) 3, 12
+41. Li, H., Dong, W., Mei, X., Ma, C., Huang, F., Hu, B.: Lgm-net: Learning to generate matching networks for few-shot learning. In: ICML. pp. 3825-3834 (2019) 3
+42. Li, L., Jamieson, K.G., DeSalvo, G., Rostamizadeh, A., Talwalkar, A.: Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research 18, 185:1-185:52 (2017) 4
+43. Li, X., Sun, Q., Liu, Y., Zhou, Q., Zheng, S., Chua, T.S., Schiele, B.: Learning to self-train for semi-supervised few-shot classification. In: NeurIPS. pp. 10276-10286 (2019) 4
+44. Li, Z., Zhou, F., Chen, F., Li, H.: Meta-sgd: Learning to learn quickly for few shot learning. arXiv 1707.09835 (2017) 4
+
+45. Liu, Y., Lee, J., Park, M., Kim, S., Yang, Y.: Learning to propagate labels: Transductive propagation network for few-shot learning. In: ICLR (2019) 3, 11
+46. Liu, Y., Su, Y., Liu, A.A., Schiele, B., Sun, Q.: Mnemonics training: Multi-class incremental learning without forgetting. In: CVPR. pp. 12245-12254 (2020) 4
+47. Luketina, J., Raiko, T., Berglund, M., Greff, K.: Scalable gradient-based tuning of continuous regularization hyperparameters. In: ICML. pp. 2952-2960 (2016) 4
+48. Maclaurin, D., Duvenaud, D.K., Adams, R.P.: Gradient-based hyperparameter optimization through reversible learning. In: ICML. pp. 2113-2122 (2015) 4
+49. Metz, L., Maheswaranathan, N., Cheung, B., Sohl-Dickstein, J.: Meta-learning update rules for unsupervised representation learning. In: ICLR (2019) 4
+50. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: Snail: A simple neural attentive meta-learner. In: ICLR (2018) 3, 4
+51. Mitchell, T.: Machine Learning. McGraw-Hill Higher Education, New York (1997) 4
+52. Munkhdalai, T., Yu, H.: Meta networks. In: ICML. pp. 2554-2563 (2017) 3, 4
+53. Oreshkin, B.N., Rodríguez, P., Lacoste, A.: TADAM: task dependent adaptive metric for improved few-shot learning. In: NeurIPS. pp. 719-729 (2018) 3, 4, 10, 11, 12
+54. Ozay, M., Vural, F.T.Y.: A new fuzzy stacked generalization technique and analysis of its performance. arXiv 1204.0171 (2012) 2, 4
+55. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: NeurIPS. pp. 8024-8035 (2019) 9
+56. Qiao, S., Liu, C., Shen, W., Yuille, A.L.: Few-shot image recognition by predicting parameters from activations. In: CVPR. pp. 7229-7238 (2018) 11
+57. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017) 4, 5
+58. Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. In: ICLR (2018) 3, 10
+59. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: ICML. pp. 1278-1286 (2014) 7
+60. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M.S., Berg, A.C., Li, F.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3), 211-252 (2015) 10
+61. Rusu, A.A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., Hadsell, R.: Meta-learning with latent embedding optimization. In: ICLR (2019) 4, 12
+62. Satorras, V.G., Estrach, J.B.: Few-shot learning with graph neural networks. In: ICLR (2018) 3
+63. Smyth, P., Wolpert, D.: Linearly combining density estimators via stacking. Machine Learning 36(1-2), 59-83 (1999) 4
+64. Snell, J., Swersky, K., Zemel, R.S.: Prototypical networks for few-shot learning. In: NIPS. pp. 4077-4087 (2017) 3, 11, 12
+65. Snoek, J., Larochelle, H., Adams, R.P.: Practical bayesian optimization of machine learning algorithms. In: NIPS. pp. 2951-2959 (2012) 4
+66. Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M.M.A., Prabhat, Adams, R.P.: Scalable bayesian optimization using deep neural networks. In: ICML. pp. 2171-2180 (2015) 4
+67. Snoek, J., Swersky, K., Zemel, R.S., Adams, R.P.: Input warping for bayesian optimization of non-stationary functions. In: ICML. pp. 1674-1682 (2014) 4
+
+68. Sollich, P., Krogh, A.: Learning with ensembles: How overfitting can be useful. In: NIPS. pp. 190-196 (1996) 4
+69. Sun, Q., Liu, Y., Chen, Z., Chua, T., Schiele, B.: Meta-transfer learning through hard tasks. arXiv 1910.03648 (2019) 4, 9, 11
+70. Sun, Q., Liu, Y., Chua, T.S., Schiele, B.: Meta-transfer learning for few-shot learning. In: CVPR. pp. 403-412 (2019) 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13
+71. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H.S., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: CVPR. pp. 1199-1208 (2018) 3, 10
+72. Thrun, S., Pratt, L.: Learning to learn: Introduction and overview. In: Learning to learn, pp. 3-17. Springer (1998) 3
+73. Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. In: NIPS. pp. 3630-3638 (2016) 3, 5, 10, 11, 12
+74. Wang, X., Huang, T.E., Darrell, T., Gonzalez, J.E., Yu, F.: Frustratingly simple few-shot object detection. In: ICML (2020) 1
+75. Wang, Y., Girshick, R.B., Hebert, M., Hariharan, B.: Low-shot learning from imaginary data. In: CVPR. pp. 7278-7286 (2018) 3
+76. Wang, Y.X., Hebert, M.: Learning from small sample sets by combining unsupervised meta-training with cnns. In: NIPS. pp. 244-252 (2016) 3
+77. Xian, Y., Sharma, S., Schiele, B., Akata, Z.: f-VAEGAN-D2: A feature generating framework for any-shot learning. In: CVPR. pp. 10275–10284 (2019) 3
+78. Ye, H.J., Hu, H., Zhan, D.C., Sha, F.: Learning embedding adaptation for few-shot learning. arXiv 1812.03664 (2018) 3, 11
+79. Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., Ahn, S.: Bayesian model-agnostic meta-learning. In: NeurIPS. pp. 7343–7353 (2018) 2, 8
+80. Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., Ahn, S.: Bayesian model-agnostic meta-learning. In: NeurIPS. pp. 7343–7353 (2018) 4
+81. Zhang, C., Cai, Y., Lin, G., Shen, C.: Deepemd: Differentiable earth mover's distance for few-shot learning. arXiv 2003.06777 (2020) 1
+82. Zhang, C., Cai, Y., Lin, G., Shen, C.: Deepemd: Few-shot image classification with differentiable earth mover's distance and structured classifiers. In: CVPR. pp. 12203-12213 (2020) 3
+83. Zhang, C., Lin, G., Liu, F., Guo, J., Wu, Q., Yao, R.: Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation. In: ICCV. pp. 9587-9595 (2019) 1
+84. Zhang, C., Lin, G., Liu, F., Yao, R., Shen, C.: Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In: CVPR. pp. 5217-5226 (2019) 1
+85. Zhang, L., Shi, Z., Cheng, M.M., Liu, Y., Bian, J.W., Zhou, J.T., Zheng, G., Zeng, Z.: Nonlinear regression via deep negative correlation learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019) 4
+86. Zhang, R., Che, T., Ghahramani, Z., Bengio, Y., Song, Y.: Metagan: An adversarial approach to few-shot learning. In: NeurIPS. pp. 2371-2380 (2018) 1, 4
\ No newline at end of file
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/images.zip b/anensembleofepochwiseempiricalbayesforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..35bb2b89833d86cf1ca5d0821d4017e8ad00e469
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:065673196c6dc5b693093d7458778c085d4c64fb642869fc1454fb8ec0f8398f
+size 456702
diff --git a/anensembleofepochwiseempiricalbayesforfewshotlearning/layout.json b/anensembleofepochwiseempiricalbayesforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f8c7c81edab084c2655bb76f5849a49c3c05e95e
--- /dev/null
+++ b/anensembleofepochwiseempiricalbayesforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b24e0b568fe74a2017c59369dc27668a0562cb5462536a9878eb9ad7d96ab3c
+size 606390
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_content_list.json b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..885dda991994aea299c67d09c1e30f68a380f85c
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:338bd0ba0d7427c3a737ab9452312200e65d29c525a9545259c0b59e8e926d81
+size 70160
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_model.json b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf1d25ad7cc5e5dd05617b08e6e3ac2049fe7611
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c465f873454f1ca0f9bccd4c6d64c2ab57f8afec5942188363c1c293e1e618b
+size 87207
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_origin.pdf b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..35fba6f07def6f17b5974ee4f5f622091804fdbc
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/c25691c4-d953-4591-b31d-bc7fc6e349ed_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:807b5e8e510fb73d9657767105b19dd27e629cf0c0b63254eb7d736f468c7647
+size 681200
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/full.md b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..84fb0816967642fff1ca74bcfe0cd898cf536051
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/full.md
@@ -0,0 +1,262 @@
+# Angle-based Search Space Shrinking for Neural Architecture Search
+
+Yiming Hu $^{1,3,\star}$ , Yuding Liang $^{2,\star}$ , Zichao Guo $^{2,\star\star}$ , Ruosi Wan $^{2}$ , Xiangyu Zhang $^{2}$ , Yichen Wei $^{2}$ , Qingyi Gu $^{1}$ , Jian Sun $^{2}$
+
+$^{1}$ Institute of Automation, Chinese Academy of Sciences $^{2}$ MEGVII Technology $^{3}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $\{\text{liangyuding, guozichao, wanruosi, zhangxiangyu, weiyichen, sunjian}\} @\text{megvii.com}$ $\{\text{huyiming2016, qingyi.gu}\} @ia.ac.cn$
+
+Abstract. In this work, we present a simple and general search space shrinking method, called Angle-Based search space Shrinking (ABS), for Neural Architecture Search (NAS). Our approach progressively simplifies the original search space by dropping unpromising candidates, thus reducing the difficulty for existing NAS methods to find superior architectures. In particular, we propose an angle-based metric to guide the shrinking process. We provide comprehensive evidence showing that, in a weight-sharing supernet, the proposed metric is more stable and accurate than accuracy-based and magnitude-based metrics at predicting the capability of child models. We also show that the angle-based metric converges fast while training the supernet, enabling us to obtain promising shrunk search spaces efficiently. ABS can be easily applied to most NAS approaches (e.g. SPOS, FairNAS, ProxylessNAS, DARTS and PDARTS). Comprehensive experiments show that ABS can dramatically enhance existing NAS approaches by providing a promising shrunk search space.
+
+Keywords: angle, search space shrinking, NAS
+
+# 1 Introduction
+
+Neural Architecture Search (NAS), the process of automatic model design, has achieved significant progress in various computer vision tasks [36, 8, 22, 34]. NAS methods usually search over a large search space covering billions of options to find the superior ones, which is time-consuming and challenging. Though many weight-sharing NAS methods [15, 23, 24, 5, 33] have been proposed to relieve the search-efficiency problem, the challenge brought by the large and complicated search space still remains.
+
+Shrinking search space seems to be a feasible solution to relieve the optimization and efficiency problem of NAS over large and complicated search spaces. In fact, recent studies [7, 26, 27, 4, 20] have adopted different shrinking methods
+
+
+Fig. 1. Overview of the proposed angle-based search space shrinking method. We first train the supernet for some epochs with uniform sampling. After this, all operators are ranked by their scores, and those whose rankings fall at the tail are dropped
+
+to simplify the large search space dynamically. These methods either speed up the search process or reduce the optimization difficulty in the training stage by progressively dropping unpromising candidate operators. Though existing shrinking methods have obtained decent results, it's still challenging to detect unpromising operators among many candidates. The key is to predict the capacity of candidates with an accurate metric. Existing NAS methods usually use an accuracy-based metric [23, 28, 4, 20] or a magnitude-based metric [7, 26, 27] to guide the shrinking process. However, neither of them is satisfactory: the former is unstable and unable to accurately predict the performance of candidates in the weight-sharing setting [35], while the latter entails the rich-get-richer problem [1, 7].
+
+In this work, we propose a novel angle-based metric to guide the shrinking process. It is obtained by computing the angle between a model's weight vector and its initialization. Recent work [6] has used a similar metric to measure the generalization of stand-alone models and demonstrates its effectiveness. For the first time, we introduce the angle-based metric to weight-sharing NAS. Compared with accuracy-based and magnitude-based metrics, the proposed angle-based metric is more effective and efficient. First, it saves heavy computation overhead by eliminating the inference procedure. Second, it has higher stability and ranking correlation than the accuracy-based metric in a weight-sharing supernet. Third, it converges faster than its counterparts, which enables us to detect and remove unpromising candidates during the early training stage.
+
+Based on the angle-based metric, we further present a conceptually simple, flexible, and general method for search space shrinking, named as Angle-Based search space Shrinking (ABS). As shown in Fig. 1, we divide the pipeline of ABS into multiple stages and progressively discard unpromising candidates according to our angle-based metric. ABS aims to get a shrunk search space covering many
+
+promising network architectures. In contrast to existing shrinking methods, the shrunk search spaces found by ABS don't rely on a specific search algorithm, and are thus available to different NAS approaches for immediate accuracy improvement.
+
+ABS applies to various NAS algorithms easily. We evaluate its effectiveness on NAS-Bench-201 [12] and ImageNet [19]. Our experiments show that several NAS algorithms consistently discover more powerful architectures from the shrunk search spaces found by ABS. To sum up, our main contributions are as follows:
+
+1. We clarify and verify the effectiveness of elaborately shrunk search spaces to enhance the performance of existing NAS methods.
+2. We design a novel angle-based metric to guide the process of search space shrinking, and verify its advantages, including efficiency, stability, and fast convergence, through extensive analysis experiments.
+3. We propose a dynamic search space shrinking method that can be considered as a general plug-in to improve various NAS algorithms including SPOS [15], FairNAS [9], ProxylessNAS [5], DARTS [24] and PDARTS [7].
+
+# 2 Related Work
+
+Weight-sharing NAS. To reduce computation cost, many works [24, 5, 15, 3, 7] adopt weight-sharing mechanisms for efficient NAS. Latest approaches on efficient NAS fall into two categories: one-shot methods [3, 15, 9] and gradient-based methods [24, 5, 33]. One-shot methods train an over-parameterized supernet based on various sample strategies [15, 9, 3]. After this, they evaluate many child models with the well-trained supernet as alternatives, and choose those with the best performance. Gradient-based algorithms [24, 5, 33] jointly optimize the network weights and architecture parameters by back-propagation. Finally, they choose operators by the magnitudes of architecture parameters.
+
+Search Space Shrinking. Several recent works [23, 7, 28, 26, 27, 4, 20] perform search space shrinking for efficient NAS. For example, PDARTS [7] proposes to shrink the search space to reduce computational overhead when increasing network depth. In order to improve the ranking quality of candidate networks, PCNAS [20] attempts to drop unpromising operators layer by layer based on one-shot methods. However, existing shrinking techniques are strongly associated with specific algorithms, and thus can't easily apply to other NAS methods. In contrast, our search space shrinking method is simple and general, and can be considered a plug-in to enhance the performance of different NAS algorithms. Moreover, an effective metric is vital to discover less promising models or operators for search space shrinking. Accuracy-based metrics [23, 28, 4, 20] and magnitude-based metrics [7, 26, 27] are the two widely used metrics in the NAS area. In contrast, our angle-based metric is much more stable and predictive, without the poor ranking consistency and the rich-get-richer problem.
+
+Angle-based Metric. Recently, the deep learning community has come to realize that the angle of weights is very useful for measuring the training behavior of neural networks: some works [21, 2] theoretically prove that, due to the normalization layers widely used in neural networks, the angle of weights is more accurate than the Euclidean distance at representing the update of weights; [6] uses the angle between the weights of a well-trained network and the initialized weights to measure the generalization of the well-trained network in real-data experiments. But the angle calculation method in [6] can't deal with non-parametric operators like identity and average pooling. To the best of our knowledge, no angle-based method has been proposed before in the NAS field. Therefore we design a special strategy to apply the angle-based metric in NAS methods.
+
+# 3 Search Space Shrinking
+
+In this section, we first verify experimentally our claim that an elaborately shrunk search space can improve existing NAS algorithms. Then we propose an angle-based metric to guide the process of search space shrinking. Finally, we demonstrate the pipeline of the overall angle-based search space shrinking method.
+
+# 3.1 Elaborately Shrunk Search Space is Better
+
+In this section, we investigate the behavior of NAS methods on various shrunk search spaces and point out that an elaborately shrunk search space can enhance existing NAS approaches. Our experiments are conducted on NAS-Bench-201 [12], which contains 15625 child models with ground-truths. We design 7 shrunk search spaces of various sizes on NAS-Bench-201, and evaluate five NAS algorithms [24, 10, 11, 15, 29] over the shrunk search spaces plus the original one.
+
+Fig. 2 summarizes the experimental results. It shows that an elaborately shrunk search space can improve the given NAS methods by a clear margin. For example, GDAS finds the best model on CIFAR-10 from $S2$. On the CIFAR-100 dataset, all algorithms discover the best networks from $S8$. For SPOS, the best networks found on ImageNet-16-120 are from $S5$. However, not all shrunk search spaces are beneficial to NAS algorithms. Most of the shrunk search spaces show no superiority over the original one ($S1$), and some of them even get worse performance. Only a few shrunk search spaces can outperform the original one, which makes it non-trivial to shrink the search space wisely. To deal with this issue, we propose an angle-based shrinking method to discover a promising shrunk search space efficiently. The proposed shrinking procedure applies to all existing NAS algorithms. We'll demonstrate its procedure and effectiveness later.
+
+# 3.2 Angle-based Metric
+
+Angle of Weights. According to [21, 2], the weights of a neural network with Batch Normalization [17, 31] are "scale invariant", which means the Frobenius norm of weights can't affect the performance of the neural network and only
+
+
+
+
+Fig. 2. An elaborately shrunk search space is better. We evaluate five different NAS algorithms [24, 10, 11, 15, 29] on eight search spaces: S1 is the original search space of NAS-Bench-201; S2 to S8 are various subsets of S1
+
+direction of weights matters. Due to the "scale invariant" property, the angle $\Delta_{\mathbf{W}}$ (defined in Eq. (1)) between the trained weights $\mathbf{W}$ and the initialized weights $\mathbf{W}_0$ is better than the Euclidean distance of weights at representing the difference between initialized neural networks and trained ones:
+
+$$
+\Delta_{\mathbf{W}} = \arccos\left(\frac{\langle \mathbf{W}, \mathbf{W}_0 \rangle}{\|\mathbf{W}\|_F \cdot \|\mathbf{W}_0\|_F}\right), \tag{1}
+$$
+
+where $\langle \mathbf{W}, \mathbf{W}_0 \rangle$ denotes the inner product of $\mathbf{W}$ and $\mathbf{W}_0$, and $\|\cdot\|_F$ denotes the Frobenius norm. [6] shows $\Delta_{\mathbf{W}}$ is an effective metric to measure the generalization of a well-trained stand-alone model.
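As a concrete illustration, Eq. (1) is a few lines of NumPy. This is a minimal sketch, not code from the paper: `weight_angle` is a hypothetical helper name, and the model's weights are assumed to be given as a list of arrays (one per layer):

```python
import numpy as np

def weight_angle(weights_init, weights_trained):
    """Angle of Eq. (1), in radians, between the flattened
    initial and trained weights of a model."""
    v0 = np.concatenate([w.ravel() for w in weights_init])
    v1 = np.concatenate([w.ravel() for w in weights_trained])
    cos = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Because of the scale invariance discussed above, rescaling the trained weights leaves this angle unchanged; only a change of direction moves the metric.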
+
+Angle-based Metric for Child Model from Supernet. Since the angle shows a close connection to the generalization of trained networks, we consider using it to compare the performance of different child models. However, directly using the angle $\Delta_{\mathbf{W}}$ of a child model runs into a severe problem in the weight-sharing setting: the procedure of computing $\Delta_{\mathbf{W}}$ can't distinguish different structures with exactly the same learnable weights. This dilemma is caused by the non-parametric alternative operators ("none", "identity", "pooling"). For example, child models 1 and 2 shown in Fig. 3 have exactly the same learnable weights $[W_1, W_2, W_3]$, but child model 1 has a shortcut (OP4: identity), while child model 2 is sequential. Apparently child models 1 and 2 perform differently due to their diverse structures, but $\Delta_{[W_1, W_2, W_3]}$ can't reflect this difference.
+
+
+Fig. 3. Examples of the weight vector determined by structure and weights. $V_{1}, V_{2}$ are the weight vectors of these child models, respectively
+
+Therefore, to take non-parametric operators into account, we use the following strategy to distinguish different structures with the same learnable weights. For the "pooling" and "identity" operators, we assign fixed weights to them and treat them like the other operators with learnable weights: "pooling" gets a $k \times k$ kernel whose elements are all $1/k^2$, where $k$ is the pooling size; "identity" gets empty weights, which means we add nothing to the weight vector for "identity". The "none" operator, however, can totally change the connectivity of the child model, so we can't simply treat it like the other operators. Hence we design a new angle-based metric as follows to take the connectivity of the child model into account.
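The fixed-weight convention for non-parametric operators can be sketched as follows; `fixed_weight` is an illustrative helper, not an identifier from the paper:

```python
import numpy as np

def fixed_weight(op, k=3):
    """Surrogate weights for non-parametric operators: "pooling" gets a
    k x k kernel with all elements 1/k^2 (k is the pooling size), and
    "identity" contributes an empty vector to the weight vector."""
    if op == "pooling":
        return np.full((k, k), 1.0 / k**2)
    if op == "identity":
        return np.array([])
    raise ValueError(f"no fixed weight defined for operator: {op}")
```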
+
+Definition of Angle-based Metric. The supernet is seen as a directed acyclic graph $\mathcal{G}(\pmb{O},\pmb{E})$, where $\pmb{O} = \{o_1,o_2,\dots,o_M\}$ is the set of nodes, $o_1$ is the only root node (input of the supernet), $o_M$ is the only leaf node (output of the supernet), and $\pmb{E} = \{(o_i,o_j,w_k) \mid$ there is an alternative operator (including non-parametric operators, except "none") from $o_i$ to $o_j$ with weights $w_k\}$. Assume a child model is sampled from the supernet; it can be represented as a sub-graph $g(\pmb{O},\tilde{\pmb{E}})$ of $\mathcal{G}$, where $\tilde{\pmb{E}}\subset \pmb{E}$ and $g$ connects $o_1$ to $o_M$. The angle-based metric $\Delta_g$ given $g$ is defined as:
+
+$$
+\Delta_g = \arccos\left(\frac{\langle \mathbf{V}(g, \mathbf{W}_0), \mathbf{V}(g, \mathbf{W}) \rangle}{\|\mathbf{V}(g, \mathbf{W}_0)\|_F \cdot \|\mathbf{V}(g, \mathbf{W})\|_F}\right), \tag{2}
+$$
+
+where $\mathbf{W}_0$ is the initialized weights of the supernet $\mathcal{G}$ ; $\mathbf{V}(g, \mathbf{W})$ denotes the weight vector of $g$ , and it's constructed by concatenating the weights of all paths from $o_1$ to $o_M$ in $g$ , its construction procedure is shown in Algorithm 1.
+
+The construction procedure described in Algorithm 1 ensures that child models with diverse structures have different weight vectors, even when they share the same learnable weights. As an example, Fig. 3 illustrates the difference between the weight vectors of child models with "none" and "identity" (compare child models 1 and 2). Since $V(g, W)$ is well defined on child models from any type of supernet, we can compute the angle-based metric on all child models no matter whether "none" is among the supernet's alternative operators.
+
+Constructing Weight Vector on Cell-like/Block-like Supernet. Algorithm 1 presents the general construction procedure of the weight vector given a child model.
+
+Algorithm 1: Construction of weight vector $V(g, W)$ for model $g$
+Input: a child model $g(\pmb{O},\tilde{\pmb{E}})$ from the supernet, weights of the supernet $\pmb{W} = \{w\}$
+Output: weight vector $\pmb{V}(g,\pmb{W})$
+1 Find all paths from the root node $o_1$ to the leaf node $o_M$ in $g$: $\pmb{P} = \{P \subset \tilde{\pmb{E}} \mid P$ is a path from $o_1$ to $o_M\}$;
+2 $\pmb{V} = [\emptyset]$ ($[\emptyset]$ means the empty vector);
+3 for $P$ in $\pmb{P}$ do
+4 $\quad \pmb{V}_P = \mathrm{concatenate}(\{w_k \mid (o_i,o_j,w_k) \in P\})$;
+5 $\quad \pmb{V} = \mathrm{concatenate}[\pmb{V}, \pmb{V}_P]$;
+6 end
+7 $\pmb{V}(g,\pmb{W}) = \pmb{V}$
+
+Algorithm 1 works well when the topology of the supernet isn't too complex. However, in the worst case, the length of the weight vector grows exponentially with the number of nodes, so it can impose a massive computational burden in practice when the number of nodes is large. Luckily, existing popular NAS search spaces all consist of several non-intersecting cells, which allows us to compute the angle-based metric within each cell instead of over the whole network. Specifically, we propose the following strategy as a computation-saving option:
+
+1. Divide the whole network into several non-intersecting blocks;
+2. Construct weight vector within each block respectively by Algorithm 1;
+3. Obtain weight vector of the child model by concatenating each block.
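In code, the per-path concatenation of Algorithm 1 reduces to a few lines. This is a sketch under simplifying assumptions: the root-to-leaf path enumeration is assumed to be done elsewhere, and `weight_vector` and the operator ids are illustrative names:

```python
import numpy as np

def weight_vector(paths, weights):
    """Build V(g, W) by concatenating, for every root-to-leaf path, the
    weights of the operators along it. `paths` is a list of paths, each a
    list of operator ids; `weights` maps id -> 1-D array ("identity" maps
    to an empty array; a path cut by "none" simply does not appear)."""
    segments = [weights[op] for path in paths for op in path]
    return np.concatenate(segments) if segments else np.array([])
```

With the same learnable weights, a child model with an extra identity shortcut contributes an extra path, so its weight vector differs from the purely sequential model's, mirroring the comparison of child models 1 and 2 in Fig. 3.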
+
+# 3.3 Angle-based Shrinking method
+
+Scores of Candidate Operators. Before demonstrating the pipeline of the angle-based shrinking method, we first define the angle-based score used to evaluate alternative operators. Assume $\pmb{P} = \{p_1, p_2, \dots, p_N\}$ represents the collection of all candidate operators in the supernet, where $N$ is the number of candidate operators. We define the score of an operator as the expected angle-based metric of child models containing that operator:
+
+$$
+\mathrm{Score}(p_i) = \mathbb{E}_{g \in \{g \mid g \subset \mathcal{G},\ g \text{ contains } p_i\}} \Delta_g, \quad i \in \{1, 2, \dots, N\}, \tag{3}
+$$
+
+where $g$, $\mathcal{G}$ and $\Delta_g$ have been defined in Section 3.2, and $g$ is uniformly sampled from $\{g \mid g \subset \mathcal{G}, g \text{ contains } p_i\}$. In practice, rather than computing the expectation in Eq. (3) exactly, we randomly sample a finite number of child models containing the operator and use the sample mean of the angle-based metric instead.
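The sample-mean estimate of Eq. (3) might look as follows. This is a sketch, not the paper's implementation: `sample_child` and `angle_of` stand in for a uniform child-model sampler and the metric of Eq. (2), and rejection sampling is assumed to terminate (i.e. the operator appears in some reachable child models):

```python
import random

def operator_score(op, sample_child, angle_of, n_samples=100, seed=0):
    """Monte Carlo estimate of Score(p_i): the mean angle-based metric
    over sampled child models that contain the operator `op`."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    while kept < n_samples:
        g = sample_child(rng)   # uniform child model from the supernet
        if op in g:             # keep only models containing op
            total += angle_of(g)
            kept += 1
    return total / n_samples
```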
+
+Algorithm of Angle-based Shrinking Method. Based on the angle-based metric, we present Algorithm 2 to describe the pipeline shown in Fig. 1. Note that during the shrinking process, at least one operator is preserved on each edge, since ABS should not change the connectivity of the supernet.
+
+Algorithm 2: Angle-based Search Space Shrinking Method (ABS)
+Input: a supernet $\mathcal{G}$, threshold of search space size $\mathcal{T}$, number of operators dropped per iteration $k$
+Output: a shrunk supernet $\tilde{\mathcal{G}}$
+1 Let $\tilde{\mathcal{G}} = \mathcal{G}$;
+2 while $|\tilde{\mathcal{G}}| > \mathcal{T}$ do
+3 $\quad$ Train the supernet $\tilde{\mathcal{G}}$ for several epochs following [15];
+4 $\quad$ Compute the score of each operator of $\tilde{\mathcal{G}}$ by Eq. (3);
+5 $\quad$ Remove the $k$ operators of $\tilde{\mathcal{G}}$ with the lowest scores;
+6 end
+
+Table 1. The mean Kendall's Tau of 10 repeated experiments on NAS-Bench-201 for different initialization policies
+
+| Initialization | CIFAR-10 | CIFAR-100 | ImageNet-16-120 |
+| --- | --- | --- | --- |
+| Kaiming-norm [16] | 0.622 | 0.608 | 0.534 |
+| Xavier-uniform [13] | 0.609 | 0.614 | 0.544 |
+| Orthogonal | 0.609 | 0.612 | 0.533 |
+
+# 4 Experiments
+
+In this section, we demonstrate the power of ABS in two aspects: first, we conduct extensive experiments to verify and analyze the effectiveness of our angle-based metric in terms of stability and convergence; second, we show that various NAS algorithms achieve better performance when combined with ABS.
+
+# 4.1 Empirical Study on Angle-based Metric
+
+How important is the specific network initialization? There are several ways to initialize a network, but almost all common initialization methods are Gaussian-type, so the direction of the initialized weights is always uniformly random. In theory, different initialization methods therefore make no difference to the angle-based metric. The results in Table 1 support this justification: our proposed metric is reasonably robust to various initialization settings on different datasets.
+
+Ranking Correlation in Stand-alone Model. First of all, we conduct experiments to verify whether the angle-based metric defined in Eq. (2) can really reflect the capability of stand-alone models with different structures. In detail, we uniformly select 50 child models from NAS-Bench-201 and train them from scratch to obtain fully optimized weights. Since the initialized weights are known, the angle of each model can be calculated by Eq. (2). To quantify the correlation between the networks' capability and their angles, we rank the chosen 50 models by their angles and compute the Kendall rank correlation coefficient [18] (Kendall's Tau for short) against their stand-alone accuracy.
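As a rough illustration of computing a model's angle, the sketch below concatenates and flattens the weight tensors before measuring the angle between initialization and the trained state; the paper's exact weight-vector construction (Section 3.2) is more specific, so treat this as an assumption-laden simplification:

```python
import numpy as np

def model_angle(weights_init, weights_trained):
    """Angle (radians) between a model's initialized and trained weights,
    each given as a list of arrays (cf. Eq. (2)); flattening/concatenation
    here is a simplification of the paper's weight-vector construction."""
    v0 = np.concatenate([w.ravel() for w in weights_init])
    v1 = np.concatenate([w.ravel() for w in weights_trained])
    cos = v0 @ v1 / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards float round-off
```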
+
+
+
+
+Fig. 4. The correlation between the angle-based ranking and ground-truth ranking. We uniformly choose 50 models from NAS-Bench-201 [12], and train them from scratch. After this, we leverage the angle and accuracy of these models to rank them respectively.
+
+
+
+Table 2. The mean Kendall's Tau of 10 repeat experiments on NAS-Bench-201
+
+| Method | CIFAR-10 | CIFAR-100 | ImageNet-16-120 |
+| --- | --- | --- | --- |
+| Random | 0.0022 | -0.0019 | -0.0014 |
+| Acc. w/ Re-BN [15] | 0.5436 | 0.5329 | 0.5391 |
+| Angle | 0.5748 | 0.6040 | 0.5445 |
+
+Fig. 4 shows the correlation between the ranking by angle and the ground-truth ranking on three datasets (CIFAR-10, CIFAR-100, ImageNet-16-120). The Kendall's Tau on all three datasets is greater than 0.8, which suggests that the angle of a model is strongly correlated with its capability. Therefore, it's reasonable to use the angle-based metric to compare the performance of trained models even with different structures.
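The Kendall's Tau values quoted throughout can be computed directly from two score lists; the sketch below implements the tie-free tau-a variant, a simplification of the coefficient in [18]:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Tau-a: (concordant - discordant) pairs over all pairs; assumes no
    ties, which is a simplification of the full coefficient of [18]."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # both rankings order the pair the same way
        elif s < 0:
            discordant += 1   # the rankings disagree on this pair
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs
```

A value of 1 means the angle ranking and the ground-truth ranking agree on every pair of models; 0 means they are unrelated.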
+
+Ranking Correlation in Weight-sharing Supernet. In this section, we verify the effectiveness of our angle-based metric in a weight-sharing supernet. In detail, we first train a weight-sharing supernet constructed on NAS-Bench-201 following [15]. Then we calculate different metrics, such as accuracy and angle, of all child models by inheriting the optimized weights from the supernet. At last, we rank the child models by each metric and by ground truth respectively, and compute the Kendall's Tau between these two rankings as the ranking correlation. Since the magnitude-based metric can only rank operators, it's not compared here.
+
+Table 2 shows the ranking correlations based on three metrics (random, accuracy with Re-BN, angle-based metric) on three different datasets (CIFAR-10, CIFAR-100, ImageNet-16-120). Accuracy-based metric with Re-BN and angle-based metric are both dramatically better than random selection. Importantly, our angle-based metric outperforms accuracy-based metric with a clear margin on all three datasets, which suggests that our angle-based metric is more effective to evaluate the capability of child models from supernet.
+
+
+Fig. 5. The ranking stability on NAS-Bench-201. Each column shows the range of ranking correlation for a metric and dataset pair; a smaller column means a more stable metric
+
+
+Fig. 6. Ranking correlation of metrics at early training stage on NAS-Bench-201
+
+Ranking Stability. We have shown that our angle-based metric achieves higher ranking correlation than the accuracy-based metric. In this section, we further discuss the ranking stability of our metric. In detail, we conduct 9 experiments on three different datasets and calculate the means and variances of the ranking correlations obtained by the accuracy-based and angle-based metrics. As Fig. 5 shows, our angle-based metric is extremely stable compared with the accuracy-based metric: it has a much smaller variance and a higher mean on all three datasets. This is a crucial advantage for NAS methods, as it relieves the reproducibility problem of weight-sharing NAS approaches. The magnitude-based metric is still excluded from this discussion because it can't rank child models.
+
+Convergence in Supernet Training. In this section, we further investigate the convergence behaviors of the angle-based and accuracy-based metrics during supernet training. In the search space shrinking procedure, unpromising operators are usually removed when the supernet isn't yet well trained, so how well a metric evaluates child models' capability at the early training stage strongly influences the final results. Fig. 6 shows the metrics' ranking correlation with ground truth at the early training stage. Our angle-based metric has higher ranking correlation on all three datasets during the first 10 epochs; in particular, there is a large gap between the two metrics during the first 5 epochs. This suggests that our metric converges faster than the accuracy-based metric during supernet training, which makes it more powerful for guiding the shrinking procedure at the early training stage.
+
+Table 3. The processing time (100 models) of different metrics on NAS-Bench-201
+
+| Method | CIFAR-10 (s) | CIFAR-100 (s) | ImageNet-16-120 (s) |
+| --- | --- | --- | --- |
+| Acc. w/ Re-BN [15] | 561.75±126.58 | 332.43±59.18 | 259.84±31.90 |
+| Angle | 0.92±0.06 | 0.77±0.02 | 0.73±0.04 |
+
+Time Cost for Metric Calculation. The magnitude-based metric needs to train extra architecture parameters in addition to the network weights, which nearly doubles the supernet training time. In contrast, the accuracy-based metric only requires inference with weights inherited from the supernet, but it still costs considerable time when evaluating a large number of child models, and our angle-based metric further saves this inference time. To compare the time cost of calculating the metrics, we train a supernet and apply each metric to 100 randomly selected models from NAS-Bench-201. Experiments are run ten times on an NVIDIA RTX 2080Ti GPU to calculate the mean and standard deviation. From Table 3, the time cost of our metric on all three datasets is less than one second, while the accuracy-based metric's time cost exceeds 250 seconds.
+
+Select Promising Operators. The experiments above prove the superiority of the angle-based metric as an indicator for evaluating child models from the supernet, but we still need to verify whether it's really helpful for guiding the selection of promising operators. To this end, we directly compare the shrinking results based on different metrics. In our setting, the ground-truth score of each operator is obtained by averaging the ground-truth accuracy of all child models containing the given operator, and the ground-truth ranking is based on the ground-truth score. We also rank the alternative operators by their metric-based scores. Our angle-based score is defined as Eq. (3). The accuracy-based score is similar to the ground-truth score, except that the accuracy is obtained from the well-trained supernet; it shares the same algorithm pipeline (see Algorithm 2) and hyper-parameters as our approach, differing only in the specific metric. The magnitude-based metric takes the magnitudes of the corresponding architecture parameters as operator scores; it trains the supernet following [32], but has identical training and shrinking settings to our method. After obtaining the metric-based ranks, we drop the 20 lowest-ranked operators and check the ground-truth ranking of the reserved operators.
+
+From Fig. 7, the magnitude-based and accuracy-based metrics both remove most of the operators ranked in the top 8 by ground truth, while the angle-based metric preserves all of them. Moreover, almost all the operators reserved by our approach have higher ground-truth scores than the removed ones, while the other two methods appear to choose operators almost at random. Besides, we repeat the experiments three times with different random seeds; the results show that the angle-based shrinking method stably selects the promising operators with top ground-truth scores, while the search spaces shrunk by the other metrics show great uncertainty.
+
+
+Fig. 7. The operator distribution after shrinking in three repeated CIFAR-10 experiments on NAS-Bench-201 with different random seeds
+
+Though there's no guarantee that the best-performing child models must lie in the shrunk search space, it's reasonable to believe that we are more likely to discover well-behaved structures in an elaborately shrunk search space with high ground-truth scores. Based on this motivation, the angle-based metric allows us to efficiently select operators with high performance.
+
+# 4.2 NAS Algorithms with Angle-based Shrinking
+
+In this part, we verify the power of ABS combined with existing NAS algorithms. We choose five NAS algorithms with publicly available code (SPOS [15], FairNAS [9], ProxylessNAS [5], DARTS [24], and PDARTS [7]) to apply ABS. All experiments are conducted on ImageNet. The original training set is split into two parts: 50000 images for validation and the rest for training.
+
+MobileNet-like Search Space. The MobileNet-like search space consists of MobileNetV2 blocks with kernel sizes $\{3,5,7\}$ and expansion ratios $\{3,6\}$, plus identity, as alternative operators. We test the performance of ABS with SPOS [15], ProxylessNAS [5] and FairNAS [9] on this search space. SPOS and ProxylessNAS are applied on the Proxyless (GPU) search space [5], while FairNAS is applied on the same search space as [9]. We first shrink the MobileNet-like search spaces with ABS, then apply the three NAS algorithms to the shrunk spaces.
+
+In detail, the supernet is trained for 100 epochs in the first shrinking stage and 5 epochs in each subsequent stage. We follow the block-like weight vector construction procedure to compute the angle-based metric. The score of each operator is obtained by averaging the angles of 1000 child models containing the given operator. Moreover, the base weight $W_{0}$ used to compute the angle is reset once more than 50 operators have been removed from the original search space, because our exploratory experiments (see Fig. 4 in the appendix) show that after training models for several epochs, the angle between the current weight $W$ and the initialized weight $W_{0}$ is always close to $90^{\circ}$ due to the very high dimension of the weights. This doesn't mean the training is
+
+Table 4. Search results on MobileNet-like search space. * The searched models in their papers are retrained using our training setting
+
+| Method | Flops | Top1 Acc. | Flops (ABS) | Top1 Acc. (ABS) |
+| --- | --- | --- | --- | --- |
+| FairNAS [9] | 322M | 74.24%* | 325M | 74.42% |
+| SPOS [15] | 465M | 75.33%* | 472M | 75.97% |
+| ProxylessNAS [5] | 467M | 75.56%* | 470M | 76.14% |
+
+Table 5. ImageNet results on DARTS search space. * For the form x(y), x means models searched by us using codes, y means the searched models in their papers
+
+| Method | Channels | Flops | Top1 Acc. |
+| --- | --- | --- | --- |
+| DARTS [24] | 48(48)* | 446M(530M)* | 73.39%(74.88%)* |
+| DARTS(ABS) | 48 | 619M | 75.59% |
+| DARTS(ABS, scale down) | 45 | 547M | 75.19% |
+| PDARTS [7] | 48(48)* | 564M(553M)* | 75.02%(75.58%)* |
+| PDARTS(ABS) | 48 | 645M | 75.89% |
+| PDARTS(ABS, scale down) | 45 | 570M | 75.64% |
+
+close to finished; rather, the change of angle becomes too tiny to distinguish the change of weights. Therefore, to effectively represent the change of weights during the middle of training, we need to reset the base weight used to compute the angle.
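The near-$90^{\circ}$ saturation follows from a general fact: two independent high-dimensional Gaussian vectors are almost orthogonal. The toy snippet below (illustrative, not the paper's experiment) reproduces this:

```python
import numpy as np

# Two independent Gaussian vectors stand in for the initialized weights W0
# and the weights W after substantial training; in very high dimension their
# cosine similarity concentrates around 0, so the angle concentrates near 90°.
rng = np.random.default_rng(0)
d = 100_000                      # dimension on the order of a DNN weight vector
w0 = rng.standard_normal(d)      # stands in for W0
w = rng.standard_normal(d)       # stands in for W after long training
cos = w0 @ w / (np.linalg.norm(w0) * np.linalg.norm(w))
angle = np.degrees(np.arccos(cos))
```

Because the cosine's standard deviation scales like $1/\sqrt{d}$, the measured angle is pinned within a fraction of a degree of $90^{\circ}$, which is why resetting $W_0$ mid-training restores a usable signal.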
+
+When sampling child models, ABS discards models that don't satisfy the flops constraint. For SPOS and ProxylessNAS, ABS removes the 7 operators whose rankings fall at the tail. For FairNAS, ABS removes one operator per layer each time because of its fairness constraint. The shrinking process finishes when the size of the search space is less than $10^{5}$. For re-training, we use the same training setting as [15] to retrain all the searched models from scratch, with one exception: dropout is added before the final fully-connected layer, with a dropout rate of 0.2.
+
+As Table 4 shows, all algorithms obtain significant benefits from ABS. SPOS and ProxylessNAS each find models in the shrunk search space with $0.6\%$ higher accuracy than in the original search space. FairNAS also finds a better model in the shrunk search space, with a $0.2\%$ accuracy improvement.
+
+DARTS Search Space. Following the experiment settings in [24], we run the search procedure on CIFAR-10, then retrain the selected models from scratch and evaluate them on ImageNet. In detail, the block-like weight vector construction procedure is adopted while using ABS. The supernet is trained for 150 epochs in the first shrinking stage and 20 epochs in each subsequent stage; more training epochs are used because of the slow convergence on the DARTS search space. ABS removes one operator per edge each time. The shrinking process stops when the size of the shrunk search space falls below a threshold; DARTS and PDARTS share the same threshold as the MobileNet-like search space. During re-training, all algorithms use the same training setting as [7] to retrain the searched models.
+
+From Table 5, the architectures found by DARTS and PDARTS with ABS on CIFAR-10 perform well on ImageNet. Equipped with ABS, DARTS and PDARTS get $2.2\%$ and $0.87\%$ accuracy improvements respectively without any human interference ($0.71\%$ and $0.31\%$ improvements even compared with the results reported in [24, 7]). Such a large improvement is probably because the architectures found in the shrunk search space have more flops, and it's reasonable that models with higher flops are more likely to have better capability when flops are not constrained. Furthermore, to fairly compare performance under a flops constraint, the channels of the architectures found in the shrunk space are scaled down to fit the constraint. Table 5 shows that even these constrained models from the shrunk search space still achieve better results.
+
+Discussion. Search space shrinking is very useful for NAS [7, 20], and the angle-based metric is extremely suitable for shrinking due to its high correlation with DNN performance and its fast convergence (see Fig. 6). Our results show that ABS can enhance existing NAS algorithms (see Tables 4, 5). However, the metric is not a perfect indicator (see Table 2), so directly searching with it shows no advantage over combining it with other NAS methods: on the MobileNet-like search space, our experiments indicate that SPOS gets only a $0.19\%$ improvement by replacing the accuracy-based metric with our metric, while combined with ABS, SPOS gets a $0.64\%$ improvement. Thus we leverage the metric to perform shrinking.
+
+# 5 Conclusion and Future Work
+
+In this paper, we point out that an elaborately shrunk search space can improve the performance of existing NAS algorithms. Based on this observation, we propose an angle-based search space shrinking method applicable to existing NAS algorithms, named ABS. While applying ABS, we adopt a novel angle-based metric to evaluate the capability of child models and guide the shrinking procedure. We verify the effectiveness of the angle-based metric on NAS-Bench-201, and demonstrate the power of ABS by combining it with various NAS algorithms on multiple search spaces and datasets. All experiments show that the proposed method is highly efficient and can significantly improve existing NAS algorithms.
+
+However, some problems remain unsolved: for example, how to discriminate between average pooling and max pooling, and how to handle more non-parametric operators such as different activation functions [14, 25, 30]. In the future, we will spend more time on discriminating more non-parametric operators with the angle-based metric in NAS. Additionally, we plan to apply the proposed metric to downstream tasks (e.g., detection, segmentation).
+
+# Acknowledgement
+
+This work is supported by the National Key Research and Development Program of China (No.2017YFA0700800), Beijing Academy of Artificial Intelligence (BAAI) and the National Natural Science Foundation of China (No.61673376).
+
+# References
+
+1. Adam, G., Lorraine, J.: Understanding neural architecture search techniques. arXiv preprint arXiv:1904.00438 (2019)
+2. Arora, S., Li, Z., Lyu, K.: Theoretical analysis of auto rate-tuning by batch normalization. arXiv preprint arXiv:1812.03981 (2018)
+3. Bender, G., Kindermans, P.J., Zoph, B., Vasudevan, V., Le, Q.: Understanding and simplifying one-shot architecture search. In: International Conference on Machine Learning. pp. 549-558 (2018)
+4. Cai, H., Gan, C., Han, S.: Once for all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791 (2019)
+5. Cai, H., Zhu, L., Han, S.: Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332 (2018)
+6. Carbonnelle, S., De Vleeschouwer, C.: Layer rotation: a surprisingly simple indicator of generalization in deep networks? (2019)
+7. Chen, X., Xie, L., Wu, J., Tian, Q.: Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. arXiv preprint arXiv:1904.12760 (2019)
+8. Chen, Y., Yang, T., Zhang, X., Meng, G., Xiao, X., Sun, J.: Detnas: Backbone search for object detection. In: Advances in Neural Information Processing Systems. pp. 6638-6648 (2019)
+9. Chu, X., Zhang, B., Xu, R., Li, J.: Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search. arXiv preprint arXiv:1907.01845 (2019)
+10. Dong, X., Yang, Y.: One-shot neural architecture search via self-evaluated template network. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). pp. 3681-3690 (2019)
+11. Dong, X., Yang, Y.: Searching for a robust neural architecture in four gpu hours. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1761-1770 (2019)
+12. Dong, X., Yang, Y.: Nas-bench-201: Extending the scope of reproducible neural architecture search. In: International Conference on Learning Representations (ICLR) (2020), https://openreview.net/forum?id=HJxyZkBKDr
+13. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. pp. 249-256 (2010)
+14. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. pp. 315-323 (2011)
+15. Guo, Z., Zhang, X., Mu, H., Heng, W., Liu, Z., Wei, Y., Sun, J.: Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420 (2019)
+16. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: 2015 IEEE International Conference on Computer Vision (ICCV). pp. 1026-1034 (2015)
+17. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
+18. Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1/2), 81-93 (1938)
+19. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. Communications of the ACM 60(6), 84-90 (2017)
+
+20. Li, X., Lin, C., Li, C., Sun, M., Wu, W., Yan, J., Ouyang, W.: Improving one-shot nas by suppressing the posterior fading. arXiv preprint arXiv:1910.02543 (2019)
+21. Li, Z., Arora, S.: An exponential learning rate schedule for deep learning. arXiv preprint arXiv:1910.07454 (2019)
+22. Liu, C., Chen, L.C., Schroff, F., Adam, H., Hua, W., Yuille, A.L., Fei-Fei, L.: Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 82–92 (2019)
+23. Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L.J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 19-34 (2018)
+24. Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018)
+25. Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proc. icml. vol. 30, p. 3 (2013)
+26. Nayman, N., Noy, A., Ridnik, T., Friedman, I., Jin, R., Zelnik, L.: Xnas: Neural architecture search with expert advice. In: Advances in Neural Information Processing Systems. pp. 1975-1985 (2019)
+27. Noy, A., Nayman, N., Ridnik, T., Zamir, N., Doveh, S., Friedman, I., Giryes, R., Zelnik-Manor, L.: Asap: Architecture search, anneal and prune. arXiv preprint arXiv:1904.04123 (2019)
+28. Pérez-Rúa, J.M., Baccouche, M., Pateux, S.: Efficient progressive neural architecture search. arXiv preprint arXiv:1808.00391 (2018)
+29. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268 (2018)
+30. Ramachandran, P., Zoph, B., Le, Q.V.: Searching for activation functions. arXiv preprint arXiv:1710.05941 (2017)
+31. Wan, R., Zhu, Z., Zhang, X., Sun, J.: Spherical motion dynamics of deep neural networks with batch normalization and weight decay. arXiv preprint arXiv:2006.08419 (2020)
+32. Wang, L., Xie, L., Zhang, T., Guo, J., Tian, Q.: Scalable nas with factorizable architectural parameters. arXiv preprint arXiv:1912.13256 (2019)
+33. Wu, B., Dai, X., Zhang, P., Wang, Y., Sun, F., Wu, Y., Tian, Y., Vajda, P., Jia, Y., Keutzer, K.: Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 10734-10742 (2019)
+34. Xu, H., Yao, L., Zhang, W., Liang, X., Li, Z.: Auto-fpn: Automatic network architecture adaptation for object detection beyond classification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6649-6658 (2019)
+35. Zhang, Y., Lin, Z., Jiang, J., Zhang, Q., Wang, Y., Xue, H., Zhang, C., Yang, Y.: Deeper insights into weight sharing in neural architecture search. arXiv preprint arXiv:2001.01431 (2020)
+36. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8697-8710 (2018)
\ No newline at end of file
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/images.zip b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8e49ec2b460efcd51fb78e231fad6078b5d688ce
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6501f92da70ca25bc3ed70bd9eda086798d582fd4a0fb773da85586982a4cc97
+size 343540
diff --git a/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/layout.json b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..08bb411b40e8331d42b49a47c359838f6674eaa2
--- /dev/null
+++ b/anglebasedsearchspaceshrinkingforneuralarchitecturesearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1aee45f81f7d462788373ddf13afe262d2e9e6ccd06bfdcb4212af8c476dca6
+size 379978
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_content_list.json b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..326779ea6d2652ce19c1195e6d351a9fe2eb3074
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:086a1e630e3809ee93491fb61a3ca1644aebe315c5833779efb7cc6ad15ee6fa
+size 82157
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_model.json b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6c9915274b08528739c7b89c219ca7bdaf30252
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c89eba72d98243dda5bb970c7b5c2f1e31e20127a683e2a8e0ab29d82a304e2f
+size 99597
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_origin.pdf b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2ee04844bfdcfa47a8972ceb390c71a8198060e5
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/dd6de73a-227d-43d8-8a61-7ca28264a8e9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:538d0b91f2dd4cf117b38a566295e4de9ac76ec9c3aef92269cdb49fded3f2af
+size 6055613
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/full.md b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..88f5cbe1f7ca9fa3f8e9b2230edb01fc10edbd26
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/full.md
@@ -0,0 +1,383 @@
+# An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices
+
+Xiaolong Ma $^{1\dagger}$ , Wei Niu $^{2\dagger}$ , Tianyun Zhang $^{3}$ , Sijia Liu $^{4}$ , Sheng Lin $^{1}$ , Hongjia Li $^{1}$ , Wujie Wen $^{5}$ , Xiang Chen $^{6}$ , Jian Tang $^{7}$ , Kaisheng Ma $^{8}$ , Bin Ren $^{2}$ , and Yanzhi Wang $^{1}$
+
+$^{1}$ Northeastern University, Boston MA 02115, USA
+
+{ma.xiaol, yanz.wang}@northeastern.edu
+
+$^{2}$ College of William and Mary, $^{3}$ Syracuse University, $^{4}$ IBM Research, $^{5}$ Lehigh
+
+University, $^{6}$ George Mason University, $^{7}$ DiDi AI Labs, $^{8}$ Tsinghua University
+
+† Equal Contribution
+
+Abstract. Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms. However, most pruning techniques are essentially trade-offs between model accuracy and regularity, which lead to impaired inference accuracy and limited on-device acceleration performance. To solve this problem, we introduce a new sparsity dimension, namely pattern-based sparsity, which comprises pattern and connectivity sparsity and is both highly accurate and hardware friendly. With carefully designed patterns, the proposed pruning unprecedentedly and consistently achieves accuracy enhancement and better feature extraction ability on different DNN structures and datasets, and our pattern-aware pruning framework achieves pattern library extraction, pattern selection, pattern and connectivity pruning, and weight training simultaneously. Our approach to the new pattern-based sparsity naturally fits into compiler optimization for highly efficient DNN execution on mobile platforms. To the best of our knowledge, it is the first time that mobile devices achieve real-time inference for large-scale DNN models, thanks to the unique spatial property of pattern-based sparsity and the code generation capability of compilers.
+
+# 1 Introduction
+
+Weight pruning has been proven to be effective in eliminating redundancy in the original model [7,32,14,24,18,20], therefore accelerating DNN execution on target computing platforms. Non-structured pruning [10] achieves high accuracy, but is limited by its hardware unfriendliness [32,14]. Meanwhile, structured pruning [32] is hardware friendly but suffers from accuracy loss.
+
+It is imperative to seek an approach that can offer, or even go beyond, the best of both types of sparsity. We visualize part of the normalized heat map of a pre-trained VGG-16 model on ImageNet in Figure 1 and find that (i) the effective area (i.e. weights with higher absolute values) forms some specific shapes
+
+
+Fig. 1: Heat map of randomly selected convolution kernels in the third convolutional layer of a VGG-16 on ImageNet dataset. The weight values in each kernel are normalized and darker shade represents higher absolute value.
+
+that repeatedly appear in the model, and (ii) some entire convolution kernels have very small weight values and are effectively void kernels. Motivated by these two observations, we introduce a new sparsity dimension - pattern-based sparsity - which exploits both intra-convolution and inter-convolution kernel sparsity, exhibits both high accuracy and regularity, and reveals a previously unknown point in the design space.
+
+In pattern-based sparsity, we call the intra-convolution kernel sparsity pattern sparsity and the inter-convolution kernel sparsity connectivity sparsity. For pattern sparsity, we prune a fixed number of weights in each convolution kernel, and the remaining weights form specific "kernel patterns". Along this line, we find that some carefully designed kernel patterns have special vision properties that potentially enhance image quality, thereby enhancing the feature extraction ability of DNNs. For connectivity sparsity, we cut the relatively unimportant connections between certain input and output channels, which is equivalent to removing the corresponding kernels. At the algorithm level, we design a novel pattern-aware network pruning framework that efficiently achieves pattern pruning and connectivity pruning without degrading accuracy. We begin by reformulating the pruning problem as an ADMM optimization problem [4], and then solve it iteratively using a Primal-Proximal solution that decouples the stochastic gradient descent process from regularization, enabling a progressive and gradual process of penalizing unimportant weight groups and hence a more accurate selection of the remaining weight patterns. The framework can therefore achieve pattern library extraction, pattern assignment, unimportant connectivity removal, and weight training simultaneously. Our proposed pattern-based sparsity is mobile-hardware friendly with the help of the code generation capability of compilers. More specifically, we design a filter/kernel re-ordering technique that enables compiler optimizations which maintain instruction-level and thread-level parallelism and achieve the maximum possible hardware acceleration.
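As a rough illustration of the two sparsity types (all names are hypothetical, and the actual framework selects patterns and connectivity via ADMM rather than taking them as given), applying a per-kernel pattern mask plus a connectivity mask to a $3\times 3$ convolution weight tensor could look like:

```python
import numpy as np

def apply_pattern_sparsity(weights, kernel_patterns, kept_kernels):
    """Illustrative sketch: `weights` has shape (out_ch, in_ch, 3, 3).
    kernel_patterns[o][i] is a 3x3 0/1 mask drawn from a pattern library
    (pattern sparsity); kept_kernels is a 0/1 matrix over (out_ch, in_ch)
    encoding which input->output connections survive (connectivity sparsity)."""
    pruned = weights.copy()
    out_ch, in_ch = weights.shape[:2]
    for o in range(out_ch):
        for i in range(in_ch):
            if kept_kernels[o, i]:
                pruned[o, i] *= kernel_patterns[o][i]   # keep only pattern positions
            else:
                pruned[o, i] = 0.0                      # remove the whole kernel
    return pruned
```

With 4-entry patterns, every surviving kernel keeps exactly 4 of its 9 weights in a fixed shape, which is what makes the sparsity regular enough for compiler-level re-ordering.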
+
+Our contributions of this paper are summarized as follows:
+
+- We design a set of patterns, namely pattern library, and prove the image enhancement property that is related to pattern pruning. (Section 4)
+- We form a novel pattern-aware network pruning framework that can extract pattern library, perform pattern and connectivity pruning and weight training at the same time. (Section 5)
+- We design the corresponding (algorithm-compiler-hardware) inference framework which fully leverages the new sparsity dimension and achieves real-time DNN execution on mobile devices. (Section 6)
+
+
+Fig. 2: Illustration of pattern-based sparsity.
+
+Section 7 presents the pattern library extraction results, the accuracy and image enhancement results of pattern pruning, the overall pattern-based compression results, and the acceleration results on mobile devices.
+
+# 2 Background
+
+DNN model pruning techniques are studied in early work on non-structured pruning [10], in which an iterative, heuristic method is used with limited, non-uniform model compression rates. The irregular weight distribution causes irregular memory access and thereby execution overheads, which leads to limited acceleration performance. Structured pruning is pioneered by [32][14], in which regular and smaller weight matrices are generated to eliminate the overhead of weight indices and achieve higher acceleration in CPU/GPU executions. However, it suffers from a notable accuracy drop when the pruning rate increases. Kernel-level pruning is studied in [5], where sparse complementary kernels save half of the weights and computations; it differs from our approach because pattern-based sparsity improves both the software and hardware performance of DNNs, while [5] focuses only on parameter and computation reduction without discussing platform acceleration.
+
+Mobile DNN inference frameworks have been studied, including TFLite [1], TVM [6], Alibaba MNN [2], DeepCache [33] and DeepSense [34]. These works do not account for model compression techniques, and their performance is far from the real-time requirement (usually 30 frames/sec). Other works exploit model sparsity to accelerate DNN inference [17][25], but they either do not target mobile platforms (they require new hardware) or trade off compression rate against accuracy, and thus face different challenges than our work.
+
+# 3 Overview
+
+Pattern-based sparsity should exploit the best of both non-structured and structured pruning while avoiding their disadvantages. To that end, we propose two pattern-based pruning dimensions, pattern pruning and connectivity pruning.
+
+Pattern pruning is illustrated in Figure 2, where the white blocks denote a fixed number of pruned weights in each kernel. The remaining (four) green blocks in each kernel have arbitrary weight values, while their locations form a specific pattern. Different kernels can have different patterns, but the total number of pattern styles (i.e., the size of the pattern library) shall be limited. We focus on the $3 \times 3$ kernel pattern in this work because it is widely used in a variety of DNN architectures. For other kernel shapes (e.g., $1 \times 1$ or $5 \times 5$), we either group $1 \times 1$ kernels into $3 \times 3$ kernels and then apply patterns, or use $5 \times 5$ patterns directly (not discussed in this work due to space limits).
+
+Connectivity pruning is illustrated in Figure 2, with gray kernels as pruned ones. Connectivity pruning is a good supplement to pattern pruning, as both can be integrated in the same algorithm-level solution and compiler-assisted mobile inference framework.
+
+Compiler-assisted DNN inference framework uniquely enables optimized code generation to guarantee end-to-end inference execution efficiency supporting pattern-based sparsity. Since DNN computation is executed layerwise, we convert a DNN model into a computational graph, which is embodied by static C++ (for CPU execution) or OpenCL and CUDA (for GPU execution) code. The above two pruning schemes can be naturally combined, which achieves a high pruning (acceleration) rate while maintaining hardware friendliness.
+
+# 4 Pattern Library - Theory and Design
+
+# 4.1 A Unique Perspective on Weight Pruning
+
+Conventionally, weight pruning is considered a technique for removing redundant information. This view inevitably omits other aspects, such as the computer vision properties of pruning. In this work, we consider weight pruning as incorporating an additional convolution mask $P$ on an original kernel. $P$ has the same size as the original kernels, with binary-valued elements (0 and 1). From our perspective, pattern pruning is an element-wise multiplication of different $P$ 's and original kernels. The set of different $P$ 's is the pattern library.
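+
+This mask view can be sketched in a few lines (the kernel values here are illustrative):
+
+```python
+import numpy as np
+
+# A hypothetical 3x3 kernel and a binary pattern mask P with 4 ones.
+kernel = np.arange(1.0, 10.0).reshape(3, 3)
+P = np.array([[1, 1, 0],
+              [1, 1, 0],
+              [0, 0, 0]], dtype=float)
+
+pruned = kernel * P  # element-wise product: pattern pruning
+assert np.count_nonzero(pruned) == 4
+```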
+
+A multi-layer DNN is formed by cascading functional layers. Applying $P$ on every convolution kernel across layers is intrinsically an interpolation operation of $P$ 's. Different patterns can form functional steerable filters [9] (e.g., Gaussian blur filter, sharpen filter, edge detection filter, etc.) by interpolation, and this process only needs a limited number of patterns (i.e., a small pattern library). A small pattern library has two advantages: (i) at the algorithm level, an appropriate number of patterns ensures a flexible search space for achieving a solution with good DNN performance, and (ii) at the compiler level, fewer patterns mean fewer computation paradigms after kernel reordering and grouping, which reduces thread-level divergence.
+
+# 4.2 Pattern Library Design
+
+Our designed patterns can be transformed into a series of steerable filters [9], in our case the Gaussian filter and the Laplacian of Gaussian filter, by interpolating patterns through the DNN layers.
+
+Transform patterns to Gaussian filter: Consider a two-dimensional Gaussian filter $\mathcal{G}$ :
+
+$$
+\mathcal {G} (x, y, \sigma) = \frac {1}{2 \pi \sigma^ {2}} e ^ {- \frac {x ^ {2} + y ^ {2}}{2 \sigma^ {2}}} \tag {1}
+$$
+
+$x$ and $y$ are the input coordinates, and $\sigma^2$ is the variance.
+
+Binomial coefficients give a compact integer approximation of the Gaussian coefficients. To apply the Gaussian filter with a $3 \times 3$ filter size, we utilize the following approximation. Setting $\sigma^2 = \frac{1}{2}$ in (1), the 1-D approximation of the Gaussian filter is $[1\ 2\ 1]$, given by the convolution of two box filters $[1\ 1]$. We then obtain the 2-D approximation of the Gaussian filter by convolving $[1\ 2\ 1]$ and $[1\ 2\ 1]^T$, and the result is $\left[ \begin{array}{lll}1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1 \end{array} \right]$ .
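+
+The two convolutions above can be verified directly:
+
+```python
+import numpy as np
+
+box = np.array([1, 1])
+g1d = np.convolve(box, box)   # box * box -> [1, 2, 1], the 1-D approximation
+g2d = np.outer(g1d, g1d)      # separable 2-D approximation
+
+assert g1d.tolist() == [1, 2, 1]
+assert g2d.tolist() == [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
+```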
+
+Interpolation in a multi-layer DNN is proven to be convergent [30]. We can make a further approximation by interpolating patterns into the convolutional layers (i.e., uniformly mapping patterns to each kernel). In continuous probability space, interpolating patterns into the convolution function defines a probability density function (PDF), so the effect of interpolating patterns is to accumulate the probability expectation of the interpolation over $n$ convolutional layers.
+
+$$
+\underbrace {\left[ \begin{array}{l l l} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{array} \right]} _ {n \text { interpolations}} = \left[ \begin{array}{l l l} p & 2 p & p \\ 2 p & 4 p & 2 p \\ p & 2 p & p \end{array} \right] ^ {n} = p ^ {n} \left[ \begin{array}{l l l} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{array} \right] ^ {n} \tag {2}
+$$
+
+The four pattern masks $P$ shown in colored positions in (2) form the Gaussian filter through interpolation. The coefficient $p$ has no effect after normalization.
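+
+A quick numerical check of (2): uniformly averaging the four corner masks yields the scaled binomial Gaussian approximation with $p = 1/4$:
+
+```python
+import numpy as np
+
+# The four 2x2-corner pattern masks from (2).
+masks = [np.zeros((3, 3)) for _ in range(4)]
+masks[0][0:2, 0:2] = 1   # upper-left pattern
+masks[1][0:2, 1:3] = 1   # upper-right pattern
+masks[2][1:3, 0:2] = 1   # lower-left pattern
+masks[3][1:3, 1:3] = 1   # lower-right pattern
+
+expectation = np.mean(masks, axis=0)  # uniform interpolation of the four patterns
+gaussian = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
+assert np.allclose(expectation, gaussian)
+```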
+
+Transform patterns to Laplacian of Gaussian filter: The Laplacian operator is a second derivative operator. According to the associative property, smoothing an image with Gaussian filter and then applying Laplacian operator is equivalent to convolve the image with the Laplacian of Gaussian (LoG) filter:
+
+$$
+\nabla^ {2} \mathcal {G} (x, y, \sigma) = \left(\frac {x ^ {2} + y ^ {2}}{\sigma^ {4}} - \frac {2}{\sigma^ {2}}\right) \mathcal {G} (x, y, \sigma) \tag {3}
+$$
+
+LoG has elegant mathematical properties, and is valid for a variety of applications including image enhancement, edge detection, and stereo matching.
+
+Taylor series expansion is utilized to determine the approximate values of the LoG filter with $3 \times 3$ filter size. First, we consider the 1-D situation. The Taylor series expansions of 1-D Gaussian filter $\mathcal{G}(x)$ are given by:
+
+$$
+\mathcal {G} (x + \delta) = \mathcal {G} (x) + \delta \mathcal {G} ^ {\prime} (x) + \frac {1}{2} \delta^ {2} \mathcal {G} ^ {\prime \prime} (x) + \frac {1}{3 !} \delta^ {3} \mathcal {G} ^ {\prime \prime \prime} (x) + \mathcal {O} (\delta^ {4}) \tag {4}
+$$
+
+$$
+\mathcal {G} (x - \delta) = \mathcal {G} (x) - \delta \mathcal {G} ^ {\prime} (x) + \frac {1}{2} \delta^ {2} \mathcal {G} ^ {\prime \prime} (x) - \frac {1}{3 !} \delta^ {3} \mathcal {G} ^ {\prime \prime \prime} (x) + \mathcal {O} (\delta^ {4}) \tag {5}
+$$
+
+By summing (4) and (5), we have
+
+$$
+\left[ \mathcal {G} (x - \delta) - 2 \mathcal {G} (x) + \mathcal {G} (x + \delta) \right] / \delta^ {2} = \nabla^ {2} \mathcal {G} (x) + \mathcal {O} (\delta^ {2}) \tag {6}
+$$
+
+Applying the central difference approximation of the LoG $\nabla^2 \mathcal{G}(x)$, we derive the 1-D approximation of the LoG filter as $[1\ {-2}\ 1]$. Then we procure the 2-D approximation of the LoG filter by convolving $[1\ {-2}\ 1]$ and $[1\ {-2}\ 1]^T$, and get $\left[ \begin{array}{rrr} -1 & 2 & -1 \\ 2 & -4 & 2 \\ -1 & 2 & -1 \end{array} \right]$ as the first approximation. According to (6), we have
+
+$$
+\nabla^ {2} \mathcal {G} (x, y) = \left(\left[ \begin{array}{l l l} 1 & - 2 & 1 \end{array} \right] + \left[ \begin{array}{l} 1 \\ - 2 \\ 1 \end{array} \right]\right) * \mathcal {G} (x, y) \tag {7}
+$$
+
+Based on (7), we derive the second approximation as $\left[ \begin{array}{rrr}0 & 1 & 0\\ 1 & -4 & 1\\ 0 & 1 & 0 \end{array} \right]$ .
+
+According to the central limit theorem, the convolution of two Gaussian functions is still a Gaussian function. Hence, we convolve the above two approximations of LoG and then apply normalization, and get the Enhanced Laplacian of Gaussian (ELoG) filter as $\left[ \begin{array}{lll}0 & 1 & 0\\ 1 & 8 & 1\\ 0 & 1 & 0 \end{array} \right]$ .
+
+Similarly, we make the further approximation by interpolating patterns into convolutional layers.
+
+$$
+\underbrace {\left[ \begin{array}{l l l} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{array} \right] \cdots \left[ \begin{array}{l l l} 0 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{array} \right]} _ {n \text { interpolations}} = \left[ \begin{array}{c c c} 0 & p & 0 \\ p & 1 & p \\ 0 & p & 0 \end{array} \right] ^ {n} \tag {8}
+$$
+
+The four pattern masks $P$ shown in colored positions in (8) form the ELoG filter through interpolation. To get the best approximation to the ELoG filter, we set $p = 0.75$ and $n = 8$; the desired filter then equals interpolating these four patterns eight times. The coefficient $p$ is absorbed by normalization.
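+
+Under our reading of (8), each of the four masks keeps the center and three of the four cross neighbors; averaging them uniformly reproduces the value $p = 0.75$ used above:
+
+```python
+import numpy as np
+
+cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
+masks = []
+for (r, c) in [(0, 1), (1, 0), (1, 2), (2, 1)]:  # drop one neighbor each time
+    m = cross.copy()
+    m[r, c] = 0
+    masks.append(m)
+
+expectation = np.mean(masks, axis=0)
+# center stays 1; each neighbor survives in 3 of 4 masks -> p = 0.75
+assert expectation[1, 1] == 1.0
+assert expectation[0, 1] == 0.75
+```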
+
+# 5 Pattern-Aware Network Pruning Framework for Pattern Library Extraction
+
+In Section 4, we determined the (eight) patterns of our pattern library through theoretical derivation. However, are these theoretically derived patterns also the most desirable at the algorithm level? How do we select the appropriate pattern for each kernel and train the corresponding (remaining) weights? To answer these questions, we propose a novel pattern-aware network pruning framework that simultaneously achieves pattern library extraction (with a predefined number of patterns in the library), pattern assignment, and weight training.
+
+In pattern library extraction, we start from a large library comprising all possible candidate patterns. By extending ADMM [4] and incorporating the Primal-Proximal solution technique, we make convolution kernels dynamically "select" the best-suited patterns within the library and train the unpruned weights. We then delete the least-selected patterns from the library, thereby updating it. This process is iterated on the updated library; a single step is described below.
+
+# 5.1 Pattern Library Extraction - A Single Step
+
+For an $N$ -layer DNN of interest, let $\mathbf{W}$ denote the collection of weights for all $3\times 3$ kernels, i.e., $\mathbf{W} = \{\mathbf{W}_i\}_{i = 1}^N$ . The pattern of each kernel $\mathbf{W}_i$ is restricted to a finite pattern library $\varOmega=\{\mathbf{M}_1,\ldots,\mathbf{M}_j,\ldots,\mathbf{M}_K\}$ , where $\mathbf{M}_j$ denotes a binary mask, and $K$ denotes the total number of possible patterns. We choose to reserve 4 non-zero entries in a kernel to match the SIMD (single-instruction multiple-data) architecture of embedded CPU/GPU processors, thereby maximizing throughput. As a result, the initial $K = \binom{9}{4} = 126$ , and $K$ will decrease in each step.
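+
+The initial candidate library can be enumerated directly:
+
+```python
+from itertools import combinations
+import numpy as np
+
+def candidate_patterns(kernel_size=9, nonzeros=4):
+    """All binary 3x3 masks with exactly `nonzeros` ones."""
+    patterns = []
+    for idx in combinations(range(kernel_size), nonzeros):
+        m = np.zeros(kernel_size)
+        m[list(idx)] = 1
+        patterns.append(m.reshape(3, 3))
+    return patterns
+
+library = candidate_patterns()
+assert len(library) == 126  # C(9, 4)
+```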
+
+The purpose of each step is to select a pattern from the current library for each kernel, and train the non-zero weights. Let $f(\mathbf{W};\mathcal{D})$ denote the training loss ( $\mathcal{D}$ denotes training data), we pose the following optimization problem
+
+$$
+\underset {\mathbf {W}, \mathbf {z}} {\text {minimize}} \;\; f \left(\left\{\mathbf {W} _ {i} \circ \left(\sum_ {j = 1} ^ {K} z _ {j} \mathbf {M} _ {j}\right) \right\} _ {i = 1} ^ {N}; \mathcal {D}\right) \tag {9}
+$$
+
+$$
+\text {subject to} \quad z _ {j} \in \{0, 1 \}, \; \forall j, \quad \sum_ {j = 1} ^ {K} z _ {j} = 1,
+$$
+
+where $z_{j}$ denotes the Boolean selection variable indicating which pattern in $\Omega$ is chosen for $\mathbf{W}_i$ . The constraint $\sum_{j=1}^{K} z_{j} = 1$ enforces that exactly one pattern is selected, so $\mathbf{W}_i \circ (\sum_{j=1}^{K} z_{j} \mathbf{M}_{j})$ denotes the kernel pruned with one of the pruning patterns. Here $\circ$ denotes the element-wise product. In (9), we have two types of optimization variables: (i) the $3 \times 3$ kernel weights $\mathbf{W}$ , and (ii) the pattern Boolean selection variables $\mathbf{z} \in \{0,1\}^{K}$ . The pattern selection scheme is co-optimized with non-zero weight training.
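+
+The selection term in (9) can be illustrated with a toy library (the masks and kernel values here are made up for illustration):
+
+```python
+import numpy as np
+
+# Toy pattern library of K = 3 binary masks.
+masks = [np.eye(3),
+         np.fliplr(np.eye(3)),
+         np.tril(np.ones((3, 3)))]
+z = np.array([0.0, 1.0, 0.0])   # Boolean selection: pattern 1 is chosen
+
+selected = sum(zj * Mj for zj, Mj in zip(z, masks))
+kernel = np.arange(1.0, 10.0).reshape(3, 3)
+pruned = kernel * selected      # W_i ∘ (Σ_j z_j M_j), ∘ = element-wise product
+
+assert np.array_equal(selected, masks[1])
+```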
+
+To solve the above problem analytically, we introduce auxiliary variables $\mathbf{u}$ together with constraints $\mathbf{z} = \mathbf{u}$ . Based on that, we reformulate problem (9) as
+
+$$
+\begin{array}{l l} \underset {\mathbf {W}, \mathbf {z}, \mathbf {u}} {\text {minimize}} & f \left(\left\{\mathbf {W} _ {i} \circ \left(\sum_ {j = 1} ^ {K} z _ {j} \mathbf {M} _ {j}\right) \right\} _ {i = 1} ^ {N}; \mathcal {D}\right) + \mathcal {I} (\mathbf {u}) \\ \text {subject to} & \mathbf {z} = \mathbf {u} \end{array} \tag {10}
+$$
+
+where $\mathcal{I}(\mathbf{u})$ is the indicator function
+
+$$
+\mathcal {I} (\mathbf {u}) = \left\{ \begin{array}{l l} 0 & \text {if } u _ {j} \in [ 0, 1 ], \; \forall j, \; \sum_ {j = 1} ^ {K} u _ {j} = 1, \\ \infty & \text {otherwise.} \end{array} \right. \tag {11}
+$$
+
+Here we relax the binary selection variable $z_{j} \in \{0,1\}$ to the (continuous) probabilistic selection variable $u_{j} \in [0,1]$ .
+
+The augmented Lagrangian function of problem (10) is given by
+
+$$
+\begin{array}{l} \mathcal {L} (\mathbf {W}, \mathbf {z}, \mathbf {u}, \boldsymbol {\mu}) = f \left(\left\{\mathbf {W} _ {i} \circ \left(\sum_ {j = 1} ^ {K} z _ {j} \mathbf {M} _ {j}\right) \right\} _ {i = 1} ^ {N}; \mathcal {D}\right) \tag {12} \\ + \mathcal {I} (\mathbf {u}) + \boldsymbol {\mu} ^ {T} (\mathbf {z} - \mathbf {u}) + \frac {\rho}{2} \| \mathbf {z} - \mathbf {u} \| _ {2} ^ {2} \\ \end{array}
+$$
+
+where $\pmb{\mu}$ denotes the Lagrangian multipliers, and $\| \cdot \| _2$ denotes the Frobenius norm. $\rho >0$ is a given augmented penalty value, and for ease of notation we view matrices as vectors in the optimization.
+
+ADMM is then given by the following alternating optimization process. At iteration $t$ , ADMM yields
+
+$$
+\mathbf {W} ^ {(t)}, \mathbf {z} ^ {(t)} = \underset {\mathbf {W}, \mathbf {z}} {\arg \min } \; \mathcal {L} (\mathbf {W}, \mathbf {z}, \mathbf {u} ^ {(t - 1)}, \boldsymbol {\mu} ^ {(t - 1)}) \quad \text {(Primal)}
+$$
+
+$$
+\mathbf {u} ^ {(t)} = \underset {\mathbf {u}} {\arg \min } \; \mathcal {L} (\mathbf {W} ^ {(t)}, \mathbf {z} ^ {(t)}, \mathbf {u}, \boldsymbol {\mu} ^ {(t - 1)}) \quad \text {(Proximal)}
+$$
+
+$$
+\boldsymbol {\mu} ^ {(t)} = \boldsymbol {\mu} ^ {(t - 1)} + \rho (\mathbf {z} ^ {(t)} - \mathbf {u} ^ {(t)}), \tag {13}
+$$
+
+where the initial values $\mathbf{u}^{(0)}$ and $\pmb{\mu}^{(0)}$ are given.
+
+Problem (Primal) can be simplified to
+
+$$
+\underset {\mathbf {W}, \mathbf {z}} {\text {minimize}} \;\; f \left(\left\{\mathbf {W} _ {i} \circ \left(\sum_ {j = 1} ^ {K} z _ {j} \mathbf {M} _ {j}\right) \right\} _ {i = 1} ^ {N}; \mathcal {D}\right) + \frac {\rho}{2} \| \mathbf {z} - \mathbf {a} \| _ {2} ^ {2} \tag {14}
+$$
+
+where $\mathbf{a} := (\mathbf{u}^{(t-1)} - (1/\rho) \boldsymbol{\mu}^{(t-1)})$ . In problem (14), the objective function is differentiable and can thus be solved by standard DNN solvers such as stochastic gradient descent (SGD).
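+
+As a sketch of how (14) is handled, the quadratic penalty simply adds $\rho(\mathbf{z} - \mathbf{a})$ to the gradient. Here a toy quadratic stands in for $f$ (in the paper, $f$ is the DNN training loss):
+
+```python
+import numpy as np
+
+# Toy primal step for (14): minimize f(z) + (rho/2)||z - a||^2 by gradient descent.
+rho, a = 1.0, np.array([0.2, 0.8])
+t = np.array([1.0, 0.0])                 # stand-in target inside f
+f_grad = lambda z: 2 * (z - t)           # gradient of the stand-in f(z) = ||z - t||^2
+
+z = np.zeros(2)
+for _ in range(500):
+    z -= 0.05 * (f_grad(z) + rho * (z - a))
+
+# stationary point: 2(z - t) + rho(z - a) = 0  ->  z = (2t + rho*a) / (2 + rho)
+expected = (2 * t + rho * a) / (2 + rho)
+assert np.allclose(z, expected, atol=1e-4)
+```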
+
+Problem (Proximal) can be equivalently decomposed over $\mathbf{u}$ , which leads to the problem
+
+$$
+\underset {\mathbf {u}} {\text {minimize}} \;\; \frac {\rho}{2} \| \mathbf {u} - \mathbf {d} \| _ {2} ^ {2} \tag {15}
+$$
+
+$$
+\text {subject to} \quad u _ {j} \in [ 0, 1 ], \; \forall j, \quad \sum_ {j = 1} ^ {K} u _ {j} = 1,
+$$
+
+where $\mathbf{d} \coloneqq \mathbf{z}^{(t)} + (1 / \rho)\pmb{\mu}^{(t - 1)}$ .
+
+Based on [26], the analytical solution to problem (15) is
+
+$$
+\mathbf {u} ^ {(t)} = \left[ \mathbf {d} - \nu \mathbf {1} \right] _ {+}, \tag {16}
+$$
+
+where $[x]_{+} = x$ if $x\geq 0$ and 0 otherwise, and $\nu$ is the root of the equation
+
+$$
+\mathbf {1} ^ {T} [ \mathbf {d} - \nu \mathbf {1} ] _ {+} = 1. \tag {17}
+$$
+
+Once $\mathbf{W}$ and $\mathbf{z}$ are solved, $\mathbf{z}$ is a continuous variable rather than a binary one. We need an intermediate step to project the continuous $\mathbf{z}_{\mathrm{admm}}$ onto the integer $\mathbf{z}_{\mathrm{binary}}$ , yielding
+
+$$
+\underset {\mathbf {z} _ {\text {binary}}} {\text {minimize}} \;\; \| \mathbf {z} _ {\text {binary}} - \mathbf {z} _ {\text {admm}} \| _ {2} ^ {2} \tag {18}
+$$
+
+$$
+\text {subject to} \quad \mathbf {1} ^ {T} \mathbf {z} _ {\text {binary}} = 1, \; z _ {i} \in \{0, 1 \}, \; \forall i.
+$$
+
+The solution is given by $[\mathbf{z}_{\mathrm{binary}}]_i = 1$ if $i = \operatorname{argmax}_j[\mathbf{z}_{\mathrm{admm}}]_j$ , and 0 otherwise. At this point, we have simultaneously selected a pattern for each kernel and trained the non-zero weights.
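+
+The projection (18) reduces to a one-hot argmax, e.g.:
+
+```python
+import numpy as np
+
+z_admm = np.array([0.65, 0.35, 0.0])   # continuous solution from ADMM
+z_binary = np.zeros_like(z_admm)
+z_binary[np.argmax(z_admm)] = 1        # nearest one-hot vector, solving (18)
+assert z_binary.tolist() == [1, 0, 0]
+```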
+
+# 5.2 Pattern Library Extraction - Overall
+
+The overall pattern library extraction starts from $K = 126$ and decreases $K$ in each step, as outlined in Algorithm 1. In the actual implementation we set the new $K$ to 12 after the first step, since most patterns occur only a few times. We set the target $K$ to 12, 8, or 4. When the number of patterns is within this range, the code generation overhead at the compiler level can be kept small and parallelism can be maximized.
+
+Total Runtime: Despite being an iterative process, the total number of epochs (and the training time) can be limited. This is because, except for the last step, we only need to extract patterns rather than finish the final training of the non-zero weights. As a result, each step takes only $10\%$ to $20\%$ of the total epochs needed to train the original DNN. In the last step, we need around 9 to 12 ADMM iterations, each requiring less than $20\%$ of the total epochs of original DNN training. So the total number of training epochs using PyTorch [27] is around 300 to 400 for the whole process, which is even lower than in many prior works [10,22].
+
+Algorithm 1: Pattern library extraction process.
+```text
+1  Initialization: $\Omega = \{\mathbf{M}_1, \mathbf{M}_2, \ldots, \mathbf{M}_K\}$ with $K = 126$;
+   Result: subset $\Omega'$ with $K = 12$, $8$, or $4$;
+2  while training neural network do
+3      Update $\mathbf{W}$ by solving (Primal);
+4      for $K \leftarrow 126$ until $K = 12$, $8$, or $4$ do
+5          Solve (Proximal) using the current $\Omega$;
+6          Update $\boldsymbol{\mu}$ in (13);
+7          Calculate the pattern distribution of the current $\Omega$;
+8          Remove the patterns with the fewest occurrences from $\Omega$;
+9      end
+10 end
+```
+
+# 6 Connectivity Sparsity and the New Sparsity Induced Inference Framework
+
+# 6.1 Connectivity Sparsity
+
+Connectivity sparsity is achieved by connectivity pruning, which can be integrated into the same algorithm-level solution of Section 5.1 and the same compiler-assisted mobile inference framework. Using the notation of Section 5.1, we define the collection of weights in the $i$ -th layer as $\mathbf{W}_i \in \mathbb{R}^{H_i \times W_i \times F_i \times C_i}$ , where $H$ and $W$ denote the dimensions of the convolution kernel, and $F$ and $C$ denote the number of filters and channels, respectively. We further define a critical connectivity score for each convolution kernel as
+
+$$
+\gamma_ {i, f, c} \left(\mathbf {W} _ {i}\right) = \left\| \left[ \mathbf {W} _ {i} \right] _ {:, :, f, c} \right\| _ {2} \tag {19}
+$$
+
+where $f$ and $c$ are filter and channel indices, respectively. The problem formulation and solution framework for achieving connectivity sparsity are similar to those in Section 5.1; the difference is that the constraint in the framework is related to $\gamma_{i,f,c}$ . Please note that our algorithm-level solution can solve the pattern and connectivity pruning problems simultaneously or individually.
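+
+Equation (19) can be computed for all kernels of a layer at once; the weights below are random stand-ins:
+
+```python
+import numpy as np
+
+# Hypothetical layer weights with H = W = 3, F = 4 filters, C = 2 channels.
+rng = np.random.default_rng(0)
+W = rng.standard_normal((3, 3, 4, 2))
+
+# gamma_{i,f,c} = || W[:, :, f, c] ||_2, one score per kernel (eq. 19)
+gamma = np.linalg.norm(W.reshape(9, 4, 2), axis=0)
+assert gamma.shape == (4, 2)
+
+# connectivity pruning: zero out the kernel with the smallest score
+f, c = np.unravel_index(np.argmin(gamma), gamma.shape)
+W[:, :, f, c] = 0
+```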
+
+# 6.2 Compiler-assisted Inference Framework for Real-time Execution
+
+After we obtain a DNN model combining pattern and connectivity sparsity, we use a compiler-assisted inference framework to maximize execution efficiency, utilizing multiple optimization techniques induced by pattern-based sparsity. The compiler optimizations shown in Figure 3 target the DNN computation graph and memory access for on-device execution.
+
+
+Fig. 3: Overview of the compiler level DNN inference framework.
+
+Layerwise optimization for the DNN computation graph is designed to achieve the best instruction-level and thread-level parallelism by utilizing the unique filter/kernel re-ordering technique, as Figure 3 shows. In the weight matrix illustration, the internal squares with different colors denote different pattern styles, and empty white squares denote connectivity sparsity. By filter/kernel re-ordering, we (i) organize filters with similar kernels together to improve inter-thread parallelism, and (ii) group kernels with identical patterns within each filter to improve intra-thread parallelism. Through this DNN computation graph optimization, the generated execution code eliminates execution branches, implying higher instruction-level parallelism; meanwhile, similar filter groups increase execution similarity and result in good load balance, achieving better thread-level parallelism.
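+
+The re-ordering idea can be sketched as sorting filters by the multiset of their kernels' pattern IDs, so that adjacent threads execute near-identical code paths (the pattern IDs here are made up for illustration):
+
+```python
+# Each filter is described by the pattern ID of each of its kernels.
+filters = {0: [3, 1, 3], 1: [2, 2, 0], 2: [3, 3, 1], 3: [2, 0, 2]}
+
+# Sort filters by their sorted pattern-ID lists: filters with the same
+# pattern multiset become adjacent, forming similar filter groups.
+order = sorted(filters, key=lambda f: sorted(filters[f]))
+assert order == [1, 3, 0, 2]
+```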
+
+Memory access optimizations for hardware execution address the poor memory performance caused by irregular memory access. In DNN execution, the input/output data access is associated with the non-zero elements of the weights. Since in a pattern-based sparse model the non-zero pattern of each kernel is known in advance, we can generate data access code with this information for each kernel pattern and call it dynamically during DNN execution. With this data access code, it is possible to directly access the valid input data associated with the non-zero elements of a pattern-based kernel. Moreover, after the DNN computation graph optimization, the model weight distribution is highly compact and structured, as Figure 3 shows, which reduces the calling frequency of the data access code and, as a result, the memory overhead.
+
+# 7 Experimental Results
+
+In our experiments, the generated pattern-based sparse models are based on four widely used network structures, VGG-16 [29], ResNet-18/50 [11] and MobileNet-V2 [15], and are trained on a server with eight NVIDIA RTX 2080Ti GPUs using PyTorch [27]. We show the consistency of the pattern library extraction results with the theoretically designed pattern library of Section 4.2, and provide accuracy improvement and image enhancement demonstrations. We also show the overall compression results of pattern-based pruning on different DNN models. To show the acceleration of pattern-based sparsity on mobile devices, we compare it with three state-of-the-art DNN inference acceleration frameworks,
+
+
+Fig. 4: The pattern library extraction result. When $K = 32$ after two steps, the pattern distribution is shown in (b), with different colors representing the different pattern styles in (a). The 20 less significant patterns account for only $12\%$ of the total 32 patterns, and the remaining 12 patterns form the Phase 1 pattern library. Continuing the extraction steps yields the Phase 2 and Phase 3 pattern libraries, as (a) shows.
+
+TFLite [1], TVM [6], and MNN [2]. Our experiments are conducted on a Samsung Galaxy S10 phone with the latest Qualcomm Snapdragon 855 mobile platform, which consists of a Qualcomm Kryo 485 octa-core CPU and a Qualcomm Adreno 640 GPU.
+
+# 7.1 Pattern Library Extraction Result
+
+We use VGG-16 on the ImageNet dataset to extract pattern libraries. VGG-16 has more than 1,630,000 convolution kernels. However, the patterns can be concentrated into 12 styles within only a couple of steps. Figure 4 shows the pattern style distribution when $K$ decreases to 32 after two steps. We can see that most of the patterns fall into the top 12 styles, namely the Phase 1 pattern library. If we continue to decrease $K$ to 8, the remaining 8 patterns form the Phase 2 pattern library. Notably, Phase 2 is exactly the same as the pattern library derived in Section 4.2. A further extraction step gives the Phase 3 pattern library, i.e., the top-4 pattern styles. Using other DNNs and datasets gives the same extraction results, so we conclude that the theoretically derived patterns are also the most desirable ones at the algorithm level.
+
+# 7.2 Visualization Demonstration and Accuracy Analysis for Pattern Pruning
+
+After we obtain the extracted pattern libraries in three phases (i.e., containing 12, 8 or 4 patterns respectively), we need to validate the image enhancement effects and evaluate the accuracy of the pattern pruned DNN.
+
+Visualization comparisons of applying the Phase 2 pattern library to an original DNN model (pattern pruning) are demonstrated in Figure 5. To ensure fairness in the comparisons, we adopt three visualization methods to eliminate the impact of incidental factors: (a) guided backpropagation (BP) [31], (b) integrated gradients [23], and (c) inverted representation [3]. Through these different visualization techniques, we can see what a DNN has learned and how well it preserves the photographically accurate information in an image.
+
+
+Fig. 5: Visualization comparisons of three images from ImageNet dataset on original and pattern pruned VGG-16 model using (a) guided-backpropagation (BP); (b) integrated gradients and (c) inverted representation methods.
+
+We provide strong evidence in Figure 5 that the pattern-pruned VGG-16 model effectively captures more image details and less noise than the original VGG-16 model. We conclude that the accuracy improvement is attributable to the enhanced image processing ability of our designed pattern library.
+
+Accuracy evaluation is shown in Figure 6 (a). Starting from baseline accuracy results that are in many cases higher than those of prior works, our first conclusion is that the accuracy improvements are most significant when applying the designed 8 patterns (i.e., the Phase 2 pattern library) to each convolution kernel. The accuracy improvements are consistently observed across network structures (e.g., VGG-16, ResNet-18/50, MobileNet-V2) on the CIFAR-10 and ImageNet datasets.
+
+
+(a)
+
+
+(b)
+Fig. 6: (a) Accuracy improvement results from pattern pruning on different DNN models and datasets (CIFAR-10 & ImageNet). (b) Training curves for connectivity sparsity under overall $6 \times$ compression for ResNet-18 on ImageNet.
+
+Table 1: Pattern-based pruning results (%) on convolution layers for CIFAR-10 and ImageNet using VGG-16, ResNet-18 and ResNet-50. (O: original, P: pruned)
+
+| Network | Pruning Framework | CIFAR-10 Top-1 (O) | CIFAR-10 Top-1 (P) | CIFAR-10 Comp. Rate | Sparse Type | ImageNet Top-1 (O) | ImageNet Top-1 (P) | ImageNet Top-5 (O) | ImageNet Top-5 (P) | ImageNet Comp. Rate |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ResNet-18† | AMC [13] | 90.5 | 90.2 | 2.0× | Struct. | - | - | - | - | - |
+| | Tiny [21] | 94.1 | 93.2 | 15.1× | Struct. | N/A | N/A | 89.1 | 88.4 | 3.3× |
+| | TAS [8] | 92.8 | 92.8 | 1.8× | Struct. | 70.6 | 69.1 | 89.8 | 89.2 | 1.5× |
+| | FPGM [12] | 92.2 | 91.9 | 2.5× | Struct. | 70.2 | 68.3 | 89.6 | 88.5 | 3.3× |
+| | Ours | 94.0 | 94.7 | 8.0× | Phase 2 | 69.9 | 69.6 | 89.1 | 89.2 | 4.0× |
+| | Ours | 94.0 | 94.6 | 12.0× | Phase 3 | 69.9 | 68.2 | 89.1 | 88.3 | 6.0× |
+| | Ours | 94.0 | 94.2 | 16.0× | Phase 2 | 69.9 | 67.1 | 89.1 | 87.7 | 8.0× |
+| ResNet-50* | One Shot [19] | 93.8 | 93.6 | 2.5× | Irreg. | - | - | - | - | - |
+| | ADMM-NN [28] | - | - | - | - | N/A | N/A | N/A | 92.3 | 7.0× |
+| | TAS [8] | 94.5 | 93.7 | 2.0× | Struct. | 77.5 | 76.2 | 93.5 | 93.1 | 1.7× |
+| | GAL [16] | 93.3 | 90.4 | 2.9× | Struct. | 76.4 | 69.3 | 92.8 | 89.1 | 2.5× |
+| | FPGM [12] | 93.6 | 93.5 | 2.5× | Struct. | 76.2 | 75.6 | 92.8 | 92.6 | 3.3× |
+| | GBN [35] | - | - | - | - | 75.8 | 75.2 | 92.7 | 92.4 | 2.2× |
+| | Ours | 94.2 | 95.2 | 8.0× | Phase 3 | 76.1 | 75.9 | 92.9 | 92.7 | 3.9× |
+| | Ours | 94.2 | 94.9 | 12.0× | Phase 3 | 76.1 | 75.8 | 92.9 | 92.8 | 4.9× |
+| | Ours | 94.2 | 94.5 | 16.0× | Phase 3 | 76.1 | 75.6 | 92.9 | 92.6 | 5.8× |
+| VGG-16 | NeST [7] | - | - | - | - | 71.6 | 69.3 | 90.4 | 89.4 | 6.5× |
+| | ADMM-NN [28] | - | - | - | - | 69.0 | 68.7 | 89.1 | 88.9 | 10.2× |
+| | DecorReg [36] | 93.5 | 93.3 | 8.5× | Struct. | 73.1 | 73.2 | N/A | N/A | 3.9× |
+| | GAL [16] | 93.9 | 90.8 | 5.6× | Struct. | - | - | - | - | - |
+| | Ours | 93.5 | 93.4 | 8.0× | Phase 2 | 74.5 | 74.4 | 91.7 | 91.5 | 8.0× |
+| | Ours | 93.5 | 93.3 | 11.6× | Phase 2 | 74.5 | 74.1 | 91.7 | 91.3 | 10.0× |
+| | Ours | 93.5 | 93.2 | 19.7× | Phase 1 | 74.5 | 73.6 | 91.7 | 91.0 | 12.0× |
+
+† TAS and FPGM use the ResNet-20 network structure on the CIFAR-10 dataset.
+* TAS, GAL, and FPGM use the ResNet-56 network structure on the CIFAR-10 dataset.
+
+# 7.3 Connectivity Pruning and Overall Model Compression Results
+
+Combining connectivity sparsity with pattern sparsity yields different DNN performance depending on the pattern library. Figure 6 (b) illustrates the testing accuracies when training connectivity sparsity on top of existing pattern sparsity. From the diagram, we can clearly see that using the designed pattern library (Phase 2) achieves better training performance and thereby higher DNN accuracy. A similar trend is observed at different compression rates and on different networks/datasets. Please note that pattern sparsity already provides a $2.25 \times$ compression rate, and we add different connectivity compression rates on top of it to achieve the different overall compression rates. Table 1 records the best final DNN accuracies and compression rates with their pattern styles, compared with several pruning methods and their sparsity types.
+
+# 7.4 Performance Evaluation on Mobile Platform
+
+In this part, we demonstrate our evaluation results on mobile devices. To guarantee fairness, all frameworks use the same pattern-based sparse model, and we enable the fully optimized configurations of TFLite, TVM and MNN (e.g., Winograd optimization is turned on).
+
+Execution time. Figure 7 shows the mobile CPU/GPU execution time of the pattern-based models on different platforms. Since the Phase 2 pattern library performs best for pruning, our test models use Phase 2 patterns with an $8 \times$ overall compression rate for ResNet-18, $5.8 \times$ for ResNet-50 and $12 \times$ for VGG-16. Inference uses images from the ImageNet dataset. Our approach achieves significant acceleration on mobile devices compared with the other frameworks. Real-time execution usually requires 30 frames/sec (i.e., $33\,ms$ per frame). All of our DNN models on ImageNet meet or far exceed this requirement, and some of them accomplish real-time inference even on the mobile CPU.
+
+
+Fig. 7: Inference time (ms) comparisons for different mobile inference frameworks using images from the ImageNet dataset.
+
+
+
+
+
+# 8 Conclusion
+
+This paper proposes pattern-based sparsity, along with a highly efficient algorithm-level pruning framework and a novel compiler-level inference framework. Pattern-based sparsity inherits the flexibility of non-structured sparsity and the regularity of structured sparsity, achieving both a highly accurate/compressed model and hardware friendliness. In particular, with a carefully designed pattern library, pattern pruning achieves image enhancement and accuracy improvement. Pattern-based sparsity also enables compiler optimizations, achieving real-time inference on mobile devices for various representative large-scale DNNs.
+
+# 9 Acknowledgment
+
+This work is supported by the National Science Foundation CCF-1919117, CCF-1937500, CNS-1909172, CNS-2011260, and is sponsored by DiDi GAIA Research Collaboration Initiative. We thank all anonymous reviewers for their feedback.
+
+# References
+
+1. https://www.tensorflow.org/mobile/tflite/
+2. https://github.com/alibaba/MNN
+3. Aravindh, M., Andrea, V.: Understanding deep image representations by inverting them. In: Computer Vision and Pattern Recognition, 2015. CVPR 2015. IEEE Conference on (2015)
+4. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3(1), 1-122 (2011)
+5. Chen, C.F., Oh, J., Fan, Q., Pistoia, M.: Sc-conv: Sparse-complementary convolution for efficient model utilization on cnns. In: 2018 IEEE International Symposium on Multimedia (ISM). pp. 97–100. IEEE (2018)
+6. Chen, T., Moreau, T., Jiang, Z., Zheng, L., Yan, E., Shen, H., Cowan, M., Wang, L., Hu, Y., Ceze, L., et al.: TVM: An automated end-to-end optimizing compiler for deep learning. In: OSDI (2018)
+7. Dai, X., Yin, H., Jha, N.K.: Nest: A neural network synthesis tool based on a grow-and-prune paradigm. IEEE Transactions on Computers 68(10), 1487-1497 (2019)
+8. Dong, X., Yang, Y.: Network pruning via transformable architecture search. In: Advances in Neural Information Processing Systems. pp. 759-770 (2019)
+9. Freeman, W., Adelson, E.: The design and use of steerable filters. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. vol. 13, pp. 891-906. IEEE (1991)
+10. Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In: International Conference on Learning Representations (ICLR) (2016)
+11. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016)
+12. He, Y., Liu, P., Wang, Z., Hu, Z., Yang, Y.: Filter pruning via geometric median for deep convolutional neural networks acceleration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4340-4349 (2019)
+13. He, Y., Lin, J., Liu, Z., Wang, H., Li, L.J., Han, S.: Amc: Automl for model compression and acceleration on mobile devices. In: European Conference on Computer Vision. pp. 815-832 (2018)
+14. He, Y., Zhang, X., Sun, J.: Channel pruning for accelerating very deep neural networks. In: Computer Vision (ICCV), 2017 IEEE International Conference on. pp. 1398-1406. IEEE (2017)
+15. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
+16. Lin, S., Ji, R., Yan, C., Zhang, B., Cao, L., Ye, Q., Huang, F., Doermann, D.: Towards optimal structured cnn pruning via generative adversarial learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2790-2799 (2019)
+17. Liu, B., Wang, M., Foroosh, H., Tappen, M., Pensky, M.: Sparse convolutional neural networks. In: CVPR. pp. 806-814 (2015)
+18. Liu, N., Ma, X., Xu, Z., Wang, Y., Tang, J., Ye, J.: Autocompress: An automatic dnn structured pruning framework for ultra-high compression rates. In: AAAI. pp. 4876-4883 (2020)
+
+19. Liu, Z., Sun, M., Zhou, T., Huang, G., Darrell, T.: Rethinking the value of network pruning. In: International Conference on Learning Representations (2019)
+20. Ma, X., Guo, F.M., Niu, W., Lin, X., Tang, J., Ma, K., Ren, B., Wang, Y.: Pconv: The missing but desirable sparsity in dnn weight pruning for real-time execution on mobile devices. In: AAAI. pp. 5117-5124 (2020)
+21. Ma, X., Yuan, G., Lin, S., Ding, C., Yu, F., Liu, T., Wen, W., Chen, X., Wang, Y.: Tiny but accurate: A pruned, quantized and optimized memristor crossbar framework for ultra efficient dnn implementation. ASP-DAC (2020)
+22. Molchanov, P., Tyree, S., Karras, T., Aila, T., Kautz, J.: Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440 (2016)
+23. Mukund, S., Ankur, T., Qiqi, Y.: Axiomatic attribution for deep networks. In: 2017 International Conference on Machine Learning (ICML). ACM/IEEE (2017)
+24. Niu, W., Ma, X., Lin, S., Wang, S., Qian, X., Lin, X., Wang, Y., Ren, B.: Patdnn: Achieving real-time dnn execution on mobile devices with pattern-based weight pruning. In: Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems. pp. 907-922 (2020)
+25. Parashar, A., Rhu, M., Mukkara, A., Puglielli, A., Venkatesan, R., Khailany, B., Emer, J., Keckler, S.W., Dally, W.J.: Scnn: An accelerator for compressed-sparse convolutional neural networks. In: ISCA (2017)
+26. Parikh, N., Boyd, S.: Proximal algorithms. Foundations and Trends® in Optimization 1(3), 127-239 (2014)
+27. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: NeurIPS (2019)
+28. Ren, A., Zhang, T., Ye, S., Li, J., Xu, W., Qian, X., Lin, X., Wang, Y.: Admm-nn: An algorithm-hardware co-design framework of dnns using alternating direction methods of multipliers. In: ASPLOS. pp. 925-938 (2019)
+29. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+30. Siyuan, M., Raef, B., Mikhail, B.: The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning. In: 2018 International Conference on Machine Learning (ICML). ACM/IEEE (2018)
+31. Springenberg, J.T., Alexey Dosovitskiy, T.B.a.R.: Striving for simplicity: The all convolutional net. In: ICLR-2015 workshop track (2015)
+32. Wen, W., Wu, C., Wang, Y., Chen, Y., Li, H.: Learning structured sparsity in deep neural networks. In: Advances in neural information processing systems. pp. 2074-2082 (2016)
+33. Xu, M., Zhu, M., Liu, Y., Lin, F.X., Liu, X.: Deepcache: Principled cache for mobile deep vision. In: Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. pp. 129-144. ACM (2018)
+34. Yao, S., Hu, S., Zhao, Y., Zhang, A., Abdelzaher, T.: Deepsense: A unified deep learning framework for time-series mobile sensing data processing. In: Proceedings of the 26th International Conference on World Wide Web (2017)
+35. You, Z., Yan, K., Ye, J., Ma, M., Wang, P.: Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. In: Advances in Neural Information Processing Systems. pp. 2130-2141 (2019)
+36. Zhu, X., Zhou, W., Li, H.: Improving deep neural network sparsity through decorrelation regularization. In: IJCAI (2018)
\ No newline at end of file
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/images.zip b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..141ab97cb71cbb14801bf490ae26695fd5c2cf26
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c3414af6e6d3e80b9a2cf4a70e9e42194d9397995ba355fc8eefd542c2c9bc6
+size 573220
diff --git a/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/layout.json b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bfb0aaaed515b1e0df9f1a63313609faba245e1a
--- /dev/null
+++ b/animageenhancingpatternbasedsparsityforrealtimeinferenceonmobiledevices/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abd09c8d843631d416c0d888dd23a20ef55a71579ed4fb928e477d850f541ed4
+size 448779
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_content_list.json b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b5b80616ce2732d2a98c7ed8faffe1254a648d59
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89a45169fc3b95a2d505c2a3e1f8c57eb77178079805388a17f7feb14190b481
+size 87331
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_model.json b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3829d948bc7bef2a1d962bfb583f432697441cbc
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12179f6f614ace22a35595eb21381179e4c889ece57148fa79b5c9cf9d3a64ae
+size 109293
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_origin.pdf b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b4a7ff9216a07ec51c7deb5594f4f8739f5a2f3e
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/0a2a3acb-0317-4e73-ab16-32101e21b065_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:151d3440fadcceb3068832ca9342602d8efd027101ad54020a5cf5b94b7be22d
+size 1635767
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/full.md b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c92154fb2c1a5bb4a938c8fbe35f46b584122d37
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/full.md
@@ -0,0 +1,350 @@
+# An Inference Algorithm for Multi-Label MRF-MAP Problems with Clique Size 100
+
+Ishant Shanu$^{1}$, Siddhant Bharti$^{1}$, Chetan Arora$^{2}$, and S. N. Maheshwari$^{2}$
+
+$^{1}$ Indraprastha Institute of Information Technology, Delhi, India
+ $^{2}$ Indian Institute of Technology, Delhi, India
+
+Abstract. In this paper, we propose an algorithm for optimal solutions to submodular higher order multi-label MRF-MAP energy functions which can handle practical computer vision problems with up to 16 labels and cliques of size 100. The algorithm uses a transformation which transforms a multi-label problem to a 2-label problem on a much larger clique. Earlier algorithms based on this transformation could not handle problems larger than 16 labels on cliques of size 4. The proposed algorithm optimizes the resultant 2-label problem using the submodular polyhedron based Min Norm Point algorithm. The task is challenging because the state space of the transformed problem has a very large number of invalid states. For polyhedral based algorithms the presence of invalid states poses a challenge as apart from numerical instability, the transformation also increases the dimension of the polyhedral space making the straightforward use of known algorithms impractical. The approach reported in this paper allows us to bypass the large costs associated with invalid configurations, resulting in a stable, practical, optimal and efficient inference algorithm that, in our experiments, gives high quality outputs on problems like pixel-wise object segmentation and stereo matching.
+
+Keywords: Submodular Minimization, Discrete Optimization, Hybrid Methods, MRF-MAP, Image Segmentation.
+
+# 1 Introduction
+
+Many problems in computer vision can be formulated as pixel labeling problems, in which each pixel $p \in \mathcal{P}$ needs to be assigned a label $l_{p} \in \mathcal{L}$ . Finding the joint labeling configuration, $\mathbf{l}_{\mathcal{P}}$ , over all pixels, with maximum posterior probability can then be formulated as a MRF-MAP inference problem [25,48]. The formulation involves solving the following optimization problem: $\mathbf{l}_{\mathcal{P}}^{*} = \arg \min_{\mathbf{l}_{\mathcal{P}} \in \mathcal{L}^{| \mathcal{P}|}} \sum_{\mathfrak{c} \in \mathcal{C}} f_{\mathfrak{c}}(\mathbf{l}_{\mathfrak{c}})$ . Here, $\mathfrak{c}$ , also called a clique, is defined as a set of pixels whose labels are contextually dependent on each other. A labeling configuration on a clique $\mathfrak{c}$ is denoted as $\mathbf{l}_{\mathfrak{c}}$ , $\mathcal{P}$ denotes the set of all pixels and $\mathcal{C}$ denotes the set of all cliques. The order of the MRF-MAP problem is considered as one less than the size of the maximal clique, $k = \max_{\mathfrak{c} \in \mathcal{C}} |\mathfrak{c}|$ . Each term, $f_{\mathfrak{c}}(\mathbf{l}_{\mathfrak{c}})$ , also called the clique potential, measures the cost of the labeling configuration $\mathbf{l}_{\mathfrak{c}}$ of a clique $\mathfrak{c}$ , depending on how consistent the labeling is with respect to the observation and prior knowledge.
+
+Optimal inference, in general, is NP-hard even for first order MRFs. Therefore, researchers have explored approximate solutions to the inference problem for first order [9,28,32,52] as well as higher order MRFs [7,33,50]. Another line of research has been to identify sub-classes of clique potentials which model vision problems well and for which optimal inference algorithms can be devised with polynomial time complexity. MRF-MAP problems with submodular clique potentials form one such popular sub-class [2,11,32], and are the focus of this paper.
+
+Use of higher-order cliques in an MRF-MAP problem is important because it has been established that they can capture more complex dependencies between pixels thereby significantly improving the quality of a labeling solution [21,26,33,40,41,46,51,53]. Our experiments also show improvement over state of the art techniques based on the deep neural networks. Note that MRF-MAP formulation allows one to use the output of deep neural networks as the likelihood term in the objective function. Therefore, performing posterior inference, even using the manually defined priors, helps exploit the problem structure, and improves performance further.
+
+Inference algorithms for higher-order MRF-MAP with general clique potentials output approximate solutions, and are generally based on either message passing/dual decomposition [18,31,33,37,38,47,49] or reduction to first-order potentials frameworks [10,12,15,21,19,24,32,39,41]. The focus of this paper is on developing optimal inference algorithm for multi-label, submodular, higher-order MRF-MAP problems.
+
+One approach to handling multi-label potentials is to use encodings [2,20,53] to convert a multi-label problem to an equivalent 2-label problem while preserving submodularity. However, there are some practical challenges. For a multi-label problem of order $k$ with $m$ labels, the encoding blows the problem up to cliques of size $mk$, exploding the size of the solution space to $2^{mk}$ [2]. Note that only $m^k$ of the $2^{mk}$ binary configurations resulting from the encoding correspond to the original $m^k$ labeling configurations; the rest are invalid in the problem context. Note that if the potentials for invalid states are kept very large and those for valid states are kept the same as in the original multi-label version, the minimum is always among the valid states.
+
+The use of Block Co-ordinate Descent (BCD) based techniques with the Min Norm Point polyhedral algorithm [45,46] is also possible in principle for such transformed problems. But the encoding based transformations pose new challenges. As explained in the next section, these techniques maintain the current feasible base vector as a convex combination of a set of extreme bases. For the 2-label problems arising out of encoding multi-label versions, some of the values in the extreme bases can correspond to the energy of invalid states. Giving a large, effectively infinite, value to the invalid states creates numerical challenges in maintaining/updating these convex combinations. Also, the encoding increases the size of the cliques by a factor of $m$, which increases the dimension of the polyhedral space to an extent that cannot be handled by the algorithm in [46].
+
+The main contribution of this paper is to show that there is enough structure in the submodular polyhedron to handle invalid extreme bases arising out of the converted 2-label problems efficiently. The proposed algorithm raises the bar significantly: using it, we can handle multi-label MRF-MAP problems with 16 labels and clique sizes up to 100. In comparison, the current state of the art [2] can only work with cliques of size up to 4.
+
+At this stage we would like to contrast our mapping technique with that of [29], which exploits the linear relationship between a tree and an order on labels to map multi-label submodular functions to the more general class of tree based $L^{\natural}$-convex functions. However, these algorithms have high degree polynomial time complexity (based on [35,22,36]), limiting them to be of theoretical interest only. Our focus, on the other hand, is to extend the frontiers of practical optimal algorithms.
+
+Finally, we would like to point out that when the case for higher-order potentials was first made, the then existing algorithms could only work with small cliques. Solutions were approximate and the potentials were often decomposable [26,33,41]. It is only with [45] and [46] that experiments could be done with cliques of size 100 or larger. Experiments reported in [46] established that the quality of object segmentation improves with larger clique sizes. We extend that exercise further here by focusing on the quality of multi-object segmentation as a function of clique size.
+
+# 2 Background
+
+We briefly describe the basic terminology and results from the submodular function minimization (SFM) literature required to follow the discussion in this paper. We direct the reader to [44] for more details. The objective of an SFM problem is to find a minimizer set, $S^{*} = \arg\min_{S\subseteq \mathcal{V}}f(S)$, of a submodular function $f$, where $\mathcal{V}$ is the set of all the elements. W.l.o.g. we assume $f(\phi) = 0$. We associate two polyhedra in $\mathbb{R}^{|\mathcal{V}|}$ with $f$, the submodular polyhedron, $P(f)$, and the base polyhedron, $B(f)$, such that
+
+$$
+P(f) = \{x \mid x \in \mathbb{R}^{|\mathcal{V}|}, \forall U \subseteq \mathcal{V}: x(U) \leq f(U)\}, \text{ and}
+$$
+
+$$
+B(f) = \{x \mid x \in P(f),\ x(\mathcal{V}) = f(\mathcal{V})\},
+$$
+
+where $x(v)$ denotes the element at index $v$ in the vector $x$, and $x(U) = \sum_{v \in U} x(v)$. A vector in the base polyhedron $B(f)$ is called a base, and an extreme point of $B(f)$ is called an extreme base. Edmonds' greedy algorithm gives a procedure to create an extreme base, $b^{\prec}$, given a total order $\prec$ of the elements of $\mathcal{V}$ such that $\prec : v_1 \prec \ldots \prec v_n$, where $n = |\mathcal{V}|$. Denoting the set of the first $k$ elements in the ordered set $\{v_1, \ldots, v_k, \ldots, v_n\}$ by $k_{\prec}$, the algorithm initializes the first element as $b^{\prec}(1) = f(\{v_1\})$ and the remaining elements as $b^{\prec}(k) = f(k_{\prec}) - f((k - 1)_{\prec})$. There is a one-to-one mapping between an ordering of the elements and an extreme base. The Min Max Theorem states that $\max \{x^-(\mathcal{V}) \mid x \in B(f)\} = \min \{f(U) \mid U \subseteq \mathcal{V}\}$. Here, $x^-(\mathcal{V})$ denotes the sum of the negative elements of $x$.
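Edmonds' greedy construction can be sketched directly. The coverage-style set function `f` below is an illustrative assumption (any submodular function would do); the sketch builds an extreme base from an ordering and verifies that it lies in $B(f)$:

```python
from itertools import combinations

# Edmonds' greedy algorithm: given a total order of V, build the extreme
# base b with b(v_k) = f(first k elements) - f(first k-1 elements).
# The coverage function f below is an illustrative toy, not from the paper.

def f(S: frozenset) -> int:
    cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
    covered = set()
    for v in S:
        covered |= cover[v]
    return len(covered)   # submodular: size of the covered union

def extreme_base(order):
    b, prefix, prev = {}, set(), f(frozenset())
    for v in order:
        prefix.add(v)
        cur = f(frozenset(prefix))
        b[v], prev = cur - prev, cur   # marginal gain of v given the prefix
    return b

b = extreme_base([1, 2, 3])
assert sum(b.values()) == f(frozenset({1, 2, 3}))   # x(V) = f(V): a base
V = [1, 2, 3]
for r in range(1, len(V) + 1):
    for U in combinations(V, r):
        assert sum(b[v] for v in U) <= f(frozenset(U))   # x(U) <= f(U)
print(b)   # → {1: 2, 2: 1, 3: 0}
```

Each ordering of $\mathcal{V}$ yields one extreme base, which is the one-to-one mapping mentioned above.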
+
+The min-norm equivalence result shows that $\arg \max_{x\in B(f)}x^{-}(\mathcal{V}) = \arg \min_{x\in B(f)}\| x\| _2$ . Fujishige and Isotani's [14] Min Norm Point (MNP) algorithm uses the equivalence and solves the problem using Wolfe's algorithm [13]. The algorithm has been shown empirically to be the fastest among all base polyhedron based algorithms [23,46]. The algorithm maintains a set of extreme bases, $\{b^{\prec_i}\}$ , and a minimum norm base vector, $x$ , in their convex hull, s.t.:
+
+$$
+x = \sum_{i} \lambda_{i} b^{\prec_{i}}, \quad \lambda_{i} \geq 0, \text{ and } \sum_{i} \lambda_{i} = 1. \tag{1}
+$$
+
+At a high level, an iteration of the MNP/Wolfe's algorithm comprises two stages. In the first stage, given the current base vector, $x$, an extreme base, $q$, that minimizes $x^{\mathsf{T}}q$ is added to the current set. The algorithm terminates in case $\| x\|^2 = x^{\mathsf{T}}q$. Otherwise it finds a new $x$, with smaller norm, in the convex hull of the updated set of extreme bases.
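The first stage's linear minimization over $B(f)$ reduces to Edmonds' greedy algorithm run on the ordering that sorts the elements of $x$ in increasing order; a sketch under the same kind of toy submodular function (an assumption, not the paper's code):

```python
# Linear minimization over B(f): the extreme base q minimizing x^T q is
# produced by Edmonds' greedy algorithm on the ordering that sorts x in
# increasing order. The toy set function f is an illustrative assumption.

def f(S: frozenset) -> int:
    cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
    covered = set()
    for v in S:
        covered |= cover[v]
    return len(covered)

def greedy_extreme_base(order):
    b, prefix, prev = {}, set(), f(frozenset())
    for v in order:
        prefix.add(v)
        cur = f(frozenset(prefix))
        b[v], prev = cur - prev, cur
    return b

def lin_min_oracle(x: dict) -> dict:
    return greedy_extreme_base(sorted(x, key=x.get))   # increasing x first

x = {1: 0.5, 2: 1.5, 3: 1.0}
q = lin_min_oracle(x)
dot = sum(x[v] * q[v] for v in x)
# MNP terminates when ||x||^2 equals x^T q; here they differ, so the
# algorithm would add q to the active set and re-minimize the norm.
print(q, dot)
```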
+
+The MRF-MAP inference problem can be seen as minimizing a sum of submodular functions [30,46]. Shanu et al. [46] have suggested a block coordinate descent framework to implement the Min Norm Point algorithm in the sum of submodular functions environment when cliques are large. A very broad overview of that scheme is as follows.
+
+With each $f_{\mathbb{C}}$ , the submodular clique potential of clique $\mathbb{C}$ , one can associate a base polyhedron such that:
+
+$$
+B(f_{\mathbb{C}}) := \left\{ y_{\mathbb{C}} \in \mathbb{R}^{|\mathbb{C}|} \mid y_{\mathbb{C}}(U) \leq f_{\mathbb{C}}(U), \forall U \subseteq \mathbb{C};\ y_{\mathbb{C}}(\mathbb{C}) = f_{\mathbb{C}}(\mathbb{C}) \right\}. \tag{2}
+$$
+
+The following results [46] relate a base vector $x$ of function $f$ , and a set of base vectors $y_{\mathfrak{c}}$ of a $f_{\mathfrak{c}}$ :
+
+Lemma 1. Let $x(S) = \sum_{\mathbb{C}} y_{\mathbb{C}}(\mathbb{C} \cap S)$ where each $y_{\mathbb{C}}$ belongs to base polyhedra $B(f_{\mathbb{C}})$ . Then the vector $x$ belongs to base polyhedron $B(f)$ .
+
+Lemma 2. Let $x$ be a vector belonging to the base polyhedron $B(f)$. Then, $x$ can be expressed as the sum: $x(S) = \sum_{\mathbb{C}} y_{\mathbb{C}}(S \cap \mathbb{C})$, where each $y_{\mathbb{C}}$ belongs to the base polyhedron $B(f_{\mathbb{C}})$, i.e., $y_{\mathbb{C}} \in B(f_{\mathbb{C}})\ \forall \mathbb{C}$.
+
+The block coordinate descent approach based on these results requires each block to represent a base vector $y_{\mathfrak{c}}$ as defined above (cf. [46]). Note that a base vector $y_{\mathfrak{c}}$ is of dimension $|\mathfrak{c}|$ (clique size), whereas a base $x$ is of dimension $|\mathcal{V}|$ (number of pixels in an image). Since $|\mathfrak{c}| \ll |\mathcal{V}|$, minimizing the norm of $y_{\mathfrak{c}}$ over its base polyhedron $B(f_{\mathfrak{c}})$ is much more efficient than minimizing the norm of $x$ by directly applying the MNP algorithm. However, for the reasons already discussed in the Introduction, the algorithm based on the above fails to converge on multi-label submodular MRF-MAP problems transformed to 2-label MRF-MAP problems using an extension of the encoding given in [2] that preserves submodularity.
+
+We now show how these problems can be overcome by performing block coordinate descent over two blocks: one block has a convex combination of only the extreme bases corresponding to valid states, and the other has the convex combination of extreme bases corresponding to the invalid states. The block corresponding to valid states is small enough for the traditional MNP algorithm to output optimal solutions. For the larger block corresponding to the invalid states, we develop a flow based algorithm to find a vector with minimum $\ell_2$ norm. This results in an algorithm which is numerically stable and practically efficient.
+
+# 3 Properties of the Multi-label to 2-Label Transformation
+
+Let $F$ be a multi-label submodular function defined over the set of $n$ pixels $\mathcal{P}$. Let $X$ and $Y$ stand for $n$-tuples of labels over the pixels. Let $\vee$ and $\wedge$ be the max and min operators, and let $(X \vee Y), (X \wedge Y)$ denote the $n$-tuples resulting from element-wise application of the max and min operators over the $n$-tuples $X$ and $Y$. $F$ is called submodular if:
+
+$$
+F (X) + F (Y) \geq F (X \vee Y) + F (X \wedge Y). \tag {3}
+$$
+
+We now summarize the transformation to convert a multi-label to a 2-label problem as suggested in [2,20]. Consider an unordered set of pixels $\mathcal{P} = \{p_1,\dots ,p_i,\dots ,p_n\}$ , and an ordered set of labels $\mathcal{L} = \{1,\dots ,m\}$ . To save the notation clutter, whenever obvious, we denote a pixel simply using variables $p,q$ without the subscript index.
+
+Definition 1 (Binary Encoding). The encoding $\mathcal{E}:\mathcal{L}\to \mathbb{B}^m$ maps a label $i\in \mathcal{L}$ to an $m$-dimensional binary vector such that its first $m - i$ elements are 0 and the remaining elements are 1.
+
+For example, $\mathcal{E}(1) = (0,\dots ,0,0,1)$, and $\mathcal{E}(2) = (0,\dots ,0,1,1)$. Let us denote the encoded label vector corresponding to a pixel $p_i$ as $\gamma_{i} = (p_{i}^{1},\ldots ,p_{i}^{m})$, $p_{i}^{j}\in \{0,1\}$. We denote by $\Gamma \in \mathbb{B}^{mn}$ the vector obtained by concatenating all the encoded vectors: $\Gamma = (\gamma_1,\dots ,\gamma_i,\dots ,\gamma_n)$. The vector $\Gamma$ represents the encoding of a labeling configuration over all the pixels. We also define a universal set containing all elements of $\Gamma$: $\mathcal{V}=\{p_1^1,\dots,p_1^m,\dots,p_n^1,\dots,p_n^m\}$.
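A minimal sketch of the encoding of Definition 1 and the concatenated vector $\Gamma$ (function names here are hypothetical illustrations):

```python
# The binary encoding of Definition 1: label i in {1..m} maps to an m-bit
# vector whose first m-i entries are 0 and last i entries are 1.

def encode(label: int, m: int) -> tuple:
    assert 1 <= label <= m
    return (0,) * (m - label) + (1,) * label

def decode(bits: tuple) -> int:
    return sum(bits)   # a valid code is monotone, so its popcount is the label

def gamma(labels, m: int) -> tuple:
    """Concatenated encoding of a labeling configuration over all pixels."""
    out = []
    for l in labels:
        out.extend(encode(l, m))
    return tuple(out)

m = 4
assert encode(1, m) == (0, 0, 0, 1)
assert encode(2, m) == (0, 0, 1, 1)
assert all(decode(encode(l, m)) == l for l in range(1, m + 1))
print(gamma([2, 4, 1], m))   # → (0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1)
```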
+
+Definition 2 (Universal Ordering). Assuming an arbitrary ordering among the pixels, the universal ordering defines a total ordering of the elements $p_i^j$ , $i \in \mathcal{Z}_{1:n}, j \in \mathcal{Z}_{1:m}$ :
+
+$$
+\prec_{0}: p_{1}^{1} \prec \dots \prec p_{1}^{m} \prec \dots \prec p_{n}^{1} \prec \dots \prec p_{n}^{m}.
+$$
+
+We denote by $S \subseteq \mathcal{V}$, called a state, the set of all elements $p_i^j$ of $\Gamma$ labeled as 1. Note that there are $2^{mn}$ possible states, however only $m^n$ of them correspond to valid $\Gamma$ vectors obtained by encoding labeling configurations over the pixels. We call such states valid states. If the label of a pixel $p_i$ is denoted as $l_i \in \mathcal{L}$, a valid state may be represented as: $S = \{\mathcal{E}(l_1), \dots, \mathcal{E}(l_i), \dots, \mathcal{E}(l_n)\}$. Similarly, $S_p = \{\mathcal{E}(l_p)\}$ includes the elements corresponding to pixel $p$.
+
+Definition 3 (Valid Ordering/Extreme Base). An ordering $\prec$ is called a valid ordering, if for any $p_i^j, p_i^k \in \mathcal{V}$ , $j > k \Rightarrow p_i^j \prec p_i^k$ . An extreme base $b^{\prec}$ is called a valid extreme base, if it corresponds to a valid ordering.
+
+The states, orderings or extreme bases which are not valid are called invalid. We denote the set of all valid states by $\mathcal{S}$.
+
+Definition 4 (Covering State, Minimal Covering State). For an arbitrary state, $S$, a valid state, $\hat{S} \in \mathcal{S}$, is called covering if $S \subseteq \hat{S}$. There may be multiple covering states corresponding to a given $S$. The one with the smallest cardinality among them is referred to as the minimal covering state, and is denoted by $\overline{S}$. There is a unique minimal covering state corresponding to any $S$. For a valid state, $S = \overline{S}$.
+
+We are now ready to show that the above transformation can be used to define a binary set function which is not only submodular but is also identical to the multi-label submodular function on valid states. We encode the multi-label function to a submodular pseudo-Boolean function $f$ defined over set $\mathcal{V}$ of size $mn$ as follows:
+
+Definition 5 (The Extended Binary Set Function).
+
+$$
+f(S) = \left\{ \begin{array}{ll} F(\ldots, l_{i}, \ldots), & \text{if } S = \{\ldots, \mathcal{E}(l_{i}), \ldots\} \\ f(\overline{S}) + (|\overline{S}| - |S|) L, & \text{otherwise} \end{array} \right.
+$$
+
+Here $l_i \in \mathcal{L}$ is a label of pixel $p_i$ , and $L \gg M = [\max_{S \in \mathcal{S}} f(S) - \min_{S \in \mathcal{S}} f(S)]$ .
+
+It is easy to see that $f(S)$ can also be defined as follows:
+
+Definition 6 (The Extended Binary Set Function: Alternate Definition).
+
+$$
+f(S) = f(\overline{S}) + \sum_{p \in \mathcal{P}} \left(|\overline{S}_{p}| - |S_{p}|\right) L, \tag{4}
+$$
+
+where $\overline{S}_p\subset \overline{S}$ , and $S_{p}\subset S$ are the subsets containing elements corresponding to pixel $p$ in $\overline{S}$ and $S$ respectively.
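Definition 6 can be evaluated directly once the minimal covering state is known: a pixel's partial code $S_p$ is covered by $\mathcal{E}(l)$ (whose 1-entries occupy positions $m-l+1,\dots,m$) iff $l \geq m - \min(S_p) + 1$, so the minimal covering label is $l = m - \min(S_p) + 1$ (or 1 when $S_p$ is empty), and $|\overline{S}_p| = l$ since $\mathcal{E}(l)$ has exactly $l$ ones. A sketch with an illustrative toy potential `F` and a finite stand-in for $L$ (both assumptions):

```python
# Evaluating the extended binary set function (Definition 6) via the
# minimal covering state. F (the multi-label potential) and the finite
# value of L below are illustrative assumptions; conceptually L >> M.

L = 10 ** 6

def minimal_cover_label(S_p: set, m: int) -> int:
    """Smallest label l whose code E(l) contains the partial code S_p."""
    return m - min(S_p) + 1 if S_p else 1

def f(S: dict, m: int, F) -> int:
    """S maps each pixel to the set of positions j with p^j = 1."""
    labels = {p: minimal_cover_label(S_p, m) for p, S_p in S.items()}
    # |S_bar_p| equals the covering label, since E(l) has exactly l ones
    penalty = sum(labels[p] - len(S_p) for p, S_p in S.items()) * L
    return F(labels) + penalty

F = lambda labels: sum(labels.values())    # toy multi-label potential
m = 3
valid = {"p1": {2, 3}, "p2": {3}}          # codes E(2) and E(1): valid
assert f(valid, m, F) == F({"p1": 2, "p2": 1})   # no penalty on valid states
invalid = {"p1": {2}, "p2": {3}}           # {2} is not a suffix: invalid
print(f(invalid, m, F))                    # → 1000003 (finite part + one L)
```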
+
+Theorem 1. The extended binary set function $f$ , as given by Definition 5, is submodular, and $\min f(\cdot) = \min F(\cdot)$ .
+
+To preserve the flow of the discussion, and due to length restrictions, the detailed proof of this theorem, as well as of those that follow, is given in the supplementary material.
+
+The reader may, at this stage, wonder whether it is at all possible to restrict to working only with the valid states in the submodular model, perhaps using a one-hot encoding as in [53]. The answer is no, since in a one-hot encoding the set of all valid states is not a ring family [34], and hence the encoded function is not submodular.
+
+Note that in the proposed encoding, any value of $L \gg M$ keeps the function $f$ submodular. However, as we show later, choosing such a large value of $L$ makes the contribution of some extreme bases very small, causing precision issues in the computation. We also show that including those extreme bases with very small contributions is extremely important for achieving optimal inference. A major contribution of this paper is in showing that one can perform efficient inference bypassing $L$ altogether. Therefore, the use of $L$ is merely conceptual in our framework. The actual value of $L$ has no impact on the algorithm's performance.
+
+# 4 Representing Invalid Extreme Bases
+
+In the discussion that follows, we refer to any scalar as small or finite if its absolute value is $\ll L$, and large or infinite if its absolute value is $\propto L$. We write Eq. (1) as:
+
+$$
+x = x_{v} + x_{i} = \sum_{b^{\prec_{j}} \in R} \lambda_{j} b^{\prec_{j}} + \sum_{b^{\prec_{i}} \in Q} \lambda_{i} b^{\prec_{i}}. \tag{5}
+$$
+
+Here, $R$ and $Q$ are the sets of valid and invalid extreme bases, and $x_v$ and $x_i$ their respective contributions to $x$. It is easy to see that all the elements of $x_i$ must be much smaller than $L$. We first focus on the relationship between $\lambda$ and $L$ in the block of invalid extreme bases.
+
+Lemma 3. For any element, $e$, of an invalid extreme base, $b^{\prec}$: $b^{\prec}(e) = a_e L + b_e$, where $|a_e|, |b_e| \ll L$ and $a_e$ is an integer.
+
+Lemma 4. Consider two base vectors $x_1$ and $x_2$ such that $\|x_1\|^2, \|x_2\|^2 < |\mathcal{V}|M^2$. If $x_2 = (1 - \lambda)x_1 + \lambda b^{\prec}$ and $b^{\prec}$ is an invalid extreme base, then $\lambda \leq |\mathcal{V}|\frac{M}{L}$.
+
+Conceptually, Lemma 3 shows that all elements of an invalid extreme base are either small or proportional to $L$ (and not proportional to, say, $L^2$ or higher powers of $L$). Lemma 4 shows that, since $|\mathcal{V}|$ and $M$ are effectively constants, $\lambda$, the multiplicative factor associated with the contribution of an invalid extreme base, is proportional to $1/L$. Therefore, for $L \approx \infty$, the value of $\lambda \approx 0$. However, it is important to note that the value of $\lambda b^{\prec}(e)$ is always finite. It is easy to see that whenever $a_e = 0$, $\lambda b^{\prec}(e) \approx 0$, and when $a_e \neq 0$, the $L$ present in $b^{\prec}(e)$ and the $1/L$ present in $\lambda$ cancel each other, leading to a finite contribution. This argument motivates our overall approach in this paper: for a numerically stable norm minimization algorithm, the focus should be on manipulating the finite-valued product $\lambda b^{\prec}$, and not the individual $\lambda$ and $b^{\prec}(e)$. We show in the following sections that this is indeed possible.
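As a quick numerical sanity check of this cancellation, the following sketch (our own illustration; the constants $a_e$, $b_e$, and the factor $c$ in $\lambda = c/L$ are made up) evaluates $\lambda\, b^{\prec}(e)$ for increasingly large $L$:

```python
# Illustrate Lemmas 3 and 4: b(e) = a*L + b_small, with lambda = c/L,
# yields a product that stays finite as L grows.
def contribution(L, a=3.0, b_small=7.0, c=5.0):
    lam = c / L                # Lemma 4: lambda is proportional to 1/L
    b_e = a * L + b_small      # Lemma 3: element of an invalid extreme base
    return lam * b_e           # equals a*c + c*b_small/L, which tends to a*c

for L in (1e6, 1e9, 1e12):
    print(L, contribution(L))  # approaches a*c = 15 as L grows
```

When $a_e = 0$ the product shrinks like $1/L$, matching the claim that such elements contribute nothing in the limit.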
+
+We start by showing that it is possible to find a small set of what we call elementary invalid extreme bases whose linear combination contains as a subset the space of vectors $x_{i}$ as given in Eq. (5). Crucial to doing this is the notion of canonical orderings.
+
+# 4.1 Canonical Ordering and Its Properties
+
+In an arbitrary, valid or invalid, ordering $\prec$, consider two adjacent elements $u$ and $v$ such that $u \prec v$. We term the local swapping of order between $u$ and $v$ in $\prec$ an exchange operation. The operation results in a new ordering $\prec_{\mathrm{new}}$ such that $u$ and $v$ are still adjacent but $v \prec_{\mathrm{new}} u$.
+
+
+Fig. 1: Top: An ordering of elements in $\mathcal{P} = \{p, q, r\}$, for a label set of size 3. Bottom: Corresponding canonical ordering.
+
+Consider a strategy in which, starting with $\prec$, we carry out exchange operations till all the elements corresponding to a pixel come together, and repeat this for all pixels. Note that we do not change the relative ordering between elements corresponding to the same pixel. We call the resultant ordering a canonical form of the original ordering $\prec$ and denote it by $\bar{\prec}$. The corresponding extreme base is called a canonical extreme base. We emphasize that there can be multiple canonical forms of an ordering $\prec$. Figure 1 contains an example of an arbitrary ordering and one of its canonical orderings.
+
+Note that a valid (invalid) ordering leads to a valid (invalid) canonical ordering. For any $p^j$ and $p^k$ in a valid canonical ordering, if $j = k + 1$, then $p^j, p^k$ are adjacent in the ordering and $p^k \prec p^j$. Further, a canonical ordering is agnostic to any relative order among pixels. For example, for pixels $p$ and $q$, a canonical ordering only requires that all elements of $p$ (or $q$) are contiguous. An ordering in which elements corresponding to $p$ come before those of $q$ defines a different canonical ordering from one in which the relative ordering of the elements of $p$ and $q$ is reversed. In general, a canonical ordering $\bar{\prec}$ corresponding to $\prec$ can be any one of the possible canonical orderings.
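To make the construction concrete, here is a small Python sketch (our own illustration, not the paper's code) that produces one canonical form of an ordering by grouping the elements of each pixel together, preserving the per-pixel relative order and the order of first appearance of the pixels:

```python
def canonical(ordering):
    """One canonical form of an ordering, given as a list of (pixel, level) pairs.

    Elements of the same pixel become contiguous while their relative order is
    kept, which is exactly what repeated adjacent exchange operations achieve.
    """
    groups = {}                      # dict preserves first-appearance order (Py 3.7+)
    for pixel, level in ordering:
        groups.setdefault(pixel, []).append((pixel, level))
    return [elem for group in groups.values() for elem in group]

# An arbitrary (interleaved) ordering over pixels p, q, r with 3 labels:
order = [("p", 1), ("q", 1), ("r", 1), ("p", 2), ("q", 2), ("r", 2), ("p", 3)]
print(canonical(order))
# [('p', 1), ('p', 2), ('p', 3), ('q', 1), ('q', 2), ('r', 1), ('r', 2)]
```

Choosing a different relative order of the pixel groups would yield another, equally acceptable, canonical form.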
+
+Lemma 5. Let $\prec$ be an invalid ordering and $\bar{\prec}$ be its canonical ordering. Then, $b^{\prec}(e) - b^{\bar{\prec}}(e) \ll L, \forall e \in \mathcal{V}$.
+
+The above result indicates that in changing an invalid extreme base to a canonical one, the change in the value of any element of the extreme base is much less than $L$. Therefore, due to Lemma 4, one can conclude that the contribution of an invalid extreme base and that of its canonical extreme base in a base vector are effectively the same.
+
+Lemma 6. For a canonical invalid ordering $\bar{\prec}$, let $p^i$ and $p^j$ be two adjacent elements corresponding to a pixel $p$, s.t. $p^i \bar{\prec} p^j$. Let $\bar{\prec}_p^{i,j}$ be the ordering obtained by swapping $p^i$ and $p^j$. Then: $b^{\bar{\prec}_p^{i,j}} - b^{\bar{\prec}} = (\chi_p^j - \chi_p^i)(aL + b)$, where $\chi_p^i$ is an indicator vector for the element $p^i$, and $a, b \ll L$.
+
+Lemma 6 relates the two extreme bases when one pair of their elements is swapped. It is useful to note that in a valid extreme base all elements have small values. With each swap in an invalid canonical ordering we either move the canonical ordering towards validity or away from it. In each swap the change in the value of an element is proportional to $L$ (positive or negative). Since the conversion of an invalid canonical ordering to a valid one may involve swaps between a number of elements, the extreme base corresponding to the invalid ordering may contain multiple elements with values proportional to $L$. The special cases are the ones in which only one swap has been done. In these cases there are only two elements with values proportional to $L$ (one positive and one negative). We propose to use such extreme bases as the basis to represent canonical invalid extreme bases. In the next section we show that this is indeed possible.
+
+# 4.2 Elementary Invalid Extreme Base
+
+Definition 7 (Elementary Invalid Extreme Base). The ordering obtained by swapping two elements $p^j$ and $p^{j+1}$, corresponding to a pixel $p$, in a canonical valid ordering is called an elementary invalid ordering. Its corresponding extreme base is called an elementary invalid extreme base, and is denoted as $b^{\tilde{\prec}_p^j}$.
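A tiny sketch of this single-swap construction (our own illustration, reusing the list-of-pairs representation of orderings from above):

```python
def elementary_invalid(canonical_valid, p, j):
    """Swap adjacent elements p^j and p^{j+1} of pixel p in a canonical
    valid ordering, producing an elementary invalid ordering (Definition 7).
    """
    order = list(canonical_valid)
    i = order.index((p, j))
    assert order[i + 1] == (p, j + 1), "p^j and p^{j+1} must be adjacent"
    order[i], order[i + 1] = order[i + 1], order[i]
    return order

valid = [("p", 1), ("p", 2), ("p", 3), ("q", 1), ("q", 2), ("q", 3)]
print(elementary_invalid(valid, "p", 2))
# [('p', 1), ('p', 3), ('p', 2), ('q', 1), ('q', 2), ('q', 3)]
```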
+
+Lemma 7. Consider an elementary invalid extreme base $b^{\tilde{\prec}_p^i}$, obtained by swapping two adjacent elements $(p^{i+1}, p^i)$ in the universal ordering, $\prec_0$ (Def. 2). Then: $b^{\tilde{\prec}_p^i} - b^{\prec_0} = (\chi_p^i - \chi_p^{i+1})(L + b)$, where $b^{\prec_0}$ is the valid extreme base corresponding to $\prec_0$.
+
+Lemma 8. An invalid canonical extreme base, $b^{\bar{\prec}}$, can be represented as a linear combination of elementary invalid extreme base vectors such that: $b^{\bar{\prec}} = \sum_{p\in \mathcal{P}}\sum_{i = 1}^{m - 1}\alpha_p^i b^{\tilde{\prec}_p^i} + \Lambda$, where $0 < \alpha_p^i \ll L$, and $\Lambda$ is a vector with all its elements much smaller than $L$.
+
+Due to Lemma 5, the above result is also true for representing the invalid extreme bases (and not only the canonical ones), with a different $\Lambda$ . Lemma 7 allows us to further simplify the result of Lemma 8 to the following:
+
+Lemma 9 (Invalid Extreme Base Representation). An invalid extreme base can be represented as $b^{\prec} = \sum_{p\in \mathcal{P}}\sum_{i = 1}^{m - 1}\alpha_p^i L(\chi_p^i - \chi_p^{i + 1}) + \Lambda$, where $\chi_p^i$ is an indicator vector corresponding to element $p^i$, $0 < \alpha_p^i \ll L$, and $\Lambda$ is some vector all of whose elements are $\ll L$.
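The basis vectors $\chi_p^i - \chi_p^{i+1}$ telescope, so within a single pixel any contribution whose $m$ entries sum to zero can be written in this form with coefficients given by prefix sums. A small illustrative check (our own, with the $L$ factor absorbed into the coefficients):

```python
def telescoping_coeffs(v):
    """Coefficients c_k with sum_k c_k * (chi^k - chi^{k+1}) == v.

    Requires sum(v) == 0; c_k is the prefix sum v_1 + ... + v_k.
    """
    assert abs(sum(v)) < 1e-9, "per-pixel contribution must sum to zero"
    coeffs, running = [], 0.0
    for x in v[:-1]:
        running += x
        coeffs.append(running)
    return coeffs

def reconstruct(coeffs):
    """Rebuild the per-pixel vector from the telescoping coefficients."""
    v = [0.0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):   # basis vector chi^k - chi^{k+1} (0-indexed)
        v[k] += c
        v[k + 1] -= c
    return v

v = [2.0, -1.0, -1.0]
c = telescoping_coeffs(v)
print(c, reconstruct(c))   # [2.0, 1.0] [2.0, -1.0, -1.0]
```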
+
+Recall from Eq. (5): $x = x_{v} + x_{i}$ , where $x_{v} = \sum_{b^{\prec_{j}} \in R} \lambda_{j} b^{\prec_{j}}$ , and $x_{i} = \sum_{b^{\prec_{i}} \in Q} \lambda_{i} b^{\prec_{i}}$ . Using Lemma 9 to replace the second term, and noting that $L \approx \infty \Rightarrow \lambda_{i} \approx 0$ , and $\sum \lambda_{j} \approx 1$ , one observes that the term $\sum_{b^{\prec_{i}} \in Q} \lambda_{i} \Lambda_{i}$ in the expansion can be made smaller than the precision constant by increasing the value of $L$ ( $\lambda < |\mathcal{V}| M / L$ by Lemma 4) and can be dropped. As one of the final theoretical results of this paper, we can show the following:
+
+Theorem 2 (Main Result).
+
+$$
+\sum_{b^{\prec_i} \in Q} \lambda_i b^{\prec_i} = \sum_{p \in \mathcal{P}} \sum_{k = 1}^{m - 1} \beta_p^k L \left(\chi_p^k - \chi_p^{k + 1}\right), \tag{6}
+$$
+
+where $\lambda_i \geq 0$ and $\beta_p^k = \sum_{b^{\prec_i}\in Q}\alpha_p^k \lambda_i$.
+
+Note that the above result incorporates all the invalid extreme bases, not merely the ones involved in the representation of the base vector $x$ in any iteration of MNP. Using the result in Eq. (5), we get: $\| x\|^2 = \left\| \sum_{b^{\prec_j}\in R}\lambda_j b^{\prec_j} + \sum_{p\in \mathcal{P}}\sum_{k = 1}^{m - 1}\beta_p^k L(\chi_p^k -\chi_p^{k + 1})\right\|^2$.
+
+# 5 The Multi-label Hybrid Algorithm
+
+In this section we give the algorithm for minimizing the norm of the base vector corresponding to a single clique in the original MRF-MAP problem, where the pseudo-Boolean function is generated from encoding the multi-label function. For solving the overall MRF-MAP problem with multiple cliques, the proposed algorithm can be used in the inner loop of the BCD strategy as suggested in [46].
+
+Theorem 2 opens up the possibility of minimizing $\|x\|^2$ for a single clique using the BCD strategy. We have two blocks. The first block, called the valid block, is a convex combination of valid extreme bases $b^{\prec_j}$; the standard MNP algorithm can be used to optimize this block. The other block, called the invalid block, corresponds to the sum of the $n(m-1)$ terms of the form $\beta_p^k L(\chi_p^k - \chi_p^{k+1})$, representing the invalid extreme bases. For minimizing the norm of the overall base vector using the invalid block, we hold the contribution from the valid block, $x_v$, constant$^4$. Each vector $\beta_p^k L(\chi_p^k -\chi_p^{k + 1})$ may be looked upon as capturing the increase/decrease of $\beta_p^k$ due to the exchange operation between the two adjacent elements which define an elementary extreme base. This exchange operation can be viewed as a flow of $\beta_p^k L$ from the element $p^{k + 1}$ to $p^k$. We model the optimization problem for the invalid block using a flow graph whose nodes consist of $\{p^k\mid p\in \mathcal{P},1\leq k\leq m - 1\} \cup \{s,t\}$. We add two types of edges:
+
+
+Fig.2: Flow graph corresponding to the exchange operations for optimizing the block containing invalid extreme bases.
+
+- Type 1: If $x_v(p^k)$, corresponding to the valid block contribution, is $>0$, then we add a directed edge $s\rightarrow p^k$; else we add the edge $p^k\to t$. In either case the capacity is $|x_v(p^k)|$.
+- Type 2: Directed edges from $p^{k+1}$ to $p^k$, $1 \leq k \leq (m - 1)$, with capacity $|\mathcal{V}|M$, which is at least as large as any $\beta_p^k L$ and much larger than any permissible value of $x_v(p^k)$. Thus, any feasible flow augmentation in a path from $s$ to $t$ can saturate only the first or the last edge in the augmenting path (i.e. the edge emanating from $s$ or the edge incident at $t$ in the path).
+
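Because the type 2 capacities never bind, reaching the max flow state within one pixel's chain amounts to letting positive excess at a higher-indexed node cancel negative excess at any lower-indexed node. A minimal sketch of that netting step (our own illustration, not the authors' implementation; excesses are initialised to $x_v(p^k)$, with `e[0]` holding the excess at $p^1$):

```python
def maxflow_excess(e):
    """Net excesses along one pixel's chain (type 2 edges run p^{k+1} -> p^k).

    Positive excess at a higher-indexed node can flow down to cancel negative
    excess at any lower-indexed node; type 2 capacities are assumed large
    enough never to bind, so only the source/sink edges limit the flow.
    """
    e = list(e)
    for i in range(len(e)):
        j = i + 1
        while e[i] < 0 and j < len(e):
            if e[j] > 0:
                t = min(e[j], -e[i])   # push t units of flow from node j down to node i
                e[j] -= t
                e[i] += t
            j += 1
    return e

print(maxflow_excess([-2.0, 3.0, -1.0]))   # [0.0, 1.0, -1.0]
```

The remaining negative excess sits at a node no positive excess can reach, matching the max flow state described below.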
+Figure 2 shows an example of a flow graph for a 3-pixel, 3-label problem. Since the starting state is $x_v$, the "initial flow", prior to pushing flow for flow maximization, requires setting the flow in the type 1 edge incident at $p^k$ equal to the value of $x_v(p^k)$ and the flow in all type 2 edges to 0. This is because the sum of the flow on all edges incident at a node may be looked upon as the value of the corresponding element in the base vector$^5$. In effect, initially there are non-zero excesses on the non-$s,t$ nodes in the flow graph, where the excess at a node is defined as the sum of the net in-flow on all edges incident at it. The excess at node $p^k$ is denoted by $e(p^k)$. The max flow state may be looked upon as the result of repeatedly sending flow from a positive-excess vertex to a negative-excess vertex till that is no more possible. The values in the optimal base vector (optimal subject to the given $x_v$) at the end of this iteration are the excesses at the nodes when the max flow state has been reached.
+
+Algorithm 1 Computing Min $\ell_2$ Norm from the Flow Output
+Input: Vector $e$, the output of the max flow algorithm.
+Output: The transformed vector $e$ with minimum $\ell_2$ norm.
+1: for $\forall p\in \mathcal{P}$ do
+2: for $i = 2:m$ do
+3: repeat
+4: Find the smallest $k$, $i\geq k\geq 1$, such that $e(p^i) > e(p^{i - 1}) = e(p^{i - 2})\dots = e(p^k)$ or $e(p^i) = e(p^{i - 1}) = e(p^{i - 2})\dots = e(p^{k + 1}) > e(p^k)$;
+5: Set $e(p^i),e(p^{i - 1}),\ldots ,e(p^k)$ equal to $av_k$, where $av_k$ is the average of $e(p^i),e(p^{i - 1}),\ldots ,e(p^k)$;
+6: until $e(p^{k + 1})\leq e(p^{k})$
+7: end for
+8: end for
+
+# 5.1 Computing Min $\ell_2$ Norm By Flow
+
+Since there is no edge between any two nodes corresponding to different pixels, max flow can be calculated independently for each pixel. When the max flow state is reached in the flow graph associated with a pixel, a vertex which still has a negative excess will be to the left of the vertices with positive excess (with the planar flow graph laid out as in Figure 2); otherwise flow could still be pushed from a positive-excess vertex to a negative-excess vertex.
+
+Note that the optimal base vector is not unique. Consider two adjacent vertices, $p^{k+1}$ and $p^k$ , in the flow graph when the max flow state has been reached. If $e(p^{k+1})$ is larger than $e(p^k)$ then increasing the flow in the edge from $p^{k+1}$ to $p^k$ by $\delta$ decreases $e(p^{k+1})$ by $\delta$ and increases $e(p^k)$ by $\delta$ . The result of this "exchange operation" is to create another optimal base vector but with a smaller $\ell_2$ norm.
+
+An optimal base vector with minimum $\ell_2$ norm corresponds to the max flow state in which $e(p^{k + 1})\leq e(p^k)$ for all adjacent pairs of type 2 vertices. If this were not so, there would exist at least one pair $e(p^{k + 1})$ and $e(p^k)$ such that $e(p^{k + 1}) > e(p^k)$. Doing an exchange operation between $p^{k + 1}$ and $p^k$, setting $e(p^{k + 1})$ and $e(p^k)$ to the average of their old values, would create a new optimal base vector with a lower $\ell_2$ norm. Algorithm 1 gives an efficient procedure to transform the optimal base vector output by the max flow algorithm into one with minimum $\ell_2$ norm. Note that the proposed algorithm simply updates the base vector in one pass without any explicit flow pushing. In contrast, the corresponding algorithm for general flow graphs given in [45] requires $O(n\log n)$ additional max flow iterations over an $n$-vertex flow graph.
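Algorithm 1's repeated averaging amounts to a single pool-adjacent-violators pass that makes the per-pixel excess sequence non-increasing while preserving sums. An illustrative Python sketch (ours, not the authors' implementation; `e[0]` holds $e(p^1)$):

```python
def min_norm_excess(e):
    """One-pass pooling that enforces e[k] >= e[k+1] by averaging violating runs.

    Each merge is a batch of exchange operations: a violating run is replaced
    by its average, preserving the sum (flow conservation) while strictly
    reducing the l2 norm.
    """
    vals, lens = [], []          # stack of (run average, run length)
    for x in e:
        vals.append(float(x))
        lens.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:   # violation: earlier < later
            total = vals[-1] * lens[-1] + vals[-2] * lens[-2]
            n = lens[-1] + lens[-2]
            vals.pop(); lens.pop()
            vals[-1] = total / n
            lens[-1] = n
    out = []
    for v, n in zip(vals, lens):
        out.extend([v] * n)
    return out

print(min_norm_excess([0.0, 1.0, -1.0]))   # [0.5, 0.5, -1.0]
```

The stack-based merge gives the linear-time behaviour claimed above, in contrast to the repeated max flow iterations of [45].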
+
+# 5.2 Overall Algorithm
+
+The proposed Multi-label Hybrid (MLHybrid) algorithm is quite similar to the algorithm in [46] in its overall structure. Just like [46], we create blocks corresponding to each clique, and optimize each block independently (taking the contribution of the other blocks into account as suggested in [46]) in an overall block coordinate descent strategy. The only difference between SoSMNP and MLHybrid is the way we optimize one block. While SoSMNP uses standard MNP, we optimize using a special technique, as outlined in the previous section, with (sub)blocks of valid and invalid extreme bases within each block/queue. Hence, the convergence and correctness of the overall algorithm follows from block coordinate descent, similar to [46]. What remains to be shown is that for a single clique/block, the algorithmic strategy of alternating between valid and invalid blocks converges to the optimum for that clique/block.
+
+Recall that in a standard MNP algorithm iteration, given the current base vector $x$, an extreme base $q$ that minimizes $x^{\mathsf{T}}q$ is added to the current set. Hence, the number of steps to convergence of MNP is bounded by the number of extreme bases that may be added. In our case we have shown in the Supplementary Section that when we start with a valid extreme base, the extreme base generated in the valid block, after using the latest contribution from the invalid block, comes out to be a valid extreme base. This implies that the number of iterations involving invalid blocks cannot exceed the number of valid extreme bases added, as in the standard MNP algorithm. This ensures convergence of the optimization step for each block. The formal convergence proof for the MLHybrid algorithm is given in the Supplementary Section.
+
+The correctness of our optimization for each block follows from the fact that the optimization for valid blocks proceeds in the standard way, and results in a new extreme base given the current base vector. The correctness of the optimization step of the invalid block, which finds a minimum norm base vector given a valid block, has already been explained in the previous section.
+
+# 6 Experiments
+
+We have experimented with pixel-wise object segmentation and stereo correspondence problems. All experiments have been conducted on a computer with an Intel Core i7 CPU and 8 GB of RAM, running Windows 10. Our algorithm is implemented in C++ (https://github.com/ishantshanu/ML-Minnorm). For the segmentation experiments, the input images are from the Pascal VOC dataset [8] with a small amount of Gaussian noise added. We have experimented with two types of submodular clique potentials:
+
+
+Fig. 3: Pixel-wise object segmentation comparison. Input images from the Pascal VOC dataset.
+
+
+Fig. 4: Stereo matching problem. Input images from the Middlebury dataset.
+
+- Decomposable: Sum of absolute difference of labels for all pixel pairs in a clique. Denoted by ABS.
+- Non-decomposable: Concave-of-cardinality potential defined in [53] as: $\sum_{l\in \mathcal{L}} (\text{number of pixels} - \text{number of pixels which have their label as } l)^{\alpha}$. We have used $\alpha = 0.5$ in our experiments.
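As an illustration, the concave-of-cardinality potential for one clique can be computed as follows (our own sketch; `labels` is the list of labels assigned to the clique's pixels and `label_set` is the label space $\mathcal{L}$):

```python
from collections import Counter

def concave_cardinality(labels, label_set, alpha=0.5):
    """Sum over l of (clique size - #pixels labelled l) ** alpha."""
    n = len(labels)
    counts = Counter(labels)
    return sum((n - counts[l]) ** alpha for l in label_set)

# Clique of 3 pixels over labels {0, 1}: (3-2)**0.5 + (3-1)**0.5
print(concave_cardinality([0, 0, 1], {0, 1}))
```

With $0 < \alpha < 1$ each summand is concave in the count, which is what makes the potential submodular yet non-decomposable over pixel pairs.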
+
+For both potentials, two clique-size regimes, namely "Small" (cliques of 60 to 80 elements) and "Big" (cliques of 300 to 400 elements), have been used in the experiments. Overlap between cliques has been ensured by running the SLIC algorithm [1] with different seeds.
+
+Figure 5 shows the IOU values as bars for Deeplabv3+ [6] fine-tuned on noisy images (red), and for MLHybrid run with small cliques (green) and with big cliques (blue), on all the classes of the VOC dataset for the segmentation problem. The likelihood of a label at each pixel, required for our algorithm, is estimated using the scaled score from Deeplabv3+. The scaling factors are specific to labels and are the hyper parameters in our algorithm. We use the pre-trained version of Deeplabv3+ from [6]. Deeplabv3+ gives an overall pixel accuracy of 82.79; with MLHybrid we get pixel accuracies of 84.07 and 85.11 for small and big cliques respectively. The mean IOU values (the three bars at the right end) are 0.544, 0.566, and 0.579 respectively. MLHybrid has been run with non-decomposable clique potentials and the same standard fixed hyper parameters on the VOC dataset.
+
+
+Fig. 5: IOU values across all the classes of the PASCAL VOC dataset.
+
+The performance of MLHybrid improves with fine-tuning of the hyper parameters. Figure 3 shows the visual results on four pictures from the dataset when the hyper parameters have been tuned. To show the extent of the improvement we have also included in Figure 3 the MLHybrid output with the standard hyper parameters (standard-hyp). We have also included the IOU values in the images (upper left-hand corner) corresponding to Deeplabv3+ and MLHybrid (Big Concave) run with standard and fine-tuned hyper parameters respectively. For all four images the IOU values hover around 0.9 when MLHybrid is run with big cliques and concave potentials. Run times for MLHybrid in seconds are shown at the upper right corner of the respective images. Deeplabv3+ takes approximately 0.5 seconds per image excluding the training time. Hyper parameters for $\alpha$-expansion running on pairwise cliques ($4^{th}$ column in Figure 3) are the optimized parameters used for MLHybrid, as are the label likelihoods for the pixels.
+
+Note that the quality of the output is distinctly better for the non-decomposable concave potential than for the decomposable ABS potential, for both Small and Big clique configurations. The output for Big (Concave) matches the ground truth closely. The time taken for concave potentials is distinctly less than for ABS potentials with the same size and number of cliques. This difference arises because the number of iterations taken for convergence is proportionately smaller for non-decomposable potentials. It is reasonable to infer that the segmentation quality improves with clique size. Since for large cliques potentials need to be predefined rather than learnt, designing clique potentials calls for further investigation. Also, since fine-tuning of hyper parameters improves the quality of segmentation results significantly, an area of research with a high pay-off is how to automate the process of fine-tuning the hyper parameters for the segmentation problem.
+
+For stereo correspondence, the images are from the Middlebury dataset [42] and are of size $200 \times 200$. The cliques are generated, as earlier, using the SLIC algorithm. Label likelihood is calculated using the Birchfield/Tomasi cost given in [3]. We consider 16 disparity labels, and the clique potential used is the same as for the segmentation problem. Figure 4 shows the output. We have compared with implementations of Max Product Inference (MPI) [27], TRWS [28], MPLP [16], and $\alpha$-expansion [4] available in the Darwin framework [17]. We use the pairwise absolute-difference-of-labels potential, with a pixel covered by a maximum of four cliques. Methods other than $\alpha$-expansion could not handle the pairwise potentials emanating from all pairs of variables in a clique of size 50 or larger. Primal/dual values are shown below the images and the corresponding running times on top.
+
+Our final experiments show the efficacy of convergence of the MLHybrid algorithm. Table 1 shows the performance of SoS-MNP [46] on the extended pseudo-Boolean submodular function. Since [46] does not bypass $L$, we run it for different values of $L$. Note that the primal and dual
+
+| L = | $10^9$ | $10^{11}$ | $10^{13}$ | $10^{15}$ |
+| --- | --- | --- | --- | --- |
+| Primal | $1.26(10^{15})$ | $1.26(10^{17})$ | $-1.75(10^8)$ | $-1.77(10^8)$ |
+| Dual | $-5.37(10^8)$ | $-5.37(10^8)$ | $-5.58(10^8)$ | $-5.60(10^8)$ |
+
+Table 1: Primal and dual values for SoS-MNP [46] for different values of $L$.
+
+do not converge even when the value of $L$ is as large as $10^{15}$, after running the algorithm for approximately 50 minutes. SoS-MNP not only takes a huge amount of time but does not even converge to the right point.
+
+In contrast, Figure 6 shows the convergence performance of the MLHybrid algorithm for solving a stereo problem on the sawtooth sample with the sum-of-absolute-difference potential. The figure shows that on the same potential function and the same problem size, the time taken for effective convergence by the MLHybrid algorithm is only around 28 seconds. It must be pointed out that one of the factors contributing to the speed gain is the way invalid extreme bases are handled. The flow graph created at each iteration handles a fixed number (only $n(m - 1)$) of elementary extreme bases, which span the space of all invalid extreme bases. The run-time at each iteration is essentially independent of the number of invalid extreme bases added by Wolfe's algorithm.
+
+
+Fig. 6: Convergence of MLHybrid.
+
+# 7 Conclusions
+
+In this paper, we have proposed a new efficient inference algorithm for higher-order multi-label MRF-MAP problems, which enables obtaining the optimal solution to such problems when the potentials are submodular, even when the cliques are of size up to 100 (for a 16-label problem). This has been made possible by exploiting the structure of the potentials used to make the extension function submodular. The min $\ell_2$ norm solution for the block of invalid extreme bases can be found by max flow techniques on a particularly simple flow graph. What takes a series of max flow iterations in [45] requires only two linear-time passes on the resultant flow graph.
+
+# References
+
+1. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: Slic superpixels compared to state-of-the-art superpixel methods. PAMI 34(11), 2274-2282 (2012)
+2. Arora, C., Maheshwari, S.: Multi label generic cuts: Optimal inference in multi label multi clique MRF-MAP problems. In: CVPR. pp. 1346-1353 (2014)
+3. Birchfield, S., Tomasi, C.: A pixel dissimilarity measure that is insensitive to image sampling. PAMI 20(4), 401-406 (1998)
+4. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222-1239 (2001)
+5. Chakrabarty, D., Jain, P., Kothari, P.: Provable submodular minimization using wolfe's algorithm. In: NIPS. pp. 802-809 (2014)
+6. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: ECCV (2018)
+7. Delong, A., Osokin, A., Isack, H.N., Boykov, Y.: Fast approximate energy minimization with label costs. International journal of computer vision 96(1), 1-27 (2012)
+8. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html
+9. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. International journal of computer vision 70(1), 41-54 (2006)
+10. Fix, A., Gruber, A., Boros, E., Zabih, R.: A graph cut algorithm for higher-order markov random fields. In: ICCV. pp. 1020-1027 (2011)
+11. Fix, A., Wang, C., Zabih, R.: A primal-dual algorithm for higher-order multilabel markov random fields. In: CVPR. pp. 1138-1145 (2014)
+12. Freedman, D., Drineas, P.: Energy minimization via graph cuts: Settling what is possible. In: CVPR. pp. 939-946 (2005)
+13. Fujishige, S., Hayashi, T., Isotani, S.: The minimum-norm-point algorithm applied to submodular function minimization and linear programming (2006)
+14. Fujishige, S., Isotani, S.: A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization 7, 3-17 (2011)
+15. Gallagher, A.C., Batra, D., Parikh, D.: Inference for order reduction in Markov random fields. In: CVPR. pp. 1857-1864 (2011)
+16. Globerson, A., Jaakkola, T.S.: Fixing max-product: Convergent message passing algorithms for map lp-relaxations. In: NIPS. pp. 553-560 (2008)
+17. Gould, S.: Darwin: A framework for machine learning and computer vision research and development. JMLR 13(Dec), 3533-3537 (2012)
+18. Hazan, T., Shashua, A.: Norm-product belief propagation: Primal-dual message-passing for approximate inference. Information Theory 56(12), 6294-6316 (2010)
+19. Ishikawa, H.: Higher-order clique reduction without auxiliary variables. In: CVPR. pp. 1362-1369 (2014)
+20. Ishikawa, H.: Exact optimization for Markov Random Fields with convex priors. PAMI 25(10), 1333-1336 (2003)
+21. Ishikawa, H.: Transformation of general binary MRF minimization to the first-order case. TPAMI 33(6), 1234-1249 (2011)
+22. Iwata, S., Fleischer, L., Fujishige, S.: A combinatorial strongly polynomial algorithm for minimizing submodular functions. JACM 48(4), 761-777 (2001)
+
+23. Jegelka, S., Bach, F., Sra, S.: Reflection methods for user-friendly submodular optimization. In: NIPS. pp. 1313-1321 (2013)
+24. Kahl, F., Strandmark, P.: Generalized roof duality for pseudo-boolean optimization. In: ICCV. pp. 255-262 (2011)
+25. Kappes, J.H., Andres, B., Hamprecht, F.A., Schnorr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B.X., Kröger, T., Lellmann, J., Komodakis, N., Savchynskyy, B., Rother, C.: A comparative study of modern inference techniques for structured discrete energy minimization problems. IJCV 115(2), 155-184 (2015)
+26. Kohli, P., Torr, P.H., et al.: Robust higher order potentials for enforcing label consistency. IJCV 82(3), 302-324 (2009)
+27. Koller, D., Friedman, N., Bach, F.: Probabilistic graphical models: principles and techniques. MIT press (2009)
+28. Kolmogorov, V.: Convergent tree-reweighted message passing for energy minimization. PAMI 28(10), 1568-1583 (2006)
+29. Kolmogorov, V.: Submodularity on a tree: Unifying $L^{\natural}$-convex and bisubmodular functions. In: International Symposium on Mathematical Foundations of Computer Science. pp. 400-411. Springer (2011)
+30. Kolmogorov, V.: Minimizing a sum of submodular functions. Discrete Applied Mathematics 160(15), 2246-2258 (2012)
+31. Kolmogorov, V.: A new look at reweighted message passing. TPAMI 37(5), 919-930 (2015)
+32. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? TPAMI 26(2), 147-159 (2004)
+33. Komodakis, N., Paragios, N.: Beyond pairwise energies: Efficient optimization for higher-order MRFs. In: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. pp. 2985-2992. IEEE (2009)
+34. McCormick, S.T.: Submodular function minimization (2005)
+35. Murota, K.: On steepest descent algorithms for discrete convex functions. SIAM Journal on Optimization 14(3), 699-707 (2004)
+36. Orlin, J.B.: A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming 118(2), 237-251 (2009)
+37. Pearl, J.: Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann (2014)
+38. Potetz, B., Lee, T.S.: Efficient belief propagation for higher-order cliques using linear constraint nodes. CVIU 112(1), 39-54 (Oct 2008)
+39. Ramalingam, S., Russell, C., Ladicky, L., Torr, P.H.: Efficient minimization of higher order submodular functions using monotonic boolean functions. arXiv preprint arXiv:1109.2304 (2011)
+40. Roth, S., Black, M.J.: Fields of experts. IJCV 82(2), 205-229 (2009)
+41. Rother, C., Kohli, P., Feng, W., Jia, J.: Minimizing sparse higher order energy functions of discrete variables. In: CVPR. pp. 1382-1389 (2009)
+42. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV 47(1-3), 7-42 (2002)
+43. Schrijver, A.: A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B 80(2), 346-355 (2000)
+44. Schrijver, A.: Combinatorial optimization: polyhedra and efficiency, vol. 24. Springer Science & Business Media (2003)
+45. Shanu, I., Arora, C., Maheshwari, S.: Inference in higher order mrf-map problems with small and large cliques. In: CVPR. pp. 7883-7891 (2018)
+
+46. Shanu, I., Arora, C., Singla, P.: Min norm point algorithm for higher order MRF-MAP inference. In: CVPR. pp. 5365-5374 (2016)
+47. Sontag, D., Globerson, A., Jaakkola, T.: Introduction to dual decomposition for inference. Optimization for Machine Learning 1, 219-254 (2011)
+48. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M., Rother, C.: A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. TPAMI 30(6), 1068-1080 (Jun 2008)
+49. Tarlow, D., Givoni, I.E., Zemel, R.S.: HOP-MAP: Efficient message passing with high order potentials. In: AISTATS (2010)
+50. Windheuser, T., Ishikawa, H., Cremers, D.: Generalized roof duality for multi-label optimization: Optimal lower bounds and persistency. In: European Conference on Computer Vision. pp. 400-413. Springer (2012)
+51. Woodford, O., Torr, P., Reid, I., Fitzgibbon, A.: Global stereo reconstruction under second order smoothness priors. In: CVPR. pp. 1-8 (2008)
+52. Yedidia, J.S., Freeman, W.T., Weiss, Y.: Generalized belief propagation. In: Advances in neural information processing systems. pp. 689-695 (2001)
+53. Zhang, J., Djolonga, J., Krause, A.: Higher-order inference for multi-class log-supermodular models. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1859-1867 (2015)
\ No newline at end of file
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/images.zip b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6fcfa47a0a9390da01d6cc154c007037caa00138
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7bdd6a638b1b2b199d34f5b1e3acb527c89facfb710e34143dddb331632ff3b5
+size 272782
diff --git a/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/layout.json b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e6024e0fc9a49bce3babb628e428cf48d24dac62
--- /dev/null
+++ b/aninferencealgorithmformultilabelmrfmapproblemswithcliquesize100/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96ac3198476c5918cdf94c2dc5066b6e26742a999ae2b3e2708558319ceb58b8
+size 681863
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_content_list.json b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fe55b16918ee7234cebdbd78837f354f1a221150
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31381b2a6cbd69210628b2d78ca873f5702b6aa66069ed901cde5da06ca1c648
+size 74110
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_model.json b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e36d190be2ba0b31871f22c62b06802027c10034
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c20ed50c5d216ca51c1bf3d3604a0da6fb38d43db5c96fbbee1cdbae0465d8e9
+size 92255
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_origin.pdf b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..da89ab564b1e473f83b9acf6deb08c1cd500e580
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/431c03ec-a561-4ada-af16-62c11fb710b5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d0228cf7c8226d3b1fd4f8130d86ab5c6a89d25d297798a07ad8d5f92ce8d4c
+size 8231619
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/full.md b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a84f574a445b36b4f2a8fa1944a4f1b87b858792
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/full.md
@@ -0,0 +1,280 @@
+# An LSTM Approach to Temporal 3D Object Detection in LiDAR Point Clouds
+
+Rui Huang, Wanyue Zhang, Abhijit Kundu, Caroline Pantofaru, David A Ross, Thomas Funkhouser, and Alireza Fathi
+
+Google Research huangrui@google.com
+
+Abstract. Detecting objects in 3D LiDAR data is a core technology for autonomous driving and other robotics applications. Although LiDAR data is acquired over time, most of the 3D object detection algorithms propose object bounding boxes independently for each frame and neglect the useful information available in the temporal domain. To address this problem, in this paper we propose a sparse LSTM-based multi-frame 3D object detection algorithm. We use a U-Net style 3D sparse convolution network to extract features for each frame's LiDAR point cloud. These features are fed to the LSTM module together with the hidden and memory features from the last frame to predict the 3D objects in the current frame as well as hidden and memory features that are passed to the next frame. Experiments on the Waymo Open Dataset show that our algorithm outperforms the traditional frame-by-frame approach by $7.5\%$ mAP@0.7 and other multi-frame approaches by $1.2\%$ while using less memory and computation per frame. To the best of our knowledge, this is the first work to use an LSTM for 3D object detection in sparse point clouds.
+
+Keywords: 3D Object Detection, LSTM, Point Cloud
+
+# 1 Introduction
+
+3D object detection is one of the fundamental tasks in computer vision. Given observations of a scene with a 3D sensor (e.g., LiDAR), the goal is to output semantically labeled 3D oriented bounding boxes for all objects in every observation. This task is critical for autonomous driving, object manipulation, augmented reality, and many other robot applications.
+
+Although almost all robot sensors capture data continuously (LiDAR, RGB-D video, RGB video, etc.), most 3D object detection algorithms consider only one "frame" of input sensor data when making bounding box predictions. Historically, multi-frame data has not been widely available (e.g. the Kitti 3D Object Detection Challenge [15] provides only one LiDAR sweep for each scene). However, after datasets with multi-frame sequences of LiDAR were released [3,5,42], most 3D object detection algorithms still work frame by frame. Among the algorithms with reported results on the nuScenes and Waymo object detection tasks, we find that only Ngiam et al. [34] and Hu et al. [21] consider multiple frames as input, and they both use simple methods based on reusing seed points or concatenating input data from multiple frames.
+
+
+Fig. 1. Our method consumes a sequence of point clouds as input. At each time step, the proposed LSTM module combines the point cloud features from the current frame with the hidden and memory features from the previous frame to predict the 3d objects in the current frame together with the hidden and memory features that are passed to the next frame. For memory efficiency we pass only the hidden feature points that have high score to the next frame (pink in the images). $D_{t}$ and $h_t$ represent the 3d object detections and hidden features in frame $t$ respectively.
+
+In this paper, we investigate a new method, depicted in Fig 1, that utilizes the temporal sequence of LiDAR data acquired by an autonomous vehicle for 3D object detection. Our approach is to use the memory of an LSTM to encode information about objects detected in previous frames in a way that can assist object detection in the current frame. Specifically, we represent the memory and hidden state of the LSTM as 64-dimensional features associated with 3D points observed in previous frames. At each frame, we use an LSTM architecture to combine features from these memory and hidden state 3D point clouds with features extracted from the latest observed 3D point cloud to produce bounding box predictions for the current frame and update the memory and hidden state for the next frame.
+
+The rationale for this approach is that the LSTM memory can represent everything known about object detections in the past in a concise set of features associated with a sparse set of 3D positions (ideally near past object detections). In comparison to previous methods that concatenate input point clouds from multiple timesteps at every frame, this approach is more memory and compute efficient, as we include a relatively small number of 3D points related to past object detections in our LSTM memory rather than all the input points from previous frames (redundant storage and processing of 3D points on background objects such as trees and buildings is wasteful). In comparison to a traditional LSTM, our approach of associating memory and hidden states with 3D points provides a spatial attention mechanism that assists object detection and enables transformation of the memory and hidden state from frame to frame based on the egomotion of the vehicle. By associating the memory and hidden state with 3D points contributing to confident object detections in the past, we expect to get more accurate and robust detections with this approach.
+
+Our implementation is built on a U-Net style sparse 3D convolution backbone (SparseConv) as described in [33]. The point cloud for each input frame is voxelized into sparse 3d voxels, convolved on a sparse grid, and then associated with encoded features. Then, the encoded point cloud features are jointly voxelized with the memory and hidden features and passed through a SparseConv inside the LSTM. The LSTM network outputs hidden and memory point cloud features that will be passed to the next frame. Furthermore, the predicted hidden features are fed into the 3d detection head to generate per point object bounding box proposals (center, size, rotation, and confidence), which are further processed with a graph convolution to smooth per point predictions in local neighborhoods and non maximum suppression (NMS) to select a highly confident and non-overlapping set of proposed bounding boxes. The hidden and memory state (point features) only keep track of features in locations that have high objectness score. This enables us to be more memory efficient and to be able to aggregate and reason about information in a long sequence.
+
+Experiments show that this method outperforms frame by frame detection models by $7.5\%$ $\mathrm{mAP}@0.7$ and beats a strong multi-frame concatenation baseline model by $1.2\%$ . Our model achieves $6.8\%$ better results than a baseline that refines predicted bounding boxes using the classical combination of frame by frame detection, Hungarian assignment, and Kalman filtering [44].
+
+Our key contributions are summarized below:
+
+- We propose the first LSTM-based sequential point cloud processing framework for 3D object detection. It provides a significant performance boost over a single frame state-of-the-art 3D SparseConv model. Furthermore, our model outperforms a strong baseline based on concatenating multi-frame data.
+- We propose a 3D Sparse Conv LSTM where a small 3d sparse U-Net replaces the fully connected layer in a vanilla LSTM. Our model has explicit memory to facilitate reasoning across long sequences of point clouds. Compared to point-based methods, our voxel-based module is effective and efficient in fusing accumulated memory and input data at multiple scales, while maintaining a constant memory footprint regardless of sequence length at inference time.
+
+# 2 Related Work
+
+3D Object Detection A common approach to 3D object detection is to utilize ideas that have been successful for 2D object detection [47,41,40,6,27,26,32]. For instance, Frustum-PointNet [36] uses 2D detectors on RGB images and point clouds from the depth sensor. However, the search space for potential objects is limited to the 3D viewing frustum extended from 2D regions. MV3D [6] deploys a multi-view fusion network for features extracted from the bird's-eye view, LiDAR range view and RGB images. Building on soft voxelization, Zhou et al. [52] fuse features based on Cartesian coordinates, perspective coordinates and the output of a shared fully connected layer from LiDAR points.
+
+Another class of methods [48,51,19,50,39,35,37] propose networks that directly consume the 3d point cloud as input. Shi et al. [39] propose a bottom-up approach to directly generate 3D bounding box proposals from the point cloud, followed by a sub-network for refinement. VoteNet [35] uses PointNet++ [37] backbone to vote for object centers. The votes are then clustered to produce the final bounding box proposals.
+
+There is an increasing trend to convert point clouds to regular grids where 3d convolution can be conveniently constructed. VoxelNet [53] partitions point clouds into voxels but this method is computationally expensive. Some of the previous works attempt to solve this issue by making use of the sparsity pattern of 3D points [12,16,17,38,12,33]. SparseConv [18,33] is exceptionally efficient as convolutions are restricted to active sites and sparsity pattern is preserved even after layers of convolutions. Our work uses a sparse voxel U-Net as the backbone as described in [33].
+
+Spatio-temporal Methods Various ways to make use of the temporal information are experimented for different vision tasks such as prediction in video data and modeling the human motion dynamics [46,45,7,22,23]. In addition, there are various LSTM based methods for object detection in video [14,24,45,7]. Among those, Xiao et al. [45] introduce a spatio-temporal memory module (STMM) to model temporal appearance and motion changes of objects. Teng et al. [43] explore detecting objects in streaming video using weak supervision by tracking and optical flow.
+
+For LiDAR point clouds, [49,11,31] use ConvGRU or ConvLSTM to process the bird's-eye view projection. Luo et al. [30] explore the synergy of 4 tasks for autonomous driving: detection, tracking, motion forecasting and motion planning. By concatenating multiple frames of input, 2d convolution is performed on voxels to forecast the next $n$ frames. Inspired by Luo et al. [30], Casas et al. [4] jointly tackle detection and motion estimation by adding a rasterized map to provide environmental context for more accurate forecasting. MeteorNet [29] processes point clouds directly and proposes direct grouping and chained grouping to find nearest spatio-temporal neighbors. This method is applied to semantic segmentation, classification and flow estimation. Choy et al. [9] augment 3D data with the time axis and build sparse 4D convolutions using non-conventional kernel shapes. Our approach is distinct from the above as we propose a 3d sparse LSTM model that consumes sparse 3d data and performs 3d sparse operations and merging to perform 3d object detection.
+
+The closest related work to ours is PointRNN [13], which adapts RNNs for predicting scene flow on multi-frame point clouds. It proposes a point-rnn function to aggregate the past state and the current input based on the point coordinates. Our approach is different as we conduct 3D sparse convolution on adjacent voxels which avoids the expensive step of finding nearest neighbors for each point in the point cloud. Besides, no permutation invariant aggregation is needed by our method. Furthermore, we focus on 3d object detection in a sequence while PointRNN [13] focuses on scene flow.
+
+3D Scene Flow and Object Tracking Some researchers have focused on the related problem of predicting scene flow (3D motion vector per point) from pairs of input point clouds in adjacent frames. Since the magnitude and direction of movement of points provides a cue for object detection, these two tasks could provide mutual context for each other. For instance, Behl et al. [1] extract xyz object coordinates from 4 RGB images and incorporate detection and instance segmentation cues from 2D to improve scene flow. PointFlowNet [2] gets rid of the reliance on 2D images by using an encoder-decoder model to tackle flow, object location and motion in conjunction. FlowNet3D [10] consumes point clouds directly by using a Set Conv Layer to down-sample points, a flow embedding layer to aggregate features from two point clouds and a Set UpConv layer to get a per-point estimation of the translation vectors. While these methods are loosely related to ours, they are aimed at predicting flow for the entire scene and do not aim at improving 3d object detection.
+
+Some of the previous works [44,8,5] have focused on 3D tracking of objects. However, these algorithms mainly focus on generating multi-frame tracks and do not necessarily result in a more accurate per frame 3d object detection. Wang et al. [44] detect objects in every frame and then use the Hungarian algorithm to associate objects and Kalman filter to aggregate predictions across frames. We use this method as a baseline and compare our 3d object detection accuracy with theirs in the results section.
+
+# 3 Method
+
+The architecture of our method is shown in Fig 2. In each frame, we extract point features from the input point cloud by feeding it into a 3d sparse voxel conv U-Net as described in [33]. We feed the extracted features together with the memory and hidden features from previous frame to our proposed 3d sparse conv LSTM which processes them and outputs the hidden and memory features that will be consumed by the next frame. In the mean time, an object detection head is applied to the hidden features to produce 3d object proposals in each frame. The proposals are then passed through a graph convolution stage and then non-maximum suppression to output the detected 3d objects for each frame.
+
+
+Fig. 2. Overview of the temporal detection framework: A sequence of point clouds are processed by a Sparse Conv U-Net backbone in each frame. The 3d sparse LSTM fuses the backbone feature at the current time step $t$ with the hidden and memory feature at the previous time step $t - 1$ to produce hidden and memory feature at time step $t$ . Object proposals are generated from the hidden feature and refined using a graph convolution network. Farthest point sampling and non-maximum suppression are applied to the proposed 3d objects to produce the final detected 3d objects.
+
+# 3.1 3D Sparse Conv U-Net
+
+To extract point features from the input point cloud, we use a U-Net shaped backbone as described in [33]. The input to our feature extractor is a point cloud as a $N \times 3$ tensor (points with their xyz position). The network first voxelizes the point cloud into sparse 3d voxels. If multiple points fall in the same voxel, the voxel feature is the average of the xyz locations of those points. The network encoder consists of several blocks of 3d sparse convolution layers where each block is followed by a 3d max pooling. The U-Net decoder upsamples the spatial resolution gradually with several blocks of sparse convolution layers and upsampling with skip connections from the encoder layers. The extracted voxel features are de-voxelized back to the points to output a $N \times F$ tensor where $F$ is the extracted feature dimension.
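The per-voxel averaging step described above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's SparseConv implementation; the function name `voxelize_mean` and the voxel size are chosen here for the example.

```python
import numpy as np

def voxelize_mean(points, voxel_size=0.2):
    """Group points into voxels; each voxel's feature is the mean xyz of its points."""
    # Integer voxel coordinate for every point.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Unique occupied voxels and an index mapping each point to its voxel.
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, points)   # unbuffered scatter-add per voxel
    np.add.at(counts, inverse, 1)
    return uniq, sums / counts[:, None]
```

Only occupied voxels are materialized, which mirrors the sparsity the backbone relies on.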
+
+# 3.2 3D Sparse Conv LSTM
+
+We use an LSTM based on a 3d sparse conv network to leverage the temporal information in the sequence of LiDAR frames. Here we first review the basic notation for our 3d sparse LSTM module (Fig 3) and then introduce the key differences and challenges in more detail.
+
+
+Fig. 3. 3D sparse conv LSTM structure. The backbone feature $x_{t}$ , memory feature $c_{t-1}$ and hidden feature $h_{t-1}$ are jointly voxelized. A lightweight SparseConv U-Net takes the concatenation of $x_{t}$ and $h_{t-1}$ to produce gates and memory candidate. Output features from the LSTM are de-voxelized before being sent to the next time step.
+
+Vanilla LSTM: Long short term memory (LSTM) [20] is a common variant of recurrent neural network used extensively for time series data and natural language processing. The vanilla LSTM structure is described below:
+
+$$
+f _ {t} = \sigma \left(W _ {f} \cdot \left[ h _ {t - 1}, x _ {t} \right] + b _ {f}\right) \tag {1}
+$$
+
+$$
+i _ {t} = \sigma \left(W _ {i} \cdot \left[ h _ {t - 1}, x _ {t} \right] + b _ {i}\right) \tag {2}
+$$
+
+$$
+\tilde {c} _ {t} = \tanh \left(W _ {c} \cdot \left[ h _ {t - 1}, x _ {t} \right] + b _ {c}\right) \tag {3}
+$$
+
+$$
+c _ {t} = f _ {t} \times c _ {t - 1} + i _ {t} \times \tilde {c} _ {t} \tag {4}
+$$
+
+$$
+o _ {t} = \sigma \left(W _ {o} \left[ h _ {t - 1}, x _ {t} \right] + b _ {o}\right) \tag {5}
+$$
+
+$$
+h _ {t} = o _ {t} \times \tanh \left(c _ {t}\right) \tag {6}
+$$
+
+The input feature at current time step $x_{t}$ and the hidden feature at the previous time step $h_{t-1}$ are concatenated before being transformed by a fully connected layer with weight matrix $W$ and bias $b$ . The transformed feature is activated by either sigmoid $(\sigma)$ or tanh function to produce input gate $(i_{t})$ , forget gate $(f_{t})$ , output gate $(o_{t})$ and cell memory candidate $(\tilde{c}_{t})$ for the current time step. The cell memory $c_{t}$ is updated from $\tilde{c}_{t}$ and the cell memory at previous time step $c_{t-1}$ , where $\times$ denotes element-wise multiplication.
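For reference, Eqs. (1)-(6) can be written directly as a small NumPy step. The stacked weight layout and the name `lstm_step` are conventions chosen for this sketch; in the paper, the fully connected transform below is replaced by a sparse conv U-Net.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One vanilla LSTM step. W stacks the four gate matrices: shape (4H, F+H)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b  # shared transform of [h_{t-1}, x_t]
    f_t = sigmoid(z[:H])                 # forget gate, Eq. (1)
    i_t = sigmoid(z[H:2 * H])            # input gate, Eq. (2)
    c_tilde = np.tanh(z[2 * H:3 * H])    # memory candidate, Eq. (3)
    c_t = f_t * c_prev + i_t * c_tilde   # cell memory update, Eq. (4)
    o_t = sigmoid(z[3 * H:])             # output gate, Eq. (5)
    h_t = o_t * np.tanh(c_t)             # hidden state, Eq. (6)
    return h_t, c_t
```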
+
+LSTM on Sparse Point Clouds: In our context, $x_{t}$ of size $N_{t} \times F$ denotes the point cloud features extracted using our 3d sparse backbone, while $h_{t-1}$ and $c_{t-1}$ of size $N_{t-1}' \times F'$ are the hidden and memory point features respectively. We subsample the hidden and memory point features and keep only a subset $(N_{t-1}')$ of the points that have high semantic scores (obtained from the pre-trained single frame detection model).
+
+In order for the LSTM to be able to fuse multiple 3d sparse tensor features, we replace the fully connected layer in the vanilla LSTM with a lightweight 3d sparse conv U-Net structure to produce the gates and the cell memory candidate. This approach ensures that the LSTM has enough capacity to conduct sequential reasoning in the 3d sparse space and avoids the expensive nearest neighbor search of point-based methods [13,28,29].
+
+Joint Voxelization: Due to object motion in the scene (even though we compensate for egomotion), $x_{t}$ and $h_{t-1}$ (or $c_{t-1}$ ) will not align in 3D space. Our solution to this problem is to jointly voxelize the three point clouds, namely $x_{t}$ , $h_{t-1}$ and $c_{t-1}$ . The resulting three voxel grids are then concatenated along the feature dimension. If one of the sparse point features has no point in a voxel but the others do, we pad the voxel features of the one with the missing points with zeros.
+
+In other words, since the point clouds do not overlap in some regions, joint voxelization inserts empty voxels into each voxel grid in the non-overlapping regions. This means that the joint sparse voxel representation covers the union of the spatial extents of all participating point clouds, which is still extremely sparse.
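A toy sketch of this joint voxelization with zero padding follows. The helper `joint_voxelize` is hypothetical: a real implementation would average multiple points per voxel and use sparse hash grids rather than a Python dict, but the padding logic is the same.

```python
import numpy as np

def joint_voxelize(clouds, feats, voxel_size=0.2):
    """Voxelize several point clouds on one shared grid and concatenate their
    per-voxel features, zero-padding voxels where a cloud has no points."""
    all_coords = np.concatenate(
        [np.floor(p / voxel_size).astype(np.int64) for p in clouds])
    uniq = np.unique(all_coords, axis=0)          # union of occupied voxels
    index = {tuple(v): i for i, v in enumerate(uniq)}
    dims = [f.shape[1] for f in feats]
    out = np.zeros((len(uniq), sum(dims)))        # zeros = padding by default
    col = 0
    for p, f, d in zip(clouds, feats, dims):
        coords = np.floor(p / voxel_size).astype(np.int64)
        for c, row in zip(coords, f):
            out[index[tuple(c)], col:col + d] = row  # last point wins in this toy
        col += d
    return uniq, out
```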
+
+# 3.3 Object Detection
+
+The proposal head takes the voxelized hidden features $h_t$ of size $N \times F'$ from the LSTM at each time step to independently generate per voxel bounding box proposals (center, rotation, height, length and width). The predictions are then de-voxelized to produce per point bounding box predictions, taking into account each point's position offset within the voxel. During de-voxelization we transfer the prediction associated with each voxel to all the points that fall inside it. The head is implemented with 3 layers of sparse convolutions for each attribute.
+
+As described in [33], we construct a graph on top of the per point predictions, where each point (node) is connected to its K nearest neighbors with similar object center predictions. The predicted object attributes are propagated based on a predicted weight per point. The weight determines the significance of each point in comparison with its neighbors. The bounding box prediction loss is applied both before and after the propagation.
+
+At inference time, we sample a subset of high-score, farthest-apart predictions and then apply non-maximum suppression to output the final 3D object detection results.
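Greedy NMS can be sketched as below, simplified to axis-aligned boxes for brevity; the paper's boxes are oriented, so the real IoU computation is more involved. Box layout `(cx, cy, cz, l, w, h)` and the 0.7 threshold are assumptions of this sketch.

```python
import numpy as np

def iou_3d_axis_aligned(a, b):
    """IoU of two axis-aligned 3D boxes given as (cx, cy, cz, l, w, h)."""
    lo = np.maximum(a[:3] - a[3:] / 2, b[:3] - b[3:] / 2)
    hi = np.minimum(a[:3] + a[3:] / 2, b[:3] + b[3:] / 2)
    inter = np.prod(np.maximum(hi - lo, 0.0))
    union = np.prod(a[3:]) + np.prod(b[3:]) - inter
    return inter / union

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = np.argsort(-scores)
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        # Retain only remaining boxes with low overlap against the kept box.
        order = order[1:][[iou_3d_axis_aligned(boxes[i], boxes[j]) < iou_thresh
                           for j in order[1:]]]
    return keep
```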
+
+# 3.4 Training
+
+We first train a single frame backbone which we use to extract the encoded point features for each frame. The proposed LSTM module processes the encoded point features $x_{t}$ together with the hidden and memory point features from the previous frame $(h_{t-1}$ and $c_{t-1})$ and outputs hidden and memory features $h_t$ and $c_t$ for the current frame. The object detection head takes $h_t$ as input and outputs the 3d detected objects in frame $t$ . The 3d box regression and classification losses are applied to the outputs in every frame. Our algorithm operates in the local coordinate frame, which means features from previous frames are transformed into the current frame to compensate for egomotion.
+
+As described in [33], we adopt a hybrid of regression and classification losses for bounding box prediction. Each bounding box is represented by height, length, width, center location and a $3 \times 3$ rotation matrix. Instead of computing a separate loss for each of the parameters, we use an integrated box corner loss which can be back-propagated to update all the individual attributes at once. We first calculate the 8 box corners in a differentiable way, then apply a Huber loss on the distance between the ground-truth and predicted corners. The benefit of doing this is ease of training, as we do not have to tune multiple individual losses.
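A minimal sketch of this corner loss follows, under assumed conventions (corner ordering, `size` as (l, w, h), Huber delta of 1) that the paper does not specify; an actual implementation would be differentiable in an autodiff framework.

```python
import numpy as np

def box_corners(center, size, R):
    """8 corners of an oriented 3D box: center (3,), size (l, w, h), rotation R (3, 3)."""
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)])
    local = signs * np.asarray(size) / 2.0   # corners in the box frame
    return local @ R.T + center              # rotate, then translate

def corner_huber_loss(pred, gt, delta=1.0):
    """Mean Huber loss over the 8 per-corner distances between two boxes."""
    d = np.linalg.norm(box_corners(*pred) - box_corners(*gt), axis=1)
    quad = np.minimum(d, delta)              # quadratic region of the Huber loss
    return np.mean(0.5 * quad**2 + delta * (d - quad))
```

Because the same corner function is applied to both boxes, gradients flow to center, size, and rotation through a single scalar loss.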
+
+We use a dynamic classification loss as described in [33]. At each step, we classify the predictions that have more than $70\%$ IoU with their corresponding ground-truth object as positive and the rest of the predictions as negative. As the model gets better at box prediction, there will be more positive predicted boxes over time.
+
+# 4 Experimental Results
+
+We perform a series of experiments to evaluate the performance of our LSTM network in comparison to the alternative approaches. Furthermore, we study the effects of our design decisions through ablation studies.
+
+
+Fig. 4. Example sequences in Waymo Open Dataset. Each frame is colored differently. A few fast moving cars and two walking pedestrians are shown over three frames.
+
+
+
+Dataset: We use the recently released Waymo Open Dataset [42] for our experiments. It contains 1000 sequences (798 training and 202 validation) captured in major US cities under diverse weather conditions and times of the day. Each sequence (Fig 4) has approximately 200 frames sampled at $100\mathrm{ms}$ intervals. Each frame has multiple LiDARs and cameras with annotated 3D and 2D bounding box labels for vehicles, pedestrians, cyclists, and signs. In our experiments, only LiDAR point clouds and 3D bounding box labels for vehicles with $5+$ points are used for training and evaluation. The Waymo Open Dataset is a larger-scale dataset than previous self-driving car datasets such as the Kitti dataset [15]. The 20-second sequence for each scene enables training and evaluation of temporal 3D object detection on point clouds in challenging and realistic autonomous driving scenarios.
+
+| Model | mAP@0.7 IoU |
+| --- | --- |
+| StarNet [34] | 53.7 |
+| PointPillars† [25] | 57.2 |
+| MVF [52] | 62.9 |
+| U-Net | 56.1 |
+| U-Net + Kalman Filter [44] | 56.8 |
+| Concatenation (4 frames) | 62.4 |
+| Ours (4 frames) | 63.6 |
+
+Table 1. 3D object detection results on Waymo Open dataset validation set. Unless noted otherwise, the models are using single frame. †:re-implemented by [34].
+
+Experiment Details: We follow the metric used in almost all self-driving car datasets: mean average precision (mAP) for 7-degree-of-freedom 3D boxes at an intersection-over-union (IoU) threshold of 0.7 for vehicles.
+
+For the object detection backbone, the encoder contains 6 blocks each with two 3D SparseConv layers, with output feature dimensions of 64, 96, 128, 160, 192, 224, 256. The decoder has the same structure in reversed order with skip connections from encoder layers.
+
+We use a lightweight 3D sparse U-Net for the LSTM that has one encoder block (of 128 dimensions), max pooling, one bottleneck block (of 128 dimensions), unpooling, and one decoder block (of 256 dimensions). Models are trained on 20 synced GPUs with a batch size of 2 per GPU (an effective batch size of 40). We train the model with an initial learning rate of 0.1. After 25k steps, we decay the learning rate every 7k steps by factors of [0.3, 0.1, 0.01, 0.001, 0.0001]. We use a voxel size of $[0.2\mathrm{m}, 0.2\mathrm{m}, 0.2\mathrm{m}]$ . The LSTM module uses the sub-sampled point cloud features computed by the backbone as well as the hidden and memory features it receives from the previous frame. Hidden feature points from previous steps are accumulated during the sequence. In practice, this only slightly increases the number of non-empty voxels at each step.
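One plausible reading of the learning-rate schedule above as code; the step counts and factors come from the text, while the exact decay boundaries (e.g. whether the first factor applies at step 25k exactly) are an assumption of this sketch.

```python
def learning_rate(step, base=0.1, warm_steps=25_000, decay_every=7_000,
                  factors=(0.3, 0.1, 0.01, 0.001, 0.0001)):
    """Stepwise decay: base LR for the first 25k steps, then multiply by the
    next factor every 7k steps; the last factor persists afterwards."""
    if step < warm_steps:
        return base
    idx = min((step - warm_steps) // decay_every, len(factors) - 1)
    return base * factors[idx]
```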
+
+# 4.1 Object Detection Results
+
+We show our results on the Waymo Open Dataset in Table 1. Our first baseline (U-Net) is a single frame model built on our sparse 3D convolution U-Net backbone (without the LSTM), which achieves $56.1\%$ mAP at IoU 0.7. Our second baseline combines the single frame detector with AB3DMOT [44], which deploys a combination of a 3D Kalman filter and the Hungarian algorithm. The Kalman filter is a classical method for tracking objects, which we use to update measurements based
+
+
+Fig. 5. We compare our sparse LSTM 3d object detection results with the one frame 3d detection baseline. Left: ground truth labels; Middle: single frame predictions; Right: LSTM predictions. Misaligned (arrows) and missing (circles) vehicles are highlighted.
+
+on priors from previous frames $^1$ . We build this baseline by applying the Kalman filter on top of the single frame detector. Based on our experiments, this method achieves a $0.7\%$ gain in comparison to the single frame baseline.
+
+Our third baseline feeds the concatenation of 4 frames into our U-Net backbone (the same as the first baseline, but with 4 frames of input). We concatenate the points in the feature dimension after applying the ego-motion transformation. Since points from different frames do not align, features for voxels missing in a given frame are zero-padded. This is more flexible than Luo et al. [30]'s early fusion with 1D convolution, and more memory and compute efficient than their late fusion since the backbone runs only once. In comparison to the U-Net baseline, this gives rise to a $6.3\%$ increase of mAP to $62.4\%$ . Finally, in the last row we show our proposed LSTM model (4 frames) with the best performance of $63.6\%$ mAP@0.7.
+
+We report the results of other single-frame detectors for comparison. StarNet [34] is a point-based detector based on sampling instead of learned proposals. It achieves $53.7\%$ on the validation dataset. PointPillars [25] organizes point clouds into regular vertical columns and detects objects using 2D CNN. It achieves $57.2\%$ mAP (re-implemented by [34]). MVF [52] has the state-of-the-art single frame results on the Waymo Open Dataset. However, their method is not directly comparable to ours since they perform significant data augmentation. Regardless, our focus is on how an LSTM can be used to improve the results of any method, which is largely orthogonal to any particular choice of single-frame baseline. The results demonstrate its effectiveness ( $7.5\%$ improvement in mAP@0.7 over the single frame model with the same U-Net backbone).
+
+
+Fig. 6. Visualization of the forget gate of our proposed LSTM module, prediction of object detection and the ground truth label. The gate is visualized as a heatmap, where a high value (red) intuitively means not forgetting the hidden features at the location. We included ground truth boxes in this figure for clarity. The points within vehicles are mostly in high value, while the buildings and other objects are mostly in blue. The color in the prediction represents semantic classes of vehicles (red), background (black).
+
+The qualitative results of our method are shown in Fig 5 and Fig 6. Fig 5 shows that our LSTM method predicts more accurate bounding boxes and produces fewer false negatives in comparison to the single frame baseline.
+
+Due to the NMS process, there are often more predicted boxes than in the ground truth. These false positives usually have low semantics scores (confidence). For better visualization, the forget gate feature heat maps (Fig 6) are sampled in point locations of a full point cloud from a voxel grid. The actual memory features point cloud (pink in Fig 7) concentrates on a smaller spatial extend, mostly on object surfaces. The memory features indicate the spatial attention of the network, which is useful to carry the most relevant information from previous frames to future frames.
+
+
+Fig. 7. Locations where hidden and memory features are selected (pink points).
+
+In Table 2, we present the results of our LSTM model with different number of frames. In the first row, as a sanity check, we show the mAP accuracy when applying our LSTM model to one frame. We see a $2.6\%$ increase in comparison to the one frame raw U-Net model shown in row 3 of Table 1. This is because our LSTM model has convolutional and non-linear layers and gates that enhance the expressiveness of the network. We see a $1.0\%$ improvement when using a 2-frame LSTM model in comparison to the 1-frame one. For the 4-frame LSTM model, mAP reaches $63.6\%$ , with a $4.9\%$ improvement in comparison to the 1-frame LSTM model.
+
+In order to take 7 frames as input, we decrease the batch size from 2 to 1 due to memory constraints. Compared with the 4 frame model with batch size 1, the performance increases by $1.0\%$ . Overall, the ablation study shows that LSTM with hidden and memory features over longer sequences results in higher 3d object detection accuracy.
+
+# 5 Discussion
+
+We have proposed an LSTM approach for detecting 3D objects in a sequence of LiDAR point cloud observations. Our method leverages memory and hidden state features associated with 3D points from previous object detections, which are transformed according to vehicle egomotion at each timestep. The backbone
+
+| Model | mAP@0.7IoU |
| 1 frame | 58.7 |
| 2 frames | 59.7 |
| 4 frames | 63.6 |
| 4 frames (batch size 1) | 62.3 |
| 7 frames (batch size 1) | 63.3 |
+
+Table 2. Ablation studies of our detection model on Waymo Open validation set
+
+for our LSTM is a sparse 3D convolution network that co-voxelizes the input point cloud, memory, and hidden state at each frame. Experiments on Waymo Open Dataset demonstrate that our algorithm achieves the state-of-the-art results and outperforms a single frame baseline by $7.5\%$ , a multi-frame object detection baseline by $1.2\%$ , and a multi-frame object tracking baseline by $6.8\%$ . In the future, we would like to also predict scene flow and use it to better transform the memory and hidden states in our LSTM, and we would like to study how our LSTM can be used to improve a variety of other single-frame object detectors.
+
+Memory Efficiency: Our proposed model is more memory efficient in comparison to previous temporal models that concatenate the point clouds from multiple frames [25,30]. A method that concatenates $M$ frames needs to apply the 3d network at each frame to $M$ times more number of points, while our 3d network is only applied to the points in the current frame plus a small set of features coming from last frame. Please note that our sub-sampled hidden and memory feature points (we sample 30k points in each frame out of 180k LiDAR points) are lightweight in comparison to passing full size point features from previous frames.
+
+Computation Efficiency: In comparison to the single frame model, the LSTM module adds a very small computational overhead. LSTM runs in a stream and is able to reuse the intermediate tensors that are computed in the previous time steps. The only additional overhead to per-frame computation is 3 sparse conv blocks which is a small fraction (10% of the parameters) of the single frame network that uses 15 sparse conv blocks. Note that our single frame feature extractor runs in 19ms on a Titan V GPU. Given that the lidar input arrives at 10hz, our network is still able to run in real-time within its 100ms computation budget. Therefore, it adds only a small overhead while it gains 7.5% in comparison to our single frame method. In comparison to the 4-frame concatenation baseline, the LSTM approach is more efficient. Concatenation reduces the sparsity and results in feeding a denser set of voxels to the network. We show that not only LSTM method is more efficient but also it achieves 1.2% better result.
+
+# References
+
+1. Behl, A., Hosseini Jafari, O., Karthik Mustikovela, S., Abu Alhajia, H., Rother, C., Geiger, A.: Bounding boxes, segmentations and object coordinates: How important is recognition for 3d scene flow estimation in autonomous driving scenarios? In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2574-2583 (2017)
+2. Behl, A., Paschalidou, D., Donne, S., Geiger, A.: Pointflownet: Learning representations for rigid motion estimation from point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7962-7971 (2019)
+3. Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O.: nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027 (2019)
+4. Casas, S., Luo, W., Urtasun, R.: Intentnet: Learning to predict intention from raw sensor data. In: Conference on Robot Learning. pp. 947-956 (2018)
+5. Chang, M.F., Lambert, J., Sangkloy, P., Singh, J., Bak, S., Hartnett, A., Wang, D., Carr, P., Lucey, S., Ramanan, D., et al.: Argoverse: 3d tracking and forecasting with rich maps. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8748-8757 (2019)
+6. Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3d object detection network for autonomous driving. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1907-1915 (2017)
+7. Chen, X., Yu, J., Wu, Z.: Temporally identity-aware ssd with attentional LSTM. IEEE transactions on cybernetics (2019)
+8. Chiu, H.k., Prioletti, A., Li, J., Bohg, J.: Probabilistic 3d multi-object tracking for autonomous driving. arXiv preprint arXiv:2001.05673 (2020)
+9. Choy, C., Gwak, J., Savarese, S.: 4d spatio-temporal convnets: Minkowski convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3075-3084 (2019)
+10. Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D., Brox, T.: Flownet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2758-2766 (2015)
+11. El Sallab, A., Sobh, I., Zidan, M., Zahran, M., Abdelkarim, S.: Yolo4d: A spatiotemporal approach for real-time multi-object detection and classification from lidar point clouds. In: NIPS 2018 Workshop MLITS (2018)
+12. Engelcke, M., Rao, D., Wang, D.Z., Tong, C.H., Posner, I.: Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In: 2017 IEEE International Conference on Robotics and Automation (ICRA). pp. 1355-1361. IEEE (2017)
+13. Fan, H., Yang, Y.: Pointrnn: Point recurrent neural network for moving point cloud processing. arXiv preprint arXiv:1910.08287 (2019)
+14. Feng, Y., Ma, L., Liu, W., Luo, J.: Spatio-temporal video re-localization by warp LSTM. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1288-1297 (2019)
+15. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3354-3361. IEEE (2012)
+
+16. Graham, B.: Sparse 3d convolutional neural networks. In: Xianghua Xie, Mark W. Jones, G.K.L.T. (ed.) Proceedings of the British Machine Vision Conference (BMVC). pp. 150.1-150.9. BMVA Press (September 2015). https://doi.org/10.5244/C.29.150, https://dx.doi.org/10.5244/C.29.150
+17. Graham, B., Engelcke, M., van der Maaten, L.: 3d semantic segmentation with submanifold sparse convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9224-9232 (2018)
+18. Graham, B., van der Maaten, L.: Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307 (2017)
+19. Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M.: Atlasnet: A papier-m\^ ach\`e approach to learning 3d surface generation. arXiv preprint arXiv:1802.05384 (2018)
+20. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735-1780 (1997)
+21. Hu, P., Ziglar, J., Held, D., Ramanan, D.: What you see is what you get: Exploiting visibility for 3d object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11001-11009 (2020)
+22. Huang, L., Yan, P., Li, G., Wang, Q., Lin, L.: Attention embedded spatio-temporal network for video salient object detection. IEEE Access 7, 166203-166213 (2019)
+23. Kanazawa, A., Zhang, J.Y., Felsen, P., Malik, J.: Learning 3d human dynamics from video. In: Computer Vision and Pattern Regognition (CVPR) (2019)
+24. Kang, K., Li, H., Xiao, T., Ouyang, W., Yan, J., Liu, X., Wang, X.: Object detection in videos with tubelet proposal networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 727-735 (2017)
+25. Lang, A.H., Vora, S., Caesar, H., Zhou, L., Yang, J., Beijbom, O.: Pointpillars: Fast encoders for object detection from point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12697-12705 (2019)
+26. Li, B., Zhang, T., Xia, T.: Vehicle detection from 3d lidar using fully convolutional network. arXiv preprint arXiv:1608.07916 (2016)
+27. Liang, M., Yang, B., Wang, S., Urtasun, R.: Deep continuous fusion for multi-sensor 3d object detection. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 641-656 (2018)
+28. Liu, X., Qi, C.R., Guibas, L.J.: Flownet3d: Learning scene flow in 3d point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 529-537 (2019)
+29. Liu, X., Yan, M., Bohg, J.: Meteornet: Deep learning on dynamic 3d point cloud sequences. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9246-9255 (2019)
+30. Luo, W., Yang, B., Urtasun, R.: Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 3569-3577 (2018)
+31. McCrae, S., Zakhor, A.: 3d object detection using temporal lidar data. In: IEEE International Conference on Image Processing (ICIP) (2020)
+32. Meyer, G.P., Laddha, A., Kee, E., Vallespi-Gonzalez, C., Wellington, C.K.: Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 12677-12686 (2019)
+33. Najibi, M., Lai, G., Kundu, A., Lu, Z., Rathod, V., Funkhouser, T., Pantofaru, C., Ross, D., Davis, L., Fathi, A.: Dops: Learning to detect 3d objects and predict
+
+their 3d shapes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)
+34. Ngiam, J., Caine, B., Han, W., Yang, B., Chai, Y., Sun, P., Zhou, Y., Yi, X., Alsharif, O., Nguyen, P., et al.: Starnet: Targeted computation for object detection in point clouds. arXiv preprint arXiv:1908.11069 (2019)
+35. Qi, C.R., Litany, O., He, K., Guibas, L.J.: Deep hough voting for 3d object detection in point clouds. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9277-9286 (2019)
+36. Qi, C.R., Liu, W., Wu, C., Su, H., Guibas, L.J.: Frustum pointnets for 3d object detection from rgb-d data. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 918-927 (2018)
+37. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems. pp. 5099-5108 (2017)
+38. Riegler, G., Osman Ulusoy, A., Geiger, A.: Octnet: Learning deep 3d representations at high resolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3577-3586 (2017)
+39. Shi, S., Wang, X., Li, H.: Pointcnn: 3d object proposal generation and detection from point cloud. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-779 (2019)
+40. Simon, M., Amende, K., Kraus, A., Honer, J., Samann, T., Kaulbersch, H., Milz, S., Michael Gross, H.: Complexer-yolo: Real-time 3d object detection and tracking on semantic point clouds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 0-0 (2019)
+41. Simon, M., Milz, S., Amende, K., Gross, H.M.: Complex-yolo: An euler-regionproposal for real-time 3d object detection on point clouds. In: European Conference on Computer Vision. pp. 197-209. Springer (2018)
+42. Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., et al.: Scalability in perception for autonomous driving: Waymo open dataset. arXiv pp. arXiv-1912 (2019)
+43. Teng, E., Falcão, J.D., Huang, R., Iannucci, B.: Clickbait: Click-based accelerated incremental training of convolutional neural networks. In: 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). pp. 1-12. IEEE (2018)
+44. Weng, X., Kitani, K.: A Baseline for 3D Multi-Object Tracking. arXiv:1907.03961 (2019)
+45. Xiao, F., Jae Lee, Y.: Video object detection with an aligned spatial-temporal memory. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 485-501 (2018)
+46. Xu, Z., Liu, Z., Sun, C., Murphy, K., Freeman, W.T., Tenenbaum, J.B., Wu, J.: Unsupervised discovery of parts, structure, and dynamics. arXiv preprint arXiv:1903.05136 (2019)
+47. Yang, B., Luo, W., Urtasun, R.: Pixor: Real-time 3d object detection from point clouds. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 7652-7660 (2018)
+48. Yang, Y., Feng, C., Shen, Y., Tian, D.: Foldingnet: Point cloud auto-encoder via deep grid deformation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 206-215 (2018)
+49. Yin, J., Shen, J., Guan, C., Zhou, D., Yang, R.: Lidar-based online 3d video object detection with graph-based message passing and spatiotemporal transformer attention. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2020)
+
+50. Zhao, H., Jiang, L., Fu, C.W., Jia, J.: Pointweb: Enhancing local neighborhood features for point cloud processing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5565-5573 (2019)
+51. Zhao, Y., Birdal, T., Deng, H., Tombari, F.: 3d point capsule networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1009-1018 (2019)
+52. Zhou, Y., Sun, P., Zhang, Y., Anguelov, D., Gao, J., Ouyang, T., Guo, J., Ngiam, J., Vasudevan, V.: End-to-end multi-view fusion for 3d object detection in lidar point clouds. arXiv preprint arXiv:1910.06528 (2019)
+53. Zhou, Y., Tuzel, O.: Voxelnet: End-to-end learning for point cloud based 3d object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4490-4499 (2018)
\ No newline at end of file
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/images.zip b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9b71f13629c67adf2952c1f3cb7bcb9c34c9b591
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3008f8455a4c5a6afc013feff8c4847792d4d0b3050021ad9994438207ff6f9
+size 556094
diff --git a/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/layout.json b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..17c31b7914c2de86f4653fa4cca67bfa3b94a99a
--- /dev/null
+++ b/anlstmapproachtotemporal3dobjectdetectioninlidarpointclouds/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ef83e87e08b9d6628913fe8c27c116a401b71882b8b97e566ef43f911cf6edd
+size 380038
diff --git a/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_content_list.json b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..eea5b8a02730e3d4ef7e4e76e2d3ebedd74e1bec
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c232668a93319f3ca330fdb5a024dad1a724edf11d834e47be56b0738d87cd03
+size 73789
diff --git a/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_model.json b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f3a190291495e6af1bcc27273085bb6c67b4bad4
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b7fd5c0d7034a3aed8e03c30737fc88bce80d502fbabbaeaf637685d9a2220d
+size 93257
diff --git a/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_origin.pdf b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..12b249165bcad9647f84d6c844289960dffb069c
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/e0b48eae-eb86-4c63-bb9b-b1e0cb9ab275_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d6d893fb692aeaf2a944651ac3927c52baf500c3fc9c08a05861979e75ffe75
+size 816945
diff --git a/antibanditneuralarchitecturesearchformodeldefense/full.md b/antibanditneuralarchitecturesearchformodeldefense/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1938c1b3b019bf309c4b68651cfa041958867357
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/full.md
@@ -0,0 +1,308 @@
+# Anti-Bandit Neural Architecture Search for Model Defense
+
+Hanlin Chen $^{1}$ , Baochang Zhang $^{1*}$ , Song Xue $^{1}$ , Xuan Gong $^{2*}$ , Hong Liu $^{3}$ , Rongrong Ji $^{3}$ , and David Doermann $^{2}$
+
+1 Beihang University, Beijing, China
+$^{2}$ University at Buffalo, Buffalo, USA
+$^{3}$ Xiamen University, Fujian, China, {hlchen,bczhang}@buaa.edu.cn
+
+Abstract. Deep convolutional neural networks (DCNNs) have dominated as the best performers in machine learning, but can be challenged by adversarial attacks. In this paper, we defend against adversarial attacks using neural architecture search (NAS) which is based on a comprehensive search of denoising blocks, weight-free operations, Gabor filters and convolutions. The resulting anti-bandit NAS (ABanditNAS) incorporates a new operation evaluation measure and search process based on the lower and upper confidence bounds (LCB and UCB). Unlike the conventional bandit algorithm using UCB for evaluation only, we use UCB to abandon arms for search efficiency and LCB for a fair competition between arms. Extensive experiments demonstrate that ABanditNAS is about twice as fast as the state-of-the-art NAS method, while achieving an $8.73\%$ improvement over prior arts on CIFAR-10 under PGD-7.
+
+Keywords: Neural Architecture Search (NAS), Bandit, Adversarial Defense
+
+# 1 Introduction
+
+The success of deep learning models [4] have been demonstrated on various computer vision tasks such as image classification [18], instance segmentation [25] and object detection [36]. However, existing deep models are sensitive to adversarial attacks [6, 16, 37], where adding an imperceptible perturbation to input images can cause the models to perform incorrectly. Szegedy et. al [37] also observe that these adversarial examples are transferable across multiple models such that adversarial examples generated for one model might mislead other models as well. Therefore, models deployed in the real world scenarios are susceptible to adversarial attacks [24]. While many methods have been proposed to defend against these attacks [37, 7], improving the network training process proves to be one of the most popular. These methods inject adversarial examples into the training data to retrain the network [16, 21, 1]. Similarly, pre-processing
+
+
+Fig. 1. ABanditNAS is mainly divided into two steps: sampling using LCB and abandoning based on UCB.
+
+defense methods modify adversarial inputs to resemble clean inputs [35, 22] by transforming the adversarial images into clean images before they are fed into the classifier.
+
+Overall, however, finding adversarially robust architectures using neural architecture search (NAS) shows even more promising results [7, 11, 27, 29]. NAS has attracted a great attention with remarkable performance in various deep learning tasks. In [7] the researchers investigate the dependence of adversarial robustness on the network architecture via NAS. A neural architecture search framework for adversarial medical image segmentation is proposed by [29]. [27] leverages one-shot NAS [3] to understand the influence of network architectures against adversarial attacks. Although promising performance is achieved in existing NAS based methods, this direction still remains largely unexplored.
+
+In this paper, we consider NAS for model defense by treating it as a multi-armed bandit problem and introduce a new anti-bandit algorithm into adversarially robust network architecture search. To improve the robustness to adversarial attacks, a comprehensive search space is designed by including diverse operations, such as denoising blocks, weight-free operations, Gabor filters and convolutions. However, searching a robust network architecture is more challenging than traditional NAS, due to the complicated search space, and learning inefficiency caused by adversarial training. We develop an anti-bandit algorithm based on both the upper confidence bound (UCB) and the lower confidence bound (LCB) to handle the huge and complicated search space, where the number of operations that define the space can be $9^{60}$ ! Our anti-bandit algorithm uses UCB to reduce the search space, and LCB to guarantee that every arm is fairly tested before being abandoned.
+
+Making use of the LCB, operations which have poor performance early, such as parameterized operations, will be given more chances but they are thrown
+
+away once they are confirmed to be bad. Meanwhile, weight-free operations will be compared with parameterized operations only when they are well trained. Based on the observation that the early optimal operation is not necessarily the optimal one in the end, and the worst operations in the early stage usually has a worse performance at the end [45], we exploit UCB to prune the worst operations earlier, after a fair performance evaluation via LCB. This means that the operations we finally reserve are certainly a near optimal solution. On the other hand, with the operation pruning process, the search space becomes smaller and smaller, leading to an efficient search process. Our framework shown in Fig. 1 highlights the anti-bandit NAS (ABanditNAS) for finding a robust architecture from a very complicated search space. The contributions of our paper are as follows:
+
+- ABanditNAS is developed to solve the adversarially robust optimization and architecture search in a unified framework. We introduce an anti-bandit algorithm based on a specific operation search strategy with a lower and an upper bound, which can learn a robust architecture based on a comprehensive operation space.
+- The search space is greatly reduced by our anti-bandit pruning method which abandons operations with less potential, and significantly reduces the search complexity from exponential to polynomial, i.e., $\mathcal{O}(K^{|\mathcal{E}_{\mathcal{M}}| \times v})$ to $\mathcal{O}(K^2 \times T)$ (see Section 3.4 for details).
+- Extensive experiments demonstrate that the proposed algorithm achieves better performance than other adversarially robust models on commonly used MNIST and CIFAR-10.
+
+# 2 Related Work
+
+Neural architecture search (NAS). NAS becomes one of the most promising technologies in the deep learning paradigm. Reinforcement learning (RL) based methods [47, 46] train and evaluate more than 20,000 neural networks across 500 GPUs over 4 days. The recent differentiable architecture search (DARTS) reduces the search time by formulating the task in a differentiable manner [23]. However, DARTS and its variants [23, 41] might be less efficient for a complicated search space. To speed up the search process, a one-shot strategy is introduced to do NAS within a few GPU days [23, 31]. In this one-shot architecture search, each architecture in the search space is considered as a sub-graph sampled from a super-graph, and the search process can be accelerated by parameter sharing [31]. Though [7] uses NAS with reinforcement learning to find adversarially robust architectures that achieve good results, it is insignificant compared to the search time. Those methods also seldom consider high diversity in operations closely related to model defense in the search strategy.
+
+Adversarial attacks. Recent research has shown that neural networks exhibit significant vulnerability to adversarial examples. After the discovery of adversarial examples by [37], [16] proposes the Fast Gradient Sign Method (FGSM) to generate adversarial examples with a single gradient step. Later, in [21], the
+
+researchers propose the Basic Iterative Method (BIM), which takes multiple and smaller FGSM steps to improve FGSM, but renders the adversarial training very slow. This iterative adversarial attack is further strengthened by adding multiple random restarts, and is also incorporated into the adversarial training procedure. In addition, projected gradient descent (PGD) [26] adversary attack, a variant of BIM with a uniform random noise as initialization, is recognized to be one of the most powerful first-order attacks [1]. Other popular attacks include the Carlini and Wagner Attack [6] and Momentum Iterative Attack [10]. Among them, [6] devises state-of-the-art attacks under various pixel-space $l_{p}$ norm-ball constraints by proposing multiple adversarial loss functions.
+
+Model defense. In order to resist attacks, various methods have been proposed. A category of defense methods improve network's training regime to counter adversarial attacks. The most common method is adversarial training [21, 28] with adversarial examples added to the training data. In [26], a defense method called Min-Max optimization is introduced to augment the training data with first-order attack samples. [38] investigates fast training of adversially robust models to perturb both the images and the labels during training. There are also some model defense methods that target at removing adversarial perturbation by transforming the input images before feeding them to the network [22, 1, 17]. In [12, 8], the effect of JPEG compression is investigated for removing adversarial noise. In [30], the authors apply a set of filters such as median filters and averaging filters to remove perturbation. In [42], a ME-Net method is introduced to destroy the adversarial noise and re-enforce the global structure of the original images. With the development of NAS, finding adversially robust architectures using NAS is another promising direction [7], which is worth in-depth exploration. Recently, [29] designs three types of primitive operation set in the search space to automatically find two-cell architectures for semantic image segmentation, especially medical image segmentation, leading to a NAS-Unet backbone network.
+
+In this paper, an anti-bandit algorithm is introduced into NAS, and we develop a new optimization framework to generate adversarially robust networks. Unlike [19] using bandits to produce black-box adversarial samples, we propose an anti-bandit algorithm to obtain a robust network architecture. In addition, existing NAS-based model defense methods either target at different applications from ours or are less efficient for object classification [7, 11, 27, 29].
+
+# 3 Anti-Bandit Neural Architecture Search
+
+# 3.1 Search Space
+
+Following [47, 23, 45], we search for computation cells as the building blocks of the final architecture. Different from these approaches, we search for $v$ ( $v > 2$ ) kinds of cells instead of only normal and reduction cells. Although it increases the search space, our search space reduction in ABanditNAS can make the search efficient enough. A cell is a fully-connected directed acyclic graph (DAG) of $M$
+
+
+(a)
+
+
+(b)
+
+
+(c)
+Fig. 2. (a) A cell containing four intermediate nodes $B_{1}$ , $B_{2}$ , $B_{3}$ , $B_{4}$ that apply sampled operations on the input node $B_{0}$ . $B_{0}$ is from the output of the previous cell. The output node concatenates the outputs of the four intermediate nodes. (b) Gabor Filter. (c) A generic denoising block. Following [40], it wraps the denoising operation with a $1 \times 1$ convolution and an identity skip connection [18].
+
+nodes, i.e., $\{B_1,B_2,\dots,B_M\}$ as shown in Fig. 2(a). Each node $B_{i}$ takes its dependent nodes as input, and generates an output through the selected operation $B_{j} = o^{(i,j)}(B_{i})$ . Here each node is a specific tensor (e.g., a feature map in convolutional neural networks) and each directed edge $(i,j)$ between $B_{i}$ and $B_{j}$ denotes an operation $o^{(i,j)}(.)$ , which is sampled from $\Omega^{(i,j)} = \{o_1^{(i,j)},\dots,o_K^{(i,j)}\}$ . Note that the constraint $i < j$ ensures that there are no cycles in a cell. Each cell takes the output of the previous cell as input, and we define this node belonging to the previous cell as the input node $B_{0}$ of the current cell for easy description. The set of the operations $\varOmega$ consists of $K=9$ operations. Following [23], there are seven normal operations that are the $3\times 3$ max pooling, $3\times 3$ average pooling, skip connection (identity), $3\times 3$ convolution with rate 2, $5\times 5$ convolution with rate 2, $3\times 3$ depth-wise separable convolution, and $5\times 5$ depth-wise separable convolution. The other two are $3\times 3$ Gabor filter and denoising block. Therefore, the size of the whole search space is $K^{|E_{\mathcal{M}}|\times v}$ , where $\mathcal{E}_{\mathcal{M}}$ is the set of possible edges with $M$ intermediate nodes in the fully-connected DAG. The search space of a cell is constructed by the operations of all the edges, denoted as $\{\varOmega^{(i,j)}\}$ . In our case with $M=4$ and $v=6$ , together with the input node, the total number of cell structures in the search space is $9^{(1+2+3+4)\times 6}=9^{10\times 6}$ .
+
+Gabor filter. Gabor wavelets [15, 14] were invented by Dennis Gabor using complex functions to serve as a basis for Fourier transform in information theory applications. The Gabor wavelets (kernels or filters) in Fig. 2(b) are defined as: $\exp \left(-\frac{x'^2 + \gamma^2y'^2}{2\sigma^2}\right)\cos \left(2\pi \frac{x'}{\lambda} +\psi\right)$ , where $x' = x\cos \theta +y\sin \theta$ and $y^\prime = -x\sin \theta + y\cos \theta$ . We set $\sigma ,\gamma ,\lambda ,\psi$ and $\theta$ to be learnable parameters. Note that the symbols used here apply only to the Gabor filter and are different from the symbols used in the rest of this paper. An important property of the wavelets is that the product of its standard deviations is minimized in both time and frequency domains. Also, robustness is another important property which we use here [32].
+
+Denoising block. In [40], the researchers suggest that adversarial perturbations on images result in noise in the features. Thus, a denoising block (Fig. 2(c)) is used to improve adversarial robustness via feature denoising. Similarly, we add the non-local mean denoising block [5] to the search space to denoise the features. It computes a denoised feature map $z$ from an input feature map $x$ by taking a weighted mean of the features over all spatial locations $\mathcal{L}$ as $z_{p} = \frac{1}{C(x)}\sum_{\forall q\in \mathcal{L}}f(x_{p},x_{q})\cdot x_{q}$ , where $f(x_{p},x_{q})$ is a feature-dependent weighting function and $C(x)$ is a normalization function. Note that the denoising block is computationally expensive because of the matrix multiplication between features.
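A minimal sketch of the weighted-mean formula, using an embedded dot-product weighting with softmax normalization (one common choice for $f$ and $C(x)$; the block in [5, 40] may use a different weighting). Features are scalars over a 1-D set of locations for brevity:

```python
import math

def non_local_mean(x):
    """z_p = (1/C(x)) * sum_q f(x_p, x_q) * x_q over all locations q,
    with f a dot-product weighting normalized by softmax."""
    locations = range(len(x))
    z = []
    for p in locations:
        logits = [x[p] * x[q] for q in locations]   # f(x_p, x_q)
        m = max(logits)                              # for numeric stability
        weights = [math.exp(l - m) for l in logits]
        C = sum(weights)                             # normalization C(x)
        z.append(sum(w * x[q] for q, w in zip(locations, weights)) / C)
    return z
```

On a constant feature map every location already equals the weighted mean, so the output is unchanged; noisy outliers are pulled toward similar features elsewhere.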
+
+It is known that adversarial training is more challenging than natural training [33], which adds an extra burden to NAS. For example, adversarial training using the $F$ -step PGD attack needs roughly $F + 1$ times more computation. The extra operations added to the search space are another burden. To solve these problems, we introduce an operation-space reduction based on the UCB bandit algorithm into NAS, which significantly reduces the cost in GPU hours and leads to our efficient ABanditNAS.
+
+# 3.2 Adversarial Optimization for ABanditNAS
+
+Adversarial training [26] is a method for learning networks that are robust to adversarial attacks. Given a network $f_{\theta}$ parameterized by $\theta$ , a dataset $(x_e, y_e)$ , a loss function $l$ and a threat model $\Delta$ , the learning problem is typically cast as the following optimization problem: $\min_{\theta} \sum_{e} \max_{\delta \in \Delta} l(f_{\theta}(x_e + \delta), y_e)$ , where $\delta$ is the adversarial perturbation. A typical choice for the threat model is $\Delta = \{\delta : \| \delta \|_{\infty} \leq \epsilon\}$ for some $\epsilon > 0$ , where $\| \cdot \|_{\infty}$ is the $l_{\infty}$ -norm distance metric and $\epsilon$ is the adversarial manipulation budget. This is the $l_{\infty}$ threat model used by [26] and the one we consider in this paper. The procedure for adversarial training is to use attacks to approximate the inner maximization over $\Delta$ , followed by some variation of gradient descent on the model parameters $\theta$ . For example, one of the earliest versions of adversarial training uses the Fast Gradient Sign Method (FGSM) [16] to approximate the inner maximization. This is a relatively inaccurate approximation of the inner maximization for $l_{\infty}$ perturbations, with the closed-form solution: $\delta = \epsilon \cdot \mathrm{sign}\Big(\nabla_x l(f(x), y)\Big)$ .
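The FGSM closed form above is a one-liner in practice; the sketch below applies it to a flat gradient vector, with plain Python lists standing in for tensors:

```python
def fgsm_perturbation(grad_x, eps):
    """delta = eps * sign(grad_x l(f(x), y)); eps is the manipulation budget.
    grad_x is the gradient of the loss w.r.t. the input, here a flat list."""
    sign = lambda g: (g > 0) - (g < 0)
    return [eps * sign(g) for g in grad_x]

delta = fgsm_perturbation([0.3, -2.0, 0.0], 0.1)   # -> [0.1, -0.1, 0.0]
```

Only the sign of the gradient survives, so every coordinate of $\delta$ sits on the boundary of the $l_\infty$ ball (or at zero).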
+
+A better approximation of the inner maximization is to take multiple smaller FGSM steps of size $\alpha$ instead. However, the number of gradient computations caused by the multiple steps is proportional to $\mathcal{O}(EF)$ in a single epoch, where $E$ is the size of the dataset and $F$ is the number of steps taken by the PGD adversary. This is $F$ times greater than standard training, which has $\mathcal{O}(E)$ gradient computations per epoch, so adversarial training is typically $F$ times slower. To speed up adversarial training, we combine FGSM with random initialization [39].
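A sketch of this fast variant, matching the inner loop of Algorithm 1 below; `grad_fn` is a placeholder oracle standing in for $\nabla_x l(f(x_e + \delta), y_e)$:

```python
import random

def fast_fgsm_delta(grad_fn, eps, alpha, dim):
    """FGSM from a random start [39]: delta ~ U(-eps, eps), one signed
    gradient step of size alpha, then projection back onto [-eps, eps]."""
    sign = lambda g: (g > 0) - (g < 0)
    delta = [random.uniform(-eps, eps) for _ in range(dim)]
    grad = grad_fn(delta)                        # hypothetical gradient oracle
    return [max(min(d + alpha * sign(g), eps), -eps)
            for d, g in zip(delta, grad)]
```

The random start plus a single FGSM step costs only one gradient computation per example, versus $F$ for PGD-$F$.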
+
+# 3.3 Anti-Bandit
+
+In machine learning, the multi-armed bandit problem [2, 34] is a classic reinforcement learning (RL) problem that exemplifies the exploration-exploitation trade-off dilemma: shall we stick to an arm that has given high reward so far (exploitation), or rather probe other arms further (exploration)? The Upper Confidence Bound (UCB) is widely used for dealing with this dilemma in the multi-armed bandit problem. For example, the idea of bandits is exploited to improve many classical RL methods such as Monte Carlo [20] and Q-learning [13]. The most famous example is AlphaGo [34], which uses the Monte Carlo Tree Search (MCTS) algorithm to play the board game Go, but relies on a very powerful computing platform unavailable to common researchers. Briefly, at each trial the UCB algorithm chooses the arm $k$ that maximizes
+
+$$
+\hat {r} _ {k} + \sqrt {\frac {2 \log N}{n _ {k}}}, \tag {1}
+$$
+
+where $\hat{r}_k$ is the average reward obtained from arm $k$ , and $n_k$ is the number of times arm $k$ has been played up to trial $N$ . The first term in Eq. 1 is the value term, which favors arms that have looked good historically; the second is the exploration term, which gives arms an exploration bonus that grows with $\log N$ . The total value can be interpreted as the upper bound of a confidence interval, so that with high probability the true mean reward of each arm $k$ lies below this UCB.
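A minimal sketch of the UCB rule in Eq. 1 (`rewards` holds each arm's average reward $\hat{r}_k$ and `counts` the play counts $n_k$):

```python
import math

def ucb_choose(rewards, counts):
    """Return the index of the arm maximizing r_hat_k + sqrt(2 log N / n_k)."""
    N = sum(counts)                         # total number of plays so far
    scores = [r + math.sqrt(2 * math.log(N) / n)
              for r, n in zip(rewards, counts)]
    return scores.index(max(scores))
```

With equal play counts the rule reduces to greedy selection; a rarely played arm gets a large exploration bonus and is probed first, which is exactly the exploration-exploitation trade-off described above.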
+
+The UCB bandit algorithm is not directly applicable to NAS, because it is too time-consuming to choose an arm from a huge search space (a huge number of arms, e.g., $9^{60}$ ), particularly when limited computational resources are available. To solve this problem, we introduce an anti-bandit algorithm that reduces the number of arms in the huge-armed problem by incorporating both the upper confidence bound (UCB) and the lower confidence bound (LCB) into the conventional bandit algorithm. We first define the LCB as
+
+$$
+\hat {r} _ {k} - \sqrt {\frac {2 \log N}{n _ {k}}}. \tag {2}
+$$
+
+The LCB is designed to sample an arm, from a huge number of arms, for one more trial (later in Eq. 3). A smaller LCB means that a less-played arm (with a smaller $n_k$ ) is given a bigger chance to be sampled for a trial. Unlike the conventional bandit algorithm, which chooses the arm with the maximum UCB (Eq. 1), our UCB (Eq. 6) is used to abandon the operation (arm) with the minimum value, which is why we call our algorithm anti-bandit.
+
+Our anti-bandit algorithm is specifically designed for the huge-armed bandit problem by reducing the number of arms based on the UCB. Together with the LCB, it can guarantee every arm is fairly tested before being abandoned.
+
+# 3.4 Anti-Bandit Strategy for ABanditNAS
+
+As described in [43, 45], the validation accuracy ranking of different network architectures is not a reliable indicator of the final architecture quality. However, the experimental results suggest a useful property: if an architecture performs poorly at the beginning of training, there is little hope that it can be part of the final optimal model [45]. As the training progresses, this observation
+
+Algorithm 1: ABanditNAS
+Input: Training data, validation data, searching hyper-graph, adversarial perturbation $\delta$ , adversarial manipulation budget $\epsilon$ , $K = 9$ , hyper-parameters $\alpha$ , $\lambda = 0.7$ , $T = 3$ .
+Output: The remaining optimal structure.
+1 $t = 0$ ; $c = 0$ ;
+2 Get initial performance $m_{k,0}^{(i,j)}$ ;
+3 while $(K > 1)$ do
+4 $c \gets c + 1$ ;
+5 $t \gets t + 1$ ;
+6 Calculate $s_L(o_k^{(i,j)})$ using Eq. 3;
+7 Calculate $p(o_k^{(i,j)})$ using Eq. 4;
+8 Select an architecture by sampling one operation based on $p(o_k^{(i,j)})$ from $\Omega^{(i,j)}$ for every edge;
+9 // Adversarially train the selected architecture
+10 for $e = 1,\dots,E$ do
+11 $\delta = \mathrm{Uniform}(-\epsilon, \epsilon)$ ;
+12 $\delta \gets \delta + \alpha \cdot \mathrm{sign}\left(\nabla_x l(f(x_e + \delta), y_e)\right)$ ;
+13 $\delta = \max\left(\min(\delta, \epsilon), -\epsilon\right)$ ;
+14 $\theta \gets \theta - \nabla_\theta l(f_\theta(x_e + \delta), y_e)$ ;
+15 end
+16 Get the accuracy $a$ on the validation data;
+17 Update the performance $m_{k,t}^{(i,j)}$ using Eq. 5;
+18 if $c = K * T$ then
+19 Calculate $s_U(o_k^{(i,j)})$ using Eq. 6;
+20 Update the search space $\{\Omega^{(i,j)}\}$ using Eq. 7;
+21 $c = 0$ ;
+22 $K \gets K - 1$ ;
+23 end
+24 end
+
+is more and more certain. Based on this observation, we derive a simple yet effective operation-abandoning method. During training, as the epochs increase, we progressively abandon the worst-performing operation for each edge. Unlike [45], which just uses the performance as the evaluation metric to decide which operation should be pruned, we use the anti-bandit algorithm described next to make this decision.
+
+Following the UCB bandit algorithm, we obtain the initial performance for each operation of every edge. Specifically, we sample one of the $K$ operations in $\Omega^{(i,j)}$ for every edge, obtain the validation accuracy $a$ , which serves as the initial performance $m_{k,0}^{(i,j)}$ , by adversarially training the sampled network for one epoch, and finally assign this accuracy to all the sampled operations.
+
+Considering the confidence of the $k$ th operation of every edge, the LCB is calculated by
+
+$$
+s _ {L} \left(o _ {k} ^ {(i, j)}\right) = m _ {k, t} ^ {(i, j)} - \sqrt {\frac {2 \log N}{n _ {k , t} ^ {(i , j)}}}, \tag {3}
+$$
+
+where $N$ is the total number of samples, $n_{k,t}^{(i,j)}$ is the number of times the $k$ th operation of edge $(i,j)$ has been selected, and $t$ is the epoch index. The first term in Eq. 3 is the value term, which favors operations that have looked good historically; the second is the exploration term, which gives operations an exploration bonus that grows with $\log N$ . The selection probability for each operation is defined as
+
+$$
+p \left(o _ {k} ^ {(i, j)}\right) = \frac {\exp \left\{- s _ {L} \left(o _ {k} ^ {(i , j)}\right) \right\}}{\sum_ {m} \exp \left\{- s _ {L} \left(o _ {m} ^ {(i , j)}\right) \right\}}. \tag {4}
+$$
+
+The minus sign in Eq. 4 means that we prefer to sample operations with a smaller confidence. After sampling one operation for every edge based on $p(o_k^{(i,j)})$ , we obtain the validation accuracy $a$ by adversarially training the sampled network for one epoch, and then update the performance $m_{k,t}^{(i,j)}$ , which historically indicates the validation accuracy of each sampled operation $o_k^{(i,j)}$ , as
+
+$$
+m _ {k, t} ^ {(i, j)} = (1 - \lambda) m _ {k, t - 1} ^ {(i, j)} + \lambda * a, \tag {5}
+$$
+
+where $\lambda$ is a hyper-parameter.
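A compact sketch of Eqs. 3-5: the softmax over the negative LCB used for sampling, and the moving-average performance update:

```python
import math

def sampling_probs(m, n, N):
    """Eqs. 3-4: softmax over the negative LCB, so less-tried or
    lower-scoring operations get a larger sampling probability."""
    lcb = [mi - math.sqrt(2 * math.log(N) / ni) for mi, ni in zip(m, n)]
    exps = [math.exp(-s) for s in lcb]
    total = sum(exps)
    return [e / total for e in exps]

def update_perf(m_prev, a, lam=0.7):
    """Eq. 5: exponential moving average of the validation accuracy a."""
    return (1 - lam) * m_prev + lam * a
```

For two operations with identical histories $m$ but different counts $n$, the less-played one has the smaller LCB and is therefore sampled more often, which is the fairness property the anti-bandit relies on before pruning.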
+
+Finally, after $K * T$ samples where $T$ is a hyper-parameter, we calculate the confidence with the UCB according to Eq. 1 as
+
+$$
+s _ {U} \left(o _ {k} ^ {(i, j)}\right) = m _ {k, t} ^ {(i, j)} + \sqrt {\frac {2 \log N}{n _ {k , t} ^ {(i , j)}}}. \tag {6}
+$$
+
+The operation with the minimal UCB for every edge is abandoned. This means that operations that were given more opportunities but resulted in poor performance are removed. With this pruning strategy, the search space is significantly reduced from $|\Omega^{(i,j)}|^{10 \times 6}$ to $(|\Omega^{(i,j)}| - 1)^{10 \times 6}$ , and the reduced space becomes
+
+$$
+\Omega^ {(i, j)} \leftarrow \Omega^ {(i, j)} - \left\{\underset {o _ {k} ^ {(i, j)}} {\arg \min } s _ {U} \left(o _ {k} ^ {(i, j)}\right) \right\}, \forall (i, j). \tag {7}
+$$
+
+The reduction procedure is repeated until the optimal structure is obtained, i.e., only one operation is left on each edge. Our anti-bandit search algorithm is summarized in Algorithm 1.
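The abandon step (Eqs. 6-7) in sketch form; `ops` stands for the operation set $\Omega^{(i,j)}$ of one edge:

```python
import math

def abandon_worst(ops, m, n, N):
    """Drop the operation with the minimal UCB m_k + sqrt(2 log N / n_k),
    as in Eqs. 6-7 (anti-bandit pruning of one edge's operation set)."""
    ucb = [mi + math.sqrt(2 * math.log(N) / ni) for mi, ni in zip(m, n)]
    worst = ucb.index(min(ucb))
    return [o for i, o in enumerate(ops) if i != worst]
```

Because the UCB adds the exploration bonus, an operation is only dropped when it keeps performing poorly despite having been tried enough times.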
+
+Complexity Analysis. There are $\mathcal{O}(K^{|\mathcal{E}_{\mathcal{M}}| \times v})$ combinations in the process of finding the optimal architecture in the search space with $v$ kinds of cells. In contrast, ABanditNAS reduces the search space every $K * T$ epochs. Therefore, the complexity of the proposed method is
+
+$$
+\mathcal {O} (T \times \sum_ {k = 2} ^ {K} k) = \mathcal {O} \left(T K ^ {2}\right). \tag {8}
+$$
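With the paper's settings ($K = 9$, $T = 3$), the schedule behind Eq. 8 amounts to a concrete epoch budget:

```python
# Total search epochs: each reduction round runs K*T epochs while K shrinks
# from 9 down to 2, i.e., T * sum_{k=2}^{K} k (cf. Eq. 8 and Algorithm 1).
T, K = 3, 9
total_epochs = T * sum(range(2, K + 1))   # 3 * (2+3+...+9) = 3 * 44 = 132
print(total_epochs)                        # 132
```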
+
+# 4 Experiments
+
+We demonstrate the robustness of our ABanditNAS on two benchmark datasets (MNIST and CIFAR-10) for the image classification task, and compare ABanditNAS with state-of-the-art robust models.
+
+# 4.1 Experiment Protocol
+
+In our experiments, we search architectures on an over-parameterized network on MNIST and CIFAR-10, and then evaluate the best architecture on corresponding datasets. Unlike previous NAS works [23, 41, 31], we learn six kinds of cells, instead of two, to increase the diversity of the network.
+
+Search and Training Settings. In the search process, the over-parameterized network is constructed with six cells, where the $2^{nd}$ and $4^{th}$ cells are used to double the channels of the feature maps and halve their height and width, respectively. There are $M = 4$ intermediate nodes in each cell. The hyperparameter $T$ , which denotes the number of sampling rounds, is set to 3, so the total number of search epochs is $\sum_{k=2}^{K} k * T$ . The hyperparameter $\lambda$ is set to 0.7. The evaluation of the hyperparameters is provided in the supplementary file. A large batch size of 512 is used, and we use the additional regularization cutout [9] for CIFAR-10. The initial number of channels is 16. We employ FGSM adversarial training combined with random initialization, with $\epsilon = 0.3$ for MNIST and $\epsilon = 0.031$ for CIFAR-10. We use SGD with momentum to optimize the network weights, with an initial learning rate of 0.025 for MNIST and 0.1 for CIFAR-10 (annealed down to zero following a cosine schedule), a momentum of 0.9 and a weight decay of $3 \times 10^{-4}$ for both MNIST and CIFAR-10.
+
+After search, the six cells are stacked to build the final networks. To adversarially train them, we employ FGSM combined with random initialization and $\epsilon = 0.3$ on MNIST, and use PGD-7 with $\epsilon = 0.031$ and a step size of 0.0078 on CIFAR-10. In the following, ABanditNAS- $V$ denotes ABanditNAS with $V$ cells in the training process; $V$ can differ from $v$ . The initial number of channels is 16 for MNIST and 48 for CIFAR-10. We use a batch size of 96 and the additional regularization cutout [9] for CIFAR-10. We employ the SGD optimizer with an initial learning rate of 0.025 for MNIST and 0.1 for CIFAR-10 (annealed down to zero following a cosine schedule without restart), a momentum of 0.9, a weight decay of $3\times 10^{-4}$ , and gradient clipping at 5. We train for 150 epochs on both MNIST and CIFAR-10.
+
+White-Box vs. Black-Box Attack Settings. In an adversarial setting, there are two main threat models: white-box attacks where the adversary possesses complete knowledge of the target model, including its parameters, architecture and the training method, and black-box attacks where the adversary feeds perturbed images at test time, which are generated without any knowledge of the target model, and observes the output. We evaluate the robustness of our proposed defense against both settings. The perturbation size $\epsilon$ and step size are the same as those in the adversarial training for both the white-box and black-box attacks. The numbers of iterations for MI-FGSM and BIM are both
+
+| Architecture | Clean (%) | FGSM (%) | PGD-40 (%) | PGD-100 (%) | # Params (M) | Search Cost (GPU days) | Search Method |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LeNet [26] | 98.8 | 95.6 | 93.2 | 91.8 | 3.27 | - | Manual |
+| LeNet (Prep. + Adv. train [42]) | 97.4 | - | 94.0 | 91.8 | 0.06147 | - | Manual |
+| UCBNAS | 99.5 | 98.67 | 96.94 | 95.4 | 0.082 | 0.13 | Bandit |
+| UCBNAS (pruning) | 99.52 | 98.56 | 96.62 | 94.96 | 0.066 | 0.08 | Bandit |
+| ABanditNAS-6 | 99.52 | 98.94 | 97.01 | 95.7 | 0.089 | 0.08 | Anti-Bandit |
+
+Table 1. Robustness of ABanditNAS under FGSM and PGD attacks on MNIST.
+
+| Structure | Clean | White-Box MI-FGSM | White-Box BIM | White-Box PGD | Black-Box MI-FGSM | Black-Box BIM | Black-Box PGD |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| *MNIST (ε = 0.3)* | | | | | | | |
+| LeNet [26] (copy) | 98.8 | - | - | 93.2 | - | - | 96.0 |
+| ABanditNAS-6 (copy) | 99.52 | 97.41 | 97.63 | 97.58 | 99.09 | 99.12 | 99.02 |
+| *CIFAR-10 (ε = 0.031)* | | | | | | | |
+| Wide-ResNet [26] (copy) | 87.3 | - | - | 50.0 | - | - | 64.2 |
+| NASNet [7] (copy) | 93.2 | - | - | 50.1 | - | - | 75.0 |
+| ABanditNAS-6 (copy) | 87.16 | 48.77 | 47.59 | 50.0 | 74.94 | 75.78 | 76.13 |
+| ABanditNAS-6 (ResNet-18) | 87.16 | 48.77 | 47.59 | 50.0 | 77.06 | 77.63 | 78.0 |
+| ABanditNAS-10 (ResNet-18) | 90.64 | 54.19 | 55.31 | 58.74 | 80.25 | 80.8 | 81.26 |
+
+Table 2. Robustness of our model in the white-box and black-box settings on MNIST and CIFAR-10. Here $\epsilon$ is the perturbation size. PGD means PGD-40 for MNIST and PGD-7 for CIFAR-10. 'copy' means we use a copied network to generate black-box adversarial examples, and 'ResNet-18' means using ResNet-18 to generate black-box adversarial examples.
+
+set to 10, with a step size and a standard perturbation size the same as those in the white-box attacks. We evaluate ABanditNAS against transfer-based attacks, where a copy of the victim network is trained with the same training settings. We apply attacks similar to the white-box attacks on the copied network to generate black-box adversarial examples. We also generate adversarial samples using a ResNet-18 model and feed them to the model obtained by ABanditNAS.
+
+# 4.2 Results on Different Datasets
+
+MNIST. Owing to the search space reduction by the anti-bandit algorithm, the entire search process requires only 1.93 hours on a single NVIDIA Titan V GPU. For MNIST, the structure searched by ABanditNAS is directly used for training. We evaluate the trained network under 40 and 100 attack steps, and compare our method with LeNet [26] and MeNet [42] in Table 1. From these results, we can see that ABanditNAS using FGSM adversarial training with random initialization is more robust than LeNet with PGD-40 adversarial training, no matter which attack is used. Although MeNet uses matrix estimation (ME) as preprocessing to destroy the adversarial structure of the noise, our method still performs better. In addition, our method achieves the best performance $(99.52\%)$ on clean images while maintaining strong robustness. For the black-box attacks, Table 2 shows that
+
+| Architecture | Clean (%) | MI-FGSM (%) | PGD-7 (%) | PGD-20 (%) | # Params (M) | Search Cost (GPU days) | Search Method |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| VGG-16 [44] | 85.16 | - | 46.04 (PGD-10) | - | - | - | Manual |
+| ResNet [26] | 79.4 | - | 47.1 | 43.7 | 0.46 | - | Manual |
+| Wide-ResNet [26] | 87.3 | - | 50.0 | 45.8 | 45.9 | - | Manual |
+| NASNet [7] | 93.2 | - | 50.1 | - | - | ~7×2000 | RL |
+| UCBNAS (pruning) | 89.54 | 53.12 | 54.55 | 45.33 | 8.514 | 0.08 | Bandit |
+| ABanditNAS-6 | 87.16 | 48.77 | 50.0 | 45.9 | 2.892 | 0.08 | Anti-Bandit |
+| ABanditNAS-6 (larger) | 87.31 | 52.01 | 51.24 | 45.79 | 12.467 | 0.08 | Anti-Bandit |
+| ABanditNAS-10 | 90.64 | 54.19 | 58.74 | 50.51 | 5.188 | 0.08 | Anti-Bandit |
+
+Table 3. Validation accuracy and robustness of various models trained on CIFAR-10. Note that the search cost of NASNet which is unknown is estimated based on [7]. 'PGD-10' means the result of VGG-16 is under PGD-10 attack which comes from [44].
+
+they barely affect the structures searched by ABanditNAS compared with other models, either manually designed or searched by NAS. As illustrated in Fig. 3(a), with the increase of the perturbation size, our network's performance does not drop significantly, showing the robustness of our method.
+
+Fig. 3. Robustness of ABanditNAS against different white-box attacks for various perturbation budgets: (a) MNIST; (b) CIFAR-10.
+
+We also apply the conventional bandit algorithm, which samples operations based on UCB, to search the network, leading to UCBNAS. The main difference between UCBNAS and ABanditNAS is that UCBNAS only uses UCB as an evaluation measure to select an operation, with no operation pruning involved. Compared with UCBNAS, ABanditNAS achieves better performance and uses less search time under adversarial attacks, as shown in Table 1. Also, to further demonstrate the effectiveness of ABanditNAS, we use UCBNAS with pruning to search for a robust model, which not only uses UCB to select an operation,
+
+Fig. 4. Detailed structures of the best cells discovered on CIFAR-10 using FGSM with random initialization: (a) first cell; (b) second cell; (c) third cell; (d) fourth cell; (e) fifth cell; (f) sixth cell.
+
+but also prunes operations with less potential. Although UCBNAS (pruning) is as fast as ABanditNAS, it performs worse than ABanditNAS because of unfair competition between operations before pruning.
+
+CIFAR-10. The results for different architectures on CIFAR-10 are summarized in Table 3. We use one Titan V GPU to search, with a batch size of 512. The entire search process takes about 1.94 hours. We consider $V = 6$ and $V = 10$ cells for training. In addition, we also train a larger network variant with 100 initial channels for $V = 6$ . Compared with Wide-ResNet, ABanditNAS-10 achieves not only better performance under PGD-7 (58.74% vs. 50.0%), but also fewer parameters (5.188M vs. 45.9M). Although the result of VGG-16 is under PGD-10, ABanditNAS-10 achieves better performance under the stronger PGD-20 attack (50.51% vs. 46.04%). When compared with NASNet, which has better performance on clean images, our method obtains better performance on adversarial examples with a much faster search speed ( $\sim 7 \times 2000$ vs. 0.08 GPU days). Note that the results in Table 3 are the best we obtained; the search is not fully stable and may need multiple trials. Table 2 shows that the black-box attacks barely affect the networks obtained by ABanditNAS, much less than those of other methods. In addition, Fig. 3(b) illustrates that ABanditNAS remains robust as the perturbation increases.
+
+For the structures searched by ABanditNAS on CIFAR-10, we find that the robust structure prefers pooling operations, Gabor filters and denoising blocks (Fig. 4). The reasons are that pooling can enhance the nonlinear modeling capability, Gabor filters can extract robust features, and the denoising block and average pooling act as smoothing filters for denoising. Gabor filters and denoising blocks are usually placed by ABanditNAS at the front of a cell, to denoise the features encoded by the previous cell. This placement is consistent with [40], which demonstrates the rationality of ABanditNAS.
+
+# 4.3 Ablation Study
+
+We use the performance of the structures searched by ABanditNAS with different values of $\lambda$ to find the best $\lambda$ . We train the structures under the same settings.
+
+Effect of the hyperparameter $\lambda$ : The hyperparameter $\lambda$ balances the performance between the past and the current epoch. Different values of $\lambda$ result in similar search costs. From Fig. 5, we can see that ABanditNAS is most robust when $\lambda = 0.7$ .
+
+
+Fig. 5. The performances of the structures searched by ABanditNAS with different values of the hyperparameters $T$ and $\lambda$ .
+
+# 5 Conclusion
+
+We have proposed ABanditNAS, an approach to designing robust structures that defend against adversarial attacks. To solve the challenging search problem caused by the huge, complicated search space and the adversarial training process, we have introduced an anti-bandit algorithm to improve the search efficiency. We have investigated the relationship between our strategy and potential operations based on both lower and upper confidence bounds. Extensive experiments have demonstrated that the proposed ABanditNAS is much faster than other state-of-the-art NAS methods while achieving better accuracy. Under adversarial attacks, ABanditNAS achieves much better performance than other methods.
+
+# Acknowledgement
+
+Baochang Zhang is also with the Shenzhen Academy of Aerospace Technology, Shenzhen, China, and is the corresponding author. This work was supported in part by the National Natural Science Foundation of China under Grant 61672079 and the Shenzhen Science and Technology Program (No. KQTD2016112515134654).
+
+# References
+
+1. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: ICML (2018)
+2. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine learning (2002)
+3. Bender, G., Kindermans, P.J., Zoph, B., Vasudevan, V., Le, Q.V.: Understanding and simplifying one-shot architecture search. In: ICML (2018)
+4. Bengio, Y., Goodfellow, I., Courville, A.: Deep learning. CiteSeer (2017)
+5. Buades, A., Coll, B., Morel, J.: A non-local algorithm for image denoising. In: CVPR (2005)
+6. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (2017)
+7. Cubuk, E.D., Zoph, B., Schoenholz, S.S., Le, Q.V.: Intriguing properties of adversarial examples. In: ICLR (2017)
+8. Das, N., Shanbhogue, M., Chen, S., Hohman, F., Chen, L., Kounavis, M.E., Chau, D.H.: Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv (2017)
+9. DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with cutout. arXiv (2017)
+10. Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: CVPR (2018)
+11. Vargas, D.V., Kotyan, S.: Evolving robust neural architectures to defend from adversarial attacks. arXiv (2019)
+12. Dziugaite, G.K., Ghahramani, Z., Roy, D.M.: A study of the effect of jpg compression on adversarial images. arXiv (2016)
+13. Even-Dar, E., Mannor, S., Mansour, Y.: Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research (2006)
+14. Gabor, D.: Theory of communication. Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering (1946)
+15. Gabor, D.: Theory of communication. Part 1: The analysis of information. Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering (1946)
+16. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv (2014)
+17. Gupta, P., Rahtu, E.: CIIDefence: Defeating adversarial attacks by fusing class-specific image inpainting and image denoising. In: ICCV (2019)
+18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+19. Ilyas, A., Engstrom, L., Madry, A.: Prior convictions: Black-box adversarial attacks with bandits and priors. In: ICLR (2018)
+20. Kocsis, L., Szepesvari, C.: Bandit based monte-carlo planning. Proceedings of the 17th European conference on Machine Learning (2006)
+21. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. In: ICLR (2016)
+22. Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.: Defense against adversarial attacks using high-level representation guided denoiser. In: CVPR (2018)
+
+23. Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. In: ICLR (2018)
+24. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks. In: ICLR (2016)
+25. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR (2015)
+26. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2017)
+27. Guo, M., Yang, Y., Xu, R., Liu, Z.: When NAS meets robustness: In search of robust architectures against adversarial attacks. In: CVPR (2020)
+28. Na, T., Ko, J.H., Mukhopadhyay, S.: Cascade adversarial machine learning regularized with a unified embedding. In: ICLR (2017)
+29. Dong, N., Xu, M., Liang, X., Jiang, Y., Dai, W., Xing, E.: Neural architecture search for adversarial medical image segmentation. In: MICCAI (2019)
+30. Osadchy, M., Hernandez-Castro, J., Gibson, S., Dunkelman, O., Pérez-Cabo, D.: No bot expects the deepcaptcha! introducing immutable adversarial examples, with applications to captcha generation. IEEE Transactions on Information Forensics and Security (2017)
+31. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. In: ICML (2018)
+32. Pérez, J.C., Alfarra, M., Jeanneret, G., Bibi, A., Thabet, A.K., Ghanem, B., Arbeláez, P.: Robust gabor networks. arXiv (2019)
+33. Shafahi, A., Najibi, M., Ghiasi, A., Xu, Z., Dickerson, J., Studer, C., Davis, L.S., Taylor, G., Goldstein, T.: Adversarial training for free! In: NIPS (2019)
+34. Silver, D., Schrittwieser, J., Simonyan, K., et al.: Mastering the game of go without human knowledge. Nature (2017)
+35. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In: ICLR (2018)
+36. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
+37. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: ICLR (2013)
+38. Wang, J., Zhang, H.: Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks. In: ICCV (2019)
+39. Wong, E., Rice, L., Kolter, J.Z.: Fast is better than free: Revisiting adversarial training. In: ICLR (2020)
+40. Xie, C., Wu, Y., Maaten, L.V.D., Yuille, A.L., He, K.: Feature denoising for improving adversarial robustness. In: CVPR (2019)
+41. Xu, Y., Xie, L., Zhang, X., Chen, X., Qi, G., Tian, Q., Xiong, H.: Pc-darts: Partial channel connections for memory-efficient differentiable architecture search. In: ICLR (2019)
+42. Yang, Y., Zhang, G., Katabi, D., Xu, Z.: Me-net: Towards effective adversarial robustness with matrix estimation. In: ICML (2019)
+43. Ying, C., Klein, A., Real, E., Christiansen, E., Murphy, K., Hutter, F.: Nas-bench-101: Towards reproducible neural architecture search. In: ICML (2019)
+44. Zhang, C., Liu, A., Liu, X., Xu, Y., Yu, H., Ma, Y., Li, T.: Interpreting and improving adversarial robustness with neuron sensitivity. arXiv (2019)
+45. Zheng, X., Ji, R., Tang, L., Wan, Y., Zhang, B., Wu, Y., Wu, Y., Shao, L.: Dynamic distribution pruning for efficient network architecture search. arXiv (2019)
+46. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2016)
+
+47. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: CVPR (2018)
\ No newline at end of file
diff --git a/antibanditneuralarchitecturesearchformodeldefense/images.zip b/antibanditneuralarchitecturesearchformodeldefense/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..df930a17b499aa16c73a392e51d92c76e6e8802c
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:931e13000efe2265eea916c4afa5dee473da402ccccc6c6581903442c1c14613
+size 336895
diff --git a/antibanditneuralarchitecturesearchformodeldefense/layout.json b/antibanditneuralarchitecturesearchformodeldefense/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2ab506a291a17f5855f71b4d04f501166e4b6a01
--- /dev/null
+++ b/antibanditneuralarchitecturesearchformodeldefense/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54e1bd7323d8a3e0f5ef796ee00bfa8f9d425b2e109bf4cd8a1af12a4bd09150
+size 471978
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_content_list.json b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9cf7a4d87c67893e0cfbb9369dbbad78400048e3
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efb076922422741bcec2583e956ed7efbf8356ef8d96c4ce87e6342db193577a
+size 82861
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_model.json b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c7b19f095f4de98fd0c5b141b104ffb0413efbee
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8ad65e39e675e81d4794c0e68aa445bcd09b5ca1c20b2b496614203ad410845
+size 107885
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_origin.pdf b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9846f4048a0b7345d7177254cc79d59769c29609
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/e9d16da2-1447-487e-857a-d5bdecb61250_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53164428db4b530f04b9d119faf38606eba1b3597e5df6adc5cf9b35dc53c6d5
+size 11526202
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/full.md b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c407adfa011b8d9eae5229195e20efdfb05b7c2
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/full.md
@@ -0,0 +1,285 @@
+# Appearance Consensus Driven Self-Supervised Human Mesh Recovery
+
+Jogendra Nath Kundu $^{1*}$ , Mugalodi Rakesh $^{1*}$ , Varun Jampani $^{2}$ , Rahul Mysore Venkatesh $^{1}$ , and R. Venkatesh Babu $^{1}$
+
+$^{1}$ Indian Institute of Science, Bangalore $^{2}$ Google Research
+
+Abstract. We present a self-supervised human mesh recovery framework to infer human pose and shape from monocular images in the absence of any paired supervision. Recent advances have shifted the interest towards directly regressing parameters of a parametric human model by supervising them on large-scale datasets with 2D landmark annotations. This limits the generalizability of such approaches to operate on images from unlabeled wild environments. Acknowledging this, we propose a novel appearance consensus driven self-supervised objective. To effectively disentangle the foreground (FG) human, we rely on image pairs depicting the same person (consistent FG) in varied poses and backgrounds (BG), obtained from unlabeled wild videos. The proposed FG appearance consistency objective makes use of a novel, differentiable Color-recovery module to obtain vertex colors without the need for any appearance network, via an efficient realization of color-picking and reflectional symmetry. We achieve state-of-the-art results on the standard model-based 3D pose estimation benchmarks at comparable supervision levels. Furthermore, the resulting colored mesh prediction opens up the usage of our framework for a variety of appearance-related tasks beyond pose and shape estimation, thus establishing our superior generalizability.
+
+# 1 Introduction
+
+Inferring highly deformable 3D human pose and shape from in-the-wild monocular images has been a longstanding goal in the vision community [12]. It is considered a key step for a wide range of downstream applications such as robot interaction, rehabilitation guidance, the animation industry, etc. Being one of the important subtasks, human pose estimation has gained considerable performance improvements in recent years [61,45,57], but in a fully-supervised setting. Such approaches heavily rely on large-scale 2D or 3D pose annotations. Following this, parametric models of the human body, such as SCAPE [3], SMPL [40], and SMPL(-X) [49,58], lead the way for full 3D pose and shape estimation. Additionally, to suppress the inherent 2D-to-3D ambiguity, researchers have also utilized auxiliary cues of supervision such as temporal consistency [4,62], multi-view image pairs [53,20,14], or even alternate sensor data from Kinect [67] or IMUs [44].
+
+
+Fig. 1. Our framework disentangles the co-salient FG human from input image pairs. The resulting colored mesh prediction opens up its usage for a variety of tasks.
+
+
+
+However, estimating 3D human pose and shape from a single RGB image without relying on any direct supervision remains a very challenging problem.
+
+Early approaches [5,8,35] adopt iterative optimization techniques to fit a parametric human model (e.g. SMPL) to a given image observation. These works attempt to iteratively estimate the body pose and shape that best describe the available 2D observation, which is most often a set of 2D landmark annotations. Though these works usually obtain good body fits, such approaches are slow and heavily rely on 2D landmark annotations [2,18,28] or on predictions of an off-the-shelf, fully-supervised image-to-2D-pose network. However, recent advances in deep learning have shifted the interest towards data-driven regression-based methods [21,64], where a deep network directly regresses parameters of the human model for a given input image [48,51,69] in a single-shot computation. This is a promising direction, as the network can utilize the full image information instead of just the sparse landmarks to estimate human body shape and pose. In the absence of datasets having images with 3D pose and shape ground-truth (GT), several recent works leverage a variety of available paired 2D annotations [50,63], such as 2D landmarks or silhouettes [51], alongside unpaired 3D pose samples to instill 3D pose priors [21] (i.e. to assure recovery of valid 3D poses). The strong reliance on paired 2D keypoint ground-truth limits the generalization of such approaches when applied to images from an unseen wild environment. Given the transient nature of human fashion, the visual appearance of human attire keeps evolving. This demands that such approaches periodically update their 2D pose dataset in order to retain their functionality.
+
+In this work, the overarching objective is to move away from any kind of paired pose-related supervision for superior generalizability. Our aim is to explore a form of self-supervised objective which can learn both pose and shape from monocular images without accessing any paired GT annotations. We draw motivation from works [46,56,41,32] that aim to disentangle the fundamental factors of variation from a given image. For human-centric images [33], these factors could be: a) pose, b) foreground (FG) appearance, and c) background (BG) appearance. Here, we leverage the full advantage of incorporating a parametric human model in our framework. Note that this parametric model not only encapsulates the pose but also segregates the FG region from the BG, which is enabled by projecting the 3D mesh onto the image plane. Thus, the problem boils down to a faithful registration of the 3D mesh onto the image plane, or, in other words, disentanglement of FG from BG. To achieve this disentanglement, we rely on image pairs depicting consistent FG appearance but varied 3D poses. Such image pairs can be obtained from videos depicting actions of a single person, which are abundantly available on the internet. Our idea stems from the concept of co-saliency detection [70,13], where the objective is to segment out the common, salient FG from a set of two or more images. Surprisingly, this idea works best for image pairs sampled from wild videos as compared to videos captured in a constrained in-studio setup (static homogeneous background). This is because in wild scenarios, the commonness of FG is distinctly salient against relatively diverse BGs as a result of substantial camera movements (see Fig. 1B). Thus, in contrast to prior self-supervised approaches that either rely on videos with static BG [54] or operate under the assumption of BG commonness between temporally close frames [16], our approach is better suited to learning from wild videos and hence more generalizable.
+
+In the proposed framework, we first employ a CNN regressor to obtain the parameters (both pose and shape) of the SMPL model for a given input image. The human mesh model uses these parameters to output the mesh vertex locations. In contrast to the general trend [1,22], we propose a novel way of inferring mesh texture where the network's burden to regress vertex color or any sort of appearance representation (such as a UV map) is entirely taken away. This is realized via a differentiable Color-recovery module which aims to assign color to the mesh vertices via spatial registration of the mesh over the image plane, while effectively accounting for the challenges of mesh-vertex visibility such as self-occlusion and inter-part occlusion. To obtain a fully-colored mesh, we use a predefined, 4-way symmetry grouping knowledge (front-back and left-right) to propagate the color from camera-visible vertices to the non-visible ones in a fully differentiable fashion.
+
+For a given image pair, we pass them through two parallel pathways of our colored mesh prediction framework (see Fig. 1A). The commonness of FG appearance allows us to impose an appearance consistency loss between the predicted mesh representations. In the absence of any paired supervision, this appearance consistency not only helps us to segregate the common FG human from their respective wild BGs but also discovers the required pose deformation in a fully self-supervised manner. The proposed reflectional symmetry module brings in a substantial advantage in our self-supervised framework by allowing us to impose appearance consistency even between body parts which are "commonly invisible" in both the images. Recognizing the unreliability of consistent raw color intensities, which can easily be violated as a result of illumination changes, we propose a part-prototype consistency objective. This aims to match a higher-level appearance representation beyond the raw color intensities, which is enabled by operating the Color-recovery module on convolutional feature maps instead of the raw image. Additionally, to regularize the self-supervised framework, we also impose a shape consistency loss alongside the imposition of a 3D pose prior learned from a set of unpaired MoCap samples. Note that, at test time, we perform single-image inference to estimate 3D human pose and shape.
+
+Table 1. Characteristic comparison against prior-arts.
+
+| Model-based methods | 2D keypoint supervision | Temporal supervision | Colored mesh prediction |
+| --- | --- | --- | --- |
+| [21,26,27,51,48] | Yes | No | No |
+| [62,4,23] | Yes | Yes | No |
+| Ours (self-sup.) | No | No | Yes |
+
+In summary, we make the following main contributions:
+
+- We propose a self-supervised learning technique to perform simultaneous pose and shape estimation which uses image pairs sampled from in-the-wild videos in the absence of any paired supervision.
+- The proposed Color-recovery module completely eliminates the network's burden to regress any appearance-related representation via efficient realization of color-picking and reflectional symmetry. This best suits our self-supervised framework which relies on FG appearance consistency.
+- We demonstrate generalizability of our framework to operate on unseen wild datasets. We achieve state-of-the-art results against the prior model-based pose estimation approaches when tested at comparable supervision levels.
+
+# 2 Related Work
+
+Vertex-color reconstruction. In the literature, we find different ways to infer a textured 3D mesh from a monocular RGB image. Certain approaches [34,60] train a deep network to directly regress 3D features (RGB colors) for individual vertices. In the second kind, a fully convolutional deep network is trained to map the location of each pixel to the corresponding continuous UV-map coordinate parameterization [1]. In the third kind, the deep model is trained to directly regress the UV-image [22]. Note that the spatial structure of the UV image is much different from that of the input image, which prevents employing a fully-convolutional network for the same. The recently proposed Soft-Rasterizer [38] uses a color-selection and color-sampling network whose outputs are processed to obtain the final vertex colors. All the above approaches adopt a learnable way to obtain the mesh color (i.e. obtained as a neural output). In such cases, the deep network requires substantial training iterations to instill the knowledge of pre-defined UV mapping conventions. We believe this is an additional burden for the network, specifically in the absence of any auxiliary paired supervision.
+
+Model-based human mesh estimation. Recently, parametric human models [3,40] have been used as the output target for the simultaneous pose and shape estimation task. Such a well-defined mesh model with ordered vertices provides a direct mapping to the corresponding 3D pose and part segments. Both optimization [5,35,68] and regression [21,48,51,69] based approaches estimate the body pose and shape that best describe the available 2D observations such as 2D keypoints [21], silhouettes [51], body/part segmentation [48], etc. Due to the lack of datasets having wild images with 3D pose and shape GT, most of the above approaches fully rely on the availability of 2D keypoint annotations [2,37] followed by different variants of a 2D reprojection loss [63,64] (see Table 1).
+
+Use of auxiliary supervision. In the absence of any shape supervision, certain prior works also leverage full mesh supervision available from synthetically rendered human images [66] or images with fairly successful body fits [35]. Furthermore, multi-view image pairs have also been used for 3D pose [54] and shape estimation [11,36] via enforcing consistency of the canonical 3D pose across multiple views. Liang et al. [36] use a multi-stage regressor for multi-view images to further reduce the projection ambiguity in order to obtain a better performance for the 3D human body under clothing. To inculcate a strong 3D pose prior, Zhou et al. [71] make use of a left-right symmetric bone-length constraint for the skeleton-based 3D pose estimation task. Further, to assure recovery of valid 3D poses for the model-based pose estimation task, Kanazawa et al. [21] enforce a learning-based human pose and shape prior via adversarial networks using unpaired samples of plausible 3D pose and shape. With the advent of differentiable renderers [10,24], certain methods supervise 3D shape and pose estimation through a textured mesh prediction network to encourage matching of the rendered texture image with the image FG [22], alongside the 2D keypoint supervision [50].
+
+# 3 Approach
+
+We aim to discover the 3D human pose and shape from unlabeled image pairs of consistent FG appearance. During training, we assume access to a parametric human mesh model to aid our self-supervised paradigm. The mesh model provides a low dimensional parametric representation of variations in human shape and pose deformations. However, by design, this model is unaware of the plausibility restrictions of human pose and shape. Thus, it is prone to implausible poses and self-penetrations specifically in the absence of paired 3D supervision [21]. Therefore, to constrain the pose predictions, we assume access to a pool of human 3D pose samples to learn a 3D pose prior.
+
+Fig. 2 shows an overview of our training approach. For a given image pair, two parallel pathways of shared CNN regressors predict the human shape and pose parameters alongside the required camera settings to segregate the co-salient FG human. Moreover, to realize a colored mesh representation, we develop a differentiable Color-recovery module which infers mesh vertex colors directly from the given image without employing any explicit appearance extraction network.
+
+# 3.1 Representation and notations
+
+Human mesh model. We employ the widely used SMPL body model [40] which parameterizes a triangulated human mesh of $K = 6890$ vertices. This model factorizes the mesh deformations into shape $\beta \in \mathbb{R}^{10}$ and pose $\theta \in \mathbb{R}^{3J}$ with $J = 23$ skeleton joints [21]. We use the first 10 PCA coefficients of the shape space as a compact shape representation, in line with [21]. The pose is parameterized as parent-relative rotations in the axis-angle representation. This differentiable SMPL function outputs mesh vertex locations in a canonical 3D space, represented as $V \in \mathbb{R}^{K \times 3} = \mathcal{M}(\theta, \beta)$. Here, the corresponding 3D pose (i.e. the 3D locations of the $J$ joints) is obtained using a pre-trained linear regressor, i.e. $Y \in \mathbb{R}^{J \times 3} = W_{p}V$, parameterized by $W_{p} \in \mathbb{R}^{J \times K}$. The RGB colors corresponding to the mesh vertices $V$ are denoted as $C \in \mathbb{R}^{3 \times K} = CRM(V, I)$, where $CRM$ is the Color-recovery module. For each vertex id $k$, $C^{(k)}$ stores the corresponding RGB color intensities. As shown in Fig. 2, we use subscripts $a$ and $b$ to associate the terms with the respective input images, $I_{a}$ and $I_{b}$.
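The linear joint regression $Y = W_p V$ can be sketched in a few lines of NumPy. The weights below are random stand-ins, not the actual pre-trained SMPL regressor; the point is only that each joint is a fixed convex combination of mesh vertices, so the mapping is linear and fully differentiable:

```python
import numpy as np

# Random stand-in weights, NOT the pre-trained SMPL joint regressor W_p.
K, J = 6890, 23                          # SMPL vertex and joint counts
rng = np.random.default_rng(0)

V = rng.standard_normal((K, 3))          # mesh vertices from M(theta, beta)
W_p = rng.random((J, K))
W_p /= W_p.sum(axis=1, keepdims=True)    # rows sum to 1 (weighted averages)

Y = W_p @ V                              # (J, 3) regressed joint locations
```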
+
+Camera model. We define a weak perspective camera model using a global orientation $R \in \mathbb{R}^{3 \times 3}$ in axis-angle representation (3 angle parameters), a translation $t \in \mathbb{R}^2$ and a scale $s \in \mathbb{R}$ . Given these parameters, the 2D camera space coordinates of the 3D mesh vertices with vertex index $k$ is obtained as $v^{(k)} = \pi(V^{(k)}) = s\varPi(RV^{(k)}) + t$ ; $v^{(k)} \in \mathcal{U}$ , where $\varPi$ denotes orthographic projection and $\mathcal{U} \subset \mathbb{R}^2$ denotes the space of image coordinates. Similarly, the camera projected 2D joint locations (2D pose) is expressed as $y \in \mathbb{R}^{J \times 2} = \pi(Y)$ .
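The weak-perspective projection $v^{(k)} = s\varPi(RV^{(k)}) + t$ can be sketched as below; this is a minimal NumPy illustration with a hypothetical identity orientation, not the learned camera:

```python
import numpy as np

# Minimal weak-perspective camera: rotate by R, drop Z (orthographic Pi),
# then scale and translate in the image plane.
def project_weak_perspective(V, R, s, t):
    """V: (K, 3) vertices, R: (3, 3) rotation, s: scalar scale, t: (2,)."""
    V_cam = V @ R.T                       # vertices in camera coordinates
    return s * V_cam[:, :2] + t           # v = s * Pi(R V) + t

V = np.array([[0.0, 0.0, 1.0],
              [1.0, 2.0, 3.0]])
R = np.eye(3)                             # identity orientation (toy choice)
v = project_weak_perspective(V, R, s=2.0, t=np.array([0.5, -0.5]))
# With identity R, each row is 2 * (x, y) + (0.5, -0.5)
```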
+
+# 3.2 Mesh estimation architecture
+
+For a given monocular image, $I$ as input, we first employ a CNN regressor to predict the SMPL parameters (i.e. $\theta$ and $\beta$ ) alongside the camera parameters, $(R,s,t)$ . This is followed by the Color-recovery module. The prime functionality of this module is to assign color to the 3D mesh vertices, $C^{(k)}$ ; $k = 1,2,\ldots K$ based on the corresponding image space coordinates obtained via camera projection. However, a reliable color assignment requires us to segregate the vertices based on the following two important criteria.
+
+a) Non-camera-facing vertices: First, the camera-facing vertices are separated from the non-camera-facing ones using the mesh vertex normals. Here, the vertex normal is computed as the normalized average of the surface normals of the faces connected to a given vertex. We first transform these normals from the default canonical system to the camera coordinate system. Following this, Z-component of the camera-space-normals, $N^{(k)} \in \mathbb{R}$ are used to segregate the non-camera-facing vertices via a sigmoid operation, as shown in Fig. 2.
+b) Camera-facing, self-occluded vertices: Note that $N^{(k)}$ cannot be used to select all the camera-visible vertices in the presence of inter-part occlusions (see Fig. 2). In such a scenario, there exist mesh vertices which face the camera but are obscured by other camera-facing vertices closer to the camera in 3D. This calls for modeling the relative depth of mesh vertices as the second criterion, so as to reliably select, among all the camera-facing vertices projected to a certain spatial region, those closest to the camera. To realize this, we utilize camera-space-depths $Z^{(k)} \in \mathbb{R}$, which store the Z-component (or depth) of the vertex location in the camera-transformed space.
+
+
+Fig. 2. The proposed self-supervised framework makes use of a differentiable Color-recovery module to recover the fully colored mesh vertices. Yellow-circle: camera-facing vertices does not account for inter-part occlusion. Green-circle: $W_{a}$ accounts for the inter-part occlusion. Blue-circle: Fully colored mesh vertices via reflectional symmetry.
+
+3.2.1 Color-recovery module. In the absence of any appearance-related features, we realize a spatial depth map using a fast differentiable renderer [10], where the camera-space-depth of the mesh vertices, $Z$, is treated as the color intensity for the rendering pipeline. The resultant depth-map is represented as $I^{z}(u)$, where $u$ spans the space of spatial indices. The general idea is to use this depth-map as a margin. More concretely, for effective color assignment, one must select the spatially modulated mesh vertices which have the least absolute depth difference with respect to the above-defined depth margin. To realize this, we compute a depth difference $D^{(k)}$ as $|I^{z}(v^{(k)}) - Z^{(k)}|$, where $I^{z}(v^{(k)})$ is computed by performing bilinear sampling on $I^{z}(u)$. In accordance with the above discussion, we formulate a visibility-aware-weighing which takes into account both of the above-mentioned criteria required for an effective mesh vertex selection.
+
+$$
+W^{(k)} \in [0, 1] = \exp(-\alpha D^{(k)})\, \sigma(\gamma N^{(k)}), \quad \text{where } D^{(k)} = |I^{z}(v^{(k)}) - Z^{(k)}|
+$$
+
+Here, $\exp(-\alpha D^{(k)})$ performs a soft selection by assigning a higher weight value (close to 1) to mesh vertices $k$ whose camera-space-depth $Z^{(k)}$ is in agreement with $I^z(v^{(k)})$, and vice-versa. In the second term, $\sigma$ denotes a sigmoid function with a high steepness $\gamma$ to reject the non-camera-facing mesh vertices by attributing a low (close to 0) weighing value. Refer to Fig. 2 for a visual illustration.
+
+Intermediate vertex color assignment. The above-defined visibility-aware-weighing is employed to realize a primary vertex color assignment. We denote $\tilde{C} \in \mathbb{R}^{3 \times K}$ as the intermediate vertex color, where $\tilde{C}^{(k)}$ stores the corresponding RGB color intensities acquired from the given input image $I$. Thus, the primary vertex colors are obtained as $\tilde{C}^{(k)} = I(v^{(k)})(2W^{(k)} - 1)$, where $I(v^{(k)})$ stores the RGB color intensities at the spatial coordinates $v^{(k)}$, realized via performing bilinear sampling on the input RGB image $I$. The scaled weighing function $(2W^{(k)} - 1)$ assigns a negative weight to vertices having low visibility. This assigns a negative color intensity to the corresponding vertices, thereby allowing a distinction between less-bright (near-black) colors and unassigned vertices.
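A minimal NumPy sketch of the visibility-aware weighing and the subsequent intermediate color assignment. The $\alpha$, $\gamma$ values and the three-vertex toy setup are hypothetical, and pre-sampled depth values stand in for bilinear sampling:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# W^(k) = exp(-alpha * D^(k)) * sigmoid(gamma * N^(k)); alpha and gamma
# below are hypothetical steepness values, not the paper's settings.
def visibility_weights(Iz_at_v, Z, N, alpha=10.0, gamma=50.0):
    """Iz_at_v: rendered depth sampled at each projected vertex, Z: vertex
    camera-space depths, N: Z-component of camera-space vertex normals."""
    D = np.abs(Iz_at_v - Z)                   # depth-agreement term
    return np.exp(-alpha * D) * sigmoid(gamma * N)

# Three toy vertices: visible / camera-facing but occluded / back-facing.
Iz_at_v = np.array([2.0, 2.0, 2.0])           # depth map at the projections
Z       = np.array([2.0, 3.0, 2.0])           # vertex depths
N       = np.array([0.9, 0.9, -0.9])          # normal Z (camera-facing > 0)
W = visibility_weights(Iz_at_v, Z, N)

colors  = np.full((3, 3), 0.8)                # sampled image colors I(v^(k))
C_tilde = colors * (2.0 * W[:, None] - 1.0)   # negative => low visibility
```

Only the first vertex keeps a positive (assigned) color; the occluded and back-facing vertices receive near-zero weights and hence negative intermediate colors.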
+
+3.2.2 Vertex color assignment via reflectional symmetry. Here, the prime objective is to propagate the reliable color intensities from the assigned vertices to the unreliable/unassigned ones. The idea is to use reflectional symmetry as prior knowledge by accessing a predefined set of reflectional groups. For each group-id $g = 1,2,\ldots G$, a set of 4 vertices is identified according to left-right and front-back symmetry which would share the same color property (except the vertices belonging to the head, where only left-right symmetry is used). This symmetry knowledge is stored as a multi-hot encoding denoted as $S^{(g)}\in \{0,1\}^K$, which consists of four ones indicating the vertex members of the symmetry group $g$. All the symmetry groups are combined in a symmetry-encoding matrix represented as $S\in \{0,1\}^{G\times K}$. This multi-hot symmetry group representation helps us perform a fully-differentiable vertex color assignment for all the vertices, including the occluded and non-camera-facing ones.
+
+To realize the final vertex colors $C$, we first estimate a group-color for each group $g$, denoted by $\mathcal{C}^{(g)} \in \mathbb{R}^3 = (S^{(g)} \circ \mathrm{ReLU}(\tilde{C})) / (S^{(g)} \circ \mathrm{ReLU}(2W - 1))$. Here, $\circ$ denotes the dot product between $K$-dimensional vectors. The group color can be interpreted as a combination of the intermediate vertex colors weighted by their visibility weighing $W$. This effectively handles the cases where only some of the vertices in a group are initially colored (visible): when visibility is active for only a single vertex among the four in a symmetry set, when it is active for all four, and all the intermediate cases. Finally, the group color is directly propagated to all the mesh vertices using the matrix multiplication $C = S^T \mathcal{C}$, where $\mathcal{C} \in \mathbb{R}^{G \times 3} = [\mathcal{C}^{(1)}, \mathcal{C}^{(2)}, \dots, \mathcal{C}^{(G)}]$ (see Suppl for more details).
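Under the stated definitions, the group-color computation and the propagation $C = S^T\mathcal{C}$ can be sketched as below. This is a toy single-group example with hypothetical values; the $\circ$ dot products become matrix-vector products over the multi-hot rows of $S$:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

# Toy setup: K = 4 vertices forming one symmetry group (G = 1), with only
# vertex 0 visible. Its recovered color should propagate to all members.
K, G = 4, 1
S = np.ones((G, K))                           # multi-hot symmetry encoding

img_color = np.array([0.6, 0.2, 0.1])         # I(v^(k)), assumed identical here
W = np.array([0.95, 0.01, 0.01, 0.01])        # visibility weights
C_tilde = np.outer(2.0 * W - 1.0, img_color)  # C~^(k) = I(v^(k)) (2W^(k) - 1)

# Group color: ReLU keeps only reliably assigned (positive) entries, and the
# denominator undoes the (2W - 1) scaling applied to the visible vertex.
num = S @ relu(C_tilde)                       # (G, 3)
den = S @ relu(2.0 * W - 1.0)                 # (G,)
group_color = num / den[:, None]
C = S.T @ group_color                         # (K, 3) final vertex colors
```

The division by the summed visibility weights recovers the sampled image color exactly, and all four symmetric vertices end up with that color.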
+
+# 3.3 Self-supervised learning objectives
+
+For a given image pair, denoted as $I_{a}$ and $I_{b}$ (depicting the same person in diverse pose and BGs), we forward them through two parallel pathways of our colored mesh estimation architecture (see Fig. 2). The commonness of FG appearance allows us to impose an appearance consistency loss between the predicted fully colored mesh representations.
+
+a) Color consistency. First, we impose the following consistency loss,
+
+$$
+\mathcal{L}_{CC} = \mathcal{L}_{C} + \lambda \mathcal{L}_{\tilde{C}}, \quad \text{where } \mathcal{L}_{C} = \|C_{a} - C_{b}\| \text{ and } \mathcal{L}_{\tilde{C}} = \|W_{a} \odot W_{b} \odot (\tilde{C}_{a} - \tilde{C}_{b})\|
+$$
+
+Here, $\odot$ denotes element-wise multiplication. Note that $\mathcal{L}_{\tilde{C}}$ enforces a vertex-color consistency on the co-visible mesh vertices (computed as $W_{a}\odot W_{b}$), i.e. the vertices which are visible in both the mesh representations obtained from the image pair $(I_a,I_b)$. In contrast, $\mathcal{L}_C$ enforces full vertex-color consistency. $\mathcal{L}_{CC}$ combines both losses, thereby providing a higher weightage to the co-visible vertex colors as compared to the approximate full color representation, considering the approximate nature of the symmetry assumption.
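A minimal NumPy sketch of $\mathcal{L}_{CC}$; the weighting $\lambda$ and the Frobenius-norm choice here are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# Sketch of L_CC = L_C + lambda * L_Ctilde over two predicted colored meshes.
def color_consistency(C_a, C_b, Ct_a, Ct_b, W_a, W_b, lam=0.5):
    """C_*: (K, 3) full vertex colors, Ct_*: (K, 3) intermediate colors,
    W_*: (K,) visibility weights from each pathway."""
    L_C = np.linalg.norm(C_a - C_b)               # full-mesh color consistency
    co_vis = W_a * W_b                            # down-weight non-co-visible
    L_Ct = np.linalg.norm(co_vis[:, None] * (Ct_a - Ct_b))
    return L_C + lam * L_Ct

rng = np.random.default_rng(1)
C = rng.random((6, 3))
W = np.ones(6)
loss_same = color_consistency(C, C, C, C, W, W)   # identical meshes -> 0
loss_diff = color_consistency(C, C + 1.0, C, C, W, W)
```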
+
+b) Part-prototype consistency. The proposed Color-recovery module can also be applied on convolutional feature maps. For a given vertex $k$ and a convolutional feature map $H \in \mathbb{R}^{\tilde{w} \times \tilde{h} \times \tilde{d}}$, we sample $\mathcal{H}^{(k)} \in \mathbb{R}^{\tilde{d}} = H(v^{(k)})$. Note that we define a fixed vertex-to-part-segmentation mapping represented as $Q^{(l)}$, which stores a set of vertex indices for each part $l = 1,2,\ldots L$. Now, one can use the vertex visibility weighing $W^{(k)}$ to obtain a prototype appearance feature for each body-part $l$, computed as $\mathcal{F}^{(l)} = (\Sigma_{k\in Q^{(l)}}W^{(k)}\mathcal{H}^{(k)}) / (\Sigma_{k\in Q^{(l)}}W^{(k)})$. Following this, we enforce a prototype consistency loss between the image pairs as $\mathcal{L}_P = \Sigma_l\| \mathcal{F}_a^{(l)} - \mathcal{F}_b^{(l)}\| / L$. Note that the prototype feature computation is inherently aware of inter-part occlusions as a result of incorporating the visibility weighing $W^{(k)}$. Compared to enforcing vertex-color consistency with $\mathcal{L}_{CC}$ (i.e. on the raw color intensities), the part-prototype consistency aims to match a higher-level semantic abstraction (e.g. checkered regular patterns versus just plain individual colors) of the part appearances extracted from the image pairs. This also helps us overcome the unreliability of raw vertex colors which could arise due to illumination differences. Motivated by the perceptual loss idea [17], we obtain $H_{a}$ and $H_{b}$ as the Conv2-1 features corresponding to $I_{a}$ and $I_{b}$ from an ImageNet-trained (frozen) VGG-16 network [59].
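The prototype computation and $\mathcal{L}_P$ can be sketched as follows, with toy dimensions and a hypothetical two-part vertex mapping $Q$ (in the paper, the per-vertex features come from frozen VGG-16 Conv2-1 maps):

```python
import numpy as np

# Sketch of the part-prototype features F^(l) and the consistency loss L_P.
def part_prototypes(H, W, Q):
    """H: (K, d) per-vertex features, W: (K,) visibility, Q: per-part index lists."""
    protos = []
    for idx in Q:
        w = W[idx]
        # Visibility-weighted average of member-vertex features.
        protos.append((w[:, None] * H[idx]).sum(0) / w.sum())
    return np.stack(protos)                        # (L, d)

def prototype_loss(F_a, F_b):
    return np.mean(np.linalg.norm(F_a - F_b, axis=1))  # mean over L parts

K, d = 8, 4
rng = np.random.default_rng(2)
H = rng.random((K, d))
W = np.ones(K)                                     # all vertices fully visible
Q = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]  # hypothetical 2 parts
F = part_prototypes(H, W, Q)
```

With uniform visibility, each prototype reduces to the plain mean of its part's vertex features; occluded vertices would simply contribute less.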
+c) Shape-consistency. We also enforce a shape consistency loss between the shape parameters obtained from the image pair, i.e. $\mathcal{L}_{\beta} = |\beta_{a} - \beta_{b}|$. Almost all prior works [21,51,50] utilize an unpaired human shape dataset to enforce plausibility of the shape predictions via an adversarial prior. However, in the proposed self-supervised framework we do not access any human shape dataset. Instead, to regularize the shape parameters during the initial training iterations, we enforce a loss on the shape predictions with respect to a fixed mean shape. After gaining a decent mesh estimation performance, we gradually reduce the weightage of this loss, allowing shape variations beyond the mean shape driven by the proposed appearance and shape consistency objectives.
+d) Enforcing validity of pose predictions. Additionally, to assure validity of the predicted pose parameters we train an adversarial auto-encoder [42] to realize a continuous human pose manifold [29,30] mapped from a latent pose representation, $\phi \in [-1,1]^{32}$ . This is trained using an unpaired 3D human pose dataset. The frozen pose decoder obtained from this generative framework is directly employed as a module, with instilled human 3D pose prior. More concretely, a tanh non-linearity on the pose-prediction head of the CNN regressor (inline with the latent pose $\phi$ ) followed by the frozen pose decoder prevents implausible pose predictions during our self-supervised training. In contrast to enforcing an adversarial pose prior objective [21,50], the proposed setup greatly simplifies our training procedure (devoid of discriminator training).
+In the absence of paired supervision, the parameters of the shared CNN regressor are trained by directly enforcing the above consistency losses, i.e. $\mathcal{L}_{CC}$, $\mathcal{L}_P$, and $\mathcal{L}_{\beta}$.
+
+Fig. 3. Qualitative results on A. the H36M dataset (in-studio), B. the 3DPW dataset (in-the-wild), and C. the LSP dataset (in-the-wild). In each panel, the 1st column depicts the input image, the 2nd column depicts our colored mesh prediction, and the 3rd column shows the model-based part segments. Our model fails (in magenta) in the presence of complex inter-part occlusions.
+
+# 4 Experiments
+
+We perform thorough experimental analysis to demonstrate the generalizability of our framework across several datasets on a variety of tasks.
+
+Implementation details. We use ResNet-50 [9] initialized from ImageNet as the base CNN network. The average-pooled last-layer features are forwarded through a series of fully-connected layers to regress the pose (latent pose encoding $\phi$), shape and camera parameters. Note that the series of differentiable operations after the CNN regressor does not include any trainable parameters, not even to estimate the vertex colors. During training, we optimize the individual loss terms at alternate training iterations using the Adam optimizer [25]. We enforce prediction of the mean shape for the initial 100k training iterations. We also impose a silhouette loss on the predicted human mesh with respect to a pseudo silhouette ground-truth, obtained either by using an unsupervised saliency detection method [72] or by using a background estimate, as favourable for static-camera scenarios [54].
+
+Datasets. We sample image pairs with diverse BG (pairs with large $L2$ distance) from the following standard datasets, i.e. Human3.6M [15], MPII [2], MPI-INF-3DHP [47] and an in-house collection of wild YouTube videos. In contrast to the in-studio datasets with hardly any camera movement implying static BG [15], the videos collected from YouTube have diverse camera movements (e.g. Parkour and Free-running videos). We prune the raw video samples using a person-detector [52] to obtain reliable human-centric crops as required for the mesh estimation pipeline (see Suppl). The unpaired 3D pose dataset required to train the 3D pose prior is obtained from CMU-MoCap (also used in MoSh [39]).
+
+a) Human3.6M This is a widely used dataset consisting of images paired with 3D pose annotations of actors imitating various day-to-day tasks in a controlled in-studio environment. Adhering to well-established standards [21], we consider subjects S1, S6, S7, S8 for training, S5 for validation and S9, S11 for evaluation, in both Protocol-1 [54,55] and Protocol-2 [21].
+
+
+Fig. 4. A. Qualitative results on single-image colored human mesh recovery on the YouTube, LSP and 3DPW datasets (in-the-wild). The model fails in the presence of complex inter-limb occlusions (in magenta box). B. Qualitative ablation analysis demonstrating the importance of incorporating $\mathcal{L}_P$ to extract relevant part-semantics.
+
+b) LSP A standard 2D pose dataset consisting of wild athletic actions. We use the LSP test set with silhouette and part-segment annotations as provided by Lassner et al. [35]. In the absence of any standard shape evaluation dataset, segmentation results are considered a proxy for shape fitting performance [21,27].
+c) 3DPW We also evaluate on the 3D Poses in the Wild dataset [43]. We do not train on 3DPW and use it only to evaluate cross-dataset generalizability [31]. We compute the mean per joint position error (MPJPE) [15], both before and after rigid alignment via Procrustes analysis [7]. MPJPE computed after Procrustes alignment is denoted PA-MPJPE.
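The two reported metrics can be sketched as follows. The joint count and the test transform are illustrative, but `procrustes_align` implements the standard similarity-transform (scale, rotation, translation) Procrustes solution referenced above, so PA-MPJPE drops to zero for a prediction that differs from the ground truth only by a rigid transform.

```python
import numpy as np

def mpjpe(pred, gt):
    # Mean per joint position error: mean Euclidean distance over joints.
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def procrustes_align(pred, gt):
    # Similarity-transform Procrustes: align pred to gt via scale s,
    # rotation R, and translation (recovered from the centroids).
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:           # fix reflection so R is a proper rotation
        Vt[-1] *= -1
        S = S.copy(); S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / np.sum(X ** 2)
    return scale * X @ R.T + mu_g

rng = np.random.default_rng(1)
gt = rng.random((17, 3))               # 17 toy "joints"
angle = 0.5                            # prediction = rigidly transformed ground truth
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
pred = gt @ Rz.T + np.array([0.3, -0.2, 0.1])

err = mpjpe(pred, gt)                                # MPJPE before alignment
pa_err = mpjpe(procrustes_align(pred, gt), gt)       # PA-MPJPE, ~0 here
```
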
+
+# 4.1 Ablative study
+
+To analyze the effectiveness of the individual self-supervised consistency objectives, we perform ablations by removing certain losses, as shown in Table 2. First, we train Baseline-1 by enforcing $\mathcal{L}_C$ and $\mathcal{L}_{\beta}$ . Following this, in Baseline-2 we enforce $\mathcal{L}_{CC}$ by incorporating $\mathcal{L}_{\tilde{C}}$ , which further penalizes color inconsistency between the vertices that are commonly visible in both mesh representations. This yields a marginal improvement. Moving forward, we recognize a clear limitation of our FG color-consistency assumption (on raw RGB intensities), which can easily be violated by illumination differences. Further, the assumption of left-right and front-back symmetry in apparel color can also be violated, particularly for asymmetric upper-body apparel. As a solution, the proposed part-prototype consistency objective, $\mathcal{L}_P$ , matches a higher-level appearance representation beyond raw color intensities (see Fig. 4B), resulting in a significant performance gain (Ours(unsup) in Table 2). Note that $\mathcal{L}_P$ is made possible by the proposed differentiable Color-recovery module.
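The contrast between raw color consistency and the part-prototype idea can be illustrated numerically. This is a hedged sketch, not the paper's actual $\mathcal{L}_{\tilde{C}}$ or $\mathcal{L}_P$ implementation: the vertex colors, visibility masks, "illumination" noise, and round-robin part map are all synthetic stand-ins. It shows why averaging appearance over a part makes the objective less sensitive to per-pixel intensity differences.

```python
import numpy as np

rng = np.random.default_rng(2)
V = 100                                     # number of mesh vertices (toy value)
colors_a = rng.random((V, 3))               # vertex colors recovered from image a
colors_b = colors_a + 0.05 * rng.standard_normal((V, 3))   # slight lighting change
visible_a = rng.random(V) > 0.3             # per-view visibility masks
visible_b = rng.random(V) > 0.3
common = visible_a & visible_b

# Raw color consistency over commonly visible vertices (lighting-sensitive).
color_loss = float(np.mean((colors_a[common] - colors_b[common]) ** 2))

# Part-prototype consistency: average appearance per body part, then compare.
part_id = np.arange(V) % 5                  # toy 5-part vertex-to-part map
protos_a = np.stack([colors_a[part_id == p].mean(0) for p in range(5)])
protos_b = np.stack([colors_b[part_id == p].mean(0) for p in range(5)])
proto_loss = float(np.mean((protos_a - protos_b) ** 2))
```

Averaging over ~20 vertices per part shrinks the effect of the per-vertex intensity perturbation, so the prototype mismatch is much smaller than the raw color mismatch.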
+
+Table 2. Ablative study (on Human3.6M) analyzing the importance of the self-supervised objectives (first 3 rows) and results at varying degrees of paired supervision (last 3 rows). P1 and P2 denote MPJPE and PA-MPJPE in Protocol-1 and Protocol-2 respectively.
+
+| Methods | P1(↓) | P2(↓) |
+| --- | --- | --- |
+| Baseline-1; ($\mathcal{L}_C$ + $\mathcal{L}_\beta$) | 127.1 | 101.2 |
+| Baseline-2; ($\mathcal{L}_C$ + $\mathcal{L}_\beta$ + $\mathcal{L}_{\tilde{C}}$) | 119.6 | 97.4 |
+| Ours(unsup.); ($\mathcal{L}_C$ + $\mathcal{L}_\beta$ + $\mathcal{L}_{\tilde{C}}$ + $\mathcal{L}_P$) | 110.8 | 90.5 |
+| Ours(multi-view-sup) | 102.1 | 74.1 |
+| Ours(weakly-sup) | 86.4 | 58.2 |
+| Ours(semi-sup) | 73.8 | 48.1 |
+
+Table 3. Evaluation on the wild 3DPW dataset in a fully unseen setting. Note that, in contrast to Temporal-HMR [23], we do not use any temporal supervision. Methods in the first 5 rows use equivalent 2D and 3D pose supervision and are thus directly comparable.
+
+| Methods | MPJPE(↓) | PA-MPJPE(↓) |
+| --- | --- | --- |
+| Martinez et al. [45] | - | 157.0 |
+| SMPLify [5] | 199.2 | 106.1 |
+| TP-Net [6] | 163.7 | 92.3 |
+| Temporal-HMR [23] | 127.1 | 80.1 |
+| Ours(semi-sup) | 125.8 | 78.2 |
+| Ours(weakly-sup) | 153.4 | 89.8 |
+| Ours(unsup) | 187.1 | 102.7 |
+
+Further, to maintain a fair comparison with prior weakly supervised approaches, we train 3 variants of the proposed framework by utilizing increasing levels of paired supervision alongside our self-supervised objectives.
+
+a) Ours(multi-view-sup) Under multi-view supervision, we impose an additional consistency loss on the canonically aligned (view-invariant) 3D mesh vertices (i.e. $\| V_{a} - V_{b}\|$ ) and the 3D pose (i.e. $\| Y_a - Y_b\|$ ) for time-synchronized multi-view pairs $(I_a,I_b)$ . In line with Rhodin et al. [54], we also use full 3D pose supervision only for S1 while evaluating on the standard Human3.6M dataset. We outperform Rhodin et al. [54] by a significant margin, as reported in Table 4. This goes beyond the usual trend of non-parametric approaches underperforming model-based parametric ones. We thus attribute the performance gain to the proposed appearance-consensus-driven self-supervised objectives.
+b) Ours(weakly-sup) In this setting, we access image datasets with paired 2D landmark annotations, in line with the supervision setting of prior model-based approaches [21]. Alongside the proposed self-supervised objectives, we impose a direct 2D landmark supervision loss (i.e. $\| y - y_{gt} \|$ ) with respect to the corresponding ground truths, but only on samples from specific datasets, namely LSP, LSP-extended [19] and MPII [2]. Certain prior arts, such as HMR [21], use even more images with paired 2D landmark annotations from COCO [37].
+c) Ours(semi-sup) In this variant, we access paired 3D pose supervision on the widely used in-studio Human3.6M [15] dataset, alongside the 2D landmark supervision used in Ours(weakly-sup). Note that a better performance on Human3.6M (which has limited BG and FG diversity as a result of the in-studio collection setup) does not translate to wild images, owing to the significant domain gap. As we impose the above supervision alongside the proposed self-supervised objective on unlabeled wild images, this training is expected to deliver improved performance by successfully overcoming the domain-shift issue. We evaluate this on the wild 3DPW dataset.
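The multi-view consistency terms used in variant a) above can be sketched as follows. The SMPL-sized vertex count, the 17-joint pose, and the mean-per-element reduction are illustrative assumptions; the sketch just shows that the losses vanish when the two views' canonical predictions agree.

```python
import numpy as np

rng = np.random.default_rng(3)
V_a = rng.random((6890, 3))                          # canonical mesh vertices, view a
V_b = V_a + 0.01 * rng.standard_normal(V_a.shape)    # view b, small prediction gap
Y_a = rng.random((17, 3))                            # 3D pose, view a
Y_b = Y_a.copy()                                     # identical pose predictions

def consistency(x, y):
    # Mean Euclidean distance between corresponding 3D points.
    return float(np.mean(np.linalg.norm(x - y, axis=-1)))

vertex_loss = consistency(V_a, V_b)   # penalizes disagreement in mesh vertices
pose_loss = consistency(Y_a, Y_b)     # 0 here, since the poses are identical
total = vertex_loss + pose_loss
```
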
+
+Table 4. Evaluation on Human3.6M (Protocol-2). Methods in the first 9 rows use equivalent 2D and 3D pose supervision and are hence directly comparable. The same applies within rows 10-11 and within rows 12-13.
+
+| No. | Methods | PA-MPJPE(↓) |
+| --- | --- | --- |
+| 1. | Lassner et al. [35] | 93.9 |
+| 2. | Pavlakos et al. [51] | 75.9 |
+| 3. | Omran et al. [48] | 59.9 |
+| 4. | HMR [21] | 56.8 |
+| 5. | Temporal-HMR [23] | 56.9 |
+| 6. | Arnab et al. [4] | 54.3 |
+| 7. | Kolotouros et al. [27] | 50.1 |
+| 8. | TexturePose [50] | 49.7 |
+| 9. | Ours(semi-sup) | 48.1 |
+| 10. | HMR unpaired [21] | 66.5 |
+| 11. | Ours(weakly-sup) | 58.2 |
+| 12. | Rhodin et al. [54] | 98.2 |
+| 13. | Ours(multi-view-sup) | 74.1 |
+
+Table 5. Evaluation of FG-BG and 6-part segmentation on the LSP test set, reporting accuracy (Acc.) and F1 score of our method against prior arts. First group: iterative, optimization-based approaches. Last 3 groups: regression-based methods grouped by comparable supervision level.
+
+| Methods | FG-BG Acc.(↑) | FG-BG F1(↑) | Part Acc.(↑) | Part F1(↑) |
+| --- | --- | --- | --- | --- |
+| SMPLify oracle [5] | 92.17 | 0.88 | 88.82 | 0.67 |
+| SMPLify [5] | 91.89 | 0.88 | 87.71 | 0.64 |
+| SMPLify on [51] | 92.17 | 0.88 | 88.24 | 0.64 |
+| Bodynet [65] | 92.75 | 0.84 | - | - |
+| HMR [21] | 91.67 | 0.87 | 87.12 | 0.60 |
+| Kolotouros et al. [27] | 91.46 | 0.87 | 88.69 | 0.66 |
+| TexturePose [50] | 91.82 | 0.87 | 89.00 | 0.67 |
+| Ours(semi-sup) | 91.84 | 0.87 | 89.08 | 0.67 |
+| HMR unpaired [21] | 91.30 | 0.86 | 87.00 | 0.59 |
+| Ours(weakly-sup) | 91.70 | 0.87 | 87.12 | 0.60 |
+| Ours(unsup) | 91.46 | 0.86 | 87.26 | 0.64 |
+
+# 4.2 Comparison with the state-of-the-art
+
+Evaluation on Human3.6M. Table 4 compares the different variants of the proposed framework against prior arts, grouped by their respective supervision levels. We outperform in all three groups, i.e. while accessing comparable a) 3D pose supervision, b) 2D landmark supervision, and c) multi-view supervision. Except for Rhodin et al. [54], all the prior works in Table 4 use a parametric human model for the human mesh estimation task. Note the significant performance gain specifically in the absence of any 3D pose supervision, i.e. for Ours(weakly-sup) and Ours(multi-view-sup) against their relevant counterparts in the last 4 rows.
+
+Evaluation on 3DPW. Table 3 compares the different variants of the proposed framework against prior arts that use pose supervision comparable to Ours(semi-sup) (except certain methods, such as HMR [21], which use additional 3D pose supervision from the MPI-INF-3DHP [47] dataset). It is worth noting that none of our model variants is trained on samples from the 3DPW dataset (not even in the self-supervised paradigm). The better performance in such an unseen setting highlights our superior cross-dataset generalizability.
+
+Evaluation of part-segmentation. We also evaluate performance on the FG-BG segmentation and body part-segmentation tasks, which are considered a proxy for shape fitting performance. In the presence of 2D landmark annotations, iterative model-fitting approaches have a clear advantage over single-shot regression approaches, as shown in Table 5. At comparable supervision, Ours(semi-sup) not only outperforms the relevant regression-based prior arts but also performs competitively with the iterative model-fitting approaches, with a significant advantage in inference time (1 min vs 0.04 sec).
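The segmentation metrics used above can be sketched on a toy binary mask; the 6-part variant simply computes the same scores per part label and averages. The tiny masks below are synthetic examples, not LSP data.

```python
import numpy as np

# Toy predicted and ground-truth FG-BG masks (True = foreground).
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
gt   = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

accuracy = float(np.mean(pred == gt))        # fraction of correctly labeled pixels
tp = float(np.sum(pred & gt))                # true-positive foreground pixels
precision = tp / max(float(np.sum(pred)), 1.0)
recall = tp / max(float(np.sum(gt)), 1.0)
f1 = 2 * precision * recall / max(precision + recall, 1e-8)
```

Here the prediction misses one foreground pixel, giving accuracy 11/12 and F1 = 2·(1.0·0.8)/1.8 = 8/9.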
+
+
+A. Part-conditioned appearance transfer
+
+
+B. Full-body appearance transfer
+Fig. 5. Qualitative results on A. Part-conditioned, and B. Full-body appearance transfer. This is enabled as a result of our ability to infer the colored mesh representation.
+
+We also report the performance of our self-supervised variant, Ours(unsup), which performs competitively with the prior supervised regression-based approaches, thus establishing the importance of FG appearance consistency for accurate shape recovery. See Suppl for qualitative results.
+
+# 4.3 Qualitative results
+
+The proposed mesh recovery model not only infers pose and shape but also outputs a colored mesh representation as a result of the proposed reflectional-symmetry procedure. To evaluate the effectiveness of the recovered part appearance, we perform 2 different tasks: a) part-conditioned appearance transfer, and b) full-body appearance transfer, as shown in Fig. 5. On the top, we show the target images whose pose and shape (network predicted) are combined with the part appearances recovered from the source image (only for the highlighted parts) shown on the left, to realize a novel synthesized image. Note that, in the case of part-conditioned appearance transfer, the appearance of the non-highlighted parts is taken from the target image shown on the top. For instance, in the first row, the synthesized image depicts the upper-body apparel of the person in the source image combined with the lower-body apparel from the target (and in the target image's pose). Qualitative results of the Ours(semi-sup) model on other primary tasks are shown in Fig. 3 and Fig. 4, with highlighted failure scenarios (see Suppl).
+
+# 5 Conclusion
+
+We introduce a self-supervised framework for model-based human pose and shape recovery. The proposed appearance consistency not only helps to segregate the common FG human from the respective wild BGs but also discovers the required pose deformation in a fully self-supervised manner. However, extending such a framework to human-centric images with occlusion by external objects or truncated human visibility remains to be explored in future work.
+
+Acknowledgements. We thank Qualcomm Innovation Fellowship India 2020.
+
+# References
+
+1. Alp Güler, R., Neverova, N., Kokkinos, I.: Densepose: Dense human pose estimation in the wild. In: CVPR (2018) 3, 4
+2. Andriluka, M., Pishchulin, L., Gehler, P., Schiele, B.: 2d human pose estimation: New benchmark and state of the art analysis. In: CVPR (2014) 2, 5, 10, 12
+3. Anguelov, D., Srinivasan, P., Koller, D., Thrun, S., Rodgers, J., Davis, J.: Scope: shape completion and animation of people. In: ACM SIGGRAPH (2005) 1, 4
+4. Arnab, A., Doersch, C., Zisserman, A.: Exploiting temporal context for 3d human pose estimation in the wild. In: CVPR (2019) 1, 3, 13
+5. Bogo, F., Kanazawa, A., Lassner, C., Gehler, P., Romero, J., Black, M.J.: Keep it SMPL: Automatic estimation of 3d human pose and shape from a single image. In: ECCV (2016) 2, 4, 12, 13
+6. Dabral, R., Mundhada, A., Kusupati, U., Afaque, S., Sharma, A., Jain, A.: Learning 3d human pose from structure and motion. In: ECCV (September 2018) 12
+7. Gower, J.C.: Generalized procrustes analysis. Psychometrika 40(1), 33-51 (1975) 11
+8. Guan, P., Weiss, A., Balan, A.O., Black, M.J.: Estimating human shape and pose from a single image. In: ICCV (2009) 2
+9. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: ECCV (2016) 10
+10. Henderson, P., Ferrari, V.: Learning single-image 3D reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision (2019) 5, 7
+11. Hofmann, M., Gavrila, D.M.: Multi-view 3d human pose estimation combining single-frame recovery, temporal integration and model adaptation. In: CVPR (2009) 5
+12. Hogg, D.: Model-based vision: a program to see a walking person. Image and Vision Computing 1(1), 5-20 (1983) 1
+13. Hsu, K.J., Tsai, C.C., Lin, Y.Y., Qian, X., Chuang, Y.Y.: Unsupervised cnn-based co-saliency detection with graphical optimization. In: ECCV (2018) 3
+14. Huang, Y., Bogo, F., Lassner, C., Kanazawa, A., Gehler, P.V., Romero, J., Akhter, I., Black, M.J.: Towards accurate marker-less human shape and pose estimation over time. In: 3DV (2017) 1
+15. Ionescu, C., Papava, D., Olaru, V., Sminchisescu, C.: Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence (2013) 10, 11, 12
+16. Jakab, T., Gupta, A., Bilen, H., Vedaldi, A.: Unsupervised learning of object landmarks through conditional image generation. In: NeurIPS (2018) 3
+17. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: ECCV (2016) 9
+18. Johnson, S., Everingham, M.: Clustered pose and nonlinear appearance models for human pose estimation. In: BMVC (2010) 2
+19. Johnson, S., Everingham, M.: Clustered pose and nonlinear appearance models for human pose estimation. In: BMVC (2010) 12
+20. Joo, H., Simon, T., Sheikh, Y.: Total capture: A 3d deformation model for tracking faces, hands, and bodies. In: CVPR (2018) 1
+21. Kanazawa, A., Black, M.J., Jacobs, D.W., Malik, J.: End-to-end recovery of human shape and pose. In: CVPR (2018) 2, 3, 4, 5, 6, 9, 10, 11, 12, 13
+
+22. Kanazawa, A., Tulsiani, S., Efros, A.A., Malik, J.: Learning category-specific mesh reconstruction from image collections. In: ECCV (2018) 3, 4, 5
+23. Kanazawa, A., Zhang, J.Y., Felsen, P., Malik, J.: Learning 3d human dynamics from video. In: CVPR (2019) 3, 12, 13
+24. Kato, H., Ushiku, Y., Harada, T.: Neural 3d mesh renderer. In: CVPR (2018) 5
+25. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 10
+26. Kolotouros, N., Pavlakos, G., Black, M.J., Daniilidis, K.: Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In: ICCV (2019) 3
+27. Kolotouros, N., Pavlakos, G., Daniilidis, K.: Convolutional mesh regression for single-image human shape reconstruction. In: CVPR (2019) 3, 11, 13
+28. Kundu, J.N., Ganeshan, A., MV, R., Prakash, A., Babu, R.V.: iSPA-Net: Iterative semantic pose alignment network. In: ACM Multimedia (2018) 2
+29. Kundu, J.N., Gor, M., Babu, R.V.: BiHMP-GAN: Bidirectional 3d human motion prediction gan. In: AAAI (2019) 9
+30. Kundu, J.N., Gor, M., Uppala, P.K., Babu, R.V.: Unsupervised feature learning of human actions as trajectories in pose embedding manifold. In: WACV (2019) 9
+31. Kundu, J.N., Patravali, J., Babu, R.V.: Unsupervised cross-dataset adaptation via probabilistic amodal 3d human pose completion. In: WACV (2020) 11
+32. Kundu, J.N., Seth, S., Jampani, V., Rakesh, M., Babu, R.V., Chakraborty, A.: Self-supervised 3d human pose estimation via part guided novel image synthesis In: CVPR (2020) 2
+33. Kundu, J.N., Seth, S., Rahul, M., Rakesh, M., Babu, R.V., Chakraborty, A.: Kinematic-structure-preserved representation for unsupervised 3d human pose estimation. In: AAAI (2020) 3
+34. L Navaneet, K., Mandikal, P., Jampani, V., Babu, V.: Differ: Moving beyond 3d reconstruction with differentiable feature rendering. In: CVPR Workshops (2019) 4
+35. Lassner, C., Romero, J., Kiefel, M., Bogo, F., Black, M.J., Gehler, P.V.: Unite the people: Closing the loop between 3d and 2d human representations. In: CVPR (2017) 2, 4, 5, 11, 13
+36. Liang, J., Lin, M.C.: Shape-aware human pose and shape reconstruction using multi-view images. In: ICCV (2019) 5
+37. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 5, 12
+38. Liu, S., Li, T., Chen, W., Li, H.: Soft rasterizer: A differentiable renderer for image-based 3d reasoning. In: ICCV (2019) 4
+39. Loper, M., Mahmood, N., Black, M.J.: Mosh: Motion and shape capture from sparse markers. ACM Transactions on Graphics (TOG) 33(6), 220 (2014) 10
+40. Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: Smpl: A skinned multi-person linear model. ACM transactions on graphics (2015) 1, 4, 5
+41. Ma, L., Sun, Q., Georgoulis, S., Van Gool, L., Schiele, B., Fritz, M.: Disentangled person image generation. In: CVPR (2018) 2
+42. Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., Frey, B.: Adversarial autoencoders. arXiv preprint arXiv:1511.05644 (2015) 9
+43. von Marcard, T., Henschel, R., Black, M.J., Rosenhahn, B., Pons-Moll, G.: Recovering accurate 3d human pose in the wild using imus and a moving camera. In: ECCV (2018) 11
+44. von Marcard, T., Rosenhahn, B., Black, M.J., Pons-Moll, G.: Sparse inertial poser: Automatic 3d human pose estimation from sparse imus. In: Computer Graphics Forum. vol. 36, pp. 349-360. Wiley Online Library (2017) 1
+
+45. Martinez, J., Hossain, R., Romero, J., Little, J.J.: A simple yet effective baseline for 3d human pose estimation. In: ICCV (2017) 1, 12
+46. Mathieu, M.F., Zhao, J.J., Zhao, J., Ramesh, A., Sprechmann, P., LeCun, Y.: Disentangling factors of variation in deep representation using adversarial training. In: NeurIPS. pp. 5040-5048 (2016) 2
+47. Mehta, D., Rhodin, H., Casas, D., Fua, P., Sotnychenko, O., Xu, W., Theobalt, C.: Monocular 3d human pose estimation in the wild using improved cnn supervision. In: 3DV (2017) 10, 13
+48. Omran, M., Lassner, C., Pons-Moll, G., Gehler, P., Schiele, B.: Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In: 3DV (2018) 2, 3, 4, 5, 13
+49. Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A.A., Tzionas, D., Black, M.J.: Expressive body capture: 3d hands, face, and body from a single image. In: CVPR (2019) 1
+50. Pavlakos, G., Kolotouros, N., Daniilidis, K.: Texturepose: Supervising human mesh estimation with texture consistency. In: ICCV (2019) 2, 5, 9, 13
+51. Pavlakos, G., Zhu, L., Zhou, X., Daniilidis, K.: Learning to estimate 3d human pose and shape from a single color image. In: CVPR (2018) 2, 3, 4, 5, 9, 13
+52. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: NeurIPS. pp. 91-99 (2015) 10
+53. Rhodin, H., Robertini, N., Casas, D., Richardt, C., Seidel, H.P., Theobalt, C.: General automatic human shape and motion capture using volumetric contour cues. In: ECCV (2016) 1
+54. Rhodin, H., Salzmann, M., Fua, P.: Unsupervised geometry-aware representation for 3d human pose estimation. In: ECCV (2018) 3, 5, 10, 12, 13
+55. Rhodin, H., Sporri, J., Katircioglu, I., Constantin, V., Meyer, F., Müller, E., Salzmann, M., Fua, P.: Learning monocular 3d human pose estimation from multi-view images. In: CVPR (2018) 10
+56. Rifai, S., Bengio, Y., Courville, A., Vincent, P., Mirza, M.: Disentangling factors of variation for facial expression recognition. In: ECCV (2012) 2
+57. Rogez, G., Weinzaepfel, P., Schmid, C.: Lcr-net: Localization-classification-regression for human pose. In: CVPR (2017) 1
+58. Romero, J., Tzionas, D., Black, M.J.: Embodied hands: Modeling and capturing hands and bodies together. ACM Transactions on Graphics (ToG) 36(6), 245 (2017) 1
+59. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 9
+60. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. In: CVPR (2017) 4
+61. Sun, X., Xiao, B., Wei, F., Liang, S., Wei, Y.: Integral human pose regression. In: ECCV (2018) 1
+62. Sun, Y., Ye, Y., Liu, W., Gao, W., Fu, Y., Mei, T.: Human mesh recovery from monocular images via a skeleton-disentangled representation. In: ICCV (2019) 1, 3
+63. Tan, V., Budvytis, I., Cipolla, R.: Indirect deep structured learning for 3d human body shape and pose prediction. In: BMVC (2017) 2, 5
+64. Tung, H.Y., Tung, H.W., Yumer, E., Fragkiadaki, K.: Self-supervised learning of motion capture. In: NIPS (2017) 2, 5
+65. Varol, G., Ceylan, D., Russell, B., Yang, J., Yumer, E., Laptev, I., Schmid, C.: Bodynet: Volumetric inference of 3d human body shapes. In: ECCV (2018) 13
+66. Varol, G., Romero, J., Martin, X., Mahmood, N., Black, M.J., Laptev, I., Schmid, C.: Learning from synthetic humans. In: CVPR (2017) 5
+
+67. Weiss, A., Hirshberg, D., Black, M.J.: Home 3d body scans from noisy image and range data. In: ICCV (2011) 1
+68. Zanfir, A., Marinoiu, E., Sminchisescu, C.: Monocular 3d pose and shape estimation of multiple people in natural scenes-the importance of multiple scene constraints. In: CVPR (2018) 4
+69. Zanfir, A., Marinoiu, E., Zanfir, M., Popa, A.I., Sminchisescu, C.: Deep network for the integrated 3d sensing of multiple people in natural images. In: NIPS (2018) 2, 4
+70. Zhang, D., Meng, D., Han, J.: Co-saliency detection via a self-paced multiple-instance learning framework. IEEE transactions on pattern analysis and machine intelligence 39(5), 865-878 (2016) 3
+71. Zhou, X., Huang, Q., Sun, X., Xue, X., Wei, Y.: Towards 3d human pose estimation in the wild: a weakly-supervised approach. In: CVPR (2017) 5
+72. Zhu, W., Liang, S., Wei, Y., Sun, J.: Saliency optimization from robust background detection. In: CVPR (2014) 10
\ No newline at end of file
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/images.zip b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..038dd36c03feb47f115992dce39328d598f595f9
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b605f924d589d78672922dbfd6536e11e9bf5721dd4c5265ec742fc3a2dac10
+size 501203
diff --git a/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/layout.json b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cb9195a1ccff4c74f04fe99c146ea05df2577c2b
--- /dev/null
+++ b/appearanceconsensusdrivenselfsupervisedhumanmeshrecovery/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64200aaf822892d0f953ea69ee694a3821239dfe823e5c0dafd06548bc60c7e7
+size 461375
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_content_list.json b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a4900360f54375f77a34ca55f3b6f35e8126383
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:053c8f8833aa95d6774163999742e5a274eca22e9709c6e2b5682b50d0c1e2ef
+size 73929
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_model.json b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e012971bf8f2bea48f6a991ff52c6d749bd2b9d
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66ba7c5705ac1a4a98310b46138dc3fc7da348a3e71600a1efa34f7d2fa7f18b
+size 91577
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_origin.pdf b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dd9e8c2571475dedb08c1f78adbcedf7d32a0e77
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/69479b83-0e47-4c18-9582-e60ad84cc28d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af56751c6cfdea6a30eb243502a625ef7e77ad391c9e21cf757c3b77e26e6032
+size 959529
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/full.md b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6608feeb986ca9e58d61f4e03bccea6518f9b31c
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/full.md
@@ -0,0 +1,295 @@
+# Appearance-Preserving 3D Convolution for Video-based Person Re-identification
+
+Xinqian Gu $^{1,2}$ , Hong Chang $^{1,2}$ , Bingpeng Ma $^{2}$ , Hongkai Zhang $^{1,2}$ , and Xilin Chen $^{1,2}$
+
+1 Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China
+
+2 University of Chinese Academy of Sciences, Beijing, 100049, China
+xinqian.gu@vipl.ict.ac.cn, changhong@ict.ac.cn, bpma@ucas.ac.cn, hongkai.zhang@vipl.ict.ac.cn, xlchen@ict.ac.cn
+
+Abstract. Due to imperfect person detection results and posture changes, temporal appearance misalignment is unavoidable in video-based person re-identification (ReID). In this case, 3D convolution may destroy the appearance representation of person video clips and is thus harmful to ReID. To address this problem, we propose Appearance-Preserving 3D Convolution (AP3D), which is composed of two components: an Appearance-Preserving Module (APM) and a 3D convolution kernel. With APM aligning the adjacent feature maps at the pixel level, the following 3D convolution can model temporal information on the premise of maintaining the appearance representation quality. It is easy to combine AP3D with existing 3D ConvNets by simply replacing the original 3D convolution kernels with AP3Ds. Extensive experiments demonstrate the effectiveness of AP3D for video-based ReID, and the results on three widely used datasets surpass the state of the art. Code is available at: https://github.com/guxinqian/AP3D.
+
+Keywords: video-based person re-identification, temporal appearance misalignment, Appearance-Preserving 3D Convolution
+
+# 1 Introduction
+
+Video-based person re-identification (ReID) [32,11,13] plays a crucial role in intelligent video surveillance system. Compared with image-based ReID [28,12], the main difference is that the query and gallery in video-based ReID are both videos and contain additional temporal information. Therefore, how to deal with the temporal relations between video frames effectively is of central importance in video-based ReID.
+
+The most commonly used temporal information modeling methods in computer vision include LSTM [10,23], 3D convolution [29,2,24], and Non-local operation [33]. LSTM and 3D convolution are adept at dealing with local temporal relations and encoding the relative position. Some researchers [2] have demonstrated that 3D convolution is superior to CNN+LSTM on the video classification tasks. In contrast, Non-local operation does not encode the relative position,
+
+
+Fig. 1. Temporal appearance misalignment caused by (a) smaller bounding boxes, (b) bigger bounding boxes and (c) posture changes. (d) AP3D firstly uses APM to reconstruct the adjacent feature maps to guarantee the appearance alignment with respect to the central feature map and then performs 3D convolution
+
+but it can model long-range temporal dependencies. These methods are complementary to each other. In this paper, we mainly focus on improving existing 3D convolution to make it more suitable for video-based ReID.
+
+Recently, some researchers [19,17] have tried to introduce 3D convolution to video-based ReID. However, they neglect that, compared with other video-based tasks, a video sample in video-based ReID consists of a sequence of bounding boxes produced by a pedestrian detector [25,35] (see Figure 1), not the original video frames. Due to imperfections in the person detection algorithm, some resulting bounding boxes are smaller (see Figure 1 (a)) or bigger (see Figure 1 (b)) than the ground truths. In this case, because of the resizing operation before feeding into a neural network, the same spatial positions in adjacent frames may belong to different body parts, and the same body parts in adjacent frames may be scaled to different sizes. Even when the detection results are accurate, the misalignment problem may still exist due to posture changes of the target person (see Figure 1 (c)). Note that one 3D convolution kernel processes the features at the same spatial position in adjacent frames into one value. When temporal appearance misalignment exists, 3D convolution may mix features belonging to different body parts in adjacent frames into one feature, which destroys the appearance representation of person videos. Since the performance of video-based ReID relies heavily on the appearance representation, this appearance destruction is harmful. Therefore, it is desirable to develop a new 3D convolution method that can model temporal relations on the premise of maintaining appearance representation quality.
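The destruction argument above can be reproduced numerically. The sketch below uses a tiny 1D "frame" and a uniform 3x1x1 temporal kernel (an assumption; real kernels are learned) to show how a one-pixel shift between frames smears a sharp appearance response, while aligned inputs preserve it.

```python
import numpy as np

frame = np.zeros((1, 6))
frame[0, 2] = 1.0                      # a "body part" response at column 2
shifted = np.roll(frame, 1, axis=1)    # same part under a shifted box: column 3

def temporal_conv(frames):
    # 3x1x1 uniform temporal convolution: average the SAME spatial
    # position across the three frames.
    return np.mean(np.stack(frames), axis=0)

misaligned = temporal_conv([frame, shifted, frame])   # mixes columns 2 and 3
aligned = temporal_conv([frame, frame, frame])        # preserves the response

peak_misaligned = float(misaligned.max())   # smeared peak: 2/3
peak_aligned = float(aligned.max())         # sharp peak: 1.0
```
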
+
+In this paper, we propose Appearance-Preserving 3D convolution (AP3D) to address the appearance destruction problem of existing 3D convolution. As shown in Figure 1 (d), AP3D is composed of an Appearance-Preserving Module (APM) and a 3D convolution kernel. For each central feature map, APM reconstructs its adjacent feature maps according to cross-pixel semantic similarity and guarantees temporal appearance alignment between the reconstructed and central feature maps. The reconstruction process of APM can be considered feature map registration between two frames. As for the problem of asymmetric appearance information (e.g., in Figure 1 (a), the first frame does not contain the foot region and thus cannot be aligned with the second frame perfectly), Contrastive Attention is proposed to find the unmatched regions between the reconstructed and central feature maps. Then, the learned attention mask is imposed on the reconstructed feature map to avoid error propagation. With APM guaranteeing appearance alignment, the following 3D convolution can model spatiotemporal information more effectively and enhance the video representation with higher discriminative ability and no appearance destruction. Consequently, the performance of video-based ReID can be greatly improved. Note that the learning process of APM is unsupervised. In other words, no extra correspondence annotations are required, and the model can be trained with identification supervision only.
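The APM reconstruction step can be sketched as a cross-pixel attention, in the spirit of (but much simpler than) the actual module: the one-hot "semantic" features, the similarity temperature, and the omission of Contrastive Attention are all simplifications. Given an adjacent map that is a spatial shuffle of the central map, attention weighted by central-to-adjacent similarity rebuilds an adjacent map aligned with the central frame.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

N = 6                                    # pixels per (flattened) feature map
central = 4.0 * np.eye(N)                # toy one-hot "semantic" feature per pixel
perm = np.array([2, 0, 1, 5, 3, 4])      # misalignment: a spatial shuffle
adjacent = central[perm]                 # same content, different positions

# Cross-pixel semantic similarity between central and adjacent pixels,
# then reconstruct the adjacent map so pixel i matches central pixel i.
sim = central @ adjacent.T / 2.0         # (N, N) scores; /2.0 is a toy temperature
weights = softmax(sim, axis=1)           # attention over adjacent pixels
reconstructed = weights @ adjacent       # (N, C), aligned with `central`

err_before = float(np.mean((adjacent - central) ** 2))       # large: misaligned
err_after = float(np.mean((reconstructed - central) ** 2))   # ~0: realigned
```
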
+
+The proposed AP3D can be easily combined with existing 3D ConvNets (e.g., I3D [2] and P3D [24]) just by replacing the original 3D convolution kernels with AP3Ds. Extensive ablation studies on two widely used datasets indicate that AP3D outperforms existing 3D convolution significantly. Using RGB information only and without any bells and whistles (e.g., optical flow, complex feature matching strategy), AP3D achieves state-of-the-art results on both datasets.
+
+In summary, the main contributions of our work lie in three aspects: (1) finding that existing 3D convolution is problematic for extracting appearance representation when misalignment exists; (2) proposing an AP3D method to address this problem by aligning the feature maps in pixel level according to semantic similarity before convolution operation; (3) achieving superior performance on video-based ReID compared with state-of-the-art methods.
+
+# 2 Related Work
+
+Video-based ReID. Compared with image-based ReID, the samples in video-based ReID contain more frames and additional temporal information. Therefore, some existing methods [17,32,4,22] attempt to model the additional temporal information to enhance the video representations. In contrast, other methods [21,18,27,3] extract video frame features using an image-based ReID model and explore how to integrate or match the multi-frame features. In this paper, we address video-based ReID by developing an improved 3D convolution model for better spatiotemporal feature representation.
+
+Temporal Information Modeling. The widely used temporal information modeling methods in computer vision include LSTM [10,23], 3D convolution [29,2], and the Non-local operation [33]. LSTM and 3D convolution are adept at modeling local temporal relations and encoding relative positions, while the Non-local operation can deal with long-range temporal relations; they are complementary to each other. Carreira and Zisserman [2] have demonstrated that 3D convolution outperforms CNN+LSTM on the video classification task. In this paper, we mainly improve the original 3D convolution to avoid the appearance destruction problem and also combine the proposed AP3D with existing 3D ConvNets.
+
+
+Fig. 2. The overall framework of the proposed AP3D. Each feature map of the input tensor is considered as a central feature map and its two neighbors are sampled as the corresponding adjacent feature maps. APM reconstructs the adjacent feature maps to guarantee appearance alignment with respect to the corresponding central feature maps, and then the 3D convolution is performed. Note that the temporal stride of the 3D convolution kernel is set to its temporal kernel size, so the output tensor has the same shape as the input tensor
+
+Image Registration. Transforming different images into the same coordinate system is called image registration [39,1]. These images may be obtained at different times, from different viewpoints, or in different modalities. The spatial relations between them may be estimated using rigid, affine, or complex deformation models. In the proposed method, the alignment operation of APM can be considered as feature map registration, where the feature maps are obtained at sequential times and the subject, a person, is non-rigid.
+
+# 3 Appearance-Preserving 3D Convolution
+
+In this section, we first illustrate the overall framework of the proposed AP3D. Then, the details of the core module, i.e. Appearance-Preserving Module (APM), are explained followed with discussion. Finally, we introduce how to combine AP3D with existing 3D ConvNets.
+
+# 3.1 The Framework
+
+3D convolution is widely used on the video classification task and achieves state-of-the-art performance. Recently, some researchers [19,17] introduced it to video-based ReID. However, they neglect that the performance of ReID is highly dependent on the appearance representation rather than the motion representation. Due to imperfect detection results or posture changes, appearance misalignment is unavoidable in video-based ReID samples. In this case, existing 3D convolutions, which process the same spatial position across adjacent frames as a whole, may destroy the appearance representation of person videos and are therefore harmful to ReID.
+
+Fig. 3. Visualization of (a) a central frame, (b) its adjacent frame and (c) similarity distributions with different scale factors $s$ ($s = 1, 2, 4$) on the adjacent feature maps. With a reasonable $s$, APM can accurately locate the corresponding region on the adjacent feature map w.r.t. the marked position on the central frame
+
+In this paper, we propose a novel AP3D method to address the above problem. The proposed AP3D is composed of an APM and a following 3D convolution. An example of AP3D with a $3 \times 3 \times 3$ convolution kernel is shown in Figure 2. Specifically, given an input tensor with $T$ frames, each frame is considered as the central frame. We first sample two neighbors for each frame and obtain $2T$ adjacent feature maps in total after zero padding. Secondly, APM reconstructs each adjacent feature map to guarantee appearance alignment with the corresponding central feature map. Then, we integrate the reconstructed adjacent feature maps and the original input feature maps to form a temporary tensor. Finally, the $3 \times 3 \times 3$ convolution with stride $(3,1,1)$ is performed, producing an output tensor with $T$ frames. With APM guaranteeing appearance alignment, the following 3D convolution can model temporal relations without appearance destruction. The details of APM are presented in the next subsection.
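The frame bookkeeping above can be sketched in a few lines of NumPy. This is a simplified illustration of the data layout only (the APM registration step is omitted, and `build_triplets` is a hypothetical helper name):

```python
import numpy as np

def build_triplets(x):
    """Interleave every central frame with its two zero-padded neighbors.

    x: array of shape (T, C, H, W). Returns (3T, C, H, W) ordered as
    [prev, center, next] per frame, so a 3x3x3 convolution with temporal
    stride 3 yields exactly T output frames, matching the input length.
    """
    pad = np.zeros_like(x[:1])
    padded = np.concatenate([pad, x, pad])          # zero-pad both ends
    idx = [t + d for t in range(x.shape[0]) for d in (0, 1, 2)]
    return padded[idx]

clip = np.random.rand(4, 8, 16, 8)                  # T=4 frames
tmp = build_triplets(clip)
assert tmp.shape == (12, 8, 16, 8)                  # 3T frames
```

Setting the temporal stride equal to the temporal kernel size is what keeps the output length equal to $T$, as the framework description requires.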
+
+# 3.2 Appearance-Preserving Module
+
+Feature Map Registration. The objective of APM is to reconstruct each adjacent feature map so that the same spatial position on the reconstructed and corresponding central feature maps belongs to the same body part. This can be considered a graph matching or registration task between each pair of feature maps. On one hand, since the human body is non-rigid, a simple affine transformation cannot achieve this goal. On the other hand, existing video-based ReID datasets do not provide extra correspondence annotations. Therefore, the registration is not straightforward.
+
+We notice that mid-level features from a ConvNet contain semantic information [1]. In general, features with the same appearance have higher cosine similarity, while features with different appearances have lower cosine similarity [1,13]. As shown in Figure 3, the red crosses indicate the same position on the central (Figure 3 (a)) and adjacent (Figure 3 (b)) frames, but they belong to different body parts. We compute the cross-pixel cosine similarities between the marked position on the central feature map and all positions on the adjacent feature map. After normalization, the similarity distribution is visualized in Figure 3 (c) ($s = 1$): the region with the same appearance is highlighted. Hence, in this paper, we locate the corresponding positions in adjacent frames according to the cross-pixel similarities to achieve feature map registration.
+
+Since the scales of the same body part on the adjacent feature maps may be different, one position on the central feature map may have several corresponding pixels on its adjacent feature map, and vice versa. Therefore, filling the corresponding position on the reconstructed feature map with only the most similar position on the original adjacent feature map is not accurate. To include all pixels with the same appearance, we compute the response $y_{i}$ at each position on the reconstructed adjacent feature map as a weighted sum of the features $x_{j}$ at all positions on the original adjacent feature map:
+
+$$
+y_{i} = \sum_{j} \frac{e^{f\left(c_{i}, x_{j}\right)}}{\sum_{k} e^{f\left(c_{i}, x_{k}\right)}}\, x_{j}, \tag{1}
+$$
+
+where $c_{i}$ is the feature on the central feature map with the same spatial position as $y_{i}$ and $f(c_{i},x_{j})$ is defined as the cosine similarity between $c_{i}$ and $x_{j}$ with a scale factor $s > 0$ :
+
+$$
+f\left(c_{i}, x_{j}\right) = s\, \frac{g\left(c_{i}\right) \cdot g\left(x_{j}\right)}{\left\| g\left(c_{i}\right) \right\| \left\| g\left(x_{j}\right) \right\|}, \tag{2}
+$$
+
+where $g(\cdot)$ is a linear transformation that maps the features to a low-dimensional space. The scale factor $s$ adjusts the range of the cosine similarities: a larger $s$ makes relatively high similarities even higher and relatively low similarities lower. As shown in Figure 3 (c), with a reasonable scale factor $s$, APM can locate the corresponding region on the adjacent feature map precisely. In this paper, we set the scale factor to 4.
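For concreteness, the registration of Eqs. (1)-(2) amounts to a softmax over scaled cosine similarities. The NumPy sketch below works on flattened feature maps and, as a simplifying assumption, omits the learned projection $g(\cdot)$:

```python
import numpy as np

def register(center, adjacent, s=4.0):
    """Reconstruct `adjacent` so each position matches the appearance at the
    same position of `center` (Eqs. (1)-(2); learned g(.) omitted).

    center, adjacent: (HW, C) flattened feature maps; s: scale factor."""
    cn = center / np.linalg.norm(center, axis=1, keepdims=True)
    an = adjacent / np.linalg.norm(adjacent, axis=1, keepdims=True)
    sim = s * (cn @ an.T)                    # scaled cosine similarity, Eq. (2)
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # softmax over adjacent positions
    return w @ adjacent                      # weighted sum, Eq. (1)

rng = np.random.default_rng(0)
feat = rng.standard_normal((6, 16))
# With a very large s the softmax is nearly one-hot, so registering a
# feature map against itself approximately recovers it.
recon = register(feat, feat, s=100.0)
assert np.allclose(recon, feat, atol=1e-2)
```

The self-registration check illustrates the role of $s$: as it grows, the weighted sum concentrates on the best-matching position, which is the behavior visualized in Figure 3 (c).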
+
+Contrastive Attention. Due to pedestrian detection errors, some regressed bounding boxes are smaller than the ground truths, so some body parts may be lost in the adjacent frames (see Figure 1 (a)). In this case, the adjacent feature maps cannot align with the central feature map perfectly. To avoid error propagation caused by imperfect registration, Contrastive Attention is proposed to find the unmatched regions between the reconstructed and central feature maps. The learned attention mask is then imposed on the reconstructed feature map. The final response $z_{i}$ at each position on the reconstructed feature map is defined as:
+
+$$
+z_{i} = \operatorname{ContrastiveAtt}\left(c_{i}, y_{i}\right) y_{i}. \tag{3}
+$$
+
+Here $\operatorname{ContrastiveAtt}(c_i, y_i)$ produces an attention value in $[0,1]$ according to the semantic similarity between $c_i$ and $y_i$:
+
+$$
+\operatorname{ContrastiveAtt}\left(c_{i}, y_{i}\right) = \operatorname{sigmoid}\left(w^{T}\left(\theta\left(c_{i}\right) \odot \phi\left(y_{i}\right)\right)\right), \tag{4}
+$$
+
+
+Fig. 4. The illustration of APM. The adjacent feature map is firstly reconstructed by feature map registration. Then a Contrastive Attention mask is multiplied with the reconstructed feature map to avoid error propagation caused by imperfect registration
+
+where $w$ is a learnable weight vector implemented by a $1 \times 1$ convolution, and $\odot$ denotes the Hadamard product. Since $c_{i}$ and $y_{i}$ come from the central and reconstructed feature maps respectively, we use two asymmetric mapping functions $\theta(\cdot)$ and $\phi(\cdot)$ to map them into a shared low-dimensional semantic space.
+
+The registration and contrastive attention of APM are illustrated in Figure 4. All three semantic mappings, i.e. $g$ , $\theta$ and $\phi$ , are implemented by $1 \times 1$ convolution layers. To reduce the computation, the output channels of these convolution layers are set to $C / 16$ .
+
+# 3.3 Discussion
+
+Relations between APM and Non-local. APM and Non-local (NL) operation can be viewed as two graph neural network modules. Both modules consider the feature at each position on feature maps as a node in graph and use weighted sum to estimate the feature. But they have many differences:
+
+(a) NL aims to use spatiotemporal information to enhance feature and its essence is graph convolution or self-attention on a spatiotemporal graph. In contrast, APM aims to reconstruct adjacent feature maps to avoid appearance destruction by the following 3D Conv. Its essence is graph matching or registration between two spatial graphs.
+(b) The weights in the weighted sum in NL are used for building dependencies between each pair of nodes only and do not have specific meaning. In contrast, APM defines the weights using cosine similarity with a reasonable scale factor, in order to find the positions with the same appearance on the adjacent feature maps accurately (see Figure 3).
+(c) After APM, the integrated feature maps in Figure 2 can still maintain spatiotemporal relative relations to be encoded by the following 3D Conv, while NL cannot.
+
+
+(d) Given a spatiotemporal graph with $N$ frames, the computational complexity of NL is $O(N^{2})$, while that of APM is only $O(N)$.
+
+Fig. 5. Residual block variants: (a) C2D, (b) AP-I3D, (c) AP-P3D-A, (d) AP-P3D-B, (e) AP-P3D-C. For the AP-I3D and AP-P3D Residual blocks, only the original temporal convolution kernels are replaced by AP3D
+
+Relations between Contrastive Attention and Spatial Attention. The Contrastive Attention in APM aims to find the unmatched regions between two frames to avoid error propagation caused by imperfect registration, while the widely used spatial attention [18] in ReID aims to locate more discriminative regions within each frame. In terms of formulation, Contrastive Attention takes two feature maps as input and is imposed on the reconstructed feature map, while Spatial Attention takes one feature map as input and is imposed on that map itself.
+
+# 3.4 Combining AP3D with I3D and P3D Blocks
+
+To leverage successful 3D ConvNet designs, we combine the proposed AP3D with I3D [2] and P3D [24] Residual blocks. Converting an I3D or P3D Residual block to its AP3D version only requires replacing the original temporal convolution kernel with AP3D of the same kernel size. The C2D, AP-I3D and AP-P3D versions of the Residual blocks are shown in Figure 5.
+
+# 4 AP3D for Video-based ReID
+
+To investigate the effectiveness of AP3D for video-based ReID, we use a 2D ConvNet (C2D) following [13] as our baseline method and extend it into an AP3D ConvNet with the proposed AP3D. The network architectures are described in Section 4.1, and the loss function we use is introduced in Section 4.2.
+
+# 4.1 Network Architectures
+
+C2D baseline. We use ResNet-50 [8] pre-trained on ImageNet [26] as the backbone and remove the down-sampling operation of stage$_5$ following [28] to enrich the granularity. Given an input video clip with $T$ frames, it outputs a tensor with shape $T \times H \times W \times 2048$. After spatial max pooling and temporal average pooling, a 2048-dimensional feature is produced. Before being fed into the classifier, the feature is normalized with BatchNorm [14] following [13]. The C2D baseline does not involve any temporal operations except the final temporal average pooling.
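The pooling pipeline just described reduces one clip to a single vector; a small sketch (BatchNorm and the classifier are omitted here):

```python
import numpy as np

def clip_feature(feat):
    """Collapse a (T, H, W, 2048) backbone output into one 2048-d clip
    feature: spatial max pooling per frame, then temporal average pooling."""
    per_frame = feat.max(axis=(1, 2))   # (T, 2048) spatial max pool
    return per_frame.mean(axis=0)       # (2048,)  temporal average pool

feat = np.random.rand(4, 16, 8, 2048)  # T=4 frames of H=16, W=8 feature maps
f = clip_feature(feat)
assert f.shape == (2048,)
```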
+
+AP3D ConvNet. We replace some 2D Residual blocks with AP3D Residual blocks to turn C2D into an AP3D ConvNet for spatiotemporal feature learning. Specifically, we investigate replacing one, half of, or all Residual blocks in one stage of ResNet; the results are reported in Section 5.4.
+
+# 4.2 Objective Function
+
+Following [30], we combine cross entropy loss and triplet loss [9] for spatiotemporal representation learning. Since cross entropy loss mainly optimizes the features in an angular subspace [31], we use cosine distance for the triplet loss to maintain consistency.
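A sketch of the triplet term with cosine distance; the margin value below is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def cosine_triplet_loss(anchor, pos, neg, margin=0.3):
    """Triplet loss using d(a, b) = 1 - cos(a, b), so the metric matches
    the angular subspace optimized by the cross entropy term."""
    def d(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, d(anchor, pos) - d(anchor, neg) + margin)

a = np.array([1.0, 0.0])
assert cosine_triplet_loss(a, a, np.array([0.0, 1.0])) == 0.0   # easy triplet
assert abs(cosine_triplet_loss(a, a, a) - 0.3) < 1e-9           # margin violated
```

Using $1-\cos$ rather than Euclidean distance makes the triplet constraint act on angles only, consistent with how the cross entropy term shapes the feature space.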
+
+# 5 Experiments
+
+# 5.1 Datasets and Evaluation Protocol
+
+Datasets. We evaluate the proposed method on three video-based ReID datasets, i.e. MARS [37], DukeMTMC-VideoReID [34] and iLIDS-VID [32]. Since MARS and DukeMTMC-VideoReID have fixed train/test splits, we perform ablation studies mainly on these two datasets for convenience. Besides, we report final results on iLIDS-VID to compare with state-of-the-art methods.
+
+Evaluation Protocol. We use the Cumulative Matching Characteristics (CMC) and mean Average Precision (mAP) [38] as the evaluation metrics.
+
+# 5.2 Implementation Details
+
+Training. In the training stage, for each video tracklet, we randomly sample 4 frames with a stride of 8 frames to form a video clip. Each batch contains 8 persons, each person with 4 video clips. We resize all the video frames to $256 \times 128$ pixels and use horizontal flip for data augmentation. As for the optimizer, Adam [15] with weight decay 0.0005 is adopted to update the parameters. We train the model for 240 epochs in total. The learning rate is initialized to $3 \times 10^{-4}$ and multiplied by 0.1 after every 60 epochs.
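The clip sampling rule (4 frames with a stride of 8) can be sketched as follows; the wrap-around handling for short tracklets is an assumption of this sketch, not specified in the paper:

```python
import numpy as np

def sample_clip(tracklet_len, num_frames=4, stride=8, rng=None):
    """Pick `num_frames` indices spaced `stride` apart from a random start.
    Tracklets shorter than the needed span wrap around (assumed policy)."""
    rng = rng or np.random.default_rng()
    span = (num_frames - 1) * stride + 1
    start = int(rng.integers(0, max(tracklet_len - span + 1, 1)))
    return [(start + i * stride) % tracklet_len for i in range(num_frames)]

idx = sample_clip(100, rng=np.random.default_rng(0))
assert len(idx) == 4 and all(b - a == 8 for a, b in zip(idx, idx[1:]))
```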
+
+Testing. In the test phase, for each video tracklet, we first split it into several 32-frame video clips. Then we extract the feature representation for each video clip and the final video feature is the averaged representation of all clips. After feature extraction, the cosine distances between the query and gallery features are computed, based on which the retrieval is performed.
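The test-time protocol above can be sketched as follows; `encode_clip` is a hypothetical stand-in for running the AP3D network on one clip:

```python
import numpy as np

def encode_clip(clip):
    # Hypothetical stand-in for the AP3D network: mean-pool frame features.
    return clip.mean(axis=0)

def tracklet_feature(frames, clip_len=32):
    """Split a tracklet into 32-frame clips, encode each clip, and average
    the clip features into the final video representation."""
    clips = [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]
    return np.mean([encode_clip(c) for c in clips], axis=0)

def cosine_distance(q, g):
    """Retrieval metric between query and gallery features."""
    return 1.0 - (q @ g) / (np.linalg.norm(q) * np.linalg.norm(g))

frames = np.random.rand(70, 2048)        # a 70-frame tracklet -> 3 clips
f = tracklet_feature(frames)
assert f.shape == (2048,)
assert cosine_distance(f, f) < 1e-9
```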
+
+Table 1. Comparison between AP3D and the original 3D convolution
+
+| Model | Param. (M) | GFLOPs | MARS top-1 | MARS mAP | Duke-Video top-1 | Duke-Video mAP |
+| --- | --- | --- | --- | --- | --- | --- |
+| C2D | 23.51 | 16.35 | 88.9 | 83.4 | 95.6 | 95.1 |
+| I3D | 27.64 | 19.37 | 88.6 | 83.0 | 95.4 | 95.2 |
+| AP-I3D | 27.68 | 19.48 | 90.1 | 84.8 | 96.2 | 95.4 |
+| P3D-A | 24.20 | 16.85 | 88.9 | 83.2 | 95.0 | 95.0 |
+| AP-P3D-A | 24.24 | 16.90 | 90.1 | 84.9 | 96.0 | 95.3 |
+| P3D-B | 24.20 | 16.85 | 88.8 | 83.0 | 95.4 | 95.3 |
+| AP-P3D-B | 24.24 | 16.96 | 89.9 | 84.7 | 96.4 | 95.9 |
+| P3D-C | 24.20 | 16.85 | 88.5 | 83.1 | 95.3 | 95.3 |
+| AP-P3D-C | 24.24 | 16.90 | 90.1 | 85.1 | 96.3 | 95.6 |
+
+# 5.3 Comparison with Related Approaches
+
+AP3D vs. original 3D convolution. To verify the effectiveness and generalization ability of the proposed AP3D, we implement I3D and P3D residual blocks using AP3D and the original 3D convolution, respectively. Then, we replace one 2D block with a 3D block for every 2 residual blocks in stage$_2$ and stage$_3$ of the C2D ConvNet, so 5 residual blocks are replaced in total. As shown in Table 1, compared with the C2D baseline, I3D and P3D show close or lower results due to appearance destruction. With APM aligning the appearance representation, the corresponding AP3D versions improve the performance significantly and consistently on both datasets, with few additional parameters and little extra computational complexity. Specifically, AP3D increases top-1 by about $1\%$ and mAP by about $2\%$ over I3D and P3D on the MARS dataset. Note that the mAP improvement on DukeMTMC-VideoReID is smaller than that on MARS. One possible explanation is that the bounding boxes of the video samples in DukeMTMC-VideoReID are manually annotated, so the appearance misalignment is less serious and the improvement from AP3D is less pronounced.
+
+Among these variants, AP-P3D-C achieves the best performance under most settings, so we conduct the following experiments based on AP-P3D-C (denoted as AP3D for short) unless otherwise noted.
+
+AP3D vs. Non-local. Both APM in AP3D and Non-local (NL) are graph-based methods. We insert the same 5 NL blocks into the C2D ConvNet and compare AP3D with NL in Table 2. It can be seen that, with fewer parameters and less computational complexity, AP3D outperforms NL on both datasets.
+
+For a fairer comparison, we also implement Contrastive Attention embedded Non-local (CA-NL) and the combination of NL and P3D (NL-P3D). As shown in Table 2, CA-NL achieves the same result as NL on MARS and is still inferior to AP3D. On DukeMTMC-VideoReID, the top-1 of CA-NL is even lower than that of NL. This is likely because the Contrastive Attention in APM is designed to avoid
+
+Table 2. Comparison with NL and other temporal information modeling methods
+
+| Model | Param. (M) | GFLOPs | MARS top-1 | MARS mAP | Duke-Video top-1 | Duke-Video mAP |
+| --- | --- | --- | --- | --- | --- | --- |
+| NL | 30.87 | 21.74 | 89.6 | 85.0 | 96.2 | 95.6 |
+| CA-NL | 32.75 | 21.92 | 89.6 | 85.0 | 95.9 | 95.6 |
+| NL-P3D | 31.56 | 22.17 | 89.9 | 84.8 | 96.2 | 95.5 |
+| AP3D | 24.24 | 16.90 | 90.1 | 85.1 | 96.3 | 95.6 |
+| NL-AP3D | 31.60 | 22.29 | 90.7 | 85.6 | 97.2 | 96.1 |
+| Deformable 3D Conv [5] | 27.75 | 19.53 | 88.5 | 81.9 | 95.2 | 95.0 |
+| CNN+LSTM [10] | 28.76 | 16.30 | 88.7 | 79.8 | 95.7 | 94.6 |
+
+error propagation caused by imperfect registration, whereas the essence of NL is graph convolution on a spatiotemporal graph, not graph registration, so NL cannot co-work with Contrastive Attention. Besides, since P3D cannot handle appearance misalignment in video-based ReID, NL-P3D shows results close to NL and is also inferior to AP3D. With APM aligning the appearance, NL-AP3D achieves a further improvement, demonstrating that AP3D and NL are complementary to each other.
+
+AP3D vs. other methods for temporal information modeling. We also compare AP3D with Deformable 3D convolution [5] and CNN+LSTM [10], using the same backbone and hyper-parameters for a fair comparison. As shown in Table 2, AP3D outperforms both methods significantly on both datasets, further demonstrating its effectiveness for learning temporal cues.
+
+# 5.4 Ablation Study
+
+Effective positions to place AP3D blocks. Table 3 compares the results of replacing a residual block with an AP3D block in different stages of the C2D ConvNet; in each stage, the second last residual block is replaced. The improvements from placing the AP3D block in stage$_2$ and stage$_3$ are similar. Notably, placing only one AP3D block in stage$_2$ or stage$_3$ surpasses placing 5 P3D blocks in stage$_{2,3}$. However, placing the AP3D block in stage$_1$ or stage$_4$ performs worse than the C2D baseline. A likely reason is that the low-level features in stage$_1$ provide insufficient semantic information, so APM cannot align the appearance representation well, while the features in stage$_4$ provide insufficient spatial information, so the gain from appearance alignment is also limited. Hence, we only consider replacing the residual blocks in stage$_2$ and stage$_3$.
+
+How many blocks should be replaced by AP3D? Table 3 also shows the results with more AP3D blocks. We investigate replacing 2 blocks (1 for each
+
+Table 3. The results of replacing different numbers of residual blocks in different stages with AP3D blocks
+
+| Model | Stage | Num. | MARS top-1 | MARS mAP | Duke-Video top-1 | Duke-Video mAP |
+| --- | --- | --- | --- | --- | --- | --- |
+| C2D | - | - | 88.9 | 83.4 | 95.6 | 95.1 |
+| P3D | stage$_{2,3}$ | 5 | 88.5 | 83.1 | 95.3 | 95.3 |
+| AP3D | stage$_{1}$ | 1 | 89.0 | 83.2 | 95.3 | 95.1 |
+| AP3D | stage$_{2}$ | 1 | 89.5 | 84.0 | 95.6 | 95.4 |
+| AP3D | stage$_{3}$ | 1 | 89.7 | 84.1 | 95.9 | 95.3 |
+| AP3D | stage$_{4}$ | 1 | 88.8 | 82.9 | 95.4 | 95.0 |
+| AP3D | stage$_{2,3}$ | 2 | 90.1 | 84.7 | 96.2 | 95.4 |
+| AP3D | stage$_{2,3}$ | 5 | 90.1 | 85.1 | 96.3 | 95.6 |
+| AP3D | stage$_{2,3}$ | 10 | 89.8 | 84.7 | 95.9 | 95.2 |
+
+Table 4. The results with different backbones
+
+| Backbone | Model | MARS top-1 | MARS mAP | Duke-Video top-1 | Duke-Video mAP |
+| --- | --- | --- | --- | --- | --- |
+| ResNet-18 | C2D | 86.9 | 79.0 | 93.7 | 92.9 |
+| ResNet-18 | P3D | 86.9 | 79.5 | 93.2 | 92.9 |
+| ResNet-18 | AP3D | 88.1 | 80.9 | 94.2 | 93.4 |
+| ResNet-34 | C2D | 87.5 | 80.9 | 94.6 | 93.6 |
+| ResNet-34 | P3D | 87.6 | 81.0 | 94.4 | 93.7 |
+| ResNet-34 | AP3D | 88.7 | 82.1 | 95.2 | 94.7 |
+
+Table 5. The results of AP3D with/without CA on MARS
+
+| Model | w/ CA? | top-1 | mAP |
+| --- | --- | --- | --- |
+| I3D | - | 88.6 | 83.0 |
+| AP-I3D | × | 89.7 | 84.7 |
+| AP-I3D | ✓ | 90.1 | 84.8 |
+| P3D | - | 88.5 | 83.1 |
+| AP-P3D | × | 89.6 | 84.8 |
+| AP-P3D | ✓ | 90.1 | 85.1 |
+
+stage), 5 blocks (half of the residual blocks in stage$_2$ and stage$_3$) and 10 blocks (all residual blocks in stage$_2$ and stage$_3$) in the C2D ConvNet. More AP3D blocks generally lead to higher performance: we argue that they enable more temporal communication, which can hardly be realized by the C2D model. The performance drop with 10 blocks may be due to overfitting caused by the excessive parameters.
+
+Effectiveness of AP3D across different backbones. We also investigate the effectiveness and generalization ability of AP3D across different backbones. Specifically, we replace half of the residual blocks in stage$_{2,3}$ of ResNet-18 and ResNet-34 with AP3D blocks. As shown in Table 4, AP3D improves the results of both architectures significantly and consistently on both datasets. In particular, on the MARS dataset, AP3D-ResNet-18 is superior not only to its ResNet-18 counterparts (C2D and P3D) but also to the deeper ResNet-34, which has almost double the parameters and computational complexity. This comparison shows that the effectiveness of AP3D does not rely on additional parameters or computational load.
+
+
+Fig. 6. The results with different $s$ on the MARS dataset
+
+
+Fig. 7. The visualization of the original and the reconstructed feature maps after APM
+
+The effectiveness of Contrastive Attention. As described in Section 3.2, we use Contrastive Attention to avoid error propagation of imperfect registration caused by asymmetric appearance information. To verify its effectiveness, we reproduce AP3D with and without Contrastive Attention (CA); the results on MARS, whose bounding boxes are produced by a pedestrian detector, are shown in Table 5. Without Contrastive Attention, AP-I3D and AP-P3D still improve the I3D and P3D baselines by a considerable margin. With Contrastive Attention applied to the reconstructed feature maps, the results of AP-I3D and AP-P3D are further improved.
+
+The influence of the scale factor $s$ . As discussed in Section 3.2, the larger the scale factor $s$ , the higher the weights of pixels with high similarity. We show the experimental results with varying $s$ on MARS dataset in Figure 6. It can be seen that AP3D with different scale factors consistently improves over the baseline and the best performance is achieved when $s = 4$ .
+
+# 5.5 Visualization
+
+We select some misaligned samples and visualize the original feature maps and the reconstructed feature maps in stage$_3$ after APM in Figure 7. Before APM, the highlighted regions of the central and adjacent feature maps mainly focus on their own foregrounds and are misaligned. After APM, the highlighted regions of the reconstructed feature maps are aligned w.r.t. the foreground of the corresponding central frame. This further validates the alignment mechanism of APM.
+
+# 5.6 Comparison with State-of-the-Art Methods
+
+We compare the proposed method with state-of-the-art video-based ReID methods which use the same backbone on MARS, DukeMTMC-VideoReID, and
+
+Table 6. Comparison with state-of-the-art methods on the MARS, DukeMTMC-VideoReID and iLIDS-VID datasets. 'Flow' denotes optical flow and 'Att.' represents attributes
+
+| Method | Modality | MARS top-1 | MARS mAP | Duke-Video top-1 | Duke-Video mAP | iLIDS-VID top-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| EUG [34] | RGB | 80.8 | 67.4 | 83.6 | 78.3 | - |
+| DuATM [27] | RGB | 81.2 | 67.7 | - | - | - |
+| DRSA [18] | RGB | 82.3 | 65.8 | - | - | 80.2 |
+| TKP [7] | RGB | 84.0 | 73.3 | 94.0 | 91.7 | - |
+| M3D [17] | RGB | 84.4 | 74.1 | - | - | 74.0 |
+| Snippet [3] | RGB + Flow | 86.3 | 76.1 | - | - | 85.4 |
+| STA [6] | RGB | 86.3 | 80.8 | 96.2 | 94.9 | - |
+| AttDriven [36] | RGB + Att. | 87.0 | 78.2 | - | - | 86.3 |
+| GLTR [16] | RGB | 87.0 | 78.5 | 96.3 | 93.7 | 86.0 |
+| VRSTC [13] | RGB | 88.5 | 82.3 | 95.0 | 93.5 | 83.4 |
+| NVAN [20] | RGB | 90.0 | 82.8 | 96.3 | 94.9 | - |
+| AP3D | RGB | 90.1 | 85.1 | 96.3 | 95.6 | 86.7 |
+| NL-AP3D | RGB | 90.7 | 85.6 | 97.2 | 96.1 | 88.7 |
+
+iLIDS-VID datasets. The results are summarized in Table 6. Note that the compared methods differ in many aspects, e.g., in the modalities they use. Nevertheless, using RGB only and with a simple feature integration strategy (i.e. temporal average pooling), the proposed AP3D surpasses all these methods consistently on the three datasets. In particular, AP3D achieves $85.1\%$ mAP on the MARS dataset. When combined with Non-local, a further improvement is obtained.
+
+# 6 Conclusion
+
+In this paper, we propose a novel AP3D method for video-based ReID. AP3D consists of an APM and a 3D convolution kernel. With APM guaranteeing appearance alignment across adjacent feature maps, the following 3D convolution can model temporal information while maintaining the quality of the appearance representation. In this way, the proposed AP3D addresses the appearance destruction problem of the original 3D convolution. AP3D is easy to combine with existing 3D ConvNets. Extensive experiments verify the effectiveness and generalization ability of AP3D, which surpasses state-of-the-art methods on three widely used datasets. As future work, we will extend AP3D into a basic operation in deep neural networks for various video-based recognition tasks.
+
+Acknowledgement This work is partially supported by Natural Science Foundation of China (NSFC): 61876171 and 61976203.
+
+# References
+
+1. Aberman, K., Liao, J., Shi, M., Lischinski, D., Chen, B., Cohen-Or, D.: Neural best-buddies: Sparse cross-domain correspondence. TOG (2018) 4, 5, 6
+2. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: CVPR (2017) 1, 3, 8
+3. Chen, D., Li, H., Xiao, T., Yi, S., Wang, X.: Video person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. In: CVPR (2018) 3, 14
+4. Chung, D., Tahboub, K., Delp, E.J.: A two stream siamese convolutional neural network for person re-identification. In: ICCV (2017) 3
+5. Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: ICCV (2017) 11
+6. Fu, Y., Wang, X., Wei, Y., Huang, T.: Sta: Spatial-temporal attention for large-scale video-based person re-identification. In: AAAI (2019) 14
+7. Gu, X., Ma, B., Chang, H., Shan, S., Chen, X.: Temporal knowledge propagation for image-to-video person re-identification. In: ICCV (2019) 14
+8. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) 8
+9. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. ArXiv:1703.07737 (2017) 9
+10. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation (1997) 1, 3, 11
+11. Hou, R., Chang, H., Ma, B., Shan, S., Chen, X.: Temporal complementary learning for video person re-identification. In: ECCV (2020) 1
+12. Hou, R., Ma, B., Chang, H., Gu, X., Shan, S., Chen, X.: Interaction-and-aggregation network for person re-identification. In: CVPR (2019) 1
+13. Hou, R., Ma, B., Chang, H., Gu, X., Shan, S., Chen, X.: Vrstc: Occlusion-free video person re-identification. In: CVPR (2019) 1, 6, 8, 9, 14
+4. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML (2015) 9
+5. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015) 9
+6. Li, J., Wang, J., Tian, Q., Gao, W., Zhang, S.: Global-local temporal representations for video person re-identification. In: ICCV (2019) 14
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/images.zip b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..27f0d6b861f755802eeb8296fb18065617958384
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fe8f5a39ec5dc765433aeee496d14c3b3d80e05504549ad89b39ccc39164a27
+size 488301
diff --git a/appearancepreserving3dconvolutionforvideobasedpersonreidentification/layout.json b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d3edf22f30801c8dc31fd72e151a2550c54fc62d
--- /dev/null
+++ b/appearancepreserving3dconvolutionforvideobasedpersonreidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef4be84baf9ca82925cee9834c88d193b26e1e8d11c7c7b758f94344ed5cd319
+size 379774
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_content_list.json b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fa4e1df318c341660b07a7842c4c3fceda004cfd
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba0d8ca5416d5f3dcef0be4b463be06d3c0ab3e3e58b57e015548270478553a3
+size 92828
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_model.json b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b4d0e501db95875cdcf7f6468417859a80752c5f
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3331ad8efcf4213b58abce1239896aa8a6a1ab4966c12b4693e9d660b4bcb0ef
+size 116500
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_origin.pdf b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71b336acbdad5e11f379f48deab6062623abef44
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/dba276a6-6b2d-4912-9ae4-349d74cb3edf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35e589725da834125dfe65f00bd8d2fd00ee0674121054ef404c1825695f30b6
+size 30844338
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/full.md b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..442a3845886091cb48325226c018d44ff63d647d
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/full.md
@@ -0,0 +1,357 @@
+# Arbitrary-Oriented Object Detection with Circular Smooth Label
+
+Xue Yang $^{1,2[0000-0002-7084-9101]}$ , Junchi Yan $^{1,2[0000-0001-9639-7679]\star}$
+
+$^{1}$ Department of Computer Science and Engineering, Shanghai Jiao Tong University $^{2}$ MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University {yangxue-2019-sjtu, yanjunchi}@sjtu.edu.cn
+
+Abstract. Arbitrary-oriented object detection has recently attracted increasing attention in vision for its importance in aerial imagery, scene text, face detection, etc. In this paper, we show that existing regression-based rotation detectors suffer from the problem of discontinuous boundaries, which is directly caused by angular periodicity or corner ordering. By a careful study, we find the root cause is that the ideal predictions are beyond the defined range. We design a new rotation detection baseline that addresses the boundary problem by transforming angular prediction from a regression problem into a classification task with little accuracy loss, whereby high-precision angle classification is devised in contrast to previous works using coarse granularity in rotation detection. We also propose a circular smooth label (CSL) technique to handle the periodicity of the angle and increase the error tolerance to adjacent angles. We further introduce four window functions in CSL and explore the effect of different window radius sizes on detection performance. Extensive experiments and visual analysis on two large-scale public datasets for aerial images, i.e. DOTA and HRSC2016, as well as the scene text datasets ICDAR2015 and MLT, show the effectiveness of our approach. The code is publicly available at https://github.com/Thinklab-SJTU/CSL_RetinaNet_Tensorflow.
+
+Keywords: Oriented Object Detection, Circular Smooth Label.
+
+# 1 Introduction
+
+Object detection is one of the fundamental tasks in computer vision. In particular, rotation detection has played a huge role in the field of aerial images [2,4,41,42,44], scene text [12, 18, 19, 24, 27, 49] and face [11, 33, 34]. The rotation detector can provide accurate orientation and scale information, which will be helpful in applications such as object change detection in aerial images and recognition of sequential characters for multi-oriented scene texts.
+
+Recently, a line of advanced rotation detectors evolved from classic detection algorithms [3, 7, 20, 21, 32] have been proposed. Among these methods, detectors based on region regression occupy the mainstream, and the representation
+
+of multi-oriented objects is achieved by rotated bounding boxes or quadrangles. Although these rotation detectors have achieved promising results, there are still some fundamental problems. Specifically, we note that both the five-parameter and the eight-parameter regression methods suffer from the problem of discontinuous boundaries, often caused by angular periodicity or corner ordering. However, the inherent reasons are not limited to the particular representation of the bounding box. In this paper, we argue that the root cause of boundary problems in regression-based methods is that the ideal predictions are beyond the defined range. Thus, the model's loss value suddenly increases at the boundary, so the model cannot obtain the prediction result in the simplest and most direct way, and additional, more complicated treatment is often needed. Therefore, these detectors often have difficulty in boundary conditions. For detection using rotated bounding boxes, the accuracy of angle prediction is critical: a slight angle deviation leads to a significant Intersection-over-Union (IoU) drop, resulting in inaccurate object detection, especially for objects with large aspect ratios.
+
+There have been efforts addressing the boundary problem. IoU-smooth L1 [44] loss introduces the IoU factor, and modular rotation loss [30] increases the boundary constraint to eliminate the sudden increase in boundary loss and reduce the difficulty of model learning. Yet these methods are still regression-based detection methods, and still have not solved the root cause as mentioned above.
+
+In this paper, we aim to find a more fundamental rotation detection baseline to solve the boundary problem. Specifically, we treat the prediction of the object angle as a classification problem to better constrain the prediction results, and we design a circular smooth label (CSL) to address the periodicity of the angle and increase the error tolerance between adjacent angles. Although the conversion from continuous regression to discrete classification loses some precision, the impact on the rotation detection task is negligible. We also introduce four window functions in CSL and explore the effect of different window radius sizes on detection performance. Through extensive experiments and visual analysis, we find that the CSL-based rotation detection algorithm is indeed a better baseline choice than angle-regression-based methods across different detectors and datasets.
+
+In summary, the main contributions of this paper are fourfold:
+
+- We summarize the boundary problems in different regression-based rotation detection methods [2, 4, 41, 42] and show the root cause is that the ideal predictions are beyond the defined range.
+- We design a new rotation detection baseline, which transforms angular prediction from a regression problem to a classification problem. Specifically, to our best knowledge, we devise the first high-precision angle (less than 1 degree) classification based pipeline in rotation detection, in contrast to previous coarse classification granularity (around 10-degree) methods [33]. Our method has little accuracy loss compared with regression-based methods and can effectively eliminate the boundary problem.
+- We also propose the circular smooth label (CSL) technique, as an independent module which can also be readily reused in existing regression based
+
+methods by replacing the regression with classification, to address angular prediction for boundary conditions and objects with large aspect ratio.
+
+- Extensive experimental results on DOTA and HRSC2016 show the state-of-the-art performance of our detector, and the efficacy of our CSL technique as an independent component has been verified across different detectors.
+
+# 2 Related Work
+
+Horizontal region object detection. Classic object detection aims to detect general objects in images with horizontal bounding boxes, and many high-performance general-purpose object detectors have been proposed. R-CNN [8] pioneers CNN-based detection. Subsequently, region-based models such as Fast R-CNN [7], Faster R-CNN [32], and R-FCN [3] are proposed, which improve detection speed while reducing computational storage. FPN [20] focuses on the scale variance of objects in images and proposes a feature pyramid network to handle objects at different scales. SSD [23], YOLO [31] and RetinaNet [21] are representative single-stage methods, whose single-stage structure gives them faster detection speeds. Compared to anchor-based methods, many anchor-free methods have become extremely popular in recent years. CornerNet [15], CenterNet [5] and ExtremeNet [48] attempt to predict keypoints of objects such as corners or extreme points, which are then grouped into bounding boxes. However, horizontal detectors do not provide accurate orientation and scale information, which poses problems in real applications such as object change detection in aerial images and recognition of sequential characters for multi-oriented scene texts.
+
+Arbitrary-oriented object detection. Aerial images and scene text are the main application scenarios of rotation detectors. Recent advances in multi-oriented object detection are mainly driven by the adaptation of classical object detection methods, using rotated bounding boxes or quadrangles to represent multi-oriented objects. Due to the complexity of remote sensing image scenes and the large number of small, cluttered and rotated objects, multi-stage rotation detectors remain dominant for their robustness. Among them, ICN [2], ROITransformer [4], SCRDet [41] and $\mathrm{R}^3\mathrm{Det}$ [41] are state-of-the-art detectors. Gliding Vertex [40] and RSDet [30] achieve more accurate object detection through quadrilateral regression prediction. For scene text detection, RRPN [27] employs a rotated RPN to generate rotated proposals and further performs rotated bounding box regression. TextBoxes++ [18] adopts vertex regression on SSD. RRD [19] further improves TextBoxes++ by decoupling classification and bounding box regression on rotation-invariant and rotation-sensitive features, respectively. Although regression-based arbitrary-oriented object detection methods occupy the mainstream, we have found that most of them have boundary problems because the ideal predictions can fall beyond the defined range. Therefore, we design a new rotation detection baseline, which basically eliminates the boundary problem by transforming angular prediction from a regression problem into a classification problem with little accuracy loss.
+
+
+Fig. 1. Architecture of the proposed rotation detector (RetinaNet as an embodiment). 'C' and 'T' represent the number of object and angle categories, respectively.
+
+Classification for orientation information. Obtaining orientation information through classification was first used for multi-view face detection with arbitrary rotation-in-plane (RIP) angles. Divide-and-conquer is adopted in [11], which uses several small neural networks to deal with small ranges of face appearance variation individually. In [33], a router network is first used to estimate each face candidate's RIP angle. PCN [34] progressively calibrates the RIP orientation of each face candidate and shrinks the RIP range by half in early stages; finally, PCN makes the accurate final decision for each face candidate to determine whether it is a face and predicts the precise RIP angle. In other research areas, [14] adopts ordinal regression for effective future motion classification, and [43] obtains the orientation information of ships by classifying the four sides. The above methods all obtain an approximate orientation range through classification, but they cannot be directly applied to scenarios that require precise orientation information, such as aerial images and scene text.
+
+# 3 Proposed Method
+
+We give an overview of our method as sketched in Figure 1. The embodiment is a single-stage rotation detector based on RetinaNet [21]. The figure shows a multi-tasking pipeline, including a regression-based prediction branch and a CSL-based prediction branch, to facilitate the comparison of the performance of the two methods. It can be seen from the figure that the CSL-based method learns the orientation and scale information of the object more accurately. It should be noted that the method proposed in this paper is applicable to most regression-based methods, which has been verified on the FPN [20] detector in our experiments.
+
+# 3.1 Regression-based Rotation Detection Method
+
+Parametric regression is currently a popular method for rotation object detection, mainly including five-parameter regression-based methods [4,12,27,41, 42,44] and eight-parameter regression-based methods [18,25,30,40]. The commonly used five-parameter regression-based methods realize arbitrary-oriented
+
+
+Fig. 2. Several definitions of bounding boxes.
+
+
+
+
+
+
+(a) Five-parameter method with $90^{\circ}$ angular range. (b) Five-parameter method with $180^{\circ}$ angular range. (c) Ordered quadrilateral representation.
+
+
+
+
+
+bounding box detection by adding an additional angle parameter $\theta$ . Figure 2(a) shows one rectangular definition $(x, y, w, h, \theta)$ with a $90^{\circ}$ angular range [27, 41, 42, 44]: $\theta$ denotes the acute angle to the x-axis, and the corresponding side is referred to as $w$ . It should be distinguished from another definition $(x, y, h, w, \theta)$ , illustrated in Figure 2(b), with a $180^{\circ}$ angular range [4, 27], whose $\theta$ is determined by the long side $(h)$ of the rectangle and the x-axis. The eight-parameter regression-based detectors directly regress the four corners $(x_{1}, y_{1}, x_{2}, y_{2}, x_{3}, y_{3}, x_{4}, y_{4})$ of the object, so the prediction is a quadrilateral. The key step of quadrilateral regression is to sort the four corner points in advance, which avoids a very large loss when the prediction is in fact correct, as shown in Figure 2(c).
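The two five-parameter conventions describe the same rectangle with different parameterizations. As an illustration, the sketch below converts a $(w, h, \theta)$ box, where $\theta$ is the angle of the $w$ side, into a long-side-based definition with a $180^{\circ}$ range; the function name and the normalization to $[-90, 90)$ are our own choices, not fixed by the paper:

```python
def to_long_side(w, h, theta):
    """Convert (w, h, theta) with theta = angle of the w side (degrees)
    to (long, short, angle-of-long-side) with the angle in [-90, 90)."""
    if w >= h:
        long_side, short_side, ang = w, h, theta
    else:
        # the h side is perpendicular to the w side
        long_side, short_side, ang = h, w, theta + 90
    ang = (ang + 90) % 180 - 90  # wrap into the 180-degree range
    return long_side, short_side, ang
```

Only the parameterization (and hence the regression target) differs between the two definitions, which is why the boundary cases they suffer from also differ.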
+
+# 3.2 Boundary Problem of Regression Method
+
+Although the parametric regression-based rotation detection method has achieved competitive performance in different vision tasks and has been a building block for a number of excellent detection methods, these methods essentially suffer from the discontinuous boundary problem [30, 44]. Boundary discontinuity problems are often caused by angular periodicity in the five-parameter method and corner ordering in the eight-parameter method, but there exists a more fundamental root cause regardless of the representation choice of the bounding box.
+
+The boundary discontinuity problem often makes the model's loss value suddenly increase at the boundary situation. Thus methods have to resort to particular and often complex tricks to mitigate this issue. Therefore, these detection methods are often inaccurate in boundary conditions. We describe the boundary problem in three typical categories of regression-based methods according to their different representation forms (the first two refer to the five-parameter methods):
+
+
+Fig. 3. The boundary problem of the three categories of regression-based methods: (a) the $90^{\circ}$ -regression-based method, (b) the $180^{\circ}$ -regression-based method, (c) the point-based method. The red solid arrow indicates the actual regression process, and the red dotted arrow indicates the ideal regression process.
+
+- $90^{\circ}$ -regression-based method, as sketched in Figure 3(a). It shows an ideal form of regression (the blue box rotates counterclockwise to the red box), but the loss in this situation is very large due to the periodicity of angle (PoA) and the exchangeability of edges (EoE); see the example in Figure 3(a) and Equations 3 and 4 for details. Therefore, the model has to regress in other, more complex forms (such as rotating the blue box clockwise to the gray box while scaling $w$ and $h$ ), increasing the difficulty of regression.
+- $180^{\circ}$ -regression-based method, as illustrated in Figure 3(b). Similarly, this method also has a problem of sharp increase of loss caused by the PoA at the boundary. The model will eventually choose to rotate the proposal a large angle clockwise to get the final predicted bounding box.
+- Point-based method, as shown in Figure 3(c). On further analysis, the boundary discontinuity problem still exists in the eight-parameter regression method due to the advance ordering of the corner points. Consider an eight-parameter regression in the boundary case: the ideal regression process should be $\{(a \to b), (b \to c), (c \to d), (d \to a)\}$ , but the actual regression process from the blue reference box to the green ground-truth box is $\{(a \to a), (b \to b), (c \to c), (d \to d)\}$ . In fact, this situation also belongs to PoA. By contrast, the actual and ideal regressions from the blue to the red bounding box are consistent.
+
+Some approaches have been proposed to solve these problems based on the above analysis. For example, the IoU-smooth L1 loss [44] introduces an IoU factor, and the modular rotation loss [30] adds a boundary constraint to eliminate the sudden increase in boundary loss and reduce the difficulty of model learning. However, these methods are still regression-based detection methods, and neither addresses the root cause. In this paper, we start from a new perspective and replace regression with classification to achieve better and more robust rotation detectors. We reproduce some classic regression-based rotation detectors and compare them visually under boundary conditions, as shown in Figure 4(a) to Figure 4(e). In contrast, CSL-based methods have no boundary problem, as shown in Figure 4(i).
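The loss spike at the boundary can be reproduced in a few lines. Assuming the smooth L1 loss and the angle target of Equation 3, two boxes that are physically only $2^{\circ}$ apart produce a large naive regression loss when they straddle the boundary of a $180^{\circ}$ definition (the variable names and concrete angles here are illustrative):

```python
import math

def smooth_l1(x, beta=1.0):
    # standard smooth L1 loss on a scalar difference
    x = abs(x)
    return 0.5 * x * x / beta if x < beta else x - 0.5 * beta

gt_angle, anchor_angle = 89.0, -89.0           # only 2 degrees apart physically
naive_target = (gt_angle - anchor_angle) * math.pi / 180   # Eq. 3 form of the target
ideal_target = 2.0 * math.pi / 180                         # the true angular gap
boundary_loss = smooth_l1(naive_target)
ideal_loss = smooth_l1(ideal_target)
# boundary_loss is several thousand times larger than ideal_loss,
# even though the two boxes are nearly identical
```

This is the PoA effect in miniature: the loss is computed on the parameter difference, not on the geometric difference between boxes.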
+
+
+Fig. 4. Comparison of five regression-based rotation detection methods and CSL in the boundary case. (a) RetinaNet-H [41]. (b) RetinaNet-R [41]. (c) FPN-H [20]. (d) $\mathbf{R}^3$ Det [41]. (e) IoU-Smooth L1 [44]. (f) $180^{\circ}$ -CSL-Pulse. (g) $180^{\circ}$ -CSL-Rectangular. (h) $180^{\circ}$ -CSL-Triangle. (i) $180^{\circ}$ -CSL-Gaussian. (j) $90^{\circ}$ -CSL-Gaussian. 'H' and 'R' represent horizontal and rotating anchors. Red dotted circles indicate some bad cases.
+
+# 3.3 Circular Smooth Label for Angular Classification
+
+The main cause of boundary problems in regression-based methods is that the ideal predictions are beyond the defined range. Therefore, we treat the prediction of the object angle as a classification problem to better constrain the prediction results. A simple solution is to use the object angle as its category label, with the number of categories determined by the angle range. Figure 5(a) shows the label setting for a standard classification problem (one-hot label encoding). The conversion from regression to classification can cause a certain accuracy loss. Taking the five-parameter method with a $180^{\circ}$ angle range as an example, each interval of $\omega$ degrees (default $\omega = 1^{\circ}$ ) corresponds to one category. We can calculate the maximum accuracy loss $\operatorname{Max}(loss)$ and the expected accuracy loss $E(loss)$ :
+
+$$
+\operatorname{Max}(loss) = \frac{\omega}{2}, \quad E(loss) = \int_{a}^{b} x \cdot \frac{1}{b - a}\, dx = \int_{0}^{\omega / 2} x \cdot \frac{1}{\omega / 2 - 0}\, dx = \frac{\omega}{4} \tag{1}
+$$
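As a quick sanity check on Equation 1 (our own sketch): if the true angle is uniformly distributed within a bin, the quantization error of rounding to the nearest class center is uniform on $[0, \omega/2]$, and a Monte-Carlo draw recovers the stated bounds:

```python
import random

omega = 1.0                       # degrees per class interval (paper default)
random.seed(0)
# offset of the true angle from its nearest class center
offsets = [random.uniform(-omega / 2, omega / 2) for _ in range(100_000)]
errors = [abs(o) for o in offsets]        # quantization (accuracy) loss
mean_err = sum(errors) / len(errors)      # approaches E(loss) = omega / 4
max_err = max(errors)                     # bounded by Max(loss) = omega / 2
```

The empirical mean matches $E(loss) = \omega/4$ and the maximum stays below $Max(loss) = \omega/2$.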
+
+Based on the above equations, one can see the loss is slight for a rotation detector. For example, when two rectangles with a $1:9$ aspect ratio differ by $0.25^{\circ}$ and $0.5^{\circ}$ (default expected and maximum accuracy loss), the Intersection over Union (IoU) between them only decreases by 0.02 and 0.05. However, one-hot label has two drawbacks for rotation detection:
+
+Fig. 5. Two kinds of labels for angular classification: (a) one-hot label; (b) circular smooth label. FL means focal loss [21]. Panel (a) illustrates that with ground truth $=$ one-hot(0), the predictions predict1 $\approx$ one-hot(1) and predict2 $\approx$ one-hot(-90) incur nearly equal losses, $\mathsf{FL}(\mathsf{gt} - \mathsf{predict1}) \approx \mathsf{FL}(\mathsf{gt} - \mathsf{predict2})$ .
+
+- The EoE problem still exists when the bounding box uses the $90^{\circ}$ angle-range definition. In addition, the $90^{\circ}$ definition has two different border cases (vertical and horizontal), while the $180^{\circ}$ definition has only the vertical border case.
+- A vanilla classification loss is agnostic to the angular distance between the predicted label and the ground-truth label, and is thus inappropriate for the nature of the angle prediction problem. As shown in Figure 5(a), when the ground truth is $0^{\circ}$ and the classifier predicts $1^{\circ}$ or $-90^{\circ}$ , the two prediction losses are the same, yet from a detection perspective predictions close to the ground truth should be penalized less.
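The second drawback above can be checked numerically. With a one-hot target, a softmax cross-entropy loss (used here for simplicity in place of focal loss; the toy 180-way classifier and its logits are our own illustration) depends only on the probability assigned to the ground-truth class, so a prediction $1^{\circ}$ off and one $90^{\circ}$ off are penalized identically:

```python
import math

def ce_loss(logits, gt_index):
    # softmax cross-entropy against a one-hot ground truth
    m = max(logits)
    log_z = m + math.log(sum(math.exp(p - m) for p in logits))
    return log_z - logits[gt_index]

num_cls, gt = 180, 0
# prediction A concentrates on class 1 (1 degree off the ground truth),
# prediction B concentrates on class 90 (90 degrees off)
logits_a = [8.0 if i == 1 else 0.0 for i in range(num_cls)]
logits_b = [8.0 if i == 90 else 0.0 for i in range(num_cls)]
# both losses are equal: the loss is blind to angular distance
```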
+
+Therefore, we design the circular smooth label (CSL) technique to obtain more robust angular prediction through classification without suffering from the boundary problems, i.e. EoE and PoA. It can be clearly seen from Figure 5(b) that CSL is a circular label encoding with periodicity, and the assigned label values are smooth within a certain tolerance. The expression of CSL is as follows:
+
+$$
+CSL(x) = \begin{cases} g(x), & \theta - r < x < \theta + r \\ 0, & \text{otherwise} \end{cases} \tag{2}
+$$
+
+where $g(x)$ is a window function. $r$ is the radius of the window function. $\theta$ represents the angle of the current bounding box. An ideal window function $g(x)$ is required to hold the following properties:
+
+- Periodicity: $g(x) = g(x + kT), k \in N$ . $T = 180 / \omega$ represents the number of bins into which the angle is divided, and the default value is 180.
+- Symmetry: $0 \leq g(\theta + \varepsilon) = g(\theta - \varepsilon) \leq 1, |\varepsilon| < r$ . $\theta$ is the center of symmetry.
+- Maximum: $g(\theta) = 1$ .
+- Monotonicity: $0 \leq g(\theta \pm \varepsilon) \leq g(\theta \pm \varsigma) \leq 1, |\varsigma| < |\varepsilon| < r$ . The function is monotonically non-increasing from the center point to both sides.
+
+We give four efficient window functions that satisfy the above four properties: pulse, rectangular, triangle, and Gaussian functions, as shown in Figure 5(b). Note that the label value is continuous at the boundary, and thanks to the periodicity of CSL there is no extra accuracy loss there. In addition, the one-hot label is equivalent to CSL when the window function is a pulse function or when the radius of the window function is very small.
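Equation 2 with the four window functions can be sketched as follows. The circular (wrap-around) bin distance implements the periodicity requirement; the Gaussian width (here $\sigma = r/3$) and the truncation at radius $r$ are our assumptions, not values fixed by the paper:

```python
import numpy as np

def csl_label(theta_bin, num_bins=180, r=6, window="gaussian"):
    """Circular smooth label (Eq. 2) for ground-truth angle bin `theta_bin`."""
    x = np.arange(num_bins)
    # circular distance from every bin to the ground-truth bin
    d = np.minimum(np.abs(x - theta_bin), num_bins - np.abs(x - theta_bin))
    if window == "pulse":            # equivalent to a one-hot label
        label = (d == 0).astype(float)
    elif window == "rectangular":
        label = (d <= r).astype(float)
    elif window == "triangle":
        label = np.clip(1.0 - d / r, 0.0, 1.0)
    elif window == "gaussian":
        label = np.exp(-d.astype(float) ** 2 / (2 * (r / 3.0) ** 2))
        label[d > r] = 0.0           # keep the support within the window radius
    else:
        raise ValueError(window)
    return label
```

For $\theta = 0$ the label assigns the same value to bins 1 and 179, which is exactly the tolerance to adjacent angles across the boundary that the one-hot label lacks.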
+
+# 3.4 Loss Function
+
+Our multi-tasking pipeline contains regression-based prediction branch and CSL-based prediction branch, to facilitate the performance comparison of the two methods on an equal footing. The regression of the bounding box is:
+
+$$
+t_{x} = (x - x_{a}) / w_{a}, \quad t_{y} = (y - y_{a}) / h_{a}
+$$
+
+$$
+t_{w} = \log (w / w_{a}), \quad t_{h} = \log (h / h_{a})
+$$
+
+$$
+t_{\theta} = (\theta - \theta_{a}) \cdot \pi / 180 \quad (\text{only for regression branch})
+$$
+
+$$
+t_{x}^{\prime} = (x^{\prime} - x_{a}) / w_{a}, \quad t_{y}^{\prime} = (y^{\prime} - y_{a}) / h_{a} \tag{3}
+$$
+
+$$
+t_{w}^{\prime} = \log (w^{\prime} / w_{a}), \quad t_{h}^{\prime} = \log (h^{\prime} / h_{a})
+$$
+
+$$
+t_{\theta}^{\prime} = (\theta^{\prime} - \theta_{a}) \cdot \pi / 180 \quad (\text{only for regression branch})
+$$
+
+where $x, y, w, h, \theta$ denote the box's center coordinates, width, height and angle, respectively. Variables $x, x_{a}, x'$ are for the ground-truth box, anchor box, and predicted box, respectively (likewise for $y, w, h, \theta$ ). The multi-task loss is:
+
+$$
+\begin{aligned} L = {} & \frac{\lambda_{1}}{N} \sum_{n = 1}^{N} obj_{n} \cdot \sum_{j \in \{x, y, w, h, \theta_{reg}\}} L_{reg}\left(v_{nj}^{\prime}, v_{nj}\right) \\ & + \frac{\lambda_{2}}{N} \sum_{n = 1}^{N} L_{CSL}\left(\theta_{n}^{\prime}, \theta_{n}\right) + \frac{\lambda_{3}}{N} \sum_{n = 1}^{N} L_{cls}\left(p_{n}, t_{n}\right) \end{aligned} \tag{4}
+$$
+
+where $N$ indicates the number of anchors and $obj_{n}$ is a binary value ( $obj_{n} = 1$ for foreground and $obj_{n} = 0$ for background; no regression for background). $v_{nj}^{\prime}$ denotes the predicted offset vector and $v_{nj}$ the target vector from the ground truth. $\theta_{n}, \theta_{n}^{\prime}$ denote the angle label and prediction, respectively. $t_n$ represents the object label, and $p_n$ is the class probability distribution computed by the sigmoid function. The hyper-parameters $\lambda_1, \lambda_2, \lambda_3$ control the trade-off and are set to $\{1, 0.5, 1\}$ by default. The classification losses $L_{cls}$ and $L_{CSL}$ are focal loss [21] or sigmoid cross-entropy loss, depending on the detector. The regression loss $L_{reg}$ is the smooth L1 loss as used in [7].
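For concreteness, the per-anchor targets of Equation 3 can be written out as below (a sketch; the tuple layout and degree units for $\theta$ are our assumptions):

```python
import math

def encode_targets(gt, anchor):
    """Regression targets of Eq. 3; boxes are (x, y, w, h, theta in degrees).
    The angle target is only used by the regression branch; the CSL branch
    instead classifies theta into discrete bins."""
    x, y, w, h, th = gt
    xa, ya, wa, ha, tha = anchor
    return (
        (x - xa) / wa,
        (y - ya) / ha,
        math.log(w / wa),
        math.log(h / ha),
        (th - tha) * math.pi / 180,
    )
```

An anchor that matches the ground truth exactly yields all-zero center and size targets, with only the angle term remaining.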
+
+# 4 Experiments
+
+We use Tensorflow [1] to implement the proposed methods on a server with a GeForce RTX 2080 Ti and 11 GB of memory. Unless otherwise specified, the experiments in this paper are initialized with ResNet50 [10]. Weight decay and momentum are set to 0.0001 and 0.9, respectively. We employ MomentumOptimizer over 4 GPUs with a total of 4 images per minibatch (1 image per GPU). At each pyramid level we use anchors at seven aspect ratios $\{1, 1/2, 2, 1/4, 4, 1/6, 6\}$ , and the remaining anchor settings are the same as in the original RetinaNet and FPN.
+
+# 4.1 Benchmarks and Protocols
+
+DOTA [39] is one of the largest aerial image detection benchmarks. There are two detection tasks for DOTA: horizontal bounding boxes (HBB) and oriented bounding boxes (OBB). DOTA contains 2,806 aerial images from different sensors and platforms, and image sizes range from around $800 \times 800$ to $4,000 \times 4,000$ pixels. The fully annotated DOTA benchmark contains 15 common object categories and 188,282 instances, each of which is labeled by an arbitrary quadrilateral. Half of the original images are randomly selected as the training set, 1/6 as the validation set, and 1/3 as the testing set. We divide the training and validation images into $600 \times 600$ subimages with an overlap of 150 pixels and scale them to $800 \times 800$ . With all these processes, we obtain about 27,000 patches.
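The cropping scheme (600×600 windows with 150-pixel overlap) can be sketched per image axis as follows; the helper name and the rule of clamping the last window to the image border are our assumptions about a standard sliding-window crop, not details specified by the paper:

```python
def patch_origins(size, patch=600, overlap=150):
    """Top-left offsets of sliding windows along one image axis."""
    stride = patch - overlap
    origins, pos = [], 0
    while True:
        if pos + patch >= size:          # final window: clamp to the border
            origins.append(max(size - patch, 0))
            break
        origins.append(pos)
        pos += stride
    return origins
```

Crossing the offsets of the x and y axes gives the grid of subimages; for an 800-pixel axis this yields offsets [0, 200].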
+
+ICDAR2015 [13] is the Challenge 4 of ICDAR 2015 Robust Reading Competition, which is commonly used for oriented scene text detection and spotting. This dataset includes 1,000 training images and 500 testing images. In training, we first train our model using 9,000 images from ICDAR 2017 MLT training and validation datasets, then we use 1,000 training images to fine-tune our model.
+
+ICDAR 2017 MLT [28] is a multi-lingual text dataset, which includes 7,200 training images, 1,800 validation images and 9,000 testing images. The dataset is composed of complete scene images in 9 languages, and text regions in this dataset can be in arbitrary orientations, being more diverse and challenging.
+
+HRSC2016 [26] contains images from two scenarios including ships on sea and ships close inshore. All images are collected from six famous harbors. The training, validation and test set include 436, 181 and 444 images, respectively.
+
+All models are trained for 20 epochs in total (the number of image iterations per epoch is $e$ ), and the learning rate is reduced tenfold at epochs 12 and 16. The initial learning rates for RetinaNet and FPN are 5e-4 and 1e-3, respectively. The values of $e$ for DOTA, ICDAR2015, MLT and HRSC2016 are 27k, 10k, 10k and 5k, and are doubled if data augmentation and multi-scale training are used.
+
+# 4.2 Ablation Study
+
+Comparison of four window functions. Table 1 shows the performance comparison of the four window functions on the DOTA dataset. It also details the accuracy of the five categories with larger aspect ratio and more border cases in the dataset. We believe that these categories can better reflect the advantages of our method. In general, the Gaussian window function performs best, while the pulse function performs worst because it has not learned any orientation and scale information. Figures 4(f)-4(i) show the visualization of the four window functions. According to Figure 4(i)-4(j), the $180^{\circ}$ -CSL-based method obviously
+
+Table 1. Comparison of four window functions on the DOTA dataset. 5-mAP refers to the mean average precision of the five categories with large aspect ratio. mAP means mean average precision of all 15 categories. EoE indicates the issue of exchangeability of edges and a tick in table means the method suffers from EoE.
+
+| Based Method | Angle Range | EoE | Label Mode | BR | SV | LV | SH | HA | 5-mAP | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RetinaNet-H (CSL-Based) | 90 | ✓ | Pulse | 9.80 | 28.04 | 11.42 | 18.43 | 23.35 | 18.21 | 39.52 |
+| | 90 | ✓ | Rectangular | 37.62 | 54.28 | 48.97 | 62.59 | 50.26 | 50.74 | 58.86 |
+| | 90 | ✓ | Triangle | 37.25 | 54.45 | 44.01 | 60.03 | 52.20 | 49.59 | 60.15 |
+| | 90 | ✓ | Gaussian | 41.03 | 59.63 | 52.57 | 64.56 | 54.64 | 54.49 | 63.51 |
+| | 180 | | Pulse | 13.95 | 16.79 | 6.50 | 16.80 | 22.48 | 15.30 | 42.06 |
+| | 180 | | Rectangular | 36.14 | 60.80 | 50.01 | 65.75 | 53.17 | 53.17 | 61.98 |
+| | 180 | | Triangle | 32.69 | 47.25 | 44.39 | 54.11 | 41.90 | 44.07 | 57.94 |
+| | 180 | | Gaussian | 41.16 | 63.68 | 55.44 | 65.85 | 55.23 | 56.21 | 64.50 |
+
+Table 2. Comparison of detection results under different radii.
+
+| Based Method | Angle Range | Label Mode | r=0 | r=2 | r=4 | r=6 | r=8 |
+|---|---|---|---|---|---|---|---|
+| RetinaNet-H (CSL-Based) | 180 | Gaussian | 40.78 | 59.23 | 62.12 | 64.50 | 63.99 |
+| FPN-H (CSL-Based) | 180 | Gaussian | 48.08 | 70.18 | 70.09 | 70.92 | 69.75 |
+
+has better boundary prediction, since the EoE problem still exists in the $90^{\circ}$ -CSL-based method. The visualization results in Figure 4 are consistent with the quantitative results in Table 1.
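
As a concrete illustration, the four window functions can each be generated as a one-dimensional label vector over the angle classes. The sketch below is our own illustrative code (not from the paper), assuming 180 one-degree classes and the default radius $r=6$; it makes the circular wrap-around explicit:

```python
import numpy as np

def circular_smooth_label(angle, num_classes=180, radius=6, window="gaussian"):
    """Label vector for one ground-truth angle class in [0, num_classes).

    Distances between classes are measured on a circle, so class 179 is
    adjacent to class 0 -- this is what removes the boundary discontinuity
    of regression-based angle encodings.
    """
    d = np.abs(np.arange(num_classes) - angle)
    d = np.minimum(d, num_classes - d)          # circular distance
    if window == "pulse":                       # one-hot: no tolerance at all
        return (d == 0).astype(float)
    if window == "rectangular":                 # flat weight inside the radius
        return (d <= radius).astype(float)
    if window == "triangle":                    # linear decay inside the radius
        return np.clip(1.0 - d / radius, 0.0, 1.0)
    return np.exp(-(d ** 2) / (2 * radius ** 2))  # gaussian decay, peak 1
```

Note that for an angle of 0, the Gaussian label assigns identical weight to classes 1 and 179; this is exactly the tolerance to adjacent angles that a plain one-hot (pulse) encoding lacks.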
+
+Suitable window radius. The Gaussian window function shows the best performance, so here we study the effect of its radius. When the radius is too small, the window function degenerates toward a pulse function; conversely, when it is too large, all predictable results become harder to discriminate. We therefore sweep the radius from 0 to 8; Table 2 shows the performance of the two detectors over this range. Although both detectors achieve their best performance with a radius of 6, the single-stage detection method is more sensitive to the radius. We speculate that the instance-level feature extraction (e.g., RoI Pooling [7] and RoI Align [9]) in the two-stage detector is stronger than the image-level feature extraction in the single-stage detector, so the two-stage method can better distinguish two nearby angles. Figure 6 compares visualizations at different window radii. When the radius is 0, the detector cannot learn any orientation or scale information, consistent with the behavior of the pulse function above. As the radius grows toward the optimum, the detector can learn angles in any direction.
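
This trade-off can be made quantitative. As an illustrative check (our own code, not an experiment from the paper), the cosine similarity between the Gaussian labels of two angles one degree apart grows with the radius: a small radius approaches the pulse regime, where nothing about neighbors is learned, while a large radius blurs neighboring angles together:

```python
import numpy as np

def gaussian_label(angle, num_classes=180, radius=6):
    """Gaussian circular smooth label vector for one angle class."""
    d = np.abs(np.arange(num_classes) - angle)
    d = np.minimum(d, num_classes - d)          # circular distance
    return np.exp(-(d ** 2) / (2 * radius ** 2))

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity between the labels of angles 0 and 1 degree for several radii:
# small for tiny radii (pulse-like) and approaching 1 for large radii
# (neighbouring angles become indistinguishable).
similarities = {r: cosine(gaussian_label(0, radius=r), gaussian_label(1, radius=r))
                for r in (1, 2, 6, 20)}
```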
+
+Classification is better than regression. Three rotation detectors in Table 3, RetinaNet-H, RetinaNet-R and FPN-H, are used to compare the performance of CSL-based and regression-based methods. The former two are single-stage detectors that differ in anchor format; the latter is a classic two-stage detection method. It can be clearly seen that CSL has better detection ability for objects with large aspect ratios and more boundary cases. It should also be noted that CSL is designed to solve the boundary problem, whose proportion in the entire dataset is relatively small, so the overall improvement (mAP) is less pronounced than on the five listed categories (5-mAP).
+
+Fig. 6. Visualization of detection results (RetinaNet-H, CSL-based) under different radii: (a) radius=0; (b) radius=2; (c) radius=4; (d) radius=6; (e) radius=8. The red bounding boxes indicate that no orientation or scale information has been learned; the green bounding boxes are correct detection results.
+
+Table 3. Comparison between CSL-based and regression-based methods on DOTA. Improvements by the CSL-based methods are obtained under the same configuration.
+
+| Based Method | Angle Range | Angle Pred. | PoA | EoE | Label Mode | BR | SV | LV | SH | HA | 5-mAP | mAP |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| RetinaNet-H | 90 | regression-based | ✓ | ✓ | - | 41.15 | 53.75 | 48.30 | 55.92 | 55.77 | 50.98 | 63.18 |
+| | 90 | CSL-based | | ✓ | Gaussian | 41.03 | 59.63 | 52.57 | 64.56 | 54.64 | 54.49 (+3.51) | 63.51 (+0.33) |
+| | 180 | regression-based | ✓ | | - | 38.47 | 54.15 | 47.89 | 60.87 | 53.63 | 51.00 | 64.10 |
+| | 180 | CSL-based | | | Gaussian | 41.16 | 63.68 | 55.44 | 65.85 | 55.23 | 56.21 (+5.21) | 64.50 (+0.40) |
+| RetinaNet-R | 90 | regression-based | ✓ | ✓ | - | 32.27 | 64.64 | 71.01 | 68.62 | 53.52 | 58.01 | 62.76 |
+| | 90 | CSL-based | | ✓ | Gaussian | 35.14 | 63.21 | 73.92 | 69.49 | 55.53 | 59.46 (+1.45) | 65.45 (+2.69) |
+| FPN-H | 90 | regression-based | ✓ | ✓ | - | 44.78 | 70.25 | 71.13 | 68.80 | 54.27 | 61.85 | 68.25 |
+| | 90 | CSL-based | | ✓ | Gaussian | 45.46 | 70.22 | 71.96 | 76.06 | 54.84 | 63.71 (+1.86) | 69.02 (+0.77) |
+| | 180 | regression-based | ✓ | | - | 45.88 | 69.37 | 72.06 | 72.96 | 62.31 | 64.52 | 69.45 |
+| | 180 | CSL-based | | | Gaussian | 47.90 | 69.66 | 74.30 | 77.06 | 64.59 | 66.70 (+2.18) | 70.92 (+1.47) |
+
+Table 4. Comparison between CSL-based and regression-based methods on the text datasets ICDAR2015 and MLT, and another remote sensing dataset, HRSC2016. 07 and 12 denote the 2007 and 2012 evaluation metrics, respectively.
+
+| Method | ICDAR2015 Recall | ICDAR2015 Precision | ICDAR2015 Hmean | MLT Recall | MLT Precision | MLT Hmean | HRSC2016 mAP (07) | HRSC2016 mAP (12) |
+|---|---|---|---|---|---|---|---|---|
+| FPN-regression-based | 81.81 | 83.07 | 82.44 | 56.15 | 80.26 | 66.08 | 88.33 | 94.70 |
+| FPN-CSL-based | 83.00 | 84.30 | 83.65 (+1.21) | 56.72 | 80.77 | 66.64 (+0.56) | 89.62 (+1.29) | 96.10 (+1.40) |
+
+Overall, the CSL-based rotation detection algorithm is indeed a better baseline choice than the angle regression-based method.
+
+CSL performance on other datasets. To further verify that the CSL-based method is a better baseline model, we also evaluate it on other datasets, including the text datasets ICDAR2015 and MLT, and another remote sensing dataset, HRSC2016. These three are single-class object detection datasets whose objects have large aspect ratios. Although boundary cases still account for a small proportion of these datasets, CSL still shows a clear performance advantage. As shown in Table 4, the CSL-based method improves by $1.21\%$ , $0.56\%$ , and $1.29\%$ ( $1.4\%$ ), respectively, over the regression-based method under the same experimental configuration. These results provide strong support for the versatility of the CSL-based method.
+
+Fig. 7. Angular feature visualization of the 90-CSL-FPN detector on the DOTA dataset: (a) $\mathrm{bin} = 90$ ; (b) $\mathrm{bin} = 15$ ; (c) $\mathrm{bin} = 9$ ; (d) $\mathrm{bin} = 6$ . The entire angular range is divided into several bins, with the number of bins differing between columns. The two rows show two-dimensional feature visualizations for the pulse and Gaussian window functions, respectively. Each point represents a RoI of the test set, labelled with the index of the bin it belongs to.
+
+Visual analysis of angular features. By zooming in on part of Figure 4(i), we find that the predictions for boundary cases become continuous (for example, two large vehicles in the same direction are predicted as $90^{\circ}$ and $-88^{\circ}$ , respectively). This phenomenon reflects the design purpose of CSL: the labels are periodic (circular), and predictions of adjacent angles have a certain tolerance. To confirm that the angle classifier has indeed learned this property, we visually analyze the angular features of each region of interest (RoI) in the FPN detector by principal component analysis (PCA) [38], as shown in Figure 7. The detector does not learn orientation information well when we use the pulse window function: as seen in the first row of Figure 7, the feature distribution of the RoIs is relatively random, and a few angles dominate the predictions. For the Gaussian function, the feature distribution clearly forms a ring structure, and the features of adjacent angles are close to each other with a certain overlap. It is this property that helps CSL-based detectors eliminate boundary problems and accurately obtain the orientation and scale information of the object.
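
The two-dimensional projection behind Figure 7 is standard PCA and takes only a few lines of NumPy. In the sketch below, the per-RoI angular feature vectors (a detector-specific extraction step) are stood in for by random placeholder data:

```python
import numpy as np

def pca_project(features, k=2):
    """Project (n_rois, dim) feature vectors onto their top-k principal axes."""
    x = features - features.mean(axis=0)        # centre the data
    # right singular vectors of the centred data are the principal directions
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T

# placeholder features: one 2-D point per RoI, to be coloured by the
# angle bin each RoI belongs to (as in Figure 7)
rng = np.random.default_rng(0)
coords = pca_project(rng.normal(size=(500, 256)))
```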
+
+# 4.3 Comparison with the State-of-the-Art
+
+Results on DOTA. Although CSL is only a theoretical improvement over the original regression-based rotation detection methods, it can still show competitive performance with the widely used data augmentation and multi-scale training and testing. We choose DOTA as the main validation dataset due to the complexity of remote sensing scenes and the large number of small, cluttered and rotated objects. Our data augmentation methods mainly
+
+Table 5. Detection accuracy on each category (AP) and overall performance (mAP) on DOTA. Note $\mathrm{O}^2$ -DNet uses Hourglass104 [29] as backbone.
+
+| Method | Backbone | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FR-O [39] | ResNet101 | 79.09 | 69.12 | 17.17 | 63.49 | 34.20 | 37.16 | 36.20 | 89.19 | 69.60 | 58.96 | 49.4 | 52.52 | 46.69 | 44.80 | 46.30 | 52.93 |
| IENet [22] | ResNet101 | 80.20 | 64.54 | 39.82 | 32.07 | 49.71 | 65.01 | 52.58 | 81.45 | 44.66 | 78.51 | 46.54 | 56.73 | 64.40 | 64.24 | 36.75 | 57.14 |
| R-DFPN [42] | ResNet101 | 80.92 | 65.82 | 33.77 | 58.94 | 55.77 | 50.94 | 54.78 | 90.33 | 66.34 | 68.66 | 48.73 | 51.76 | 55.10 | 51.32 | 35.88 | 57.94 |
| R²CNN [12] | ResNet101 | 80.94 | 65.67 | 35.34 | 67.44 | 59.92 | 50.91 | 55.81 | 90.67 | 66.92 | 72.39 | 55.06 | 52.23 | 55.14 | 53.35 | 48.22 | 60.67 |
| RRPN [27] | ResNet101 | 88.52 | 71.20 | 31.66 | 59.30 | 51.85 | 56.19 | 57.25 | 90.81 | 72.84 | 67.38 | 56.69 | 52.84 | 53.08 | 51.94 | 53.58 | 61.01 |
| ICN [2] | ResNet101 | 81.40 | 74.30 | 47.70 | 70.30 | 64.90 | 67.80 | 70.00 | 90.80 | 79.10 | 78.20 | 53.60 | 62.90 | 67.00 | 64.20 | 50.20 | 68.20 |
| RADet [17] | ResNeXt101 | 79.45 | 76.99 | 48.05 | 65.83 | 65.46 | 74.40 | 68.86 | 89.70 | 78.14 | 74.97 | 49.92 | 64.63 | 66.14 | 71.58 | 62.16 | 69.09 |
| RoI-Transformer [4] | ResNet101 | 88.64 | 78.52 | 43.44 | 75.92 | 68.81 | 73.68 | 83.59 | 90.74 | 77.27 | 81.46 | 58.39 | 53.54 | 62.83 | 58.93 | 47.67 | 69.56 |
| P-RSDet [47] | ResNet101 | 89.02 | 73.65 | 47.33 | 72.03 | 70.58 | 73.71 | 72.76 | 90.82 | 80.12 | 81.32 | 59.45 | 57.87 | 60.79 | 65.21 | 52.59 | 69.82 |
| CAD-Net [45] | ResNet101 | 87.8 | 82.4 | 49.4 | 73.5 | 71.1 | 63.5 | 76.7 | 90.9 | 79.2 | 73.3 | 48.4 | 60.9 | 62.0 | 67.0 | 62.2 | 69.9 |
| O²-DNet [37] | Hourglass104 | 89.31 | 82.14 | 47.33 | 61.21 | 71.32 | 74.03 | 78.62 | 90.76 | 82.23 | 81.36 | 60.93 | 60.17 | 58.21 | 66.98 | 61.03 | 71.04 |
| SCRDet [44] | ResNet101 | 89.98 | 80.65 | 52.09 | 68.36 | 68.36 | 60.32 | 72.41 | 90.85 | 87.94 | 86.86 | 65.02 | 66.68 | 66.25 | 68.24 | 65.21 | 72.61 |
| SARD [36] | ResNet101 | 89.93 | 84.11 | 54.19 | 72.04 | 68.41 | 61.18 | 66.00 | 90.82 | 87.79 | 86.59 | 65.65 | 64.04 | 66.68 | 68.84 | 60.83 | 72.95 |
| FADet [16] | ResNet101 | 90.21 | 79.58 | 45.49 | 76.41 | 73.18 | 68.27 | 79.56 | 90.83 | 83.40 | 84.68 | 53.40 | 65.42 | 74.17 | 69.69 | 64.86 | 73.28 |
| R³Det [41] | ResNet152 | 89.49 | 81.17 | 50.53 | 66.10 | 70.92 | 78.66 | 78.21 | 90.81 | 85.26 | 84.23 | 61.81 | 63.77 | 68.16 | 69.83 | 67.17 | 73.74 |
| RSDet [30] | ResNet152 | 90.1 | 82.0 | 53.8 | 68.5 | 70.2 | 78.7 | 73.6 | 91.2 | 87.1 | 84.7 | 64.3 | 68.2 | 66.1 | 69.3 | 63.7 | 74.1 |
| Gliding Vertex [40] | ResNet101 | 89.64 | 85.00 | 52.26 | 77.34 | 73.01 | 73.14 | 86.82 | 90.74 | 79.02 | 86.81 | 59.55 | 70.91 | 72.94 | 70.86 | 57.32 | 75.02 |
| Mask OBB [35] | ResNeXt-101 | 89.56 | 85.95 | 54.21 | 72.90 | 76.52 | 74.16 | 85.63 | 89.85 | 83.81 | 86.48 | 54.89 | 69.64 | 73.94 | 69.06 | 63.32 | 75.33 |
| FFA [6] | ResNet101 | 90.1 | 82.7 | 54.2 | 75.2 | 71.0 | 79.9 | 83.5 | 90.7 | 83.9 | 84.6 | 61.2 | 68.0 | 70.7 | 76.0 | 63.7 | 75.7 |
| APE [50] | ResNeXt-101 | 89.96 | 83.62 | 53.42 | 76.03 | 74.01 | 77.16 | 79.45 | 90.83 | 87.15 | 84.51 | 67.72 | 60.33 | 74.61 | 71.84 | 65.55 | 75.75 |
| CSL (FPN based) | ResNet152 | 90.25 | 85.53 | 54.64 | 75.31 | 70.44 | 73.51 | 77.62 | 90.84 | 86.15 | 86.69 | 69.60 | 68.04 | 73.83 | 71.10 | 68.93 | 76.17 |
+
+Table 6. Detection accuracy on HRSC2016 dataset.
+
+| Method | mAP (07) |
+|---|---|
+| R2CNN [12] | 73.07 |
+| RC1 & RC2 [26] | 75.7 |
+| RRPN [27] | 79.08 |
+| R2PN [46] | 79.6 |
+| RetinaNet-H [41] | 82.89 |
+| RRD [19] | 84.30 |
+| RoI-Transformer [4] | 86.20 |
+| RSDet [30] | 86.5 |
+| Gliding Vertex [40] | 88.20 |
+| RetinaNet-R [41] | 89.18 |
+| R3Det [41] | 89.33 |
+| FPN-CSL-based | 89.62 |
+
+include random horizontal and vertical flipping, random graying, and random rotation. Training and testing scales are set to [400, 600, 720, 800, 1000, 1100]. As shown in Table 5, the FPN-CSL-based method shows competitive performance, at $76.17\%$ .
+
+Results on HRSC2016. HRSC2016 contains many ship instances with large aspect ratios and arbitrary orientations, which poses a huge challenge to the localization accuracy of the detector. Experimental results show that our model achieves state-of-the-art performance, at about $89.62\%$ .
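
One detail of the random-rotation augmentation is worth spelling out: when the image is rotated by some offset, every oriented-box angle label must be rotated too and then wrapped back into the defined angle range. A minimal sketch (our own illustration, using the $180^{\circ}$ angle definition from the paper):

```python
def rotate_angle_label(theta, delta, angle_range=180):
    """Wrap an oriented-box angle class after rotating the image by delta degrees.

    Because the labels are circular, an angle pushed past the end of the
    range simply wraps around instead of creating a boundary discontinuity.
    """
    return (theta + delta) % angle_range
```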
+
+# 5 Conclusions
+
+We study and summarize the boundary problems of different regression-based rotation detection methods. The main cause of these boundary problems is that the ideal predictions lie beyond the defined angle range. We therefore treat the prediction of the object angle as a classification problem, which constrains the prediction range, and design a circular smooth label (CSL) to accommodate the periodicity of the angle and increase the tolerance between adjacent angle classes with little accuracy loss. We also introduce four window functions for CSL and explore the effect of the window radius on detection performance. Extensive experiments and visual analyses on different detectors and datasets show that the CSL-based rotation detection algorithm is indeed an effective baseline choice.
+
+# References
+
+1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: Tensorflow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). pp. 265-283 (2016)
+2. Azimi, S.M., Vig, E., Bahmanyar, R., Korner, M., Reinartz, P.: Towards multi-class object detection in unconstrained remote sensing imagery. In: Asian Conference on Computer Vision. pp. 150-165. Springer (2018)
+3. Dai, J., Li, Y., He, K., Sun, J.: R-fcn: Object detection via region-based fully convolutional networks. In: Advances in neural information processing systems. pp. 379-387 (2016)
+4. Ding, J., Xue, N., Long, Y., Xia, G.S., Lu, Q.: Learning roi transformer for oriented object detection in aerial images. In: The IEEE Conference on Computer Vision and Pattern Recognition. pp. 2849-2858 (2019)
+5. Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., Tian, Q.: Centernet: Keypoint triplets for object detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6569-6578 (2019)
+6. Fu, K., Chang, Z., Zhang, Y., Xu, G., Zhang, K., Sun, X.: Rotation-aware and multi-scale convolutional neural network for object detection in remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing 161, 294-308 (2020)
+7. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440-1448 (2015)
+8. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 580-587 (2014)
+9. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 2961-2969 (2017)
+10. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
+11. Huang, C., Ai, H., Li, Y., Lao, S.: High-performance rotation invariant multiview face detection. IEEE Transactions on pattern analysis and machine intelligence 29(4), 671-686 (2007)
+12. Jiang, Y., Zhu, X., Wang, X., Yang, S., Li, W., Wang, H., Fu, P., Luo, Z.: R2cnn: rotational region cnn for orientation robust scene text detection. arXiv preprint arXiv:1706.09579 (2017)
+13. Karatzas, D., Gomez-Bigorda, L., Nicolaou, A., Ghosh, S., Bagdanov, A., Iwamura, M., Matas, J., Neumann, L., Chandrasekhar, V.R., Lu, S., et al.: Icdar 2015 competition on robust reading. In: 2015 13th International Conference on Document Analysis and Recognition. pp. 1156-1160. IEEE (2015)
+14. Kim, K.R., Choi, W., Koh, Y.J., Jeong, S.G., Kim, C.S.: Instance-level future motion estimation in a single image based on ordinal regression. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 273-282 (2019)
+15. Law, H., Deng, J.: Cornernet: Detecting objects as paired keypoints. In: Proceedings of the European Conference on Computer Vision. pp. 734-750 (2018)
+16. Li, C., Xu, C., Cui, Z., Wang, D., Zhang, T., Yang, J.: Feature-attentioned object detection in remote sensing imagery. In: 2019 IEEE International Conference on Image Processing. pp. 3886-3890. IEEE (2019)
+
+17. Li, Y., Huang, Q., Pei, X., Jiao, L., Shang, R.: Radet: Refine feature pyramid network and multi-layer attention network for arbitrary-oriented object detection of remote sensing images. Remote Sensing 12(3), 389 (2020)
+18. Liao, M., Shi, B., Bai, X.: Textboxes ++: A single-shot oriented scene text detector. IEEE transactions on image processing 27(8), 3676-3690 (2018)
+19. Liao, M., Zhu, Z., Shi, B., Xia, G.s., Bai, X.: Rotation-sensitive regression for oriented scene text detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 5909-5918 (2018)
+20. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2117-2125 (2017)
+21. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980-2988 (2017)
+22. Lin, Y., Feng, P., Guan, J.: Ienet: Interacting embranchment one stage anchor free detector for orientation aerial object detection. arXiv preprint arXiv:1912.00969 (2019)
+23. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision. pp. 21-37. Springer (2016)
+24. Liu, X., Liang, D., Yan, S., Chen, D., Qiao, Y., Yan, J.: Fots: Fast oriented text spotting with a unified network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 5676-5685 (2018)
+25. Liu, Y., Zhang, S., Jin, L., Xie, L., Wu, Y., Wang, Z.: Omnidirectional scene text detection with sequential-free box discretization. arXiv preprint arXiv:1906.02371 (2019)
+26. Liu, Z., Yuan, L., Weng, L., Yang, Y.: A high resolution optical satellite image dataset for ship recognition and some new baselines. In: Proceedings of the International Conference on Pattern Recognition Applications and Methods. vol. 2, pp. 324-331 (2017)
+27. Ma, J., Shao, W., Ye, H., Wang, L., Wang, H., Zheng, Y., Xue, X.: Arbitrary-oriented scene text detection via rotation proposals. IEEE Transactions on Multimedia 20(11), 3111-3122 (2018)
+28. Nayef, N., Yin, F., Bizid, I., Choi, H., Feng, Y., Karatzas, D., Luo, Z., Pal, U., Rigaud, C., Chazalon, J., et al.: Icdar2017 robust reading challenge on multi-lingual scene text detection and script identification-rrc-mlt. In: 2017 14th IAPR International Conference on Document Analysis and Recognition. vol. 1, pp. 1454–1459. IEEE (2017)
+29. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: European conference on computer vision. pp. 483-499. Springer (2016)
+30. Qian, W., Yang, X., Peng, S., Guo, Y., Yan, C.: Learning modulated loss for rotated object detection. arXiv preprint arXiv:1911.08299 (2019)
+31. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 779-788 (2016)
+32. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+33. Rowley, H.A., Baluja, S., Kanade, T.: Rotation invariant neural network-based face detection. In: Proceedings. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. 98CB36231). pp. 38-44. IEEE (1998)
+
+34. Shi, X., Shan, S., Kan, M., Wu, S., Chen, X.: Real-time rotation-invariant face detection with progressive calibration networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2295-2303 (2018)
+35. Wang, J., Ding, J., Guo, H., Cheng, W., Pan, T., Yang, W.: Mask obb: A semantic attention-based mask oriented bounding box representation for multi-category object detection in aerial images. Remote Sensing 11(24), 2930 (2019)
+36. Wang, Y., Zhang, Y., Zhang, Y., Zhao, L., Sun, X., Guo, Z.: Sard: Towards scale-aware rotated object detection in aerial imagery. IEEE Access 7, 173855-173865 (2019)
+37. Wei, H., Zhou, L., Zhang, Y., Li, H., Guo, R., Wang, H.: Oriented objects as pairs of middle lines. arXiv preprint arXiv:1912.10694 (2019)
+38. Wold, S., Esbensen, K., Geladi, P.: Principal component analysis. Chemometrics and intelligent laboratory systems 2(1-3), 37-52 (1987)
+39. Xia, G.S., Bai, X., Ding, J., Zhu, Z., Belongie, S., Luo, J., Datcu, M., Pelillo, M., Zhang, L.: Dota: A large-scale dataset for object detection in aerial images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3974-3983 (2018)
+40. Xu, Y., Fu, M., Wang, Q., Wang, Y., Chen, K., Xia, G.S., Bai, X.: Gliding vertex on the horizontal bounding box for multi-oriented object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
+41. Yang, X., Liu, Q., Yan, J., Li, A., Zhang, Z., Yu, G.: R3det: Refined single-stage detector with feature refinement for rotating object. arXiv preprint arXiv:1908.05612 (2019)
+42. Yang, X., Sun, H., Fu, K., Yang, J., Sun, X., Yan, M., Guo, Z.: Automatic ship detection in remote sensing images from google earth of complex scenes based on multiscale rotation dense feature pyramid networks. Remote Sensing 10(1), 132 (2018)
+43. Yang, X., Sun, H., Sun, X., Yan, M., Guo, Z., Fu, K.: Position detection and direction prediction for arbitrary-oriented ships via multitask rotation region convolutional neural network. IEEE Access 6, 50839-50849 (2018)
+44. Yang, X., Yang, J., Yan, J., Zhang, Y., Zhang, T., Guo, Z., Sun, X., Fu, K.: Scrdet: Towards more robust detection for small, cluttered and rotated objects. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 8232-8241 (2019)
+45. Zhang, G., Lu, S., Zhang, W.: Cad-net: A context-aware detection network for objects in remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing 57(12), 10015-10024 (2019)
+46. Zhang, Z., Guo, W., Zhu, S., Yu, W.: Toward arbitrary-oriented ship detection with rotated region proposal and discrimination networks. IEEE Geoscience and Remote Sensing Letters 15(11), 1745-1749 (2018)
+47. Zhou, L., Wei, H., Li, H., Zhang, Y., Sun, X., Zhao, W.: Objects detection for remote sensing images based on polar coordinates. arXiv preprint arXiv:2001.02988 (2020)
+48. Zhou, X., Zhuo, J., Krahenbuhl, P.: Bottom-up object detection by grouping extreme and center points. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 850-859 (2019)
+49. Zhou, X., Yao, C., Wen, H., Wang, Y., Zhou, S., He, W., Liang, J.: East: an efficient and accurate scene text detector. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 5551-5560 (2017)
+
+50. Zhu, Y., Du, J., Wu, X.: Adaptive period embedding for representing oriented objects in aerial images. IEEE Transactions on Geoscience and Remote Sensing (2020)
\ No newline at end of file
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/images.zip b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0a7517c856d208bb68ed8058908b20c5cc22626a
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1096f350f3ec2f2e5213039482fabd3795b94caaaf05b473afb3e9bc8c39a92a
+size 778718
diff --git a/arbitraryorientedobjectdetectionwithcircularsmoothlabel/layout.json b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f51876356ccbb13826191d789ca7d746ddd77ac
--- /dev/null
+++ b/arbitraryorientedobjectdetectionwithcircularsmoothlabel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:faf29280027f8c130071aea7f2b28ef744ac01ed38e01869f3a4edc6e9f3a2ee
+size 484829
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_content_list.json b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea244e38d85dd8075a9a658ccdf8f776820f23ba
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3777ca9590d447220c5c7c1a1faa57494502d00236311a719ac8a19c187d6827
+size 73719
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_model.json b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..27996247b24d5c854a51f572352da82472e575ac
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16ab0cb678bed3c3074e61b6965f3d0acd9a07ca9cf8bbdd7b96e89a2091dcd3
+size 89240
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_origin.pdf b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c40b8a1770e97870a2c5d1cd5074212fe440b65c
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/2d275bd0-6c8b-459c-a3c8-d0f9e7f66966_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e88d2b61273d738190285896d30fd49dd24cf8d341a88be47eb775c91af8d65
+size 967893
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/full.md b/arelabelsnecessaryforneuralarchitecturesearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..75eedf79319bbc1e0b9e08c7dbe6d7e1617d96b2
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/full.md
@@ -0,0 +1,269 @@
+# Are Labels Necessary for Neural Architecture Search?
+
+Chenxi Liu $^{1}$ , Piotr Dollár $^{2}$ , Kaiming He $^{2}$ , Ross Girshick $^{2}$ , Alan Yuille $^{1}$ , and Saining Xie $^{2}$
+
+1 Johns Hopkins University
+
+$^{2}$ Facebook AI Research
+
+Abstract. Existing neural network architectures in computer vision — whether designed by humans or by machines — were typically found using both images and their associated labels. In this paper, we ask the question: can we find high-quality neural architectures using only images, but no human-annotated labels? To answer this question, we first define a new setup called Unsupervised Neural Architecture Search (UnNAS). We then conduct two sets of experiments. In sample-based experiments, we train a large number (500) of diverse architectures with either supervised or unsupervised objectives, and find that the architecture rankings produced with and without labels are highly correlated. In search-based experiments, we run a well-established NAS algorithm (DARTS) using various unsupervised objectives, and report that the architectures searched without labels can be competitive to their counterparts searched with labels. Together, these results reveal the potentially surprising finding that labels are not necessary, and the image statistics alone may be sufficient to identify good neural architectures.
+
+Keywords: Neural Architecture Search; Unsupervised Learning
+
+# 1 Introduction
+
+Neural architecture search (NAS) has emerged as a research problem of searching for architectures that perform well on target data and tasks. A key mystery surrounding NAS is what factors contribute to the success of the search. Intuitively, using the target data and tasks during the search will result in the least domain gap, and this is indeed the strategy adopted in early NAS attempts [35,26]. Later, researchers [36] started to utilize the transferability of architectures, which enabled the search to be performed on different data and labels (e.g., CIFAR-10) than the target (e.g., ImageNet). However, what has not changed is that both the images and the (semantic) labels provided in the dataset need to be used in order to search for an architecture. In other words, existing NAS approaches perform search in the supervised learning regime.
+
+In this paper, we take a step towards understanding what role supervision plays in the success of NAS. We ask the question: How indispensable are labels in neural architecture search? Is it possible to find high-quality architectures using
+
+images only? This corresponds to the important yet underexplored unsupervised setup of neural architecture search, which we formalize in Section 3.
+
+With the absence of labels, the quality of the architecture needs to be estimated in an unsupervised fashion during the search phase. In the present work, we conduct two sets of experiments using three unsupervised training methods [11,34,21] from the recent self-supervised learning literature. $^{3}$ These two sets of experiments approach the question from complementary perspectives. In sample-based experiments, we randomly sample 500 architectures from a search space, train and evaluate them using supervised vs. self-supervised objectives, and then examine the rank correlation (when sorting models by accuracy) between the two training methodologies. In search-based experiments, we take a well-established NAS algorithm, replace the supervised search objective with a self-supervised one, and examine the quality of the searched architecture on tasks such as ImageNet classification and Cityscapes semantic segmentation. Our findings include:
+
+- The architecture rankings produced by supervised and self-supervised pretext tasks are highly correlated. This finding is consistent across two datasets, two search spaces, and three pretext tasks.
+- The architectures searched without human annotations are comparable in performance to their supervised counterparts. This result is consistent across three pretext tasks, three pretext datasets, and two target tasks. There are even cases where unsupervised search outperforms supervised search.
+- Existing NAS approaches typically use labeled images from a smaller dataset to learn transferable architectures. We present evidence that using unlabeled images from a large dataset may be a more promising approach.
+
+We conclude that labels are not necessary for neural architecture search, and the deciding factor for architecture quality may hide within the image pixels.
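
The rank-correlation measure used in the sample-based experiments can be sketched with a plain Spearman computation. The numbers below are toy values, not results from the paper (whose statistics come from 500 trained architectures), and the helper assumes tie-free scores:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation between two equal-length score lists (no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)   # rank of each entry in a
    rb = np.argsort(np.argsort(b)).astype(float)   # rank of each entry in b
    return float(np.corrcoef(ra, rb)[0, 1])

# toy example: if a self-supervised objective orders four architectures the
# same way supervised accuracy does, the correlation is exactly 1.0
supervised = [71.2, 69.8, 73.1, 70.5]
unsupervised = [64.0, 62.5, 65.9, 63.1]
rho = spearman_rho(supervised, unsupervised)
```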
+
+# 2 Related Work
+
+Neural Architecture Search. Research on the NAS problem involves designing the search space [36,33] and the search algorithm [35,25]. There are special focuses on reducing the overall time cost of the search process [18,23,19], or on extending to a larger variety of tasks [4,17,10,27]. Existing works on NAS all use human-annotated labels during the search phase. Our work is orthogonal to existing NAS research, in that we explore the unsupervised setup.
+
+Architecture Transferability. In early NAS attempts [35,26], the search phase and the evaluation phase typically operate on the same dataset and task. Later, researchers realized that it is possible to relax this constraint. In these situations, the dataset and task used in the search phase are typically referred to as the proxy for the target dataset and task, reflecting a notion of architecture transferability. [36] demonstrated that CIFAR-10 classification is a good proxy for ImageNet classification. [18] measured the rank correlation between these two tasks using a small number of architectures. [15] studied the transferability of 16 architectures (together with trained weights) across a broader set of supervised tasks. [14] studied the role of architecture in several self-supervised tasks, but with a small number of architectures and a different evaluation method (i.e., linear probing). Part of our work studies architecture transferability at a larger scale, across supervised and unsupervised tasks.
+
+Unsupervised Learning. There is a large literature on unsupervised learning, e.g., [9,8,30,34,21,11,22,31,29,12]. In the existing literature, methods are generally developed to learn the weights (parameters) of a fixed architecture without using labels, and these weights are evaluated by transferring to a target supervised task. In our study, we explore the possibility of using such methods to learn the architecture without using labels, rather than the weights. Therefore, our subject of study is simultaneously the unsupervised generalization of NAS, and the architecture level generalization of unsupervised learning.
+
+# 3 Unsupervised Neural Architecture Search
+
+The goal of this paper is to provide an answer to the question asked in the title: are labels necessary for neural architecture search? To formalize this question, in this section, we define a new setup called Unsupervised Neural Architecture Search (UnNAS). Note that UnNAS represents a general problem setup instead of any specific algorithm for solving this problem. We instantiate UnNAS with specific algorithms and experiments to explore the importance of labels in neural architecture search.
+
+# 3.1 Search Phase
+
+The traditional NAS problem includes a search phase: given a pre-defined search space, the search algorithm explores this space and estimates the performance (e.g. accuracy) of the architectures sampled from the space. The accuracy estimation can involve full or partial training of an architecture. Estimating the accuracy requires access to the labels of the dataset. So the traditional NAS problem is essentially a supervised learning problem.
+
+We define UnNAS as the counterpart unsupervised learning problem. It follows the definition of the NAS problem, only except that there are no human-annotated labels provided for estimating the performance of the architectures. An algorithm for the UnNAS problem still explores a pre-defined search space, but it requires other criteria to estimate how good a sampled architecture is.
+
+
+Figure 1: Unsupervised neural architecture search, or UnNAS, is a new problem setup that helps answer the question: are labels necessary for neural architecture search? In traditional unsupervised learning (top panel), the training phase learns the weights of a fixed architecture; then the evaluation phase measures the quality of the weights by training a classifier (either by fine-tuning the weights or using them as a fixed feature extractor) using supervision from the target dataset. Analogously, in UnNAS (bottom panel), the search phase searches for an architecture without using labels; and the evaluation phase measures the quality of the architecture found by an UnNAS algorithm by training the architecture's weights using supervision from the target dataset.
+
+# 3.2 Evaluation Phase
+
+Generally speaking, the goal of the NAS problem is to find an architecture. The weights of the found architecture are not necessarily the output of a NAS algorithm. Instead, the weights are optimized after the search phase in an evaluation phase of the NAS problem: it includes training the found architecture on a target dataset's training split, and validating the accuracy on the target dataset's validation split. We note that "training the architecture weights" is part of the NAS evaluation phase—the labels in the target dataset, both training and validation splits, play a role of evaluating an architecture.
+
+Based on this context, we define the evaluation phase of UnNAS in the same way: training the weights of the architecture (found by an UnNAS algorithm) on a target dataset's training split, and validating the accuracy on the target dataset's validation split, both using the labels of the target dataset. We remark that using labels during the evaluation phase does not conflict with the definition of UnNAS: the search phase is unsupervised, while the evaluation phase requires labels to examine how good the architecture is.
+
+# 3.3 Analogy to Unsupervised Learning
+
+Our definition of UnNAS is analogous to unsupervised weight learning in the existing literature [11,34,21]. While the unsupervised learning phase has no labels for training weights, the quality of the learned weights is evaluated by transferring them to a target task, supervised by labels. We emphasize that in the UnNAS setup, labels play an analogous role during evaluation. See Figure 1 for an illustration and elaboration on this analogy.
+
+Similar to unsupervised weight learning, in principle, the search dataset should be different from the evaluation (target) dataset in UnNAS, in order to more accurately reflect real application scenarios.
+
+# 4 Experiments Overview
+
+As Section 3 describes, an architecture discovered in an unsupervised fashion is evaluated by its performance in a supervised setting. Therefore, we are essentially looking for some type of architecture-level correlation that reaches across the unsupervised vs. supervised boundary, so that an architecture discovered without labels can be reliably transferred to the supervised target task. We investigate whether several existing self-supervised pretext tasks (described in Section 4.1) can serve this purpose, through two sets of experiments of complementary nature: sample-based (Section 5) and search-based (Section 6). In sample-based experiments, each network is trained and evaluated individually, but the downside is that we can only consider a small, random subset of the search space. In search-based experiments, the focus is to find a top architecture from the entire search space, but the downside is that the training dynamics during the search phase do not exactly match those of the evaluation phase.
+
+# 4.1 Pretext Tasks
+
+We explore three unsupervised training methods (typically referred to as pretext tasks in self-supervised learning literature): rotation prediction, colorization, and solving jigsaw puzzles. We briefly describe them for completeness.
+
+- Rotation prediction [11] (Rot): the input image undergoes one of four preset rotations (0, 90, 180, and 270 degrees), and the pretext task is formulated as a 4-way classification problem that predicts the rotation.
+- Colorization [34] (Color): the input is a grayscale image, and the pretext task is formulated as a pixel-wise classification problem with a set of predefined color classes (313 in [34]).
+- Solving jigsaw puzzles [21] (Jigsaw): the input image is divided into patches that are randomly shuffled. The pretext task is formulated as an image-wise classification problem that chooses one out of $K$ preset permutations.$^{5}$
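
Taking rotation prediction as an example, generating the pretext inputs and labels takes only a few lines of NumPy. This is a hypothetical sketch (`rotation_pretext_batch` is our name, not code from the paper):

```python
import numpy as np

def rotation_pretext_batch(images, rng):
    """Apply one of the four preset rotations (0/90/180/270 degrees) to each
    HxWxC image and return the rotated images with their 4-way rotation labels.
    Hypothetical helper illustrating the Rot pretext task."""
    labels = rng.integers(0, 4, size=len(images))  # label k means k*90 degrees
    rotated = [np.rot90(img, k=int(k), axes=(0, 1)) for img, k in zip(images, labels)]
    return rotated, labels
```

A network trained on such batches is then scored by its 4-way classification accuracy on a held-out split, which is the pretext task accuracy used throughout the paper.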
+
+All three pretext tasks are image or pixel-wise classification problems. Therefore, we can compute the classification accuracy of the pretext task (on a validation set). Based on this pretext task accuracy, we can analyze its correlation
+
+
+Figure 2: Correlation between supervised classification accuracy vs. pretext task accuracy on CIFAR-10 ("C10"). Top panel: DARTS search space. Bottom panel: NAS-Bench-101 search space. The straight lines are fit with robust linear regression [13] (same for Figure 3 and Figure 4).
+
+with the supervised classification accuracy (also on a validation set), as in our sample-based experiments (Section 5). Also, since these pretext tasks all use cross entropy loss, it is also straightforward to use them as the training objective in standard NAS algorithms, as done in our search-based experiments (Section 6).
+
+# 5 Sample-Based Experiments
+
+# 5.1 Experimental Design
+
+In sample-based experiments, we first randomly sample 500 architectures from a certain search space. We train each architecture from scratch on the pretext task and obtain its pretext task accuracy (e.g., the 4-way classification accuracy in the rotation task), and also train the same architecture from scratch on the supervised classification task (e.g., 1000-way classification on ImageNet).$^{6}$
+
+With these data collected, we perform two types of analysis. For the rank correlation analysis, we empirically study the statistical rank correlations between the pretext task accuracy and the supervised accuracy, by measuring the Spearman's Rank Correlation [28], denoted as $\rho$ . For the random experiment analysis, we follow the setup proposed in [2] and recently adopted in [24]. Specifically, for each experiment size $m$ , we sample $m$ architectures from our pool of $n = 500$ architectures. For each pretext task, we select the architecture with the highest pretext task accuracy among the $m$ . This process is repeated $\lceil n / m \rceil$ times, to compute the mean and error bands ( $\pm 2$ standard deviations) of the top-1 accuracy of these $\lceil n / m \rceil$ architectures on the target dataset/task.
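
Both analyses can be sketched in a few lines, assuming arrays of per-architecture accuracies (function names are ours; this simple Spearman implementation ignores tie correction, which suffices for a sketch):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v)))
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

def random_experiment(pretext_acc, target_acc, m, rng):
    """One random experiment of size m: sample m architectures, keep the one
    with the best pretext accuracy, and report its target-task accuracy."""
    idx = rng.choice(len(pretext_acc), size=m, replace=False)
    best = idx[np.argmax(np.asarray(pretext_acc)[idx])]
    return float(np.asarray(target_acc)[best])
```

Repeating `random_experiment` $\lceil n / m \rceil$ times and aggregating the returned accuracies yields the mean and error bands reported for each experiment size.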
+
+
+
+
+Figure 3: Correlation between supervised classification accuracy vs. pretext task accuracy on ImageNet ("IN"). Top panel: DARTS search space. Bottom panel: NAS-Bench-101 search space.
+
+These two studies provide complementary views. The rank correlation analysis aims to provide a global picture for all architectures sampled from a search space, while the random experiment analysis focuses on the top architectures in a random experiment of varying size.
+
+We study two search spaces: the DARTS search space [19], and the NAS-Bench-101 search space [33]. The latter search space was built for benchmarking NAS algorithms, so we expect it to be less biased towards the search space for a certain algorithm. The experiments are conducted on two commonly used datasets: CIFAR-10 [16] and ImageNet [7].
+
+# 5.2 Implementation Details
+
+Details of sampling the 500 architectures are described in Appendix. In the DARTS search space, regardless of the task, each network is trained for 5 epochs with width 32 and depth 22 on ImageNet; 100 epochs with width 16 and depth 20 on CIFAR-10. In the NAS-Bench-101 search space, regardless of the task, each network is trained for 10 epochs with width 128 and depth 12 on ImageNet; 100 epochs with width 128 and depth 9 on CIFAR-10. Please refer to [19,33] for the respective definitions of width and depth. The performance of each network on CIFAR-10 is the average of 3 independent runs to reduce variance.
+
+We remark that the small scale of the CIFAR-10 dataset and the short training schedules on ImageNet are compromises we make to allow for a more diverse set of architectures (i.e., 500).
+
+# 5.3 Results
+
+High rank correlation between supervised accuracy and pretext accuracy on the same dataset. In Figure 2 and Figure 3, we show the scatter plots of the 500 architectures' supervised classification accuracy (horizontal axis) and
+
+
+ImageNet supervised classification accuracy (DARTS search space)
+
+
+
+
+
+
+
+
+ImageNet supervised classification accuracy (NAS-Bench-101 search space)
+Figure 4: Correlation between ImageNet supervised classification accuracy vs. CIFAR-10 ("C10") pretext task accuracy. Rankings of architectures are highly correlated between supervised classification and three unsupervised tasks, as measured by Spearman's rank correlation $(\rho)$ . We also show rank correlation using CIFAR-10 supervised proxy in the rightmost panel. Top panel: DARTS search space. Bottom panel: NAS-Bench-101 search space.
+
+pretext task accuracy (vertical axis) on CIFAR-10 and ImageNet, respectively. We see that this rank correlation is typically higher than 0.8, regardless of the dataset, the search space, and the pretext task. This type of consistency and robustness indicates that this phenomenon is general, as opposed to dataset/search space specific.
+
+The same experiment is performed on both the DARTS and the NAS-Bench-101 search spaces. The rank correlations on the NAS-Bench-101 search space are generally higher than those on the DARTS search space. A possible explanation is that the architectures in NAS-Bench-101 are more diverse, and consequently their accuracies have larger gaps, i.e., are less affected by training noise.
+
+Interestingly, we observe that among the three pretext tasks, colorization consistently has the lowest correlation on CIFAR-10, but the highest correlation on ImageNet. We suspect this is because the small images in CIFAR-10 make the learning of per-pixel colorization difficult, and consequently the performance after training is noisy.
+
+High rank correlation between supervised accuracy and pretext accuracy across datasets. In Figure 4 we show the across dataset rank correlation analysis, where the pretext task accuracy is measured on CIFAR-10, but the supervised classification accuracy is measured on ImageNet.
+
+On the DARTS search space (top panel), despite the image distribution shift brought by different datasets, for each of the three pretext tasks (left three plots), the correlation remains consistently high ( $\sim 0.8$ ). This shows that across the entire search space, the relative ranking of an architecture is likely to be
+
+
+Figure 5: Random experiment efficiency curves. Left panel: DARTS search space. Right panel: NAS-Bench-101 search space. We show the range of ImageNet classification accuracies of top architectures identified by the three pretext tasks and the supervised task under various experiment sizes. See text for more details.
+
+
+
+similar, whether under the unsupervised pretext task accuracy or under the supervised target task accuracy. In the rightmost panel, we compare with a scenario where instead of using an unsupervised pretext task, we use a proxy task of CIFAR-10 supervised classification. The correlation is $\rho = 0.90$ in this case. As the CIFAR-10 supervised classification is a commonly used proxy task in existing NAS literature [36], it gives a reference on $\rho$ 's value.
+
+In the bottom panel we show more analysis of this kind by replacing the search space with NAS-Bench-101. Although the architectures are quite different, the observations are similar. In all cases, the self-supervised pretext task accuracy is highly correlated to the supervised classification accuracy.
+
+Better pretext accuracy translates to better supervised accuracy. In addition to the rank correlation analysis, we also perform the random experiment analysis. Figure 5 shows the random experiment efficiency curve for DARTS and NAS-Bench-101 search spaces. Again, the pretext accuracies are obtained on CIFAR-10, and the target accuracies are from ImageNet. By design of this experiment, as the experiment size $m$ increases, the pretext accuracies of the $\lceil n / m \rceil$ architectures should increase. Figure 5 shows that the target accuracies of these $\lceil n / m \rceil$ architectures also increase with $m$ . In addition, at each experiment size, most unsupervised pretext objectives perform similarly compared to the commonly used supervised CIFAR-10 proxy. The overall trends are also comparable. This shows that the architecture rankings produced with and without labels are not only correlated across the entire search space, but also towards the top of the search space, which is closer to the goal of UnNAS.
+
+# 6 Search-Based Experiments
+
+# 6.1 Experimental Design
+
+In search-based experiments, the idea is to run a well-established NAS algorithm, except that we make the minimal modification of replacing its supervised search
+
+objective with an unsupervised one. Following the UnNAS setup, we then examine (by training from scratch) how well these architectures, discovered without labels, perform on supervised target tasks. Since all other variables are controlled to be the same, search-based experiments can directly compare the supervised and unsupervised counterparts of a NAS algorithm, which helps reveal the importance of labels.
+
+The NAS algorithm we adopt is DARTS (short for "differentiable architecture search") [19], chosen for its simplicity. DARTS formulates the selection of activation tensors and operations as a categorical choice, implemented as a softmax over a set of continuous parameters (named $\alpha$). These parameters are trained in a similar fashion to the network weights, by backpropagation from a loss function. After this training, the softmax outputs are discretized to produce an architecture.
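
The continuous relaxation can be illustrated with NumPy stand-ins for the candidate operations (a conceptual sketch of the technique, not the DARTS implementation):

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over a vector of alphas."""
    e = np.exp(a - np.max(a))
    return e / e.sum()

def mixed_op(x, alphas, ops):
    """DARTS-style relaxation: an edge's output is the softmax(alpha)-weighted
    sum of all candidate operations applied to x. During search, the alphas
    receive gradients through this sum, alongside the network weights."""
    return sum(w * op(x) for w, op in zip(softmax(alphas), ops))

def discretize(alphas):
    """After search, keep only the operation with the largest alpha."""
    return int(np.argmax(alphas))
```

When one alpha dominates, the mixed output approaches the output of that single operation, which is why the final discretization step is a reasonable approximation.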
+
+The self-supervised objectives that we consider are, still, those described in Section 4.1: rotation prediction (Rot), colorization (Color), and solving jigsaw puzzles (Jigsaw). For comparison, we also perform NAS search with supervised objectives, e.g. classification (Supv.Cls) or semantic segmentation (Supv.Seg). To help distinguish, we name the method NAS-DARTS if the search objective is supervised, and UnNAS-DARTS if the search objective is unsupervised.
+
+# 6.2 Implementation Details
+
+Search phase. We use three different datasets for architecture search: ImageNet-1K (IN1K) [7], ImageNet-22K (IN22K) [7], and Cityscapes [6]. IN1K is the standard ImageNet benchmark dataset with 1.2M images from 1K categories. IN22K is the full ImageNet dataset that has $\sim 14$ M images from 22K categories. Cityscapes is a dataset of street scenes that has drastically different image statistics. Note that UnNAS-DARTS will only access the images provided in the dataset, while NAS-DARTS will additionally access the (semantic) labels provided in the respective dataset. The search phase will operate only within the training split, without accessing the true validation or test split.
+
+We report the hyper-parameters we used in the Appendix. One major difference between our experiments and DARTS [19] is that the images in the search datasets we consider are much larger. We use $224 \times 224$ random crops for search on IN1K/IN22K, and $312 \times 312$ for search on Cityscapes, following [17]. To enable DARTS training with large input images, we use 3 stride-2 convolution layers at the beginning of the network to reduce the spatial resolution. This design, together with an appropriately chosen number of search epochs (see Appendix), keeps UnNAS search efficient ($\sim 2$ GPU days on IN1K/Cityscapes, $\sim 10$ GPU days on IN22K, regardless of task) despite the larger images.
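
To see how the three stride-2 stem layers tame the larger inputs, the resulting resolution can be computed with the standard convolution output-size formula (kernel size 3 and padding 1 are our assumptions; the paper specifies only the strides):

```python
def conv_out_size(h, kernel=3, stride=2, padding=1):
    """Spatial output size of a single convolution (standard formula)."""
    return (h + 2 * padding - kernel) // stride + 1

def stem_out_size(h, num_stride2_convs=3):
    """Resolution after a stem of stride-2 convolutions, as described above."""
    for _ in range(num_stride2_convs):
        h = conv_out_size(h)
    return h
```

With these assumed settings, a $224 \times 224$ crop is reduced to $28 \times 28$ before entering the searched cells, roughly an 8x reduction per side.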
+
+Evaluation phase. We use two distinct datasets and tasks for UnNAS evaluation: (1) ImageNet-1K (IN1K) for image classification. The performance metric is top-1 accuracy on the IN1K validation set. (2) Cityscapes for semantic segmentation. We use the train_fine set (2975 images) for training. The performance metric is mean Intersection-over-Union (mIoU) evaluated on the val set (500 images).
+
+| method | search dataset & task | top-1 acc. | FLOPs (M) | params (M) |
| --- | --- | --- | --- | --- |
| NAS-DARTS [19] | CIFAR-10 Supv.Cls | 73.3 | 574 | 4.7 |
| NAS-P-DARTS [5] | CIFAR-10 Supv.Cls | 75.6 | 557 | 4.9 |
| NAS-PC-DARTS [32] | CIFAR-10 Supv.Cls | 74.9 | 586 | 5.3 |
| NAS-PC-DARTS [32] | IN1K Supv.Cls | 75.8 | 597 | 5.3 |
| NAS-DARTS† | CIFAR-10 Supv.Cls | 74.9±0.08 | 538 | 4.7 |
| NAS-DARTS | IN1K Supv.Cls | 76.3±0.06 | 590 | 5.3 |
| UnNAS-DARTS | IN1K Rot | 75.8±0.18 | 558 | 5.1 |
| UnNAS-DARTS | IN1K Color | 75.7±0.12 | 547 | 4.9 |
| UnNAS-DARTS | IN1K Jigsaw | 75.9±0.15 | 567 | 5.2 |
| NAS-DARTS | IN22K Supv.Cls | 75.9±0.09 | 585 | 5.2 |
| UnNAS-DARTS | IN22K Rot | 75.7±0.23 | 549 | 5.0 |
| UnNAS-DARTS | IN22K Color | 75.9±0.21 | 547 | 5.0 |
| UnNAS-DARTS | IN22K Jigsaw | 75.9±0.31 | 559 | 5.1 |
| NAS-DARTS | Cityscapes Supv.Seg | 75.8±0.13 | 566 | 5.1 |
| UnNAS-DARTS | Cityscapes Rot | 75.9±0.19 | 554 | 5.1 |
| UnNAS-DARTS | Cityscapes Color | 75.2±0.15 | 594 | 5.1 |
| UnNAS-DARTS | Cityscapes Jigsaw | 75.5±0.06 | 566 | 5.0 |
+
+Table 1: ImageNet-1K classification results of the architectures searched by NAS and UnNAS algorithms. Rows in gray correspond to invalid UnNAS configurations where the search and evaluation datasets are the same. $\dagger$ is our training result of the DARTS architecture released in [19].
+
+For IN1K evaluation, we fix the depth to 14 and adjust the width so that #FLOPs $\in$ [500,600]M. Models are trained for 250 epochs with an auxiliary loss weighted by 0.4, batch size 1024 across 8 GPUs, a cosine learning rate schedule [20] with initial value 0.5, and 5 epochs of warmup. For Cityscapes evaluation, we fix the depth to 12 and adjust the width so that #Params $\in$ [9.5,10.5]M. We train the network for 2700 epochs, with batch size 64 across 8 GPUs and a cosine learning rate schedule with initial value 0.1. For both ImageNet and Cityscapes evaluations, we report the mean and standard deviation of 3 independent training runs of the same architecture. More implementation details are described in the Appendix.
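
The IN1K evaluation schedule just described can be sketched as follows (the linear warmup shape is our assumption; the text states only "5 epochs of warmup"):

```python
import math

def lr_at_epoch(epoch, total_epochs=250, base_lr=0.5, warmup_epochs=5):
    """Cosine learning rate schedule [20] with linear warmup, using the
    hyper-parameters quoted for IN1K evaluation training (a sketch)."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs  # linear ramp to base_lr
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

The rate ramps up to 0.5 over the first 5 epochs and then decays smoothly toward zero by epoch 250.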
+
+We note that under our definition of the UnNAS setup, the same dataset should not be used for both search and evaluation (because this scenario is unrealistic); we provide the IN1K $\rightarrow$ IN1K and Cityscapes $\rightarrow$ Cityscapes results purely as a reference. These settings are analogous to the linear classifier probe on IN1K in conventional unsupervised learning research.
+
+# 6.3 Results
+
+In search-based experiments, the architectures are evaluated on both ImageNet classification, summarized in Table 1, and Cityscapes semantic segmentation, summarized in Table 2. We provide visualization of all NAS-DARTS and UnNAS-DARTS cell architectures in Appendix.
+
+| method | search dataset & task | mIoU | FLOPs (B) | params (M) |
| --- | --- | --- | --- | --- |
| NAS-DARTS† | CIFAR-10 Supv.Cls | 72.6±0.55 | 121 | 9.6 |
| NAS-DARTS | IN1K Supv.Cls | 73.6±0.31 | 127 | 10.2 |
| UnNAS-DARTS | IN1K Rot | 73.6±0.29 | 129 | 10.4 |
| UnNAS-DARTS | IN1K Color | 72.2±0.56 | 122 | 9.7 |
| UnNAS-DARTS | IN1K Jigsaw | 73.1±0.17 | 129 | 10.4 |
| NAS-DARTS | IN22K Supv.Cls | 72.4±0.29 | 126 | 10.1 |
| UnNAS-DARTS | IN22K Rot | 72.9±0.23 | 128 | 10.3 |
| UnNAS-DARTS | IN22K Color | 73.6±0.41 | 128 | 10.3 |
| UnNAS-DARTS | IN22K Jigsaw | 73.1±0.59 | 129 | 10.4 |
| NAS-DARTS | Cityscapes Supv.Seg | 72.4±0.15 | 128 | 10.3 |
| UnNAS-DARTS | Cityscapes Rot | 73.0±0.25 | 128 | 10.3 |
| UnNAS-DARTS | Cityscapes Color | 72.5±0.31 | 122 | 9.5 |
| UnNAS-DARTS | Cityscapes Jigsaw | 74.1±0.39 | 128 | 10.2 |
+
+Table 2: Cityscapes semantic segmentation results of the architectures searched by NAS and UnNAS algorithms. All models are trained from scratch: there is no fine-tuning from an ImageNet checkpoint. Rows in gray correspond to invalid UnNAS configurations where the search dataset is the same as the evaluation dataset. $\dagger$ is our training result of the DARTS architecture released in [19].
+
+UnNAS architectures perform competitively to supervised counterparts. We begin by comparing NAS-DARTS and UnNAS-DARTS when they are performed on the same search dataset. This would correspond to every four consecutive rows in Table 1 and Table 2, grouped together by horizontal lines.
+
+As discussed earlier, strictly speaking the IN1K $\rightarrow$ IN1K experiment is not valid under our definition of UnNAS. For reference, we gray out these results in Table 1. NAS-DARTS on IN1K dataset has the highest performance among our experiments, achieving a top-1 accuracy of $76.3\%$ . However, the UnNAS algorithm variants with Rot, Color, Jigsaw objectives all perform very well (achieving $75.8\%$ , $75.7\%$ and $75.9\%$ top-1 accuracy, respectively), closely approaching the results obtained by the supervised counterpart. This suggests it might be desirable to perform architecture search on the target dataset directly, as also observed in other work [3].
+
+Two valid UnNAS settings include IN22K→IN1K and Cityscapes→IN1K for architecture search and evaluation. For IN22K→IN1K experiments, NAS and UnNAS results across the board are comparable. For Cityscapes→IN1K experiments, among the UnNAS architectures, Rot and Jigsaw perform well, once again achieving results comparable to the supervised search. However, there is a drop for UnNAS-DARTS search with Color objective (with a 75.2% top-1 accuracy). We hypothesize that this might be owing to the fact that the color distribution in Cityscapes images is not as diverse: the majority of the pixels are from road and ground categories of gray colors.
+
+In general, the variances are higher for Cityscapes semantic segmentation (Table 2), but overall UnNAS-DARTS architectures still perform competitively to
+
+NAS-DARTS architectures, as measured by mIoU. For the Cityscapes $\rightarrow$ Cityscapes experiment, we observe that searching directly with the segmentation objective leads to an inferior result (mean $72.4\%$ mIoU) compared to the architectures searched for ImageNet classification. This differs from what was observed in Table 1. However, under this setting, our UnNAS algorithm, in particular the one with the Jigsaw objective, shows very promising results (mean $74.1\%$ mIoU). In fact, when the search dataset is IN22K or Cityscapes, all UnNAS-DARTS architectures perform better than the NAS-DARTS architecture. This is the opposite of what was observed in IN1K $\rightarrow$ IN1K. Results for Cityscapes $\rightarrow$ Cityscapes are grayed out for the same reason as before (invalid under the UnNAS definition).
+
+NAS and UnNAS results are robust across a large variety of datasets and tasks. The three search datasets that we consider are of different nature. For example, IN22K is 10 times larger than IN1K, and Cityscapes images have a markedly different distribution from those in ImageNet. In our experiments, NAS-DARTS/UnNAS-DARTS architectures searched on IN22K do not significantly outperform those searched on IN1K, meaning that they do not appear to benefit from the more abundant images. This reveals new opportunities in designing better algorithms to exploit bigger datasets for neural architecture search. For the Cityscapes $\rightarrow$ IN1K experiments, it is interesting to see that even after switching to a search dataset with markedly distinct images (urban street scenes), we still observe decent performance. The same holds for the reverse direction, IN1K/IN22K $\rightarrow$ Cityscapes, which implies that the search does not severely overfit to the images of the search dataset.
+
+In addition to this robustness to the search dataset distribution, NAS and UnNAS also exhibit robustness to target dataset and task. Classification on ImageNet and segmentation on Cityscapes are different in many ways, but among different combinations of search dataset and task (whether supervised or unsupervised), we do not observe a case where the same architecture performs well on one but poorly on the other.
+
+UnNAS outperforms previous methods. Finally, we compare our UnNAS-DARTS results against existing works. We first note that we are able to achieve a better baseline number with the NAS-DARTS architecture (searched with the CIFAR-10 proxy) compared to what was reported in [19]. This is mainly due to better hyper-parameter tuning we adopt from [5] for the evaluation phase model training. This baseline sets up a fair ground for all the UnNAS experiments; we use the same evaluation phase hyper-parameters across different settings.
+
+On ImageNet, our UnNAS-DARTS architectures can comfortably outperform this baseline by up to $1\%$ classification accuracy. In fact, the extremely competitive UnNAS-DARTS results also outperform the previous best result $(75.8\%)$ on this search space, achieved with a more sophisticated NAS algorithm [32].
+
+On Cityscapes, there have not been many works that use DARTS variants as the backbone. The closest is Auto-DeepLab [17], but we use a lighter architecture (in that we do not have the decoder) and shorter training iterations, so the results are not directly comparable. Nonetheless, according to our evaluation, the UnNAS architectures perform favorably against the DARTS architecture released in [19] (discovered with the CIFAR-10 proxy). The best UnNAS-DARTS variant (Cityscapes Jigsaw) achieves $74.1\%$ mIoU, outperforming this baseline by $1.5\%$ on average. Overall, our experiments demonstrate that exploring neural architecture search with unsupervised/self-supervised objectives to improve target task performance might be a fruitful direction.
+
+Outperforming previous methods was far from the original goal of our study. Nonetheless, the promising results of UnNAS suggest that in addition to developing new algorithms and finding new tasks, the role of data (in our case, more/larger images) and paradigm (in our case, no human annotations) is also worth attention in future work on neural architecture search.
+
+# 7 Discussion
+
+In this paper, we challenge the common practice in neural architecture search and ask the question: do we really need labels to successfully perform NAS? We approach this question with two sets of experiments. In sample-based experiments, we discover the phenomenon that the architecture rankings produced with and without labels are highly correlated. In search-based experiments, we show that the architectures learned without accessing labels perform competitively, not only relative to their supervised counterpart, but also in terms of absolute performance. In both experiments, the observations are consistent and robust across various datasets, tasks, and/or search spaces. Overall, the findings in this paper indicate that labels are not necessary for neural architecture search.
+
+How to learn and transfer useful representations to subsequent tasks in an unsupervised fashion has been a research topic of extensive interest, but the discovery of neural network architectures has been driven solely by supervised tasks. As a result, current NAS products or AutoML APIs typically have the strict prerequisite for users to "put together a training dataset of labeled images" [1]. An immediate implication of our study is that the job of the user could potentially be made easier by dropping the labeling effort. In this sense, UnNAS could be especially beneficial to the many applications where data constantly comes in at large volume but labeling is costly.
+
+At the same time, we should still ask: if not labels, then what factors are needed to reveal a good architecture? A meaningful unsupervised task seems to be important, though the several pretext tasks considered in our paper do not exhibit significant difference in either of the two experiments. In the future we plan to investigate even more and even simpler unsupervised tasks. Another possibility is that the architecture quality is mainly decided by the image statistics, and since the datasets that we consider are all natural images, the correlations are high and the results are comparable. This hypothesis would also suggest an interesting, alternative direction: that instead of performing NAS again and again for every specific labeled task, it may be more sensible to perform NAS once on large amounts of unlabeled images that capture the image distribution.
+
+# References
+
+1. Google AutoML Vision API Tutorial (2019, accessed Nov 14, 2019), https://cloud.google.com/vision/automl/docs/tutorial
+2. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. Journal of Machine Learning Research 13(Feb), 281-305 (2012)
+3. Cai, H., Zhu, L., Han, S.: ProxylessNAS: Direct neural architecture search on target task and hardware. In: ICLR (2019)
+4. Chen, L.C., Collins, M., Zhu, Y., Papandreou, G., Zoph, B., Schroff, F., Adam, H., Shlens, J.: Searching for efficient multi-scale architectures for dense image prediction. In: NeurIPS (2018)
+5. Chen, X., Xie, L., Wu, J., Tian, Q.: Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In: ICCV (2019)
+6. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The Cityscapes dataset for semantic urban scene understanding. In: CVPR (2016)
+7. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR (2009)
+8. Doersch, C., Gupta, A., Efros, A.A.: Unsupervised visual representation learning by context prediction. In: ICCV (2015)
+9. Dosovitskiy, A., Springenberg, J.T., Riedmiller, M., Brox, T.: Discriminative unsupervised feature learning with convolutional neural networks. In: NeurIPS (2014)
+10. Ghiasi, G., Lin, T.Y., Le, Q.V.: NAS-FPN: Learning scalable feature pyramid architecture for object detection. In: CVPR (2019)
+11. Gidaris, S., Singh, P., Komodakis, N.: Unsupervised representation learning by predicting image rotations. In: ICLR (2018)
+12. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: CVPR (2020)
+13. Huber, P.J.: Robust statistics. Springer (2011)
+14. Kolesnikov, A., Zhai, X., Beyer, L.: Revisiting self-supervised visual representation learning. In: CVPR (2019)
+15. Kornblith, S., Shlens, J., Le, Q.V.: Do better ImageNet models transfer better? In: CVPR (2019)
+16. Krizhevsky, A.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
+17. Liu, C., Chen, L.C., Schroff, F., Adam, H., Hua, W., Yuille, A.L., Fei-Fei, L.: Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In: CVPR (2019)
+18. Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L.J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search. In: ECCV (2018)
+19. Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. In: ICLR (2019)
+20. Loshchilov, I., Hutter, F.: SGDR: Stochastic gradient descent with warm restarts. In: ICLR (2017)
+21. Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: ECCV (2016)
+22. Oord, A.v.d., Li, Y., Vinyals, O.: Representation learning with contrastive predictive coding. arXiv:1807.03748 (2018)
+23. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. In: ICML (2018)
+
+24. Radosavovic, I., Johnson, J., Xie, S., Lo, W.Y., Dollar, P.: On network design spaces for visual recognition. In: ICCV (2019)
+25. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: AAAI (2019)
+26. Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J., Le, Q.V., Kurakin, A.: Large-scale evolution of image classifiers. In: ICML (2017)
+27. So, D.R., Liang, C., Le, Q.V.: The evolved transformer. In: ICML (2019)
+28. Spearman, C.: The proof and measurement of association between two things. The American Journal of Psychology (1904)
+29. Tian, Y., Krishnan, D., Isola, P.: Contrastive multiview coding. arXiv:1906.05849 (2019)
+30. Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. In: ICCV (2015)
+31. Wu, Z., Xiong, Y., Yu, S., Lin, D.: Unsupervised feature learning via non-parametric instance discrimination. In: CVPR (2018)
+32. Xu, Y., Xie, L., Zhang, X., Chen, X., Qi, G.J., Tian, Q., Xiong, H.: PC-DARTS: Partial channel connections for memory-efficient differentiable architecture search. In: ICLR (2020)
+33. Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K., Hutter, F.: NAS-Bench-101: Towards reproducible neural architecture search. In: ICML (2019)
+34. Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: ECCV (2016)
+35. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2017)
+36. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: CVPR (2018)
\ No newline at end of file
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/images.zip b/arelabelsnecessaryforneuralarchitecturesearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9980f1f6c85685ae8ea99536b0fe639fbce7327b
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef9f8c61f5e0e533f75fd33d3ee0af8467456a93adb118a11241ebc2166d1241
+size 388046
diff --git a/arelabelsnecessaryforneuralarchitecturesearch/layout.json b/arelabelsnecessaryforneuralarchitecturesearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..06126c87101e65c641cc0680b4f34973a06d0ede
--- /dev/null
+++ b/arelabelsnecessaryforneuralarchitecturesearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cec95284718bc795813a687bf7a98ff1998c6817f21583a195fed2b3a3dac57
+size 345657
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_content_list.json b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f3449757ba04c37de5ca4e6a40c764865323f9a1
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ba84b0276f55ed2ac90a9e9474288a79188ff0dbc545e6338c89e74b9dfcbef
+size 78316
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_model.json b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..505c01beb7bc3f91eac9842cddba6b5cbababbe1
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ff600d415f7cdc8a480d69e91caacfc0a90c1c7860100957e729856aba55d04
+size 100724
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_origin.pdf b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ac68301145c05a393eaac53fe74f4060cde861ac
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/6f61824b-a4fa-40da-95d5-612095d6687c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ba04a0ab2e78d739db1145de4a294d3f60a632d0e5d634bf33e4ca191eb88cf
+size 1233612
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/full.md b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..29c6a43e8507a8b3eb3e31f6850d2cb5a112e539
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/full.md
@@ -0,0 +1,313 @@
+# AssembleNet++: Assembling Modality Representations via Attention Connections
+
+Michael S. Ryoo $^{1,2}$ , AJ Piergiovanni $^{1}$ , Juhana Kangaspunta $^{1}$ , and Anelia Angelova $^{1}$
+
+$^{1}$ Robotics at Google
+
+$^{2}$ Stony Brook University
+
+{mryoo,ajpiergi,juhana,anelia}@google.com
+
+Abstract. We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state-of-the-art. We also confirm that our findings, i.e., having neural connections from the object modality and the use of peer-attention, are generally applicable to different existing architectures, improving their performance. We name our model explicitly as AssembleNet++. The code will be available at: https://sites.google.com/corp/view/assemblenet/
+
+Keywords: video understanding, activity recognition, attention
+
+# 1 Introduction
+
+Video understanding is a fundamental problem in vision with many novel approaches proposed recently. While many advanced neural architectures have been used for video understanding [4, 43], including two-stream and multi-stream ones [4, 9, 34], learning of interactions between raw input modalities (e.g., RGB and motion) and semantic input modalities such as objects in the scene (e.g., persons and objects) has been limited.
+
+Inspired by previous work, e.g. AssembleNet architectures for videos [34] and RandWire architectures for images [53] which proposed random or targeted connectivity between layers in a neural network, we create a family of powerful video models that explicitly learn interactions between spatial object-specific information and raw appearance and motion features. In particular, inter-block attention connectivity is searched for to best capture the interplay between different modality representations.
+
+The main technical contributions of this paper include:
+
+1. Optimizing neural architecture connectivity for object modality fusion. We discover that models with 'omnipresent' connectivity from the object input allow the best multi-modal fusion.
+2. Learning of video models with peer-attention on the connections. We introduce a new one-shot model formulation to efficiently search for architectures with better peer-attention connectivity.
+
+We test the approach extensively on challenging video understanding datasets, showing notable improvements: compared to the baseline backbone architecture we use, our new one-shot attention search model with object modality obtains $+12.6\%$ on Charades classification task and $+6.22\%$ on Toyota Smarthome dataset. Our approach also outperforms reported numbers of existing approaches on both datasets, establishing new state-of-the-art.
+
+# 2 Previous work
+
+Video CNNs Convolutional neural networks (CNNs) for videos [38, 4, 8, 9, 31, 46, 18, 54, 42, 43] are a popular approach to video understanding; for example, 3D video CNNs [40, 17, 41, 4, 42, 12], $(2 + 1)\mathrm{D}$ CNNs [43], and even novel architecture-searched models [27, 28, 34] are widely used. Action recognition has also been the topic of intense research [11, 45].
+
+Action recognition with objects Action recognition with objects has been studied for many years [26]. The presence of specific objects in video frames has been shown to be important for video recognition, even in the context of advanced features learned by deep neural models, e.g., Sigurdsson et al. [37]; they are useful even if provided as a single label per frame. This is not surprising, as many activities, e.g. 'speaking on the phone' or 'reading a book', are primarily determined by the objects themselves. Furthermore, clues about the location of persons, e.g., from 2D human pose, have also been shown to be beneficial [5]. Recent video CNNs have also tried to integrate object-related information, from segmentation [2, 32] or pre-training on image datasets [6]. One-time late (or intermediate) fusion of object representations with RGB and flow representations has been widely used (e.g., [24]). Ji et al. [16] modeled scene relations on top of video CNNs using graph neural networks, for better usage of object information. However, we are not aware of any prior work that 'learns' the connectivity among input modalities including object information, as we do in this paper.
+
+Attention Use of attention within CNNs has been widely studied. Vaswani et al. [44] investigated different forms and applications of attention while focusing on self-attention. Hu et al. [14] introduced Squeeze-and-Excitation, which is a form of channel-wise self-attention. Researchers also developed other forms of channel-wise self-attention [50, 10, 15, 19, 47], often together with spatial self-attention. Attention was also applied to video CNN models [29, 22, 5]. However, we are not aware of prior work explicitly searching for inter-block attention connectivity (i.e., peer-attention) as we do in this paper.
+
+Neural architecture search Neural Architecture Search (NAS) is the concept of automatically finding superior architectures based on training data [58, 59, 20, 33, 39]. Multiple different strategies, including learning of a reinforcement learning controller (e.g., [58, 59]) as well as evolutionary algorithms (e.g., [33]), have been developed for NAS. In particular, one-shot differentiable architecture search [3, 21] has been successful as it does not require a massive amount of model training. RandWire networks [53] could also be interpreted as a form of differentiable architecture search, as they learn weights of (random) connections to minimize the classification loss.
+
+However, architecture search for neural attention connectivity has been very limited. Ahmed and Torresani [1] searched for layer connectivity and Ryoo et al. [34] searched for multi-stream connectivity for video CNNs, but without any attention learning, which becomes a crucial component when we have a mixture of input modalities. We believe this is the first paper to search for models with attention connectivity.
+
+# 3 Approach
+
+# 3.1 Preliminaries
+
+This section describes the video CNN architecture framework, which will be used as a base for developing our approach.
+
+We here adopt a multi-stream, multi-block architecture design from AssembleNet [34]. The AssembleNet design allows learning of connections between modalities and their intermediate features. This architecture is similar to other two-stream models [4, 9], but is more flexible in two ways: 1) it allows the use of more than two streams, and 2) it allows connections to be formed (and potentially learned) between individual blocks of the neural architecture.
+
+More specifically, the architecture we use has multiple input blocks, each corresponding to an input modality. The network blocks have a structure inspired by ResNet architectures [13]. Each input block is composed of a small number of pooling and convolutional layers attached directly on top of the input. The input blocks are then connected to network blocks at the next level. We follow the $(2 + 1)\mathrm{D}$ ResNet block structure from [43], where each module is composed of one 1D temporal conv., one 2D spatial conv., and one 1x1 conv. layer. A block is formed by repeating the $(2 + 1)\mathrm{D}$ residual module multiple times. This allows a fair and direct comparison between our approach and previous models using the same module and block [43, 9, 34].
+
+Each network block (or block for short) can be connected to any block from any modality at the next level, including its own. Blocks are organized at levels so that connections do not form cycles. Connections can also be formed to skip levels. We note that since many connections between blocks are formed early, the neural blocks themselves will often contain information from many input modalities as early as the first level of the network.
+
+Figure 4 (a) shows one example architecture, where the structure of the network and example connectivity can be seen.
+
+# 3.2 Input modalities and semantics
+
+In addition to the standard raw RGB video input, motion information is added as a separate modality. More specifically, optical flow, either pre-computed for the dataset [55], or trained on the fly [7, 30], has been shown to be a crucial input for achieving better accuracy across the board [4].
+
+We here propose to use object segmentation information as a separate 'object' modality. Objects and their locations provide semantic information which conveys useful clues about activities in a video. Crucially, the semantic information is incorporated in the full architecture so that it is able to interact with other modalities and the intermediate features from them (as described in Sections 3.3 to 3.5), to maximize its utilization for the best representation.
+
+Input block details We construct an input block for each input modality. Each input block is composed of one pooling and up to two convolutional layers for the raw RGB and optical flow inputs, and just one pooling layer for the semantic object inputs, applied directly on top of the inputs. In the object input block, a segmentation mask having an integer class value per pixel is converted into an $H \times W \times C_O$ tensor using a one-hot operation, where $C_O$ is the number of object classes. The segmentation masks are obtained from a model trained on an unrelated image-based dataset.
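As a minimal NumPy sketch (not the paper's implementation), the one-hot conversion and the single spatial max pooling of the object input block could look as follows; the pooling size here is a hypothetical choice:

```python
import numpy as np

def object_input_block(seg_mask, num_classes, pool=2):
    """Sketch of the object input block: one-hot a (H, W) integer
    segmentation mask into (H, W, C_O), then apply one spatial max
    pooling (the pool size is an assumed hyperparameter)."""
    one_hot = np.eye(num_classes, dtype=np.float32)[seg_mask]   # (H, W, C_O)
    h, w, c = one_hot.shape
    one_hot = one_hot[: h - h % pool, : w - w % pool]           # crop to a multiple
    # Max pooling keeps a class channel active if any pixel in the
    # window belongs to that class.
    return one_hot.reshape(h // pool, pool, w // pool, pool, c).max(axis=(1, 3))

mask = np.array([[0, 1], [2, 2]])
x = object_input_block(mask, num_classes=3)
print(x.shape)  # (1, 1, 3)
```

Since all three classes appear in the single 2x2 window, every channel of the pooled output is active.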
+
+# 3.3 Learning weighted connections
+
+Blocks in the network can potentially form connections with one or more blocks. While connectivity and the strength of the connectivity could also be hand-coded, we formulate our networks so that they are learnable.
+
+Let $G$ be the connectivity graph of the network, where $(j, i)$ specifies that there is a connection from the $j$ th block to the $i$ th block. We allow each block to receive its input from multiple different input blocks as well as intermediate convolutional blocks, and to generate an output. Specifically, we formulate the input of a block as a weighted summation over multiple connections, where we learn one weight for each connection.
+
+$$
+x_{i}^{in} = \sum_{(j,i) \in G} \sigma(w_{ji}) \cdot x_{j}^{out} \tag{1}
+$$
+
+where $i$ and $j$ are block indexes, $x_{i}^{in}$ corresponds to the final input to the $i$ th block, and $x_{i}^{out}$ corresponds to the output of the block. $\sigma$ is a sigmoid function.
+
+Learning the connection weights together with the other convolutional layer parameters via standard backpropagation allows the network to optimize itself on which connections to use and which not to, based on the training data. In our approach, this is done by initially connecting every possible pair of blocks in the graph while using the block levels to avoid cycles, and then learning the weights. We consider every connection $(j,i)$ from the $j$ th block to the $i$ th block as valid as long as $L(j) < L(i)$, where $L(i)$ indicates the level of the block.
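Eq. (1) and the level constraint can be sketched in NumPy as follows; the block outputs, weights, and levels below are illustrative values, not learned ones:

```python
import numpy as np

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def weighted_input(i, edges, weights, outputs, level):
    """Eq. (1): x_i^in = sum over (j, i) in G of sigmoid(w_ji) * x_j^out.
    Only edges with L(j) < L(i) are valid, keeping the graph acyclic."""
    x_in = np.zeros_like(next(iter(outputs.values())))
    for (j, tgt) in edges:
        if tgt != i:
            continue
        assert level[j] < level[i], "edge would create a cycle"
        x_in += sigmoid(weights[(j, tgt)]) * outputs[j]
    return x_in

# Two source blocks feeding block 2 (toy 4-channel outputs).
outputs = {0: np.ones(4), 1: 2 * np.ones(4)}
weights = {(0, 2): 0.0, (1, 2): 100.0}   # sigmoid gives 0.5 and ~1.0
level = {0: 0, 1: 0, 2: 1}
x = weighted_input(2, [(0, 2), (1, 2)], weights, outputs, level)
# x is approximately 0.5 * 1 + 1.0 * 2 = 2.5 per channel
```

In training, the scalars `weights[(j, i)]` would be learned jointly with the convolutional filters by backpropagation.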
+
+# 3.4 Attention connectivity and peer-attention
+
+In addition to having and learning static weights per connection, we use attention to dynamically control the behavior of each connection. The intuition is that objects and activities are correlated, and using attention allows the model to focus on important objects based on motion context and vice versa. For instance, motion features of 'drinking' could suggest another network stream to focus more on objects related to such motion (e.g., 'cups' and 'bottles').
+
+We formulate our connectivity graph $G$ to have one more component for each edge: $((j,i),k)$ , where $k$ is the convolutional block influencing the connection $(j,i)$ via attention. A channel-wise attention is used to implement this behavior. Let $C_i$ be the size of the input channel of block $i$ . For each connection $(j,i)$ , the attention vector of size $C_i$ is computed per frame as:
+
+$$
+A_{i}(x) = \left[a_{1}, \dots, a_{C_{i}}\right] = \sigma\left(f(\operatorname{GAP}(x))\right) \tag{2}
+$$
+
+where $f$ is a function (one fully connected layer in our case) mapping a vector to a vector of size $C_i$. GAP is the global average pooling over the spatial resolution of the input tensor, so that $\mathrm{GAP}(x)$ is a vector per frame.
+
+Using $A_{i}(x)$ , the input for each block $i$ is computed by combining every connection $(j,i)$ while considering its attention from block $k$ :
+
+$$
+x_{i}^{in} = \sum_{((j,i),k) \in G} \sigma(w_{ji}) \cdot \left(A_{i}\left(x_{k}^{out}\right) \cdot x_{j}^{out}\right). \tag{3}
+$$
+
+The simplest special case of our attention is self-attention, which is obtained by making $x_{k}$ and $x_{j}$ identical. In this form, the usage of attention becomes similar to Squeeze-and-Excitation [14].
+
+Importantly, in our approach, we learn to select different $x_{k}$ where $x_{k} \neq x_{j}$, which we discuss more in the following subsection. Attention with $x_{k} \neq x_{j}$ implies that the channels to use for the connection are dynamically decided based on another input modality and peer blocks. We more explicitly name this approach peer-attention. In principle, we define a 'peer' as any block $p$ that could potentially be connected to $i$. In our formulation, where the convolutional blocks are organized into multiple levels (to avoid cycles), the set of peers $P$ for a connection $(j,i)$ is computed as $P_{(j,i)} = \{p \mid L(p) < L(i)\}$, where $L(p)$ indicates the level of the block $p$. We consider the attention connection $((j,i),k)$ to be valid as long as $k \in P_{(j,i)}$.
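A single-frame NumPy sketch of Eqs. (2) and (3); the FC weights, tensor shapes, and random values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x_peer, w_fc):
    """Eq. (2): A_i(x) = sigmoid(FC(GAP(x))), computed per frame.
    x_peer: (H, W, C_k) peer output; w_fc: (C_k, C_i) FC weights."""
    gap = x_peer.mean(axis=(0, 1))      # global average pooling -> (C_k,)
    return sigmoid(gap @ w_fc)          # channel gates of size C_i

def peer_attended_input(edges, outputs, w_conn, w_fc):
    """Eq. (3): x_i^in = sum over ((j, i), k) of
    sigmoid(w_ji) * (A_i(x_k^out) * x_j^out)."""
    total = 0.0
    for (j, i, k) in edges:
        gates = channel_attention(outputs[k], w_fc[(j, i)])  # from peer k
        total = total + sigmoid(w_conn[(j, i)]) * (gates * outputs[j])
    return total

rng = np.random.default_rng(0)
outputs = {0: rng.normal(size=(4, 4, 8)), 1: rng.normal(size=(4, 4, 8))}
w_fc = {(0, 2): rng.normal(size=(8, 8))}
# Peer-attention: block 1 gates the connection from block 0 to block 2.
x_in = peer_attended_input([(0, 2, 1)], outputs, {(0, 2): 0.0}, w_fc)
print(x_in.shape)  # (4, 4, 8)
```

Setting `k == j` in the edge list recovers the self-attention special case.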
+
+Figure 1 compares connectivity without attention and connectivity with self- and peer-attention.
+
+
+(a) Connectivity without attention
+
+
+(b) Connectivity with self-attention
+
+
+(c) Connectivity with peer-attention
+Fig. 1. Examples of convolutional block connectivity (a) without attention, (b) with self-attention, and (c) with peer-attention. Red lines indicate weighted connections from Section 3.3. Blue curves specify the attention connectivity. GAP is global average pooling, and FC is a fully connected layer. Our attention is channel-wise attention, and it is applied per frame.
+
+# 3.5 One-shot attention search model
+
+Given a set of convolutional blocks, instead of hand-designing peer-attention connections, we search for the attention connectivity. We introduce a new one-shot attention search model, which optimizes the model's peer-attention configuration directly based on training data.
+
+Our one-shot attention search model is formulated by combining attention from all possible peer blocks for each connection with learnable weights. The idea is to enable the model to soft-select the best peer for each block by learning differentiable weights, maximizing the recognition performance. As a consequence, all possible attention connectivity is considered, and the search is done solely with standard backpropagation.
+
+For each pair of blocks $(j,i)$ where $L(j) < L(i)$ , we place a weight for every $k \in P_{(j,i)}$ . Let $h$ be a weight vector of size $m = |P_{(j,i)}|$ , and $X_P^{out} = [x_1^{out}, \ldots, x_m^{out}]$ be the tensor concatenating $x_k^{out}$ of every possible peer $k$ in $P$ . Then, we reformulate Equation 3 as:
+
+$$
+x_{i}^{in} = \sum_{(j,i) \in G} \sigma(w_{ji}) \cdot \left(A(x) \cdot x_{j}^{out}\right) \quad \text{where} \quad x = \mathbf{1}^{T} \left(\operatorname{softmax}(h) \cdot X_{P_{(j,i)}}^{out}\right). \tag{4}
+$$
+
+$\mathbf{1}$ is a vector of size $m$ with all elements equal to 1, making $x$ a weighted sum of the peer block outputs $x_{k}^{out}$. Use of the softmax function allows one-hot-like behavior (i.e., selecting one peer to control the attention) based on the learned weights $h = [h_1, \ldots, h_m]$. Figure 2 visualizes the process.
+
+The entire process is fully differentiable, allowing the one-shot training to learn the attention weights $h$ together with the connection weights $w_{ji}$. This is unlike AssembleNet, which partially relies on exponential mutations to explore connections. Once the attention weights are found, we can either prune the connections by leaving only the argmax over $h_k$, or keep them with softmax. We confirmed that the two do not make a difference in practice, allowing us to maintain only one peer-attention per block as shown in Figure 1 (c). Peer-attention only causes a $0.151\%$ increase in computation, which we describe further in the Appendix.
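The softmax-weighted peer mixing of Eq. (4), and the post-training pruning to a single peer, can be sketched as follows (shapes and values are illustrative, not from the paper):

```python
import numpy as np

def softmax(h):
    e = np.exp(h - np.max(h))
    return e / e.sum()

def one_shot_peer_mix(peer_outputs, h):
    """Inner term of Eq. (4): x = softmax(h)-weighted sum over the
    outputs of all candidate peers, so every attention connection stays
    differentiable during the one-shot search."""
    w = softmax(np.asarray(h, dtype=np.float64))   # (m,)
    stacked = np.stack(peer_outputs)               # (m, H, W, C)
    return np.tensordot(w, stacked, axes=1)        # weighted sum -> (H, W, C)

def prune_to_argmax(peer_outputs, h):
    """After training, keep only the highest-weight peer, i.e., one
    peer-attention per block as in Figure 1 (c)."""
    return peer_outputs[int(np.argmax(h))]

peers = [np.zeros((2, 2, 3)), np.ones((2, 2, 3))]
mixed = one_shot_peer_mix(peers, [0.0, 0.0])   # equal weights -> 0.5 everywhere
kept = prune_to_argmax(peers, [0.1, 2.0])      # selects the second peer
```

With strongly peaked learned weights `h`, the softmax mix and the pruned argmax selection become numerically indistinguishable, consistent with the observation that pruning does not change results in practice.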
+
+
+Fig. 2. Visualization of our one-shot attention search model. Magenta connections illustrate weights for the attention connection $h$ . The softmax-sum module in the illustration corresponds to Eq. 4, fusing attentions from different blocks. These weights are fully differentiable and are learned together with convolutional filters, enabling the one-shot connectivity search.
+
+# 3.6 Model implementation details
+
+In order to provide fair comparison to previous work, we comply with the same block structure as AssembleNet [34], which by itself is comparable to $(2 + 1)\mathrm{D}$ ResNet-50.
+
+We build two RGB input blocks (whose temporal resolutions are searched), two optical flow input blocks, and one object input block. RGB blocks and optical flow blocks have the same number of channels and layers as AssembleNet, while the object input block only has one max spatial pooling layer which does not increase the number of parameters of the model.
+
+The object input block obtains its input from a fixed object segmentation model trained independently with the ADE-20K object segmentation dataset [57]. We treat this module as a blackbox and do not propagate gradients into it. Because this is an off-the-shelf segmentation module and was not trained on any video dataset, its outputs become noisy when directly applied to video datasets as shown in Figure 3.
+
+Our model has convolutional blocks at four levels (five if we count the input blocks). The sum of channel sizes is held constant at each level (regardless of the number of blocks), in order to maintain the total number of parameters. The total channels are 128 at the input level, and 128, 256, 512, and 512 at levels 1 to 4, following the ResNet module and block formulation. As a result, all models have an equivalent number of parameters to standard two-stream CNNs with $(2 + 1)\mathrm{D}$ residual modules.
+
+Each convolutional block was implemented by alternating 2D residual modules and $(2 + 1)\mathrm{D}$ residual modules, as was done in [43, 34]. A $(2 + 1)\mathrm{D}$ module is composed of a 1D temporal convolution layer followed by a 2D spatial convolution layer, followed by a 1x1 convolution layer. The temporal resolution of each block is controlled using temporally dilated 1D convolution, avoiding hard frame down-sampling. More details of the blocks are in the supplementary material.
+
+Although the number of blocks at each level could be hand-designed, we use AssembleNet architecture search (with an evolutionary algorithm) to find the optimal combination of convolutional blocks and their temporal resolutions. Once we have the blocks, we connect them with weighted connections (doing a weighted summation) following Section 3.3. Finally, the one-shot attention search model is obtained by implementing our peer-attention with the softmax-weighted sum, as described in Section 3.5.
+
+Approach summary The overall process could be summarized as follows:
+
+1. Prepare blocks. We use AssembleNet evolution to find convolutional blocks, roughly connected.
+2. Initialize our one-shot search model by including all possible block connections as well as the new attention connections, as described in Sections 3.3 to 3.5.
+3. Train the one-shot model, learning the attention connectivity weights.
+4. Prune low weight connections to make the model more compact. We maintain only one peer-attention per block.
+
+We name our final approach specifically as AssembleNet++.
+
+
+Fig. 3. Examples of the segmentation CNN applied directly on Charades video frames with in-home activities. These noisy masks serve as an input to the object input block, suggesting that our video model is required to learn to handle such noisy input.
+
+# 4 Experimental results
+
+We conduct experiments on popular video recognition datasets: the multi-class multi-label Charades [36], and the recent Toyota Smarthome dataset [5], which records people performing natural activities in their homes.
+
+We note that we report results without any pre-training on a large-scale video dataset, unlike most previous work. Regardless, AssembleNet++ outperforms prior work. We conduct multiple ablation experiments to confirm the benefit of our multi-modal model formulation with peer-attention and our one-shot attention search.
+
+Charades dataset. The Charades dataset [36] is composed of continuous videos of humans interacting with objects. It is a multi-class multi-label video dataset with a total of 66,500 annotations. The videos involve motion of small objects in real-world home environments, making it a very challenging dataset. Example video frames of the Charades dataset can be found in Figure 3. We follow the standard v1 classification setting of the dataset, reporting mAP (%). We use Charades as our main dataset for ablations, as it is a realistic dataset explicitly requiring modeling of interactions between object information and other raw inputs such as RGB.
+
+Toyota Smarthome dataset The Toyota Smarthome dataset [5] consists of real-world activities of humans in their daily lives, such as reading, watching TV, making coffee or breakfast, etc. Humans often interact with objects in this dataset (e.g., 'drink from a can' and 'cook-cut'). The dataset contains 16,115 videos of 31 action classes, and the videos are taken from 7 different camera viewpoints. We only use the RGB frames from this dataset, although depth and skeleton inputs are also available.
+
+Baselines As a baseline model, we use the AssembleNet architecture backbone [34], which consists of multiple configurable $(2 + 1)\mathrm{D}$ ResNet blocks. Ablations, including our models without the object input block and without peer-attention, are also implemented and compared.
+
+In our ablation experiments comparing different aspects of the proposed approach (Sections 4.1, 4.2, 4.4, and 4.5), we train the models for $50\mathrm{K}$ iterations with cosine decay for the Charades dataset. When using the Toyota dataset, we train our models for $15\mathrm{K}$ iterations with cosine decay, as it is a smaller dataset than Charades (66,500 annotations in Charades vs. 16,115 segmented videos in Toyota Smarthome). Further, when comparing against the state-of-the-art, we use a learning rate following a cosine decay function with warm restarts [23], which we discuss more in Section 4.3.
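Cosine decay with warm restarts [23] can be sketched as follows; the number of restart cycles and the base learning rate here are hypothetical, not the paper's actual values:

```python
import math

def sgdr_lr(step, total_steps, num_cycles=2, base_lr=0.1):
    """Cosine-decay learning rate that 'warm restarts' to base_lr at the
    start of each cycle (a sketch with assumed hyperparameters)."""
    cycle_len = total_steps / num_cycles
    t = (step % cycle_len) / cycle_len          # progress within the cycle
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

# The lr starts at base_lr, decays toward 0, then restarts mid-training.
print(sgdr_lr(0, 100), sgdr_lr(25, 100), sgdr_lr(50, 100))
```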
+
+Since the model performs one-shot architecture search to discover the attention connectivity, training is efficient and takes only $20\sim 30$ hours.
+
+# 4.1 Using object modality
+
+In this ablation experiment, we explore the importance of the object input. For this study, our model learns the block connectivity from the Charades training, while not using any attention (i.e., they look like Figure 1 (a)).
+
+Figure 4 (a) shows the best connectivity the one-shot model discovered. This is obtained by (i) evolving the blocks with 100 rounds of architecture evolution, (ii) connecting all blocks, (iii) training the weights in one shot, and then (iv) pruning the low-weight connections. The connection weights $w_{ji}$ with values higher than 0.2 are visualized. Interestingly, the best model is obtained by connecting the object input block to every possible block. The model with this 'omnipresent' object connectivity obtains 50.43 mAP on Charades, compared to 47.18 mAP for the model without any object connections, attesting to the usefulness of the object modality. The learned weight of each object connection is more than 0.7, suggesting strong usage.
+
+Motivated by the finding that using object information at every block is beneficial (i.e., omnipresent object modality connectivity), we ran an experiment to investigate how performance changes for the best models found with different numbers of object connections. Figure 4 (b) shows the Charades classification performance of our best found models with full vs. restricted object input usage. The x-axis of the graph corresponds to how often the model uses the direct input from the object input block: 0 means that it does not use object information at all, and 1 means it fuses the object information at every block. We clearly observe that performance increases proportionally to the usage of the object information.
+
+
+Fig. 4. (a) Learned connectivity graph of the model and (b) Charades classification performance per object connection ratio. The highlighted blue edges correspond to the direct connections from the object input block.
+
+
+
+# 4.2 Attention search
+
+Next, we confirm the effectiveness of our proposed AssembleNet++ with attention search. Table 1 illustrates how much performance gain we get by using attention connections as opposed to the standard weighted connections.
+
+In addition to attention connectivity with self-attention (Figure 1 (b)) and peer-attention (Figure 1 (c)), we implemented and tested 'static attention', in which fixed attention weights are learned that are not influenced by any input. We observe that our approach of one-shot attention search (with peer-attention) greatly improves the performance. The benefit is even larger (by $\sim 6\%$ mAP) when using the object input.
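A minimal NumPy sketch of the channel-wise attention variants compared here; the layer shapes and the use of global average pooling are our assumptions for illustration, not the paper's exact implementation. Self-attention is the special case where the peer is the block's own input:

```python
import numpy as np

def channel_attention(target, peer, w, b):
    """Modulate `target` (C, T, H, W) with channel-wise attention weights
    computed from `peer` (Cp, T, H, W) via an FC layer (w: C x Cp, b: C).
    peer = target gives self-attention; a different block's features give
    peer-attention."""
    pooled = peer.mean(axis=(1, 2, 3))               # squeeze: (Cp,)
    gates = 1.0 / (1.0 + np.exp(-(w @ pooled + b)))  # sigmoid FC: (C,) in (0, 1)
    return target * gates[:, None, None, None]       # scale each channel
```

'Static attention' corresponds to replacing `gates` with a learned constant vector that ignores the input entirely.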
+
+Table 1. Comparison between performance with and without attention connections on Charades (mAP). The models were trained for 50K iterations.
+
+| Attention | without object | with object |
+| --- | --- | --- |
+| None | 47.18 | 50.43 |
+| Static | 48.82 | 51.15 |
+| Self | 51.91 | 55.40 |
+| Peer | 52.39 | 56.38 |
+
+# 4.3 Comparison to the state-of-the-art
+
+In this section, we compare the performance of our AssembleNet++ model with previous state-of-the-art approaches. We use the model with the optimal peer-attention found by our one-shot attention search, and compare it against the results reported in previous work. Unlike most existing methods, which benefit from pre-training on a large-scale video dataset, we demonstrate that we are able to outperform the state-of-the-art without such pre-training. Below, we show our results on the Charades and Toyota Smarthome datasets.
+
+We also note that the proposed learned attention mechanisms are very powerful, as seen in the ablation experiments in Section 4.2: even without object information and without pre-training, they can outperform, or be competitive with, the state-of-the-art.
+
+Charades dataset Table 2 shows the results of our method on the Charades dataset. Notice that we establish a new state-of-the-art number on this dataset, outperforming previous approaches that rely on pre-training. Further, we emphasize that our model has a maximum depth of 50 convolutional layers, which we denote explicitly as 'AssembleNet++ 50'. Without pre-training, our model even outperforms the 101-layer AssembleNet, which uses a significantly larger number of parameters. We also
+
+Table 2. Classification performance on the Charades dataset (mAP).
+
+| Method | Pre-training | mAP |
+| --- | --- | --- |
+| Two-stream [35] | UCF101 | 18.6 |
+| CoViAR [52] (Compressed) | ImageNet | 21.9 |
+| Asyn-TF [35] | UCF101 | 22.4 |
+| MultiScale TRN [56] (RGB) | ImageNet | 25.2 |
+| I3D [4] (RGB-only) | Kinetics | 32.9 |
+| I3D from [48] (RGB-only) | Kinetics | 35.5 |
+| I3D + Non-local [48] (RGB-only) | Kinetics | 37.5 |
+| EvaNet [28] (RGB-only) | Kinetics | 38.1 |
+| STRG [49] (RGB-only) | Kinetics | 39.7 |
+| LFB-101 [51] (RGB-only) | Kinetics | 42.5 |
+| SGFB-101 [16] (RGB-only) | Kinetics | 44.3 |
+| SlowFast-101 [9] (RGB+RGB) | Kinetics | 45.2 |
+| Two-stream (2+1)D ResNet-101 | Kinetics | 50.6 |
+| AssembleNet-50 [34] | MiT | 53.0 |
+| AssembleNet-50 [34] | Kinetics | 56.6 |
+| AssembleNet-101 [34] | Kinetics | 58.6 |
+| AssembleNet-50 [34] | None | 47.2 |
+| AssembleNet++ 50 (ours) without object | None | 54.98 |
+| AssembleNet++ 50 (ours) | None | 59.8 |
+
+note that the use of the object modality and attention mechanisms proposed here improves the corresponding AssembleNet baseline by $+12.6\%$.
+
+For this experiment, we use a learning rate schedule with 'warm_restart' [23]. More specifically, we use a cosine decay function that restarts to the provided initial learning rate at every cycle. The motivation is to train our models with the same total number of training iterations as the other state-of-the-art methods. We apply 100K training iterations with two 50K cycles while only using Charades videos, in contrast to previous work (e.g., AssembleNet [34] and SlowFast [9]) that used 50K iterations of pre-training on another dataset followed by 50K iterations of fine-tuning on Charades. We note that the result of our model without 'warm_restart' and without pre-training (56.38 mAP, as seen in the ablation results in Table 1) is also very competitive with the state-of-the-art.
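The schedule can be sketched as follows; the base learning rate of 0.1 is a hypothetical value (not stated here), and a single cycle reduces to the plain cosine decay used in the ablations:

```python
import math

def warm_restart_lr(step, cycle_len=50_000, base_lr=0.1, min_lr=0.0):
    """Cosine decay that restarts to base_lr at the start of every cycle
    (SGDR [23]). Two 50K cycles give the 100K-iteration schedule above."""
    t = (step % cycle_len) / cycle_len  # position within the current cycle
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```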
+
+Toyota Smarthome dataset We follow the dataset's Cross-Subject (CS) evaluation setting and measure performance in terms of the dataset's two standard metrics [5]: (1) activity classification accuracy $(\%)$ and (2) mean per-class accuracy $(\%)$. Table 3 reports our results. Compared to [5], which benefits from Kinetics pre-training and additional 3D skeleton joint information, we obtain superior performance while training the model from scratch and without skeletons. We believe we are establishing new state-of-the-art numbers on this dataset.
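The second metric weights every class equally regardless of how many test clips it has, which matters for imbalanced datasets such as Toyota Smarthome. A minimal sketch:

```python
from collections import defaultdict

def mean_per_class_accuracy(y_true, y_pred):
    """Average of per-class recalls: compute accuracy separately for each
    ground-truth class, then take the unweighted mean over classes."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```

On a skewed label distribution, a classifier that mostly predicts the majority class can score high on plain accuracy but poorly on this metric.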
+
+Table 3. Performance on the Toyota Smarthome dataset. Classification % and mean per-class accuracy % are reported. Note that our models are being trained from scratch without any pre-training, while the previous work (e.g., [5]) relies on Kinetics pretraining.
+
+| Method | Classification % | mean per-class |
+| --- | --- | --- |
+| LSTM [25] | - | 42.5 |
+| I3D (with Kinetics pre-training) | 72.0 | 53.4 |
+| I3D (pre-trained) + NL [48] | - | 53.6 |
+| I3D (pre-trained) + separable STA [5] | 75.3 | 54.2 |
+| Baseline AssembleNet-50 | 77.77 | 57.42 |
+| Baseline + self-attention | 77.59 | 57.84 |
+| Ours (object + self-attention) | 79.08 | 62.30 |
+| Ours (object + peer-attention) | 80.64 | 63.64 |
+
+Table 4. Comparing AssembleNet++ using peer-attention vs. a modification using a 1x1 convolutional layer instead of attention. They use an identical number of parameters. Charades classification accuracy (mAP) and Toyota mean per-class accuracy (%) are reported.
+
+| Model | Charades | Toyota |
+| --- | --- | --- |
+| Base | 50.43 | 59.16 |
+| Base + 1x1 conv. | 50.24 | 59.44 |
+| Random peer-attention | 53.40 | 60.23 |
+| Our peer-attention | 56.38 | 63.64 |
+
+# 4.4 Ablation
+
+In this experiment, we explicitly compare AssembleNet++ using peer-attention with modifications that use the same number of parameters. Specifically, we compare our model against (i) a model using 1x1 convolutional layers instead of attention and (ii) a model using peer-attention but with random attention connectivity. For (i), we make the number of 1x1 convolutional layer parameters identical to the number of parameters in the FC layers used for attention. Table 4 compares the accuracies of these models on the Charades and Toyota Smarthome datasets. While using an identical number of parameters, our one-shot peer-attention search model obtains superior results.
+
+# 4.5 General applicability of the findings
+
+Based on the findings that (1) having 'omnipresent' neural connectivity from the object modality and (2) using attention connectivity are beneficial, we investigate whether these findings are generally applicable to many different CNN models. We add object modality connections and attention to (i) a standard $\mathrm{R}(2 + 1)\mathrm{D}$ network, (ii) a two-stream $\mathrm{R}(2 + 1)\mathrm{D}$ network, (iii) the original AssembleNet, and (iv) our Charades-searched network (without object connectivity and attention), and observe how their recognition accuracy changes compared to the original models. Our model without object and attention is obtained by manually removing connections from the object input block.
+
+Table 5 shows the results on Charades. We confirm that our findings are applicable to manually designed as well as architecture-searched models. The increase in accuracy is significant for all architectures. Note that our architecture itself is not significantly superior to AssembleNet without the object input. However, since its connectivity was searched together with the object input block (i.e., Section 3.5), our model better takes advantage of the object input via peer-attention. 50K training iterations with cosine decay were used for this comparison.
+
+Table 5. Comparison between original CNN models (without object modality and without attention) and their modifications based on our attention connectivity and object modality. The value corresponding to 'AssembleNet++' for the column 'base' is obtained by manually removing connections from the object input block and removing attention from our final one-shot attention search model. Measured with Charades classification (mAP, higher is better), trained from scratch for 50K iterations.
+
+| CNN model | base | + object + attention |
+| --- | --- | --- |
+| RGB R(2+1)D | 36.51 | 45.30 |
+| Two-stream R(2+1)D | 39.93 | 47.74 |
+| AssembleNet | 47.18 | 53.48 |
+| AssembleNet++ | 47.62 | 56.38 |
+
+# 5 Conclusion
+
+We present AssembleNet++, a family of novel video models designed to learn interactions between the object modality input and the other raw inputs. We propose connectivity search to fuse the new object input into the model, and introduce the concept of peer-attention to best capture the interplay between different modality representations. Peer-attention generalizes previous channel-wise self-attention by allowing the attention weights to be computed from other intermediate representations. An efficient differentiable one-shot attention search model is proposed to optimize the attention connectivity. Experimental results confirm that (i) our approach appropriately takes advantage of the object modality input (by consistently learning connectivity to it) and that (ii) our searched peer-attention greatly benefits the final recognition. The method outperforms all existing approaches on two very challenging video datasets of daily human activities. Furthermore, we confirm that the proposed approach and strategy are not specific to one particular model but are generally applicable to different video CNN models, improving their performance notably.
+
+# References
+
+1. Ahmed, K., Torresani, L.: Connectivity learning in multi-branch networks. In: Workshop on Meta-Learning (MetaLearn), NeurIPS (2017)
+2. Baradel, F., Neverova, N., Wolf, C., Mille, J., Mori, G.: Object level visual reasoning in videos. In: Proceedings of European Conference on Computer Vision (ECCV) (2018)
+3. Bender, G., Kindermans, P.J., Zoph, B., Vasudevan, V., Le, Q.: Understanding and simplifying one-shot architecture search. In: International Conference on Machine Learning (ICML) (2018)
+4. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+5. Das, S., Dai, R., Koperski, M., Minciullo, L., Garattoni, L., Bremond, F., Francesca, G.: Toyota smarthome: Real-world activities of daily living. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+6. Diba, A., Fayyaz, M., Sharma, V., Paluri, M., Gall, J., Stiefelhagen, R., Gool, L.V.: Holistic large scale video understanding. arXiv preprint arXiv:1904.11451 (2019)
+7. Fan, L., Huang, W., Gan, C., Ermon, S., Gong, B., Huang, J.: End-to-end learning of motion representation for video understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+8. Feichtenhofer, C., Pinz, A., Wildes, R.: Spatiotemporal residual networks for video action recognition. In: Advances in Neural Information Processing Systems (NeurIPS) (2016)
+9. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+10. Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., Lu, H.: Dual attention network for scene segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+11. Girdhar, R., Ramanan, D., Gupta, A., Sivic, J., Russell, B.: Actionvlad: Learning spatio-temporal aggregation for action classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 971-980 (2017)
+12. Hara, K., Kataoka, H., Satoh, Y.: Learning spatio-temporal features with 3d residual networks for action recognition. In: Proceedings of the ICCV Workshop on Action, Gesture, and Emotion Recognition. vol. 2, p. 4 (2017)
+13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
+14. Hu, J., Shen, L., Albanie, S., Sun, G., Wu, E.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+15. Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., Liu, W.: Ccnet: Criss-cross attention for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+16. Ji, J., Krishna, R., Fei-Fei, L., Niebles, J.C.: Action genome: Actions as composition of spatio-temporal scene graphs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
+
+17. Ji, S., Xu, W., Yang, M., Yu, K.: 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1), 221-231 (2013)
+18. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017)
+19. Li, X., Zhong, Z., Wu, J., Yang, Y., Lin, Z., Liu, H.: Expectation-maximization attention networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+20. Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L.J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search. In: Proceedings of European Conference on Computer Vision (ECCV) (2018)
+21. Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. In: International Conference on Learning Representations (ICLR) (2019)
+22. Long, X., Gan, C., de Melo, G., Wu, J., Liu, X., Wen, S.: Attention clusters: Purely attention based local feature integration for video classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7834-7843 (2018)
+23. Loshchilov, I., Hutter, F.: Sgdr: Stochastic gradient descent with warm restarts. In: International Conference on Learning Representations (ICLR) (2017)
+24. Ma, M., Fan, H., Kitani, K.M.: Going deeper into first-person activity recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
+25. Mahasseni, B., Todorovic, S.: Regularizing long short term memory with 3d human-skeleton sequences for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
+26. Moore, D.J., Essa, I.A., Hayes III, M.H.: Exploiting human actions and object context for recognition tasks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (1999)
+27. Nekrasov, V., Chen, H., Shen, C., Reid, I.: Architecture search of dynamic cells for semantic video segmentation. arXiv preprint arXiv:1904.02371 (2019)
+28. Piergiovanni, A., Angelova, A., Toshev, A., Ryoo, M.S.: Evolving space-time neural architectures for videos. Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2019)
+29. Piergiovanni, A., Fan, C., Ryoo, M.S.: Learning latent sub-events in activity videos using temporal attention filters. In: Proceedings of AAAI Conference on Artificial Intelligence (AAAI) (2017)
+30. Piergiovanni, A., Ryoo, M.S.: Representation flow for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+31. Qiu, Z., Yao, T., Mei, T.: Learning spatio-temporal representation with pseudo-3d residual networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). pp. 5533-5541 (2017)
+32. Ray, J., Wang, H., Tran, D., Wang, Y., Feiszli, M., Torresani, L., Paluri, M.: Scenes-objects-actions: A multi-task, multilabel video dataset. In: Proceedings of European Conference on Computer Vision (ECCV) (2018)
+33. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: Proceedings of AAAI Conference on Artificial Intelligence (AAAI) (2019)
+
+34. Ryoo, M., Piergiovanni, A., Tan, M., Angelova, A.: AssembleNet: Searching for multi-stream neural connectivity in video architectures. In: International Conference on Learning Representations (ICLR) (2020)
+35. Sigurdsson, G.A., Divvala, S., Farhadi, A., Gupta, A.: Asynchronous temporal fields for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+36. Sigurdsson, G.A., Gupta, A., Schmid, C., Farhadi, A., Alahari, K.: Charades-ego: A large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626 (2018)
+37. Sigurdsson, G.A., Russakovsky, O., Gupta, A.: What actions are needed for understanding human actions in videos? In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2017)
+38. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Advances in Neural Information Processing Systems (NeurIPS). pp. 568-576 (2014)
+39. Tan, M., Le, Q.: Efficientnet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning (ICML) (2019)
+40. Taylor, G.W., Fergus, R., LeCun, Y., Bregler, C.: Convolutional learning of spatiotemporal features. In: Proceedings of European Conference on Computer Vision (ECCV) (2010)
+41. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3d convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV) (2015)
+42. Tran, D., Bourdev, L.D., Fergus, R., Torresani, L., Paluri, M.: C3d: generic features for video analysis. CoRR, abs/1412.0767 2(7), 8 (2014)
+43. Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6450-6459 (2018)
+44. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems (NeurIPS) (2017)
+45. Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Action recognition by dense trajectories. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3169-3176. IEEE (2011)
+46. Wang, L., Li, W., Li, W., Gool, L.V.: Appearance-and-relation networks for video classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+47. Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., Hu, Q.: Eca-net: Efficient channel attention for deep convolutional neural networks. arXiv:1910.03151 (2019)
+48. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7794-7803 (2018)
+49. Wang, X., Gupta, A.: Videos as space-time region graphs. In: Proceedings of European Conference on Computer Vision (ECCV). pp. 399-417 (2018)
+50. Woo, S., Park, J., Lee, J.Y., Kweon, I.S.: Cbam: Convolutional block attention module. In: Proceedings of European Conference on Computer Vision (ECCV) (2018)
+51. Wu, C.Y., Feichtenhofer, C., Fan, H., He, K., Krahenbuhl, P., Girshick, R.: Long-term feature banks for detailed video understanding. arXiv preprint arXiv:1812.05038 (2018)
+
+52. Wu, C.Y., Zaheer, M., Hu, H., Manmatha, R., Smola, A.J., Krahenbuhl, P.: Compressed video action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6026-6035 (2018)
+53. Xie, S., Kirillov, A., Girshick, R., He, K.: Exploring randomly wired neural networks for image recognition. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). pp. 1284-1293 (2019)
+54. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Proceedings of European Conference on Computer Vision (ECCV). pp. 305-321 (2018)
+55. Zach, C., Pock, T., Bischof, H.: A duality based approach for realtime tv-1 optical flow. In: Joint Pattern Recognition Symposium. pp. 214-223. Springer (2007)
+56. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Proceedings of European Conference on Computer Vision (ECCV). pp. 803-818 (2018)
+57. Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., Torralba, A.: Scene parsing through ade20k dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+58. Zoph, B., Le, Q.: Neural architecture search with reinforcement learning. In: International Conference on Learning Representations (ICLR) (2017)
+59. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
\ No newline at end of file
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/images.zip b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7cca5b76e397c9be1cec1dfee098896d866a1bb5
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc7e47e0f38481f9bde707692a88314f47580272e25a633c34af55589d688096
+size 382943
diff --git a/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/layout.json b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cc03b6a8a4f6bc127ea680afe34549fde273b6f3
--- /dev/null
+++ b/assemblenetassemblingmodalityrepresentationsviaattentionconnectionssupplementarymaterial/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ff08da5e0c9a213872dab265ebcfcd87467c22f24d6116415ec1d09a6414ea9
+size 411716
diff --git a/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_content_list.json b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..55a2f810efc01bcc13b255353b4e39961810d952
--- /dev/null
+++ b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f48060728582b943142b0fdb15617c5c19a00bef98d82e2e4f128f67dcfb366c
+size 76173
diff --git a/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_model.json b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c28c18340459649ca0b0bc4e5bb5fab74b986b53
--- /dev/null
+++ b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:035d98c731febddb107725957d51ce204f585de5d0c2efad8b815178e4a59645
+size 97056
diff --git a/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_origin.pdf b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6bbe5b4131361ac59660364e4606f144945f47b2
--- /dev/null
+++ b/associative3dvolumetricreconstructionfromsparseviews/1083ea34-697a-46d4-bbb6-48595ef5ee1a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:892da01764d3287a935694e135626a9695c5af7a708a15677ea9b61927d7af17
+size 21972048
diff --git a/associative3dvolumetricreconstructionfromsparseviews/full.md b/associative3dvolumetricreconstructionfromsparseviews/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e07d02a56a21e4bc5d66b0b2aae83ff42e846029
--- /dev/null
+++ b/associative3dvolumetricreconstructionfromsparseviews/full.md
@@ -0,0 +1,292 @@
+# Associative3D: Volumetric Reconstruction from Sparse Views
+
+Shengyi Qian\*[0000-0003-0262-2412], Linyi Jin\*[0000-0002-0841-6970], and David F. Fouhey[0000-0001-5028-5161]
+
+University of Michigan {syqian,jinlinyi,fouhey}@umich.edu
+
+Abstract. This paper studies the problem of 3D volumetric reconstruction from two views of a scene with an unknown camera. While seemingly easy for humans, this problem poses many challenges for computers since it requires simultaneously reconstructing objects in the two views while also figuring out their relationship. We propose a new approach that estimates reconstructions, distributions over the camera/object and camera/camera transformations, as well as an inter-view object affinity matrix. This information is then jointly reasoned over to produce the most likely explanation of the scene. We train and test our approach on a dataset of indoor scenes, and rigorously evaluate the merits of our joint reasoning approach. Our experiments show that it is able to recover reasonable scenes from sparse views, while the problem is still challenging. Project site: https://jasonqsy.github.io/Associative3D.
+
+Keywords: 3D Reconstruction
+
+
+Fig. 1. Given two views from unknown cameras, we aim to extract a coherent 3D space in terms of a set of volumetric objects placed in the scene. We represent the scene with a factored representation [49] that splits the scene into per-object voxel grids with a scale and pose.
+
+# 1 Introduction
+
+How would you make sense of the scene in Fig. 1? After rapidly understanding the individual pictures, one can fairly quickly attempt to match the objects in
+
+each: the TV on the left in image A must go with the TV on the right in image B, and similarly with the couch. Therefore, the two chairs, while similar, are not actually the same object. Having pieced this together, we can then reason that the two images depict the same scene, but with a $180^{\circ}$ change of view, and infer the 3D structure of the scene. Humans have an amazing ability to reason about the 3D structure of scenes, even with as few as two sparse views with an unknown relationship. We routinely use this ability to understand images taken at an event, look for a new apartment on the Internet, or evaluate possible hotels (e.g., for ECCV). The goal of this paper is to give the same ability to computers.
+
+Unfortunately, current techniques are not up to the challenge of volumetric reconstruction from two views with unknown cameras: this task requires both reconstruction and pose estimation. Classic methods based on correspondence [20,9] require many more views in practice and cannot make inferences about unseen parts of the scene (i.e., what the chair looks like from behind), since this requires some form of learning. While there has been success in learning-based techniques for this sort of object reconstruction [7,17,49,27], it is unknown how to reliably stitch a set of reconstructions together into a single coherent story. There are certainly systems that can identify pose with respect to a fixed scene [26] or a pair of views [15]; these approaches, however, cannot reconstruct.
+
+This paper presents a learning-based approach to this problem, whose results are shown in Fig. 1. The system takes two views with an unknown relationship and produces a 3D scene reconstruction for both images jointly. This reconstruction comprises a set of per-object reconstructions rigidly placed in the scene with a pose, as in [49,27,30]. Since the 3D scene reconstruction is the union of the posed objects, getting the 3D scene reconstruction correct requires both getting the 3D object reconstructions right and correctly identifying the 3D object poses. Our key insight is that jointly reasoning about objects and poses improves the results. Our method, described in Section 3, predicts evidence including: (a) voxel reconstructions for each object; (b) distributions over rigid-body transformations between cameras and objects; and (c) an inter-object affinity for stitching. Given this evidence, our system stitches the pieces together to find the most likely reconstruction. As we empirically demonstrate in Section 4, this joint reasoning is crucial: understanding each image independently and then estimating a relative pose performs substantially worse than our approach. Our experiments are conducted on a challenging and large dataset of indoor scenes. We also show some common failure modes and demonstrate transfer to the NYUv2 dataset [44].
+
+Our primary contributions are: (1) introducing a novel problem, volumetric scene reconstruction from two unknown sparse views; (2) learning an inter-view object affinity to find correspondences between images; (3) a joint system, including the stitching stage, that performs better than simply combining individual components.
+
+# 2 Related Work
+
+The goal of this work is to take two views from cameras related by an unknown transformation and produce a single volumetric understanding of the scene. This touches on a number of important problems in computer vision, ranging from estimating the pose of objects and cameras, to the full shape of objects, to correspondence across views. Our approach deliberately builds heavily on these works and, as we show empirically, our success depends crucially on their fusion.
+
+This problem poses severe challenges for classic correspondence-based approaches [20]. From a purely geometric perspective, we are totally out of luck: even if we can identify the position of the camera via epipolar geometry and wide baseline stereo [39,36], we have no correspondence for most objects in Fig. 1 that would permit depth given known baseline, let alone another view that would help lead to the understanding of the full shape of the chair.
+
+Recent work has tackled this full volumetric reconstruction via learning. Learning-based 3D has made significant progress recently, including 2.5D representations [14,51,5,29], single-object reconstruction [52,55,19,41,8], and scene understanding [6,23,33,32,12]. In particular, researchers have developed increasingly detailed volumetric reconstructions, beginning with objects [7,17,18] and then moving to scenes [49,27,30,37] as compositions of object reconstructions posed with respect to the camera. Focusing on full volumetric reconstruction, our approach builds on this progression and creates an understanding built upon jointly reasoning over parses of two scenes, affinities, and relative poses; as we empirically show, this improves results. Of these works, we are most inspired by Kulkarni et al. [27], which also reasons over a series of relative poses; our work builds on this as a base inference unit and handles multiple images. We note that while we build on a particular approach to scenes [27] and objects [17], our approach is general.
+
+While much of this reconstruction work is single-image, some is multiview, although usually in the case of an isolated object [25,7,24] or with hundreds of views [22]. Our work aims at the particular setting of as few as two views, and reasons over multiple objects. While traditional local features [34] are insufficient to support reasoning over objects, semantic features are useful [13,50,2].
+
+At the same time, there has been considerable progress in identifying the relative pose from images [35,26,15,1], RGB-D Scans [53,54] or video sequences [57,42,46]. Of these, our work is most related to learning-based approaches to identifying relative pose from RGB images, and semantic Structure-from-Motion [1] and SLAM [42], which make use of semantic elements to improve the estimation of camera pose. We build upon this work in our approach, especially work like RPNet [15] that directly predicts relative pose, although we do so with a regression-by-classification formulation that provides uncertainty. As we show empirically, propagating this uncertainty forward lets us reason about objects and produce superior results to only focusing on pose.
+
+# 3 Approach
+
+The goal of the system is to map a pair of sparse views of a room to a full 3D reconstruction. As input, we assume a pair of images of a room. As output, we produce a set of objects represented as voxels, which are rigidly transformed and
+
+
+Fig. 2. Our approach. We pass the two RGB image inputs into two branches that extract evidence, which is then fused together to stitch a final result. Our first network, object branch, is a detection network following [27] that produces a set of objects in terms of voxels and a transformation into the scene. We also predict an object embedding which we can use to form an affinity matrix between objects across images. Our second network, camera branch, is a siamese network that predicts a distribution over translations and rotations between the cameras. Finally our stitching stage examines the evidence from the networks and produces a final prediction.
+
+anisotropically scaled into the scene in a single coordinate frame. We achieve this with an approach, summarized in Fig. 2, that consists of three main parts: an object branch, a camera branch, and a stitching stage.
+
+The output space is a factored representation of a 3D scene, similar to [49,27,30]. Specifically, in contrast to using a single voxel grid or mesh, the scene is represented as a set of per-object voxel grids with a scale and pose that place them in the scene. These can be converted to a single 3D reconstruction by taking their union, and so the 3D reconstruction can be improved either by improving a per-object voxel grid or by improving its placement in the scene.
+
+The first two parts of our approach are two neural networks. An object branch examines each image, detects objects, and produces single-view 3D reconstructions in the camera's coordinate frame, as well as a per-object embedding that helps find the object in the other image. At the same time, a camera branch predicts the relative pose between images, represented as a distribution over a discrete set of rigid transformations between the cameras. These networks are trained separately to minimize complexity.
+
+The final step, a stitching stage, combines these together. The output of the two networks gives: a collection of objects per image in the image's coordinate frame; a cross-image affinity which predicts object correspondence in two views; and a set of likely transformations from one camera to the other. The stitching stage aims to select a final set of predictions minimizing an objective function that aims to ensure that similar objects are in the same location, the camera pose is likely, etc. Unlike the first two stages, this is an optimization rather than a feedforward network.
+
+# 3.1 Object Branch
+
+The goal of our object branch is to take an image and produce a set of reconstructed objects in the camera's coordinate frame as well as an embedding that lets us match across views. We achieve this by extending 3D-RelNet [27], adjusting it as little as possible to ensure fair comparisons. We refer the reader to [27,49] for a fuller explanation, but briefly, these networks act akin to an object detector like Faster-RCNN [40] with additional outputs. As input, 3D-RelNet takes an image and a set of 2D bounding box proposals, and maps the image through convolutional layers to a feature map, from which it extracts per-bounding-box convolutional features. These features pass through fully connected layers to predict: a detection score (to suppress bad proposals), voxels (to represent the object), and a transformation to the world frame (represented by rotation, scale, and translation and calculated via both per-object and pairwise poses). We extend this to also produce an n-dimensional embedding $\mathbf{e} \in \mathbb{R}^n$ on the unit sphere (i.e., $||\mathbf{e}||_2^2 = 1$ ) that helps associate objects across images.
+
+We use and train the embedding by creating a cross-image affinity matrix between objects. Suppose the first and second images have $N$ and $M$ objects each with embeddings $\mathbf{e}_i$ and $\mathbf{e}_j^\prime$ respectively. We then define our affinity matrix $\mathbf{A} \in \mathbb{R}^{N \times M}$ as
+
+$$
+\mathbf {A} _ {i, j} = \sigma \left(k \mathbf {e} _ {i} ^ {T} \mathbf {e} _ {j} ^ {\prime}\right) \tag {1}
+$$
+
+where $\sigma$ is the sigmoid/logistic function and where $k = 5$ scales the output. Ideally, $A_{i,j}$ should indicate whether objects $i$ and $j$ are the same object seen from a different view, where $A_{i,j}$ is high if this is true and low otherwise.
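As a concrete illustration, Eq. (1) can be sketched in a few lines of NumPy; the function name and toy embeddings below are ours, not part of any released code:

```python
import numpy as np

def affinity_matrix(E1, E2, k=5.0):
    """Cross-image affinity (Eq. 1): sigmoid of scaled dot products
    between unit-norm embeddings.

    E1: (N, n) embeddings for image 1, rows unit-normalized.
    E2: (M, n) embeddings for image 2.
    Returns A: (N, M) with entries in (0, 1).
    """
    logits = k * E1 @ E2.T                 # k * e_i^T e_j'
    return 1.0 / (1.0 + np.exp(-logits))   # elementwise sigmoid

# Toy usage: an identical embedding yields high affinity,
# an orthogonal one yields 0.5 (the sigmoid of zero).
E1 = np.array([[1.0, 0.0], [0.0, 1.0]])
E2 = np.array([[1.0, 0.0]])
A = affinity_matrix(E1, E2)
```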
+
+We train this embedding network using ground-truth bounding box proposals so that we can easily calculate a ground-truth affinity matrix $\hat{\mathbf{A}}$ . We then minimize $L_{\mathrm{aff}}$ , a balanced mean-square loss between $\mathbf{A}$ and $\hat{\mathbf{A}}$ : if all positive labels are $(i,j)\in \mathcal{P}$ , and all negative labels are $(i,j)\in \mathcal{N}$ , then the loss is
+
+$$
+L _ {\text {a f f}} = \frac {1}{| \mathcal {P} |} \sum_ {(i, j) \in \mathcal {P}} \left(A _ {i j} - \hat {A} _ {i j}\right) ^ {2} + \frac {1}{| \mathcal {N} |} \sum_ {(i, j) \in \mathcal {N}} \left(A _ {i j} - \hat {A} _ {i j}\right) ^ {2}. \tag {2}
+$$
+
+This loss balances positive and negative labels, since the affinity labels are imbalanced (most pairs do not correspond).
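A minimal NumPy sketch of this balanced loss, assuming a binary ground-truth matrix (the function name is ours):

```python
import numpy as np

def balanced_affinity_loss(A, A_gt):
    """Balanced MSE of Eq. (2): squared errors averaged separately over
    positive and negative ground-truth pairs, then summed, so that the
    (rare) positives are not swamped by the negatives."""
    pos = A_gt == 1
    neg = ~pos
    sq = (A - A_gt) ** 2
    loss = 0.0
    if pos.any():
        loss += sq[pos].mean()   # (1/|P|) * sum over positive pairs
    if neg.any():
        loss += sq[neg].mean()   # (1/|N|) * sum over negative pairs
    return loss
```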
+
+# 3.2 Camera Branch
+
+Our camera branch aims to identify or narrow down the possible relationship between the two images. We approach this by building a siamese network [3] that predicts the relative camera pose $T_{c}$ between the two images. We use ResNet-50 [21] to extract features from two input images. We concatenate the output features and then use two linear layers to predict the translation and rotation.
+
+We formulate prediction of rotation and translation as a classification problem to help manage the uncertainty in the problem. We found that propagating uncertainty (via top predictions) was helpful: a single feedforward network suggests likely rotations and a subsequent stage can make a more detailed assessment in light of the object branch's predictions. Additionally, even if we care
+
+about only one output, we found regression-by-classification to be helpful since the output tended to have multiple modes (e.g., being fairly certain of the rotation modulo $90^{\circ}$ by recognizing that both images depict a cuboidal room). Regression tends to split the difference, producing predictions which satisfy neither mode, while classification picks one, as observed in [49,28].
+
+We cluster the rotation and translation vectors into 30 and 60 bins respectively, and predict two multinomial distributions over them. We then minimize the cross-entropy loss. At test time, we select the Cartesian product of the top 3 most likely rotation bins and the top 10 most likely translation bins as the final prediction; these are treated as proposals in the next section.
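The proposal step can be sketched as follows, assuming the rotation and translation distributions are treated independently (the paper uses 30 rotation and 60 translation bins; the toy distributions and function name below are ours):

```python
import itertools
import numpy as np

def pose_proposals(rot_probs, trans_probs, top_r=3, top_t=10):
    """Cartesian product of the top rotation and translation bins,
    each proposal paired with its joint probability."""
    top_rot = np.argsort(rot_probs)[::-1][:top_r]      # most likely rotation bins
    top_trans = np.argsort(trans_probs)[::-1][:top_t]  # most likely translation bins
    props = [(r, t, rot_probs[r] * trans_probs[t])
             for r, t in itertools.product(top_rot, top_trans)]
    # Most likely hypotheses first.
    return sorted(props, key=lambda p: -p[2])

# Toy usage with 4 rotation bins and 3 translation bins.
rot_probs = np.array([0.7, 0.2, 0.05, 0.05])
trans_probs = np.array([0.5, 0.3, 0.2])
props = pose_proposals(rot_probs, trans_probs)
```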
+
+# 3.3 Stitching Object and Camera Branches
+
+Once we have run the object and camera branches, our goal is to then produce a single stitched result. As input to this step, our object branch gives: for view 1, with $N$ objects, the voxels $V_{1},\ldots ,V_{N}$ and transformations $T_{1},\ldots ,T_{N}$ ; and similarly, for $M$ objects in view 2, the voxels $V_{1}^{\prime},\ldots ,V_{M}^{\prime}$ and transformations $T_1^\prime ,\dots ,T_M^\prime$ ; and a cross-view affinity matrix $\mathbf{A}\in [0,1]^{N\times M}$ . Additionally, we have a set of potential camera transformations $P_{1},\ldots ,P_{F}$ between two views.
+
+The goal of this section is to integrate this evidence to find a final cross-camera pose $P$ and correspondence $\mathbf{C} \in \{0,1\}^{N \times M}$ from view 1 to view 2. This correspondence is one-to-one, with the option to leave an object unmatched (i.e., $\mathbf{C}_{i,j} = 1$ if and only if $i$ and $j$ are in correspondence, and for all $i, \sum_{j} \mathbf{C}_{i,j} \leq 1$ , and similarly for $\mathbf{C}^T$ ).
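The constraint on $\mathbf{C}$ can be checked directly; a minimal sketch (the helper name is ours):

```python
import numpy as np

def is_valid_correspondence(C):
    """Check the one-to-one-with-abstention constraint on a binary
    correspondence matrix C: every object in each view is matched to at
    most one object in the other view (rows and columns each sum to <= 1)."""
    C = np.asarray(C)
    return bool((C.sum(axis=0) <= 1).all() and (C.sum(axis=1) <= 1).all())
```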
+
+We cast this as a minimization problem over $P$ and $\mathbf{C}$ including terms in the objective function that incorporate the above evidence. The cornerstone term is one that integrates all the evidence to examine the quality of the stitch, akin to trying and seeing how well things match up under a camera hypothesis. We implement this by computing the distance $\mathcal{L}_D$ between corresponding object voxels according to $\mathbf{C}$ , once the transformations are applied, or:
+
+$$
+\mathcal {L} _ {D} = \frac {1}{| \mathbf {C} | _ {1}} \sum_ {(i, j) \,\text {s.t.}\, \mathbf {C} _ {i, j} = 1} D \left(P \left(T _ {i} \left(V _ {i}\right)\right), T _ {j} ^ {\prime} \left(V _ {j} ^ {\prime}\right)\right). \tag {3}
+$$
+
+Here, $D$ is the chamfer distance between points on the edges of each shape, as defined in [38,43], or for two point clouds $X$ and $Y$ :
+
+$$
+D (X, Y) = \frac {1}{| X |} \sum_ {x \in X} \min _ {y \in Y} \| x - y \| _ {2} ^ {2} + \frac {1}{| Y |} \sum_ {y \in Y} \min _ {x \in X} \| x - y \| _ {2} ^ {2}. \tag {4}
+$$
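Eq. (4) admits a direct NumPy implementation for small point clouds; this is a sketch (a KD-tree would be preferable at scale, and the function name is ours):

```python
import numpy as np

def chamfer_distance(X, Y):
    """Symmetric chamfer distance of Eq. (4) between point clouds
    X: (n, 3) and Y: (m, 3), using squared Euclidean distances."""
    # Pairwise squared distances, shape (n, m).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    # Nearest-neighbor term in each direction, averaged per cloud.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```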
+
+Additionally, we have terms that reward making $\mathbf{C}$ likely according to our object and image networks: the sum of dissimilarities between corresponding objects according to the affinity matrix $\mathbf{A}$ , $\mathcal{L}_S = \sum_{(i,j),\mathbf{C}_{i,j} = 1}(1 - A_{i,j})$ ; as well as the improbability of the camera pose transformation $P$ under the image network, $\mathcal{L}_P = 1 - Pr(P)$ . Finally, to preclude trivial solutions, we include a term penalizing un-matched objects, $\mathcal{L}_U = \min (M,N) - |\mathbf{C}|_1$ . In total, our objective function is the sum of these terms, or:
+
+$$
+\min _ {P, \mathbf {C}} \quad \mathcal {L} _ {D} + \lambda_ {P} \mathcal {L} _ {P} + \lambda_ {S} \mathcal {L} _ {S} + \lambda_ {U} \mathcal {L} _ {U}. \tag {5}
+$$
+
+The search space is intractably large, so we optimize the objective function by a RANSAC-like search over the top hypotheses for $P$ and feasible object correspondences. For each top hypothesis of $P$ , we randomly sample $K$ object correspondence proposals; here we use $K = 128$ . This is generally sufficient since a correspondence between two objects is feasible only if their similarity according to the affinity matrix exceeds a threshold. We use random search over object correspondences because the search space grows factorially with the number of objects in correspondence. Once complete, we average the translation and scale, and randomly pick one rotation and shape from corresponding objects. Averaging performs poorly for rotation since there are typically multiple rotation modes that cannot be averaged: a symmetric table is correct at either $0^{\circ}$ or $180^{\circ}$ but not at $90^{\circ}$ . Averaging voxel grids does not make sense since objects may be only partially observed. We therefore pick one mode at random for rotation and shape. Details are available in the appendix.
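The search loop can be sketched schematically as follows; the two callables stand in for the components described above (a correspondence sampler restricted to feasible pairs, and an evaluator of the Eq. (5) objective), and the function name is ours:

```python
import random

def stitch(pose_hypotheses, sample_correspondences, objective, K=128, seed=0):
    """RANSAC-like search (Sec. 3.3 sketch): for each top camera-pose
    hypothesis P, draw K feasible correspondence samples and keep the
    (P, C) pair minimizing the total objective of Eq. (5).

    sample_correspondences(rng) -> one candidate one-to-one matching C
    objective(P, C) -> scalar L_D + lam_P*L_P + lam_S*L_S + lam_U*L_U
    """
    rng = random.Random(seed)
    best = (float("inf"), None, None)
    for P in pose_hypotheses:
        for _ in range(K):
            C = sample_correspondences(rng)
            loss = objective(P, C)
            if loss < best[0]:
                best = (loss, P, C)
    return best  # (loss, pose, correspondence)
```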
+
+# 4 Experiments
+
+We now describe a set of experiments that aim to address the following questions: (1) how well does the proposed method work and are there simpler approaches that would solve the problem? and (2) how does the method solve the problem? We first address question (1) by evaluating the proposed approach compared to alternate approaches both qualitatively and by evaluating the full reconstruction quantitatively. We then address question (2) by evaluating individual components of the system. We focus on what the affinity matrix learns and whether the stitching stage can jointly improve object correspondence and relative camera pose estimation. Throughout, we test our approach on the SUNCG dataset [45,56], following previous work [49,27,30,53,56]. To demonstrate transfer to other data, we also show qualitative results on NYUv2 [44].
+
+# 4.1 Experimental Setup
+
+We train and evaluate extensively on SUNCG [45] since it provides 3D scene ground truth, including voxel representations of objects. There are realistic datasets such as ScanNet [10] and Matterport3D [4], but they only provide non-watertight meshes, and producing filled object voxel representations from non-watertight meshes remains an open problem. Similarly, Pix3D [47] aligns IKEA furniture models with images, but not all objects are labeled.
+
+Datasets. We follow the $70\% / 10\% / 20\%$ training, validation and test split of houses from [27]. For each house, we randomly sample up to ten rooms; for
+
+
+Fig. 3. Qualitative results on the SUNCG test set [45]. The final 3D predictions are shown in three different camera poses: (1) the same camera as image 1; (2) the same camera as image 2; (3) a bird's-eye view showing all the objects in the whole scene. In the prediction, red/orange objects are from the left image, blue objects are from the right image, and green/yellow objects are stitched.
+
+each room, we randomly sample one pair of views. Furthermore, we filter the validation and test set: we eliminate pairs where there is no overlapping object between views, and pairs in which all of one image's objects are in the other view (i.e., one is a proper subset of the other). We do not filter the training set since learning relative pose requires a large and diverse training set. Overall, we have 247532/1970/2964 image pairs for training, validation and testing, respectively. Following [49], we use six object classes - bed, chair, desk, sofa, table and tv.
+
+Full-Scene Evaluation: Our output is a full-scene reconstruction, represented as a set of per-object voxel grids that are posed and scaled in the scene. A scene prediction can be totally wrong if one of the objects has the correct shape while its translation is off by 2 meters. Therefore, we quantify performance by treating the problem as a 3D detection problem in which we predict a series of 3D boxes and voxel grids. This lets us evaluate which aspects of the problem currently hold methods back. Similar to [27], for each object, we define error metrics as follows:
+
+- Translation (t): Euclidean distance, or $\delta_t = ||t - \hat{t}||_2$ , thresholded at $\delta_t = 1\mathrm{m}$ .
+- Scale (s): Average log difference in scaling factors, or $\delta_s = \frac{1}{3}\sum_{i=1}^{3}|\log_2(s_1^i) - \log_2(s_2^i)|$ , thresholded at $\delta_s = 0.2$ .
+- Rotation $(\mathbf{R})$ : Geodesic rotation distance, or $\delta_q = (2)^{-1/2} || \log (\mathbf{R}^T \hat{\mathbf{R}}) ||_F$ , thresholded at $\delta_q = 30^\circ$ .
+- Shape $(\mathbf{V})$ : Following [48], we use F-score@0.05 to measure the difference between prediction and ground truth, thresholded at $\delta_V = 0.25$ .
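The first three per-object metrics can be sketched as follows; note that for rotation matrices the trace form used here is equivalent to the matrix-logarithm form above, since $||\log (\mathbf{R}^T \hat{\mathbf{R}})||_F = \sqrt{2}\,\theta$ for relative angle $\theta$ (the function name is ours):

```python
import numpy as np

def pose_errors(t, t_hat, s, s_hat, R, R_hat):
    """Per-object error metrics (Sec. 4.1 sketch): translation distance,
    average log-scale difference, and geodesic rotation distance (degrees)."""
    d_t = np.linalg.norm(t - t_hat)                     # Euclidean, meters
    d_s = np.mean(np.abs(np.log2(s) - np.log2(s_hat)))  # mean |log2 s1 - log2 s2|
    # Geodesic distance = angle of the relative rotation R^T R_hat,
    # recovered from its trace and clipped for numerical safety.
    cos = np.clip((np.trace(R.T @ R_hat) - 1.0) / 2.0, -1.0, 1.0)
    d_q = np.degrees(np.arccos(cos))
    return d_t, d_s, d_q
```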
+
+
+Fig. 4. Comparison between Associative3D and alternative approaches. Row 1: Associative3D fixes the incorrect top-1 relative camera pose in light of a single bed in the room. Row 2: NMS works when the relative camera pose is accurate. Row 3: Associative3D outperforms all alternative approaches in finding correspondence in object clutter.
+
+A prediction is a true positive only if all errors are lower than our thresholds. We calculate the precision-recall curve based on that and report average precision (AP). We also report AP for each single error metric.
+
+Baselines. Since there is no prior work on this task, our experiments compare to ablations and alternate forms of our method. We use the following baseline methods, each of which tests a concrete hypothesis. (Feedforward): This method uses the object branch to recover single-view 3D scenes, and our camera branch to estimate the relative pose between different views. We ignore the affinity matrix and pick the top-1 relative pose predicted by the camera branch. There can be many duplicate objects in the output of this approach. This tests if a simple feedforward method is sufficient. (NMS): In addition to the feedforward approach, we perform non-maximum suppression on the final predictions. If two objects are close to each other, we merge them. This tests if a simple policy to merge objects would work. (Raw Affinity): Here, we use the predicted affinity matrix to merge objects based on top-1 similarity from the affinity matrix. This tests whether our stitching stage is necessary. (Associative3D): This is our complete method. We optimize the objective function by searching possible rotations, translations and object correspondence.
+
+# 4.2 Full Scene Evaluation
+
+We begin by evaluating our full scene reconstruction. Our output is a set of per-object voxels that are posed and scaled in the scene. The quality of reconstruction of a single object is decided by both the voxel grids and the object pose.
+
+First, we show qualitative examples from the proposed method in Fig. 3 as well as a comparison with alternate approaches in Fig. 4 on the SUNCG test set. The Feedforward approach tends to have duplicate objects since it does not know object correspondence. However, figuring out the camera pose and
+
+Table 1. We report the average precision (AP) in evaluation of the 3D detection setting. All means a prediction is a true positive only if all of translation, scale, rotation and shape are correct. Shape, Translation, Rotation, and Scale mean a prediction is a true positive when a single error is lower than thresholds. We include results on the whole test set, and top $25\%$ , $50\%$ and $75\%$ examples ranked by single-view predictions.
+
+| Methods | All | Shape | Trans | Rot | Scale | Top 25% All | Top 50% All | Top 75% All |
+|---|---|---|---|---|---|---|---|---|
+| Feedforward | 21.2 | 22.5 | 31.7 | 28.5 | 26.9 | 41.6 | 34.6 | 28.6 |
+| NMS | 21.1 | 23.5 | 31.9 | 29.0 | 27.2 | 42.0 | 34.7 | 28.7 |
+| Raw Affinity | 15.0 | 24.4 | 26.3 | 28.2 | 25.9 | 28.6 | 23.5 | 18.9 |
+| Associative3D | 23.3 | 24.5 | 38.4 | 29.5 | 27.3 | 48.3 | 38.8 | 31.4 |
+
+common objects is a non-trivial task. Raw Affinity does not work well since it may merge objects based on their similarity alone, regardless of possible global conflicts. NMS works when the relative camera pose is accurate, but fails when many objects are close to each other. Instead, Associative3D demonstrates the ability to jointly reason over reconstructions, object pose and camera pose to produce a reasonable explanation of the scene. More qualitative examples are available in the supplementary material.
+
+We then evaluate our proposed approach quantitatively. In a factored representation [49], both object poses and shapes are equally important to the full scene reconstruction. For instance, the voxel reconstruction of a scene may have no overlap with the ground truth even if all the shapes are right but placed in the wrong locations. Therefore, we formulate evaluation as a 3D detection problem, in which a prediction is a true positive only if all of translation, scale, rotation and shape are correct. However, 3D detection is a very strict metric: if the whole scene is slightly off in one aspect, we may have a very low AP even though the predicted scene is still reasonable. We mainly use it to quantify our performance.
+
+Table 1 shows our performance compared with all three baseline methods. Our approach outperforms all of them, which verifies what we see in the qualitative examples. Moreover, the improvement mainly comes from translation: the translation-only AP is around 7 points better than Feedforward. Meanwhile, the improvement of NMS over Feedforward is limited; as we see in the qualitative examples, it cannot work when many objects are close to each other. Finally, Raw Affinity is even worse than Feedforward, since raw affinity may merge objects incorrectly. We discuss why the affinity is informative, yet top-1 similarity is not a good choice, in Sec. 4.3.
+
+We notice our performance gain over Feedforward and NMS is especially large when single-view predictions are reasonable. On the top $25\%$ of examples, on which single-view prediction does a good job, Associative3D outperforms Feedforward and NMS by over 6 points. On the top $50\%$ , the improvement is around 4 points, still significant but slightly lower than on the top $25\%$ . When single-view prediction is bad, our performance gain is limited since Associative3D is built upon it. We discuss this in Sec. 4.5 as failure cases.
+
+Table 2. AUROC and rank correlation between the affinity matrix and category, model, shape, and instance, respectively. Model | Category means the ability of the affinity matrix to distinguish different models given the same category / semantic label.
+
+| | Category | Model \| Category | Shape \| Category | Instance \| Model |
+|---|---|---|---|---|
+| AUROC | 0.92 | 0.73 | - | 0.59 |
+| Correlation | 0.72 | 0.33 | 0.34 | 0.14 |
+
+# 4.3 Inter-view Object Affinity Matrix
+
+We then turn to evaluating how the method works by analyzing individual components. We start with the affinity matrix and study what it learns.
+
+We have three non-mutually exclusive hypotheses: (1) Semantic labels. The affinity is essentially doing object recognition: after detecting the category of the object, it simply matches objects with the same category. (2) Object shapes. The affinity matches objects with similar shapes, since it is constructed from the embedding vectors that are also used to generate shape voxels and the object pose. (3) Correspondence. Ideally, the affinity matrix should give us ground truth correspondence. This is challenging given duplicate objects in the scene; for example, people can have three identical chairs in their office. These hypotheses represent three different levels of what the affinity matrix may learn, and they are not in conflict: learning semantic labels does not mean the affinity learns nothing about shapes.
+
+We study this by examining a large number of pairs of objects and testing the relationship between affinity and known relationships (e.g., categories, model ids) using ground truth bounding boxes. We specifically construct three binary labels (same category, same model, same instance) and a continuous label, shape similarity (namely F-score @ 0.05 [48]). When we evaluate shape similarity, we condition on the category to test if affinity distinguishes between different models of the same category (e.g., chair). Similarly, we condition on the model when we evaluate instance similarity.
+
+We compute two metrics: a binary classification metric that treats the affinity as a predictor of the label as well as a correlation that tests if a monotonic relationship exists between the affinity and the label. For binary classification, we use AUROC to evaluate the performance since it is invariant to class imbalance and has a natural interpretation. For correlation, we compute Spearman's rank correlation coefficient [58] between the affinity predictors and labels. This tests how well the relationship between affinity and each label (e.g., shape overlap) fits a monotonic function (1 is perfect agreement, 0 no agreement).
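Both metrics can be computed without external dependencies; below is a small NumPy sketch (no tie handling, which a full evaluation would need; the function names are ours):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney) identity: the probability
    that a random positive outscores a random negative. Assumes no ties."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]
```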
+
+The results are shown in Table 2. Both the binary classification and the rank correlation show that the affinity matrix is able to distinguish different categories and objects of different shapes, but is sub-optimal in distinguishing the same instance. These results justify our stitching stage, which addresses the problem based on joint reasoning. It also explains why Raw Affinity underperforms all other baselines by a large margin in the full-scene evaluation. Additionally, the
+
+
+Fig. 5. Visualization of the stitching stage. The affinity matrix generates proposals of corresponding objects, and then the stitching stage removes outliers by inferring the most likely explanation of the scene.
+
+ability to distinguish categories and shapes provides important guidance to the stitching stage. For example, a sofa and a bed are similar in 3D shape: it is infeasible to distinguish them by simply looking at the chamfer distance, but they can be distinguished by the affinity matrix.
+
+# 4.4 Stitching Stage
+
+We evaluate the stitching stage by studying two questions: (1) How well can it predict object correspondence? (2) Can it improve relative camera pose estimation? For example, if the top-1 relative pose is incorrect, could the stitching stage fix it by considering common objects in two views?
+
+Object Correspondence. To answer the first question, we begin with qualitative examples in Fig. 5, which illustrate object correspondence before and after the stitching stage. Before our stitching stage, our affinity matrix has generated correspondence proposals based on their similarity. However, there are outliers since the affinity is sub-optimal in distinguishing the same instance. The stitching stage removes these outliers.
+
+We evaluate object correspondence in the same setting as Sec. 4.3. Suppose the first and second images have $N$ and $M$ objects respectively; we then have $N \times M$ pairs. A pair is a positive example if and only if the objects correspond. We use average precision (AP) to measure performance since AP emphasizes the low-recall regime [11,16]. For the $i$ -th object in view 1 and the $j$ -th object in view 2, we produce a confidence score $\gamma A_{ij}$ , where $\gamma = 1$ if the pair is predicted to be corresponding and $\gamma = 0.5$ otherwise. This $\gamma$ term updates the confidence based on the stitching stage to penalize pairs that have a high affinity score but are not corresponding.
+
+We compare Associative3D with three baselines. (All Negative): The prediction is always negative (the most frequent label). This serves as a lower bound. (Affinity): This simply uses the affinity matrix as the confidence. (Affinity Top1): Rather than using the raw affinity matrix, this uses affinity top-1 similarity as the correspondence and the same strategy to decide confidence as Associative3D.
+
+Table 3. Evaluation of object correspondence with and without the stitching stage.
+
+| | All Negative | Affinity | Affinity Top1 | Associative3D |
+|---|---|---|---|---|
+| AP | 10.1 | 38.8 | 49.4 | 60.0 |
+
+Table 4. Evaluation of relative camera pose from the camera branch and as picked by the stitching stage.
+
+| Method | Trans. Median (m) | Trans. Mean (m) | (Err ≤ 1m)% | Rot. Median (°) | Rot. Mean (°) | (Err ≤ 30°)% |
+|---|---|---|---|---|---|---|
+| Top-1 | 1.24 | 1.80 | 41.26 | 6.96 | 29.90 | 77.56 |
+| Associative3D | 0.88 | 1.44 | 54.89 | 6.97 | 29.02 | 78.31 |
+
+Table 3 shows that our stitching stage improves AP by about 10 points compared to using the affinity matrix only as correspondence.
+
+Relative Camera Pose Estimation. We next evaluate the performance of relative camera pose (i.e., camera translation and rotation) estimation and whether the stitching stage improves the relative camera pose jointly. We compare the camera pose picked by the stitching stage against the top-1 camera pose predicted by the camera branch, following the rotation and translation metrics of our full-scene evaluation to measure the error of predicted camera poses. We summarize results in Table 4. There is a substantial improvement in translation, with the percentage of camera poses within $1\mathrm{m}$ of the ground truth boosted from $41.3\%$ to $54.9\%$ . The improvement in rotation is smaller; we believe this is because the network already starts out working well and can exploit the fact that scenes tend to have three orthogonal directions. In conclusion, the stitching stage mainly improves the prediction of camera translation.
+
+# 4.5 Failure Cases
+
+To understand the problem of reconstruction from sparse views better, we identify some representative failure cases and show them in Fig. 6. While our method is able to generate reasonable results on SUNCG, it cannot handle some common failure cases: (1) the image pair is ambiguous; (2) the single-view backbone does not produce reasonable predictions, as we discuss in Sec. 4.2; (3) there are too many similar objects in the scene, and the affinity matrix cannot distinguish them since it is sub-optimal in distinguishing the same instance. Our stitching stage is also limited by the random search over object correspondences: due to the factorial growth of the search space, we cannot search all possible correspondences. The balancing of our sub-losses can also be sensitive.
+
+# 4.6 Results on NYU Dataset
+
+To test generalization, we also test our approach on images from NYUv2 [44]. Our only change is using proposals from Faster-RCNN [40] trained on COCO [31],
+
+
+
+Fig. 6. Representative failure cases on the SUNCG test set [45]. Row 1: The input images are ambiguous. There can be two or three beds in the scene. Row 2: The single-view backbone does not produce a reasonable prediction. Row 3: This is challenging because all chairs are the same.
+Fig. 7. Qualitative results on NYUv2 dataset [44]. Sideview corresponds to the camera view slightly transformed from the image 2 camera position.
+
+since Faster-RCNN trained on SUNCG cannot generalize to NYUv2 well. We do not finetune any models and show qualitative results in Fig. 7. Despite training on synthetic data, our model can often obtain a reasonable interpretation.
+
+# 5 Conclusion
+
+We have presented Associative3D, which explores 3D volumetric reconstruction from sparse views. While the output is reasonable, failure modes indicate the problem is challenging for current techniques. Directions for future work include joint learning of object affinity and relative camera pose, and extending the approach to many views and to more natural datasets than SUNCG.
+
+Acknowledgments We thank Nilesh Kulkarni and Shubham Tulsiani for their help with 3D-RelNet; Zhengyuan Dong for his help with visualization; Tianning Zhu for his help with the video; and Richard Higgins, Dandan Shan, Chris Rockwell and Tongan Cai for their feedback on the draft. Toyota Research Institute ("TRI") provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.
+
+# References
+
+1. Bao, S.Y., Bagra, M., Chao, Y.W., Savarese, S.: Semantic structure from motion with points, regions, and objects. In: CVPR (2012)
+2. Bowman, S.L., Atanasov, N., Daniilidis, K., Pappas, G.J.: Probabilistic data association for semantic slam. In: ICRA (2017)
+3. Bromley, J., Guyon, I., LeCun, Y., Sackinger, E., Shah, R.: Signature verification using a "siamese" time delay neural network. In: Advances in neural information processing systems. pp. 737-744 (1994)
+4. Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158 (2017)
+5. Chen, W., Qian, S., Deng, J.: Learning single-image depth from videos using quality assessment networks. In: CVPR (2019)
+6. Chen, Y., Huang, S., Yuan, T., Qi, S., Zhu, Y., Zhu, S.C.: Holistic++ scene understanding: Single-view 3D holistic scene parsing and human pose estimation with human-object interaction and physical commonsense. In: The IEEE International Conference on Computer Vision (ICCV) (2019)
+7. Choy, C.B., Gwak, J., Savarese, S., Chandraker, M.: Universal correspondence network. In: NeurIPS. pp. 2414-2422 (2016)
+8. Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. In: ECCV (2016)
+9. Crandall, D., Owens, A., Snavely, N., Huttenlocher, D.: SfM with MRFs: Discrete-continuous optimization for large-scale structure from motion. Transactions on Pattern Analysis and Machine Intelligence (PAMI) (2013)
+10. Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: Scannet: Richly-annotated 3d reconstructions of indoor scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5828-5839 (2017)
+11. Davis, J., Goadrich, M.: The relationship between Precision-Recall and ROC curves. In: ICML (2006)
+12. Du, Y., Liu, Z., Basevi, H., Leonardis, A., Freeman, B., Tenenbaum, J., Wu, J.: Learning to exploit stability for 3D scene parsing. In: Advances in Neural Information Processing Systems. pp. 1726-1736 (2018)
+13. Duggal, S., Wang, S., Ma, W.C., Hu, R., Urtasun, R.: DeepPruner: Learning efficient stereo matching via differentiable patchmatch. In: ICCV (2019)
+14. Eigen, D., Fergus, R.: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In: ICCV (2015)
+15. En, S., Lechervy, A., Jurie, F.: RPNet: an end-to-end network for relative camera pose estimation. In: ECCV (2018)
+16. Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The Pascal visual object classes (voc) challenge. International journal of computer vision 88(2), 303-338 (2010)
+17. Girdhar, R., Fouhey, D., Rodriguez, M., Gupta, A.: Learning a predictable and generative vector representation for objects. In: ECCV (2016)
+18. Gkioxari, G., Malik, J., Johnson, J.: Mesh r-cnn. In: ICCV (2019)
+19. Groueix, T., Fisher, M., Kim, V.G., Russell, B., Aubry, M.: AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In: CVPR (2018)
+20. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edn. (2004)
+
+21. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+22. Huang, P.H., Matzen, K., Kopf, J., Ahuja, N., Huang, J.B.: DeepMVS: Learning multi-view stereopsis. In: CVPR (2018)
+23. Huang, S., Qi, S., Zhu, Y., Xiao, Y., Xu, Y., Zhu, S.C.: Holistic 3D scene parsing and reconstruction from a single rgb image. In: ECCV (2018)
+24. Huang, Z., Li, T., Chen, W., Zhao, Y., Xing, J., LeGendre, C., Luo, L., Ma, C., Li, H.: Deep volumetric video from very sparse multi-view performance capture. In: ECCV (2018)
+25. Kar, A., Hane, C., Malik, J.: Learning a multi-view stereo machine. In: Advances in neural information processing systems. pp. 365-376 (2017)
+26. Kendall, A., Grimes, M., Cipolla, R.: Posenet: A convolutional network for real-time 6-dof camera relocalization. In: ICCV (2015)
+27. Kulkarni, N., Misra, I., Tulsiani, S., Gupta, A.: 3D-RelNet: Joint object and relational network for 3D prediction. In: ICCV (2019)
+28. Ladický, L., Zeisl, B., Pollefeys, M.: Discriminatively trained dense surface normal estimation. In: ECCV (2014)
+29. Lasinger, K., Ranftl, R., Schindler, K., Koltun, V.: Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341 (2019)
+30. Li, L., Khan, S., Barnes, N.: Silhouette-assisted 3D object instance reconstruction from a cluttered scene. In: ICCV Workshops (2019)
+31. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV (2014)
+32. Liu, C., Kim, K., Gu, J., Furukawa, Y., Kautz, J.: PlaneRCNN: 3D plane detection and reconstruction from a single image. In: CVPR (2019)
+33. Liu, C., Wu, J., Furukawa, Y.: Floornet: A unified framework for floorplan reconstruction from 3d scans. In: ECCV (2018)
+34. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91-110 (2004)
+35. Melekhov, I., Ylioinas, J., Kannala, J., Rahtu, E.: Relative camera pose estimation using convolutional neural networks. In: International Conference on Advanced Concepts for Intelligent Vision Systems. pp. 675-687. Springer (2017)
+36. Mishkin, D., Perdoch, M., Matas, J.: Mods: Fast and robust method for two-view matching. CVIU 1(141), 81-93 (2015)
+37. Nie, Y., Han, X., Guo, S., Zheng, Y., Chang, J., Zhang, J.J.: Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In: CVPR (2020)
+38. Price, A., Jin, L., Berenson, D.: Inferring occluded geometry improves performance when retrieving an object from dense clutter. International Symposium on Robotics Research (ISRR) (2019)
+39. Pritchett, P., Zisserman, A.: Wide baseline stereo matching. In: ICCV (1998)
+40. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+41. Richter, S.R., Roth, S.: Matryoshka networks: Predicting 3D geometry via nested shape layers. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1936-1944 (2018)
+42. Salas-Moreno, R.F., Newcombe, R.A., Strasdat, H., Kelly, P.H., Davison, A.J.: Slam++: Simultaneous localisation and mapping at the level of objects. In: CVPR (2013)
+
+43. Sharma, G., Goyal, R., Liu, D., Kalogerakis, E., Maji, S.: Csgnet: Neural shape parser for constructive solid geometry. In: CVPR (2018)
+44. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: ECCV (2012)
+45. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. In: CVPR (2017)
+46. Sui, Z., Chang, H., Xu, N., Jenkins, O.C.: Geofusion: Geometric consistency informed scene estimation in dense clutter. arXiv:2003.12610 (2020)
+47. Sun, X., Wu, J., Zhang, X., Zhang, Z., Zhang, C., Xue, T., Tenenbaum, J.B., Freeman, W.T.: Pix3d: Dataset and methods for single-image 3d shape modeling. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2974-2983 (2018)
+48. Tatarchenko, M., Richter, S.R., Ranftl, R., Li, Z., Koltun, V., Brox, T.: What do single-view 3d reconstruction networks learn? In: CVPR (2019)
+49. Tulsiani, S., Gupta, S., Fouhey, D.F., Efros, A.A., Malik, J.: Factoring shape, pose, and layout from the 2D image of a 3D scene. In: CVPR (2018)
+50. Wang, Q., Zhou, X., Daniilidis, K.: Multi-image semantic matching by mining consistent features. In: CVPR (2018)
+51. Wang, X., Fouhey, D., Gupta, A.: Designing deep networks for surface normal estimation. In: CVPR (2015)
+52. Wu, J., Wang, Y., Xue, T., Sun, X., Freeman, B., Tenenbaum, J.: Marrnet: 3D shape reconstruction via 2.5D sketches. In: Advances in neural information processing systems. pp. 540-550 (2017)
+53. Yang, Z., Pan, J.Z., Luo, L., Zhou, X., Grauman, K., Huang, Q.: Extreme relative pose estimation for rgb-d scans via scene completion. In: CVPR (2019)
+54. Yang, Z., Yan, S., Huang, Q.: Extreme relative pose network under hybrid representations. In: CVPR (2020)
+55. Zhang, X., Zhang, Z., Zhang, C., Tenenbaum, J., Freeman, B., Wu, J.: Learning to reconstruct shapes from unseen classes. In: Advances in Neural Information Processing Systems. pp. 2257-2268 (2018)
+56. Zhang, Y., Song, S., Yumer, E., Savva, M., Lee, J.Y., Jin, H., Funkhouser, T.: Physically-based rendering for indoor scene understanding using convolutional neural networks. In: CVPR (2017)
+57. Zhou, T., Brown, M., Snavely, N., Lowe, D.G.: Unsupervised learning of depth and ego-motion from video. In: CVPR (2017)
+58. Zwillinger, D., Kokoska, S.: CRC standard probability and statistics tables and formulae. Crc Press (1999)
+# Associative Alignment for Few-shot Image Classification
+
+Arman Afrasiyabi*, Jean-François Lalonde*, Christian Gagné*†
+
+*Université Laval, †Canada CIFAR AI Chair, Mila
+arman.afrasiyabi.1@ulaval.ca, {jflalonde,christian.gagne}@gel.ulaval.ca
+https://lvsn.github.io/associative-alignment/
+
+Abstract. Few-shot image classification aims at training a model from only a few examples for each of the "novel" classes. This paper proposes the idea of associative alignment for leveraging part of the base data by aligning the novel training instances to the closely related ones in the base training set. This expands the size of the effective novel training set by adding extra "related base" instances to the few novel ones, thereby allowing a constructive fine-tuning. We propose two associative alignment strategies: 1) a metric-learning loss for minimizing the distance between related base samples and the centroid of novel instances in the feature space, and 2) a conditional adversarial alignment loss based on the Wasserstein distance. Experiments on four standard datasets and three backbones demonstrate that our centroid-based alignment loss results in absolute accuracy improvements of $4.4\%$, $1.2\%$, and $6.2\%$ in 5-shot learning over the state of the art for object recognition, fine-grained classification, and cross-domain adaptation, respectively.
+
+Keywords: associative alignment, few-shot image classification
+
+# 1 Introduction
+
+Despite recent progress, generalizing to new concepts with little supervision is still a challenge in computer vision. In the context of image classification, few-shot learning aims to obtain a model that can learn to recognize novel image classes when very few training examples are available.
+
+Meta-learning [9, 36, 42, 47] is a possible approach to achieve this, by extracting common knowledge from a large amount of labeled data (the "base" classes) to train a model that can then learn to classify images from "novel" concepts with only a few examples. This is achieved by repeatedly sampling small subsets from the large pool of base images, effectively simulating the few-shot scenario. Standard transfer learning has also been explored as an alternative method [3, 14, 34]. The idea is to pre-train a network on the base samples and then fine-tune the classification layer on the novel examples. Interestingly, Chen et al. [3] demonstrated that doing so performs on par with more sophisticated meta-learning strategies. It is, however, necessary to freeze the feature encoder part of the
+
+Fig. 1: The use of many related bases (circles) in addition to few novel-class samples (diamonds) allows better discriminative models: (a) using the related bases directly may not properly capture the novel classes; while (b) aligning both related base and novel training instances (in the feature space) provides more relevant training data for classification. Plots are generated with t-SNE [30] applied to the ResNet-18 feature embedding before (a) and after (b) the application of the centroid alignment. Points are color-coded by class.
+
+network when fine-tuning on the novel classes since the network otherwise overfits the novel examples. We hypothesize that this hinders performance and that gains could be made if the entire network is adapted to the novel categories.
+
+In this paper, we propose an approach that simultaneously prevents overfitting without restricting the learning capabilities of the network for few-shot image classification. Our approach relies on the standard transfer learning strategy [3] as a starting point, but subsequently exploits base categories that are most similar (in the feature space) to the few novel samples to effectively provide additional training examples. We dub these similar categories the "related base" classes. Of course, the related base classes represent different concepts than the novel classes, so fine-tuning directly on them could confuse the network (see fig. 1-(a)). The key idea of this paper is to align, in feature space, the novel examples with the related base samples (fig. 1-(b)).
+
+To this end, we present two possible solutions for associative alignment: 1) centroid alignment, inspired by ProtoNet [42], which benefits from explicitly shrinking the intra-class variations and is more stable to train, but assumes that each class distribution is well-approximated by a single mode; and 2) adversarial alignment, inspired by WGAN [1], which does not make that assumption, but whose training complexity is greater due to the critic network. We demonstrate, through extensive experiments, that our centroid-based alignment procedure achieves state-of-the-art performance in few-shot classification on several standard benchmarks. Similar results are obtained by our adversarial alignment, which shows the effectiveness of our associative alignment approach.
+
+We present the following contributions. First, we propose two approaches for aligning novel to related base classes in the feature space, allowing for effective training of entire networks for few-shot image classification. Second, we introduce a strong baseline that combines standard transfer learning [3] with an additive angular margin loss [6], along with early stopping to regularize the network while pre-training on the base categories. We find that this simple baseline actually improves on the state of the art, in the best case by $3\%$ in overall accuracy. Third, we demonstrate through extensive experiments, on four standard datasets and using three well-known backbone feature extractors, that our proposed centroid alignment significantly outperforms the state of the art in three types of scenarios: generic object recognition (gains of $1.7\%$, $4.4\%$, and $2.1\%$ in overall accuracy for 5-shot on mini-ImageNet, tieredImageNet, and FC100, respectively), fine-grained classification ($1.2\%$ on CUB), and cross-domain adaptation ($6.2\%$ from mini-ImageNet to CUB), all using the ResNet-18 backbone.
+
+# 2 Related work
+
+The main few-shot learning approaches can be broadly categorized into meta-learning and standard transfer learning. In addition, data augmentation and regularization techniques (typically in meta-learning) have also been used for few-shot learning. We briefly review relevant works in each category below. Note that several different computer vision problems such as object counting [58], video classification [59], motion prediction [16], and object detection [52] have been framed as few-shot learning. Here, we mainly focus on works from the image classification literature.
+
+Meta-learning This family of approaches frames few-shot learning in the form of episodic training [7, 9, 36, 39, 42, 46, 52, 54]. An episode is defined by pretending to be in a few-shot regime while training on the base categories, which are available in large quantities. Initialization- and metric-based approaches are two variations on the episodic training scheme relevant for this work. Initialization-based methods [9, 10, 22] learn an initial model able to adapt to few novel samples with a small number of gradient steps. In contrast, our approach performs a larger number of updates, but requires that the alignment be maintained between the novel samples and their related base examples. Metric-based approaches [2, 12, 21, 25, 27, 33, 42, 44, 45, 47, 53, 57] learn a metric with the intent of reducing the intra-class variations while training on base categories. For example, ProtoNet [42] was proposed to learn a feature space where instances of a given class are located close to the corresponding prototype (centroid), allowing accurate distance-based classification. Our centroid alignment strategy borrows from such distance-based criteria but uses them to match distributions in the feature space instead of building a classifier.
+
+Standard transfer learning The strategy behind this method is to pre-train a network on the base classes and subsequently fine-tune it on the novel examples [3, 14, 34]. Despite its simplicity, Chen et al. [3] recently demonstrated that such an approach could result in similar generalization performance compared to meta-learning when deep backbones are employed as feature extractors. However, they
+
+have also shown that the weights of the pre-trained feature extractor must remain frozen while fine-tuning due to the propensity for overfitting. Although the training procedure we are proposing is similar to standard fine-tuning in base categories, our approach allows the training of the entire network, thereby increasing the learned model capacity while improving classification accuracy.
+
+Regularization trick Wang et al. [51] proposed regression networks for regularization purposes by refining the parameters of the fine-tuned model to be close to the pre-trained model. More recently, Lee et al. [24] exploited the implicit differentiation of a linear classifier with hinge loss and $\mathcal{L}_2$ regularization to the CNN-based feature learner. Dvornik et al. [8] use an ensemble of networks to decrease the classifier's variance.
+
+Data augmentation Another family of techniques relies on additional data for training in a few-shot regime, most of the time following a meta-learning training procedure [4, 5, 11, 15, 17, 31, 40, 49, 55, 56]. Several ways of doing so have been proposed, including Feature Hallucination (FH) [17], which learns mappings between examples with an auxiliary generator that then hallucinates extra training examples (in the feature space). Subsequently, Wang et al. [49] proposed to use a GAN for the same purpose, thereby addressing the poor generalization of the FH framework. Unfortunately, it has been shown that this approach suffers from mode collapse [11]. Instead of generating artificial data for augmentation, others have proposed methods to take advantage of additional unlabeled data [13, 26, 37, 50]. Liu et al. [29] propose to propagate labels from few labeled data to many unlabeled data, akin to our detection of related bases. We also rely on more data for training, but in contrast to these approaches, our method does not need any new data, nor does it require generating any. Instead, we exploit the data that is already available in the base domain and align the novel domain to the relevant base samples through fine-tuning.
+
+Previous work has also exploited base training data; most related to ours are the works of [4] and [28]. Chen et al. [4] propose to use embedding and deformation sub-networks to leverage additional training samples, whereas we rely on a single feature extractor network, which is much simpler to implement and train. Unlike the random base example sampling of [4], which interpolates novel example deformations in the image space, we propose to borrow the internal distribution structure of the detected related classes in feature space. In addition, our alignment strategies introduce extra criteria to keep the focus of the learner on the novel classes, which prevents the novel classes from becoming outliers. Focused on object detection, Lim et al. [28] propose a model to search for similar object categories using a sparse grouped Lasso framework. Unlike [28], we propose and evaluate two associative alignments in the context of few-shot image classification.
+
+From the alignment perspective, our work is related to Jiang et al. [20], which, in the context of zero-shot learning, proposes a coupled dictionary matching of visual-semantic structures to find matching concepts. In contrast, we propose associative base-novel class alignments along with two strategies for enforcing the unification of the related concepts.
+
+# 3 Preliminaries
+
+Let us assume that we have a large base dataset $\mathcal{X}^b = \{(\mathbf{x}_i^b, y_i^b)\}_{i=1}^{N^b}$ , where $\mathbf{x}_i^b \in \mathbb{R}^d$ is the $i$ -th data instance of the set and $y_i^b \in \mathcal{Y}^b$ is the corresponding class label. We are also given a small amount of novel class data $\mathcal{X}^n = \{(\mathbf{x}_i^n, y_i^n)\}_{i=1}^{N^n}$ with labels $y_i^n \in \mathcal{Y}^n$ from a set of distinct classes $\mathcal{Y}^n$ . Few-shot classification aims to train a classifier with only a few examples from each of the novel classes (e.g., 5 or even just 1). In this work, we used the standard transfer learning strategy of Chen et al. [3], which is organized into the following two stages.
+
+Pre-training stage The learning model is a neural network composed of a feature extractor $f(\cdot | \theta)$ , parameterized by $\theta$ , followed by a linear classifier $c(\mathbf{x} | \mathbf{W}) \equiv \mathbf{W}^\top f(\mathbf{x} | \theta)$ , described by matrix $\mathbf{W}$ , ending with a scoring function such as softmax to produce the output. The network is trained from scratch on examples from the base categories $\mathcal{X}^b$ .
+
+Fine-tuning stage In order to adapt the network to the novel classes, the network is subsequently fine-tuned on the few examples from $\mathcal{X}^n$ . Since overfitting is likely to occur if all the network weights are updated, the feature extractor weights $\theta$ are frozen, with only the classifier weights $\mathbf{W}$ being updated in this stage.
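To make the pre-training/fine-tuning split concrete, the fine-tuning stage above can be sketched in a few lines of NumPy: the frozen encoder $f(\cdot|\theta)$ is treated as a black box whose features are computed once, and only the classifier weights $\mathbf{W}$ of $c(\mathbf{x}|\mathbf{W}) = \mathbf{W}^\top f(\mathbf{x}|\theta)$ are updated by gradient descent on the softmax cross-entropy. This is an illustrative sketch (function names, step count, and learning rate are our own choices), not the authors' code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def finetune_classifier(f_frozen, X_novel, y_novel, K_n, steps=200, eta=0.5):
    """Fine-tuning stage sketch: the encoder is frozen, so its features are
    computed once; only W of c(x|W) = W^T f(x|theta) is updated."""
    F = f_frozen(X_novel)                  # (N, d) frozen features
    W = np.zeros((F.shape[1], K_n))
    Y = np.eye(K_n)[y_novel]               # one-hot targets
    for _ in range(steps):
        P = softmax(F @ W)
        W -= eta * F.T @ (P - Y) / len(F)  # cross-entropy gradient w.r.t. W only
    return W
```

Updating the full network on the few novel examples instead would overfit, which is precisely the limitation the associative alignment of the next section is designed to remove.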
+
+# 4 Associative alignment
+
+Freezing the feature extractor weights $\theta$ indeed reduces overfitting, but also limits the learning capacity of the model. In this paper, we strive for the best of both worlds and present an approach which controls overfitting while maintaining the original learning capacity of the model. We borrow the internal distribution structure of a subset of related base categories, $\mathcal{X}^{rb} \subset \mathcal{X}^{b}$ . To account for the discrepancy between the novel and related base classes, we propose to align the novel categories to the related base categories in feature space. Such a mapping allows for a bigger pool of training data while making instances of these two sets more coherent. Note that, as opposed to [4], we do not modify the related base instances in any way: we simply wish to align novel examples to the distributions of their related class instances.
+
+In this section, we first describe how the related base classes are determined. Then, we present our main contribution: the "centroid associative alignment" method, which exploits the related base instances to improve classification performance on novel classes. We conclude by presenting an alternative associative alignment strategy, which relies on an adversarial framework.
+
+# 4.1 Detecting the related bases
+
+Fig. 2: Results of the related base selection algorithm in a 5-way 5-shot scenario. Each column represents a different novel class. The top row shows the 5 novel instances, while the bottom row shows 60 randomly selected related base instances with $B = 10$.
+
+We develop a simple, yet effective procedure to select a set of base categories related to a novel category. Our method associates $B$ base categories to each novel class. After training $c(f(\cdot |\theta)|\mathbf{W})$ on $\mathcal{X}^b$, we first fine-tune $c(\cdot |\mathbf{W})$ on $\mathcal{X}^n$ while keeping $\theta$ fixed. Then, we define $\mathbf{M} \in \mathbb{R}^{K^b \times K^n}$ as a base-novel similarity matrix, where $K^b$ and $K^n$ are respectively the number of classes in $\mathcal{X}^b$ and $\mathcal{X}^n$. An element $m_{i,j}$ of the matrix $\mathbf{M}$ corresponds to the ratio of examples associated with the $i$-th base class that are classified as the $j$-th novel class:
+
+$$
+m_{i,j} = \frac{1}{\left|\mathcal{X}_i^b\right|} \sum_{(\mathbf{x}_l^b,\,\cdot\,) \in \mathcal{X}_i^b} \mathbb{I}\left[ j = \underset{k}{\arg\max}\; c_k\left(f\left(\mathbf{x}_l^b \mid \theta\right) \mid \mathbf{W}\right) \right], \tag{1}
+$$
+
+where $c_{k}(f(\mathbf{x}|\theta)|\mathbf{W})$ is the classifier output $c(\cdot |\mathbf{W})$ for class $k$ . Then, the $B$ base classes with the highest score for a given novel class are kept as the related base for that class. Fig. 2 illustrates example results obtained with this method in a 5-shot, 5-way scenario.
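A minimal NumPy sketch of this selection step may help: it builds $\mathbf{M}$ from the novel-classifier predictions on the base examples (eq. 1) and keeps the $B$ highest-scoring base classes per novel class. The function names and the toy linear classifier used below are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def similarity_matrix(base_features, base_labels, novel_classifier, K_b, K_n):
    """Base-novel similarity matrix M of eq. 1.

    base_features: (N, d) encoded base examples f(x|theta).
    base_labels:   (N,) base class indices in [0, K_b).
    novel_classifier: callable mapping (N, d) features to (N, K_n) scores.
    """
    M = np.zeros((K_b, K_n))
    preds = np.argmax(novel_classifier(base_features), axis=1)  # novel-class votes
    for i in range(K_b):
        mask = base_labels == i
        # m_ij = fraction of base-class-i examples classified as novel class j
        M[i] = np.bincount(preds[mask], minlength=K_n) / mask.sum()
    return M

def related_bases(M, B):
    """For each novel class j, keep the B base classes with the highest m_ij."""
    return {j: np.argsort(-M[:, j])[:B].tolist() for j in range(M.shape[1])}
```

Each row of $\mathbf{M}$ sums to one by construction, so the top-$B$ selection simply ranks base classes by how often they are mistaken for a given novel class.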
+
+# 4.2 Centroid associative alignment
+
+Let us assume the set of instances $\mathcal{X}_i^n$ belonging to the $i$-th novel class, $i \in \mathcal{Y}^n$, $\mathcal{X}_i^n = \{(\mathbf{x}_j^n, y_j^n) \in \mathcal{X}^n \mid y_j^n = i\}$, and the set of related base examples $\mathcal{X}_i^{rb}$ assigned to the same novel class $i$, $\mathcal{X}_i^{rb} = \{(\mathbf{x}_j^b, y_j^b) \in \mathcal{X}^{rb} \mid g(y_j^b \mid \mathbf{M}) = i\}$. The function $g(\cdot \mid \mathbf{M}) : \mathcal{Y}^b \to \mathcal{Y}^n$ maps base class labels to the novel ones according to the similarity matrix $\mathbf{M}$. We wish to find an alignment transformation for matching the probability densities $p(f(\mathbf{x}_{i,k}^n \mid \theta))$ and $p(f(\mathbf{x}_{i,l}^{rb} \mid \theta))$. Here, $\mathbf{x}_{i,k}^n$ is the $k$-th element from class $i$ in the novel set, and $\mathbf{x}_{i,l}^{rb}$ is the $l$-th element from class $i$ in the related base set. This approach has the added benefit of allowing the fine-tuning of all of the model parameters $\theta$ and $\mathbf{W}$ with a reduced level of overfitting.
+
+We propose a metric-based centroid distribution alignment strategy. The idea is to enforce intra-class compactness during the alignment process. Specifically, we explicitly push the training examples from the $i$ -th novel class $\mathcal{X}_i^n$ towards the centroid of their related examples $\mathcal{X}_i^{rb}$ in feature space. The centroid $\pmb{\mu}_i$ of $\mathcal{X}_i^{rb}$ is computed by
+
+$$
+\boldsymbol{\mu}_i = \frac{1}{|\mathcal{X}_i^{rb}|} \sum_{(\mathbf{x}_j,\,\cdot\,) \in \mathcal{X}_i^{rb}} f\left(\mathbf{x}_j \mid \theta\right), \tag{2}
+$$
+
+# Algorithm 1: Centroid alignment
+
+Input: pre-trained model $c(f(\cdot|\theta)|\mathbf{W})$, novel class set $\mathcal{X}^n$, related base set $\mathcal{X}^{rb}$
+
+Output: fine-tuned $c(f(\cdot|\theta)|\mathbf{W})$
+
+while not done do
+
+1. $\tilde{\mathcal{X}}^n \gets$ sample a batch from $\mathcal{X}^n$; $\tilde{\mathcal{X}}^{rb} \gets$ sample a batch from $\mathcal{X}^{rb}$
+2. evaluate $\mathcal{L}_{\mathrm{ca}}(\tilde{\mathcal{X}}^n, \tilde{\mathcal{X}}^{rb})$ (eq. 3); update encoder: $\theta \gets \theta - \eta_{\mathrm{ca}}\nabla_{\theta}\mathcal{L}_{\mathrm{ca}}(\tilde{\mathcal{X}}^n, \tilde{\mathcal{X}}^{rb})$
+3. evaluate $\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^{rb})$ (eq. 7); update classifier: $\mathbf{W} \gets \mathbf{W} - \eta_{\mathrm{clf}}\nabla_{\mathbf{W}}\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^{rb})$
+4. evaluate $\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^n)$ (eq. 7); update both: $\mathbf{W} \gets \mathbf{W} - \eta_{\mathrm{clf}}\nabla_{\mathbf{W}}\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^n)$; $\theta \gets \theta - \eta_{\mathrm{clf}}\nabla_{\theta}\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^n)$
+
+end
+
+
+Fig. 3: Schematic overview of our centroid alignment. The feature learner $f(\cdot|\theta)$ takes an example $\mathbf{x}_i^n$ from the $i$-th novel category and a related base example $\mathbf{x}_i^{rb}$. A Euclidean centroid-based alignment loss $\mathcal{L}_{\mathrm{ca}}$ (red arrow) aligns the encoded $\mathbf{x}_i^n$ and $\mathbf{x}_i^{rb}$. Blue arrows represent the classification loss $\mathcal{L}_{\mathrm{clf}}$.
+
+where the sum runs over the related base examples of class $i$. Denoting by $N^n$ and $N^{rb}$ the number of examples in $\mathcal{X}^n$ and $\mathcal{X}^{rb}$, respectively, we define the centroid alignment loss as
+
+$$
+\mathcal{L}_{\mathrm{ca}}\left(\mathcal{X}^n\right) = -\frac{1}{N^n N^{rb}} \sum_{i=1}^{K^n} \sum_{(\mathbf{x}_j,\,\cdot\,) \in \mathcal{X}_i^n} \log \frac{\exp\left[-\left\|f(\mathbf{x}_j \mid \theta) - \boldsymbol{\mu}_i\right\|_2^2\right]}{\sum_{k=1}^{K^n} \exp\left[-\left\|f(\mathbf{x}_j \mid \theta) - \boldsymbol{\mu}_k\right\|_2^2\right]}. \tag{3}
+$$
+
+Our alignment strategy bears similarities to [42] which also uses eq. 3 in a meta-learning framework. In our case, we use that same equation to match distributions. Fig. 3 illustrates our proposed centroid alignment, and algorithm 1 presents the overall procedure. First, we update the parameters of the feature extraction network $f(\cdot |\theta)$ using eq. 3. Second, the entire network is updated using a classification loss $\mathcal{L}_{\mathrm{clf}}$ (defined in sec. 5).
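The forward computation of eqs. 2 and 3 can be sketched as follows (NumPy, forward pass only; in practice the loss would be written in a framework that differentiates it w.r.t. $\theta$, and the numerically stable log-sum-exp form is our own choice, not spelled out in the paper):

```python
import numpy as np

def centroid_alignment_loss(novel_feats, novel_labels, related_feats, related_labels, K_n):
    """Sketch of eqs. 2-3. Features are already-encoded f(x|theta) vectors;
    related base labels are assumed mapped to novel classes via g(.|M)."""
    # eq. 2: one centroid per novel class, from the related base embeddings
    mus = np.stack([related_feats[related_labels == i].mean(axis=0)
                    for i in range(K_n)])
    # squared Euclidean distance of every novel embedding to every centroid
    d2 = ((novel_feats[:, None, :] - mus[None, :, :]) ** 2).sum(axis=-1)
    # eq. 3: distance-based log-softmax over centroids (stable log-sum-exp)
    m = (-d2).max(axis=1, keepdims=True)
    log_p = -d2 - (m + np.log(np.exp(-d2 - m).sum(axis=1, keepdims=True)))
    nll = -log_p[np.arange(len(novel_feats)), novel_labels]
    return nll.sum() / (len(novel_feats) * len(related_feats))  # 1/(N^n N^rb)
```

Novel examples sitting close to the centroid of their related base class incur a small loss, so minimizing it pulls each novel class toward its related base distribution in feature space.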
+
+# 4.3 Adversarial associative alignment
+
+As an alternative associative alignment strategy, inspired by WGAN [1], we experiment with training the encoder $f(\cdot|\theta)$ to perform adversarial alignment using a conditioned critic network $h(\cdot|\phi)$, based on the Wasserstein-1 distance between two probability densities $p_x$ and $p_y$:
+
+$$
+D\left(p_{x}, p_{y}\right) = \sup_{\|h\|_{L} \leq 1} \mathbb{E}_{x \sim p_{x}}[h(x)] - \mathbb{E}_{x \sim p_{y}}[h(x)], \tag{4}
+$$
+
+where sup is the supremum and $h$ is a 1-Lipschitz function. Similarly to Arjovsky et al. [1], we use a parameterized critic network $h(\cdot |\phi)$ conditioned on the concatenation of the feature embedding of either $\mathbf{x}_i^n$ or $\mathbf{x}_j^{rb}$ , along with the
+
+# Algorithm 2: Adversarial alignment
+
+Input: pre-trained model $c(f(\cdot |\theta)|\mathbf{W})$, novel class set $\mathcal{X}^n$, related base set $\mathcal{X}^{rb}$
+
+Output: fine-tuned $c(f(\cdot |\theta)|\mathbf{W})$
+
+while not done do
+
+$\tilde{\mathcal{X}}^n \gets$ sample a batch from $\mathcal{X}^n$
+
+$\tilde{\mathcal{X}}^{rb} \gets$ sample a batch from $\mathcal{X}^{rb}$
+
+for $i = 0,\dots ,n_{\mathrm{critic}}$ do
+
+evaluate $\mathcal{L}_h(\widetilde{\mathcal{X}}^n,\widetilde{\mathcal{X}}^{rb})$ (eq.5)
+ $\triangleright$ update critic:
+ $\phi \gets \phi +\eta_h\nabla_\phi \mathcal{L}_h(\widetilde{\mathcal{X}}^n,\widetilde{\mathcal{X}}^{rb})$
+
+ $\phi \gets \mathrm{clip}(\phi , -0.01, 0.01)$
+
+end
+
+evaluate $\mathcal{L}_{\mathrm{aa}}(\widetilde{\mathcal{X}}^n)$ (eq. 6)
+
+$\theta \gets \theta -\eta_{\mathrm{aa}}\nabla_{\theta}\mathcal{L}_{\mathrm{aa}}(\tilde{\mathcal{X}}^n)$
+
+evaluate $\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^{rb})$ (eq. 7)
+
+$\mathbf{W}\gets \mathbf{W} - \eta_{\mathrm{clf}}\nabla_{\mathbf{W}}\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^{rb})$
+
+evaluate $\mathcal{L}_{\mathrm{clf}}(\widetilde{\mathcal{X}}^n)$ (eq. 7)
+
+$\mathbf{W}\gets \mathbf{W} - \eta_{\mathrm{clf}}\nabla_{\mathbf{W}}\mathcal{L}_{\mathrm{clf}}(\widetilde{\mathcal{X}}^n)$
+
+$\theta \gets \theta -\eta_{\mathrm{clf}}\nabla_{\theta}\mathcal{L}_{\mathrm{clf}}(\tilde{\mathcal{X}}^{n})$
+
+end
+
+
+Fig. 4: Overview of our adversarial alignment. The feature learner $f(\cdot |\theta)$ takes an image $\mathbf{x}_i^n$ from the $i$-th novel class and an example $\mathbf{x}_j^{rb}$ of the related base. The critic $h(\cdot |\phi)$ takes the feature vectors and the one-hot class label vector. Green, red, and blue arrows represent the critic loss $\mathcal{L}_h$, the adversarial loss $\mathcal{L}_{\mathrm{aa}}$, and the classification loss $\mathcal{L}_{\mathrm{clf}}$, respectively.
+
+corresponding label $y_{i}^{n}$ encoded as a one-hot vector. Conditioning $h(\cdot |\phi)$ helps the critic in matching novel categories and their corresponding related base categories. The critic $h(\cdot |\phi)$ is trained with loss
+
+$$
+\begin{aligned} \mathcal{L}_{h}\left(\mathcal{X}^{n}, \mathcal{X}^{rb}\right) ={} & \frac{1}{N^{rb}} \sum_{\left(\mathbf{x}_{i}^{rb}, y_{i}^{rb}\right) \in \mathcal{X}^{rb}} h\left(\left[f\left(\mathbf{x}_{i}^{rb} \mid \theta\right)\, y_{i}^{rb}\right] \mid \phi\right) \\ & - \frac{1}{N^{n}} \sum_{\left(\mathbf{x}_{i}^{n}, y_{i}^{n}\right) \in \mathcal{X}^{n}} h\left(\left[f\left(\mathbf{x}_{i}^{n} \mid \theta\right)\, y_{i}^{n}\right] \mid \phi\right), \end{aligned} \tag{5}
+$$
+
+where $[\,\cdot\,]$ is the concatenation operator. Then, the encoder parameters $\theta$ are updated using
+
+$$
+\mathcal{L}_{\mathrm{aa}}\left(\mathcal{X}^{n}\right) = \frac{1}{K^{n}} \sum_{\left(\mathbf{x}_{i}^{n}, y_{i}^{n}\right) \in \mathcal{X}^{n}} h\left(\left[f\left(\mathbf{x}_{i}^{n} \mid \theta\right)\, y_{i}^{n}\right] \mid \phi\right). \tag{6}
+$$
+
+Algorithm 2 summarizes our adversarial alignment method. First, we perform the parameter update of critic $h(\cdot |\phi)$ using eq. 5. Similar to WGAN [1], we perform $n_{\mathrm{critic}}$ iterations to optimize $h$ , before updating $f(\cdot |\theta)$ using eq. 6. Finally, the entire network is updated by a classification loss $\mathcal{L}_{\mathrm{clf}}$ (defined in sec. 5).
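To make the critic's inner loop concrete, the toy NumPy sketch below uses a linear critic $h([\mathbf{z}\, y] \mid \phi) = \phi^\top [\mathbf{z}\, y]$, for which the gradient of eq. 5 is simply the difference of mean concatenated inputs. The linear critic and analytic gradient are illustrative assumptions; the paper's critic is an MLP trained by backpropagation:

```python
import numpy as np

def critic_loss(phi, feats_rb, y_rb, feats_n, y_n):
    """Eq. 5 for a linear critic h([z, y] | phi) = phi . [z, y]."""
    h = lambda z, y: np.concatenate([z, y], axis=1) @ phi
    return h(feats_rb, y_rb).mean() - h(feats_n, y_n).mean()

def clip_weights(phi, c=0.01):
    """WGAN weight clipping, keeping h approximately 1-Lipschitz."""
    return np.clip(phi, -c, c)

rng = np.random.default_rng(1)
n, d, k = 32, 8, 5
feats_n = rng.standard_normal((n, d))         # encoded novel batch
feats_rb = rng.standard_normal((n, d)) + 1.0  # encoded related-base batch, shifted
y = np.tile(np.eye(k)[0], (n, 1))             # one-hot labels (a single class here)

phi, eta_h = np.zeros(d + k), 0.01
for _ in range(5):                            # n_critic inner iterations
    # gradient of L_h w.r.t. phi for the linear critic
    grad = (np.concatenate([feats_rb, y], axis=1).mean(axis=0)
            - np.concatenate([feats_n, y], axis=1).mean(axis=0))
    phi = clip_weights(phi + eta_h * grad)    # ascend L_h, then clip
```

After the inner loop, the encoder would be updated to *decrease* the critic's ability to separate the two batches (eq. 6).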
+
+# 5 Establishing a strong baseline
+
+Before evaluating our alignment strategies in sec. 6, we first establish a strong baseline for comparison by following the recent literature. In particular, we build on the work of Chen et al. [3], but incorporate a different loss function and episodic early stopping in the pre-training stage.
+
+# 5.1 Classification loss functions
+
+Deng et al. [6] have shown that an additive angular margin ("arcmax" hereafter) outperforms other metric learning algorithms for face recognition. Arcmax has a metric learning property, since it enforces a geodesic distance margin penalty on the normalized hypersphere; we believe this can benefit few-shot classification by keeping class clusters compact and well-separated.
+
+Let $\mathbf{z}$ be the representation of $\mathbf{x}$ in feature space. As per [6], we transform the logit as $\mathbf{w}_j^\top \mathbf{z} = \| \mathbf{w}_j\| \| \mathbf{z}\|\cos \varphi_j$, where $\varphi_{j}$ is the angle between $\mathbf{z}$ and $\mathbf{w}_j$, the $j$-th column of the weight matrix $\mathbf{W}$. Each weight is $l_2$-normalized so that $\| \mathbf{w}_j\| = 1$. Arcmax adds an angular margin $m$ to the examples distributed on the hypersphere:
+
+$$
+\mathcal{L}_{\mathrm{clf}} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\left(s \cos\left(\varphi_{y_{i}} + m\right)\right)}{\exp\left(s \cos\left(\varphi_{y_{i}} + m\right)\right) + \sum_{j \neq y_{i}} \exp\left(s \cos \varphi_{j}\right)}, \tag{7}
+$$
+
+where $N$ is the number of examples, $s$ is the radius of the hypersphere on which $\mathbf{z}$ is distributed, and $m$ is the angular margin; both $s$ and $m$ are hyperparameters (see sec. 6.1). The overall goal of the margin is to enforce inter-class discrepancy and intra-class compactness.
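Eq. 7 amounts to softmax cross-entropy over cosine logits, where only the true class's angle is increased by $m$ before rescaling by $s$. A NumPy sketch (hypothetical function name; a production version would also guard the case $\varphi_{y_i} + m > \pi$):

```python
import numpy as np

def arcmax_loss(z, W, labels, s=20.0, m=0.1):
    """Additive angular margin loss (cf. eq. 7).

    z : (N, d) features; W : (d, C) weight matrix; labels : (N,) int classes.
    """
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)  # features on the sphere
    W_n = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm columns w_j
    cos = np.clip(z_n @ W_n, -1.0, 1.0)                 # cos(phi_j) for each class
    rows = np.arange(len(labels))
    logits = s * cos
    # add the angular margin m to the true-class angle only
    logits[rows, labels] = s * np.cos(np.arccos(cos[rows, labels]) + m)
    # numerically stable softmax cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    log_p = logits[rows, labels] - np.log(np.exp(logits).sum(axis=1))
    return -log_p.mean()
```

With $m = 0$ this reduces to a plain scaled-cosine softmax; the margin strictly lowers the true-class logit, which is what pushes clusters apart during training.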
+
+# 5.2 Episodic early stopping
+
+A fixed number of pre-training epochs is commonly used (e.g., [3, 9, 42, 47]), but this can hamper performance in the fine-tuning stage. Observing the validation error, we find early stopping to be necessary in the pre-training phase (see supp. mat. for a validation error plot). We therefore employ episodic early stopping on the validation set at pre-training time, stopping training when the mean accuracy over a window of recent epochs starts to decrease. The best model within that window is selected as the final result.
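A minimal Python sketch of one plausible such rule (the precise windowed criterion is an assumption: stop once the mean accuracy of the most recent window falls below that of the preceding window, and keep the best epoch of the final window):

```python
def should_stop(val_acc, window=50):
    """True once the mean validation accuracy over the last `window` epochs
    falls below the mean of the window immediately before it."""
    if len(val_acc) < 2 * window:
        return False
    recent = sum(val_acc[-window:]) / window
    previous = sum(val_acc[-2 * window:-window]) / window
    return recent < previous

def best_epoch(val_acc, window=50):
    """Index of the best epoch within the final window (the model kept)."""
    start = max(0, len(val_acc) - window)
    tail = val_acc[start:]
    return start + tail.index(max(tail))
```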
+
+# 6 Experimental validation
+
+In the following, we conduct an experimental evaluation and comparison of the proposed associative alignment strategies for few-shot learning. First, we introduce the datasets used and evaluate the strong baseline from sec. 5.
+
+# 6.1 Datasets and implementation details
+
+Datasets We present experiments on four benchmarks: mini-ImageNet [47], tieredImageNet [37], and FC100 [33] for generic object recognition; and CUB-200-2011 (CUB) [48] for fine-grained image classification. mini-ImageNet is a subset of the ImageNet ILSVRC-12 dataset [38] containing 100 categories and 600 examples per class. We used the same splits as Ravi and Larochelle [36], where 64, 16, and 20 classes are used for the base, validation, and novel classes, respectively. As a larger benchmark, tieredImageNet [37] is also a subset of the ImageNet ILSVRC-12 dataset [38], this time with 351 base, 97 validation, and 160 novel classes. Derived from CIFAR-100 [23], the FC100 dataset [33] contains 100 classes grouped into 20 superclasses to minimize class overlap. The base, validation, and novel splits contain 60, 20, and 20 classes belonging to 12, 5, and 5 superclasses, respectively. The CUB dataset [48] contains 11,788 images from 200 bird categories. We used the same splits as Hilliard et al. [19], with 100, 50, and 50 classes for the base, validation, and novel classes, respectively.
+
+Network architectures We experiment with three backbones for the feature learner $f(\cdot |\theta)$: 1) a 4-layer convolutional network ("Conv4") with an input image resolution of $84 \times 84$, similar to [9, 36, 42]; 2) a ResNet-18 [18] with an input size of $224 \times 224$; and 3) a 28-layer Wide Residual Network ("WRN-28-10") [41] with an input size of $80 \times 80$ and 3 steps of dimension reduction. As the critic network $h(\cdot |\phi)$ (c.f. sec. 4.3), we use an MLP with a single hidden layer of 1024 units.
+
+Implementation details Recall from sec. 3 that training consists of two stages: 1) pre-training using the base categories $\mathcal{X}^b$; and 2) fine-tuning on the novel categories $\mathcal{X}^n$. For pre-training, we use the early stopping algorithm from sec. 5.2 with a window size of 50. Standard data augmentation approaches (i.e., color jitter, random crops, and left-right flips, as in [3]) are employed, and the Adam algorithm with a learning rate of $10^{-3}$ and a batch size of 64 is used for both pre-training and fine-tuning. The arcmax loss (eq. 7) is configured with $s = 20$ and $m = 0.1$, set by cross-validation. In the fine-tuning stage, episodes are defined by randomly selecting $N = 5$ classes from the novel categories $\mathcal{X}^n$; $k$ examples for each category are subsequently sampled ($k = 1$ and $k = 5$ in our experiments). As in Chen et al. [3], no standard data augmentation is used in this stage. We used episodic cross-validation to find $s$ and $m$ with a fixed encoder; more specifically, $(s,m)$ were found to be $(5, 0.1)$ for the Conv4 and $(5, 0.01)$ for the WRN-28-10 and ResNet-18 backbones. The learning rate for Adam was set to $10^{-3}$ and $10^{-5}$ for the centroid and adversarial alignments, respectively. Similarly to [1], 5 iterations (inner loop of algorithm 2) were used to train the critic $h(\cdot|\phi)$. We fix the number of related base categories at $B = 10$ (see supp. mat. for an ablation study on $B$). For this reason, we used a relatively large number of categories (50 classes out of the 64 available in mini-ImageNet).
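The episode construction described above (select $N$ novel classes, then $k$ examples per class) is a two-level random sample; a minimal Python sketch with hypothetical data structures:

```python
import random

def sample_episode(novel_sets, n_way=5, k_shot=5, seed=None):
    """Sample an N-way k-shot episode from the novel categories.

    novel_sets : dict mapping class name -> list of example ids.
    Returns a dict of the n_way chosen classes with k_shot examples each.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(novel_sets), n_way)
    return {c: rng.sample(novel_sets[c], k_shot) for c in classes}
```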
+
+Table 1: Preliminary evaluation using mini-ImageNet and CUB, presenting 5-way classification accuracy using the Conv4 backbone, with $\pm$ indicating the $95\%$ confidence intervals over 600 episodes. The best result is boldfaced, while the best result prior to this work is highlighted in blue. Throughout this paper, “-” indicates when a paper does not report results in the corresponding scenario.
+
+| Method | mini-ImageNet 1-shot | mini-ImageNet 5-shot | CUB 1-shot | CUB 5-shot |
+| --- | --- | --- | --- | --- |
+| *Meta-learning* | | | | |
+| Meta-LSTM [36] | 43.44 ± 0.77 | 55.31 ± 0.71 | - | - |
+| MatchingNet‡ [47] | 43.56 ± 0.84 | 55.31 ± 0.73 | 60.52 ± 0.88 | 75.29 ± 0.75 |
+| ProtoNet‡ [42] | 49.42 ± 0.78 | 68.20 ± 0.66 | 51.31 ± 0.91 | 70.77 ± 0.69 |
+| MAML‡ [10] | 48.07 ± 1.75 | 63.15 ± 0.91 | 55.92 ± 0.95 | 72.09 ± 0.76 |
+| RelationNet‡ [44] | 50.44 ± 0.82 | 65.32 ± 0.70 | 62.45 ± 0.98 | 76.11 ± 0.69 |
+| softmax† | 46.40 ± 0.72 | 64.37 ± 0.59 | 47.12 ± 0.74 | 64.16 ± 0.71 |
+| softmax†◇ | 46.99 ± 0.73 | 65.33 ± 0.60 | 45.68 ± 0.86 | 66.94 ± 0.84 |
+| cosmax† | 50.92 ± 0.76 | 67.29 ± 0.59 | 60.53 ± 0.83 | 79.34 ± 0.61 |
+| cosmax†◇ | 52.04 ± 0.82 | 68.47 ± 0.60 | 60.66 ± 1.04 | 79.79 ± 0.75 |
+| our baseline (sec. 5) | 51.90 ± 0.79 | 69.07 ± 0.62 | 60.85 ± 1.07 | 79.74 ± 0.64 |
+| adversarial | 52.13 ± 0.99 | 70.78 ± 0.60 | 63.30 ± 0.94 | 81.35 ± 0.67 |
+| centroid | 53.14 ± 1.06 | 71.45 ± 0.72 | 62.71 ± 0.88 | 80.48 ± 0.81 |
+
+$\dagger$ our implementation with early stopping; $\ddagger$ implementation from [3] for CUB
+
+# 6.2 mini-ImageNet and CUB with a shallow Conv4 backbone
+
+We first evaluate the new baseline presented in sec. 5 and our associative alignment strategies using the Conv4 backbone on the mini-ImageNet (see supp. mat. for evaluations with a higher number of ways) and CUB datasets, with corresponding results presented in table 1. We note that arcmax with early stopping improves on cosmax and softmax, with and without early stopping, in both the 1- and 5-shot scenarios on both the mini-ImageNet and CUB datasets. We followed the same dataset split configuration, network architecture, and implementation details given in [3] for our testing. Our centroid associative alignment outperforms the state of the art in all the experiments, with gains of $1.24\%$ and $2.38\%$ in 1- and 5-shot over our baseline on mini-ImageNet. For CUB, the adversarial alignment provides an additional gain of $0.6\%$ and $0.87\%$ over the centroid one.
+
+# 6.3 mini-ImageNet and tieredImageNet with deep backbones
+
+We now evaluate our proposed associative alignment on both the mini-ImageNet and tieredImageNet datasets using two deep backbones: ResNet-18 and WRN-28-10. Table 2 compares our proposed alignment methods with several approaches.
+
+mini-ImageNet Our centroid associative alignment strategy achieves the best 1- and 5-shot classification performance with both the ResNet-18 and WRN-28-10 backbones, with notable absolute accuracy improvements of $2.72\%$ and $1.68\%$ over
+
+Table 2: mini-ImageNet and tieredImageNet results using ResNet-18 and WRN-28-10 backbones. $\pm$ denotes the $95\%$ confidence intervals over 600 episodes.
+
+| Method | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot |
+| --- | --- | --- | --- | --- |
+| *ResNet-18* | | | | |
+| TADAM [33] | 58.50 ± 0.30 | 76.70 ± 0.30 | - | - |
+| ProtoNet‡ [42] | 54.16 ± 0.82 | 73.68 ± 0.65 | 61.23 ± 0.77 | 80.00 ± 0.55 |
+| SNAIL [32] | 55.71 ± 0.99 | 68.88 ± 0.92 | - | - |
+| IDeMe-Net [4] | 59.14 ± 0.86 | 74.63 ± 0.74 | - | - |
+| Activation to Param. [35] | 59.60 ± 0.41 | 73.74 ± 0.19 | - | - |
+| MTL [43] | 61.20 ± 1.80 | 75.50 ± 0.80 | - | - |
+| TapNet [54] | 61.65 ± 0.15 | 76.36 ± 0.10 | 63.08 ± 0.15 | 80.26 ± 0.12 |
+| VariationalFSL [57] | 61.23 ± 0.26 | 77.69 ± 0.17 | - | - |
+| MetaOptNet* [24] | 62.64 ± 0.61 | 78.63 ± 0.46 | 65.99 ± 0.72 | 81.56 ± 0.53 |
+| our baseline (sec. 5) | 58.07 ± 0.82 | 76.62 ± 0.58 | 65.08 ± 0.19 | 83.67 ± 0.51 |
+| adversarial alignment | 58.84 ± 0.77 | 77.92 ± 0.82 | 66.44 ± 0.61 | 85.12 ± 0.53 |
+| centroid alignment | 59.88 ± 0.67 | 80.35 ± 0.73 | 69.29 ± 0.56 | 85.97 ± 0.49 |
+| *WRN-28-10* | | | | |
+| LEO [39] | 61.76 ± 0.08 | 77.59 ± 0.12 | 66.33 ± 0.09 | 81.44 ± 0.12 |
+| wDAE [15] | 61.07 ± 0.15 | 76.75 ± 0.11 | 68.18 ± 0.16 | 83.09 ± 0.12 |
+| CC+rot [13] | 62.93 ± 0.45 | 79.87 ± 0.33 | 70.53 ± 0.51 | 84.98 ± 0.36 |
+| Robust-dist++ [8] | 63.28 ± 0.62 | 81.17 ± 0.43 | - | - |
+| Transductive-ft [7] | 65.73 ± 0.68 | 78.40 ± 0.52 | 73.34 ± 0.71 | 85.50 ± 0.50 |
+| our baseline (sec. 5) | 63.28 ± 0.71 | 78.31 ± 0.57 | 68.47 ± 0.86 | 84.11 ± 0.65 |
+| adversarial alignment | 64.79 ± 0.93 | 82.02 ± 0.88 | 73.87 ± 0.76 | 84.95 ± 0.59 |
+| centroid alignment | 65.92 ± 0.60 | 82.85 ± 0.55 | 74.40 ± 0.68 | 86.61 ± 0.59 |
+
+‡ Results are from [3] for mini-ImageNet and from [24] for tieredImageNet; * ResNet-12
+
+MetaOptNet [24] and Robust-dist++ [8], respectively. The single case where a previous method achieves superior results is that of MetaOptNet, which outperforms our method by $2.76\%$ in 1-shot. For the WRN-28-10 backbone, we achieve results similar to Transductive-ft [7] in 1-shot, but outperform their method by $4.45\%$ in 5-shot. Note that unlike IDeMe-Net [4], SNAIL [32], and TADAM [33], which rely on extra modules, our method achieves significant improvements without any changes to the backbone.
+
+tieredImageNet Table 2 also shows that our centroid associative alignment outperforms the compared methods on tieredImageNet in both 1- and 5-shot scenarios. Notably, our centroid alignment results in a gain of $3.3\%$ and $4.41\%$ over MetaOptNet [24] using the ResNet-18. Likewise, our centroid alignment gains $1.06\%$ and $1.11\%$ over the best of the compared methods using WRN-28-10.
+
+# 6.4 FC100 and CUB with a ResNet-18 backbone
+
+We present additional results on the FC100 and CUB datasets with a ResNet-18 backbone in table 3. In FC100, our centroid alignment gains $0.73\%$ and $2.14\%$
+
+Table 3: Results on the FC100 and CUB dataset using ResNet-18 backbones. $\pm$ denotes the $95\%$ confidence intervals over 600 episodes. The best result is boldfaced, while the best result prior to this work is highlighted in blue.
+
+| Method | FC100 1-shot | FC100 5-shot | CUB 1-shot | CUB 5-shot |
+| --- | --- | --- | --- | --- |
+| Robust-20 [8] | - | - | 58.67 ± 0.65 | 75.62 ± 0.48 |
+| GNN-LFT [45] | - | - | 51.51 ± 0.80 | 73.11 ± 0.68 |
+| RelationNet‡ [44] | - | - | 67.59 ± 1.02 | 82.75 ± 0.58 |
+| ProtoNet‡ [42] | 40.5 ± 0.6 | 55.3 ± 0.6 | 71.88 ± 0.91 | 87.42 ± 0.48 |
+| TADAM [33] | 40.1 ± 0.4 | 56.1 ± 0.4 | - | - |
+| MetaOptNet† [24] | 41.1 ± 0.6 | 55.5 ± 0.6 | - | - |
+| MTL [43] | 45.1 ± 1.8 | 57.6 ± 0.9 | - | - |
+| Transductive-ft [7] | 43.2 ± 0.6 | 57.6 ± 0.6 | - | - |
+| our baseline (sec. 5) | 40.84 ± 0.71 | 57.02 ± 0.63 | 71.71 ± 0.86 | 85.74 ± 0.49 |
+| adversarial | 43.44 ± 0.71 | 58.69 ± 0.56 | 70.80 ± 1.12 | 88.04 ± 0.54 |
+| centroid | 45.83 ± 0.48 | 59.74 ± 0.56 | 74.22 ± 1.09 | 88.65 ± 0.55 |
+
+$\ddagger$ implementation from [3] for CUB, and from [24] for FC100
+
+over MTL [43] in 1- and 5-shot, respectively. We also observe improvements on CUB with our associative alignment approaches, with the centroid alignment outperforming ProtoNet [42] by $2.3\%$ in 1-shot and $1.2\%$ in 5-shot. We outperform Robust-20 [8], an ensemble of 20 networks, by $4.03\%$ and $4.15\%$ on CUB.
+
+# 6.5 Cross-domain evaluation
+
+We also evaluate our alignment strategies in cross-domain image classification. Here, following [3], the base categories are drawn from mini-ImageNet, but the novel categories are from CUB. As shown in table 4, our proposed centroid alignment gains $1.3\%$ and $5.4\%$ over the baseline in 1- and 5-shot, respectively. Adversarial alignment falls $1.2\%$ below the baseline in 1-shot, but gains $5.9\%$ in 5-shot. Overall, our centroid alignment method shows absolute accuracy improvements over the state of the art (i.e., cosmax [3]) of $3.8\%$ and $6.0\%$ in 1- and 5-shot, respectively. We also outperform Robust-20 [8], an ensemble of 20 networks, by $4.65\%$ in 5-shot on the mini-ImageNet to CUB cross-domain task. One could argue that the three bird categories (i.e., house finch, robin, and toucan) in mini-ImageNet bias the cross-domain evaluation. Re-training the approach with these classes excluded resulted in similar performance, as shown in table 4.
+
+# 7 Discussion
+
+This paper presents the idea of associative alignment for few-shot image classification, which allows for higher generalization performance by enabling the
+
+Table 4: Cross-domain results from mini-ImageNet to CUB in 1-shot, 5-shot, 10-shot scenarios using a ResNet-18 backbone.
+
+| Method | 1-shot | 5-shot | 10-shot |
+| --- | --- | --- | --- |
+| ProtoNet‡ [49] | - | 62.02 ± 0.70 | - |
+| MAML‡ [10] | - | 51.34 ± 0.72 | - |
+| RelationNet‡ [44] | - | 57.71 ± 0.73 | - |
+| Diverse 20 [8] | - | 66.17 ± 0.73 | - |
+| cosmax† [3] | 43.06 ± 1.01 | 64.38 ± 0.86 | 67.56 ± 0.77 |
+| our baseline (sec. 5) | 45.60 ± 0.94 | 64.93 ± 0.95 | 68.95 ± 0.78 |
+| adversarial | 44.37 ± 0.94 | 70.80 ± 0.83 | 79.63 ± 0.71 |
+| adversarial* | 44.65 ± 0.88 | 71.48 ± 0.96 | 78.52 ± 0.70 |
+| centroid | 46.85 ± 0.75 | 70.37 ± 1.02 | 79.98 ± 0.80 |
+| centroid* | 47.25 ± 0.76 | 72.37 ± 0.89 | 79.46 ± 0.72 |
+
+* without birds (house finch, robin, toucan) in base classes; † our implementation, with early stopping; ‡ implementation from [3]
+
+training of the entire network, while still avoiding overfitting. To do so, we design a procedure to detect related base categories for each novel class. We then propose a centroid-based alignment strategy that preserves intra-class alignment while performing updates for the classification task, and explore an adversarial alignment strategy as an alternative. Our experiments demonstrate that our approach, particularly the centroid-based alignment, outperforms previous works in almost all scenarios.
+
+The current limitations of our work suggest interesting directions for future research. First, the alignment approach (sec. 4) might include irrelevant examples from the base categories, so using categorical semantic information could help filter out bad samples. An analysis showed that $\sim 12\%$ of the samples become out-of-distribution (OOD) under a centroid nearest-neighbour criterion on mini-ImageNet in 5-way 1- and 5-shot using ResNet-18; classification results were not significantly affected by discarding OOD examples at each iteration. Second, the multi-modality of certain base categories seems inevitable and might degrade generalization performance compared to the single-mode case assumed by our centroid alignment strategy; investigating the use of a mixture family might therefore improve generalization. Finally, our algorithms compute the related base categories once and keep them fixed during an episode, ignoring the changes applied to the latent space during episodic training. A more sophisticated dynamic sampling mechanism could therefore be helpful in the fine-tuning stage.
+
+# Acknowledgement
+
+This project was supported by funding from NSERC-Canada, Mitacs, Prompt-Québec, and E Machine Learning. We thank Ihsen Hedhli, Saed Moradi, Marc-Andre Gardner, and Annette Schwerdtfeger for proofreading the manuscript.
+
+# References
+
+1. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN. arXiv preprint arXiv:1701.07875 (2017)
+2. Bertinetto, L., Henriques, J.F., Torr, P., Vedaldi, A.: Meta-learning with differentiable closed-form solvers. In: The International Conference on Learning Representations (2019)
+3. Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y.C.F., Huang, J.B.: A closer look at few-shot classification. arXiv preprint arXiv:1904.04232 (2019)
+4. Chen, Z., Fu, Y., Wang, Y.X., Ma, L., Liu, W., Hebert, M.: Image deformation meta-networks for one-shot learning. In: The Conference on Computer Vision and Pattern Recognition (2019)
+5. Chu, W.H., Li, Y.J., Chang, J.C., Wang, Y.C.F.: Spot and learn: A maximum-entropy patch sampler for few-shot image classification. In: The Conference on Computer Vision and Pattern Recognition (2019)
+6. Deng, J., Guo, J., Xue, N., Zafeiriou, S.: Arcface: Additive angular margin loss for deep face recognition. In: The Conference on Computer Vision and Pattern Recognition (2019)
+7. Dhillon, G.S., Chaudhari, P., Ravichandran, A., Soatto, S.: A baseline for few-shot image classification. arXiv preprint arXiv:1909.02729 (2019)
+8. Dvornik, N., Schmid, C., Mairal, J.: Diversity with cooperation: Ensemble methods for few-shot classification. In: The International Conference on Computer Vision (2019)
+9. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: The International Conference on Machine Learning (2017)
+10. Finn, C., Xu, K., Levine, S.: Probabilistic model-agnostic meta-learning. In: Advances in Neural Information Processing Systems (2018)
+11. Gao, H., Shou, Z., Zareian, A., Zhang, H., Chang, S.F.: Low-shot learning via covariance-preserving adversarial augmentation networks. In: Advances in Neural Information Processing Systems (2018)
+12. Garcia, V., Bruna, J.: Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043 (2017)
+13. Gidaris, S., Bursuc, A., Komodakis, N., Pérez, P., Cord, M.: Boosting few-shot visual learning with self-supervision. In: The International Conference on Computer Vision (2019)
+14. Gidaris, S., Komodakis, N.: Dynamic few-shot visual learning without forgetting. In: The Conference on Computer Vision and Pattern Recognition (2018)
+15. Gidaris, S., Komodakis, N.: Generating classification weights with GNN denoising autoencoders for few-shot learning. arXiv preprint arXiv:1905.01102 (2019)
+16. Gui, L.Y., Wang, Y.X., Ramanan, D., Moura, J.M.F.: Few-shot human motion prediction via meta-learning. In: The European Conference on Computer Vision (2018)
+17. Hariharan, B., Girshick, R.: Low-shot visual recognition by shrinking and hallucinating features. In: The International Conference on Computer Vision (2017)
+18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: The Conference on Computer Vision and Pattern Recognition (2016)
+19. Hilliard, N., Phillips, L., Howland, S., Yankov, A., Corley, C.D., Hodas, N.O.: Few-shot learning with metric-agnostic conditional embeddings. arXiv preprint arXiv:1802.04376 (2018)
+
+20. Jiang, H., Wang, R., Shan, S., Chen, X.: Learning class prototypes via structure alignment for zero-shot recognition. In: The European Conference on Computer Vision (2018)
+21. Kim, J., Oh, T.H., Lee, S., Pan, F., Kweon, I.S.: Variational prototyping-encoder: One-shot learning with prototypical images. In: The Conference on Computer Vision and Pattern Recognition (2019)
+22. Kim, T., Yoon, J., Dia, O., Kim, S., Bengio, Y., Ahn, S.: Bayesian model-agnostic meta-learning. arXiv preprint arXiv:1806.03836 (2018)
+23. Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 and CIFAR-100 datasets. URL: https://www.cs.toronto.edu/kriz/cifar.html (2009)
+24. Lee, K., Maji, S., Ravichandran, A., Soatto, S.: Meta-learning with differentiable convex optimization. In: The Conference on Computer Vision and Pattern Recognition (2019)
+25. Li, W., Wang, L., Xu, J., Huo, J., Gao, Y., Luo, J.: Revisiting local descriptor based image-to-class measure for few-shot learning. In: The Conference on Computer Vision and Pattern Recognition (2019)
+26. Li, X., Sun, Q., Liu, Y., Zhou, Q., Zheng, S., Chua, T.S., Schiele, B.: Learning to self-train for semi-supervised few-shot classification. In: Advances in Neural Information Processing Systems (2019)
+27. Lifchitz, Y., Avrithis, Y., Picard, S., Bursuc, A.: Dense classification and implanting for few-shot learning. In: The Conference on Computer Vision and Pattern Recognition (2019)
+28. Lim, J.J., Salakhutdinov, R.R., Torralba, A.: Transfer learning by borrowing examples for multiclass object detection. In: Advances in Neural Information Processing Systems (2011)
+29. Liu, B., Wu, Z., Hu, H., Lin, S.: Deep metric transfer for label propagation with limited annotated data. In: The IEEE International Conference on Computer Vision (ICCV) Workshops (Oct 2019)
+30. Maaten, L.v.d., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research (2008)
+31. Mehrotra, A., Dukkipati, A.: Generative adversarial residual pairwise networks for one shot learning. arXiv preprint arXiv:1703.08033 (2017)
+32. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. arXiv preprint arXiv:1707.03141 (2017)
+33. Oreshkin, B., López, P.R., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. In: Advances in Neural Information Processing Systems (2018)
+34. Qi, H., Brown, M., Lowe, D.G.: Low-shot learning with imprinted weights. In: The Conference on Computer Vision and Pattern Recognition (2018)
+35. Qiao, S., Liu, C., Shen, W., Yuille, A.L.: Few-shot image recognition by predicting parameters from activations. In: The Conference on Computer Vision and Pattern Recognition (2018)
+36. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning (2016)
+37. Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676 (2018)
+38. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. The International Journal of Computer Vision (2015)
+
+39. Rusu, A.A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., Hadsell, R.: Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960 (2018)
+40. Schwartz, E., Karlinsky, L., Shtok, J., Harary, S., Marder, M., Kumar, A., Feris, R., Giryes, R., Bronstein, A.: Delta-encoder: an effective sample synthesis method for few-shot object recognition. In: Advances in Neural Information Processing Systems (2018)
+41. Zagoruyko, S., Komodakis, N.: Wide residual networks. In: British Machine Vision Conference (2016)
+42. Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: Advances in Neural Information Processing Systems (2017)
+43. Sun, Q., Liu, Y., Chua, T.S., Schiele, B.: Meta-transfer learning for few-shot learning. In: The Conference on Computer Vision and Pattern Recognition (2019)
+44. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: The Conference on Computer Vision and Pattern Recognition (2018)
+45. Tseng, H.Y., Lee, H.Y., Huang, J.B., Yang, M.H.: Cross-domain few-shot classification via learned feature-wise transformation. arXiv preprint arXiv:2001.08735 (2020)
+46. Vilalta, R., Drissi, Y.: A perspective view and survey of meta-learning. Artificial Intelligence Review (2002)
+47. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Advances in Neural Information Processing Systems (2016)
+48. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The caltech-ucsd birds-200-2011 dataset (2011)
+49. Wang, Y.X., Girshick, R., Hebert, M., Hariharan, B.: Low-shot learning from imaginary data. In: The Conference on Computer Vision and Pattern Recognition (2018)
+50. Wang, Y.X., Hebert, M.: Learning from small sample sets by combining unsupervised meta-training with cnns. In: Advances in Neural Information Processing Systems (2016)
+51. Wang, Y.X., Hebert, M.: Learning to learn: Model regression networks for easy small sample learning. In: The European Conference on Computer Vision. Springer (2016)
+52. Wang, Y.X., Ramanan, D., Hebert, M.: Meta-learning to detect rare objects. In: The International Conference on Computer Vision (2019)
+53. Wertheimer, D., Hariharan, B.: Few-shot learning with localization in realistic settings. In: The Conference on Computer Vision and Pattern Recognition (2019)
+54. Yoon, S.W., Seo, J., Moon, J.: Tapnet: Neural network augmented with task-adaptive projection for few-shot learning. arXiv preprint arXiv:1905.06549 (2019)
+55. Zhang, H., Zhang, J., Koniusz, P.: Few-shot learning via saliency-guided hallucination of samples. In: The Conference on Computer Vision and Pattern Recognition (2019)
+56. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017)
+57. Zhang, J., Zhao, C., Ni, B., Xu, M., Yang, X.: Variational few-shot learning. In: The International Conference on Computer Vision (2019)
+58. Zhao, F., Zhao, J., Yan, S., Feng, J.: Dynamic conditional networks for few-shot learning. In: The European Conference on Computer Vision (2018)
+59. Zhu, L., Yang, Y.: Compound memory networks for few-shot video classification. In: The European Conference on Computer Vision (2018)
\ No newline at end of file
diff --git a/associativealignmentforfewshotimageclassification/images.zip b/associativealignmentforfewshotimageclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a95302be791c796c3896c18c282baf707afc8046
--- /dev/null
+++ b/associativealignmentforfewshotimageclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a748333d343ea241416ac0736dd8f1be1aabec70f94e883db08b6b10ef2ca89
+size 501183
diff --git a/associativealignmentforfewshotimageclassification/layout.json b/associativealignmentforfewshotimageclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ef6c6a063b5fa4137376de93ccb748883a92d620
--- /dev/null
+++ b/associativealignmentforfewshotimageclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69add6e8367caa0ffc3c5a743d6b6dc578cf7a22dcb2191c294bee6f92eadae7
+size 528336
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_content_list.json b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..661b4d2c6f99de4d203bd829aac4afcd9ab64757
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1206dfd925363d90b03b7995b1ce4ad18641b8c9e1bed93dcf16be969eb00b62
+size 83734
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_model.json b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fcb30e8c6524f224f110a1f55175187a284e0255
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:368c04554b0938611e0c218164a70c5da518dda09d023d69065935b183cbb070
+size 102894
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_origin.pdf b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e57e77e5189767aaabb91bacfc627dbb936698cf
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/b8b7b9d8-3ad4-4059-82e3-c48bca843bd1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:853f68f6d432b2dfb345b5f818f2d2a3ea904257a12daa61bbad8043dced98d8
+size 2248102
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/full.md b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..557f4b23ac623973534b2b1a7bbb4aaa1b901a1a
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/full.md
@@ -0,0 +1,294 @@
+# Asymmetric Two-Stream Architecture for Accurate RGB-D Saliency Detection
+
+Miao Zhang $^{1}$ , Sun Xiao Fei $^{1\star}$ , Jie Liu $^{1\star}$ , Shuang Xu $^{1}$ , Yongri Piao $^{1\boxdot}$ , and Huchuan Lu $^{1,2}$
+
+$^{1}$ Dalian University of Technology, Dalian, China
+ $^{2}$ Pengcheng Lab, Shenzhen, China
+{xiaofeisun, 1605721375, sxu1997}@mail.dlut.edu.cn, {miaozhang, yrpiao, lhchuan}@dlut.edu.cn
+https://github.com/OIPLab-DUT/ATSA
+
+Abstract. Most existing RGB-D saliency detection methods adopt symmetric two-stream architectures to learn discriminative RGB and depth representations. In fact, another level of ambiguity is often overlooked: whether RGB and depth data need to be fed into the same network. In this paper, we propose an asymmetric two-stream architecture that takes account of the inherent differences between RGB and depth data for saliency detection. First, we design a flow ladder module (FLM) for the RGB stream that fully extracts global and local information while maintaining saliency details. This is achieved by constructing four detail-transfer branches, each of which preserves detail information and receives global location information from the representations of the other, vertically parallel branches in an evolutionary way. Second, we propose a novel depth attention module (DAM) to ensure that depth features, with their high discriminative power in location and spatial structure, are effectively utilized when combined with RGB features in challenging scenes. The depth features can also discriminatively guide the RGB features via our proposed DAM to precisely locate the salient objects. Extensive experiments demonstrate that our method achieves superior performance over 13 state-of-the-art RGB-D approaches on 7 datasets. Our code will be publicly available.
+
+Keywords: Saliency detection $\cdot$ Flow ladder $\cdot$ Depth attention
+
+# 1 Introduction
+
+Salient object detection, which involves identifying the visually interesting regions of an image, is a well-researched domain of computer vision. It serves as an essential pre-processing step for various visual tasks such as image retrieval [7,15,17,28], visual tracking [2,20,38], object segmentation [12,39,40,43,42], and object recognition [10,36,37].
+
+A majority of existing works [21,26] for saliency detection focus on RGB images. While RGB-based saliency detection methods have achieved great success, appearance features in RGB data are less predictive in some challenging scenes, such as those with multiple or transparent objects, similar foreground and background, complex backgrounds, or low-light environments.
+
+The depth cue offers strong discriminative power in location and spatial structure, which has been proven beneficial to accurate saliency prediction [35]. Moreover, paired depth data for RGB natural images are widely available with the advent of depth sensors, e.g., Kinect and Lytro Illum. Consequently, using depth information has gained growing interest in saliency detection.
+
+Most RGB-D-based methods utilize symmetric two-stream architectures for extracting RGB and depth features [4,6,18,32]. However, we observe that while RGB data contain more information such as color, texture and contour, as well as limited location cues, grayscale depth data provide more information such as spatial structure and 3D layout. In consequence, a symmetric two-stream network may overlook the inherent differences between RGB and depth data. Asymmetric architectures have been adopted in a few works to extract RGB and depth features, taking the differences between the two modalities into account. Zhu et al. [48] present an architecture composed of a master network for processing RGB values and a sub-network making full use of depth cues, which incorporates depth-based features into the master network via direct concatenation. Zhao et al. [46] incorporate the contrast prior to enhance the depth maps and then integrate them into the RGB stream for saliency detection. However, simple fusion strategies like direct concatenation or summation are less adaptive for locating the salient objects, given the myriad possible positions of salient objects in the real world. Overall, the above methods overlook the fact that the depth cue contributes differently to salient object prediction in different scenes. Furthermore, existing RGB-D methods inevitably suffer from detail information loss [41,16] due to the strides and pooling operations adopted in the RGB and depth streams. An intuitive solution is to use skip-connections [22] or short-connections [21] for reconstructing the detail information. Although these strategies have brought satisfactory improvements, they remain limited in predicting complete structures with fine details.
+
+
+Fig. 1. The comparison of predicted maps between our method and two top-ranking RGB-D-based methods, i.e., DMRA [32] and CPFP [46], on salient object details. The $1^{st}$ and $4^{th}$ rows are enlarged views of the red box areas in the middle two rows, showing the superior performance of our method on saliency details
+
+Building on the above observation, we strive to take a further step towards the goal of accurate saliency detection with an asymmetric two-stream model. The primary challenge towards this goal is how to effectively extract rich global context information while preserving local saliency details. The second challenge is how to effectively utilize the discriminative power of depth features to guide the RGB features for locating salient objects accurately.
+
+To confront these challenges, we propose an asymmetric two-stream architecture as illustrated in Fig. 2. Concretely, our contributions are:
+
+- We design a flow ladder module (FLM) and a lightweight depth network (DepthNet) with a small model size of 6.7MB. Instead of adopting skip-connections or short-connections, our FLM can effectively extract local detail information (see Fig. 1) and global context information through a local-global evolutionary fusion flow for accurate saliency detection.
+
+- We propose a novel depth attention module (DAM) to ensure that the depth features can effectively guide the RGB features by using the discriminative power of depth cues. Its effectiveness has been experimentally verified (see Table 4).
+
+- Furthermore, we conduct extensive experiments on 7 datasets and demonstrate that our method achieves consistently superior performance over 13 state-of-the-art RGB-D approaches in terms of 4 evaluation metrics. Numerically, our approach reduces the MAE by nearly $33\%$ on the DUT-RGBD dataset. In addition, our method reduces the model size by $33\%$ compared with the previously smallest method (PDNet) and achieves the second-fastest running speed of 46 FPS.
+
+# 2 Related work
+
+RGB-D saliency detection. Although many RGB-based saliency detection methods have achieved appealing performance [16,29,33,44,45,47], they may not accurately detect the salient area because the appearance features in RGB data are less predictive in complex scenes, such as low-contrast scenes, transparent objects, foregrounds sharing similar content with backgrounds, multiple objects, and complex backgrounds. With the advent of consumer-grade depth cameras such as Kinect cameras, light field cameras and lidars, depth cues with a wealth of geometric and structural information are widely used in salient object detection (SOD).
+
+Existing RGB-D saliency detection methods can be generally classified into two categories. Traditional methods [49,50,9,31,8,35,24,11]: Ren et al. [35] propose a two-stage RGB-D saliency detection framework using the validity of global priors. Lang et al. [24] introduce the depth prior into the saliency detection model to improve detection performance. Desingh et al. [11] use a non-linear regression to combine the RGB-D saliency detection model with the RGB model to measure the saliency values. CNNs-based methods [46,32,6,5,48,4,18,34]: To better mine salient information in challenging scenes, some CNNs-based methods combine depth information with RGB information for more accurate results. Practices and theories that lead to symmetric two-stream architectures, which extract RGB and depth representations equally, have been studied for a long time [32,18,6,5,4].
+
+
+Fig. 2. The overall architecture of our proposed approach. Our asymmetric architecture consists of three parts, i.e., the RGBNet, the DAM and the lightweight DepthNet. The RGBNet includes a VGG-19 backbone and a flow ladder module. For the depth stream, we also employ the same backbone as the RGBNet. The black arrows represent the information flows
+
+Han et al. [18] design a symmetric architecture for automatically fusing the deep representations of the depth and RGB views to obtain the final saliency map. Chen et al. [6] utilize two-stream CNNs-based models that introduce cross-modal interactions in multiple layers by direct summation. Recently, several asymmetric architectures have been proposed for processing the different data types [46,48]. Zhao et al. [46] use the enhanced depth information as an auxiliary cue and adopt a pyramid decoding structure to obtain more accurate salient regions.
+
+Because of the inherent differences between RGB and depth information, classic symmetric two-stream architectures and simple fusion strategies may lead to inaccurate predictions. Besides, the strides and pooling operations adopted in existing RGB-D-based methods for downsampling inevitably result in information loss. To address the above-mentioned issues, in this work, we design an asymmetric network that ably fuses RGB and depth information via a depth attention mechanism for precise saliency detection.
+
+# 3 The proposed method
+
+The overall architecture of our proposed method is shown in Fig. 2. In this section, we begin with describing the overall architecture in Section 3.1, then introduce the DepthNet in Section 3.2, the flow ladder module in Section 3.3, and finally the proposed depth attention module in Section 3.4.
+
+Table 1. Details of our DepthNet architecture. k represents the kernel size, s the stride, chns the number of input/output channels for each layer, p the padding, and in and out the input and output feature sizes
+
| Name | Layer | k | s | p | chns | in | out |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv1 | *1 | 3 | 1 | 1 | 3/64 | 256*256 | 256*256 |
| Maxpool | - | 2 | 2 | - | 64/64 | 256*256 | 128*128 |
| Conv2 | *1 | 3 | 1 | 1 | 64/128 | 128*128 | 128*128 |
| Maxpool | - | 2 | 2 | - | 128/128 | 128*128 | 64*64 |
| Transition1 | *1 | 3 | 1 | 1 | 128/32 | 64*64 | 64*64 |
| Conv3 | *4 | 3 | 1 | 1 | 32/32 | 64*64 | 64*64 |
|  |  | 3 | 1 | 1 | 32/32 | 64*64 | 64*64 |
| Conv4 | *4 | 3 | 1 | 1 | 32/32 | 64*64 | 64*64 |
|  |  | 3 | 1 | 1 | 32/32 | 64*64 | 64*64 |
| Transition2 | *1 | 3 | 1 | 1 | 32/128 | 64*64 | 64*64 |
| Conv5 | *4 | 3 | 1 | 1 | 128/128 | 64*64 | 64*64 |
|  |  | 3 | 1 | 1 | 128/128 | 64*64 | 64*64 |
+
+# 3.1 The overall architecture
+
+Considering that most RGB-D-based methods utilizing symmetric two-stream architectures overlook the inherent differences between RGB and depth data, we propose an asymmetric two-stream architecture, as illustrated in Fig. 2. Our two-stream architecture includes a lightweight depth stream and an RGB stream with a flow ladder module, namely the DepthNet and the RGBNet, respectively. For the depth stream, we design a lightweight architecture as shown in Table 1. The extracted depth features are fed into the RGB stream through a depth attention mechanism (DAM, see Fig. 3) to generate complementary features with affluent location and spatial-structure information. For the RGB stream, we adopt the commonly used VGG-19 architecture as our baseline. On top of this baseline, we propose a novel flow ladder module (FLM) that preserves detail information while receiving global location information from the representations of the other, vertically parallel branches in an evolutionary way, which benefits locating salient regions and achieves considerable performance gains.
+
+# 3.2 DepthNet
+
+Compared with RGB data, which contain richer color and texture information, depth cues focus on spatial location information. A large number of parameters in a complex depth-extraction network are therefore redundant, so we consider it unnecessary to process depth data with a network as complex and large as the RGBNet. In addition, the ablation experiments on symmetric and asymmetric architectures in Section 4.3 confirm this claim. As illustrated in Fig. 2, we adopt a
+
+
+Fig. 3. Illustration of the depth attention module (DAM). The images above $F_{out}$ are the corresponding original RGB image and ground truth
+
+detail-transfer architecture for the depth stream (see Table 1 for the detailed specification) and take the original depth maps as input. Our DepthNet transfers detail information through the whole architecture to capture fine spatial details. Given the differences between RGB and depth data, numerous redundant channels of depth features are unnecessary. We therefore prune the number of feature channels to 32 in Conv3 and Conv4 and to 128 in the final Conv, which yields an even more lightweight DepthNet with a model size of 6.7MB.
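
As a quick sanity check on Table 1, the spatial sizes follow from the standard convolution/pooling output-size formula. The sketch below (plain Python; `conv_out` is a hypothetical helper, not from the paper) traces the depth stream's resolutions under the settings listed in Table 1, treating the '-' padding entries as 0.

```python
def conv_out(size, k, s, p):
    """Standard convolution/pooling output-size formula."""
    return (size + 2 * p - k) // s + 1

# Layer settings transcribed from Table 1 (k, s, p).
size = 256
size = conv_out(size, k=3, s=1, p=1)   # Conv1:   256 -> 256
size = conv_out(size, k=2, s=2, p=0)   # Maxpool: 256 -> 128
size = conv_out(size, k=3, s=1, p=1)   # Conv2:   128 -> 128
size = conv_out(size, k=2, s=2, p=0)   # Maxpool: 128 -> 64
# Transition1 through Conv5 are all k=3, s=1, p=1 and keep 64x64.
print(size)  # 64
```

This reproduces the in/out columns of Table 1: only the two max-pooling layers reduce resolution, so the final depth features stay at 64*64.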
+
+# 3.3 RGBNet
+
+Deeper networks are able to extract richer high-level information such as location and semantic information, but the strides and pooling operations widely used in existing RGB-D-based methods may cause the loss of detail information, such as boundaries and small objects, for saliency detection. A straightforward solution to this issue is combining the low-level features with the high-level features via skip-connections [22]. However, the low-level features have less discriminative and predictive power for complex scenes and thus contribute little to accurate saliency detection. Hence, we design a novel RGBNet consisting of a VGG backbone (for fair comparison) and a flow ladder module, which preserves the local detail information by constructing four detail-transfer branches and fuses the global location information in an evolutionary way. To fit our task, we truncate the last three fully-connected layers and keep the five convolution blocks as well as all pooling operations of VGG-19. The FLM preserves the resolution of representations at multiple scales and levels, ensuring that the local detail information and the global location information contribute to the precision of saliency detection. More details are described as follows.
+
+In order to alleviate the detail information loss, we design a flow ladder module (FLM). This module is applied to VGG-19 and integrates four detail-transfer branches through a local-global evolutionary fusion flow. We design detail-transfer branches for preserving the saliency details. As shown in Fig. 2, the first two branches consist of 3 layers; the number of layers in the $3^{rd}$ and $4^{th}$ branches is decreased to 2 and 1, respectively. Specifically, we denote the $j^{th}$ layer of the $i^{th}$ branch as $B_{i}L_{j}$, $i \in [1,4]$, $j \in [1,3]$. $B_{i}L_{j}$ is composed of four basic blocks [19], each of which consists of two convolutional layers as shown at the top of Fig. 2. Our FLM consists of 4 evolved detail-transfer branches. Instead of adopting strides and pooling operations, our FLM preserves the resolution of the representations, with more details in each branch, by employing convolutional operations with a stride of 1.
+
+We design a novel local-global evolutionary fusion flow for integrating the multi-scale local and global features extracted from the detail-transfer branches. Each branch receives rich information from the other vertically parallel representations through our local-global evolutionary fusion flow. In this way, rich global context representations are generated while more local saliency details are preserved. Specifically, the representations of the deeper branches are fused into the shallower branches by upsampling and summation operations, while the representations of the shallower branches are fused into the deeper branches by downsampling and summation operations, as shown in the FLM of Fig. 2. Through the evolution between different branches (shown in Fig. 2), the local detail information and the global context information are effectively combined, which benefits the precision of saliency detection. The whole fusion process is described by the following equations:
+
+$$
+B_{i}L_{j} = \begin{cases} \operatorname{trans}(\operatorname{Conv}2) & i = 1,\ j = 1 \\ \operatorname{trans}(\operatorname{Conv}(i+1)) & i = j+1,\ j \in [1,3] \\ \sum_{n=1}^{j} f\left(B_{n}L_{j-1}\right) & i \in [1,j],\ j \in [2,3] \end{cases} \tag{1}
+$$
+
+$$
+F_{RGB}^{j} = \sum_{n=1}^{j+1} f\left(B_{n}L_{j}\right), \quad j \in [1,2], \tag{2}
+$$
+
+$$
+F_{RGB}^{3} = \operatorname{cat}\left(f\left(B_{n}L_{3}\right)\right), \quad n \in [1,4], \tag{3}
+$$
+
+where $B_{i}$ and $L_{j}$ denote the $i^{th}$ branch and the $j^{th}$ layer, respectively. $f(\cdot)$ denotes $n-i$ times up-sampling when $n > i$ and $i-n$ times down-sampling when $n < i$; when $n$ is equal to $i$, $f(\cdot)$ is the identity. Conv(i) denotes the output features of the $i^{th}$ Conv block in VGG-19, and trans(·) is a convolutional layer that transforms the number of channels. $cat(\cdot)$ denotes concatenating all features together. The final output of our FLM, namely $F_{RGB}^{3}$, is a concatenation of the multi-scale features extracted from the four branches. In conclusion, the features with local and global information are transferred to the parallel branches in an evolutionary way. Our proposed FLM can not only alleviate the loss of object detail information but also effectively integrate multi-scale and multi-level features for precise saliency prediction.
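
The resampling-and-summation step of the fusion flow can be sketched in a few lines of numpy. This is a toy illustration only: nearest-neighbour 2x resampling stands in for the paper's unspecified up/down-sampling operators, random arrays stand in for branch features, and it shows a single all-to-all fusion rather than the full layer-by-layer schedule of Eq. (1).

```python
import numpy as np

def resample(x, times):
    """f(.) sketch: |times| rounds of 2x resampling.
    times > 0 upsamples (nearest neighbour), times < 0 downsamples, 0 is identity."""
    for _ in range(abs(times)):
        if times > 0:
            x = x.repeat(2, axis=0).repeat(2, axis=1)
        else:
            x = x[::2, ::2]
    return x

# Toy branch maps: branch n lives at resolution 64 / 2**(n-1), as in the FLM.
branches = {n: np.random.rand(64 // 2**(n - 1), 64 // 2**(n - 1)) for n in range(1, 5)}

# Fuse every branch n into branch i: deeper branches are upsampled into shallower
# ones and vice versa, so each branch keeps its own resolution after summation.
fused = {i: sum(resample(branches[n], n - i) for n in range(1, 5)) for i in range(1, 5)}
```

After fusion, `fused[1]` is still 64x64 and `fused[4]` is still 8x8, i.e., every branch preserves its resolution while receiving information from all vertical parallel branches.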
+
+
+Fig. 4. Comparisons of ours with state-of-the-art CNNs-based methods. These methods are the top-ranking ones in the quantitative evaluation. Obviously, our results are more consistent with the ground truths (GT), especially in complex scenes
+
+# 3.4 Depth Attention Module
+
+Changes in the statistics of object positions in the real world make linear fusion strategies of RGB and depth data less adaptive to complex scenes. To take full advantage of the depth cues and their discriminative power in location and spatial structure, we design a depth attention module to adaptively fuse the RGB and depth representations, as shown in Fig. 3. Firstly, since the depth features contain abundant spatial and structural information, we utilize a context attention block, which contains a $1^{*}1$ convolutional layer $W_{k}$ and a softmax function, to extract the salient location cues more precisely, instead of applying a simple fusion like summation or concatenation. Then a matrix multiplication operation is adopted to aggregate all location features together to generate the attention weight of each channel $i$ (i.e., $\alpha_{i}$) for capturing pixel-wise spatial dependencies. Moreover, the degree of response to the salient regions varies between the features of different channels. Thus we adopt a channel-wise attention block, which contains two $1^{*}1$ convolutional layers $W_{c}$ and a LayerNorm function, to capture the interdependencies between channels and produce a weighted depth feature $\beta$. Then we adopt a dot product operation to fuse $\beta$ into the RGB stream, which helps guide the RGB features at the pixel level to distinguish the foreground from the background thoroughly. Furthermore, the ablation experiments in Section 4.3 verify the effectiveness of our DAM compared with simple fusion, and the visual results in Fig. 6 (b) show that the salient regions are emphasized through the attention mechanism.
+
+The details of these three blocks can be formulated as the following equations:
+
+$$
+\alpha_{i} = \sum_{j=1}^{N_{p}} \frac{e^{W_{k} F_{d}^{j}}}{\sum_{m=1}^{N_{p}} e^{W_{k} F_{d}^{m}}} F_{d}^{j}, \tag{4}
+$$
+
+$$
+\beta_{i} = \varsigma\left(W_{c2}\,\operatorname{ReLU}\left(LN\left(W_{c1} \alpha_{i}\right)\right) \odot F_{d}\right), \tag{5}
+$$
+
+$$
+F_{fusion} = \varsigma\left(F_{RGB} \odot \beta\right), \tag{6}
+$$
+
+where $\alpha_{i}$ denotes the weight of the $i^{th}$ channel used to obtain the global context features. $F_{d}^{j}$ denotes the $j^{th}$ position in the depth feature $F_{depth}$. $N_{p}$ is the number of positions in the depth feature map (i.e., $N_{p} = H \cdot W$). $W_{k}$, $W_{c1}$ and $W_{c2}$ denote $1^{*}1$ convolutional operations. $LN$ denotes the LayerNorm operation after the convolution $W_{c1}$, and $ReLU$ is an activation function. $\varsigma(\cdot)$ and $\odot$ denote the sigmoid function and the dot product operation, respectively. $\beta_{i}$ indicates the depth pixel-wise attention map for the $i^{th}$ channel of $F_{RGB}$. $F_{RGB}$ and $F_{fusion}$ represent the input RGB feature and the output feature of the DAM, respectively. The output $F_{fusion}$ thus carries much more effective depth-induced, context-aware attention features. Furthermore, the experiments in Section 4.3 show that our DAM is capable of fusing depth features discriminatively and filtering out features that are guided by depth cues by mistake.
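
Eqs. (4)-(6) can be traced with a small numpy sketch. This is a shape-level illustration under simplifying assumptions: features are flattened to (C, N_p) matrices, the $1^{*}1$ convolutions become plain matrix multiplies, and LayerNorm is reduced to a basic normalization; it is not the authors' implementation.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dam(F_rgb, F_d, W_k, W_c1, W_c2):
    """Sketch of Eqs. (4)-(6). F_rgb, F_d: (C, N_p) with N_p = H*W flattened;
    W_k: (1, C); W_c1, W_c2: (C, C) stand-ins for the 1x1 convolutions."""
    # Eq. (4): spatial softmax over positions, then aggregate -> alpha (C,)
    attn = softmax(W_k @ F_d, axis=1)                  # (1, N_p)
    alpha = (F_d * attn).sum(axis=1)                   # (C,)
    # Eq. (5): channel transform + LN + ReLU, gate the depth feature, sigmoid
    h = W_c1 @ alpha
    h = (h - h.mean()) / (h.std() + 1e-5)              # LayerNorm stand-in
    beta = sigmoid((W_c2 @ np.maximum(h, 0.0))[:, None] * F_d)   # (C, N_p)
    # Eq. (6): gate the RGB feature element-wise
    return sigmoid(F_rgb * beta)
```

The output has the same (C, N_p) shape as the RGB input and lies in (0, 1), so it can act as a depth-guided attention response on the RGB stream.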
+
+As illustrated in Fig. 3, the inputs of our DAM are $F_{RGB}^{i}$ and $F_{Depth}^{i}$, $i = 1,2,3$, extracted from our FLM and DepthNet, respectively. At the end, a simple decoder is adopted for supervision. The decoder module contains two bilinear upsampling functions, each of which is followed by 3 convolutional layers. The total loss $L$ can be represented as:
+
+$$
+L = l_{f}\left\{\operatorname{Decoder}\left(F_{fusion}^{3}\right); gt\right\}, \tag{7}
+$$
+
+where $F_{fusion}^{3}$ represents the output fusion feature of the third DAM and $gt$ means the ground-truth map. The cross-entropy loss $l_{f}$ can be computed as:
+
+$$
+l_{f}\left\{\hat{y}; y\right\} = -\left[y \log \hat{y} + (1 - y) \log (1 - \hat{y})\right], \tag{8}
+$$
+
+where $y$ and $\hat{y}$ denote the saliency ground-truth map and the predicted map, respectively.
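
A minimal numpy sketch of this loss, averaged over pixels and written with the conventional leading minus sign so that it is non-negative and minimized when the prediction matches the ground truth (the clipping epsilon is an added numerical-stability detail, not from the paper):

```python
import numpy as np

def bce(y_hat, y, eps=1e-7):
    """Pixel-averaged binary cross-entropy between a predicted saliency
    map y_hat and a binary ground-truth map y."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
```

A near-perfect prediction yields a loss close to zero, and the loss grows as the prediction moves away from the ground truth.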
+
+# 4 Experiments
+
+# 4.1 Dataset
+
+We perform our experiments on 7 public RGB-D datasets for fair comparisons, i.e., NJUD [23], NLPR [31], RGBD135 [8], STEREO [30], LFSD [27], DUT-RGBD [32], SSD [25]. We split these datasets following [4,6,18] to guarantee fair comparisons. We randomly select 800 samples from DUT-RGBD, 1485 samples from NJUD and 700 samples from NLPR for training. The remaining images in these 3 datasets and the other 4 datasets are all used for testing to verify the generalization ability of the saliency models. To prevent overfitting, we additionally augment the training set by flipping, cropping and rotating the images.
+
+Table 2. Quantitative comparisons of E-measure $(E_{\gamma})$ , S-measure $(S_{\lambda})$ , F-measure $(F_{\beta})$ and MAE $(M)$ on 7 widely-used RGB-D datasets. The best three results are shown in **boldface**, **red**, **green** fonts respectively. From top to bottom: the latest CNNs-based RGB-D methods and traditional RGB-D methods
+
| Method | Years | Backbone | DUT-RGBD \(E_{\gamma}\)↑ | DUT-RGBD \(S_{\lambda}\)↑ | DUT-RGBD \(F_{\beta}\)↑ | DUT-RGBD \(M\)↓ | NJUD \(E_{\gamma}\)↑ | NJUD \(S_{\lambda}\)↑ | NJUD \(F_{\beta}\)↑ | NJUD \(M\)↓ | NLPR \(E_{\gamma}\)↑ | NLPR \(S_{\lambda}\)↑ | NLPR \(F_{\beta}\)↑ | NLPR \(M\)↓ | SSD \(E_{\gamma}\)↑ | SSD \(S_{\lambda}\)↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours | - | VGG-19 | 0.948 | 0.918 | 0.920 | 0.032 | 0.921 | 0.901 | 0.893 | 0.040 | 0.945 | 0.907 | 0.876 | 0.028 | 0.901 | 0.860 |
| CPFP [46] | CVPR19 | VGG-16 | 0.814 | 0.749 | 0.736 | 0.099 | 0.906 | 0.878 | 0.877 | 0.053 | 0.924 | 0.888 | 0.822 | 0.036 | 0.832 | 0.807 |
| DMRA [32] | ICCV19 | VGG-19 | 0.927 | 0.888 | 0.883 | 0.048 | 0.908 | 0.886 | 0.872 | 0.051 | 0.942 | 0.899 | 0.855 | 0.031 | 0.892 | 0.857 |
| MMCI [6] | PR19 | VGG-16 | 0.855 | 0.791 | 0.753 | 0.113 | 0.878 | 0.859 | 0.813 | 0.079 | 0.871 | 0.855 | 0.729 | 0.059 | 0.860 | 0.814 |
| TANet [5] | TIP19 | VGG-16 | 0.866 | 0.808 | 0.779 | 0.093 | 0.893 | 0.878 | 0.844 | 0.061 | 0.916 | 0.886 | 0.795 | 0.041 | 0.879 | 0.839 |
| PDNet [48] | ICME19 | VGG-16 | 0.861 | 0.799 | 0.757 | 0.112 | 0.890 | 0.883 | 0.832 | 0.062 | 0.876 | 0.835 | 0.740 | 0.064 | 0.813 | 0.802 |
| PCA [4] | CVPR18 | VGG-16 | 0.858 | 0.801 | 0.760 | 0.100 | 0.896 | 0.877 | 0.844 | 0.059 | 0.916 | 0.873 | 0.794 | 0.044 | 0.883 | 0.843 |
| CTMF [18] | TCyb17 | VGG-16 | 0.884 | 0.834 | 0.792 | 0.097 | 0.864 | 0.849 | 0.788 | 0.085 | 0.869 | 0.860 | 0.723 | 0.056 | 0.837 | 0.776 |
| DF [34] | TIP17 | - | 0.842 | 0.730 | 0.748 | 0.145 | 0.818 | 0.735 | 0.744 | 0.151 | 0.838 | 0.769 | 0.682 | 0.099 | 0.802 | 0.742 |
| MB [49] | CAIP17 | - | 0.691 | 0.607 | 0.577 | 0.156 | 0.643 | 0.534 | 0.492 | 0.202 | 0.814 | 0.714 | 0.637 | 0.089 | 0.633 | 0.499 |
| CDCP [50] | ICCVW17 | - | 0.794 | 0.687 | 0.633 | 0.159 | 0.751 | 0.673 | 0.618 | 0.181 | 0.785 | 0.724 | 0.591 | 0.114 | 0.714 | 0.604 |
| DCMC [9] | SPL16 | - | 0.712 | 0.499 | 0.406 | 0.243 | 0.796 | 0.703 | 0.715 | 0.167 | 0.684 | 0.550 | 0.328 | 0.196 | 0.790 | 0.706 |
| NLPR [31] | ECCV14 | - | 0.767 | 0.568 | 0.659 | 0.174 | 0.722 | 0.530 | 0.625 | 0.201 | 0.772 | 0.591 | 0.520 | 0.119 | 0.726 | 0.562 |
| DES [8] | ICIMCS14 | - | 0.733 | 0.659 | 0.668 | 0.280 | 0.421 | 0.413 | 0.165 | 0.448 | 0.735 | 0.582 | 0.583 | 0.301 | 0.383 | 0.341 |
+
+Table 3. Continuation of Table 2
+
| Method | Years | Backbone | SSD \(F_{\beta}\)↑ | SSD \(M\)↓ | STEREO \(E_{\gamma}\)↑ | STEREO \(S_{\lambda}\)↑ | STEREO \(F_{\beta}\)↑ | STEREO \(M\)↓ | LFSD \(E_{\gamma}\)↑ | LFSD \(S_{\lambda}\)↑ | LFSD \(F_{\beta}\)↑ | LFSD \(M\)↓ | RGBD135 \(E_{\gamma}\)↑ | RGBD135 \(S_{\lambda}\)↑ | RGBD135 \(F_{\beta}\)↑ | RGBD135 \(M\)↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours | - | VGG-19 | 0.827 | 0.050 | 0.921 | 0.897 | 0.884 | 0.039 | 0.905 | 0.865 | 0.862 | 0.064 | 0.952 | 0.907 | 0.885 | 0.024 |
| CPFP [46] | CVPR19 | VGG-16 | 0.725 | 0.082 | 0.897 | 0.871 | 0.827 | 0.054 | 0.867 | 0.828 | 0.813 | 0.088 | 0.927 | 0.874 | 0.819 | 0.037 |
| DMRA [32] | ICCV19 | VGG-19 | 0.821 | 0.058 | 0.920 | 0.886 | 0.868 | 0.047 | 0.899 | 0.847 | 0.849 | 0.075 | 0.945 | 0.901 | 0.857 | 0.029 |
| MMCI [6] | PR19 | VGG-16 | 0.748 | 0.082 | 0.890 | 0.856 | 0.812 | 0.080 | 0.840 | 0.787 | 0.779 | 0.132 | 0.899 | 0.847 | 0.750 | 0.064 |
| TANet [5] | TIP19 | VGG-16 | 0.767 | 0.063 | 0.911 | 0.877 | 0.849 | 0.060 | 0.845 | 0.801 | 0.794 | 0.111 | 0.916 | 0.858 | 0.782 | 0.045 |
| PDNet [48] | ICME19 | VGG-16 | 0.716 | 0.115 | 0.903 | 0.874 | 0.833 | 0.064 | 0.872 | 0.845 | 0.824 | 0.109 | 0.915 | 0.868 | 0.800 | 0.050 |
| PCA [4] | CVPR18 | VGG-16 | 0.786 | 0.064 | 0.905 | 0.880 | 0.845 | 0.061 | 0.846 | 0.800 | 0.794 | 0.112 | 0.909 | 0.845 | 0.763 | 0.049 |
| CTMF [18] | TCyb17 | VGG-16 | 0.709 | 0.100 | 0.870 | 0.853 | 0.786 | 0.087 | 0.851 | 0.796 | 0.781 | 0.120 | 0.907 | 0.863 | 0.765 | 0.055 |
| DF [34] | TIP17 | - | 0.709 | 0.151 | 0.844 | 0.763 | 0.761 | 0.142 | 0.841 | 0.786 | 0.810 | 0.142 | 0.801 | 0.685 | 0.566 | 0.130 |
| MB [49] | CAIP17 | - | 0.414 | 0.219 | 0.693 | 0.579 | 0.572 | 0.178 | 0.631 | 0.538 | 0.543 | 0.218 | 0.798 | 0.661 | 0.588 | 0.102 |
| CDCP [50] | ICCVW17 | - | 0.524 | 0.219 | 0.801 | 0.727 | 0.680 | 0.149 | 0.737 | 0.658 | 0.634 | 0.199 | 0.806 | 0.706 | 0.583 | 0.119 |
| DCMC [9] | SPL16 | - | 0.551 | 0.200 | 0.838 | 0.745 | 0.761 | 0.150 | 0.842 | 0.754 | 0.815 | 0.155 | 0.674 | 0.470 | 0.228 | 0.194 |
| NLPR [31] | ECCV14 | - | 0.073 | 0.500 | 0.781 | 0.567 | 0.716 | 0.179 | 0.742 | 0.558 | 0.708 | 0.211 | 0.850 | 0.577 | 0.857 | 0.097 |
| DES [8] | ICIMCS14 | - | 0.684 | 0.168 | 0.451 | 0.473 | 0.223 | 0.417 | 0.475 | 0.440 | 0.228 | 0.415 | 0.786 | 0.627 | 0.689 | 0.289 |
+
+# 4.2 Experimental setup
+
+Evaluation Metrics. To comprehensively evaluate various methods, we adopt 4 evaluation metrics: F-measure $(F_{\beta})$ [1], mean absolute error (M) [3], S-measure $(S_{\lambda})$ [13] and E-measure $(E_{\gamma})$ [14]. Specifically, the F-measure evaluates the overall performance. The MAE represents the average absolute difference between the saliency map and the ground truth. The recently proposed S-measure evaluates structural similarities. The E-measure jointly captures image-level statistics and local pixel-matching information.
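
The two simplest of these metrics can be sketched directly in numpy. Note the simplifying assumption: the F-measure below uses a single fixed threshold, whereas the standard protocol [1] sweeps or adapts the threshold; $\beta^{2} = 0.3$ follows the usual convention.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and its ground truth (both in [0, 1])."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure with beta^2 = 0.3; a single fixed threshold is a
    simplification of the adaptive/multi-threshold protocol."""
    b = pred >= thresh
    tp = float(np.logical_and(b, gt > 0.5).sum())
    precision = tp / max(b.sum(), 1)
    recall = tp / max((gt > 0.5).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

For a map that thresholds to exactly the ground truth, the F-measure is 1 regardless of the remaining soft errors, while the MAE still reflects those soft errors.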
+
+Implementation details. Our method is implemented with the PyTorch toolbox and trained on a PC with a GTX 2080Ti GPU and 16 GB of memory. The input images are uniformly resized to $256^{*}256$. The momentum, weight decay, batch size and learning rate of our network are set to 0.9, 0.0005, 2 and 1e-10, respectively. During training, we use the cross-entropy loss described in Section 3.4, and the network converges after 60 epochs with a mini-batch size of 2.
+
+
+Fig. 5. Illustration of the six ablation experiments
+
+
+Fig. 6. (a) The visualization of the feature maps in the FLM. $B_{i}L_{j}$ denotes the output features of the corresponding block in Fig. 2. (b) Visualization of the effectiveness of the DAM. The $4^{th}$ column (DAM b/f) and the $5^{th}$ column (DAM a/f) show the feature maps before and after adopting the DAM, respectively
+
+# 4.3 Ablation analysis
+
+Effect of FLM. We adopt the common two-stream VGG-19 network fused by direct summation as our baseline (denoted 'B[s]', shown in Fig. 5 (a)). In order to verify the effectiveness of the FLM, we employ the FLM in both the RGB and depth streams ('B+FLM[s]', shown in Fig. 5 (b)). The experimental results of (a) and (b) in Table 4 clearly demonstrate that our FLM obtains impressive performance gains. Moreover, as shown in Fig. 7, we can note that after employing the FLM, the saliency maps exhibit sharper boundaries as well as finer structures. Furthermore, to analyze the working mechanism of the FLM, we visualize the output features of each block in the FLM. As shown in Fig. 6 (a), we can see that branch 4 and branch 3 extract the
+
+Table 4. Ablation analysis on 7 datasets. The [s] and [a] following the modules represent the symmetric and asymmetric architectures, respectively. Obviously, each component of our architecture can achieve considerable accuracy gains. (a), (b), (c), (d), (e), (f) represent the modules indexed by the corresponding letters in Fig. 5
+
+| Components | Index | Modules | DUT-RGBD Fβ↑ | DUT-RGBD M↓ | NJUD Fβ↑ | NJUD M↓ | NLPR Fβ↑ | NLPR M↓ | STEREO Fβ↑ | STEREO M↓ | LFSD Fβ↑ | LFSD M↓ | RGBD135 Fβ↑ | RGBD135 M↓ | SSD Fβ↑ | SSD M↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FLM | (a) | B[s] | 0.822 | 0.068 | 0.810 | 0.071 | 0.766 | 0.050 | 0.816 | 0.068 | 0.812 | 0.088 | 0.794 | 0.044 | 0.749 | 0.080 |
+| | (b) | B+FLM[s] | 0.911 | 0.035 | 0.893 | 0.040 | 0.872 | 0.029 | 0.881 | 0.041 | 0.858 | 0.064 | 0.882 | 0.025 | 0.836 | 0.048 |
+| DAM | (a) | B[s] | 0.822 | 0.068 | 0.810 | 0.071 | 0.766 | 0.050 | 0.816 | 0.068 | 0.812 | 0.088 | 0.794 | 0.044 | 0.749 | 0.080 |
+| | (c) | B+DAM[s] | 0.839 | 0.059 | 0.811 | 0.064 | 0.799 | 0.041 | 0.818 | 0.061 | 0.817 | 0.087 | 0.822 | 0.037 | 0.738 | 0.082 |
+| | (d) | B+FLM[a] | 0.909 | 0.034 | 0.886 | 0.041 | 0.870 | 0.029 | 0.879 | 0.041 | 0.870 | 0.060 | 0.882 | 0.025 | 0.825 | 0.052 |
+| | (e) | B+FLM+DAM[a] | 0.920 | 0.032 | 0.893 | 0.040 | 0.876 | 0.028 | 0.884 | 0.039 | 0.862 | 0.064 | 0.885 | 0.024 | 0.827 | 0.050 |
+| Asymmetric | (b) | B+FLM[s] | 0.911 | 0.035 | 0.893 | 0.040 | 0.872 | 0.029 | 0.881 | 0.041 | 0.858 | 0.064 | 0.882 | 0.025 | 0.836 | 0.048 |
+| | (d) | B+FLM[a] | 0.909 | 0.034 | 0.886 | 0.041 | 0.870 | 0.029 | 0.879 | 0.041 | 0.870 | 0.060 | 0.882 | 0.025 | 0.825 | 0.052 |
+| | (f) | B+FLM+DAM[s] | 0.920 | 0.033 | 0.895 | 0.040 | 0.890 | 0.025 | 0.891 | 0.038 | 0.863 | 0.066 | 0.876 | 0.026 | 0.834 | 0.052 |
+| | (e) | B+FLM+DAM[a] | 0.920 | 0.032 | 0.893 | 0.040 | 0.876 | 0.028 | 0.884 | 0.039 | 0.862 | 0.064 | 0.885 | 0.024 | 0.827 | 0.050 |
+
+
+Fig. 7. Visual comparisons of ablation analyses. (a), (b), (c), (d), (e), (f) represent the visual results of the experiments indexed by the corresponding letters in Fig. 5
+
+
+global location information, while branches 2 and 1 preserve more local detail information. This benefits from the progressive evolution of salient regions toward finer details.
+
+Effect of DAM. We conduct contrast experiments to verify the effectiveness of our DAM on both symmetric and asymmetric architectures. For the symmetric architecture, we replace the simple summation in our baseline with DAM (denoted $\mathrm{B+DAM}[s]$, as shown in Fig. 5 (c)). From the results of (a) and (c) in Table 4, we can see that the MAE is reduced by $18\%$ on the NLPR dataset after employing DAM, which intuitively verifies its effect. Meanwhile, the corresponding visual results in Fig. 7 also illustrate that our DAM can fuse depth features discriminatively and filter out features misled by erroneous depth cues. For the asymmetric architecture, we employ FLM in the RGB stream and replace the VGG-19 backbone in the depth stream with DepthNet (denoted $\mathrm{B+FLM}[a]$, as shown in Fig. 5 (d)). We then adopt DAM on $\mathrm{B+FLM}[a]$ (denoted $\mathrm{B+FLM+DAM}[a]$, shown in Fig. 5 (e)). The comparison of (d) and (e) in Table 4 demonstrates the effectiveness of DAM on the asymmetric architecture over all datasets. Additionally, we visualize the feature maps of our two-stream asymmetric architecture before and after adopting DAM. As shown in Fig. 6 (b), the salient regions are emphasized after adopting DAM, which significantly improves our detection accuracy.
+
+Effect of asymmetric architecture. To illustrate the effectiveness of the asymmetric architecture, we compare the results of (b) and (d) in Fig. 5. Furthermore, for a fair comparison, we adopt our FLM and DAM on the two-stream symmetric network (denoted 'B+FLM+DAM[s]', as shown in Fig. 5 (f)). As we can see from Table 4 (Asymmetric), the asymmetric architecture achieves performance comparable to the symmetric architecture but with a much smaller model: it reduces the model size by $47\%$ (128.9MB vs. 244.4MB). Based on this observation, we conclude that it is unnecessary to use a large network as the RGBNet for feature extraction and that it can be replaced with a more lightweight network.
+
+# 4.4 Comparison with state-of-the-art
+
+Considering that most existing approaches are based on the VGG network, we adopt VGG as our backbone for fair comparisons. We compare our model with 13 RGB-D based salient object detection models, including 8 CNN-based methods: CPFP [46], DMRA [32], MMCI [6], TANet [5], PDNet [48], PCA [4], CTMF [18], DF [34], and 5 traditional methods: MB [49], CDCP [50], DCMC [9], NLPR [31], DES [8]. For fair comparisons, the results of the competing methods are generated by the authors' released code or provided directly by the authors.
+
+Quantitative Evaluation. Tables 2 and 3 show the validation results in terms of the 4 evaluation metrics on 7 datasets. As we can see, our model significantly outperforms all other methods. Notably, our approach surpasses all other methods by a dramatic margin on DUT-RGBD, NJUD and RGBD135, which are considered more challenging datasets due to their many complex scenes, such as similar foreground and background, low contrast, and transparent objects. This further indicates that our model generalizes to various challenging scenes.
+
+Qualitative Evaluation. We also visually compare our method with the most representative methods, as shown in Fig. 4. From those results, we can observe that our saliency maps are closer to the ground truths. For instance, other methods have trouble distinguishing salient objects in complex environments such as cluttered backgrounds (see the $1^{st}$ row), while ours precisely identifies the whole object with exquisite details. Our model also locates and detects the entire salient object with sharp details more accurately than others in more challenging scenes such as low contrast (see the $2^{nd}-3^{rd}$ rows), transparent objects (see the $8^{th}$ row), and multiple or small objects (see the $5^{th}-7^{th}$ rows). These results further verify the effectiveness and robustness of our proposed model.
+
+Table 5. Complexity comparisons on two datasets. The best three results are shown in boldface, red, green fonts respectively
+
+| Methods | Size ↓ | FPS ↑ | DUT-RGBD Fβ↑ | DUT-RGBD M↓ | NLPR Fβ↑ | NLPR M↓ |
+| --- | --- | --- | --- | --- | --- | --- |
+| PCA | 533.6MB | 15 | 0.760 | 0.100 | 0.794 | 0.044 |
+| TANet | 951.9MB | 14 | 0.779 | 0.093 | 0.795 | 0.041 |
+| MMCI | 929.7MB | 19 | 0.753 | 0.113 | 0.729 | 0.059 |
+| PDNet | 192MB | 19 | 0.757 | 0.112 | 0.740 | 0.064 |
+| CPFP | 278MB | 6 | 0.736 | 0.099 | 0.822 | 0.036 |
+| CTMF | 826MB | 50 | 0.792 | 0.097 | 0.723 | 0.056 |
+| DMRA | 238.8MB | 22 | 0.883 | 0.048 | 0.855 | 0.031 |
+| Ours | 128.9MB | 46 | 0.920 | 0.032 | 0.876 | 0.028 |
+
+Complexity Evaluation. We compare the model size and execution time of our method with those of 7 other representative models, as shown in Table 5. Our method achieves the smallest model size and the second-highest FPS. Specifically, our architecture is only 128.9MB, about $2/3$ of the previously smallest model (PDNet). Compared with the best performing method, DMRA, our architecture reduces the model size by $46\%$ and boosts the FPS by $109\%$. Besides, we achieve a high running speed of 46 frames per second (FPS) compared with the representative approaches.
+
+# 5 Conclusion
+
+In this paper, we propose an asymmetric two-stream architecture that takes account of the inherent differences between RGB and depth data for saliency detection. For the RGB stream, we introduce a flow ladder module (FLM) that effectively extracts rich global context information while preserving local saliency details. We also design a lightweight DepthNet for the depth stream with a small model size of 6.7MB. Besides, we propose a depth attention module (DAM) to ensure that depth cues discriminatively guide the RGB features for precisely locating salient objects. Our approach significantly advances the state-of-the-art on the widely used datasets and is capable of precisely capturing salient regions in challenging scenes.
+
+Acknowledgement. This work was supported by the Science and Technology Innovation Foundation of Dalian (2019J12GX034), the National Natural Science Foundation of China (61976035), and the Fundamental Research Funds for the Central Universities (DUT19JC58, DUT20JC42).
+
+# References
+
+1. Achanta, R., Hemami, S.S., Estrada, F.J., Süsstrunk, S.: Frequency-tuned salient region detection. In: CVPR. pp. 1597-1604 (2009) 4.2
+2. Borji, A., Frintrop, S., Sihite, D.N., Itti, L.: Adaptive object tracking by learning background context. In: CVPR. pp. 23-30 (2012), https://academic.microsoft.com/paper/2158535435 1
+3. Borji, A., Sihite, D.N., Itti, L.: Salient object detection: a benchmark. In: ECCV. pp. 414-429 (2012) 4.2
+4. Chen, H., Li, Y.: Progressively complementarity-aware fusion network for rgb-d salient object detection. In: CVPR. pp. 3051-3060 (2018) 1, 2, 4.1, 2, 3, 4.4
+5. Chen, H., Li, Y.: Three-stream attention-aware network for rgb-d salient object detection. TIP 28(6), 2825-2835 (2019) 2, 2, 3, 4.4
+6. Chen, H., Li, Y., Su, D.: Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for rgb-d salient object detection. PR 86, 376-385 (2019) 1, 2, 4.1, 2, 3, 4.4
+7. Cheng, M.M., Hou, Q.B., Zhang, S.H., Rosin, P.L.: Intelligent visual media processing: When graphics meets vision. JCST 32(1), 110-121 (2017), https://academic.microsoft.com/paper/2571295082 1
+8. Cheng, Y., Fu, H., Wei, X., Xiao, J., Cao, X.: Depth enhanced saliency detection method. In: ICIMCS. pp. 23-27 (2014) 2, 4.1, 2, 3, 4.4
+9. Cong, R., Lei, J., Zhang, C., Huang, Q., Cao, X., Hou, C.: Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion. SPL 23(6), 819-823 (2016) 2, 2, 3, 4.4
+10. Dai, J., Li, Y., He, K., Sun, J.: R-fcn: object detection via region-based fully convolutional networks. In: NIPS. pp. 379-387 (2016), https://academic.microsoft.com/paper/2407521645 1
+11. Desingh, K., K, M.K., Rajan, D., Jawahar, C.V.: Depth really matters: Improving visual salient region detection with depth. In: BMVC (2013) 2
+12. Donoser, M., Urschler, M., Hirzer, M., Bischof, H.: Saliency driven total variation segmentation. In: ICCV. pp. 817-824 (2009), https://academic.microsoft.com/paper/2546160422 1
+13. Fan, D.P., Cheng, M.M., Liu, Y., Li, T., Borji, A.: Structure-measure: A new way to evaluate foreground maps. In: ICCV. pp. 4558-4567 (2017), https://academic.microsoft.com/paper/2963868681 4.2
+14. Fan, D.P., Gong, C., Cao, Y., Ren, B., Cheng, M.M., Borji, A.: Enhanced-alignment measure for binary foreground map evaluation. In: IJCAI. pp. 698-704 (2018) 4.2
+15. Fan, D.P., Wang, J., Liang, X.M.: Improving image retrieval using the context-aware saliency areas. AMM 734, 596-599 (2015), https://academic.microsoft.com/paper/2090323693 1
+16. Feng, M., Lu, H., Ding, E.: Attentive feedback network for boundary-aware salient object detection. In: CVPR. pp. 1623-1632 (2019), https://academic.microsoft.com/paper/2948510860 1, 2
+17. Gao, Y., Wang, M., Tao, D., Ji, R., Dai, Q.: 3-d object retrieval and recognition with hypergraph analysis. TIP 21(9), 4290-4303 (2012), https://academic.microsoft.com/paper/2068078373 1
+18. Han, J., Chen, H., Liu, N., Yan, C., Li, X.: Cnns-based rgb-d saliency detection via cross-view transfer and multiview fusion. TSMC 48(11), 3171-3183 (2018) 1, 2, 4.1, 2, 3, 4.4
+19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770-778 (2016) 3.3
+20. Hong, S., You, T., Kwak, S., Han, B.: Online tracking by learning discriminative saliency map with convolutional neural network. In: ICML. pp. 597-606 (2015), https://academic.microsoft.com/paper/1854404533 1
+21. Hou, Q., Cheng, M.M., Hu, X., Borji, A., Tu, Z., Torr, P.H.S.: Deeply supervised salient object detection with short connections. In: CVPR. vol. 41, pp. 815-828 (2017) 1, 1
+22. Hou, Q., Cheng, M.M., Hu, X., Borji, A., Tu, Z., Torr, P.H.S.: Deeply supervised salient object detection with short connections. TPAMI 41(4), 815-828 (2019), https://academic.microsoft.com/paper/2569272946 1, 3.3
+23. Ju, R., Ge, L., Geng, W., Ren, T., Wu, G.: Depth saliency based on anisotropic center-surround difference. In: ICIP. pp. 1115-1119 (2014) 4.1
+24. Lang, C., Nguyen, T.V., Katti, H., Yadati, K., Kankanhalli, M.S., Yan, S.: Depth matters: influence of depth cues on visual saliency. In: ECCV. pp. 101-115 (2012) 2
+25. Li, G., Zhu, C.: A three-pathway psychobiological framework of salient object detection using stereoscopic technology. In: ICCVW. pp. 3008-3014 (2017), https://academic.microsoft.com/paper/2766315367 4.1
+26. Li, G., Yu, Y.: Deep contrast learning for salient object detection. In: CVPR. pp. 478-487 (2016) 1
+27. Li, N., Ye, J., Ji, Y., Ling, H., Yu, J.: Saliency detection on light field. PAMI 39(8), 1605-1616 (2017) 4.1
+28. Liu, G., Fan, D.: A model of visual attention for natural image retrieval. In: ISCC-C. pp. 728-733 (2013), https://academic.microsoft.com/paper/23147078291
+29. Liu, N., Han, J.: Dhsnet: Deep hierarchical saliency network for salient object detection. In: CVPR. pp. 678-686 (2016) 2
+30. Niu, Y., Geng, Y., Li, X., Liu, F.: Leveraging stereopsis for saliency analysis. In: CVPR. pp. 454-461 (2012) 4.1
+31. Peng, H., Li, B., Xiong, W., Hu, W., Ji, R.: RGBd salient object detection: A benchmark and algorithms. In: ECCV. pp. 92-109 (2014) 2, 4.1, 2, 3, 4.4
+32. Piao, Y., Ji, W., Li, J., Zhang, M., Lu, H.: Depth-induced multi-scale recurrent attention network for saliency detection. In: ICCV (2019) 1, 1, 2, 4.1, 2, 3, 4.4
+33. Qin, X., Zhang, Z., Huang, C., Gao, C., Dehghan, M., Jagersand, M.: Basnet: Boundary-aware salient object detection. In: CVPR. pp. 7479-7489 (2019), https://academic.microsoft.com/paper/2961348656 2
+34. Qu, L., He, S., Zhang, J., Tian, J., Tang, Y., Yang, Q.: Rgbd salient object detection via deep fusion. TIP 26(5), 2274-2285 (2017) 2, 2, 3, 4.4
+35. Ren, J., Gong, X., Yu, L., Zhou, W., Yang, M.Y.: Exploiting global priors for rgb-d saliency detection. In: CVPRW. pp. 25-32 (2015) 1, 2
+36. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. TPAMI 39(6), 1137-1149 (2017), https://academic.microsoft.com/paper/639708223 1
+37. Ren, Z., Gao, S., Chia, L.T., Tsang, I.W.H.: Region-based saliency detection and its application in object recognition. TCSVT 24(5), 769-779 (2014), https://academic.microsoft.com/paper/2055180303 1
+38. Smeulders, A.W.M., Chu, D.M., Cucchiara, R., Calderara, S., Dehghan, A., Shah, M.: Visual tracking: An experimental survey. TPAMI 36(7), 1442-1468 (2014), https://academic.microsoft.com/paper/2126302311 1
+39. Wang, W., Shen, J., Porikli, F.: Saliency-aware geodesic video object segmentation. In: CVPR. pp. 3395-3402 (2015) 1
+
+40. Wang, W., Shen, J., Sun, H., Shao, L.: Video co-saliency guided co-segmentation. TCSVT 28(8), 1727-1736 (2018), https://academic.microsoft.com/paper/2887503470 1
+41. Wu, R., Feng, M., Guan, W., Wang, D., Lu, H., Ding, E.: A mutual learning method for salient object detection with intertwined multi-supervision. In: CVPR. pp. 8150-8159 (2019), https://academic.microsoft.com/paper/2962680827 1
+42. Zhang, M., Ji, W., Piao, Y., Li, J., Zhang, Y., Xu, S., Lu, H.: Lfnet: Light field fusion network for salient object detection. IEEE Transactions on Image Processing 29, 6276-6287 (2020) 1
+43. Zhang, M., Li, J., Ji, W., Piao, Y., Lu, H.: Memory-oriented decoder for light field salient object detection. In: NeurIPS 2019 : Thirty-third Conference on Neural Information Processing Systems. pp. 898-908 (2019) 1
+44. Zhang, P., Wang, D., Lu, H., Wang, H., Ruan, X.: Amulet: Aggregating multi-level convolutional features for salient object detection. In: ICCV. pp. 202-211 (2017), https://academic.microsoft.com/paper/2963032190 2
+45. Zhang, X., Wang, T., Qi, J., Lu, H., Wang, G.: Progressive attention guided recurrent network for salient object detection. In: CVPR (2018) 2
+46. Zhao, J.X., Cao, Y., Fan, D.P., Cheng, M.M., Li, X.Y., Zhang, L.: Contrast prior and fluid pyramid integration for rgbd salient object detection. In: CVPR. pp. 3927-3936 (2019) 1, 2, 2, 3, 4.4
+47. Zhao, R., Ouyang, W., Li, H., Wang, X.: Saliency detection by multi-context deep learning. In: CVPR. pp. 1265-1274 (2015) 2
+48. Zhu, C., Cai, X., Huang, K., Li, T.H., Li, G.: Pdnet: Prior-model guided depth-enhanced network for salient object detection. In: ICME (2019) 1, 2, 2, 3, 4.4
+49. Zhu, C., Li, G., Guo, X., Wang, W., Wang, R.: A multilayer backpropagation saliency detection algorithm based on depth mining. In: CAIP. pp. 14-23 (2017) 2, 2, 3, 4.4
+50. Zhu, C., Li, G., Wang, W., Wang, R.: An innovative salient object detection using center-dark channel prior. In: ICCVW. pp. 1509-1515 (2017) 2, 2, 3, 4.4
\ No newline at end of file
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/images.zip b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9fdd26bf3be4df86910dc3cc1ac607625a294ac5
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d28cc84f47785db1ad167d243098610188e7f3809695dd9ced7f63dcbe8cd1d
+size 737588
diff --git a/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/layout.json b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b334b134b4fa132b3a7869e9fe5abce9e8c87641
--- /dev/null
+++ b/asymmetrictwostreamarchitectureforaccuratergbdsaliencydetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eac809b36495b818846f28b8d40c30675bd9b1730ce76e6d5b5004e82a8b3ff0
+size 420855
diff --git a/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_content_list.json b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a4bd046035884bc8b60c157b41ea7166501f6d28
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f082bce5dcff7ec75bfcb56d01f91eabc7e2c68a49ce82bc914eaaf263e737c7
+size 79525
diff --git a/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_model.json b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..48c3fb16d7f6cb2bdcab3e9502f6f6b3545b732d
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ee562f40116b5361764f980503ba36bded12547d3d161d36f1994c224ee69bc
+size 97716
diff --git a/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_origin.pdf b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c13aef533d684caac8ebab92858f304c9a0f10a8
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/64a93f36-6c68-4c6a-8b36-4bf5b32470b4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f833bcde2b74904f2b27b09ac8b326c32f143c896a6941791e751ccbe3d24495
+size 8388529
diff --git a/asynchronousinteractionaggregationforactiondetection/full.md b/asynchronousinteractionaggregationforactiondetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..92e3eaba8ca6cc35e70feeaf198fd4b52e4d8d20
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/full.md
@@ -0,0 +1,317 @@
+# Asynchronous Interaction Aggregation for Action Detection
+
+Jiajun Tang*, Jin Xia*, Xinzhi Mu, Bo Pang, and Cewu Lu†
+
+Shanghai Jiao Tong University, China
+yelantingfeng@sjtu.edu.cn, ga.xiajin@gmail.com, {draconids, pangbo, lucewu}@sjtu.edu.cn
+
+Abstract. Understanding interaction is an essential part of video action detection. We propose the Asynchronous Interaction Aggregation network (AIA) that leverages different interactions to boost action detection. There are two key designs in it: one is the Interaction Aggregation structure (IA), which adopts a uniform paradigm to model and integrate multiple types of interaction; the other is the Asynchronous Memory Update algorithm (AMU), which enables us to achieve better performance by modeling very long-term interaction dynamically without huge computation cost. We provide empirical evidence that our network gains notable accuracy from the integrative interactions and is easy to train end-to-end. Our method reports new state-of-the-art performance on the AVA dataset, with a 3.7 mAP gain (12.6% relative improvement) on the validation split compared to our strong baseline. Results on the UCF101-24 and EPIC-Kitchens datasets further illustrate the effectiveness of our approach. Source code will be made public at: https://github.com/MVIG-SJTU/AlphaAction.
+
+Keywords: Action Detection $\cdot$ Video Understanding $\cdot$ Interaction $\cdot$ Memory
+
+# 1 Introduction
+
+The task of action detection (spatio-temporal action localization) aims at detecting and recognizing actions in space and time. As an essential task of video understanding, it has a variety of applications such as abnormal behavior detection and autonomous driving. On top of spatial representations and temporal features [21,27,3,10], interaction relationships [13,39,47,29] are crucial for understanding actions. Take Figure 1 for example: the appearance of the man, the tea cup, as well as the previous movement of the woman help to predict the action of the woman. In this paper, we propose a new framework that emphasizes interactions for action detection.
+
+
+Fig.1: Interaction Aggregation. In this target frame, we can tell that the woman is serving tea to the man from the following clues: (1) She is close to the man. (2) She puts down the tea cup before the man. (3) She prepared the tea a few seconds ago. These three clues correspond respectively to person-person, person-object and temporal interactions
+
+Interactions can be briefly considered as the relationship between the target person and the context. Many existing works try to explore interactions in videos, but there are two problems with current methods: (1) Previous methods such as [13,15] focus on a single type of interaction (e.g. person-object), so they can only boost one specific kind of action. Methods such as [46] intend to merge different interactions, but model them separately, so information from one interaction cannot contribute to modeling another. How to find interactions correctly in video and use them for action detection remains challenging. (2) Long-term temporal interaction is important but hard to track. Methods that use temporal convolution [21,27,10] have very limited temporal receptive fields due to resource constraints. Methods such as [41] require a duplicated feature-extraction pre-process, which is not practical in reality.
+
+In this work, we propose a new framework, the Asynchronous Interaction Aggregation network (AIA), which explores three kinds of interactions (person-person, person-object, and temporal interaction) that cover nearly all kinds of person-context interactions in video. As a first attempt, AIA makes them work cooperatively in a hierarchical structure to capture higher-level spatio-temporal features and more precise attentions. There are two main designs in our network: the Interaction Aggregation (IA) structure and the Asynchronous Memory Update (AMU) algorithm.
+
+The former design, the IA structure, explores and integrates all three types of interaction in a deep structure. More specifically, it consists of multiple elemental interaction blocks, each of which enhances the target features with one type of interaction. These three types of interaction blocks are nested along the depth of the IA structure, so one block may use the results of previous interaction blocks. Thus, the IA structure is able to model interactions precisely using information across different types.
+
+Jointly training with long memory features is infeasible due to the large size of video data. The AMU algorithm is therefore proposed to estimate intractable features during training. We adopt a memory-like structure to store the spatial features and propose a series of write-read algorithm to update the content in memory: features extracted from target clips at each iteration are written to a memory pool and they can be retrieved in subsequent iterations to model temporal interaction. This effective strategy enables us to train the whole network in an end-to-end manner and the computational complexity doesn't increase linearly with the length of temporal memory features. In comparison to previous solution [41] that extracted features in advance, the AMU is much simpler and achieves better performance.
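The write-read cycle can be sketched as follows (a hypothetical minimal pool keyed by clip index; the real algorithm additionally handles feature staleness and gradient detachment, which this sketch omits):

```python
class MemoryPool:
    """Minimal sketch of an AMU-style feature pool: each iteration writes
    the fresh person features of the current clip, and reads back the
    features of neighbouring clips to serve as long-term memory."""

    def __init__(self):
        self.pool = {}  # clip index -> person features of that clip

    def write(self, clip_idx, person_features):
        self.pool[clip_idx] = person_features  # overwrite stale features

    def read(self, clip_idx, window):
        # Clips not yet visited during training yield an empty placeholder.
        return [self.pool.get(t, [])
                for t in range(clip_idx - window, clip_idx + window + 1)]
```

Because `write` runs on every iteration's freshly extracted features, the pool's contents improve as the backbone improves, without re-extracting features for the whole video.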
+
+In summary, our key contributions are: (1) a deep IA structure that integrates a diversity of person-context interactions for robust action detection, and (2) an AMU algorithm to estimate the memory features dynamically. We perform an extensive ablation study on the AVA [17] dataset for the spatio-temporal action localization task. Our method shows a huge boost in performance, yielding the new state-of-the-art on both the validation and test sets. We also test our method on the UCF101-24 [32] dataset and a segment-level action recognition dataset, EPIC-Kitchens [6]. The results further validate its generality.
+
+# 2 Related Works
+
+Video Classification. Various 3D CNN models [21,34,33,36] have been developed to handle video input. To leverage huge image datasets, I3D [3] was proposed to benefit from ImageNet [7] pre-training. In [27,8,35,44,4], the 3D kernels of the above models are simulated with separate temporal and spatial filters, which significantly decreases the model size. Previous two-stream methods [30,11] use optical flow to extract motion information, while the recent SlowFast [10] manages to do so using only RGB frames at different sample rates.
+
+Spatio-temporal Action Detection. Action detection is more difficult than action classification because the model needs to not only predict the action labels but also localize the action in time and space. Most recent approaches [17,12,10,19,42] follow object detection frameworks [14,28] by classifying the features generated from the detected bounding boxes. In contrast to our method, their results depend only on the cropped features, while all other information is discarded and contributes nothing to the final prediction.
+
+Attention Mechanism for Videos. The transformer [37] consists of several stacked self-attention layers and fully connected layers. Non-Local [38] observes that the previous self-attention model can be viewed as a form of the classical computer vision method of non-local means [2]; hence a generic non-local block [38] is introduced. This structure enables models to compute a response by relating features at different times or positions, which makes the attention mechanism applicable to video-related tasks like action classification. The non-local block also plays an important role in [41], where the model references information from the long-term feature bank via a non-local feature bank operator.
+
+
+Fig.2: Pipeline of the proposed AIA. a. We crop features of persons and objects from the extracted video features. b. Person features, object features and memory features from the feature pool $\Omega$ in c are fed to IA in order to integrate multiple interactions. The output of IA is passed to the final classifier for predictions. c. Our AMU algorithm reads memory features from feature pool and writes fresh person features to it
+
+# 3 Proposed Method
+
+In this section, we will describe our method that localizes actions in space and time. Our approach aims at modeling and aggregating various interactions to achieve better action detection performance. In Section 3.1, we describe two important types of instance level features in short clips and the memory features in long videos. In Section 3.2, the Interaction Aggregation structure (IA) is explored to gather knowledge of interactions. In Section 3.3, we introduce the Asynchronous Memory Update algorithm (AMU) to alleviate the problem of heavy computation and memory consumption in temporal interaction modeling. The overall pipeline of our method is demonstrated in Figure 2.
+
+# 3.1 Instance Level and Temporal Memory Features
+
+To model interactions in video, we need to correctly find what the queried person is interacting with. Previous works such as [38] calculate interactions among all the pixels in the feature map. Being computationally expensive, these brute-force methods struggle to learn interactions among pixels due to the limited size of video datasets. We therefore consider how to obtain concentrated interacted features. We observe that persons are often interacting with concrete objects and other persons; therefore, we extract object and person embeddings as the instance-level features. In addition, video frames are usually highly correlated, so we keep the long-term person features as the memory features.
+
+Instance-level features are cropped from the video features. Since processing the whole long video at once is impossible, we split it into consecutive short video clips
+
+$[v_{1}, v_{2}, \ldots, v_{T}]$ . The $d$ -dimensional features of the $t^{th}$ clip $v_{t}$ are extracted using a video backbone model: $f_{t} = \mathcal{F}(v_{t}, \phi_{\mathcal{F}})$ , where $\phi_{\mathcal{F}}$ denotes the parameters.
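The clip splitting above can be sketched as follows (an illustrative sketch assuming non-overlapping, fixed-length clips; the actual clip length and stride are design choices not fixed by this description):

```python
def split_into_clips(frames, clip_len):
    """Split a long frame sequence into consecutive clips [v_1, ..., v_T]
    of clip_len frames each (a trailing partial clip is dropped)."""
    return [frames[i:i + clip_len]
            for i in range(0, len(frames) - clip_len + 1, clip_len)]
```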
+
+A detector is applied on the middle frame of $v_{t}$ to obtain person boxes and object boxes. Based on the detected bounding boxes, we apply RoIAlign [18] to crop the person and object features from the extracted features $f_{t}$ . The person and object features in $v_{t}$ are denoted as $P_{t}$ and $O_{t}$ , respectively.
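As a rough intuition for this cropping step (RoIAlign itself bilinearly samples at fractional coordinates; this integer-crop sketch only conveys the idea of turning a box into a pooled feature):

```python
def crop_and_pool(fmap, box):
    """Crop an integer box (x1, y1, x2, y2) from a 2-D feature map
    (a list of rows) and average-pool it to a single value."""
    x1, y1, x2, y2 = box
    vals = [v for row in fmap[y1:y2] for v in row[x1:x2]]
    return sum(vals) / len(vals)
```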
+
+One clip is only a short session and misses the temporal global semantics. In order to model the temporal interaction, we keep track of memory features. The memory features consist of the person features of consecutive clips: $M_{t} = [P_{t - L},\dots ,P_{t},\dots ,P_{t + L}]$ , where $(2L + 1)$ is the size of the clip-wise receptive field. In practice, a fixed number of persons is sampled from each neighboring clip.
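Building $M_t$ can be sketched as below (illustrative; the placeholder padding and the per-clip sample count `n` are assumptions, since the text only states that a certain number of persons is sampled per neighboring clip):

```python
def build_memory(person_feats_by_clip, t, L, n):
    """M_t = [P_{t-L}, ..., P_t, ..., P_{t+L}] with exactly n person
    features per clip: truncate when more, pad when fewer or missing."""
    memory = []
    for i in range(t - L, t + L + 1):
        feats = list(person_feats_by_clip.get(i, []))[:n]
        feats += ["<pad>"] * (n - len(feats))
        memory.append(feats)
    return memory
```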
+
+The three features above have semantic meaning and contain concentrated information to recognize actions. With these three features, we are now able to model semantic interactions explicitly.
+
+# 3.2 Interaction Modeling and Aggregation
+
+How do we leverage these extracted features? For a target person, there are multiple detected objects and persons. The main challenge is how to correctly pay more attention to the objects or persons that the target person is interacting with. In this section, we first introduce our Interaction Block, which can adaptively model each type of interaction in a uniform structure. We then describe our Interaction Aggregation (IA) structure, which aggregates multiple interactions.
+
+Overview. Given the person features $P_{t}$ , object features $O_{t}$ and memory features $M_{t}$ , the proposed IA structure outputs action features $A_{t} = \mathcal{E}(P_{t},O_{t},M_{t},\phi_{\mathcal{E}})$ , where $\phi_{\mathcal{E}}$ denotes the parameters of the IA structure. $A_{t}$ is then passed to the classifier for final predictions.
+
+The hierarchical IA structure consists of multiple interaction blocks, each tailored for a single type of interaction. The interaction blocks are deeply nested with one another to efficiently integrate different interactions, yielding higher-level features and more precise attention.
+
+Interaction Block. The structure of the interaction block is adapted from the Transformer block originally proposed in [37], with a design that basically follows [38,41]. Briefly, one of the two inputs serves as the query and the other is mapped to key and value. Through dot-product attention, whose weights are the output of the softmax layer in Figure 3a, the block selects value features that are highly correlated with the query features and merges them to enhance the query features. There are three types of interaction blocks in our design: the P-Block, the O-Block and the M-Block.
+
+- $P$ -Block: The P-Block models person-person interactions within the same clip. It is helpful for recognizing actions like listening and talking. Since the query input is already the (possibly enhanced) person features, we take the key/value input to be the same as the query input.
+
+- $O$ -Block: In the O-Block, we aim to distill person-object interactions such as pushing and carrying an object. Our key/value input is the detected object
+
+
+Fig. 3: Interaction Block and IA structure. (a) The O-Block: the query input is the feature of the target person and the key/value input is the feature of objects; the P-Block and M-Block are similar. (b) Serial IA. (c) Dense Serial IA.
+
+features $O_{t}$ . When too many objects are detected, we sample them based on their detection scores. Figure 3a illustrates the O-Block.
+
+- $M$ -Block: Some actions have strong logical connections along the temporal dimension, such as opening and closing. We model this type of interaction as a temporal interaction. To capture it, we take the memory features $M_{t}$ as the key/value input of an M-Block.
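The shared core of the three blocks can be sketched as plain dot-product attention. Layer normalization, dropout and the feed-forward sublayer of the full Transformer-style block are omitted, and all weights and shapes below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interaction_block(query, key_value, Wq, Wk, Wv):
    """Dot-product attention at the core of a P/O/M-Block.

    query:     (Nq, d) target-person features.
    key_value: (Nk, d) person (P-Block), object (O-Block) or
               memory (M-Block) features.
    The softmax output is the attention visualized in Fig. 4.
    """
    Q, K, V = query @ Wq, key_value @ Wk, key_value @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (Nq, Nk) attention map
    return query + attn @ V                          # residual enhancement

d = 64
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
P_t = rng.standard_normal((2, d))    # two persons in the clip
O_t = rng.standard_normal((4, d))    # four detected objects
enhanced = interaction_block(P_t, O_t, Wq, Wk, Wv)   # an O-Block step
```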
+
+Interaction Aggregation Structure. The interaction blocks extract three types of interactions. We now discuss IA structures that integrate these different interactions: a naive parallel IA, our serial IA and its dense serial extension. For clarity, we use $\mathcal{P}$ , $\mathcal{O}$ , and $\mathcal{M}$ to represent the P-Block, O-Block, and M-Block respectively.
+
+- Parallel IA: A naive approach is to model different interactions separately and merge them at the end. As displayed in Figure 4a, each branch follows a structure similar to [13] and treats one type of interaction without knowledge of the others. We argue that this parallel structure struggles to locate interactions precisely. We illustrate the attention of the last P-Block in Figure 4c by displaying the output of its softmax layer for different persons. The target person is apparently watching and listening to the man in red; however, the P-Block pays similar attention to both men.
+
+- Serial IA: Knowledge shared across different interactions is helpful for recognizing them. We propose the serial IA to aggregate different types of interactions. As shown in Figure 3b, different types of interaction blocks are stacked in sequence: the queried features are enhanced in one interaction block and then passed to an interaction block of a different type. Figures 4f and 4g demonstrate the advantage of serial IA: the first P-Block cannot distinguish the importance of the man on the left from that of the man in the middle. After gaining knowledge from the O-Block and M-Block, the second P-Block is able to pay more attention to
+
+
+Fig. 4: We visualize attention by displaying the output of the softmax layer in the P-Block. (e) Serial IA. (f) Attention in the $1^{\mathrm{st}}$ P-Block in serial IA. (g) Attention in the $2^{\text{nd}}$ P-Block in serial IA. The original output contains attention to zero-padded persons; we remove these meaningless entries and normalize the remaining attention to sum to 1.
+
+the man on the left, who is talking to the target person. Compared to the attention in parallel IA (Figure 4c), our serial IA is better at finding interactions.
+
+- Dense Serial IA: In the above structures, the connections between interaction blocks are entirely hand-designed, and the input of an interaction block is simply the output of another. We expect the model to further learn by itself which interaction features to take. With this in mind, we propose the dense serial IA extension, in which each interaction block takes all outputs of the preceding blocks and aggregates them with learnable weights. Formally, the query of the $i^{th}$ block can be represented as
+
+$$
+Q_{t,i} = \sum_{j \in \mathbf{C}} W_{j} \odot E_{t,j}, \tag{1}
+$$
+
+where $\odot$ denotes element-wise multiplication, $\mathbf{C}$ is the set of indices of the preceding blocks, $W_{j}$ is a learnable $d$ -dimensional vector normalized with a softmax function over $\mathbf{C}$ , and $E_{t,j}$ is the enhanced output feature of the $j^{th}$ block. Dense serial IA is illustrated in Figure 3c.
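A minimal sketch of Eq. (1), assuming the learnable weights are stored as per-channel logits and softmax-normalized across the preceding blocks (names and shapes are illustrative):

```python
import numpy as np

def dense_serial_query(outputs, logits):
    """Eq. (1): Q_{t,i} = sum_j W_j ⊙ E_{t,j}.

    outputs: list of (N, d) enhanced features E_{t,j} from preceding blocks.
    logits:  (len(outputs), d) learnable parameters; the weights W_j are
             their softmax across the block axis, one weight per channel.
    """
    W = np.exp(logits - logits.max(axis=0, keepdims=True))
    W = W / W.sum(axis=0, keepdims=True)        # softmax over blocks, per dim
    return sum(W[j] * E for j, E in enumerate(outputs))

d = 8
E = [np.random.rand(3, d) for _ in range(2)]    # outputs of two previous blocks
logits = np.zeros((2, d))                       # equal logits -> weights 0.5 each
Q = dense_serial_query(E, logits)               # equals 0.5 * (E[0] + E[1])
```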
+
+# 3.3 Asynchronous Memory Update Algorithm
+
+Long-term memory features can provide useful temporal semantics that aid action recognition. Imagine a scene in which a person opens a bottle cap, drinks water, and finally closes the cap: the subtle opening and closing movements are hard to detect on their own, but given the context of drinking water, they become much easier to recognize.
+
+
+Fig. 5: Joint training with memory features is restricted by limited hardware resources. In this minor experiment, we take a 32-frame video clip with $256 \times 340$ resolution as input; the backbone is ResNet-50. During joint training (yellow line), rapidly growing GPU memory and computation time restrict the length of the memory features to a very small value (8 in this experiment). With larger inputs or deeper backbones, this problem becomes more serious. Our method (cyan line) does not have this problem.
+
+Resource Challenge. To capture more temporal information, we want $M_t$ to gather features from a sufficient number of clips; however, using more clips increases computation and memory consumption dramatically. As depicted in Figure 5, under joint training the memory usage and computation grow rapidly as the temporal length of $M_t$ increases. To train on one target person, we must propagate $(2L + 1)$ video clips forward and backward at once, which consumes much more time and, even worse, cannot exploit sufficiently long-term information due to limited GPU memory.
+
+Insight. The previous work [41] pre-trains a duplicate backbone to extract memory features, thereby avoiding this problem. However, this approach relies on frozen memory features, whose representation power cannot improve as training proceeds. We want the memory features to be updated dynamically and to benefit from the parameter updates during training. We therefore propose the asynchronous memory update method, which generates effective dynamic long-term memory features while keeping training lightweight. The training procedure is presented in Algorithm 1.
+
+A naive design would be to pass all clips forward to obtain memory features and backpropagate only the current clip to compute gradients. This alleviates the memory issue but remains slow to train. We could also try to reuse memory features as in Transformer-XL [5], but this requires training along the sequence direction and thus cannot access future information.
+
+Inspired by [40], our algorithm is composed of a memory component, the memory pool $\Omega$ , and two basic operations, $READ$ and $WRITE$ . The memory pool $\Omega$ records memory features. Each feature $\hat{P}_t^{(i)}$ in this pool is an estimated
+
+Algorithm 1 Training with asynchronous memory update
+Input: Video dataset $\mathbf{V} = \{v^{(1)},v^{(2)},\dots,v^{(|\mathbf{V}|)}\}$ with $v^{(i)} = [v_1^{(i)},v_2^{(i)},\dots,v_{T_i}^{(i)}]$ ; The whole network $\mathcal{N}$ , with its parameter $\phi_{\mathcal{N}}$ ;
+Output: Optimized network $\mathcal{N}$ with $\phi_{\mathcal{N}}$ for inference.
+// Initialization:
+1: $\Omega = \{(\hat{P}_t^{(i)}\gets \mathrm{zero~vectors},\delta_t^{(i)}\gets 0)\mid \forall t,i\}$ .
+2: err $\leftarrow \infty$
+// Training Process:
+3: for iter = 1 to itermax do
+4: Sample a video clip $v_{t}^{(i)}$ from dataset $\mathbf{V}$ .
+5: for $t^{\prime} = t - L$ to $t + L$ do
+6: if $t^{\prime}\neq t$ then
+7: READ $\hat{P}_{t'}^{(i)}$ and $\delta_{t'}^{(i)}$ from memory pool $\Omega$ .
+8: $w_{t'}^{(i)} = \min \{err / \delta_{t'}^{(i)},\delta_{t'}^{(i)} / err\}$ .
+9: Impose penalty: $\hat{P}_{t'}^{(i)}\gets w_{t'}^{(i)}\hat{P}_{t'}^{(i)}$ .
+10: end if
+11: end for
+12: Extract $P_{t}^{(i)}$ and $O_{t}^{(i)}$ with the backbone in $\mathcal{N}$ .
+13: Estimated memory features: $\hat{M}_t^{(i)}\gets [\hat{P}_{t - L}^{(i)},\dots,\hat{P}_{t - 1}^{(i)},P_t^{(i)},\hat{P}_{t + 1}^{(i)},\dots,\hat{P}_{t + L}^{(i)}]$ .
+14: Forward $(P_t^{(i)},O_t^{(i)},\hat{M}_t^{(i)})$ with the head in $\mathcal{N}$ and backward to optimize $\phi_{\mathcal{N}}$ .
+15: Update err as the output of current loss function.
+16: WRITE $\hat{P}_t^{(i)}\gets P_t^{(i)},\delta_t^{(i)}\gets err$ back to $\Omega$ .
+17: end for
+18: return $\mathcal{N}$ , $\phi_{\mathcal{N}}$
+
+value and tagged with a loss value $\delta_t^{(i)}$ . This loss value $\delta_t^{(i)}$ logs the convergence state of the whole network. Two basic operations are invoked at each iteration of training:
+
+- READ: At the beginning of each iteration, given a video clip $v_{t}^{(i)}$ from the $i^{th}$ video, the estimated memory features around the target clip, namely $[\hat{P}_{t - L}^{(i)},\dots,\hat{P}_{t - 1}^{(i)}]$ and $[\hat{P}_{t + 1}^{(i)},\dots,\hat{P}_{t + L}^{(i)}]$ , are read from the memory pool $\Omega$ .
+
+- WRITE: At the end of each iteration, the person features of the target clip $P_{t}^{(i)}$ are written back to the memory pool $\Omega$ as estimated memory features $\hat{P}_{t}^{(i)}$ , tagged with the current loss value.
+
+- Reweighting: The features we READ were written at different training steps, so some early-written features were extracted by a model whose parameters differ substantially from the current ones. We therefore apply a penalty factor $w_{t'}^{(i)}$ to discount badly estimated features. We design a simple yet effective way to compute this penalty factor using the loss tag: based on the difference between the loss tag $\delta_{t'}^{(i)}$ and the current loss value, we set
+
+$$
+w_{t'}^{(i)} = \min \left\{ \mathrm{err} / \delta_{t'}^{(i)},\ \delta_{t'}^{(i)} / \mathrm{err} \right\}, \tag{2}
+$$
+
+which is very close to 1 when the difference is small. As the network converges, the estimated features in the memory pool are expected to get closer and closer to the precise features, and $w_{t'}^{(i)}$ approaches 1.
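Putting READ, WRITE and the reweighting together, a minimal sketch of the memory pool $\Omega$ could look as follows (the class name, dictionary layout and dimensionality are illustrative assumptions; the real pool stores per-person features for every clip of every video):

```python
import numpy as np

class MemoryPool:
    """Memory pool Ω from Algorithm 1: estimated features plus loss tags."""

    def __init__(self, d=256):
        self.d = d
        self.pool = {}   # (video_id, clip_idx) -> (feature, loss_tag)

    def read(self, vid, t, err):
        """READ an estimated feature, applying the Eq. (2) penalty
        w = min(err/δ, δ/err) against its stored loss tag."""
        feat, delta = self.pool.get((vid, t), (np.zeros(self.d), 0.0))
        if delta == 0.0:          # never written: zero-initialized entry
            return feat
        w = min(err / delta, delta / err)
        return w * feat

    def write(self, vid, t, feature, err):
        """WRITE the freshly extracted feature, tagged with the current loss."""
        self.pool[(vid, t)] = (feature, err)

omega = MemoryPool(d=4)
omega.write(0, 3, np.ones(4), err=2.0)
same_loss = omega.read(0, 3, err=2.0)   # penalty 1: feature kept as-is
drifted = omega.read(0, 3, err=4.0)     # penalty 0.5: feature discounted
```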
+
+As shown in Figure 5, the consumption of our algorithm shows no obvious increase in either GPU memory or computation as the length of the memory features grows, so we can use sufficiently long memory features on common devices. With dynamic updating, the asynchronous memory features can be exploited better than frozen ones.
+
+# 4 Experiments on AVA
+
+The Atomic Visual Actions (AVA) [17] dataset is built for spatio-temporal action localization. Each person is annotated with a bounding box and multiple action labels at 1 FPS. There are 80 atomic action classes covering pose actions, person-person interactions and person-object interactions. The dataset contains 235 training movie videos and 64 validation movie videos.
+
+Since our method is originally designed for spatio-temporal action detection, we use the AVA dataset as the main benchmark for detailed ablation experiments. Performance is evaluated with the official metric, frame-level mean average precision (mAP), at spatial IoU $\geq 0.5$ , and only the top 60 most common action classes are used for evaluation, following [17].
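For reference, the spatial IoU underlying this metric reduces to a few lines (a standard computation, not code from the paper):

```python
def box_iou(a, b):
    """Intersection-over-union between two boxes (x1, y1, x2, y2);
    under the AVA frame-mAP metric a detection matches a ground-truth
    box when IoU >= 0.5."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

overlap = box_iou((0, 0, 2, 2), (1, 0, 3, 2))   # intersection 2, union 6 -> 1/3
```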
+
+# 4.1 Implementation Details
+
+Instance Detector. We apply the Faster R-CNN [28] framework to detect persons and objects on the key frames of each clip. A model with a ResNeXt-101-FPN [43,23] backbone from maskrcnn-benchmark [26] is adopted for object detection. It is first pre-trained on ImageNet [7] and then fine-tuned on the MSCOCO [25] dataset. For human detection, we further fine-tune the model on AVA for higher detection precision.
+
+Backbone. Our method can easily be applied to any 3D CNN backbone. We select the state-of-the-art SlowFast [10] network with a ResNet-50 structure as our baseline model. Basically following the recipe in [10], our backbone is pre-trained on the Kinetics-700 [3] dataset for action classification. This pre-trained backbone achieves $66.34\%$ top-1 and $86.66\%$ top-5 accuracy on the Kinetics-700 validation set.
+
+Training and Inference. Initialized from the Kinetics pre-trained weights, we fine-tune the whole model with focal loss [24] on AVA. The network input is 32 RGB frames, sampled from a 64-frame raw clip with a one-frame interval. Clips are scaled so that the shortest side becomes 256 and then fed into the fully convolutional backbone. We use only the ground-truth human boxes for training and randomly jitter them for data augmentation. For the object boxes, we set the detection threshold to 0.5 for higher recall. During inference, detected human boxes with a confidence score larger than 0.8 are used. We set $L = 30$ for the memory features in our experiments. We train the network using SGD with batch size 64 on 16 GPUs (4 clips per device). BatchNorm (BN) [20] statistics are frozen. We train for 27.5k iterations with base learning rate 0.004, reduced by a factor of 10 at 17.5k
+
+Table 1: Ablation Experiments. We use a ResNet-50 SlowFast backbone to perform our ablation study. Models are trained on the AVA (v2.2) training set and evaluated on the validation set. The evaluation metric mAP is shown in %
+
+(a) 3 Interactions
+
+| P O M | mAP |
+| --- | --- |
+|  | 26.54 |
+| ✓ | 28.04 |
+| ✓ | 28.86 |
+| ✓ | 28.92 |
+| ✓ | 29.26 |
+
+(b) Num of I-Blocks
+
+| blocks | mAP |
+| --- | --- |
+| 1 × {P, M, O} | 29.26 |
+| 2 × {P, M, O} | 29.64 |
+| 3 × {P, M, O} | 29.61 |
+
+(c) Interaction Order
+
+| order | mAP | order | mAP |
+| --- | --- | --- | --- |
+| M → O → P | 29.48 | M → P → O | 29.46 |
+| O → P → M | 29.51 | O → M → P | 29.53 |
+| P → M → O | 29.44 | P → O → M | 29.64 |
+
+(d) IA Structure
+
+| structure | mAP |
+| --- | --- |
+| Parallel | 28.85 |
+| Serial | 29.64 |
+| Dense Serial | 29.80 |
+
+(e) Asynchronous Memory Update
+
+| model | params | FLOPs | mAP |
+| --- | --- | --- | --- |
+| Baseline | 1.00× | 1.00× | 26.54 |
+| LFB (w/o AMU) | 2.18× | 2.12× | 27.02 |
+| LFB (w/ AMU) | 1.18× | 1.12× | 28.57 |
+| IA (w/o AMU) | 2.35× | 2.15× | 28.07 |
+| IA (w/ AMU) | 1.35× | 1.15× | 29.64 |
+
+(f) Comparison to NL
+
+| model | mAP |
+| --- | --- |
+| Baseline | 26.54 |
+| +NL | 26.85 |
+| +IA (w/o M) | 28.23 |
+
+and 22.5k iterations. A linear warm-up [16] schedule is applied for the first 2k iterations.
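The schedule above can be expressed as a small function; the exact way warm-up composes with the step decay is an assumption about the recipe, not code from the paper.

```python
def learning_rate(iteration, base_lr=0.004, warmup_iters=2000,
                  steps=(17500, 22500), gamma=0.1):
    """Step schedule with linear warm-up, following the recipe in the text:
    base LR 0.004, 2k-iteration linear warm-up, 10x decay at 17.5k and 22.5k."""
    lr = base_lr
    for step in steps:
        if iteration >= step:
            lr *= gamma                         # decay by 10x past each milestone
    if iteration < warmup_iters:
        lr *= (iteration + 1) / warmup_iters    # linear warm-up factor
    return lr

lr_plateau = learning_rate(10000)   # base rate, 0.004
lr_decayed = learning_rate(20000)   # after the first decay, 0.0004
```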
+
+# 4.2 Ablation Experiments
+
+Three Interactions. We first study the importance of the three kinds of interactions. For each interaction type, we use at most one block, and the blocks are stacked serially. To evaluate the importance of person-object interactions, we remove the O-Block from the structure; the other interactions are evaluated in the same way. Table 1a compares model performance, where the interaction types used are marked with ✓. A backbone baseline without any interaction is also listed. Overall, removing any of the three interaction types results in a significant performance decrease, which confirms that all three interactions are important for action detection.
+
+Number of Interaction Blocks. We then experiment with the number of interaction blocks in our IA structure, with the blocks nested in the serial structure. In Table 1b, $N \times \{\mathcal{P}, \mathcal{M}, \mathcal{O}\}$ denotes that $N$ blocks are used for each interaction type, for a total of $3N$ . We find that the setting $N = 2$ achieves the best performance, so we use it as our default configuration.
+
+Interaction Order. In our serial IA, different types of interactions are integrated alternately in sequence. We investigate the effect of the interaction order in Table 1c. The performances with different orders are quite similar; we thus choose the slightly better $\mathcal{P} \to \mathcal{O} \to \mathcal{M}$ as our default setting.
+
+
+Fig. 6: Per-category results comparison on the validation set of AVA v2.2
+
+Interaction Aggregation Structure. We analyze the different IA structures in this part: parallel IA, serial IA and the dense serial extension are compared in Table 1d. As expected, parallel IA performs much worse than the serial structure. With dense connections between blocks, our model is able to learn more knowledge of the interactions, which further boosts performance.
+
+Asynchronous Memory Update. In the previous work LFB [41], the memory features are extracted with a separate backbone that is frozen during training. In this experiment we compare our asynchronous memory features with the frozen ones. For a fair comparison, we re-implement LFB with the SlowFast backbone and also apply our AMU algorithm to LFB. Table 1e shows that our asynchronous memory features achieve much better performance than the frozen variant, with nearly half the parameters and computation cost. We attribute this to the better representations provided by our dynamic features.
+
+Comparison to Non-local Attention. Finally, we compare our interaction aggregation with the non-local block [38] (NL). Following [10], we augment the backbone with a non-local branch, in which attention is computed between the person features and globally pooled features. Since this branch contains no long-term features, we eliminate $\mathcal{M}$ in this experiment. Table 1f shows that our serial IA works significantly better than the NL block, confirming that our method is better at learning to find potential interactions.
+
+# 4.3 Main Results
+
+Finally, we compare our results on AVA v2.1 and v2.2 with previous methods in Table 2. Our method surpasses all previous works on both versions.
+
+The AVA v2.2 dataset is the newer benchmark used in the ActivityNet challenge 2019 [9]. On the validation set, our method reports a new state-of-the-art 33.11 $mAP$ with a single model, outperforming the strong SlowFast baseline by 3.7 $mAP$ . For the test split, we train our model on both the training and validation splits with a relatively longer schedule. With an ensemble of three models with
+
+Table 2: Main results on AVA. Here, we display our best results with both ResNet-50 (R50) and ResNet-101 (R101). “*” indicates multi-scale testing. The input sizes are shown as frame number and sample rate. SlowFast R101 backbone models re-implemented in this work are also displayed as “ours” for comparison.
+(a) Comparison on AVA v2.1 (left) and on AVA v2.2 (right).
+
+| model | input | pretrain | val | model | input | pretrain | val | test |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SlowFast [10] | 32 × 2 | K400 | 26.3 | SlowFast+NL [10] | 32 × 2 | K600 | 29.1 | - |
+| LFB [41] | 32 × 2 | K400 | 27.7 | SlowFast+NL [10] | 64 × 2 | K600 | 29.4 | - |
+| I3D [12] | 64 × 1 | K600 | 21.9 | SlowFast*, 7 ens. [10] | - | K600 | - | 34.25 |
+| SlowFast+NL [10] | 32 × 2 | K600 | 28.2 | SlowFast (ours) | 32 × 2 | K600 | 28.7 | - |
+| SlowFast (ours) | 32 × 2 | K600 | 27.7 | SlowFast (ours) | 32 × 2 | K700 | 29.3 | - |
+| SlowFast (ours) | 32 × 2 | K700 | 28.1 | AIA R50 | 32 × 2 | K700 | 29.80 | - |
+| AIA R50 | 32 × 2 | K700 | 28.9 | AIA R101 | 32 × 2 | K700 | 32.26 | - |
+| AIA R101 | 32 × 2 | K700 | 31.2 | AIA R101* | 32 × 2 | K700 | 33.11 | 32.25 |
+|  |  |  |  | AIA R101*, 3 ens. | - | K700 | - | 34.42 |
+
+Table 3: Results on UCF101-24 Split1
+
+| method | mAP | method | mAP | method | mAP | method | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| ACT [22] | 69.5 | Gu et al. [17] | 76.3 | C2D (ours) | 75.5 | I3D (ours) | 79.6 |
+| STEP [45] | 75.0 | Zhang et al. [46] | 77.9 | C2D+AIA | 78.8 | I3D+AIA | 81.7 |
+
+different learning rates and aggregation structures, our method achieves better performance than the winning entry of the AVA challenge 2019 (an ensemble of 7 SlowFast [10] networks). The per-category results for our method and the SlowFast baseline are illustrated in Figure 6. We observe a performance gain for every category, especially those involving interactions with the video context.
+
+As shown in Table 2, we pre-train the backbone model on the newer, larger Kinetics-700 for better performance. However, it is worth noting that we do not use non-local blocks in our backbone, and there are some other slight differences between our implementation and the official one [10]. As a result, our K700 backbone performs similarly to the official K600 one. That is to say, most of the performance advantage comes from our proposed method rather than the backbone.
+
+# 5 Experiments on UCF101-24
+
+UCF101-24 [32] is an action detection set with 24 action categories. We conduct experiments on the first split of this dataset following previous works and use the corrected annotations provided by Singh et al. [31].
+
+We experiment with two different backbone models, C2D and I3D, both pre-trained on the Kinetics-400 dataset. Other settings are basically the same as in the AVA experiments; more implementation details are provided in the Supplementary Material. Table 3 shows the results on the UCF101-24 test split in terms of
+
+Table 4: EPIC-Kitchens Validation Results
+
+| model | Verbs top-1 | Verbs top-5 | Nouns top-1 | Nouns top-5 | Actions top-1 | Actions top-5 |
+| --- | --- | --- | --- | --- | --- | --- |
+| Baradel [1] | 40.9 | - | - | - | - | - |
+| LFB NL [41] | 52.4 | 80.8 | 29.3 | 54.9 | 20.8 | 39.8 |
+| SlowFast (ours) | 56.8 | 82.8 | 32.3 | 56.7 | 24.1 | 42.0 |
+| AIA-Parallel | 57.6 | 83.9 | 36.3 | 63.0 | 26.4 | 47.4 |
+| AIA-Serial | 59.2 | 84.2 | 37.2 | 63.2 | 27.7 | 48.0 |
+| AIA-Dense-Serial | 60.0 | 84.6 | 37.2 | 62.1 | 27.1 | 47.8 |
+
+frame-mAP at an IoU threshold of 0.5. As shown in the table, AIA achieves $3.3\%$ and $2.1\%$ improvements over the two backbones. Moreover, even with a relatively weak 2D backbone, our method still achieves very competitive results.
+
+# 6 Experiments on EPIC-Kitchens
+
+To demonstrate the generalizability of AIA, we evaluate our method on the segment-level dataset EPIC-Kitchens [6]. In EPIC-Kitchens, each segment is annotated with one verb and one noun, and the action is defined by their combination.
+
+For both the verb model and the noun model, we use the extracted segment features (global average pooling of $f_{t}$ ) as the query input to the IA structure. Hand features and object features are cropped and then fed into IA to model person-person and person-object interactions. For the verb model, the memory features are the segment features; for the noun model, the memory features are object features extracted from the object detector feature map, so the AMU algorithm is applied only to the verb model. More details are available in the Supplementary Material. In Table 4, we observe a significant gain on all three tasks: all variants of AIA outperform the SlowFast baseline. Among them, dense serial IA achieves the best performance on the verb task with a $3.2\%$ improvement in top-1 score, while serial IA brings $4.9\%$ on the noun task and $3.6\%$ on the action task.
+
+# 7 Conclusion
+
+In this paper, we presented the Asynchronous Interaction Aggregation network and its performance on action detection. Our method reports a new state-of-the-art on the AVA dataset. Nevertheless, the performance of action detection and interaction recognition is far from perfect, probably due to the limited size of video datasets. Transferring knowledge of actions and interactions from images could further improve the AIA network.
+
+# 8 Acknowledgements
+
+This work is supported in part by the National Key R&D Program of China (No. 2017YFA0700800), the National Natural Science Foundation of China under Grant 61772332, and the Shanghai Qi Zhi Institute.
+
+# References
+
+1. Baradel, F., Neverova, N., Wolf, C., Mille, J., Mori, G.: Object level visual reasoning in videos. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 105-121 (2018)
+2. Buades, A., Coll, B., Morel, J.M.: A non-local algorithm for image denoising. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). vol. 2, pp. 60-65. IEEE (2005)
+3. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6299-6308 (2017)
+4. Feichtenhofer, C., Pinz, A., Wildes, R.P.: Spatiotemporal residual networks for video action recognition. Advances in Neural Information Processing Systems pp. 3468-3476 (2016)
+5. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019)
+6. Damen, D., Doughty, H., Maria Farinella, G., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W., et al.: Scaling egocentric vision: The epic-kitchens dataset. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 720-736 (2018)
+7. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)
+8. Diba, A., Fayyaz, M., Sharma, V., Mahdi Arzani, M., Yousefzadeh, R., Gall, J., Van Gool, L.: Spatio-temporal channel correlation networks for action classification. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 284-299 (2018)
+9. Caba Heilbron, F., Escorcia, V., Ghanem, B., Niebles, J.C.: Activitynet: A large-scale video benchmark for human activity understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 961-970 (2015)
+10. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 6202-6211 (2019)
+11. Feichtenhofer, C., Pinz, A., Zisserman, A.: Convolutional two-stream network fusion for video action recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1933-1941 (2016)
+12. Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: A better baseline for AVA. CoRR abs/1807.10066 (2018), http://arxiv.org/abs/1807.10066
+13. Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: Video action transformer network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 244-253 (2019)
+14. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440-1448 (2015)
+15. Gkioxari, G., Girshick, R., Dollar, P., He, K.: Detecting and recognizing human-object interactions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8359-8367 (2018)
+16. Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677 (2017)
+
+17. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6047-6056 (2018)
+18. He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 2961-2969 (2017)
+19. Hou, R., Chen, C., Shah, M.: Tube convolutional neural network (t-cnn) for action detection in videos. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 5822-5831 (2017)
+20. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
+21. Ji, S., Xu, W., Yang, M., Yu, K.: 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence 35(1), 221-231 (2012)
+22. Kalogeiton, V., Weinzaepfel, P., Ferrari, V., Schmid, C.: Action tubelet detector for spatio-temporal action localization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4405-4413 (2017)
+23. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2117-2125 (2017)
+24. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980-2988 (2017)
+25. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+26. Massa, F., Girshick, R.: maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch. https://github.com/facebookresearch/maskrcnn-benchmark (2018), accessed: 2020-2-29
+27. Qiu, Z., Yao, T., Mei, T.: Learning spatio-temporal representation with pseudo-3d residual networks. In: proceedings of the IEEE International Conference on Computer Vision. pp. 5533-5541 (2017)
+28. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+29. Sigurdsson, G.A., Divvala, S., Farhadi, A., Gupta, A.: Asynchronous temporal fields for action recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017)
+30. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Advances in neural information processing systems. pp. 568-576 (2014)
+31. Singh, G., Saha, S., Sapienza, M., Torr, P.H., Cuzzolin, F.: Online real-time multiple spatiotemporal action localisation and prediction. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3637-3646 (2017)
+32. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
+33. Taylor, G.W., Fergus, R., LeCun, Y., Bregler, C.: Convolutional learning of spatiotemporal features. In: European conference on computer vision. pp. 140-153. Springer (2010)
+
+34. Tran, D., Bourdev, L.D., Fergus, R., Torresani, L., Paluri, M.: C3d: generic features for video analysis. CoRR, abs/1412.0767 2(7), 8 (2014)
+35. Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 6450-6459 (2018)
+36. Varol, G., Laptev, I., Schmid, C.: Long-term temporal convolutions for action recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1510-1517 (2017)
+37. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L.u., Polosukhin, I.: Attention is all you need. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc. (2017), http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
+38. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7794-7803 (2018)
+39. Wang, X., Gupta, A.: Videos as space-time region graphs. In: Proceedings of the European conference on computer vision (ECCV). pp. 399-417 (2018)
+40. Weston, J., Chopra, S., Bordes, A.: Memory networks. arXiv preprint arXiv:1410.3916 (2014)
+41. Wu, C.Y., Feichtenhofer, C., Fan, H., He, K., Krahenbuhl, P., Girshick, R.: Long-term feature banks for detailed video understanding. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+42. Xia, J., Tang, J., Lu, C.: Three branches: Detecting actions with richer features. arXiv preprint arXiv:1908.04519 (2019)
+43. Xie, S., Girshick, R., Dollár, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1492-1500 (2017)
+44. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning for video understanding. arXiv preprint arXiv:1712.04851 1(2), 5 (2017)
+45. Yang, X., Yang, X., Liu, M.Y., Xiao, F., Davis, L.S., Kautz, J.: Step: Spatiotemporal progressive learning for video action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 264-272 (2019)
+46. Zhang, Y., Tokmakov, P., Hebert, M., Schmid, C.: A structured model for action detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9975-9984 (2019)
+47. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 803-818 (2018)
\ No newline at end of file
diff --git a/asynchronousinteractionaggregationforactiondetection/images.zip b/asynchronousinteractionaggregationforactiondetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..36ae464f872f43f988490575cbf2bf66b15dbe8e
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efdaff5b68b0f5725324d59347685e8b93fdc46b508c7878b7812d345b5eda39
+size 434788
diff --git a/asynchronousinteractionaggregationforactiondetection/layout.json b/asynchronousinteractionaggregationforactiondetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..97fe5de8e13e5cb3fce073a8484771ded29cdf48
--- /dev/null
+++ b/asynchronousinteractionaggregationforactiondetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e07c78cdb8059426c38560583dd93cfdf7e33a95227f8fa5834469377f64b9a9
+size 420664
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_content_list.json b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..da49561a9d6fade1e168c5910c9ca901e58bb7d5
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27fc35c5aa8328a7095758e1fb842d7e61611dd3c95a5a448cc2320595256f3e
+size 74515
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_model.json b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a088b211df74bc915909ffe6b24206b67120190f
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f27e05408e6250d5a90da20c0c51e242997df81622a29756d4d0ea8e2d73dcd7
+size 88324
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_origin.pdf b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ced8a6a6929a771937a023a3160b07d64d7bdad6
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/a60b3203-b116-42d2-9fef-131928b2e783_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73ef3aed7765e55c4b6b17578e65d5c8ddd49776c032d0dbd80a340dd5a25d4e
+size 10736388
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/full.md b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..039105959dc0ca9c85e48ca4b67ea1eeece993ce
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/full.md
@@ -0,0 +1,246 @@
+# AtlantaNet: Inferring the 3D Indoor Layout from a Single $360^{\circ}$ Image beyond the Manhattan World Assumption
+
+Giovanni Pintore $^{1[0000-0001-8944-1045]}$ , Marco Agus $^{2,1[0000-0003-2752-3525]}$ , and Enrico Gobbetti $^{1[0000-0003-0831-2458]}$
+
+Visual Computing, CRS4, Italy
+giovanni.pintore@crs4.it enrico.gobbetti@crs4.it
+2 College of Science and Engineering, HBKU, Qatar
+magus@hbku.edu.qa
+
+
+Fig. 1. Examples of automatically recovered 3D layouts. Our method returns a 3D room model from a single panorama even in cases not supported by current state-of-the-art methods, such as, for example, vertical walls meeting at non-right angles or with curved 2D footprints.
+
+Abstract. We introduce a novel end-to-end approach to predict a 3D room layout from a single panoramic image. Compared to recent state-of-the-art works, our method is not limited to Manhattan World environments, and can reconstruct rooms bounded by vertical walls that do not form right angles or are curved - i.e., Atlanta World models. In our approach, we project the original gravity-aligned panoramic image on two horizontal planes, one above and one below the camera. This representation encodes all the information needed to recover the Atlanta World 3D bounding surfaces of the room in the form of a 2D room footprint on the floor plan and a room height. To predict the 3D layout, we propose an encoder-decoder neural network architecture, leveraging Recurrent Neural Networks (RNNs) to capture long-range geometric patterns, and exploiting a customized training strategy based on domain-specific knowledge. The experimental results demonstrate that our method outperforms state-of-the-art solutions in prediction accuracy, in particular in cases of complex wall layouts or curved wall footprints.
+
+Keywords: 3D floor plan recovery, panoramic images, 360 images, data-driven reconstruction, structured indoor reconstruction, indoor panorama, room layout estimation, holistic scene structure
+
+# 1 Introduction
+
+Automatic 3D reconstruction of a room's bounding surfaces from a single image is a very active research topic [20].
+
+In this context, $360^{\circ}$ capture is very appealing, since it provides the quickest and most complete single-image coverage and is supported by a wide variety of professional and consumer capture devices that make acquisition fast and cost-effective [31]. Since rooms are full of clutter, single images nonetheless provide only partial coverage and imperfect sampling, so the reconstruction problem is difficult and ambiguous without prior assumptions. In particular, current approaches (see Sec. 2) are either tuned to simple structures with a limited number of corners [6] or bound by the Indoor World assumption [16] (i.e., the environment has a single horizontal floor and ceiling, and vertical walls which all meet at right angles). Against this background, recent data-driven approaches [33,26,30] have produced excellent results in recovering the room layout from a single panoramic image [34]. However, state-of-the-art data-driven methods usually follow a costly and constraining framework: heavy pre-processing to generate Manhattan-aligned panoramas (e.g., edge-based alignment and warping of generated perspective views [16]), a deep neural network that predicts the layout elements on a rectified equirectangular image, and a post-processing step that fits the (Manhattan) 3D layout to the predicted elements.
+
+In this work, we present *AtlantaNet*, a novel data-driven solution to estimate a 3D room layout from a single RGB panorama. As its name suggests, we exploit the less restrictive *Atlanta World* model [23], in which the environment is expected to have a horizontal floor and ceiling and vertical walls, but without the restriction of walls meeting at right angles or having a limited number of corners (supporting, e.g., curved walls). In our approach, the original equirectangular image, assumed roughly aligned with the gravity vector, is projected on two arbitrary horizontal planes, one above and one below the camera (see Fig. 2(a)). Exploiting the *Atlanta World* assumption, this representation encodes all the information needed to recover the 3D bounding surfaces of the room, i.e., the 2D floor plan and the room height (see Sec. 4). To predict the 3D layout from this representation, we propose an encoder-decoder architecture, leveraging Recurrent Neural Networks (RNNs) to capture the long-range geometric patterns of room layouts. The network maps a projected image, represented as a tensor, to a binary segmentation mask separating the interior and exterior space, respectively for the ceiling and for the floor projection. The wall footprint is found by extracting a polygonal approximation of the contour of the mask generated from the above-camera image (ceiling mask), and the room height is determined by the scale that maximizes the correlation between the lower and upper contours (see Sec. 4). A customized training strategy based on domain-specific knowledge makes it possible to perform data augmentation and reuse the same network for both projected images. For training, we exploit previously released annotated datasets [33,26,30,6]. Our experimental results (see Sec. 5) demonstrate that our method outperforms state-of-the-art methods [33,26,30] in prediction accuracy, especially on rooms with multiple corners or non-Manhattan layouts. Fig. 1 shows some 3D layouts predicted by our method.
+
+Our contributions are summarized as follows:
+
+- We introduce a data encoding based on the Atlanta World indoor model, which allows layout prediction on planar projections free from spherical image deformations, unlike previous approaches that are predominantly based on features extracted from the equirectangular view [33,26,30,6]. As supported by our results, working in such a transformed domain simplifies structure detection [19,21,30]. In addition, the representative tensors can be treated as conventional 2D images, simplifying, for example, data augmentation and the use of powerful network architectures such as RNNs [2,1].
+
+- We reconstruct the 3D layout, in terms of 2D footprint and room height, by inferring the 2D layout from the contour of a solid segmentation mask and the room height from a geometric analysis of the correlation between two contours. Our approach is more stable and better suited to modeling complex structures, such as curved walls, than previous approaches that infer the layout from sparse corner positions [33,26,6]. Moreover, we do not need an additional dense network [30] or a post-processing voting scheme [26] to infer the layout height, which can be determined directly from a geometric analysis of the masks.
+
+- We propose an end-to-end network that, differently from current state-of-the-art approaches [33,26,30], does not require heavy pre-processing, such as detection of main Manhattan-world directions from vanishing lines analysis [34,32,16] and related image warping, nor complex layout post-processing, such as Manhattan-world regularization of detected features [33,26,30]. Our only requirement is that input images are roughly aligned with the gravity vector, a constraint which is easily met by hardware or software means [9], and is verified in all current benchmark databases. As a result, our method, in addition to being faster, does not require complex per-image deformations that make multi-view analysis difficult (see discussion in Sec. 6).
+
+- We propose a training strategy based on feeding both the ceiling and floor views to the same network instance, improving inference performance compared to a dual-branch architecture or to separate training for ceiling and floor (see results and ablation study in Sec. 5.3).
+
+We tested our approach on both conventional benchmarks (see Zou et al. [34]) and more challenging non-Manhattan scenes annotated by us (see Sec. 5.1). Results demonstrate that our method outperforms previous works on both testing sets (Sec. 5). Code and data are made available at https://github.com/crs4/AtlantaNet.
+
+# 2 Related work
+
+3D reconstruction and modeling of indoor scenes has attracted a lot of research in recent years. Here, we analyze only the approaches closer to ours, referring the reader to a very recent survey for a general coverage of the subject [20].
+
+A noticeable series of works concentrate on parsing the room layout from a single RGB image. Since man-made interiors often follow very strict rules, several successful approaches have been proposed by imposing specific priors.
+
+Delage et al. [4] presented one of the first monocular approaches to automatically recover a 3D reconstruction from a single indoor image. They adopt a dynamic Bayesian network trained to recognize the floor-wall boundary in each column of the image, assuming the indoor scene consists only of a flat floor and straight vertical walls. However, in its original formulation, such a reconstruction is limited to partial views (e.g., a room corner).
+
+Full-view geometric context (GC) estimation from appearance priors, i.e., the establishment of a correspondence between image pixels and geometric surface labels, was proposed as a method to analyze outdoor scenes by Hoiem et al. [13]. In combination with Orientation Maps (OM) [16], which are maps of local beliefs about region orientations computed from line segments through heuristic rules, GC is the basis for almost all methods based on geometric reasoning on a single image. Hedau et al. [12], in particular, successfully analyzed the labeling of pixels under the cuboid prior, while Lee et al. [16] considered the less constraining Indoor World Model (IWM), i.e., a Manhattan World with a single floor and a single ceiling, by noting that projections of building interiors under the Indoor World model can be fully represented by corners, so that a valid structure can be obtained by imposing geometric constraints on corners. Such geometric reasoning on the IWM supports several efficient reconstruction methods. A notable example is the work of Flint et al. [8,7], who, exploiting the homography between floor and ceiling, reduce the structure classification problem to the estimation of the y-coordinate of the ceiling-wall boundary in each image column.
+
+One of the main limitations of single-image methods lies, in fact, in the restricted field of view (FOV) of conventional perspective images, which inevitably results in a limited geometric context [32]. With the emergence of consumer-level $360^{\circ}$ cameras, a wide indoor context can now be captured with one or a few shots. As a result, most of the research on reconstruction from sparse imagery is now focused in this direction. Zhang et al. [32] propose a whole-room 3D context model that maps a full-view panorama to a 3D bounding box of the room, also detecting all major objects inside (e.g., PanoContext). By combining OM for the top part and GC for the bottom part, they demonstrate that, by using panoramas, their algorithm significantly outperforms results on regular-FOV images. More recently, Xu et al. [27] extended this approach by assuming the IWM instead of a box-shaped room, thus obtaining a more accurate shape of the room, and Yang et al. [28] proposed an algorithm that, starting from a single full-view panorama, automatically infers a 3D shape from a collection of partially oriented super-pixel facets and line segments, exploiting the Manhattan World constraint. Pintore et al. [19] tackle the problem of recovering room boundaries in a top-down 2D domain, in a manner conceptually similar to that of dense approaches. To recover the shape of the room from single images, they combine the ceiling-floor homography [8] with a spatial transform (E2P, i.e., equirectangular to perspective) [19], based on the unified projection model for spherical images [10]. Such an E2P transform highlights the shape of the room projected on a 2D floor plan, generating two projections, respectively for the floor and for the ceiling edges. Applying the ceiling-floor homography, they recover the height of the walls and enforce the 2D shape estimation from the projected contours. As for all feature-based methods, the effectiveness of these approaches depends on the quality of the extracted features (e.g., edges or flat uniform patches). To overcome these problems, more and more solutions are turning towards data-driven approaches [34].
+
+The peculiarity of indoor reconstruction makes generic segmentation solutions (e.g., U-Net [22] or DeepLab [3]) not appropriate. In particular, defining a graphical model at the pixel level makes it hard to incorporate global shape priors. Recent data-driven approaches have demonstrated impressive performance in recovering the 3D boundary of a single room meeting the Manhattan World constraint. Zou et al. [33] predict the corner probability map and boundary map directly from a panorama (e.g., LayoutNet). They also extend the Stanford 2D-3D dataset [25] with annotated layouts for training and evaluation. Yang et al. [30] propose a deep learning framework, called DuLa-Net, which exploits feature fusion between the original panoramic view and the ceiling E2P transform [19] to output a floor plan probability map. A Manhattan regularization step is then performed to recover the 2D floor plan shape, through a grid aligned to the main Manhattan axes. Similarly to the LayoutNet approach [33], a number of recent works [6,26] focus on inferring the room layout from the sparse corner positions in the panoramic image. Sun et al. [26] represent the room layout as three 1D vectors that encode, at each image column, the boundary positions of floor-wall and ceiling-wall, and the existence of wall-wall boundaries. The 2D layout is then obtained by fitting Manhattan World segments on the estimated corner positions.
+
+Recently, Zou et al. [34] have presented an extensive evaluation of the latest high-performance methods. In their classification, such methods basically share the same pipeline: a Manhattan World pre-processing step (e.g., based on Zhang et al. [32]), the prediction of layout elements, and a post-processing step that fits the 3D model to the predicted elements after a series of regularizations. Differently from almost all recent methods [33,26,30], we do not need complex pre-processing steps, such as the computation of Manhattan vanishing lines [16] and warping of the panoramic image according to them, but only perform a projection along the gravity vector. While our method, like many recent ones, shares with HorizonNet [26] and DuLa-Net [30] the encoder-decoder concept, we introduce important novelties in the network architecture. In particular, HorizonNet fully works in a 1D domain derived from the equirectangular projection, while we work entirely in a 2D domain derived from projections on horizontal planes. Moreover, in contrast to DuLa-Net, we use a single branch working in the transformed domain (both for floor and ceiling), while DuLa-Net uses two parallel branches, for the ceiling-view probability and for the ceiling-floor probability in the equirectangular domain, plus an additional linear branch for deriving the height. Our results show the advantages of our solution. Furthermore, in contrast to many other works, we predict the room layout from dense 2D segmentation maps by simply extracting the largest connected component, rather than from a sparse number of inferred corner positions [6,26]. Such an approach is more robust, particularly in cases of non-Manhattan shapes.
+
+# 3 Overview
+
+
+(a) Data encoding
+
+
+(b) Layout recovery
+Fig. 2. Data encoding and layout prediction. Fig. 2(a): the Atlanta Transform $A_{h}$ maps all the points of the equirectangular image into 3D space as if their height were $h_{f}$ (focal height), where $h_{f}$ can assume only two possible values: $-h_{e}$ (eye height) and $h_{c}$ (ceiling height). Since at least $h_{c}$ is an unknown value, we apply the transform by imposing a single, fixed $h_{f}$, which depends on a fixed FOV. Fig. 2(b): we infer the ceiling and floor shapes through our network. The height is directly proportional to the ratio $h_{r}$ (height ratio) between these shapes, and the 2D footprint of the room is recovered from the ceiling shape.
+
+Our method takes as input a single panoramic image, which we assume to be aligned with the gravity vector. This is easily obtained on all modern mobile devices that have an IMU on board, or can be achieved prior to the application of our pipeline through standard image processing means [9]. Starting from the oriented image, our approach, depicted in Fig. 2, determines the room structure.
+
+The first module generates, from the input equirectangular image (at its original size), an *Atlanta Transform* (e.g., $3 \times 1024 \times 1024$ ) on two horizontal planes placed above and below the camera. For training, the ground truth annotations, conventionally provided on a panoramic image, are transformed in the same way. To simplify the discussion, we call the projection on the upper plane the ceiling projection, and the projection on the lower plane the floor projection. Note, however, that the selected planes do not need to correspond exactly to the ceiling or floor planes, since the room dimensions are determined automatically by our method and are not known in advance.
+
+During training, the network (see Fig.3) is fed by alternating ceiling or floor images, according to a probability function (see Sec. 4.3 and Sec. 5.3). In prediction mode, the same trained network is used to infer ceiling or floor shapes.
+
+The height of the layout is directly proportional to the ratio $h_r$ between the ceiling shape and the floor shape (i.e., a scaling factor). Since, in real cases, the floor shape is partially occluded by clutter, we take as the inferred $h_r$ the value that maximizes the intersection-over-union between the contours of the ceiling and floor shapes (see Fig. 2(b)).
+
+On output, the 2D shape of the room is simply the contour of the largest connected region of the mask produced by the network, without applying any post-processing regularization, as opposed to, e.g., solutions based on Manhattan World constraints. The final 3D layout is then determined by extruding the 2D shape recovered from the ceiling using the recovered layout height.
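The height-ratio search described in this overview can be illustrated in a few lines of numpy. This is our own simplified sketch (function names such as `best_height_ratio` are ours, and for simplicity it compares filled masks rather than extracted contours):

```python
import numpy as np

def scale_mask(mask, s):
    """Scale a binary mask about the image centre by factor s (nearest neighbour)."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # inverse mapping: output pixel p samples input at centre + (p - centre) / s
    yi = np.round(cy + (ys - cy) / s).astype(int)
    xi = np.round(cx + (xs - cx) / s).astype(int)
    valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out = np.zeros_like(mask)
    out[valid] = mask[yi[valid], xi[valid]]
    return out

def best_height_ratio(ceiling, floor, ratios):
    """Return the scale h_r that maximises the IoU between the ceiling mask
    and the rescaled floor mask."""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    scores = [iou(ceiling, scale_mask(floor, r)) for r in ratios]
    return ratios[int(np.argmax(scores))]
```

In practice the search would scan a dense range of candidate ratios around plausible room heights.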
+
+# 4 Approach
+
+# 4.1 Data encoding
+
+Assuming the Atlanta World model [23], we project the panoramic image on two horizontal planes, building, respectively, one representative tensor (i.e., $3 \times 1024 \times 1024$ ) for the ceiling and one for the floor horizontal plane (see Fig. 2(b)). To transform the equirectangular map we adopt the following relation:
+
+$$
+A_{h}(\theta, \gamma, h_{f}) = \left\{ \begin{array}{l} x = h_{f} / \tan \gamma \cdot \cos \theta \\ y = h_{f} / \tan \gamma \cdot \sin \theta \\ z = h_{f} \end{array} \right. \tag{1}
+$$
+
+The function $A_{h}$, called the Atlanta Transform, maps all the points of the equirectangular image into 3D space as if their height were $h_f$ [19]. Compared to a classic pin-hole model, $h_f$ can be seen as the focal length for a 180 degree field-of-view. In the specific case of the Atlanta World model, $h_f$ can assume only two possible values: $-h_e$, the (negated) height of the camera center above the floor plane, and $h_c$, the distance between the camera center and the ceiling plane (see Fig. 2(a)).
+
+Considering $h_e$ a known constant, or at most fixed up to a scale factor, the 3D layout of an Atlanta model is fully defined by a two-dimensional shape, i.e., the 2D footprint of the layout on the floor plan, and by the ceiling distance $h_c$. Ideally, in order to directly apply Eq. 1, we should also know the value of $h_c$. Since, in our case, $h_c$ is unknown before reconstruction and must be inferred by the network, we apply a modified version of the transform [30] by imposing a single, fixed $h_f$, which depends on a fixed field-of-view (FOV), i.e., $h_f = w / 2 * \tan(FOV / 2)$, where $w \times w$ is the extent in pixels of each transform (which we assume square). As a consequence, the height of the room is determined by the ratio between $h_c$ and $h_e$, and is directly proportional to the ratio $h_r$ between the ceiling shape and the floor shape. Ideally, $h_r$ should be the value that makes the floor shape match the ceiling shape. Since, in real cases, the floor shape is heavily occluded by clutter, we take as the inferred $h_r$ the value that maximizes the intersection-over-union between the contours of the ceiling and floor shapes (see Fig. 2(b)).
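As an illustration, the fixed-height projection of Eq. 1 can be realized by inverse mapping: each pixel of the $w \times w$ plane image is converted to angles $(\theta, \gamma)$ and sampled from the equirectangular panorama. The sketch below is ours, not the authors' code (numpy, nearest-neighbour sampling, with $h_f$ passed explicitly):

```python
import numpy as np

def atlanta_transform(pano, w, h_f, above=True):
    """Inverse-map each pixel of a w x w horizontal-plane image to the
    equirectangular panorama `pano` (H x W [x C]), nearest-neighbour sampling.
    `h_f` is the fixed focal height in pixels; `above` selects the plane
    above (ceiling) or below (floor) the camera."""
    H, W = pano.shape[:2]
    js, iis = np.meshgrid(np.arange(w), np.arange(w))
    x = js - (w - 1) / 2.0                   # plane coordinates, camera at centre
    y = iis - (w - 1) / 2.0
    theta = np.arctan2(y, x)                 # azimuth
    rho = np.maximum(np.hypot(x, y), 1e-6)   # radial distance on the plane
    gamma = np.arctan2(h_f, rho)             # elevation from the horizon
    if not above:
        gamma = -gamma                       # floor plane lies below the camera
    u = ((theta / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((0.5 - gamma / np.pi) * H).astype(int), 0, H - 1)
    return pano[v, u]
```

The ceiling projection samples only the upper half of the panorama and the floor projection only the lower half, matching the geometry of Fig. 2(a).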
+
+# 4.2 Network architecture
+
+Fig. 3 shows an overview of AtlantaNet. The network takes as input a transform of size $3 \times w \times w$ (see Sec. 4.1) and produces a segmentation mask of size $1 \times w \times w$. We tested different sizes for the input transform, and found that $1024 \times 1024$ is the best size in terms of performance, as it guarantees sufficient detail for the most complex shapes without requiring large memory resources (see Sec. 5). The size of the output is $1 \times 1024 \times 1024$, i.e., a binary segmentation mask describing the ceiling or floor shape. We adopt ResNet [11] as feature extractor, which has proven to be one of the most effective encoders for both panoramic and perspective images [34]. The output of each ResNet block has half the spatial resolution of the previous block. To capture both low-level
+
+
+Fig. 3. Network architecture. The network takes as input a transform of size $3 \times w \times w$ (see Sec. 4.1) and passes it to a ResNet encoder. To capture both low-level and high-level features, we keep the last four feature maps of the encoder. Each feature map is then reduced to the same size, $256 \times 32 \times 32$, through a sequence of convolutional layers and reshaped to $256 \times 1024$. The 4 feature maps are concatenated into a sequential feature map of $1024 \times 1024$. We feed such a sequence to an RNN, obtaining, after reshaping, a $1024 \times 32 \times 32$ map. We upsample such a map to recover a $1 \times 1024 \times 1024$ binary segmentation mask describing the ceiling or floor shape.
+
+and high-level features, we keep the last four feature maps of the encoder [26]. Each feature map is then reduced to the same size, $256 \times 32 \times 32$, through a sequence of convolutional layers (Convs in Fig. 3), where each layer contains: a 2D convolution with stride 2 (except for the last block, which has stride 1), a batch normalization module, and a rectified linear unit (ReLU). Finally, we reshape the 4 feature maps to $256 \times 1024$ and concatenate them to obtain a single sequential feature map of $1024 \times 1024$ (i.e., 1024 channels for a sequence of length 1024).
+
+We feed such a sequence to an RNN, which is exploited to capture the shape of the object and thus make coherent predictions even in ambiguous cases such as occlusions and cluttered scenes. In particular, we employ convolutional LSTM [24] modules in our model as the decoder core. Specifically, we adopt a bi-directional LSTM with 512 features in the hidden state and 2 hidden internal layers. The output of the RNN decoder is a $1024 \times 1024$ feature map, which collects all the time steps of the RNN layers.
+
+We reshape the RNN output to $1024 \times 32 \times 32$ and, after a dropout layer, we up-sample it through a sequence of 6 convolutional layers (same as Convs but with stride 1), each one followed by an interpolation (factor 2 for each layer). In the final layer of the decoder, the ReLU is replaced by a Sigmoid. As a result, we obtain a $1 \times 1024 \times 1024$ prediction mask of the targeted shape (see Fig. 3).
+
+At inference time, the same trained network is applied to the ceiling and floor transforms, respectively (see Fig. 2(b)). The 2D room layout $F2D$ is obtained with a simple polygonal approximation of the ceiling shape contour, while the ratio of heights $h_r$ (and therefore $h_c$; see Sec. 4.1) is obtained from the ratio between the contours of the two inferred shapes. In particular, since $h_r$ is actually a scale factor between the ceiling and floor transforms, it is determined by the scale that maximizes the matching points between the two contours (see Fig. 2(b)). We build the final 3D model by simply extruding $F2D$, using $h_r$ to determine $h_c$ and $h_e$ (see Sec. 4.1).
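A scaled-down PyTorch sketch of the decoder stage described above (sequence of features, bi-directional LSTM, reshape to a spatial map, upsampling to a 1-channel sigmoid mask). Dimensions, layer counts, and the class name are illustrative, far smaller than the paper's $1024 \times 1024$ configuration:

```python
import torch
import torch.nn as nn

class AtlantaDecoderSketch(nn.Module):
    """Toy version of the RNN decoder: an (B, L, C) feature sequence goes
    through a 2-layer bi-directional LSTM, is reshaped to a spatial map,
    and upsampled to a 1-channel sigmoid mask. Hyper-parameters are ours."""
    def __init__(self, feat=64, seq=64, hidden=32, side=8):
        super().__init__()
        assert seq == side * side  # sequence positions become spatial cells
        self.side = side
        self.rnn = nn.LSTM(feat, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.up = nn.Sequential(
            nn.Conv2d(2 * hidden, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (B, L, C)
        y, _ = self.rnn(x)           # (B, L, 2 * hidden)
        b, l, c = y.shape
        y = y.transpose(1, 2).reshape(b, c, self.side, self.side)
        return self.up(y)            # (B, 1, 4 * side, 4 * side)
```

The real network uses a deeper upsampling chain (6 convolutional blocks with factor-2 interpolation) to reach $1024 \times 1024$; the sketch keeps only two stages for brevity.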
+
+# 4.3 Training
+
+To train our network, we adopt a specific loss function based on the binary cross entropy error of the predicted pixel probability in the mask $M$ and in its gradient $M'$ , compared to ground truth:
+
+$$
+- \frac{1}{n} \sum_{p \in M} (\hat{p} \log p + (1 - \hat{p}) \log (1 - p)) - \frac{1}{n} \sum_{q \in M^{\prime}} (\hat{q} \log q + (1 - \hat{q}) \log (1 - q)) \tag{2}
+$$
+
+where $p$ is the probability of one pixel in $M$, $\hat{p}$ is the ground truth of $p$ in $M$, $q$ is the pixel probability in $M'$, $\hat{q}$ is its ground truth, and $n$ is the number of pixels in $M$ and $M'$, i.e., the transform resolution. The gradient of the binary masks is obtained by a Sobel filter of kernel size 3. Even though the gradient component provides a value only near edges, its presence sharpens the contour in cases of small boundary surface details. This is very important in our case, since our approach extracts the contour of the largest detected component without performing any post-processing. It also improves noise filtering in highly textured images (see ablation study in Sec. 5.3).
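A possible numpy rendition of the loss in Eq. 2 follows. The clipping of the Sobel magnitude to [0, 1] is our own choice, so that the gradient term remains a valid binary cross-entropy target; the paper does not specify this detail:

```python
import numpy as np

def sobel_mag(m):
    """Gradient magnitude of a 2D array via 3x3 Sobel kernels (zero padding),
    clipped to [0, 1] so it can be used as a BCE target (our choice)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(np.asarray(m, float), 1)
    gx = np.zeros(m.shape, float)
    gy = np.zeros(m.shape, float)
    for i in range(3):
        for j in range(3):
            win = p[i:i + m.shape[0], j:j + m.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.clip(np.hypot(gx, gy), 0.0, 1.0)

def mask_loss(pred, gt, eps=1e-7):
    """BCE on the predicted mask plus BCE on its Sobel gradient (cf. Eq. 2)."""
    def bce(p, t):
        p = np.clip(p, eps, 1 - eps)
        return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    return bce(pred, gt) + bce(sobel_mag(pred), sobel_mag(gt))
```

A perfect prediction yields a near-zero loss, while an inverted mask is heavily penalized by the first term even though its gradient term matches.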
+
+Working completely in a plane-projected domain clearly simplifies data augmentation, compared to panorama augmentation [26]. In practice, for each training iteration, we augment the input panorama set with random rotations and mirrorings, performing all operations in 2D space.
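The joint 2D augmentation can be sketched as below. For simplicity we restrict rotations to multiples of 90 degrees, whereas the text allows arbitrary in-plane rotations; the function name is ours:

```python
import numpy as np

def augment(img, mask, rng):
    """Apply the same random 90-degree rotation and optional mirroring to a
    projected image and its ground-truth mask (a simplified stand-in for the
    arbitrary 2D rotations and mirrorings described in the text)."""
    k = int(rng.integers(0, 4))          # number of quarter turns
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:               # mirror half of the time
        img, mask = np.fliplr(img), np.fliplr(mask)
    return img.copy(), mask.copy()
```

Because both arrays receive identical transforms, the pixel-wise correspondence between the transform and its mask is preserved.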
+
+We could separately perform training of an instance for floor prediction and a second instance for ceiling mask prediction, or create an architecture that performs parallel training with a common loss function, or use a single instance capable to handle both ceiling and floors.
+
+In the first case, we experienced, for the ceiling branch, a tendency to over-fit and a rapid decay of the learning rate after a small number of iterations. At the same time, training the floor branch with only floor images results in rough shapes. This is a predictable behavior, considering that the ceiling part usually has cleaner areas but fewer features, while in the floor part the architectural structure is more occluded and therefore more difficult to match, alone, with the ground-truth shape of the room [30].
+
+In the second case, we tested two parallel branches by jointly training two instances of AtlantaNet, where the loss function is the sum of the ceiling and floor loss respectively. It should be noted that in this case a direct feature fusion is not possible, since this would imply knowledge of the scale factor between the two transformed tensors, which is itself an unknown value. In this case, we obtained an appreciable improvement of the performance compared to single training. However, the resulting shape is not accurate enough, especially in cases of multiple corners or more complex shapes (see results in Sec.5.3).
+
+We thus adopted a strategy that uses a single AtlantaNet instance, trained to predict either the ceiling or the floor shape. To do this, we feed the same network with examples of both ceiling and floor transforms, coupled with their respective ground truth. As shown by the comparative results (see Sec. 5.3), this strategy boosts performance, as it guides the network to find commonalities between clean structures, mostly present in ceiling transforms, and highly cluttered structures, mostly present in floor transforms.
+
+# 5 Results
+
+We implemented our method with PyTorch [18], adopting ResNet50 as feature encoder. The presented results are obtained using the Adam optimizer [14] with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ and learning rate 0.0001. We trained the network on 4 NVIDIA RTX 2080Ti GPUs for 300 epochs (best validation score around epoch 200, varying with the dataset), with a batch size of 8 ( $3 \times 1024 \times 1024$ input size). As an example, training with the MatterportLayout [34] dataset takes about 2 minutes per epoch. The final layout extraction is obtained by applying a simple polygonal approximation [5] to the largest connected region contour (see Sec. 4.2), thus eliminating excess vertices and saving the resulting model as a JSON file (we adopt the same convention as MatterportLayout [34] and PanoAnnotator [29]).
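
The polygonal approximation step [5] can be sketched with a minimal Douglas-Peucker implementation; the tolerance `eps` and the recursive layout are illustrative, not the paper's exact parameters:

```python
import numpy as np

def douglas_peucker(points, eps):
    """Douglas-Peucker polyline simplification [5]. `points` is an (N, 2) array;
    a closed contour can be handled by splitting it at two extreme vertices."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    d = end - start
    norm = np.hypot(d[0], d[1]) or 1.0
    # perpendicular distance of every point to the start-end chord
    dist = np.abs(d[0] * (points[:, 1] - start[1])
                  - d[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dist))
    if dist[idx] > eps:
        # keep the farthest vertex and recurse on both halves
        left = douglas_peucker(points[: idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```

Vertices closer than `eps` to the chord between their neighbors are dropped, which removes the excess vertices produced by the raster contour while preserving true corners.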
+
+# 5.1 Datasets
+
+We trained AtlantaNet using publicly available datasets: PanoContext [32], Stanford 2D-3D [25] and Matterport3D [17]. To simplify comparison, we arrange testing following the splits (cuboid layout or general Manhattan World) adopted by other works [33,6,26,30]. In addition, we introduce a specific testing set of a hundred images to benchmark more complex Atlanta World cases (AtlantaLayout). This testing set was created by annotating a selection of images from Matterport3D [17] and Structured3D [15]. For cuboid and simple Manhattan layouts, we follow the same training/validation/test split proposed by LayoutNet [33] and HorizonNet [26], while for general Manhattan World we follow the data split and annotation provided by Zou et al. [34] (i.e., MatterportLayout).
+
+To test Atlanta World layouts, we extend the existing testing sets with annotated 3D layouts having less restrictive assumptions, such as rooms with curved walls or non-right corner angles. In this case, to ensure a fair evaluation, we prepared the test set by combining the new annotations with a subset of test images taken from the MatterportLayout testing set.
+
+# 5.2 Performance
+
+We evaluate the performance of our approach by following the standard evaluation metrics proposed by Zou et al. [34] and adopted by others [33,26,30]. Specifically, we considered the following metrics: 3D IoU (volumetric intersection-over-union), 2D IoU (pixel-wise intersection-over-union), corner error (L2 distance normalized by the bounding box diagonal), pixel error (floor, ceiling, wall labeling accuracy on the original image) and $\delta_{i}$ (percentage of pixels where the ratio between the prediction label and the ground truth label is within a threshold of 1.25). Following Zou et al. [34], we adopt 3D IoU, corner error and pixel error for cuboid layouts, and 3D IoU, 2D IoU, $\delta_{i}$ for other layouts.
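
Hedged sketches of these metrics follow; the exact corner-matching and label conventions are those of Zou et al. [34], while the implementations below assume pre-matched corners and strictly positive labels:

```python
import numpy as np

def iou_2d(pred_mask, gt_mask):
    """Pixel-wise intersection-over-union between predicted and GT masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def corner_error(pred_corners, gt_corners, bbox_diag):
    """Mean L2 corner distance, normalized by the bounding-box diagonal.
    Assumes predicted and GT corners are already in correspondence."""
    return np.linalg.norm(pred_corners - gt_corners, axis=1).mean() / bbox_diag

def delta_metric(pred, gt, thresh=1.25):
    """Fraction of pixels whose prediction/GT ratio lies within `thresh`
    (labels assumed strictly positive)."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < thresh).mean())
```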
+
+We present a comparison with recent state-of-the-art methods [33,6,26,30] for which comparable results are published or for which source code and data are available. For comparison purposes, we adhere to the methodology reported in the mentioned papers, and we split results into Cuboid layouts, General Manhattan
+
+
+Fig. 4. Qualitative results and comparison. For each row, we show: the original panoramic image annotated with our reconstruction; an intersection-over-union visual comparison, between our approach (green line), HorizonNet [26] (red line) and ground truth (azure mask); the 3D layout obtained with the compared approach [26] (third column) and with ours (fourth column).
+
+World and Atlanta World, preserving the same metrics and setup of the original papers. All results are collected with the same ResNet50 feature encoder. Missing fields in tables indicate cases not reported in original papers.
+
+Tab. 1 reports on performance obtained on Cuboid layouts, a worst-case comparison for our method, since, in contrast to competitors, we do not assume that walls must meet at right angles. Following the same convention presented by Sun et al. [26] and Zou et al. [34], the networks have been trained with three different datasets (i.e., PanoContext, Stanford 2D-3D-S, both of them), and tested with the same testing set, i.e., Stanford 2D-3D-S [25]. Results demonstrate that, on these constrained indoor environments, our approach performs comparably to state-of-the-art approaches tuned for Manhattan World environments, although it does not employ any specific post-processing or cuboid regularization.
+
+In Tab. 2, we report on performance obtained on General Manhattan World and Atlanta World layouts (see Sec. 5.1). We compare our results with those of the methods having the best performance in general Manhattan cases [30,26]. All the tested approaches are trained with the same MatterportLayout dataset [34]
+
+| Training dataset: | PanoContext | PanoContext | PanoContext | S-2D-3D | S-2D-3D | S-2D-3D | PC+Stanford | PC+Stanford | PC+Stanford |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Metrics [%]: | 3D IoU | Corner error | Pixel error | 3D IoU | Corner error | Pixel error | 3D IoU | Corner error | Pixel error |
+| CFL [6] | 65.13 | 1.44 | 4.75 | - | - | - | - | - | - |
+| LayoutNet [33] | - | - | - | 76.33 | 1.04 | 2.70 | 82.66 | 0.83 | 2.59 |
+| Dula-Net [30] | - | - | - | 79.36 | - | - | 86.60 | 0.67 | 2.48 |
+| HorizonNet [26] | 75.57 | 0.94 | 3.18 | 79.79 | 0.71 | 2.39 | 82.66 | 0.69 | 2.27 |
+| Ours | 75.56 | 0.96 | 3.05 | 82.43 | 0.70 | 2.25 | 83.94 | 0.71 | 2.18 |
+Table 1. Cuboid layout performance. All the methods have been tested with the same S-2D-3D testing set [25] and trained with the listed training sets. Our method, even without Manhattan World pre-processing and regularization, is aligned with the performance of the best state-of-the-art methods that exploit Manhattan World constraints.
+
+| Test set | Dula-Net [30] | Dula-Net [30] | Dula-Net [30] | HorizonNet [26] | HorizonNet [26] | HorizonNet [26] | Ours | Ours | Ours |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Metrics: | 3D IoU | 2D IoU | δi | 3D IoU | 2D IoU | δi | 3D IoU | 2D IoU | δi |
+| Manhattan 4 corners | 77.02 | 81.12 | 0.818 | 81.88 | 84.67 | 0.945 | 82.64 | 85.12 | 0.950 |
+| Manhattan 6 corners | 78.79 | 82.69 | 0.859 | 82.26 | 84.82 | 0.938 | 80.10 | 82.00 | 0.815 |
+| Manhattan 8 corners | 71.03 | 74.00 | 0.823 | 71.78 | 73.91 | 0.903 | 71.79 | 74.15 | 0.911 |
+| Manhattan >10 corners | 63.27 | 66.12 | 0.741 | 68.32 | 70.58 | 0.861 | 73.89 | 76.93 | 0.915 |
+| Manhattan Overall | 75.05 | 78.82 | 0.818 | 79.11 | 81.71 | 0.929 | 81.59 | 84.00 | 0.945 |
+| Atlanta 6 corners | - | - | - | 74.45 | 77.13 | 0.862 | 84.26 | 88.78 | 0.972 |
+| Atlanta 8 corners | - | - | - | 65.00 | 66.93 | 0.820 | 78.37 | 80.50 | 0.907 |
+| Atlanta >10 corners-odd | - | - | - | 64.40 | 67.72 | 0.812 | 75.34 | 77.75 | 0.870 |
+| Atlanta Overall | - | - | - | 67.08 | 70.57 | 0.845 | 72.50 | 76.49 | 0.879 |
+| Atlanta FT Overall | - | - | - | 73.53 | 76.38 | 0.851 | 80.01 | 84.33 | 0.924 |
+
+Table 2. General layout performance. All methods are trained with the same MatterportLayout dataset [34] and tested on the MatterportLayout test set and on a specific set of complex Manhattan and non-Manhattan scenes (i.e., AtlantaLayout). For Dula-Net [30] performance we refer to the latest available results using MatterportLayout training [34]. The $>10$ corners-odd row refers to complex layouts, including curved walls.
+
+and evaluated both on the MatterportLayout testing set (labeled Manhattan in Tab. 2) and on a specific testing set (labeled Atlanta in Tab. 2) containing more complex shapes, such as non-right angles and curved walls. For Dula-Net [30] performance, we refer to the latest available results obtained by training with the MatterportLayout dataset by Zou et al. [34]. The Atlanta FT Overall line additionally presents results obtained by augmenting the MatterportLayout training dataset with selected Atlanta scenes for fine-tuning. The results demonstrate the accuracy of our approach on both testing sets, and how it outperforms other approaches as the layout complexity grows. It should be noted that, for all the approaches, a portion of the error depends on the approximate ground truth annotation, which clearly affects both training and performance evaluation.
+
+In Fig. 4, we show a selection of scenes for a qualitative evaluation of our method compared to ground truth and HorizonNet [26]. In the first column we show the original panoramic image annotated with our results. It should be noted how, in these complex cases, even the manual labeling of an equirectangular image is not trivial, as is the visual understanding of the room structure. To provide a more intuitive comparison, we also show the intersection-over-union of the recovered layout (green) with the ground truth floorplan (azure mask) and the same layout reconstructed by HorizonNet [26] (red). In the third and fourth
+
+columns, we show the 3D layout obtained, respectively, with the HorizonNet approach [26] and with ours. Visual results confirm the numerical performance in terms of footprint and height recovery.
+
+# 5.3 Ablation Study
+
+| Backbone | Setup | Gradient loss | 3D IoU | 2D IoU | δi | Train. params |
+| --- | --- | --- | --- | --- | --- | --- |
+| ResNet50 | Two instances trained separately | - | 75.48 | 78.26 | 0.856 | 200M |
+| ResNet50 | Two instances trained jointly | - | 76.04 | 79.92 | 0.815 | 200M |
+| ResNet50 | One instance and mixed feeding | - | 79.26 | 83.35 | 0.854 | 100M |
+| ResNet50 | One instance and mixed feeding | ✓ | 80.79 | 84.12 | 0.902 | 100M |
+| ResNet101 | One instance and mixed feeding | ✓ | 83.22 | 86.96 | 0.940 | 119M |
+
+Table 3. Ablation. The ablation study demonstrates how our proposed design choices improve prediction accuracy. Results are sorted by increasing performance, showing only those cases that actually improve it.
+
+Our ablation experiments are presented in Tab. 3. We report the results averaged across general Manhattan and Atlanta World testing instances (Tab. 2). First, we tested, with the same ResNet50 backbone and without the gradient loss (Sec. 4.3), different configurations: two instances trained separately, two instances trained jointly with a common (overall) loss function, and the adopted mixed approach. While the difference between separate and joint training of two instances is quite small, results confirm that the mixed feeding approach (see Sec. 4.3) provides a consistent performance boost. For the winning set-up (one instance and mixed feeding), we also evaluate the contribution of the gradient loss component. Including the gradient leads to an accuracy improvement, mainly due to increased performance on more complex shapes.
+
+Finally, we show how the performance of our method changes when adopting a deeper backbone, i.e., ResNet101. While the ResNet50 encoder (also adopted by the compared works) provides consistent results on the given datasets (see Sec. 5.1), increasing the backbone depth appears to be a better option for more complex layouts.
+
+# 5.4 Limitations and Failure Cases
+
+
+(a)
+
+
+(b)
+
+
+(c)
+Fig. 5. Failure case. Fig. 5(a) shows a circular room where the ceiling level is not correctly identified, resulting in the wrong layout of Fig. 5(b) and Fig. 5(c) (ground truth as green line).
+
+Our method is trained to return a single connected region for each projection (ceiling and floor), containing the information needed to recover the room layout
+
+(see Sec. 4.2). Fig. 5 shows an example where the layout of a semi-circular room (Fig. 5(a)) is wrongly predicted. Although geometrically self-consistent (see the recovered 3D model in Fig. 5(b)), the recovered shape (yellow ceiling mask in Fig. 5(c)) does not describe the real room layout (green annotation). From a topological point of view, this happens when the horizontal planes are not clearly identifiable, i.e., in our example, when the horizontal ceiling is partially occluded by other horizontal structures.
+
+# 6 Conclusions
+
+We have introduced a novel end-to-end approach to predict the 3D room layout from a single panoramic image. We project the original panoramic image onto two horizontal planes, one above and one below the camera, and use a suitably trained deep neural network to recover the inside-outside segmentation masks of these two images. The upper image mask, which contains less clutter, is used to determine the 2D floor plan in the form of a polygonal layout, while the correlation between the upper and lower masks is used to determine the room height under the Atlanta World model. Our experimental results clearly demonstrate that our method outperforms state-of-the-art solutions in prediction accuracy, in particular in cases of complex wall layouts or curved wall footprints. Moreover, the method requires much less pre- and post-processing than competing solutions based on the more constrained Manhattan World model.
+
+Our current work is proceeding in several directions. In particular, we plan to exploit multiple images to perform multi-view recovery of rooms with large amounts of clutter or complex convex shapes. Moreover, we are also working on integrating this approach into a multi-room structured reconstruction environment, in order to automatically reconstruct complete building floors.
+
+Acknowledgments. This work has received funding from Sardinian Regional Authorities under projects VIGECLAB, AMAC, and TDM (POR FESR 2014-2020). We also acknowledge the contribution of the European Union's H2020 research and innovation programme under grant agreements 813170 (EVOCATION).
+
+# References
+
+1. Acuna, D., Ling, H., Kar, A., Fidler, S.: Efficient interactive annotation of segmentation datasets with polygon-rnn++. In: Proc. CVPR (2018)
+2. Castrejon, L., Kundu, K., Urtasun, R., Fidler, S.: Annotating object instances with a polygon-rnn. In: Proc. CVPR (2017)
+3. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI 40(4), 834-848 (2017)
+4. Delage, E., Honglak Lee, Ng, A.Y.: A dynamic Bayesian network model for autonomous 3D reconstruction from a single indoor image. In: Proc. CVPR. vol. 2, pp. 2418-2428 (2006)
+5. Douglas, D.H., Peucker, T.K.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization 10(2), 112-122 (1973)
+6. Fernandez-Labrador, C., Fácil, J.M., Perez-Yus, A., Demonceaux, C., Civera, J., Guerrero, J.J.: Corners for layout: End-to-end layout recovery from 360 images. arXiv:1903.08094 (2019)
+7. Flint, A., Murray, D., Reid, I.: Manhattan scene understanding using monocular, stereo, and 3D features. In: Proc. ICCV. pp. 2228-2235 (2011)
+8. Flint, A., Mei, C., Murray, D., Reid, I.: A dynamic programming approach to reconstructing building interiors. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) Proc ECCV. pp. 394-407 (2010)
+9. Gallagher, A.C.: Using vanishing points to correct camera rotation in images. In: Proc. CRV. pp. 460-467 (2005)
+10. Geyer, C., Daniilidis, K.: A unifying theory for central panoramic systems and practical implications. In: Proc. ECCV. pp. 445-461 (2000)
+11. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proc. CVPR. pp. 770-778 (2016)
+12. Hedau, V., Hoiem, D., Forsyth, D.: Recovering the spatial layout of cluttered rooms. In: Proc. ICCV. pp. 1849-1856 (2009)
+13. Hoiem, D., Efros, A.A., Hebert, M.: Recovering surface layout from an image. International Journal of Computer Vision 75(1), 151-172 (Oct 2007)
+14. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2014)
+15. Kujiale.com: Structured3D Data. https://structured3d-dataset.org/ (2019), [Accessed: 2019-09-25]
+16. Lee, D.C., Hebert, M., Kanade, T.: Geometric reasoning for single image structure recovery. In: Proc. CVPR. pp. 2136-2143 (2009)
+17. Matterport: Matterport3D. https://github.com/niessner/Matterport (2017), [Accessed: 2019-09-25]
+18. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. In: Proc. NIPS (2017)
+19. Pintore, G., Garro, V., Ganovelli, F., Agus, M., Gobbetti, E.: Omnidirectional image capture on mobile devices for fast automatic generation of 2.5D indoor maps. In: Proc. IEEE WACV. pp. 1-9 (2016)
+20. Pintore, G., Mura, C., Ganovelli, F., Fuentes-Perez, L., Pajarola, R., Gobbetti, E.: State-of-the-art in automatic 3d reconstruction of structured indoor environments. Comput. Graph. Forum 39(2), 667-699 (2020)
+
+21. Pintore, G., Pintus, R., Ganovelli, F., Scopigno, R., Gobbetti, E.: Recovering 3D existing-conditions of indoor structures from spherical images. Computers & Graphics 77, 16-29 (2018)
+22. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234-241 (2015)
+23. Schindler, G., Dellaert, F.: Atlanta world: an expectation maximization framework for simultaneous low-level edge grouping and camera calibration in complex manmade environments. In: Proc. CVPR. vol. 1, pp. I-I (2004)
+24. Shi, X., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.k., Woo, W.c.: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In: Proc. NIPS. p. 802-810 (2015)
+25. Stanford University: BuildingParser Dataset. http://buildingparser.stanford.edu/dataset.html (2017), [Accessed: 2019-09-25]
+26. Sun, C., Hsiao, C.W., Sun, M., Chen, H.T.: HorizonNet: Learning room layout with 1D representation and pano stretch data augmentation. In: Proc. CVPR (June 2019)
+27. Xu, J., Stenger, B., Kerola, T., Tung, T.: Pano2CAD: Room layout from a single panorama image. In: Proc. WACV. pp. 354-362 (2017)
+28. Yang, H., Zhang, H.: Efficient 3D room shape recovery from a single panorama. In: Proc. CVPR. pp. 5422-5430 (2016)
+29. Yang, S.T., Peng, C.H., Wonka, P., Chu, H.K.: PanoAnnotator: A semi-automatic tool for indoor panorama layout annotation. In: Proc. SIGGRAPH Asia 2018 Posters. pp. 34:1-34:2 (2018)
+30. Yang, S.T., Wang, F.E., Peng, C.H., Wonka, P., Sun, M., Chu, H.K.: DuLa-Net: A dual-projection network for estimating room layouts from a single RGB panorama. In: Proc. CVPR (2019)
+31. Yang, Y., Jin, S., Liu, R., Yu, J.: Automatic 3D indoor scene modeling from single panorama. In: Proc. CVPR. pp. 3926-3934 (2018)
+32. Zhang, Y., Song, S., Tan, P., Xiao, J.: PanoContext: A whole-room 3D context model for panoramic scene understanding. In: Proc. ECCV. pp. 668-686 (2014)
+33. Zou, C., Colburn, A., Shan, Q., Hoiem, D.: LayoutNet: Reconstructing the 3D room layout from a single RGB image. In: Proc. CVPR. pp. 2051-2059 (2018)
+34. Zou, C., Su, J.W., Peng, C.H., Colburn, A., Shan, Q., Wonka, P., Chu, H.K., Hoiem, D.: 3d manhattan room layout reconstruction from a single 360 image (2019)
\ No newline at end of file
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/images.zip b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e5091c65abcda9e9636c206cc74a3f49946dd20c
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05b6578faa6d73d1b16be17a9e8e02afcfafa61aa42d59a94401af216aa8ade5
+size 347648
diff --git a/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/layout.json b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5848b0ca26cf32329dbcf3787d3b659838198a0d
--- /dev/null
+++ b/atlantanetinferringthe3dindoorlayoutfromasingle360imagebeyondthemanhattanworldassumption/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6ca439e300500193ecfc042fd5763928a8ee7b12f87001c65a3b32b71fb87bb
+size 342753
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_content_list.json b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9affd987cd2240c5bd1256137e16fdd653c10a26
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7606cc1f8a287f3c9305e8996dec51efc6511d2fa8291d38f9142427c086e94b
+size 68978
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_model.json b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ca0781cc85f04ef36b8be6c9bbbe7ae54a621ea7
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:541d39fd99826cc0053e451d295510927ea46f25c018e77deae9d960bd071b35
+size 87956
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_origin.pdf b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f95d18361fe4f311a2f14ee7ef4b01d23967da11
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/9c5c21c3-35dd-442c-bbb8-1689dd44bf72_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c18584908c1ca036dc83be7765fda6e682d2230909d0c483f0508d276be5f224
+size 16583004
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/full.md b/atlasendtoend3dscenereconstructionfromposedimages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc2038fa7bd21fe5aefe7c24a45dc573ea676780
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/full.md
@@ -0,0 +1,262 @@
+# Atlas: End-to-End 3D Scene Reconstruction from Posed Images
+
+Zak Murez$^{1}$, Tarrence van As$^{2*}$, James Bartolozzi$^{*}$, Ayan Sinha$^{1}$, Vijay Badrinarayanan$^{3*}$, and Andrew Rabinovich$^{2*}$
+
+$^{1}$ Magic Leap Inc., CA, USA (zak@murez.com, asinha@magicleap.com, bartolozzij@gmail.com)
+
+$^{2}$ InsideIQ Inc., CA, USA ({tarrence,andrew}@insideiq.team)
+
+$^{3}$ Wayve.ai, London, UK (vijay@wayve.ai)
+
+$^{*}$ Work done at Magic Leap
+
+Abstract. We present an end-to-end 3D reconstruction method for a scene by directly regressing a truncated signed distance function (TSDF) from a set of posed RGB images. Traditional approaches to 3D reconstruction rely on an intermediate representation of depth maps prior to estimating a full 3D model of a scene. We hypothesize that a direct regression to 3D is more effective. A 2D CNN extracts features from each image independently which are then back-projected and accumulated into a voxel volume using the camera intrinsics and extrinsics. After accumulation, a 3D CNN refines the accumulated features and predicts the TSDF values. Additionally, semantic segmentation of the 3D model is obtained without significant computation. This approach is evaluated on the Scannet dataset where we significantly outperform state-of-the-art baselines (deep multiview stereo followed by traditional TSDF fusion) both quantitatively and qualitatively. We compare our 3D semantic segmentation to prior methods that use a depth sensor since no previous work attempts the problem with only RGB input.
+
+Keywords: Multiview Stereo; TSDF; 3D Reconstruction
+
+# 1 Introduction
+
+Reconstructing the world around us is a long standing goal of computer vision. Recently many applications have emerged, such as autonomous driving and augmented reality, which rely heavily upon accurate 3D reconstructions of the surrounding environment. These reconstructions are often estimated by fusing depth measurements from special sensors, such as structured light, time of flight, or LIDAR, into 3D models. While these sensors can be extremely effective, they require special hardware making them more cumbersome and expensive than systems that rely solely on RGB cameras. Furthermore, they often suffer from noise and missing measurements due to low albedo and glossy surfaces as well as occlusion.
+
+Another approach to 3D reconstruction is to use monocular [18,31,32], binocular [3,5] or multiview [23,27,28,51] stereo methods which take RGB images (one,
+
+two, or multiple respectively) and predict depth maps for the images. Despite the plethora of recent research, these methods are still much less accurate than depth sensors, and do not produce satisfactory results when fused into a 3D model.
+
+
+Fig. 1: Overview of our method. Features from each image are backprojected along rays and accumulated into a feature volume. Then a 3D CNN refines the features and regresses a TSDF volume. Finally, a mesh is extracted from the TSDF. Semantic labels can also be output.
+
+In this work, we observe that depth maps are often just intermediate representations that are then fused with other depth maps into a full 3D model. As such, we propose a method that takes a sequence of RGB images and directly predicts a full 3D model in an end-to-end trainable manner. This allows the network to fuse more information and learn better geometric priors about the world, producing much better reconstructions. Furthermore, it reduces the complexity of the system by eliminating steps like frame selection, as well as reducing the required compute by amortizing the cost over the entire sequence.
+
+Our method is inspired by two main lines of work: cost volume based multi view stereo [28,57] and Truncated Signed Distance Function (TSDF) refinement [12,15]. Cost volume based multi view stereo methods construct a cost volume using a plane sweep. Here, a reference image is warped onto the target image for each of a fixed set of depth planes and stacked into a 3D cost volume. For the correct depth plane, the reference and target images will match while for other depth planes they will not. As such, the depth is computed by taking the argmin over the planes. This is made more robust by warping image features extracted by a CNN instead of the raw pixel measurements, and by filtering the cost volume with another CNN prior to taking the argmin.
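
A toy version of the plane-sweep idea can be written for a rectified pair, where each "depth plane" reduces to a horizontal pixel shift; this is a sketch of the mechanism only, since real systems warp CNN features with the full plane-induced homography and filter the cost volume with another CNN before the argmin:

```python
import numpy as np

def plane_sweep_depth(target_feat, ref_feat, disparities):
    """Build a cost volume by shifting reference features over candidate
    'planes' (here: integer disparities) and take the per-pixel argmin."""
    h, w = target_feat.shape
    cost = np.full((len(disparities), h, w), np.inf)
    for i, d in enumerate(disparities):
        warped = np.full_like(ref_feat, np.nan)
        if d == 0:
            warped[:] = ref_feat
        else:
            warped[:, d:] = ref_feat[:, :-d]       # shift = warp for this plane
        c = np.abs(target_feat - warped)           # photometric cost per plane
        cost[i] = np.where(np.isnan(c), np.inf, c)
    return np.argmin(cost, axis=0)                 # winning plane per pixel
```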
+
+TSDF refinement starts by fusing depth maps from a depth sensor into an initial voxel volume using TSDF fusion [10], in which each voxel stores the truncated signed distance to the nearest surface. Note that a triangulated mesh can then be extracted from this implicit representation by finding the zero crossing surface using marching cubes [34]. TSDF refinement methods [12, 15] take this noisy, incomplete TSDF as input and refine it by passing it through a 3D convolutional encoder-decoder network.
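
The running-average TSDF update can be sketched along a single camera ray; the truncation and weighting scheme follows the standard formulation of [10], simplified to one ray, with the mesh then extracted at the zero crossing by marching cubes [34]:

```python
import numpy as np

def integrate_depth(tsdf, weight, zs, depth, trunc):
    """One TSDF fusion [10] update along a single camera ray: each voxel at
    distance `zs` along the ray folds the truncated signed distance to the
    observed surface (`depth`) into its running average."""
    sdf = np.clip(depth - zs, -trunc, trunc) / trunc   # +1 in front, -1 behind
    valid = depth - zs > -trunc                        # skip voxels far behind
    w_new = weight + valid
    tsdf = np.where(valid, (tsdf * weight + sdf) / np.maximum(w_new, 1), tsdf)
    return tsdf, w_new

zs = np.linspace(0.0, 2.0, 21)             # voxel centres along one ray
tsdf, w = np.zeros_like(zs), np.zeros_like(zs)
for d in (0.95, 1.05):                     # two noisy views of a wall at z = 1.0
    tsdf, w = integrate_depth(tsdf, w, zs, d, trunc=0.3)
```

Averaging the two noisy observations places the zero crossing at the true surface depth, which is exactly the noise-reduction behavior TSDF refinement networks then improve upon.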
+
+Similar to cost volume multi view stereo approaches, we start by using a 2D CNN to extract features from a sequence of RGB images. These features
+
+are then back projected into a 3D volume using the known camera intrinsics and extrinsics. However, unlike cost volume approaches which back project the features into a target view frustum using image warping, we back project into a canonical voxel volume, where each pixel gets mapped to a ray in the volume (similar to [46]). This avoids the need to choose a target image and allows us to fuse an entire sequence of frames into a single volume. We fuse all the frames into the volume using a simple running average. Next, as in both cost volume and TSDF refinement, we pass our voxel volume through a 3D convolutional encoder-decoder to refine the features. Finally, as in TSDF refinement, our feature volume is used to regress the TSDF values at each voxel (see Figure 1).
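
A minimal sketch of the back-projection step: every voxel centre is projected into the image, so each pixel's feature is scattered along the ray of voxels it sees. The shapes, the nearest-pixel lookup, and the single-frame scatter (rather than the paper's running average over frames) are illustrative assumptions:

```python
import numpy as np

def backproject_features(feats, K, T_wc, origin, voxel_size, dims):
    """Scatter per-pixel features (c, h, w) into a canonical voxel volume of
    shape `dims`, using intrinsics K and world-to-camera extrinsics T_wc."""
    c_feat, h, w = feats.shape
    # world coordinates of all voxel centres, shape (3, N)
    grid = np.stack(np.meshgrid(*[np.arange(d) for d in dims], indexing="ij"))
    pts_w = grid.reshape(3, -1) * voxel_size + origin[:, None]
    # transform into the camera frame and apply the pinhole projection
    pts_c = T_wc[:3, :3] @ pts_w + T_wc[:3, 3:4]
    uvz = K @ pts_c
    z = uvz[2]
    u = np.round(uvz[0] / np.maximum(z, 1e-9)).astype(int)
    v = np.round(uvz[1] / np.maximum(z, 1e-9)).astype(int)
    inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    vol = np.zeros((c_feat,) + tuple(dims))
    vol.reshape(c_feat, -1)[:, inside] = feats[:, v[inside], u[inside]]
    return vol, inside.reshape(dims)
```

Fusing a sequence would repeat this per frame and divide the accumulated features by per-voxel visibility counts, i.e. the simple running average described above.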
+
+We train and evaluate our network on real scans of indoor rooms from the Scannet [11] dataset. Our method significantly outperforms state-of-the-art multi view stereo baselines [28,51] producing accurate and complete meshes.
+
+As an additional bonus, for minimal extra compute, we can add an additional head to our 3D CNN and perform 3D semantic segmentation. While the problems of 3D semantic and instance segmentation have received a lot of attention recently [21,25], all previous methods assume the depth was acquired using a depth sensor. Although our 3D segmentations are not competitive with the top performers on the Scannet benchmark leader board, we establish a strong baseline for the new task of 3D semantic segmentation from multi view RGB.
+
+# 2 Related Work
+
+# 2.1 3D reconstruction
+
+Reconstructing a 3D model of a scene usually involves acquiring depth for a sequence of images and fusing the depth maps using a 3D data structure. The most common 3D structure for depth accumulation is the voxel volume used by TSDF fusion [10]. However, surfels (oriented point clouds) are starting to gain popularity [44,55]. These methods are usually used with a depth sensor, but can also be applied to depth maps predicted from monocular or stereo images.
+
+With the rise of deep learning, monocular depth estimation has seen huge improvements [18, 31, 32], however their accuracy is still far below state-of-the-art stereo methods. A popular classical approach to stereo [23] uses mutual information and semi global matching to compute the disparity between two images. Similar approaches have been incorporated into SLAM systems such as COLMAP [42, 43] and CNN-SLAM [50]. More recently, several end-to-end plane sweep algorithms have been proposed. DeepMVS [27] uses a patch matching network. MVDepthNet [51] constructs the cost volume from raw pixel measurements and performs 2D convolutions, treating the planes as feature channels. GPMVS [26] builds upon this and aggregates information into the cost volume over long sequences using a Gaussian process. MVSNet [57] and DPSNet [28] construct the cost volume from features extracted from the images using a 2D CNN. They then filter the cost volume using 3D convolutions on the 4D tensor. R-MVSNet [58] reduces the memory requirements of MVSNet by replacing the
+
+3D CNN with a recurrent CNN, while P-MVSNet [6] starts with a low resolution MVSNet and then iteratively refines the estimate using their point flow module. All of these methods require choosing a target image to predict depth for and then finding suitable neighboring reference images. Recent binocular stereo methods [3,5] use a similar cost volume approach, but avoid frame selection by using a fixed baseline stereo pair. Depth maps over a sequence are computed independently (or weakly coupled in the case of [26]). In contrast to these approaches, our method constructs a single coherent 3D model from a sequence of input images directly.
+
+While TSDF fusion is simple and effective, it cannot reconstruct partially occluded geometry and requires averaging many measurements to reduce noise. As such, learned methods have been proposed to improve the fusion. OctNet-Fusion [40] uses a 3D encoder-decoder to aggregate multiple depth maps into a TSDF and shows results on single objects and portions of scans. ScanComplete [15] builds upon this and shows results for entire rooms. SG-NN [12] improves upon ScanComplete by increasing the resolution using sparse convolutions [21] and training using a novel self-supervised training scheme. 3D-SIC [24] focuses on 3D instance segmentation using region proposals and adds a per instance completion head. Routed fusion [54] uses 2D filtering and 3D convolutions in view frustums to improve aggregation of depth maps.
+
+More similar in spirit to ours are networks that take one or more images and directly predict a 3D representation. 3D-R2N2 [9] encodes images to a latent space and then decodes a voxel occupancy volume. Octree generating networks (OGN) [49] increase the resolution by using an octree data structure to improve the efficiency of 3D voxel volumes. DeepSDF [38] instead learns a generative model that can output an SDF value for any input position, avoiding discretization of the volume. These methods encode the input to a small latent code and report results on single objects, mostly from ShapeNet [4]. Such a small latent code is unlikely to contain enough information to reconstruct an entire scene (follow-up work [2], concurrent with ours, addresses this problem, but does not apply it to RGB-only reconstruction). Pix2Vox [56] encodes each image to a latent code, decodes a voxel representation from each, and then fuses them. This is similar to our approach, but we explicitly model the 3D geometry of camera rays, allowing us to learn better representations and scale to full scenes. SurfNet [45] learns a 3D offset from a template UV map of a surface. Point set generation networks [17] learn to generate point clouds with a fixed number of points. Pixel2Mesh [52] uses a graph convolutional network to directly predict a triangulated mesh. Mesh R-CNN [20] builds upon 2D object detection [22], adding a head that predicts a voxel occupancy grid for each instance and then refines it using a graph convolutional network on a mesh.
+
+Back projecting image features into a voxel volume and then refining them using a 3D CNN has also been used for human pose estimation [29,59]. These works regress 3D heat maps that are used to localize joint locations.
+
+DeepVoxels [46] and the follow-up work on scene representation networks [47] accumulate features into a 3D volume, forming an unsupervised representation of the world which can then be used to render novel views without the need for explicit intermediate geometric representations.
+
+# 2.2 3D Semantic Segmentation
+
+In addition to reconstructing geometry, many applications require semantic labeling of the reconstruction to provide a richer representation. Broadly speaking, there are two approaches to solving this problem: 1) predict semantics on the 2D input images using a 2D segmentation network [1, 7, 22] and back project the labels to 3D [35-37]; 2) directly predict the semantic labels in 3D space. All of these methods assume depth is provided by a depth sensor. A notable exception is Kimera [41], which uses multiview stereo [23] to predict depth; however, it only shows results on synthetic data with ground truth 2D segmentations.
+
+SGPN [53] formulates instance segmentation as a 3D point cloud clustering problem, predicting a similarity matrix and clustering the 3D point cloud to derive semantic and instance labels. 3D-SIS [25] improves upon these approaches by fusing 2D features into a 3D representation. RGB images are encoded using a 2D CNN and back projected onto the 3D geometry reconstructed from depth maps. A 3D CNN is then used to predict 3D object bounding boxes and semantic labels. SSCN [21] predicts semantics on a high-resolution voxel volume, enabled by sparse convolutions.
+
+In contrast to these approaches, we propose a strong baseline to the relatively untouched problem of 3D semantic segmentation without a depth sensor.
+
+# 3 Method
+
+Our method takes as input an arbitrary length sequence of RGB images, each with known intrinsics and pose. These images are passed through a 2D CNN backbone to extract features. The features are then back projected into a 3D voxel volume and accumulated using a running average. Once the image features have been fused into 3D, we regress a TSDF directly using a 3D CNN (see Fig. 2). We also experiment with adding an additional head to predict semantic segmentation.
+
+# 3.1 Feature Volume Construction
+
+Let $I_{t} \in \mathbb{R}^{3 \times h \times w}$ be an image in a sequence of $T$ RGB images. We extract features $F_{t} = F(I_{t}) \in \mathbb{R}^{c \times h \times w}$ using a standard 2D CNN where $c$ is the feature dimension. These 2D features are then back projected into a 3D voxel volume using the known camera intrinsics and extrinsics, assuming a pinhole camera model. Consider a voxel volume $V \in \mathbb{R}^{c \times H \times W \times D}$
+
+$$
+V_{t}(:, i, j, k) = F_{t}(:, \hat{i}, \hat{j}), \quad \text{with} \tag{1}
+$$
+
+
+Fig. 2: Schematic of our method. Features are extracted from a sequence of images using a 2D CNN and then back projected into a 3D volume. These volumes are accumulated and then passed through a 3D CNN to directly regress a TSDF reconstruction of the scene. We can also jointly predict the 3D semantic segmentation of the scene.
+
+$$
+\left[ \begin{array}{l} \hat{i} \\ \hat{j} \end{array} \right] = \Pi K_{t} P_{t} \left[ \begin{array}{l} i \\ j \\ k \\ 1 \end{array} \right], \tag{2}
+$$
+
+where $P_{t}$ and $K_{t}$ are the extrinsics and intrinsics matrices for image $t$ respectively, $\Pi$ is the perspective mapping, and $:$ is the slice operator. Here $(i,j,k)$ are the voxel coordinates in world space and $(\hat{i},\hat{j})$ are the pixel coordinates in image space. Note that this means all voxels along a camera ray are filled with the same features, corresponding to that pixel.
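
The back projection of Eqs. (1)-(2) can be sketched as follows. This is a minimal NumPy sketch under assumed conventions: a metric grid defined by hypothetical `origin` and `voxel_size` parameters (not named in the paper), extrinsics mapping world to camera, and nearest-pixel rounding; the paper's actual implementation is in PyTorch and may differ in detail.

```python
import numpy as np

def backproject_features(features, K, P, grid_shape, voxel_size, origin):
    """Fill every voxel along a camera ray with the features of the pixel it
    projects to (Eqs. 1-2). `grid_shape`, `voxel_size`, `origin` are our own
    illustrative names for the voxel grid layout.

    features: (c, h, w) 2D feature map F_t
    K: (3, 3) intrinsics; P: (3, 4) extrinsics [R|t], world -> camera
    Returns the feature volume (c, H, W, D) and a binary frustum mask (H, W, D).
    """
    c, h, w = features.shape
    H, W, D = grid_shape
    # World coordinates of every voxel center, in homogeneous form.
    i, j, k = np.meshgrid(np.arange(H), np.arange(W), np.arange(D), indexing="ij")
    pts = origin[:, None] + voxel_size * np.stack([i, j, k]).reshape(3, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    cam = K @ (P @ pts_h)                        # project: K P [i j k 1]^T
    z = cam[2]
    pix = cam[:2] / np.maximum(z, 1e-6)          # perspective mapping Pi
    u = np.round(pix[0]).astype(int)
    v = np.round(pix[1]).astype(int)
    # Voxels in front of the camera that land inside the image (view frustum).
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    volume = np.zeros((c, H, W, D), dtype=features.dtype)
    vol_flat = volume.reshape(c, -1)             # view into `volume`
    vol_flat[:, valid] = features[:, v[valid], u[valid]]
    mask = valid.reshape(H, W, D).astype(features.dtype)
    return volume, mask
```

Note that, as stated in the text, every voxel on a ray receives the same pixel features; no depth decision is made at this stage.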
+
+These feature volumes are accumulated over the entire sequence using a weighted running average similar to TSDF fusion as follows:
+
+$$
+\bar{V}_{t} = \frac{\bar{V}_{t-1} \bar{W}_{t-1} + V_{t}}{\bar{W}_{t-1} + W_{t}}, \tag{3}
+$$
+
+$$
+\bar{W}_{t} = \bar{W}_{t-1} + W_{t}. \tag{4}
+$$
+
+For the weights we use a binary mask $W_{t}(i,j,k) \in \{0,1\}$ indicating whether voxel $(i,j,k)$ is inside the view frustum of camera $t$.
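
The running-average accumulation of Eqs. (3)-(4) can be sketched as below. This is a NumPy sketch with illustrative class and attribute names; voxels never observed (zero weight) are left at zero.

```python
import numpy as np

class FeatureAccumulator:
    """Weighted running average of back-projected feature volumes (Eqs. 3-4)."""

    def __init__(self, c, grid_shape):
        self.V = np.zeros((c, *grid_shape))   # accumulated features, V-bar
        self.W = np.zeros(grid_shape)         # accumulated weights, W-bar

    def integrate(self, V_t, W_t):
        # V_bar_t = (V_bar_{t-1} * W_bar_{t-1} + V_t) / (W_bar_{t-1} + W_t)
        num = self.V * self.W + V_t
        den = self.W + W_t
        self.V = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
        self.W = den                           # W_bar_t = W_bar_{t-1} + W_t
        return self.V
```

Because $W_t$ is binary and $V_t$ is zero outside the frustum, dividing by the summed weights yields a per-voxel average over the frames that observed that voxel.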
+
+# 3.2 3D Encoder-Decoder
+
+Once the features are accumulated into the voxel volume, we use a 3D convolutional encoder-decoder network to refine the features and regress the output TSDF (Fig. 3). Each layer of the encoder and decoder uses a set of $3 \times 3 \times 3$ residual blocks. Downsampling is implemented with $3 \times 3 \times 3$ stride 2 convolution, while upsampling uses trilinear interpolation followed by a $1 \times 1 \times 1$ convolution to change the feature dimension. The feature dimension is doubled with each downsampling and halved with each upsampling. All convolution layers are followed by
+
+
+Fig. 3: Our 3D encoder-decoder architecture. Blue boxes denote residual blocks, green boxes are stride 2 convolutions and red boxes are trilinear upsampling. The arrows from the encoder to the decoder indicate skip connections. Our network predicts TSDFs in a coarse to fine manner with the previous resolution being used to sparsify the next resolution (shown as small arrows in the decoder).
+
+
+
+batch normalization and ReLU. We also include additive skip connections from the encoder to the decoder.
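
The building blocks just described ($3 \times 3 \times 3$ residual blocks, stride-2 downsampling that doubles channels, trilinear upsampling followed by a $1 \times 1 \times 1$ convolution that halves them) might look as follows in PyTorch. This is a sketch; the layer counts and widths here are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock3d(nn.Module):
    """3x3x3 residual block; every convolution is followed by batchnorm + ReLU."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)            # additive residual connection

def downsample(c_in):
    # Stride-2 3x3x3 convolution, doubling the feature dimension.
    return nn.Sequential(nn.Conv3d(c_in, 2 * c_in, 3, stride=2, padding=1),
                         nn.BatchNorm3d(2 * c_in), nn.ReLU(inplace=True))

def upsample(c_in):
    # Trilinear interpolation, then a 1x1x1 convolution halving the channels.
    return nn.Sequential(nn.Upsample(scale_factor=2, mode="trilinear",
                                     align_corners=False),
                         nn.Conv3d(c_in, c_in // 2, 1))
```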
+
+At the topmost layer of the encoder-decoder, we use a $1 \times 1 \times 1$ convolution followed by a tanh activation to regress the final TSDF values. For our semantic segmentation models we also include an additional $1 \times 1 \times 1$ convolution to predict the segmentation logits.
+
+We also include intermediate output heads at each decoder resolution, prior to upsampling. These additional predictions provide intermediate supervision, helping the network train faster, and guide the later resolutions to focus on refining predictions near surfaces. At each resolution, any voxel predicted beyond a fraction (0.99) of the truncation distance is clamped to one at the following resolutions. Furthermore, loss is only backpropagated for non-clamped voxels. Without this, the loss at higher resolutions is dominated by the large number of empty-space voxels and the network has a harder time learning fine details.
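
The clamping rule above can be sketched as follows: voxels whose coarse prediction exceeds a fraction of the truncation distance are clamped at the next (2x) resolution, and the loss mask keeps only the remaining voxels. A NumPy sketch with illustrative names; TSDF values are assumed to lie in $[-1, 1]$.

```python
import numpy as np

def masked_tsdf_targets(pred_coarse, trunc_frac=0.99):
    """Given a coarse TSDF prediction (H, W, D), return the clamp mask at the
    next resolution (2H, 2W, 2D) and the mask of voxels that still receive
    loss. Names and the nearest-neighbour upsampling are our own choices."""
    clamp = np.abs(pred_coarse) > trunc_frac
    # Upsample the clamp mask by nearest neighbour to the next resolution.
    clamp_fine = clamp.repeat(2, 0).repeat(2, 1).repeat(2, 2)
    loss_mask = ~clamp_fine        # backpropagate only for non-clamped voxels
    return clamp_fine, loss_mask
```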
+
+Note that since our features are back projected along entire rays, the voxel volume is filled densely and thus we cannot take advantage of sparse convolutions [21] in the encoder. However, the multiscale outputs can be used to sparsify the feature volumes in the decoder allowing for the use of sparse convolutions similar to [12]. In practice, we found that we were able to train our models at $4cm^3$ voxel resolution without the need for sparse convolutions.
+
+# 4 Implementation Details
+
+We use a Resnet50-FPN [33] followed by the merging method of [30] with 32 output feature channels as our 2D backbone. Our 3D CNN consists of a four scale resolution pyramid where we double the number of channels each time we halve the resolution. The encoder consists of $(1,2,3,4)$ residual blocks at each scale respectively, and the decoder consists of $(3,2,1)$ residual blocks.
+
+We supervise the multiscale TSDF reconstructions using an $\ell_1$ loss to the ground truth TSDF values. Following [14], we log-transform the predicted and target values before applying the $\ell_1$ loss, and only backpropagate loss for voxels that were observed in the ground truth (i.e. have TSDF values strictly less than 1). However, to prevent the network from hallucinating artifacts behind walls, outside the room, we also mark all voxels whose entire vertical column is equal to 1 and penalize predictions in these areas too. The intuition is that if an entire vertical column was never observed, it was probably not within the room. To construct the ground truth TSDFs, we run TSDF fusion at each resolution on the full sequences prior to training.
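
The log-transformed $\ell_1$ loss with the observed-voxel mask can be sketched as below. A NumPy sketch; the signed-log transform follows [14], and the exact masking (including the vertical-column rule) in the paper's PyTorch code may differ.

```python
import numpy as np

def log_l1_loss(pred, target):
    """Mean l1 loss between signed-log-transformed TSDFs, computed only over
    voxels observed in the ground truth (target < 1)."""
    logt = lambda t: np.sign(t) * np.log1p(np.abs(t))
    observed = target < 1.0
    if not observed.any():
        return 0.0
    return np.abs(logt(pred[observed]) - logt(target[observed])).mean()
```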
+
+We train the network end-to-end using 50 images selected randomly throughout the full sequence. We use a voxel size of $4\,cm^3$ with a grid of $(160 \times 160 \times 64)$ voxels, corresponding to a volume of $(6.4 \times 6.4 \times 2.56)$ meters. At test time, we accumulate the feature volumes in place (since we do not need to store the intermediate activations for backpropagation), allowing us to operate on arbitrary length sequences (often thousands of frames for ScanNet), and we use a $(400 \times 400 \times 104)$ voxel grid corresponding to a volume of $(16 \times 16 \times 4.16)$ meters. We use the ADAM optimizer with a learning rate of $5\mathrm{e}{-4}$ and 16-bit mixed precision operations. Training the network takes around 24 hours on 8 Titan RTX GPUs with a batch size of 8 (1 sequence per GPU) and synchronized batchnorm. Our model is implemented with PyTorch and PyTorch Lightning [16].
+
+# 5 Results
+
+We evaluate our method on ScanNet [11], which consists of 2.5M images across 707 distinct spaces. We adopt the standard train/validation/test splits. The 3D reconstructions are benchmarked using standard 2D depth metrics (Table 2) and 3D metrics (Table 3), defined in Table 1. We also show qualitative comparisons in Figure 6, where the advantage of our method is most apparent.
+
+We compare our method to 4 state-of-the-art baselines: COLMAP [42, 43], MVDepthNet [51], GPMVS [26], and DPSNet [28]. For COLMAP we use the default dense reconstruction parameters but with the ground truth poses provided by ScanNet. For each of the learned methods, we fine-tuned the models provided by the authors on ScanNet. At inference time, 6 reference frames were selected temporally with stride 10, centered around the target view. We also mask the boundary pixels, since the networks exhibit visible edge effects that cause poor depth predictions there (leading to the $92.8\%$ completeness).
+
+To evaluate these in 3D we fuse the predicted depth maps using two techniques: TSDF Fusion [10] and point cloud fusion. For COLMAP we use their
+
+default point cloud fusion, while for the other methods we use the implementation of [19]. We found point cloud fusion was more robust to the outliers present in the depth predictions than our implementation of TSDF Fusion. As such, we only report the point cloud fusion results in Table 3 which are strictly better than the TSDF Fusion results (Note that the $L_{1}$ metric is computed using the TSDF Fusion approach as it is not computed in the point cloud fusion approach).
+
+
+Fig. 4: Our method learns to fill holes that are missing from the ground truth. These holes arise from two causes: A) limitations of depth sensors on low albedo and specular surfaces, and B) unobserved regions caused by occlusion and incomplete scans. While other multiview stereo methods often learn to predict depth for these troublesome surfaces, they are not able to complete unobserved geometry.
+
+As seen in Figure 4, our method is able to fill holes that are missing from the ground truth. These holes arise from two causes: A) limitations of depth sensors on low albedo and specular surfaces, and B) unobserved regions caused by occlusion and incomplete scans. While other multiview stereo methods often learn to predict depth for these troublesome surfaces, they are not able to complete unobserved geometry. On the other hand, since our method directly regresses the full TSDF for a scene, it is able to reason about and complete unobserved regions. However, this means that we must take extra care when evaluating the point cloud metrics, otherwise we would be falsely penalized in these regions. We remove geometry that was not observed in the ground truth by taking the rendered depth maps from our predicted mesh and re-fusing them using TSDF fusion into a trimmed mesh. This guarantees that there is no mesh in areas that were not observed in the ground truth.
+
+Our method achieves state-of-the-art on about half of the metrics and is competitive on all of them. However, as seen in Figure 6, our results are qualitatively significantly better than those of previous methods. While the $L_{1}$ metric on the TSDF seems to reflect this performance gap better, the inability of the other metrics to capture it indicates a need for additional, more perceptual metrics.
+
+As mentioned previously, we augment the existing 3D CNN with a semantic segmentation head, requiring only a single additional $1 \times 1 \times 1$ convolution, to not only reconstruct the 3D structure of the scene but also provide semantic labels for the surfaces. Since no prior work attempts 3D semantic segmentation from only RGB images, and there are no established benchmarks, we propose a new evaluation procedure. The semantic labels from the predicted mesh are transferred onto the ground truth mesh using nearest neighbor lookup on the vertices, and then the standard IOU metric can be used. The results are reported in Table 4 and Fig. 7 (note that this is an unfair comparison, since all prior methods include depth as input).
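
The proposed evaluation procedure can be sketched as follows: each ground-truth vertex takes the label of its nearest predicted vertex, and per-class IOU is then averaged. A brute-force $O(NM)$ NumPy sketch with illustrative names; a k-d tree would be used in practice.

```python
import numpy as np

def transfer_labels_miou(pred_verts, pred_labels, gt_verts, gt_labels, n_classes):
    """Transfer predicted-mesh labels to the ground-truth mesh by nearest
    neighbour vertex lookup, then compute mean IOU over classes present."""
    # Nearest predicted vertex for every ground-truth vertex.
    d2 = ((gt_verts[:, None, :] - pred_verts[None, :, :]) ** 2).sum(-1)
    transferred = pred_labels[d2.argmin(axis=1)]
    ious = []
    for c in range(n_classes):
        inter = np.sum((transferred == c) & (gt_labels == c))
        union = np.sum((transferred == c) | (gt_labels == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```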
+
+Table 1: Definitions of metrics: $n$ is the number of pixels with both valid ground truth and predictions, $d$ and ${d}^{ * }$ are the predicted and ground truth depths (the predicted depth from our method is computed by rendering the predicted mesh). $t$ and ${t}^{ * }$ are the predicted and ground truth TSDFs while $p$ and ${p}^{ * }$ are the predicted and ground truth point clouds.
+
+| 2D | | 3D | |
+| --- | --- | --- | --- |
+| Abs Rel | $\frac{1}{n}\sum \lvert d - d^* \rvert / d^*$ | L1 | $\mathrm{mean}_{t^* < 1} \lvert t - t^* \rvert$ |
+| Abs Diff | $\frac{1}{n}\sum \lvert d - d^* \rvert$ | Acc | $\mathrm{mean}_{p \in P} \min_{p^* \in P^*} \lVert p - p^* \rVert$ |
+| Sq Rel | $\frac{1}{n}\sum \lvert d - d^* \rvert^2 / d^*$ | Comp | $\mathrm{mean}_{p^* \in P^*} \min_{p \in P} \lVert p - p^* \rVert$ |
+| RMSE | $\sqrt{\frac{1}{n}\sum \lvert d - d^* \rvert^2}$ | Prec | $\mathrm{mean}_{p \in P} \left( \min_{p^* \in P^*} \lVert p - p^* \rVert < .05 \right)$ |
+| $\delta < 1.25^i$ | $\frac{1}{n}\sum \left( \max\left( \frac{d}{d^*}, \frac{d^*}{d} \right) < 1.25^i \right)$ | Recall | $\mathrm{mean}_{p^* \in P^*} \left( \min_{p \in P} \lVert p - p^* \rVert < .05 \right)$ |
+| Comp | % valid predictions | F-score | $\frac{2 \times \mathrm{Prec} \times \mathrm{Recall}}{\mathrm{Prec} + \mathrm{Recall}}$ |
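
The 3D point cloud metrics of Table 1 (accuracy/completeness as distances, precision/recall as inlier fractions at the 5 cm threshold, and their harmonic-mean F-score) can be sketched as below. A brute-force NumPy sketch; a spatial index would be used for real meshes.

```python
import numpy as np

def pointcloud_metrics(pred_pts, gt_pts, thresh=0.05):
    """Precision, recall, and F-score between predicted and ground-truth
    point clouds, per the Table 1 definitions."""
    d2 = ((pred_pts[:, None, :] - gt_pts[None, :, :]) ** 2).sum(-1)
    prec = (np.sqrt(d2.min(axis=1)) < thresh).mean()    # pred -> gt inliers
    recall = (np.sqrt(d2.min(axis=0)) < thresh).mean()  # gt -> pred inliers
    fscore = 2 * prec * recall / max(prec + recall, 1e-12)
    return prec, recall, fscore
```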
+
+Table 2: 2D Depth Metrics
+
+| Method | AbsRel | AbsDiff | SqRel | RMSE | $\delta < 1.25$ | $\delta < 1.25^2$ | $\delta < 1.25^3$ | Comp |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| COLMAP [43] | .137 | .264 | .138 | .502 | .834 | .908 | .938 | .871 |
+| MVDepthNet [51] | .098 | .191 | .061 | .293 | .896 | .977 | .994 | .928 |
+| GPMVS [26] | .130 | .239 | .339 | .472 | .906 | .967 | .980 | .928 |
+| DPSNet [28] | .087 | .158 | .035 | .232 | .925 | .984 | .995 | .928 |
+| Ours (plain) | .061 | .120 | .042 | .248 | .940 | .972 | .985 | .999 |
+| Ours (semseg) | .065 | .124 | .043 | .251 | .936 | .971 | .986 | .999 |
+
+Table 3: 3D Geometry Metrics
+
+| Method | L1 | Acc | Comp | Prec | Recall | F-score |
+| --- | --- | --- | --- | --- | --- | --- |
+| COLMAP [43] | .599 | .069 | .135 | .634 | .505 | .558 |
+| MVDepthNet [51] | .518 | .040 | .240 | .831 | .208 | .329 |
+| GPMVS [26] | .475 | .031 | .879 | .871 | .188 | .304 |
+| DPSNet [28] | .421 | .045 | .284 | .793 | .223 | .344 |
+| Ours (plain) | .162 | .065 | .130 | .725 | .383 | .499 |
+| Ours (semseg) | .172 | .074 | .124 | .711 | .413 | .520 |
+
+Table 4: 3D Semantic Label Benchmark
+
+| Method | mIOU |
+| --- | --- |
+| ScanNet [11] | 30.6 |
+| PointNet++ [39] | 33.9 |
+| SPLATNet [48] | 39.3 |
+| 3DMV [13] | 48.4 |
+| 3DMV-FTSDF | 50.1 |
+| PointNet++SW | 52.3 |
+| SparseConvNet [21] | 72.5 |
+| MinkowskiNet [8] | 73.4 |
+| Ours | 34.0 |
+
+ScanNet 3D semantic segmentation metrics (mIOU). Our labels are transferred from the predicted mesh to the ground truth mesh using nearest neighbors.
+
+From the results in Table 4, we see that our approach is surprisingly competitive with (and even beats some of the) prior methods that include depth as input. Having depth as an input makes the problem significantly easier, because the only source of error is the semantic predictions. In our case, to correctly label a vertex we must predict both the geometry and the semantic label correctly. From Fig. 7 we can see that mistakes in geometry compound with mistakes in semantics, which leads to lower IOUs.
+
+In Figure 5 we show an example of how our method degrades as the number of frames is reduced at inference time. We see that there is almost no degradation with as few as 25 frames. See accompanying video for more examples.
+
+# 5.1 Inference Time
+
+Since our method only requires running a small 2D CNN on each frame, the cost of running the large 3D CNN is amortized over a sequence of images. On the other hand, MVS methods must run all of their compute on every frame. Note that they must also run depth map fusion to accumulate the depth maps into a mesh, but we do not include this additional time here. We report inference times using 2 neighbors. All models are run on a single NVIDIA Titan RTX GPU. From
+
+
+Fig. 5: Quality as a function of number of input frames at inference time. There is almost no degradation with as few as 25 frames (out of 784 total).
+
+
+
+Table 5 we can see that after approximately 4 frames, our method becomes faster than DPSNet (note that most ScanNet scenes contain a few thousand frames).
+Table 5: Inference Time
+
+| Method | Per Frame Time (sec) | Per Sequence Time (sec) |
+| --- | --- | --- |
+| COLMAP [43] | 2.076 | 0 |
+| MVDepthNet [51] | 0.048 | 0 |
+| GPMVS [26] | 0.051 | 0 |
+| DPSNet [28] | 0.322 | 0 |
+| Ours | 0.071 | 0.840 |
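
The break-even point quoted above can be checked directly from the Table 5 numbers: our per-frame 2D cost plus the one-off per-sequence 3D CNN cost, against DPSNet's per-frame cost. A quick sanity-check sketch:

```python
# Amortization check using Table 5: ours pays 0.071 s per frame plus a
# one-off 0.840 s for the 3D CNN; DPSNet pays 0.322 s per frame.
def total_time_ours(n_frames):
    return 0.071 * n_frames + 0.840

def total_time_dpsnet(n_frames):
    return 0.322 * n_frames

# Smallest sequence length at which ours is strictly faster.
break_even = next(n for n in range(1, 100)
                  if total_time_ours(n) < total_time_dpsnet(n))
```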
+
+# 6 Conclusions
+
+In this work, we present a novel approach to 3D scene reconstruction. Notably, our approach does not require depth inputs; is unbounded temporally, allowing the integration of long frame sequences; completes unobserved geometry; and supports the efficient prediction of other quantities such as semantics. We have experimentally verified that the classical approach to 3D reconstruction via per-view depth estimation is inferior to direct regression of a 3D model from an input RGB sequence. We have also demonstrated that, without significant additional compute, a semantic segmentation objective can be added to the model to accurately label the resultant surfaces. In future work, we aim to improve the back projection and accumulation process. One approach is to allow the network to learn where along a ray to place the features (instead of uniformly). This will improve the model's ability to handle occlusions and large multi-room scenes. We also plan to add additional tasks such as instance segmentation and intrinsic image decomposition. Our method is particularly well suited for intrinsic image decomposition because the network has the ability to reason with information from multiple views in 3D.
+
+
+Fig. 6: Qualitative 3D reconstruction results.
+
+
+Fig.7: Qualitative 3D semantic segmentations. Left to right: Ours, our labels transferred to the ground truth mesh, ground truth labels. We are able to accurately segment the 3D scene despite not using a depth sensor.
+
+# References
+
+1. Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation (2015)
+2. Chabra, R., Lenssen, J.E., Ilg, E., Schmidt, T., Straub, J., Lovegrove, S., Newcombe, R.: Deep local shapes: Learning local sdf priors for detailed 3d reconstruction. arXiv preprint arXiv:2003.10983 (2020)
+3. Chabra, R., Straub, J., Sweeney, C., Newcombe, R., Fuchs, H.: Stereodrnet: Dilated residual stereonet. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 11786-11795 (2019)
+4. Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., et al.: Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012 (2015)
+5. Chang, J.R., Chen, Y.S.: Pyramid stereo matching network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5410-5418 (2018)
+6. Chen, R., Han, S., Xu, J., Su, H.: Point-based multi-view stereo network. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1538-1547 (2019)
+7. Cheng, B., Collins, M.D., Zhu, Y., Liu, T., Huang, T.S., Adam, H., Chen, L.C.: Panoptic-deeplab. arXiv preprint arXiv:1910.04751 (2019)
+8. Choy, C., Gwak, J., Savarese, S.: 4d spatio-temporal convnets: Minkowski convolutional neural networks (2019)
+9. Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In: European conference on computer vision. pp. 628-644. Springer (2016)
+10. Curless, B., Levoy, M.: A volumetric method for building complex models from range images. In: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. pp. 303-312 (1996)
+11. Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: Scannet: Richly-annotated 3d reconstructions of indoor scenes. In: Proc. Computer Vision and Pattern Recognition (CVPR), IEEE (2017)
+12. Dai, A., Diller, C., Nießner, M.: Sg-nn: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. arXiv preprint arXiv:1912.00036 (2019)
+13. Dai, A., Nießner, M.: 3dmv: Joint 3d-multi-view prediction for 3d semantic scene segmentation (2018)
+14. Dai, A., Qi, C.R., Nießner, M.: Shape completion using 3d-encoder-predictor cnns and shape synthesis (2016)
+15. Dai, A., Ritchie, D., Bokeloh, M., Reed, S., Sturm, J., Nießner, M.: Scancomplete: Large-scale scene completion and semantic segmentation for 3d scans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4578-4587 (2018)
+16. Falcon, W.: Pytorch lightning. GitHub. https://github.com/PyTorchLightning/pytorch-lightning (2019)
+17. Fan, H., Su, H., Guibas, L.J.: A point set generation network for 3d object reconstruction from a single image. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 605-613 (2017)
+18. Fu, H., Gong, M., Wang, C., Batmanghelich, K., Tao, D.: Deep ordinal regression network for monocular depth estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2002-2011 (2018)
+
+19. Galliani, S., Lasinger, K., Schindler, K.: Massively parallel multiview stereopsis by surface normal diffusion. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 873-881 (2015)
+20. Gkioxari, G., Malik, J., Johnson, J.: Mesh r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 9785-9795 (2019)
+21. Graham, B., Engelcke, M., van der Maaten, L.: 3d semantic segmentation with submanifold sparse convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9224-9232 (2018)
+22. He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 2961-2969 (2017)
+23. Hirschmuller, H.: Stereo processing by semiglobal matching and mutual information. IEEE Transactions on pattern analysis and machine intelligence 30(2), 328-341 (2007)
+24. Hou, J., Dai, A., Nießner, M.: 3d-sic: 3d semantic instance completion for rgb-d scans. arXiv preprint arXiv:1904.12012 (2019)
+25. Hou, J., Dai, A., Nießner, M.: 3d-sis: 3d semantic instance segmentation of rgb-d scans (2018)
+26. Hou, Y., Kannala, J., Solin, A.: Multi-view stereo by temporal nonparametric fusion. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2651-2660 (2019)
+27. Huang, P.H., Matzen, K., Kopf, J., Ahuja, N., Huang, J.B.: Deepmvs: Learning multi-view stereopsis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2821-2830 (2018)
+28. Im, S., Jeon, H.G., Lin, S., Kweon, I.S.: Dpsnet: End-to-end deep plane sweep stereo. In: 7th International Conference on Learning Representations, ICLR 2019. International Conference on Learning Representations, ICLR (2019)
+29. Iskakov, K., Burkov, E., Lempitsky, V., Malkov, Y.: Learnable triangulation of human pose. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 7718-7727 (2019)
+30. Kirillov, A., Girshick, R., He, K., Dollar, P.: Panoptic feature pyramid networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6399-6408 (2019)
+31. Lasinger, K., Ranftl, R., Schindler, K., Koltun, V.: Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341 (2019)
+32. Lee, J.H., Han, M.K., Ko, D.W., Suh, I.H.: From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv preprint arXiv:1907.10326 (2019)
+33. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2117-2125 (2017)
+34. Lorensen, W.E., Cline, H.E.: Marching cubes: A high resolution 3d surface construction algorithm. ACM siggraph computer graphics 21(4), 163-169 (1987)
+35. McCormac, J., Clark, R., Bloesch, M., Davison, A., Leutenegger, S.: Fusion++: Volumetric object-level slam. In: 2018 international conference on 3D vision (3DV). pp. 32-41. IEEE (2018)
+36. McCormac, J., Handa, A., Davison, A., Leutenegger, S.: Semanticfusion: Dense 3d semantic mapping with convolutional neural networks. In: 2017 IEEE International Conference on Robotics and automation (ICRA). pp. 4628-4635. IEEE (2017)
+
+37. Narita, G., Seno, T., Ishikawa, T., Kaji, Y.: Panopticfusion: Online volumetric semantic mapping at the level of stuff and things. arXiv preprint arXiv:1903.01177 (2019)
+38. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: Deepsdf: Learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 165-174 (2019)
+39. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space (2017)
+40. Riegler, G., Osman Ulusoy, A., Geiger, A.: Octnet: Learning deep 3d representations at high resolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3577-3586 (2017)
+41. Rosinol, A., Abate, M., Chang, Y., Carlone, L.: Kimera: an open-source library for real-time metric-semantic localization and mapping. In: IEEE Intl. Conf. on Robotics and Automation (ICRA) (2020)
+42. Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
+43. Schonberger, J.L., Zheng, E., Pollefeys, M., Frahm, J.M.: Pixelwise view selection for unstructured multi-view stereo. In: European Conference on Computer Vision (ECCV) (2016)
+44. Schöps, T., Sattler, T., Pollefeys, M.: Surfelmeshing: Online surfel-based mesh reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)
+45. Sinha, A., Unmesh, A., Huang, Q., Ramani, K.: Surfnet: Generating 3d shape surfaces using deep residual networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 6040-6049 (2017)
+46. Sitzmann, V., Thies, J., Heide, F., Nießner, M., Wetzstein, G., Zollhofer, M.: Deepvoxels: Learning persistent 3d feature embeddings. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2437-2446 (2019)
+47. Sitzmann, V., Zollhöfer, M., Wetzstein, G.: Scene representation networks: Continuous 3d-structure-aware neural scene representations. In: Advances in Neural Information Processing Systems (2019)
+48. Su, H., Jampani, V., Sun, D., Maji, S., Kalogerakis, E., Yang, M.H., Kautz, J.: Splatnet: Sparse lattice networks for point cloud processing (2018)
+49. Tatarchenko, M., Dosovitskiy, A., Brox, T.: Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2088-2096 (2017)
+50. Tateno, K., Tombari, F., Laina, I., Navab, N.: Cnn-slam: Real-time dense monocular slam with learned depth prediction. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6243-6252 (2017)
+51. Wang, K., Shen, S.: Mvdepthnet: real-time multiview depth estimation neural network. In: 2018 International Conference on 3D Vision (3DV). pp. 248-257. IEEE (2018)
+52. Wang, N., Zhang, Y., Li, Z., Fu, Y., Liu, W., Jiang, Y.G.: Pixel2mesh: Generating 3d mesh models from single rgb images. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 52-67 (2018)
+53. Wang, W., Yu, R., Huang, Q., Neumann, U.: Sgpn: Similarity group proposal network for 3d point cloud instance segmentation. In: In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. p. 2569-2578 (2018)
+54. Weder, S., Schonberger, J., Pollefeys, M., Oswald, M.R.: Routedfusion: Learning real-time depth map fusion. arXiv preprint arXiv:2001.04388 (2020)
+
+55. Whelan, T., Leutenegger, S., Salas-Moreno, R., Glocker, B., Davison, A.: Elastic-fusion: Dense slam without a pose graph. Robotics: Science and Systems
+56. Xie, H., Yao, H., Sun, X., Zhou, S., Zhang, S.: Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2690-2698 (2019)
+57. Yao, Y., Luo, Z., Li, S., Fang, T., Quan, L.: Mvsnet: Depth inference for unstructured multi-view stereo. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 767-783 (2018)
+58. Yao, Y., Luo, Z., Li, S., Shen, T., Fang, T., Quan, L.: Recurrent mvsnet for high-resolution multi-view stereo depth inference. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5525-5534 (2019)
+59. Zimmermann, C., Ceylan, D., Yang, J., Russell, B., Argus, M., Brox, T.: Freihand: A dataset for markerless capture of hand pose and shape from single rgb images. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 813-822 (2019)
\ No newline at end of file
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/images.zip b/atlasendtoend3dscenereconstructionfromposedimages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..485562bf8a382c0d491655571b9703427bc5cec0
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:726a4245d69f6bc9201a49fb1edc6b61b287ab443412caea3dbc1612b4954d8a
+size 753851
diff --git a/atlasendtoend3dscenereconstructionfromposedimages/layout.json b/atlasendtoend3dscenereconstructionfromposedimages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..53441b0006c00f4c32301ee10cf97d46d29bc35b
--- /dev/null
+++ b/atlasendtoend3dscenereconstructionfromposedimages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eeb7d21ac38b55b12d153ca283899a7e2dcf4c12a0d124b96fc5a9883cabbe4f
+size 333166
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_content_list.json b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2c023453a83140f9659d0043775d911031e81b1f
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:644fb83296419d680e7db01477819aadbb0d82e6e80b75f09c83516b88c2438a
+size 69916
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_model.json b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..418536e3b967b7f0f8ce7a68f6775968d7be095d
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7de32dd8bc0b7830a3bb346a6ec5b0b3f916637d4ca211d6ee99f39a15083052
+size 85784
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_origin.pdf b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..961ebeabcf9b96c1acc4c1dd87edca1b26c2701b
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/bcfbc876-e9a2-491f-bf63-8a5b709ffce4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:141646d50065a97b6fcfa210b70dbba536050530ed9a7101843341ed7f206cc4
+size 2513919
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/full.md b/attendandsegmentattentionguidedactivesemanticsegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e53d2011666a3698643723055dd778cedfb9d005
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/full.md
@@ -0,0 +1,265 @@
+# Attend and Segment: Attention Guided Active Semantic Segmentation
+
+Soroush Seifi[0000-0002-4791-5350] and Tinne Tuytelaars[0000-0003-3307-9723]
+
+KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium {FirstName.LastName}@esat.kuleuven.be
+
+Abstract. In a dynamic environment, an agent with a limited field of view/resource cannot fully observe the scene before attempting to parse it. The deployment of common semantic segmentation architectures is not feasible in such settings. In this paper we propose a method to gradually segment a scene given a sequence of partial observations. The main idea is to refine the agent's understanding of the environment by attending to the areas it is most uncertain about. Our method includes a self-supervised attention mechanism and a specialized architecture that maintains and exploits spatial memory maps for filling in the unseen areas of the environment. The agent can select and attend to an area while relying on the cues coming from the visited areas to hallucinate the other parts. We reach a mean pixel-wise accuracy of $78.1\%$ , $80.9\%$ and $76.5\%$ on the CityScapes, CamVid and Kitti datasets by processing only $18\%$ of the image pixels (10 retina-like glimpses). We perform an ablation study on the number of glimpses, the input image size and the effectiveness of retina-like glimpses. We compare our method to several baselines and show that the optimal results are achieved by having access to a very low resolution view of the scene at the first timestep.
+
+Keywords: Visual attention, active exploration, partial observability, semantic segmentation.
+
+# 1 Introduction
+
+Semantic segmentation has been extensively studied in recent years due to its crucial role in many tasks such as autonomous driving, medical imaging and augmented reality [1-4]. Architectures such as FCN, U-Net and DeepLab [5-8] have pushed its accuracy further and further each year. All these architectures assume that the input is fully observable. They deploy deep layers of convolutional kernels on all input pixels to generate a segmentation mask.
+
+In contrast, in this paper we study the problem of parsing an environment with very low observability. We define an active agent with a highly limited camera bandwidth (less than $2\%$ of all input pixels) which cannot see the whole scene (input image) at once. Instead it can choose a very small part of it, called a 'glimpse', to focus its attention on. The agent has the freedom to change its viewing direction at each time step and take a new glimpse of the scene. However, depending on a pixel budget, it is limited in the number of glimpses it can take. After reaching this limit, the agent should output a segmentation map for the whole scene, including the unvisited areas.
+
+
+Fig. 1. Our model predicts a segmentation map for the full environment (last row) by attending 8 downscaled glimpses containing only $18\%$ of the pixels (third row).
+
+This setting is in line with previous works on 'active visual exploration' such as [9-11] where an agent tries to explore, reconstruct and classify its environment after taking a series of glimpses. Inspired by those works, we take a step forward to solve an 'active semantic segmentation' problem which: 1) is more practical compared to image reconstruction and 2) is more challenging compared to scene classification as there is a need to classify all visited and unvisited pixels. Furthermore we introduce a novel self-supervised attention mechanism which tells the agent where to look next without the need for reinforcement learning [9, 10] or supervision coming from the image reconstruction loss [11].
+
+Our agent is trained end-to-end, segments the visited glimpses and uses their extracted features to extrapolate and segment areas of the environment it has never seen before. We use specialized modules to segment the local neighbourhood of the glimpses and to exploit long-range dependencies between the visited pixels to segment the other unseen parts.
+
+Our proposed method can be applied in scenarios where processing the whole scene in full resolution is not an option. This could be because 1) the agent's field of view is restricted and cannot capture the whole scene at once, 2) there is a limited bandwidth for data transmission between the agent and the processing unit, 3) processing all pixels from the scene in a sliding window fashion is redundant or impossible due to resource limitations, or 4) there is a need to process at least some parts in higher resolution.
+
+We propose two solutions for such an agent: 1) Start from a random glimpse and intelligently choose the next few glimpses to segment the whole scene or 2) Start from a (very) low resolution view of the whole scene and refine the segmentation by attending the areas with highest uncertainties. We show that the first method outperforms various baselines where the agent selects the next location based on a given heuristic while the second method can yield results comparable to processing the whole input at full resolution, for a fraction of the pixel budget.
+
+Similar to the arguments in [9-11], autonomous systems relying on high resolution $360^{\circ}$ cameras could benefit the most from our architecture. However, due to lack of annotated segmentation datasets with $360^{\circ}$ images we adapted standard benchmark datasets for semantic segmentation, namely CityScapes, Kitti and CamVid [1, 2, 12], to our setting. Figure 1 illustrates the segmentations produced by our method after taking 8 retina-like glimpses on these datasets. We provide several baselines for our work along with an ablation study on the number of glimpses for each dataset. To the best of our knowledge, we are the first to tackle the problem of 'active semantic segmentation' with very low observability.
+
+The remainder of this paper is organized as follows. Section 2 provides a literature review. Section 3 defines our method. In section 4 we provide our experimental results and we conclude the paper in section 5.
+
+# 2 Related Work
+
+Semantic Segmentation Semantic segmentation is one of the key challenges in scene understanding for an autonomous agent [13]. Different methods have been proposed to solve this task relying on deep Convolutional Neural Networks (CNNs) [5-8, 13, 14]. In this paper, we tackle the problem where an agent dynamically changes its viewing direction and receives partial observations of its environment. This agent is required to intelligently explore and segment its environment. Therefore, this study deviates from common semantic segmentation architectures where the input is static and fully observable. Our work is close to [15], where an agent tries to segment an object in a video stream by looking at a specific part of each frame. In this work, however, we produce a segmentation map for all pixels of a static image.
+
+Active Vision Active vision gives an autonomous agent the freedom to manipulate its sensors and choose the input data it finds most useful for learning a task [16]. Such an agent might manipulate objects, move in an environment, change its viewing direction etc. [17-20]. In this paper, we study the same active setting as [9-11], where an agent can decide where to look next in the scene (i.e. select a glimpse) with the goal of exploration. These studies evaluate their work on image reconstruction and scene classification. Such tasks demonstrate that the agent can potentially learn an attention policy and build a good representation of the environment with few glimpses. However, the practical use case for such an agent is not clear. Besides, the results from those works imply that extrapolation beyond the seen glimpses in the image reconstruction case is mostly limited to filling in the unseen areas with uniform colors. Therefore, in this paper we instead tackle the active exploration problem for semantic segmentation, where the agent needs to reason about the unseen areas and assign a semantic label to every pixel in the image. This allows focusing on the semantics, rather than the precise color or texture, which is difficult to predict. We believe such an agent is fundamentally more useful than one solving an image reconstruction task.
+
+Memory in Partially Observable Environments A critical challenge for an active agent in a partially observable environment is to understand the correlations and the spatial organization of the observations it receives. Many architectures combine LSTM layers with deep reinforcement learning to update their representation of the environment at each timestep [9, 10, 21-24]. However, studies such as [11, 25-27] show that maintaining a spatial memory of the environment is more effective albeit being more expensive in terms of memory usage. In this study we use similar architectures to those proposed in [11, 15] and maintain the extracted features in spatial memory maps. These partially filled memory maps are exploited at each time step to segment the whole scene.
+
+Visual Attention We use the word 'attention' to denote a mechanism for choosing the best possible location in the environment to attend next. This is different from those works in the literature where the attention mechanism weights the extracted features from the whole input according to their importance/relevance (a.k.a self-attention [28, 29], soft attention [30, 21, 31] or 'global' attention [32]). Instead, this work is close to the hard attention mechanism defined in [21, 22, 15] where the information about the input is gathered sequentially by attending only a specific part of the input at each timestep. However, unlike the studies on hard attention, our attention mechanism does not rely on reinforcement learning, is differentiable and is trained with self-supervision. We take inspiration from [33] to derive an uncertainty value for each pixel in the predicted segmentation map. Consequently, the area with the highest uncertainty is attended to next.
+
+Image Generation and Outpainting Unlike various inpainting methods which reconstruct missing image regions based on their surrounding pixels [34-36], image outpainting aims to restore an image given only a part of it [37-39]. The active agent defined in [9-11] implicitly solves an outpainting problem. Such an agent should be able to exploit the spatial relationships of the visible areas to extrapolate and reconstruct the missing parts. Studies such as [9, 10, 40] incorporate the spatial information using explicit coordinates while [11, 15] maintain spatial memory maps for this purpose. In this study, we follow the latter approach to extrapolate beyond the seen glimpses and assign a semantic label to each pixel in those regions.
+
+Retina Camera Technology Taking inspiration from the human retina, our method benefits from retina-like glimpses, where the resolution changes spatially based on the distance to a point of interest [41]. This way the agent can use its pixel budget more efficiently. In this work we use common downscaling techniques to construct a retina-like glimpse. However, in practice, our method can be implemented on top of the retina sensors introduced in [41-43] to visit the parts of the environment suggested by our attention mechanism without seeing or processing the other parts.
+
+
+Fig. 2. Architecture overview.
+
+# 3 Method
+
+Our architecture consists of four main components; figure 2 provides an overview. The 'Extraction Module' extracts features for each attended glimpse. The 'Memory Module' gathers the features of all visited glimpses in spatial memory maps. The 'Local Module' segments the attended regions and their neighborhood while the 'Global Module' predicts a general layout of the whole scene. The final segmentation and uncertainty maps at each step are derived from the outputs of the local and global modules and the final segmentation map of the previous step. The area with the highest uncertainty is selected as the next location to attend. In the following subsections we describe each module in more detail.
+
+# 3.1 Extraction Module
+
+Retina Glimpses The extraction module receives a glimpse which is scaled down in the areas located further from its center ('retina-like glimpses' [11, 22]). This way the agent can use its pixel budget more efficiently. Figure 3 shows the 3 different retina settings used in our experiments.
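As a rough sketch of the retina budget, the following assumes concentric square rings where each ring is downscaled by an integer factor; the ring sizes (24/48 for 2 scales, 16/32/48 for 3 scales) are our own inference from the pixel counts reported in figure 3, not values stated explicitly in the paper.

```python
# Hypothetical helper: count the original-image pixels kept by a
# retina-like glimpse built from concentric square rings.
def retina_pixels(rings):
    """rings: list of (outer_size, downscale_factor) pairs, innermost first."""
    total, inner = 0, 0
    for size, factor in rings:
        ring_area = size * size - inner * inner   # pixels covered by this ring
        total += ring_area // (factor * factor)   # pixels kept after downscaling
        inner = size
    return total

print(retina_pixels([(48, 1)]))                    # full resolution: 2304
print(retina_pixels([(24, 1), (48, 3)]))           # 2 scales: 768
print(retina_pixels([(16, 1), (32, 2), (48, 3)]))  # 3 scales: 590
```

These counts match the 2304 / 768 / 590 pixels quoted in the caption of figure 3 for a $48 \times 48$ glimpse.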
+
+Architecture This module uses a shallow stack of convolutional layers to extract features $F_{t}$ from the visited glimpse at time step $t$ . Its architecture resembles the encoder part of U-net with only 32 channels for its bottleneck activations. Figure 4 shows the architecture for this module.
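The encoder's spatial bookkeeping (assuming the two $2 \times 2$ pooling layers mentioned in section 3.2, which give the three memory levels) can be checked with a toy helper; `encoder_shapes` is our illustrative name, not part of the paper's code.

```python
# Toy check: two 2x2 poolings shrink a 48x48 glimpse to a 12x12 bottleneck
# map, which is why bottleneck coordinates are image coordinates divided by 4.
def encoder_shapes(h, w, poolings=2):
    shapes = [(h, w)]
    for _ in range(poolings):
        h, w = h // 2, w // 2
        shapes.append((h, w))
    return shapes

print(encoder_shapes(48, 48))  # [(48, 48), (24, 24), (12, 12)]
```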
+
+# 3.2 Memory Module
+
+The memory module maintains 3 different matrices, one for each encoder level in figure 4. We denote these matrices as the 'Level 1' and 'Level 2' ('intermediate') memories and the 'Bottleneck' memory. If the agent visited all possible non-overlapping glimpses in the image, these matrices would contain the extracted features for the whole input image. Otherwise they are only partially filled with the information from the visited glimpses. In our setting, where the number of glimpses is limited, one can think of these memories as the representation of the whole input image after applying a dropout layer on top. This implicit dropout mechanism prevents the agent from overfitting to the data. Figure 5 illustrates the memory module for the 'Bottleneck' memory; since bottleneck features are derived after two $2 \times 2$ pooling layers, their position in the feature memory equals the glimpse's position in the image divided by 4. If two glimpses overlap, the memories are updated with the features of the newest glimpse in the overlapping area.
+
+
+Fig. 3. Left to right: a glimpse in full resolution, a retina glimpse with 2 scales and a retina glimpse with 3 scales. For a glimpse of size $48 \times 48$ , there are 2304, 768 and 590 pixels from the original image in each of these settings, respectively. These images are for illustration purposes only and have a size of $96 \times 96$ rather than $48 \times 48$ .
+
+
+Fig. 4. Extraction module: the extracted features in each level of this encoder are stored for all glimpses by the memory module.
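The memory write described in this subsection can be sketched as follows; this is our own minimal illustration of the mechanics (the `write_bottleneck` helper and the None-initialized grid are hypothetical), not the authors' code.

```python
# Sketch: write one glimpse's bottleneck features into the spatial memory.
# Bottleneck features sit after two 2x2 poolings, so the write position is
# the glimpse's image position divided by 4; overlaps keep the newest values.
def write_bottleneck(memory, feats, glimpse_row, glimpse_col, stride=4):
    r0, c0 = glimpse_row // stride, glimpse_col // stride
    for i, row in enumerate(feats):
        for j, value in enumerate(row):
            memory[r0 + i][c0 + j] = value  # newest glimpse wins on overlap
    return memory

# 128x256 input -> 32x64 bottleneck memory, initially empty (None).
memory = [[None] * 64 for _ in range(32)]
feats = [[1.0] * 12 for _ in range(12)]      # 48x48 glimpse -> 12x12 features
write_bottleneck(memory, feats, glimpse_row=16, glimpse_col=32)
print(memory[4][8])   # 1.0: top-left of the glimpse lands at (16//4, 32//4)
```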
+
+# 3.3 Local Module
+
+This module exploits the local correlations of the features in the memory to expand the segmentations of the visited glimpses. Since the convolutional kernels have a limited receptive field, these expansions remain local to each glimpse. At the same time, for two glimpses located close to each other, the module can use the features from both to expand a larger area. Figure 6 (top) illustrates this for 4 time-steps.
+
+
+Fig. 5. Memory Module: Bottleneck features are stored in their corresponding spatial position in the memory.
+
+
+Fig. 6. Local module segments and expands the predictions for each glimpse while the global module predicts the general structure of the whole scene.
+
+The features in the 'Bottleneck memory' are extracted using the encoder represented in figure 4. Consequently, we define a decoder architecture symmetrical with this encoder to generate the segmentations. The features in the 'intermediate' memories are used as skip connections while decoding. The extraction and local module together define an architecture similar to U-net. However, the encoder extracts the features for each glimpse separately from the others while the decoder operates on a partially filled memory which contains the features for all glimpses visited until the current timestep. Figure 7 illustrates the architecture of the local module. We denote the segmentation produced by this module at each step $t$ as $L_{t}$ and measure its error $e_{L_t}$ using a binary cross-entropy loss.
+
+# 3.4 Global Module
+
+To complement the task of the local module, the global module exploits the long-range dependencies of the features in the memory and predicts the general structure of the scene.
+
+To achieve this, it compresses the 'Bottleneck' memory with strided convolutions by a factor of 4 in each dimension (height, width and depth). Next, it deploys convolutional layers with a kernel size equal to the size of the compressed memory, thus taking into account all the features in the memory at once to predict a downscaled segmentation of the environment. This segmentation is upscaled to the input's resolution with the help of the 'intermediate' memories and an architecture similar to the one depicted in figure 7 (though starting from a compressed bottleneck memory). Figure 6 shows that in the first steps the global module captures and mostly relies on the dataset's prior to hallucinate the unseen areas. However, with more glimpses, its prediction moves towards the correct structure of the environment.
+
+We denote the segmentation produced by this module at each step $t$ as $G_{t}$ and again measure its error $e_{G_t}$ using a binary cross-entropy loss.
+
+
+Fig. 7. Local Module's Architecture.
+
+# 3.5 Final Segmentation, Certainty and Attention
+
+At each step our architecture produces a segmentation map $S_{t}$ along with an extra channel $C_t$ as our certainty map. These maps are derived by concatenating the previous segmentation map $S_{t - 1}$ , the local segmentation $L_{t}$ and the global segmentation $G_{t}$ and using a series of convolution layers to combine them into a refined segmentation and a new certainty map.
+
+Inspired by the proposed method in [33] for learning the aleatoric and epistemic uncertainty measures while optimizing the loss function, we define the loss for each module at step $t$ according to the equations 1, 2 and 3:
+
+$$
+L_{L_t} = L_{L_{t-1}} + C_t \times e_{L_t} + U_t \tag{1}
+$$
+
+$$
+L_{G_t} = L_{G_{t-1}} + C_t \times e_{G_t} + U_t \tag{2}
+$$
+
+$$
+L_{S_t} = L_{S_{t-1}} + C_t \times e_{S_t} + U_t \tag{3}
+$$
+
+$L_{L_0}$ , $L_{G_0}$ and $L_{S_0}$ are initialized to zero. $C_t$ denotes the predicted certainty map at step $t$ while $U_t$ is a regularizer term to prevent minimizing the loss by setting $C_t$ to zero. We define $U_t$ as:
+
+$$
+U_t = \exp(-C_t) \tag{4}
+$$
+
+$U_{t}$ measures the uncertainty for each pixel. The agent learns to minimize $L_{L_t}$ , $L_{G_t}$ and $L_{S_t}$ by assigning low values to $C_t$ (high values to $U_{t}$ ) in the areas where the loss is high (i.e. uncertain areas). Similarly, it assigns high values to $C_t$ (low values to $U_{t}$ ) for the areas with high certainty where the loss is low.
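A single-pixel, single-step sanity check of this behavior (our simplification; the actual $C_t$ is a full map): minimizing $C \times e + \exp(-C)$ over $C$ gives $C^* = -\ln(e)$, so the learned certainty is high exactly where the error is low.

```python
import math

# Per-pixel loss from eqs. (1)-(4), dropping the accumulated previous term:
# C*e + U with U = exp(-C).
def pixel_loss(certainty, error):
    return certainty * error + math.exp(-certainty)

error = 0.1
c_star = -math.log(error)            # stationary point: e - exp(-C) = 0
# at the optimum, exp(-C*) equals the error itself
assert abs(math.exp(-c_star) - error) < 1e-12
# nearby certainty values give a larger loss, confirming the minimum
assert pixel_loss(c_star, error) < pixel_loss(c_star + 0.5, error)
assert pixel_loss(c_star, error) < pixel_loss(c_star - 0.5, error)
```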
+
+At step t, the optimizer minimizes the sum of the loss functions defined above. We denote this sum as $L_{t}$ :
+
+$$
+L_t = L_{L_t} + L_{G_t} + L_{S_t} \tag{5}
+$$
+
+At the final stage of each step, the certainty map $C_t$ is divided into $16 \times 16$ non-overlapping patches and the patch with the lowest sum (lowest total certainty) is selected as the next location to attend.
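The selection rule can be sketched as follows; this is a simplified version of our own (assuming a plain 2-D certainty map and returning the patch's top-left corner), not the authors' implementation.

```python
# Split the certainty map into non-overlapping patches and return the
# top-left corner of the patch with the lowest certainty sum.
def next_glimpse_location(certainty, patch=16):
    h, w = len(certainty), len(certainty[0])
    best_sum, best_pos = None, None
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            s = sum(certainty[i][j]
                    for i in range(r, min(r + patch, h))
                    for j in range(c, min(c + patch, w)))
            if best_sum is None or s < best_sum:
                best_sum, best_pos = s, (r, c)
    return best_pos

# Toy 4x4 map with 2x2 patches: the lower-right patch is the least certain.
cmap = [[9, 9, 9, 9],
        [9, 9, 9, 9],
        [9, 9, 1, 1],
        [9, 9, 1, 1]]
print(next_glimpse_location(cmap, patch=2))  # (2, 2)
```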
+
+# 4 Experiments
+
+We evaluate our method on the CityScapes, Kitti and CamVid datasets [1, 2, 12]. For the CityScapes dataset we report our results on the provided validation set, while for the Kitti and CamVid datasets we hold out a random $20\%$ split of the data for validation.
+
+# 4.1 Retina Setting
+
+In a first experiment, we show our results for the 3 different retina settings depicted in figure 3. In this figure, although all glimpses cover the same area, they differ in the number of pixels they process from the input image. Table 1 compares the ratio of processed pixels to the input image size for different retina settings. Each glimpse covers a $48 \times 48$ patch of a $128 \times 256$ input image (or $96 \times 96$ patch of a $256 \times 512$ image). As is clear from this table, retina glimpses allow the agent to cover larger areas of the environment while efficiently using its pixel budget.
+
+| # Glimpses | Full resolution | 2 Scales | 3 Scales |
+| --- | --- | --- | --- |
+| 1 | 7.0% | 2.3% | 1.8% |
+| 2 | 14.0% | 4.6% | 3.6% |
+| 3 | 21.0% | 7.0% | 5.4% |
+| 4 | 28.1% | 9.3% | 7.2% |
+| 5 | 35.1% | 11.7% | 9.0% |
+| 6 | 42.1% | 14.0% | 10.8% |
+| 7 | 49.2% | 16.4% | 12.6% |
+| 8 | 56.2% | 18.7% | 14.4% |
+| 9 | 63.2% | 21.0% | 16.2% |
+| 10 | 70.3% | 23.4% | 18.0% |
+
+Table 1. Ratio of pixels in a glimpse to the image size for different retina settings.
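The table's entries can be reproduced from the per-glimpse pixel counts quoted in the caption of figure 3 (2304 / 768 / 590 pixels for a $48 \times 48$ glimpse) and the $128 \times 256$ input used in these experiments; the dictionary names below are ours.

```python
# Reproduce table 1: ratio of processed pixels to input-image pixels,
# assuming n non-overlapping glimpses.
image_px = 128 * 256
per_glimpse = {"full resolution": 2304, "2 scales": 768, "3 scales": 590}
for n in (1, 10):
    row = {k: round(100 * n * v / image_px, 1) for k, v in per_glimpse.items()}
    print(n, row)
# n=1  -> full resolution 7.0, 2 scales 2.3, 3 scales 1.8
# n=10 -> full resolution 70.3, 2 scales 23.4, 3 scales 18.0
```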
+
+Figure 8 (Left) demonstrates the performance of our model for each retina setting. In these experiments we set the input image size to $128 \times 256$ and each glimpse covers a $48 \times 48$ patch of the input. Similarly, the right part of this figure summarises the experiments where the input image size is $256 \times 512$ and each glimpse covers a $96 \times 96$ area of the input (ratios remain consistent with table 1).
+
+Table 1 and figure 8 imply that the agent can use its pixel budget most efficiently using the 3-scales retina setting. An agent with a pixel budget of $18\%$ can achieve an accuracy of $78.1\%$ with 3 scales. With the same pixel budget, the 2-scales glimpse and full resolution glimpse cover a smaller area of the input image and thus their accuracy decreases to less than $77.2\%$ and $71.9\%$ respectively.
+
+Furthermore, a comparison of the left and right parts of figure 8 implies that if we keep the glimpse's coverage proportional to the input size, our method achieves similar results. Therefore, we evaluate the rest of the experiments in this paper using the $128 \times 256$ input size and a 3-scales retina with a coverage of $48 \times 48$ pixels. Table 2 reports the results for the CityScapes, Camvid and Kitti datasets in this setting.
+
+
+Fig. 8. Comparison of different retina settings' performance. The 3-scales retina performs equally well while using a much lower pixel budget.
+
+| Glimpses | CityScapes | Camvid | Kitti |
+| --- | --- | --- | --- |
+| 1 | 63% | 68.2% | 64.3% |
+| 2 | 68.1% | 73.0% | 69.6% |
+| 3 | 70.7% | 75.3% | 72.1% |
+| 4 | 72.8% | 77.8% | 72.4% |
+| 5 | 73.5% | 78.5% | 73.2% |
+| 6 | 75.2% | 78.9% | 74.9% |
+| 7 | 76.2% | 79.8% | 75.1% |
+| 8 | 77.1% | 80.4% | 75.3% |
+| 9 | 77.2% | 80.6% | 76.0% |
+| 10 | 78.1% | 80.9% | 76.1% |
+
+Table 2. Mean pixel accuracy on each dataset for different numbers of glimpses.
+
+# 4.2 Baselines
+
+In this section we evaluate our attention mechanism against different baselines. We compare against a 'random agent' which selects the next glimpse's location by randomly sampling from the input locations. Next, we consider the fact that the images in road-scene datasets are captured through a dashboard camera, so the salient parts of the image typically lie near the horizon. Consequently, we compare our method against a 'horizon agent' which can only look at the uncertain areas in the middle rows of the image. Finally, we compare our method against a 'restricted-movement agent' that, at each step, looks at positions near the current glimpse. This baseline is in line with the setting in previous literature on image reconstruction [9, 10]. It evaluates our attention mechanism's exploratory performance and our method's ability to correlate glimpses coming from far-apart spatial locations.
+
+Figure 9 summarises our results on the CityScapes dataset (see the supplementary material for Camvid and Kitti).
+
+
+Fig. 9. Comparison against baselines.
+
+The results presented in figure 9 suggest that remaining local to the horizon or to the visited regions of the image forces the agent to hallucinate larger parts of the environment, thus making the task more difficult. Furthermore, overlapping glimpses, which are more likely to occur for the horizon and restricted-movement agents, can waste part of the agent's pixel budget without adding much information for the segmentation. Therefore, solving this task requires a more sophisticated strategy for exploring the input than a scan of nearby locations. Finally, the comparison between our method and the random agent shows the effectiveness of our proposed attention/uncertainty prediction. Figure 10 confirms this by illustrating the output of the glimpse-only agent's modules for 6 time-steps. While remaining uncertain about most parts of the environment after the first glimpse, the agent imagines itself to be on a road with cars to its side. By taking the next glimpse above the horizon it predicts the general structure of the buildings and trees surrounding the road. In the next few steps it attends the areas along the horizon, which contain more details that the agent is uncertain about.
+
+# 4.3 Glimpse-only, Hybrid and Scale-only agents
+
+In this section, we propose an extension of our method which can achieve higher accuracy with a smaller number of glimpses when it is allowed to capture the whole scene at once at a low resolution. To evaluate this, we define three agents: 1) Glimpse-only agent: As in the previous experiments, the agent cannot capture the whole scene at once. It takes the first glimpse randomly and relies on the attention mechanism to select the areas to attend in the next steps. 2) Hybrid agent: The agent can capture the whole scene but cannot process all pixels. It dedicates part of its pixel budget to seeing the whole scene in low resolution. This helps the agent capture the general structure of the environment and use its remaining pixel budget to refine its segmentation by attending the uncertain areas. For this setting we experimented with an agent which scales down the input to $32 \times 32$ (see the supplementary material for $16 \times 8$ ), which corresponds to almost 2 retina glimpses with 3 scales. 3) Scale-only agent: The agent 'must' scale down the whole scene to its pixel budget. In this case, it does not take any glimpses and relies only on the scaled-down view of the input. We define this agent as a baseline for the hybrid agent. The hybrid and scale-only agents use an architecture similar to the extraction module to encode the downscaled input. These features are decoded to a segmentation map using an architecture symmetrical to the extraction module, resembling a shallow U-net. The scale-only agent upscales its segmentation to the input's resolution with bilinear interpolation.
+
+
+Fig. 10. The glimpse-only agent refines its predictions by attending the most uncertain areas. The local module expands the segmentations for the visited areas. The global module predicts the general layout of the environment. The final segmentation is derived by combining the last step's segmentation (initialized to zero) and the local and global modules' segmentations.
+
+Figure 11 and table 3 summarise our results for the agents defined above. As is clear from figure 11, the hybrid agent outperforms the glimpse-only one. However, the performance gap between the two decreases with the number of glimpses. For a smaller number of glimpses the glimpse-only agent needs to hallucinate larger parts of the environment, while the hybrid agent can rely on the downscaled input to fill in the missing parts. Another interesting property of the hybrid agent is that it achieves optimal results in a much smaller number of steps (e.g. 2 glimpses in the case of Kitti).
+
+Finally, a comparison between table 3 and figure 11 suggests that the glimpse-only agent performs favorably compared to the scale-only agent given the same pixel budget. However, in most cases the hybrid agent performs best. This is because such an agent can decide which areas to attend in full resolution, while its scaled-down view of the scene is sufficient for parsing the other areas.
+
+
+Fig. 11. Our method's performance for different numbers of glimpses. The gap between the glimpse-only and the hybrid agent decreases for higher numbers of glimpses.
+
+
+
+
+
+| Scales | Glimpse Budget | CityScapes | Camvid | Kitti |
+| --- | --- | --- | --- | --- |
+| 1 (128 × 256) (Full) | ≈ 56 | 80.7 | 81.3 | 81.7 |
+| 1/4 (64 × 128) | ≈ 14 | 80.4 | 80.9 | 80.4 |
+| 1/16 (32 × 64) | ≈ 4 | 78.9 | 79.4 | 75.5 |
+
+Table 3. Scale-only agent: segmentation results obtained by scaling down the input. The second column denotes the number of possible retina-like glimpses given the pixel budget of each experiment.
+
+# 4.4 IOU Evaluation
+
+In this section we compare the Mean IOU accuracy of the glimpse-only agent with 10 glimpses to that of an architecture similar to U-net (with 256 channels at its bottleneck) working on full $128 \times 256$ images from the CityScapes dataset. Table 4 compares our results for the different categories in this dataset. For this evaluation all segmentations are bilinearly upscaled to the raw input image size of $1024 \times 2048$ .
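For reference, the per-class intersection-over-union underlying this comparison can be computed as follows (the standard definition on flattened label arrays, not the authors' code; mean IOU averages it over classes).

```python
# Per-class IOU on flattened prediction/ground-truth label sequences.
def class_iou(pred, gt, cls):
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 0.0

pred = ["road", "road", "sky", "car"]
gt   = ["road", "sky",  "sky", "car"]
print(class_iou(pred, gt, "road"))  # 1 overlapping pixel / 2 in union = 0.5
```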
+
+Our method compares well to an architecture working on the full image, taking into account that our approach processes only $18\%$ of the input pixels. The most difficult category for our method is 'Object'. In a partial view of an environment it is easy to miss small objects such as traffic signs and poles, and hallucinating such objects in the unseen regions of the environment is even harder.
+
+| Category | Our Method | U-net |
+| --- | --- | --- |
+| Flat | 0.907 | 0.938 |
+| Construction | 0.641 | 0.746 |
+| Object | 0.046 | 0.138 |
+| Nature | 0.647 | 0.808 |
+| Sky | 0.503 | 0.809 |
+| Human | 0.216 | 0.006 |
+| Vehicle | 0.599 | 0.798 |
+| Average | 0.508 | 0.590 |
+
+Table 4. Mean IoU comparison on the CityScapes dataset. Our method, using only $18\%$ of the pixels in the image, comes relatively close to U-net, which observes the full image.
+
+# 5 Conclusion
+
+By taking inspiration from recent works on active visual exploration [9-11], in this study we tackled the problem of semantic segmentation under partial observability. In this scenario, an agent with a limited field of view and limited computational resources needs to understand the scene. Given a limited budget in terms of the number of pixels that can be processed, such an agent should look at the most informative parts of an environment in order to segment it as a whole. We proposed a self-supervised attention mechanism to guide the agent in deciding where to attend next. The agent uses spatial memory maps and exploits the correlations among the visited areas in memory in order to hallucinate the unseen parts of the environment. Moreover, we introduced a two-stream architecture, with one stream specialized in local information and the other working on global cues. We demonstrated that our model performs favorably in comparison to a solution obtained by scaling down the input to the pixel budget. Finally, our experiments indicated that an agent which combines a scaled-down segmentation of the whole environment with the proposed attention mechanism performs best.
+
+In the future, we plan to investigate datasets with less prior knowledge, consisting of various scene categories, such as ADE20k [44]. Next, bearing in mind that consecutive frames in a video stream share most of their content, we plan to look into video segmentation with partial observability.
+
+Acknowledgment. This work was supported by the FWO SBO project Omnidrone.
+
+# References
+
+1. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213-3223, 2016.
+2. Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354-3361. IEEE, 2012.
+3. Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, and Hugo Larochelle. Brain tumor segmentation with deep neural networks. Medical image analysis, 35:18-31, 2017.
+4. Lequan Yu, Xin Yang, Hao Chen, Jing Qin, and Pheng Ann Heng. Volumetric convnets with mixed residual connections for automated prostate segmentation from 3d mr images. In Thirty-first AAAI conference on artificial intelligence, 2017.
+5. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015.
+6. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015.
+7. Hanchao Li, Pengfei Xiong, Jie An, and Lingxue Wang. Pyramid attention network for semantic segmentation. The British Machine Vision Conference, 2018.
+8. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834-848, 2017.
+9. Santhosh K Ramakrishnan and Kristen Grauman. Sidekick policy learning for active visual exploration. In Proceedings of the European Conference on Computer Vision (ECCV), pages 413-430, 2018.
+10. Dinesh Jayaraman and Kristen Grauman. Learning to look around: Intelligently exploring unseen environments for unknown tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1238-1247, 2018.
+11. Soroush Seifi and Tinne Tuytelaars. Where to look next: Unsupervised active visual exploration on $360^{\circ}$ input. arXiv preprint arXiv:1909.10304, 2019.
+12. Gabriel J Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla. Segmentation and recognition using structure from motion point clouds. In European conference on computer vision, pages 44-57. Springer, 2008.
+13. Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor Villena-Martinez, and Jose Garcia-Rodriguez. A review on deep learning techniques applied to semantic segmentation. arXiv preprint arXiv:1704.06857, 2017.
+14. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014.
+15. Yuning Chai. Patchwork: A patch-wise attention network for efficient object detection and segmentation in video streams. In Proceedings of the IEEE International Conference on Computer Vision, pages 3415-3424, 2019.
+
+16. John Aloimonos, Isaac Weiss, and Amit Bandyopadhyay. Active vision. International journal of computer vision, 1(4):333-356, 1988.
+17. David Navarro-Alarcon, Hiu Man Yip, Zerui Wang, Yun-Hui Liu, Fangxun Zhong, Tianxue Zhang, and Peng Li. Automatic 3-d manipulation of soft objects by robotic arms with an adaptive deformation model. IEEE Transactions on Robotics, 32(2):429-441, 2016.
+18. Juan C Caicedo and Svetlana Lazebnik. Active object localization with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2488-2496, 2015.
+19. Alper Aydemir, Andrzej Pronobis, Moritz Göbelbecker, and Patric Jensfelt. Active visual object search in unknown environments using uncertain semantics. IEEE Transactions on Robotics, 29(4):986-1002, 2013.
+20. Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive mapping and planning for visual navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2616-2625, 2017.
+21. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048-2057, 2015.
+22. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in neural information processing systems, pages 2204-2212, 2014.
+23. Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In 2015 AAAI Fall Symposium Series, 2015.
+24. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928-1937, 2016.
+25. Emilio Parisotto and Ruslan Salakhutdinov. Neural map: Structured memory for deep reinforcement learning. arXiv preprint arXiv:1702.08360, 2017.
+26. Joao F Henriques and Andrea Vedaldi. Mapnet: An allocentric spatial memory for mapping environments. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8476-8484, 2018.
+27. Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128, 2016.
+28. Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.
+29. Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International Conference on Machine Learning, pages 7354-7363, 2019.
+30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017.
+31. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
+
+32. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
+33. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In Advances in neural information processing systems, pages 5574-5584, 2017.
+34. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536-2544, 2016.
+35. Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85-100, 2018.
+36. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505-5514, 2018.
+37. Miao Wang, Yukun Lai, Yuan Liang, Ralph Robert Martin, and Shi-Min Hu. Big-gerpicture: data-driven image extrapolation using graph matching. ACM Transactions on Graphics, 33(6), 2014.
+38. Yi Wang, Xin Tao, Xiaoyong Shen, and Jiaya Jia. Wide-context semantic image extrapolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1399-1408, 2019.
+39. Mark Sabini and Gili Rusak. Painting outside the box: Image outpainting with gans. arXiv preprint arXiv:1808.08483, 2018.
+40. Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, and Hwann-Tzong Chen. Coco-gan: generation by parts via conditional coordinating. In Proceedings of the IEEE International Conference on Computer Vision, pages 4512-4521, 2019.
+41. Giulio Sandini and Giorgio Metta. Retina-like sensors: motivations, technology and applications. In Sensors and sensing in biology and engineering, pages 251-262. Springer, 2003.
+42. Oliver Graydon. Retina-like single-pixel camera. Nature Photonics, 11(6):335-335, 2017.
+43. Ales Ude. Foveal vision for humanoid robots. In Humanoid Robotics and Neuroscience: Science, Engineering and Society. CRC Press/Taylor & Francis, 2015.
+44. Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017.
\ No newline at end of file
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/images.zip b/attendandsegmentattentionguidedactivesemanticsegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..944e3ad757405861a7d7fdf1277269eaa0268dcc
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9685556248f81477f04d88baace0947223c8b9184f445f150f043479d3ad25e
+size 428188
diff --git a/attendandsegmentattentionguidedactivesemanticsegmentation/layout.json b/attendandsegmentattentionguidedactivesemanticsegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a980b04ca8f554210138374cb3fe54e061a0d343
--- /dev/null
+++ b/attendandsegmentattentionguidedactivesemanticsegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67424507e8015a0eb99ef657302c622b30d19238359d8e661525f33e5a83cffb
+size 344511
diff --git a/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_content_list.json b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a8a1d217a5abb3000acc8d3155a6316ef3abaae
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48308a73e59f28467a252f94e4d81de12601a9ecd39b64e94803d31f7289031c
+size 76502
diff --git a/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_model.json b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1fa2ea8d7d218b37446bf5c31666236719510b0f
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1dcc2c33ca62ed3091a5875be74119b3237e140141012a5dd5027a4af06c0572
+size 93740
diff --git a/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_origin.pdf b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..129212420fa9c75e8307c0d04e0199b1b3eca2a4
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/155ce47b-25fb-464f-b1fd-373895a6da21_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33f4baf6255cf777ea1b5a2baf16079e6c7c35a1a065e9a9a3eaf72ca0cb087c
+size 2325807
diff --git a/attentionbasedqueryexpansionlearning/full.md b/attentionbasedqueryexpansionlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fcbf29889db62cde2eeb9708d080f3ddb068b5d
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/full.md
@@ -0,0 +1,247 @@
+# Attention-Based Query Expansion Learning
+
+Albert Gordo $^{[0000-0001-9229-4269]}$ , Filip Radenovic $^{[0000-0002-7122-2765]}$ , and Tamara Berg $^{[0000-0002-1272-3359]}$
+
+Facebook AI
+
+Abstract. Query expansion is a technique widely used in image search that consists of combining highly ranked images from an original query into an expanded query, which is then reissued, generally leading to increased recall and precision. An important aspect of query expansion is choosing an appropriate way to combine the images into a new query. Interestingly, despite the undeniable empirical success of query expansion, ad-hoc methods with different caveats have dominated the landscape, and little research has been done on learning how to do query expansion. In this paper we propose a more principled framework for query expansion, where one trains, in a discriminative manner, a model that learns how images should be aggregated to form the expanded query. Within this framework, we propose a model that leverages a self-attention mechanism to effectively learn how to transfer information between the different images before aggregating them. Our approach obtains higher accuracy than existing approaches on standard benchmarks. More importantly, it is the only one that consistently shows high accuracy under different regimes, overcoming the caveats of existing methods.
+
+Keywords: image retrieval, query expansion learning, attention-based aggregation
+
+# 1 Introduction
+
+Image search is a fundamental task in computer vision, directly applied in a number of applications such as visual place localization [21,39,2], 3D reconstruction [16,40,24], content-based image browsing [50,27,1], etc. Image search is typically cast as a nearest neighbor search problem in the image representation space, originally using local feature matching and bag-of-words-like representations [43], and, more recently, CNN-based global image representations [13,33].
+
+To increase the accuracy of image search systems, a robust representation of the query image is desirable. Query expansion (QE) is a commonly used technique to achieve this goal, where relevant candidates produced during an initial ranking are aggregated into an expanded query, which is then used to search more images in the database. Aggregating the candidates reinforces the information shared between them and injects new information not available in the original query. This idea was originally exploited in the work of Chum et al. [7], introducing the first attempt at image retrieval QE. This averaging of query and top ranked
+
+
+Fig. 1. Outline of our proposed approach. During training, we sample a query $q$ and its nearest neighbors in the training dataset (where their features have been precomputed with the function $\phi$ , typically a CNN) and use our proposed attention-based model $\theta$ to aggregate them into an expanded query $\tilde{\mathbf{q}}$ . Given positive $(\mathbf{d}_{+})$ and/or negative $(\mathbf{d}_{-})$ samples, we use a ranking loss to optimize $\theta$ . Images with the green (red) border represent relevant (non-relevant) samples to the query. At inference, we construct the expanded $\tilde{\mathbf{q}}$ given $\mathbf{q}$ and its neighbors in the index, and use it to query the index again.
+
+results [7], or ad-hoc variations of it [6,45,3,13,33], are now used as a standard method of performance boosting in image retrieval.
+
+Selecting which images from the initial ranking should be used in the QE procedure is however a challenging problem, since we do not have guarantees that they are actually relevant to the query. Early methods use strong geometrical verification of local features to select true positives [7,6,3,45]. As CNN-based global features lack this possibility, the most common approach is to use the $k$ -nearest neighbors to the query [13,33], potentially including false positives. Yet, if $k$ is larger than the number of relevant images, topic drift will degrade the results significantly. This leads to two unsatisfying alternatives: either use a very small $k$ , potentially not leveraging relevant images, or use weighted average approaches with decreasing weights as a function of ranking [13] or image similarity [33], where setting the appropriate decay is a task just as challenging as choosing the optimal $k$ . This has unfortunately led to many works tuning the $k$ parameter directly on the test set, as well as using different values of $k$ for each dataset. Replacing $k$ -nearest neighborhoods with similarity-based neighborhoods turns out to be just as unstable, as, unlike inlier count for local features, cosine similarity of CNN global features is not directly comparable between different query images [29].
+
+We argue that existing QE approaches are generally not robust and use ad-hoc aggregation methods, and instead propose to cast QE as a discriminative learning problem. Similar to recent methods that learn embeddings suitable for image retrieval using large-scale datasets [13,33], we formulate the problem as a ranking one, where we train an aggregator that produces the expanded query, optimized to rank relevant samples ahead of non-relevant ones, cf. Figure 1. We use a large-scale dataset, disjoint from the evaluation ones, to train and validate our model and its parameters. We then leverage a self-attention mechanism to design an aggregator model that can transfer information between the candidates (Figure 2), enabling the model to learn the importance of each sample before aggregating them. We call this model Learnable Attention-based Query Expansion, or LAttQE. Unlike
+
+previous QE approaches, LAttQE does not produce monotonically decreasing weights, allowing it to better leverage the candidates in the expansion. LAttQE is more robust to the choice of $k$ thanks to the large-scale training, which enables the model to better handle false positives amongst the top neighbors, and is usable across a wide range of class distributions without sacrificing performance at any number of relevant images.
+
+Our contributions are as follows: (i) We show that standard query expansion methods, albeit seemingly different, can be cast under the same mathematical framework, allowing one to compare their advantages and shortcomings in a principled way. (ii) We propose to treat query expansion as a discriminative learning problem, where an aggregation model is learned in a supervised manner. (iii) We propose LAttQE, an aggregation model designed to share information between the query and the top ranked items by means of self-attention. We extend this query expansion model to also be useful for database-side augmentation. (iv) We show that our proposed approach outperforms commonly-used query expansion methods in terms of both accuracy and robustness on standard benchmarks.
+
+# 2 Related work
+
+Image retrieval query expansion. Average query expansion (AQE) in image retrieval was originally proposed for representations based on local features [7], and tuned for the bag-of-words search model [43], where local features are aggregated after a strict filtering step, usually based on strong feature geometry [7,6] or Hamming embedding distance [45]. For CNN-based global image representation, AQE is implemented by mean-aggregating the top $k$ retrieved images [13,33]. It has been argued that setting an optimal $k$ for several datasets of different positive image distributions is a non-trivial task [33]. Instead, Gordo et al. [13] propose using a weighted average, where the weight is a monotonically decaying function over the rank of retrieved images. We denote this method as average query expansion with decay, or AQEwD. Likewise, Radenovic et al. [33] use a weighted average, where the weights are computed as a power-normalized similarity between the query and the top ranked images. This method, known as alpha query expansion ( $\alpha \mathrm{QE}$ ), has proven to be fairly robust to the number of neighbors $k$ , and is used as a de facto standard by a number of recent state-of-the-art image retrieval works [34,36,14,18,11,17]. Finally, Arandjelovic et al. [3] proposed discriminative query expansion (DQE) where they train a linear SVM using top ranked images as positives, and low ranking images as negatives, and use the resulting classifier as the expanded query. Note that this is very different from our method, as DQE trains independent classifiers for each query, while we train one single model using a large disjoint dataset.
+
+Image retrieval database pre-processing. If the database is fixed at indexing time, one can pre-process the database to refine the image representations and improve the accuracy. Database-side augmentation (DBA) [3] is a method that applies QE to each image of the database and replaces the original representation of the image by its expanded version. Although it increases the
+
+offline pre-processing time, it does not increase the memory requirements of the pipeline or the online search time. All aggregation-based QE methods described in the previous paragraph [7,13,3,33] can be applied as different flavors of DBA, including our proposed LAttQE. A different line of work [32,42,8] indexes local neighborhoods of database images together with their respective representations, in order to refine the search results based on the reciprocal neighborhood relations between the query and database images. Besides offline pre-processing, these approaches require additional storage and are slower at query time. Finally, some works [19,5] build a nearest neighbor graph using the database image representations and traverse it at query time, or, alternatively, encode graph information into image descriptors [23]. This increases the amount of required memory, as the graph structure of the database must be stored, and increases online search complexity by orders of magnitude. Both reciprocal-nearest-neighbor and graph-based methods are complementary to our work, and can be applied after augmenting the database representations with our method. When dealing with dynamically-growing indexes, applying these methods becomes even more challenging, which makes them generally unappealing despite the accuracy gains.
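
As a concrete illustration of aggregation-based DBA, the following sketch (ours, not from the paper; plain average QE stands in for any of the aggregators above) expands every database descriptor over its own neighborhood:

```python
import numpy as np

def dba(D: np.ndarray, k: int) -> np.ndarray:
    """Database-side augmentation: replace each l2-normalized descriptor
    in D (shape (n, dim)) by the normalized mean of itself and its k
    nearest neighbors. Average QE is used as the aggregator here purely
    for illustration."""
    sims = D @ D.T                              # cosine similarities
    out = np.empty_like(D)
    for i in range(len(D)):
        nn = np.argsort(-sims[i])[: k + 1]      # includes the image itself
        v = D[nn].mean(axis=0)
        out[i] = v / np.linalg.norm(v)
    return out

# Toy usage with three orthonormal descriptors
D = np.eye(3)
D_aug = dba(D, 1)
print(D_aug.shape)  # (3, 3)
```

Since the augmentation only rewrites the stored vectors, memory usage and online search time are unchanged, matching the trade-off described above.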
+
+Self-attention. The self-attention transformer [47] has established itself as the core component of strong language representation models such as BERT [10] or GPT-2 [35] due to its ability to capture complex interactions between tokens and due to how easy it is to increase the capacity of models simply by stacking more encoders. Self-attention has also shown applications outside of NLP. Wang et al. [48] leverage self-attention to aggregate descriptors from different parts of the image in order to capture interactions between them in a non-local manner. In a similar way, Girdhar and Ramanan [12] use self-attention as an approximation for second order pooling. In a different context, Lee et al. [22] use self-attention as a graph pooling mechanism to combine both node features and graph topology in the pooling. In this paper we use self-attention as a way to transfer information between the top $k$ results so we can construct a more discriminative query. As we describe in Section 3, self-attention is an excellent mechanism to this end.
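
The core operation borrowed here, scaled dot-product self-attention, can be sketched in a few lines (our sketch; a single head and no learned projections, unlike the full transformer encoder of [47]):

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention without learned
    projections: every row of X (shape (n, d)) is re-expressed as a
    softmax-weighted sum of all rows, with weights given by dot-product
    similarity."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # each row sums to 1
    return w @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))     # e.g. a query plus 4 neighbor descriptors
print(self_attention(X).shape)  # (5, 8)
```

Each output row mixes information from all inputs, which is exactly the "transfer information between the top $k$ results" behavior exploited in Section 3.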
+
+Query expansion and relevance feedback in information retrieval. The information retrieval community has leveraged query expansion techniques for several decades [26,37,4]. Most interestingly, in the information retrieval community, query expansion methods expand or reformulate query terms independently of the query and results returned from it, via, e.g., reformulation with a thesaurus [25]. What the image search community denotes as query expansion is generally known as relevance feedback (RF), and more precisely, pseudo-RF, as one generally does not have access to the true relevance of the neighbors - although a case could be made for geometrical verification methods [7] providing explicit feedback. Our focus in this work is not on information retrieval methods for two reasons: (i) they generally deal with explicit or implicit RF instead of pseudo-RF; (ii) they generally assume high-dimensional, sparse features (e.g. bags of terms), and learn some form of term weighting that is not applicable in our case.
+
+# 3 Attention-based query expansion learning
+
+We start this section by presenting a generalized form of query expansion, and by showing that well-known query expansion methods can be cast under this framework. We then propose a general framework for learning query expansion in a discriminative manner. Last, we propose LAttQE (Learnable Attention-Based Query Expansion), an aggregation model that leverages self attention to construct the augmented query and that can be trained within this framework.
+
+# 3.1 Generalized query expansion
+
+We assume that there exists a known function $\phi : \Omega \to \mathcal{R}^D$ that can embed items (e.g. images) into an $l_2$ -normalized $D$ -dimensional vectorial space. For example, $\phi$ could be a CNN trained to perform image embedding [13,33,36]. Let us denote with $q$ a query item, and, following standard convention of using bold typeface for vectors, let us denote with $\mathbf{q} = \phi(q)$ its $D$ -dimensional embedding. Similarly, let us denote with $\{\mathbf{d}\}^k = \mathbf{d}_1, \mathbf{d}_2, \ldots, \mathbf{d}_k$ the embeddings of the top $k$ nearest neighbors of $\mathbf{q}$ in a dataset $\mathcal{D}$ according to some measure of similarity, e.g. the cosine similarity, and sorted in decreasing order. Let us also denote with $\{\mathbf{d}\}^-$ a collection of dataset items that are not close to the query, according to the same measure of similarity. Last, for convenience, let us alias $\mathbf{d}_0 := \mathbf{q}$ .
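
Since the embeddings are $l_2$-normalized, cosine similarity reduces to a dot product, and retrieving $\{\mathbf{d}\}^k$ amounts to (a minimal sketch; names are ours):

```python
import numpy as np

def top_k_neighbors(q: np.ndarray, D: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors of query embedding q (dim,) in
    the index D (n, dim), sorted by decreasing cosine similarity. Both
    are assumed l2-normalized, so similarity is a plain dot product."""
    return np.argsort(-(D @ q))[:k]

def l2n(x: np.ndarray) -> np.ndarray:
    """l2-normalize along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy usage with hypothetical 2-D embeddings
D = l2n(np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]))
q = l2n(np.array([1.0, 0.1]))
print(top_k_neighbors(q, D, 2))  # -> [0 2]
```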
+
+We propose the following generalized form of query expansion:
+
+$$
+\hat{\mathbf{q}} = \frac{1}{Z} \sum_{i=0}^{k} \theta\left(\mathbf{d}_{i} \mid \mathbf{q}, \{\mathbf{d}\}^{k}, \{\mathbf{d}\}^{-}, i\right), \tag{1}
+$$
+
+where $Z$ is a normalization factor, and $\theta$ is a learnable function that takes an individual sample and applies a transformation conditioned on the original query $\mathbf{q}$ , the top $k$ retrieved results $\{\mathbf{d}\}^k$ , a collection of low-ranked samples $\{\mathbf{d}\}^-$ , and its position $i$ in the ranking. The final augmented query is computed by aggregating the transformed top $k$ results, including the query, and applying a normalization $Z$ (e.g. $\ell_2$ normalization).
+
+Standard query expansion methods can be cast under this framework. In fact, they can be cast under a more constrained form: $\theta (\mathbf{d}_i\mid \mathbf{q},\{\mathbf{d}\} ^k,\{\mathbf{d}\}^{-},i) = w_i\mathbf{d}_i$ where the value of $w_{i}$ is method-dependent, see Table 1. Two things are worth noticing. First, for all methods, $w_{i}$ depends either on positional information (e.g. the sample got ranked at position $i$ out of $k$ , as done by AQEwD), or on information about the content (e.g. the power-normalized similarity between the item and the query, as done by $\alpha \mathrm{QE}$ ). None of the methods leverage both the positional and the content information simultaneously. Second, except for DQE, all methods produce a monotonically decreasing $\mathbf{w}$ , i.e., if $i > j$ , then $w_{i}\leq w_{j}$ . The implication is that these methods do not have the capacity to uplift the samples amongst the top $k$ retrieved results that are indeed relevant to the query but were ranked after some non-relevant samples. That is, any top-ranked,
+
+| Method | $\theta(\mathbf{d}_i \mid \mathbf{q}, \{\mathbf{d}\}^k, \{\mathbf{d}\}^{-}, i) = w_i \mathbf{d}_i$ |
+| --- | --- |
+| [7] AQE: Average QE | $w_i = 1$ |
+| [13] AQEwD: AQE with decay | $w_i = (k - i) / k$ |
+| [3] DQE: Discriminative QE | $\mathbf{w}$ is the dual-form solution of an SVM optimization problem using $\{\mathbf{d}\}^k$ as positives and $\{\mathbf{d}\}^{-}$ as negatives |
+| [33] αQE: α-weighted QE | $w_i = \mathrm{sim}(\mathbf{q}, \mathbf{d}_i)^{\alpha}$, with $\alpha$ being a hyperparameter |
+
+Table 1. Standard query expansion (QE) methods and their associated transformations. More details about the methods can be found in Section 2.
+
+non-relevant item will contribute more to the construction of the expanded query than any relevant item ranked after it, with clear negative consequences.
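
The closed-form rows of Table 1 can be sketched as follows (our sketch; DQE is omitted since it solves a per-query SVM, `alpha=3.0` is just an illustrative default, and negative similarities are clipped so non-integer exponents are well-defined):

```python
import numpy as np

def expand_query(q: np.ndarray, neighbors: np.ndarray,
                 method: str = "aqe", alpha: float = 3.0) -> np.ndarray:
    """Weighted-average query expansion over q (dim,) and its top-k
    neighbors (k, dim), all l2-normalized. Implements the closed-form
    weightings of Table 1; DQE (a per-query SVM) is omitted."""
    k = len(neighbors)
    items = np.vstack([q, neighbors])            # d_0 := q, then d_1..d_k
    if method == "aqe":                          # AQE: uniform weights
        w = np.ones(k + 1)
    elif method == "aqewd":                      # AQEwD: linear rank decay
        w = (k - np.arange(k + 1)) / k           # w_0 = 1, ..., w_k = 0
    elif method == "alphaqe":                    # alphaQE: sim(q, d_i)^alpha
        w = np.clip(items @ q, 0.0, None) ** alpha
    else:
        raise ValueError(f"unknown method: {method}")
    qe = w @ items
    return qe / np.linalg.norm(qe)               # l2-normalize the result
```

Note that all three variants produce monotonically non-increasing weights over the ranking, which is precisely the limitation discussed above.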
+
+# 3.2 Query expansion learning
+
+We propose that, following recent approaches in representation learning [33,13], one can learn a differentiable $\theta$ transformation in a data-driven way (Figure 1). This training is done in a supervised manner, and ensures that items relevant to the (expanded) query are closer to it than elements that are not relevant. This is achieved by means of losses such as the triplet loss [49] or the contrastive loss [15]. The approach requires access to an annotated dataset (e.g. rSfM120k [33]), but the training data and classes used to learn $\theta$ can be disjoint from the pool of index images that will be used during deployment, as long as the distributions are similar. From that point of view, the requirements are similar to those of other existing image embedding learning methods in the literature.
+
+At training time, besides sampling queries, positive, and negative samples, one also has to consider the nearest neighbors of the query for the expansion. Sampling a different subset of neighbors each time, as a form of data augmentation, can be useful to improve the model robustness. We provide more details about the process in the experimental section. Finally, we note that this framework allows one to learn $\theta$ and $\phi$ jointly, as well as to learn how to perform QE and DBA jointly, but we consider those variations out of the scope of this work.
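
A minimal sketch of the objective (ours, not the paper's implementation; the aggregator below is a placeholder average standing in for the learnable $\theta$, and a simple triplet hinge on cosine similarities stands in for the ranking loss):

```python
import numpy as np

def aggregate(q: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """Placeholder aggregator: in the paper this is the learnable model
    theta; here a plain average so the sketch is self-contained."""
    out = np.vstack([q, neighbors]).mean(axis=0)
    return out / np.linalg.norm(out)

def triplet_loss(q_exp, d_pos, d_neg, margin: float = 0.1) -> float:
    """Hinge on cosine similarities (all embeddings l2-normalized): the
    expanded query should score the positive above the negative by at
    least `margin`."""
    return max(0.0, margin + q_exp @ d_neg - q_exp @ d_pos)

# Toy step: a "perfect" expanded query incurs zero loss
q_exp = aggregate(np.array([1.0, 0.0]), np.array([[1.0, 0.0]]))
print(triplet_loss(q_exp, np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

In actual training the loss would be backpropagated through $\theta$ with a framework such as PyTorch; the numpy version only illustrates the forward computation.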
+
+# 3.3 Learnable Attention-based Query Expansion (LAttQE)
+
+We propose a more principled $\theta$ function that overcomes the caveats of previous methods and that can be trained using the framework described in the previous section. In particular, our $\theta$ function is designed to transfer information between the different retrieved items, giving all top-ranked relevant samples the opportunity to contribute significantly to the construction of the expanded query. To achieve this we rely on a self-attention mechanism. We leverage the transformer-encoder module developed by Vaswani et al. [47], where, in a nutshell, a collection of inputs first share information through a multi-head attention mechanism and are later reprojected into an embedding space using fully-connected layers with layer normalization and residual connections; see Fig. 1 of Vaswani et al. [47] for a diagram of this module (left) and of the decoder module (right), which is not used in this work. Stacking several of these encoders increases the capacity of the model and enables sharing more contextual information. The exact mechanism that the stack of self-attention encoders uses to transfer information is particularly well suited to our problem:
+
+1. The encoder's scaled dot-product attention [47] performs a weighted sum of the form $\sum_{j=0}^{k} \operatorname{Softmax}(\mathbf{d}_i^T[\mathbf{d}_0, \mathbf{d}_1, \ldots, \mathbf{d}_k] / C)_j \mathbf{d}_j$ , where $C$ is a constant, in practice computing the similarity between $\mathbf{d}_i$ and all other inputs and using that as weights to aggregate all the inputs. Observing equations (1) and (3), one can see self-attention as a way to perform expansion of the input samples, leading to richer representations that are then used to compute the weights.
+2. The multihead attention enables focusing on different parts of the representations. This is important because computing similarities using only the original embedding will make it difficult to change the original ranking. By using multihead attention, we discover parts of the embeddings that are still similar between relevant items and dissimilar between non-relevant items, permitting the model to further upweight relevant items and downweight non-relevant ones.
+3. Under this interpretation of the encoder, the stack of encoders allows the model to "refine" the expansion process in an iterative manner. One can see this as expanding the query, making a first search, using the new neighbors to build a better expanded query, finding new neighbors, etc. Although the pool of neighbors remains constant, we expect the expansion to become more and more accurate.
+
+Aggregation. The stack of encoders takes the query $\mathbf{q}$ and the top results $\mathbf{d}_1\ldots \mathbf{d}_k$ as input, and produces outputs $\tilde{\mathbf{q}}$ and $\tilde{\mathbf{d}}_1\ldots \tilde{\mathbf{d}}_k$. To construct the expanded query, a direct solution consists in aggregating them (e.g. through an average or weighted average) into a single vector that represents the expanded query. However, this is challenging in practice, as it requires the encoder to learn how to create outputs that lie in the same space as the original data, which is particularly hard when the embedding function $\phi$ is not being learned simultaneously. We empirically verified that learning such a function leads to weak results. Although we speculate that learning a "direct" $\theta$ function jointly with $\phi$ could lead to superior results, the practical difficulties involved make this approach unappealing. Instead, to ensure that we stay in a similar space, we relax the problem and construct the expanded query as a weighted sum of the top $k$ results, where the weights $\mathbf{w}$ are predicted by our model. If we denote by $M$ the stack of encoders, the transformed outputs can be represented as
+
+$$
+\tilde{\mathbf{d}}_i = M\left(\{\mathbf{q}\} \cup \{\mathbf{d}\}^k\right)_i. \tag{2}
+$$
+
+Then, inspired by other methods such as $\alpha \mathrm{QE}$ , we can construct the weight $w_{i}$ as the similarity between item $\mathbf{d}_i$ and the query $\mathbf{q}$ in the transformed space, i.e., $w_{i} = \mathrm{sim}(\tilde{\mathbf{q}},\tilde{\mathbf{d}}_{i})$ . This leads to our proposed $\theta$ :
+
+$$
+\theta\left(\mathbf{d}_i \mid \mathbf{q}, \{\mathbf{d}\}^k, \{\mathbf{d}\}^{-}, i\right) = \operatorname{sim}\left(\tilde{\mathbf{q}}, \tilde{\mathbf{d}}_i\right) \mathbf{d}_i. \tag{3}
+$$
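To make the aggregation concrete, the following is a toy NumPy sketch of this idea: a small stack of self-attention rounds (with identity projections standing in for the learned multi-head and feed-forward layers, so this is an illustration rather than the trained model) transforms the inputs, and the expanded query is a weighted sum of the original vectors with weights given by the transformed similarities, as in Eqs. (2) and (3). All function names and shapes here are our own.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, C):
    # X: (k+1, D) rows = [q, d_1 .. d_k]; one round of scaled dot-product
    # attention with identity projections (a toy stand-in for the encoder).
    A = softmax(X @ X.T / C, axis=1)   # (k+1, k+1) attention weights
    return A @ X                       # context-enriched representations

def latt_qe(q, D, C=None, n_layers=3):
    """Toy LAttQE: refine representations with a small attention stack,
    then weight each ORIGINAL neighbor by sim(q~, d~_i), as in Eq. (3)."""
    X = np.vstack([q, D])
    C = C or np.sqrt(X.shape[1])
    for _ in range(n_layers):
        X = self_attention(X, C)
        X /= np.linalg.norm(X, axis=1, keepdims=True)
    q_t, D_t = X[0], X[1:]
    w = D_t @ q_t                              # similarities in transformed space
    q_exp = q + (w[:, None] * D).sum(axis=0)   # weighted sum of original vectors
    return q_exp / np.linalg.norm(q_exp)
```

Note that the weights are computed in the transformed space but applied to the original embeddings, which is what keeps the expanded query in the same space as the index.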
+
+Fig. 2. Proposed aggregator. The output $\hat{\mathbf{q}}$ is constructed as the weighted sum $(\mathbf{w}\Sigma)$ of the query $\mathbf{q}$ and the nearest neighbors $\mathbf{d}_1\dots \mathbf{d}_k$. The weights are computed by running the inputs through a stack of self-attention encoders after including positional information $(\odot)$, and computing the similarity (through a normalized dot product, $\otimes$) between the transformed query $\tilde{\mathbf{q}}$ and all the transformed samples $\tilde{\mathbf{d}}_1\dots \tilde{\mathbf{d}}_k$.
+
+Including rank information. As presented, the proposed method does not leverage the ranking of the results in any way. Indeed, the encoders see the inputs as a set, not as a sequence of results. This prevents the model from exploiting this information, e.g. by learning useful biases such as "top results tend to be correct, so pay more attention to them when learning the transformations". To enable the model to reason not only about the content of the results but also about their ranking, we follow standard practice when dealing with transformers and include a positional encoding that is added to the inputs before they are consumed by the encoder, i.e., $\mathrm{pe}(\mathbf{d}_i) = \mathbf{d}_i + \mathbf{p}_i$, where each $\mathbf{p}_i \in \mathbb{R}^D$ is a learnable variable within our model. The full proposed aggregator that leverages $\theta$ with positional encoding is depicted in Figure 2.
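The additive positional encoding itself is a one-line operation; a minimal sketch follows, with random initialization standing in for the learned values of $\mathbf{p}_i$ (all variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)
k, D = 8, 16

# Learnable positional embeddings p_0 .. p_k (slot 0 is the query);
# randomly initialized here, as they would be before training.
P = rng.normal(scale=0.02, size=(k + 1, D))

X = rng.normal(size=(k + 1, D))                # rows = [q, d_1 .. d_k]
X /= np.linalg.norm(X, axis=1, keepdims=True)

X_pe = X + P                                   # pe(d_i) = d_i + p_i
```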
+
+Auxiliary classification loss. Since, at training time, we have access to the annotations of the images, we know which of the top $k$ results are relevant to the query and which are not. This enables us to add an auxiliary linear classifier that predicts whether $\tilde{\mathbf{d}}_i$ is relevant to the query or not. The role of this classifier, which is only used at training time and discarded at inference time, is to encourage the relevant and non-relevant outputs of the encoder to be linearly separable, inducing the relevant items to be more similar to the query than the non-relevant ones. Our empirical evaluation in Section 4 shows that the use of this auxiliary loss can noticeably increase the accuracy of the model.
+
+# 3.4 Database-side augmentation
+
+Database-side augmentation (DBA) is a technique complementary to query expansion. Although different variations have been proposed [46,3,44,13], the main idea is that one can perform query expansion, offline, on the database images. This produces expanded versions of the database images, which are then indexed instead of the original ones. When issuing a new query, one searches the expanded index instead of the original one.
+
+Our proposed approach can also be used to perform better database-side augmentation, using $\theta$ to aggregate the top $k$ neighbors of each database image. However, this approach did not work well in practice. We believe the reason is that, on the database side, many images are actually distractors, unrelated to any query, and our model assigned excessively high weights to unrelated images when such distractors were used as queries. To address this, we propose to use a tempered softmax over the weights, i.e., instead of computing our weights as $w_{i} = \mathrm{sim}(\tilde{\mathbf{q}},\tilde{\mathbf{d}}_{i})$, we
+
+compute them as
+
+$$
+w_i = \operatorname{Softmax}\left(\operatorname{sim}(\tilde{\mathbf{q}}, [\tilde{\mathbf{d}}_0, \tilde{\mathbf{d}}_1, \dots, \tilde{\mathbf{d}}_k]) / T\right)_i, \tag{4}
+$$
+
+where $\mathrm{sim}(\tilde{\mathbf{q}},[\tilde{\mathbf{d}}_0,\tilde{\mathbf{d}}_1,\dots ,\tilde{\mathbf{d}}_k])$ is the vector of similarities between $\tilde{\mathbf{q}}$ and all the $\tilde{\mathbf{d}}_i$, and $T$ is a learnable scalar.
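A small NumPy sketch of the tempered weighting of Eq. (4); here $T$ is fixed by hand rather than learned, and the "transformed" embeddings are toy 2-D vectors of our own choosing, purely for illustration.

```python
import numpy as np

def tempered_weights(q_t, D_t, T):
    # Eq. (4): softmax over transformed similarities, scaled by temperature T.
    sims = D_t @ q_t                       # sim(q~, [d~_0 .. d~_k])
    e = np.exp((sims - sims.max()) / T)    # max-shift for numerical stability
    return e / e.sum()

# Toy transformed embeddings: two near-duplicates of the query plus one
# unrelated distractor; a small T drives the distractor's weight to ~0.
q_t = np.array([1.0, 0.0])
D_t = np.array([[0.99, 0.10], [0.95, -0.20], [0.10, 0.99]])
D_t /= np.linalg.norm(D_t, axis=1, keepdims=True)
w = tempered_weights(q_t, D_t, T=0.1)
```

A large $T$ flattens the distribution back towards the plain similarities, so learning $T$ lets the model choose how aggressively to suppress unrelated neighbors.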
+
+To achieve the best results, we employ a curriculum learning strategy: we first train our model without the softmax, and then freeze the parameters of the model, incorporate the tempered softmax, and continue training while updating only $T$. This strategy led to a DBA that not only gave the best results in terms of accuracy but was also more stable than other variants.
+
+# 4 Experiments
+
+In this section we discuss implementation details of our training, evaluate different components of our method, and compare to the state of the art.
+
+# 4.1 Training setup and implementation details
+
+Image representation. For all experiments we use a publicly-available, state-of-the-art model for image retrieval $[33]^2$ to extract the underlying features. We use the best-performing model from the project page (trained on Google Landmarks 2018 data [29]), consisting of a ResNet101 trunk followed by generalized-mean pooling and a whitening layer, which produces features of 2048 dimensions. Following [33], we extract features at 3 scales $(1, \sqrt{2}, 1 / \sqrt{2})$ , mean-aggregate them, and finally $\ell_2$ -normalize to form the final 2048D representation.
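The multi-scale aggregation step can be sketched as follows; `embed(image, scale)` is a hypothetical stand-in for the ResNet101 + GeM + whitening network of [33] applied to a rescaled image, and the signature is our own assumption.

```python
import numpy as np

def multiscale_descriptor(image, embed, scales=(1.0, 2 ** 0.5, 2 ** -0.5)):
    # Embed the image at each scale, mean-aggregate, then l2-normalize,
    # following the descriptor pipeline described in the text.
    feats = np.stack([embed(image, s) for s in scales])   # (n_scales, D)
    desc = feats.mean(axis=0)
    return desc / np.linalg.norm(desc)
```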
+
+Training dataset. We use the publicly available rSfM120k dataset created by Radenovic et al. [33], which comprises images selected from 3D reconstructions of landmarks and urban scenes. These reconstructions are obtained from an unordered image collection using a combined local-feature-based image retrieval and structure-from-motion pipeline. The 3D reconstruction cluster ids serve as supervision for selecting positive and negative pairs. In total, 91,642 images from 551 classes are used for training, while an additional 6,403 database images (1,691 of which are used as queries) from 162 classes, disjoint from the training ones, are set aside for validation. Performance on validation is measured as mean average precision (mAP) [30] over all 1,691 queries.
+
+Learning configuration. To train LAttQE we follow [33] and use a contrastive loss of the form $y z^{2} + (1 - y)\max(0, m - z)^{2}$, where $m$ is the margin, $z = \lVert\hat{\mathbf{q}} - \mathbf{d}\rVert$, and $y \in \{0, 1\}$ denotes whether $\mathbf{d}$ is relevant to $\mathbf{q}$ or not. We backpropagate through $\hat{\mathbf{q}}$, which in turn optimizes the transformers (see Fig. 2). Other recent ranking losses [9,36,28] could also be used. Since the base representations are already strong, we use a margin of 0.1, which ensures that positives are pulled together while only pushing away negatives that are too close to the query. LAttQE consists of a stack of 3 transformer encoders, each with 64 heads. We did not see any improvement after further increasing the capacity of the model. The self-attention and fully-connected layers within the encoders preserve the original dimensionality of the inputs, 2048D. We also follow [33] regarding the sampling strategy for positives and negatives: we select 5 negatives per positive, found in a pool of 20,000 samples that is refreshed every 2,000 updates. When sampling neighbors to construct the augmented query, as a form of data augmentation, the exact number of neighbors is drawn randomly between 32 and 64, and neighbors are also randomly dropped according to a Bernoulli distribution (where the probability of dropping neighbors in each query is itself drawn from a uniform distribution between 0 and 0.6). The auxiliary classification head uses a binary cross-entropy loss. We use Adam to optimize the model, with a batch size of 64 samples, a weight decay of $10^{-6}$, and an initial learning rate of $10^{-4}$ with an exponential decay of 0.99. The optimal number of epochs (typically between 50 and 100) is decided based on the accuracy on the validation set, and is typically within $1\%$ of the optimum that would be obtained by validating directly on the test set.
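The contrastive loss above is simple enough to state in a few lines; a minimal sketch, with argument names of our own choosing:

```python
import numpy as np

def contrastive_loss(q_exp, d, y, m=0.1):
    # y = 1 for relevant pairs (pull together); y = 0 for non-relevant
    # ones (push apart, but only when closer than the margin m).
    z = np.linalg.norm(q_exp - d)
    return y * z ** 2 + (1.0 - y) * max(0.0, m - z) ** 2
```

With the small margin of 0.1, a negative that is already farther than 0.1 from the expanded query contributes zero loss, which matches the text's rationale of only pushing away negatives that are too close.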
+
+# 4.2 Test datasets and evaluation protocol
+
+Revisited Oxford and Paris. The popular Oxford Buildings [30] and Paris [31] datasets were revisited by Radenovic et al. [34], who corrected and improved the annotations, added new, more difficult queries, and updated the evaluation protocol. The Revisited Oxford ($\mathcal{R}$Oxford) and Revisited Paris ($\mathcal{R}$Paris) datasets contain 4,993 and 6,322 images respectively, with 70 held-out images with regions of interest used as queries. Unlike in the original datasets, the full-size versions of the query images are not present on the database side of the revisited versions, making query expansion a more challenging task. For each query, the relevant database images were labeled according to the "difficulty" of the match. The labels are then used to define three evaluation protocols for $\mathcal{R}$Oxford and $\mathcal{R}$Paris: Easy (E), Medium (M), and Hard (H). As suggested by Radenovic et al. [34], who point out that the Easy protocol is saturated, we only report results on the Medium and Hard protocols. Note that the Oxford and Paris landmarks are not present in the rSfM120k training and validation datasets.
+
+Distractors. A set of 1 million hard distractor images ( $\mathcal{R}1\mathrm{M}$ ) were collected in [34]. These distractors can, optionally, be added to both $\mathcal{R}$ Oxford and $\mathcal{R}$ Paris to evaluate performance on a more realistic large-scale setup.
+
+We do not evaluate on INRIA Holidays [20], another common retrieval dataset, since performing query expansion on Holidays is not a standard practice.
+
+# 4.3 Model study
+
+Table 2 displays the results of our proposed model, using all components (row ii), and compares it with the results without query expansion (row i). We use 64 neighbors for query expansion, as validated on the validation set of rSfM120k. Our model clearly improves results on $\mathcal{R}$ Oxford and $\mathcal{R}$ Paris, both on the M and H settings. We further study the impact of the components introduced in Sec. 3.
+
+| | | $\mathcal{R}$Oxford (M) | $\mathcal{R}$Oxford (H) | $\mathcal{R}$Paris (M) | $\mathcal{R}$Paris (H) | Mean |
+| --- | --- | --- | --- | --- | --- | --- |
+| (i) | No QE | 67.3 | 44.3 | 80.6 | 61.5 | 63.4 |
+| (ii) | Full model | 73.4 | 49.6 | 86.3 | 70.6 | 70.0 |
+| (iii) | Without self-attention | 66.0 | 41.5 | 86.1 | 70.2 | 66.0 |
+| (iv) | Without positional encoding | 58.6 | 33.2 | 87.8 | 73.4 | 63.2 |
+| (v) | Without visual embedding | 67.1 | 42.9 | 83.8 | 66.7 | 65.1 |
+| (vi) | Without auxiliary loss | 71.8 | 47.0 | 85.8 | 69.4 | 68.5 |
+
+Table 2. Mean average precision (mAP) performance of the proposed model (ii) compared to the baseline without query expansion (i) and to variations where parts of the model have been removed (iii-vi).
+
+Self-attention: replacing the stack of self-attention encoders with a stack of fully-connected layers leads to a very noticeable drop in accuracy (iii), highlighting how important the attention is for this model.
+
+Positional encoding (PE): Removing the PE (iv) leads to a very pronounced loss in accuracy for $\mathcal{R}$Oxford (which has very few relevant images per query). PE is necessary for queries with few relevant items because the model has to learn which images are important, and anchoring to the query (through the PE) enables it to do so. This is less important for queries with many relevant items, as in $\mathcal{R}$Paris. We additionally experiment with a position-only setup (v), where the self-attention computes the weights using only the positional encodings, not the actual image embeddings. This leads to a content-unaware weighting function, similar to those of the AQE or AQEwD methods. The drop in accuracy is also substantial, highlighting the need to combine both content and positional information.
+
+Auxiliary loss: Removing the auxiliary loss (vi) leads to a small but consistent drop in accuracy. Although the model is fully functional without this auxiliary loss, it helps the optimization process to find better representations.
+
+Inference time: When considering 64 neighbors for the expansion, our non-optimized PyTorch implementation can encode, on average, about 250 queries per second on a single Tesla M40 GPU. This does not include the time to extract the query embedding, which is orders of magnitude slower than our method (about 4 images per second on the same GPU) and is the main bottleneck. Techniques such as distillation [38] and quantization [41], which have worked for transformer-based models, could further increase speed and reduce memory use.
+
+# 4.4 Comparison with existing methods
+
+Query expansion (QE). We compare the performance of our proposed method with existing QE approaches. All methods and their associated transformations are given in Table 1. For LAttQE, hyper-parameters are tuned on the validation set of rSfM120k, which has no landmarks or images overlapping with the test datasets. For competing methods, we select their hyper-parameters based on the mean performance over the test datasets, giving them an advantage. We denote the
+
+
+Fig. 3. Mean average precision over all queries of four protocols ( $\mathcal{R}$ Oxford (M & H) and $\mathcal{R}$ Paris (M & H)) as a function of the number of neighbors used for query expansion.
+
+number of neighbors used for QE as nQE. AQE: nQE=2; AQEwD: nQE=4; $\alpha$ QE: nQE=72, $\alpha = 3$ ; DQE: nQE=4, neg=5, $C = 0.1$ ; LAttQE: nQE=64.
+
+Database-side augmentation (DBA). All of the aforementioned methods can be combined with DBA. We separately tune all hyper-parameters in this combined scenario. We denote the number of neighbors used for DBA as nDBA. ADBA+AQE: nDBA=4, nQE=4; ADBAwD+AQEwD: nDBA=4, nQE=6; $\alpha$DBA+ $\alpha$QE: nDBA=36, nQE=10, $\alpha = 3$ ; DDBA+DQE: nDBA=4, nQE=2, $C = 0.1$ , neg=5; LAttDBA+LAttQE: nDBA=48, nQE=64.
+
+Sensitivity to the number of neighbors used in the QE. Figure 3 shows the mean accuracy of LAttQE as well as other query expansion methods on $\mathcal{R}$Oxford and $\mathcal{R}$Paris, as a function of the number of neighbors used in the expansion. We highlight: (i) Unsurprisingly, methods that assume all samples are positive (e.g. AQE, DQE) degrade very fast when the number of neighbors is not trivially small. AQEwD degrades a bit more gracefully, but can still obtain very poor results if nQE is not chosen carefully. (ii) It is also unsurprising that $\alpha$QE has become a standard, since its accuracy is high and its results do not degrade when nQE is high. However, this only happens because the weighting function is of the form $r^{\alpha}$, with $r < 1$, i.e., the weight rapidly converges to zero, and therefore most neighbors have barely any impact on the aggregation. (iii) Our proposed LAttQE consistently obtains the best results across the whole range of nQE. Our method is not limited by a weight that converges to zero, and can therefore still improve when $\alpha$QE has essentially converged ($nQE > 40$).
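The $r^{\alpha}$ decay discussed in point (ii) is easy to visualize numerically; the similarity values below are hypothetical, chosen only to illustrate why neighbors beyond roughly the 40th barely change the $\alpha$QE aggregation.

```python
import numpy as np

# alpha-QE weighs the i-th neighbor by r^alpha with r = sim(q, d_i) < 1,
# so weights shrink rapidly down the ranking and late neighbors barely
# contribute to the expanded query.
sims = np.linspace(0.95, 0.40, 60)   # a plausible decreasing similarity profile
weights = sims ** 3                  # alpha = 3, the setting used in Sec. 4.4
```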
+
+Fig. 4. Relative mean average precision (mAP) improvement at different numbers of relevant images (top) and AP regimes (bottom), split into 3 groups. Evaluation performed on $\mathcal{R}$Oxford and $\mathcal{R}$Paris at two difficulty setups, Medium (left) and Hard (right). The mean number of relevant images over all queries in the group (top) and the mean average precision over all queries in the group (bottom) are shown under each group's bar plot.
+
+Different "number of relevant images" and "AP" regimes. We evaluate the impact of query expansion at different regimes to showcase further differences between methods. In all cases we report the relative improvement in mAP introduced by using query expansion. In the first set of experiments, see Figure 4 (top), we group queries based on the number of relevant images, using percentiles 33 and 66 as cut-offs. AQE (with nQE=4) works very well for queries with very few relevant samples, but leads to small improvements when the number of relevant images is high, as they are not leveraged. On the other hand, $\alpha$QE, with $\alpha = 3$ and nQE=72, obtains good results when the number of relevant images is high, but struggles when it is low. LAttQE is the only method able to obtain high accuracy in all regimes. Figure 4 (bottom) groups queries based on their accuracy before query expansion. Similarly, LAttQE is the only method that consistently obtains high accuracy.
+
+State-of-the-art comparison. Table 3 reports the accuracy of different methods on $\mathcal{R}$Oxford and $\mathcal{R}$Paris, both with and without the $\mathcal{R}1\mathrm{M}$ distractor set. The optimal number of neighbors for our approach (64 for LAttQE and 48 for LAttDBA) was decided on the validation set of rSfM120k. On the other hand, the optimal number of neighbors for the remaining methods was adjusted directly on the test sets to maximize their mean accuracy on $\mathcal{R}$Oxford and $\mathcal{R}$Paris, giving them an unfair edge.
+
+| | $\mathcal{R}$Oxf (M) | $\mathcal{R}$Oxf (H) | $\mathcal{R}$Oxf+$\mathcal{R}$1M (M) | $\mathcal{R}$Oxf+$\mathcal{R}$1M (H) | $\mathcal{R}$Par (M) | $\mathcal{R}$Par (H) | $\mathcal{R}$Par+$\mathcal{R}$1M (M) | $\mathcal{R}$Par+$\mathcal{R}$1M (H) | Mean |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| No QE | 67.3 | 44.3 | 49.5 | 25.7 | 80.6 | 61.5 | 57.3 | 29.8 | 52.0 |
+| **QE** | | | | | | | | | |
+| [7] AQE | 72.3 | 49.0 | 57.3 | 30.5 | 82.7 | 65.1 | 62.3 | 36.5 | 56.9 |
+| [13] AQEwD | 72.0 | 48.7 | 56.9 | 30.0 | 83.3 | 65.9 | 63.0 | 37.1 | 57.1 |
+| [3] DQE | 72.7 | 48.8 | 54.5 | 26.3 | 83.7 | 66.5 | 64.2 | 38.0 | 56.8 |
+| [33] $\alpha$QE | 69.3 | 44.5 | 52.5 | 26.1 | 86.9 | 71.7 | 66.5 | 41.6 | 57.4 |
+| ★ LAttQE | 73.4 | 49.6 | 58.3 | 31.0 | 86.3 | 70.6 | 67.3 | 42.4 | 59.8 |
+| **DBA + QE** | | | | | | | | | |
+| [7] ADBA + AQE | 71.9 | 53.6 | 55.3 | 32.8 | 83.9 | 68.0 | 65.0 | 39.6 | 58.8 |
+| [13] ADBAwD + AQEwD | 73.2 | 53.2 | 57.9 | 34.0 | 84.3 | 68.7 | 65.6 | 40.8 | 59.7 |
+| [3] DDBA + DQE | 72.0 | 50.7 | 56.9 | 32.9 | 83.2 | 66.7 | 65.4 | 39.1 | 58.4 |
+| [33] $\alpha$DBA + $\alpha$QE | 71.7 | 50.7 | 56.0 | 31.5 | 87.5 | 73.5 | 70.6 | 48.5 | 61.3 |
+| ★ LAttDBA + LAttQE | 74.0 | 54.1 | 60.0 | 36.3 | 87.8 | 74.1 | 70.5 | 48.3 | 63.1 |
+
+Table 3. Performance evaluation via mean average precision (mAP) on $\mathcal{R}$Oxford ($\mathcal{R}$Oxf) and $\mathcal{R}$Paris ($\mathcal{R}$Par), with and without 1 million distractors ($\mathcal{R}$1M). Our method, marked with $\star$, is validated on the validation part of rSfM120k. The other methods are validated directly on the mAP over all queries of the 4 protocols of $\mathcal{R}$Oxford and $\mathcal{R}$Paris.
+
+Our method is the only one that consistently obtains good results on both $\mathcal{R}$Oxford and $\mathcal{R}$Paris. Compare this to the other methods, where, for example, $\alpha$QE obtains the best results on $\mathcal{R}$Paris but the worst results on $\mathcal{R}$Oxford, while AQE obtains the best results on $\mathcal{R}$Oxford (except for our method) but the worst results on $\mathcal{R}$Paris. This gap generally becomes even larger when including the $\mathcal{R}1\mathrm{M}$ distractors. When using DBA and QE we observe the same trends: although some methods can be slightly more accurate on specific datasets, our approach is the only one that obtains consistently good results on all datasets.
+
+# 5 Conclusions
+
+In this paper we have presented a novel framework for learning how to perform query expansion and database-side augmentation for image retrieval tasks. Within this framework we have proposed LAttQE, an attention-based model that outperforms commonly-used query expansion techniques on standard benchmarks while being more robust across different regimes. Beyond LAttQE, we believe that the main idea of our method, tackling aggregation for query expansion as a supervised task learned in a discriminative manner, is general and novel, and we hope that more methods build on top of this idea, proposing new aggregation models that lead to more efficient and accurate search systems.
+
+# References
+
+1. Alletto, S., Abati, D., Serra, G., Cucchiara, R.: Exploring architectural details through a wearable egocentric vision device. Sensors (2016) 1
+2. Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: Netvlad: Cnn architecture for weakly supervised place recognition. In: CVPR (2016) 1
+3. Arandjelovic, R., Zisserman, A.: Three things everyone should know to improve object retrieval. In: CVPR (2012) 2, 3, 4, 6, 8, 14
+4. Azad, H.K., Deepak, A.: Query expansion techniques for information retrieval: a survey. IP&M (2019) 4
+5. Chang, C., Yu, G., Liu, C., Volkovs, M.: Explore-exploit graph traversal for image retrieval. In: CVPR (2019) 4
+6. Chum, O., Mikulík, A., Perdoch, M., Matas, J.: Total recall II: Query expansion revisited. In: CVPR (2011) 2, 3
+7. Chum, O., Philbin, J., Sivic, J., Isard, M., Zisserman, A.: Total recall: Automatic query expansion with a generative feature model for object retrieval. In: CVPR (2007) 1, 2, 3, 4, 6, 14
+8. Delvinioti, A., Jégou, H., Amsaleg, L., Houle, M.E.: Image retrieval with reciprocal and shared nearest neighbors. In: VISAPP (2014) 4
+9. Deng, J., Guo, J., Xue, N., Zafeiriou, S.: Arcface: Additive angular margin loss for deep face recognition. In: CVPR (2019) 9
+10. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL (2019) 4
+11. Fan, L., Zhao, H., Zhao, H., Liu, P., Hu, H.: Image retrieval based on learning to rank and multiple loss. IJGI (2019) 3
+12. Girdhar, R., Ramanan, D.: Attentional pooling for action recognition. In: NeurIPS (2017) 4
+13. Gordo, A., Almazan, J., Revaud, J., Larlus, D.: End-to-end learning of deep visual representations for image retrieval. IJCV (2017) 1, 2, 3, 4, 5, 6, 8, 14
+14. Gu, Y., Li, C., Xie, J.: Attention-aware generalized mean pooling for image retrieval. arXiv:1811.00202 (2019) 3
+15. Hadsell, R., Chopra, S., LeCun, Y.: Dimensionality reduction by learning an invariant mapping. In: CVPR (2006) 6
+16. Heinly, J., Schonberger, J.L., Dunn, E., Frahm, J.M.: Reconstructing the world* in six days* (as captured by the Yahoo 100 million image dataset). In: CVPR (2015) 1
+17. Husain, S.S., Bober, M.: REMAP: Multi-layer entropy-guided pooling of dense CNN features for image retrieval. TIP (2019) 3
+18. Husain, S.S., Ong, E.J., Bober, M.: ACTNET: End-to-end learning of feature activations and multi-stream aggregation for effective instance image retrieval. arXiv:1907.05794 (2019) 3
+19. Iscen, A., Tolias, G., Avrithis, Y., Furon, T., Chum, O.: Efficient diffusion on region manifolds: Recovering small objects with compact CNN representations. In: CVPR (2017) 4
+20. Jegou, H., Douze, M., Schmid, C.: Hamming embedding and weak geometric consistency for large scale image search. In: ECCV (2008) 10
+21. Kalantidis, Y., Tolias, G., Avrithis, Y., Phinikettos, M., Spyrou, E., Mylonas, P., Kollias, S.: Viral: Visual image retrieval and localization. Multimedia Tools and Applications (2011) 1
+22. Lee, J., Lee, I., Kang, J.: Self-attention graph pooling. In: ICML (2019) 4
+
+23. Liu, C., Yu, G., Volkovs, M., Chang, C., Rai, H., Ma, J., Gorti, S.K.: Guided similarity separation for image retrieval. In: NIPS (2019) 4
+24. Makantasis, K., Doulamis, A., Doulamis, N., Ioannides, M.: In the wild image retrieval and clustering for 3d cultural heritage landmarks reconstruction. Multimedia Tools and Applications (2016) 1
+25. Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press (2008) 4
+26. Maron, M.E., Kuhns, J.L.: On relevance, probabilistic indexing and information retrieval. JACM (1960) 4
+27. Mikulik, A., Chum, O., Matas, J.: Image retrieval for online browsing in large image collections. In: SISAP (2013) 1
+28. Ng, T., Balntas, V., Tian, Y., Mikolajczyk, K.: Solar: Second-order loss and attention for image retrieval. arXiv:2001.08972 (2020) 9
+29. Noh, H., Araujo, A., Sim, J., Weyand, T., Han, B.: Large-scale image retrieval with attentive deep local features. In: ICCV (2017) 2, 9
+30. Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Object retrieval with large vocabularies and fast spatial matching. In: CVPR (2007) 9, 10
+31. Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Lost in quantization: Improving particular object retrieval in large scale image databases. In: CVPR (2008) 10
+32. Qin, D., Gammeter, S., Bossard, L., Quack, T., Van Gool, L.: Hello neighbor: Accurate object retrieval with k-reciprocal nearest neighbors. In: CVPR (2011) 4
+33. Radenovic, F., Tolias, G., Chum, O.: Fine-tuning cnn image retrieval with no human annotation. TPAMI (2018) 1, 2, 3, 4, 5, 6, 9, 10, 14
+34. Radenović, F., Iscen, A., Tolias, G., Avrithis, Y., Chum, O.: Revisiting oxford and paris: Large-scale image retrieval benchmarking. In: CVPR (2018) 3, 10
+35. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog (2019) 4
+36. Revaud, J., Almazan, J., de Rezende, R.S., de Souza, C.R.: Learning with average precision: Training image retrieval with a listwise loss. In: ICCV (2019) 3, 5, 9
+37. Rocchio, J.: Relevance feedback in information retrieval. The SMART Retrieval System (1971) 4
+38. Sanh, V., Debut, L., Chaumont, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In: NeurIPS Workshop (2019) 11
+39. Sattler, T., Weyand, T., Leibe, B., Kobbelt, L.: Image retrieval for image-based localization revisited. In: BMVC (2012) 1
+40. Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016) 1
+41. Shen, S., Dong, Z., Ye, J., Ma, L., Yao, Z., Gholami, A., Mahoney, M.W., Keutzer, K.: Q-BERT: Hessian based ultra low precision quantization of BERT. In: AAAI (2020) 11
+42. Shen, X., Lin, Z., Brandt, J., Wu, Y.: Spatially-constrained similarity measure for large-scale object retrieval. TPAMI (2013) 4
+43. Sivic, J., Zisserman, A.: Video google: A text retrieval approach to object matching in videos. In: ICCV (2003) 1, 3
+44. Tolias, G., Avrithis, Y., Jégou, H.: Image search with selective match kernels: aggregation across single and multiple images. IJCV (2015) 8
+45. Tolias, G., Jégou, H.: Visual query expansion with or without geometry: refining local descriptors by feature aggregation. PR (2014) 2, 3
+46. Turcot, T., Lowe, D.G.: Better matching with fewer features: The selection of useful features in large database recognition problems. In: ICCV Workshop (2009) 8
+
+47. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Lukasz Kaiser, Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 4, 6, 7
+48. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR (2018) 4
+49. Weinberger, K.Q., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. JMLR (2009) 6
+50. Weyand, T., Leibe, B.: Discovering favorite views of popular places with iconoid shift. In: ICCV (2011) 1
\ No newline at end of file
diff --git a/attentionbasedqueryexpansionlearning/images.zip b/attentionbasedqueryexpansionlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ecf159c23f3957ce1087bc16c2c8f885c91e61d6
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2cf471e8b0954c8ecd3c9bd75d6c0e047747c4407852a07d8680f6e83ad7b43
+size 358965
diff --git a/attentionbasedqueryexpansionlearning/layout.json b/attentionbasedqueryexpansionlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..254e44034cd8733823f2dbfdab6ae3e6e6e1a660
--- /dev/null
+++ b/attentionbasedqueryexpansionlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41afd0f48d5f5424b73ac0ba47fffff288b4b28184b60840b49c18373102abc6
+size 421738
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_content_list.json b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee92d04f3036249d7ab18e768e341b6f3ef5e16f
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c745240bb75290f239c134ed2ef3e6d1a7a70ecb5ed69c809cc629e6f0276896
+size 89177
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_model.json b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dcbed9e30de592ae69991bd30588da2c00b54d7a
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7dad05f879c5e1401b0ea681a74dc9e5e3089e3376fae09886d92d6b44facf57
+size 102814
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_origin.pdf b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ff828d7269c46fb2d647943607a270d60f7fcdb4
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/1b0f2369-13b1-4b50-9f85-c3a30a07b61e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8073a0c698f51fa0a596306c13407ddf61c53244f19e3a7d00515a508c9a71b3
+size 1359373
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/full.md b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdf7f96dd474115e1bb7e73db609b3f9c4da3c2a
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/full.md
@@ -0,0 +1,316 @@
+# Attention-Driven Dynamic Graph Convolutional Network for Multi-Label Image Recognition
+
+Jin Ye $^{1*}$ , Junjun He $^{1,2*}$ , Xiaojiang Peng $^{1*}$ , Wenhao Wu $^{1}$ , and Yu Qiao $^{1\dagger}$
+
+1 ShenZhen Key Lab of Computer Vision and Pattern Recognition, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
+
+$^{2}$ School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
+
+Abstract. Recent studies often exploit a Graph Convolutional Network (GCN) to model label dependencies and improve recognition accuracy for multi-label image recognition. However, constructing a graph by counting the label co-occurrence possibilities of the training data may degrade model generalizability, especially when occasionally co-occurring objects appear in test images. Our goal is to eliminate such bias and enhance the robustness of the learnt features. To this end, we propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) that dynamically generates a specific graph for each image. ADD-GCN adopts a Dynamic Graph Convolutional Network (D-GCN) to model the relations of the content-aware category representations generated by a Semantic Attention Module (SAM). Extensive experiments on public multi-label benchmarks demonstrate the effectiveness of our method, which achieves mAPs of $85.2\%$ , $96.0\%$ , and $95.5\%$ on MS-COCO, VOC2007, and VOC2012, respectively, outperforming current state-of-the-art methods by a clear margin.
+
+Keywords: Multi-label image recognition, semantic attention, label dependency, dynamic graph convolutional network
+
+# 1 Introduction
+
+Natural scenes usually contain multiple objects. Multi-label image recognition is a fundamental computer vision task that plays a critical role in a wide range of applications such as human attribute recognition [19], medical image recognition [9], and recommendation systems [15, 33]. Unlike single-label classification, multi-label image recognition needs to assign multiple labels to a single image. It is therefore reasonable to take the relationships among different labels into account to enhance recognition performance.
+
+Recently, the Graph Convolutional Network (GCN) [16] has achieved great success in modeling relationships among the vertices of a graph. Current state-of-the-art methods [2, 4] build a complete graph that models the label correlation between every pair of categories by exploiting the prior frequency of label co-occurrence in the
+
+
+Fig. 1. Static graph (a) vs. dynamic graph (b). A solid line indicates a stronger relation and a dashed line a weaker relation between categories. In (a), all images share a single static graph [2,3]; (b) illustrates our motivation: each image has its own graph that describes the relations of the categories co-occurring in that image.
+
+target dataset, achieving remarkable results. However, building such a global graph for the whole dataset can cause a frequency-bias problem. As highlighted in [25, 26], most prominent vision datasets are afflicted with co-occurrence frequency biases despite the best efforts of their creators. Consider the common category "car", which usually appears with other kinds of vehicles such as "truck", "motorbike", and "bus". This inadvertently introduces a frequency bias into these datasets, which guides the model to learn stronger relations among such categories. Specifically, as shown in Fig 1(a), all images share a static graph built by counting the co-occurrence frequency of categories in the target dataset. The static graph assigns higher relation values between "car" and "truck" and lower ones between "car" and "toilet" in every image. This may cause several problems: 1) failing to identify "car" in a different context, e.g., in the absence of "truck"; 2) hallucinating "truck" even in a scene containing only "car"; and 3) ignoring "toilet" when "car" co-occurs with "toilet".
+
+Given these issues, our goal is to build a dynamic graph that captures the content-aware category relations of each image. Specifically, as shown in Fig 1(b), we construct an image-specific dynamic graph in which "car" and "toilet" have strong connections for an image where they appear together, and vice versa. To this end, we propose a novel Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) for multi-label image recognition, which leverages content-aware category representations to construct a dynamic graph representation. Unlike previous graph-based methods [2,4], ADD-GCN models the semantic relations of each input image by estimating an image-specific dynamic graph. Specifically, we first decompose the convolutional feature map
+
+into multiple content-aware category representations through the Semantic Attention Module (SAM). Then we feed these representations into a Dynamic GCN (D-GCN) module, which performs feature propagation via two joint graphs: a static graph and a dynamic graph. Finally, D-GCN generates discriminative vectors for multi-label classification. The static graph mainly captures coarse label dependencies over the training dataset and learns semantic relations such as those shown in Fig 1(a). The correlation matrix of the dynamic graph is the output feature map of a light-weight network applied to the content-aware category representations of each image, and captures the fine dependencies of those representations, as illustrated in Fig 1(b).
+
+Our main contributions can be summarized as follows,
+
+- The major contribution of this paper is a novel dynamic graph constructed from content-aware category representations for multi-label image recognition. The dynamic graph captures the category relations of a specific image in an adaptive way, which further enhances its representational and discriminative ability.
+
+- We carefully design an end-to-end Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN), which consists of two joint modules: i) a Semantic Attention Module (SAM) that locates semantic regions and produces content-aware category representations for each image, and ii) a Dynamic Graph Convolutional Network (D-GCN) that models the relations among the content-aware category representations for final classification.
+
+- Our ADD-GCN significantly outperforms recent state-of-the-art approaches on popular multi-label datasets: MS-COCO, VOC2007, and VOC2012. Specifically, our ADD-GCN achieves mAPs of $85.2\%$ on MS-COCO, $96.0\%$ on VOC2007, and $95.5\%$ on VOC2012, respectively, which are new records on these benchmarks.
+
+# 2 Related work
+
+The recent renaissance of deep neural networks has remarkably accelerated progress in single-label image recognition. Convolutional Neural Networks (CNNs) can learn powerful features from large-scale image datasets such as MS-COCO [20], PASCAL VOC [7], and ImageNet [6], which greatly alleviates the difficulty of designing hand-crafted features. Recently, many CNN-based approaches have also been proposed for multi-label image recognition [2, 5, 10, 23, 37, 27, 30]; they can be roughly categorized into two main directions, as follows.
+
+Region based methods. One direction aims to first coarsely localize multiple regions and then recognize each region with CNNs [5, 10, 23, 37]. Wei et al. [31] propose a Hypotheses-CNN-Pooling (HCP) framework which generates a large number of proposals by objectness detection methods [5, 37] and treats each proposal as a single-label image recognition problem. Yang et al. [32] formulate the task as a multi-class multi-instance learning problem. Specifically, they incorporate local information by generating a bag of instances for each image and enhance the discriminative features with label information. However,
+
+these object-proposal-based methods produce numerous category-agnostic regions, which makes the whole framework complicated and requires massive computational cost. Moreover, these methods largely ignore label dependencies and region relations, which are essential for multi-label image recognition.
+
+Relation based methods. Another direction aims to exploit label dependencies or region relations [27, 30, 17, 4, 2, 29]. Wang et al. [27] propose a CNN-RNN framework that predicts the final scores and models label relations with a Recurrent Neural Network (RNN). Wang et al. [30] attempt to discover such relations by iteratively locating attention regions with a spatial transformer [14] and an LSTM [13]. These RNN/LSTM-based methods explore the relations between labels or semantic regions in a sequential way, which cannot fully exploit the direct relations among them. Different from these sequential methods, some works resort to graphical architectures. Li et al. [17] handle such relations with image-dependent conditional label structures in a Graphical Lasso framework. Li et al. [18] use a maximum spanning tree algorithm to create a tree-structured graph in the label space. Recently, the remarkable capacity of Graph Convolutional Networks (GCNs) has been demonstrated in several vision tasks. Chen et al. [4] utilize a GCN to propagate prior label representations (e.g., word embeddings) and generate a classifier that replaces the last linear layer of a standard deep convolutional neural network such as ResNet [11]. With the help of label annotations, Chen et al. [2] compute a probabilistic matrix as the relation edge between each pair of labels in a graph. Our work is largely inspired by these GCN-based methods for multi-label image recognition. However, instead of using external word embeddings for category representations and label statistics for graph construction, our Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) directly decomposes the feature map extracted by a CNN backbone into content-aware category representations and optimizes a D-GCN, which consists of a static graph that captures global coarse category dependencies and a dynamic graph that exploits content-dependent category relations.
+
+# 3 Method
+
+This section presents Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) for multi-label image recognition. We first give a brief overview of ADD-GCN, and then describe its key modules (Semantic Attention Module and Dynamic GCN module) in details.
+
+# 3.1 Overview of ADD-GCN
+
+As objects always co-occur in images, effectively capturing the relations among them is important for multi-label recognition. Graph-based representations provide a practical way to model label dependencies. We can use nodes $\mathbf{V} = [\mathbf{v}_1,\mathbf{v}_2,\dots ,\mathbf{v}_C]$ to represent labels and a correlation matrix $\mathbf{A}$ to represent the label relations (edges). Recent studies [2, 4] exploit a Graph Convolutional Network (GCN) to improve the performance of multi-label image recognition
+
+
+Fig. 2. Overall framework of our approach. Given an image, ADD-GCN first uses a CNN backbone to extract a convolutional feature map $\mathbf{X}$ . Then, SAM decouples $\mathbf{X}$ into content-aware category representations $\mathbf{V}$ , and D-GCN models global and local relations among $\mathbf{V}$ to generate the final robust representations $\mathbf{Z}$ , each of which contains rich relation information about the other categories.
+
+with a clear margin. However, they construct the correlation matrix $\mathbf{A}$ in a static way: it mainly accounts for label co-occurrence in the training dataset and is fixed for every input image. As a result, these methods fail to explicitly exploit the content of each specific input image.
+
+To address this problem, we propose ADD-GCN with two carefully designed modules. We first introduce the Semantic Attention Module (SAM) to estimate a content-aware category representation $\mathbf{v}_c$ for each class $c$ from the extracted feature map; these representations are then fed into another module, the Dynamic GCN, for final classification. We detail both modules below.
+
+# 3.2 Semantic Attention Module
+
+The objective of the Semantic Attention Module (SAM) is to obtain a set of content-aware category representations, each of which describes the content related to a specific label in the input feature map $\mathbf{X} \in \mathbb{R}^{H \times W \times D}$ . As shown in Fig 2, SAM first calculates category-specific activation maps $\mathbf{M} = [\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_C] \in \mathbb{R}^{H \times W \times C}$ , which are then used to convert the transformed feature map $\mathbf{X}' \in \mathbb{R}^{H \times W \times D'}$ into the content-aware category representations $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_C] \in \mathbb{R}^{C \times D'}$ . Specifically, each class representation $\mathbf{v}_c$ is formulated as a weighted sum over $\mathbf{X}'$ as follows, such that the produced $\mathbf{v}_c$ selectively aggregates features
+
+related to its specific category $c$ .
+
+$$
+\mathbf {v} _ {c} = \mathbf {m} _ {c} ^ {T} \mathbf {X} ^ {\prime} = \sum_ {i = 1} ^ {H} \sum_ {j = 1} ^ {W} m _ {i, j} ^ {c} \mathbf {x} _ {i, j} ^ {\prime}, \tag {1}
+$$
+
+where $m_{i,j}^{c}$ and $\mathbf{x}_{i,j}^{\prime}\in \mathbb{R}^{D^{\prime}}$ are the weight of the $c$ -th activation map and the feature vector of the transformed feature map at position $(i,j)$ , respectively. The problem then reduces to how to calculate the category-specific activation maps $\mathbf{M}$ ; the difficulty is that we have no explicit supervision, such as bounding boxes or category segmentation masks, for the images.
+
+Activation map generation. We generate the category-specific activation maps $\mathbf{M}$ based on Class Activation Mapping (CAM) [35], a technique that exposes the implicit attention on an image without bounding boxes or segmentation. Specifically, one can perform Global Average Pooling (GAP) or Global Max Pooling (GMP) on the feature map $\mathbf{X}$ and classify the pooled features with FC classifiers; the category-specific activation maps are then identified by convolving the weights of the FC classifiers with the feature map $\mathbf{X}$ . Unlike CAM, we use a convolution layer as the classifier, followed by a Sigmoid( $\cdot$ ) to regularize $\mathbf{M}$ , before the global spatial pooling, which performs better in our experiments. Ablation studies on these choices are presented in Table 6.
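+To make Eq. (1) and the activation-map step concrete, here is a minimal NumPy sketch of SAM. The weight names (`W_cls` for the 1x1-conv classifier, `Wt` for the transform producing $\mathbf{X}'$) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_attention(X, W_cls, Wt):
    """Sketch of SAM: activation maps M and category representations V.

    X     : (H, W, D)  backbone feature map
    W_cls : (D, C)     1x1-conv classifier weights producing activation maps
    Wt    : (D, Dp)    1x1-conv weights producing the transformed map X'
    Returns M of shape (H, W, C) and V of shape (C, Dp).
    """
    H, Wd, D = X.shape
    # Category-specific activation maps, regularized by a sigmoid (Sec. 3.2).
    M = sigmoid(X.reshape(-1, D) @ W_cls).reshape(H, Wd, -1)   # (H, W, C)
    Xp = (X.reshape(-1, D) @ Wt).reshape(H, Wd, -1)            # (H, W, Dp)
    # Eq. (1): v_c = sum_{i,j} m^c_{i,j} * x'_{i,j}
    V = np.einsum('hwc,hwd->cd', M, Xp)                        # (C, Dp)
    return M, V
```

+Global spatial pooling of `M` would also yield per-category classification scores, matching the role the maps play in the final-classification step.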
+
+# 3.3 Dynamic GCN
+
+With the content-aware category representations $\mathbf{V}$ obtained in the previous section, we introduce the Dynamic GCN (D-GCN) to adaptively model their correlations for multi-label recognition. Recently, the Graph Convolutional Network (GCN) [16] has proven effective in several computer vision tasks and has been applied to model label dependencies for multi-label image recognition with a static graph [2, 4]. Different from these works, we propose a novel D-GCN that fully exploits the relations between content-aware category representations to generate discriminative vectors for final classification. Specifically, our D-GCN consists of two graph representations, a static graph and a dynamic graph, as shown in Fig 2. We first revisit the traditional GCN and then detail our D-GCN.
+
+Revisiting GCN. Given a set of features $\mathbf{V} \in \mathbb{R}^{C \times D}$ as input nodes, a GCN utilizes a correlation matrix $\mathbf{A} \in \mathbb{R}^{C \times C}$ and a state-update weight matrix $\mathbf{W} \in \mathbb{R}^{D \times D_u}$ to update the values of $\mathbf{V}$ . Formally, the updated nodes $\mathbf{V}_u \in \mathbb{R}^{C \times D_u}$ are computed by a single-layer GCN as
+
+$$
+\mathbf {V} _ {u} = \delta (\mathbf {A V W}), \tag {2}
+$$
+
+where $\mathbf{A}$ is usually pre-defined and $\mathbf{W}$ is learned during training. $\delta(\cdot)$ denotes an activation function, such as $ReLU(\cdot)$ or $Sigmoid(\cdot)$ , which makes the whole operation nonlinear. The correlation matrix $\mathbf{A}$ reflects the relations between the features of the nodes. During inference, $\mathbf{A}$ first diffuses the correlated information among all nodes; each node then receives all necessary information, and its state is updated through the linear transformation $\mathbf{W}$ .
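+As a concrete reference, Eq. (2) with a ReLU activation reduces to a few lines of NumPy; this is a generic single-layer GCN sketch, not the authors' code.

```python
import numpy as np

def gcn_layer(V, A, W):
    """Single-layer GCN, Eq. (2): V_u = delta(A V W), with ReLU as delta.

    V : (C, D)  node features    A : (C, C)  correlation matrix
    W : (D, Du) state-update weights
    """
    # A diffuses information among nodes; W linearly updates each node state.
    return np.maximum(A @ V @ W, 0.0)   # (C, Du)
```

+With $\mathbf{A}$ set to the identity, the layer degenerates to an ordinary per-node linear transform, which is why the correlation matrix carries all of the relational modeling.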
+
+D-GCN. As shown at the bottom of Fig 2, D-GCN takes the content-aware category representations $\mathbf{V}$ as input node features and sequentially feeds them through a static GCN and a dynamic GCN. Specifically, the single-layer static GCN is defined as $\mathbf{H} = LReLU(\mathbf{A}_s\mathbf{V}\mathbf{W}_s)$ , where $\mathbf{H} = [\mathbf{h}_1,\mathbf{h}_2,\dots,\mathbf{h}_C] \in \mathbb{R}^{C \times D_1}$ , the activation function $LReLU(\cdot)$ is LeakyReLU, and the correlation matrix $\mathbf{A}_s$ and state-update weights $\mathbf{W}_s$ are randomly initialized and learned by gradient descent during training. Since $\mathbf{A}_s$ is shared across all images, we expect it to capture global coarse category dependencies.
+
+Next, we introduce the dynamic GCN to transform $\mathbf{H}$ ; its correlation matrix $\mathbf{A}_d$ is estimated adaptively from the input features $\mathbf{H}$ . Note that this differs from the static GCN, whose correlation matrix is fixed and shared by all input samples after training, whereas our $\mathbf{A}_d$ is constructed dynamically from the input features. Since every sample has its own $\mathbf{A}_d$ , the model gains representational ability and the over-fitting risk that a static graph brings is reduced. Formally, the output $\mathbf{Z} \in \mathbb{R}^{C \times D_2}$ of the dynamic GCN is defined as
+
+$$
+\mathbf{Z} = f\left(\mathbf{A}_{d} \mathbf{H} \mathbf{W}_{d}\right), \quad \text{where} \quad \mathbf{A}_{d} = \delta\left(\mathbf{W}_{A} \mathbf{H}^{\prime}\right), \tag{3}
+$$
+
+where $f(\cdot)$ is the LeakyReLU activation function, $\delta(\cdot)$ is the Sigmoid activation function, $\mathbf{W}_d \in \mathbb{R}^{D_1 \times D_2}$ contains the state-update weights, $\mathbf{W}_A \in \mathbb{R}^{C \times 2D_1}$ contains the weights of a conv layer that forms the dynamic correlation matrix $\mathbf{A}_d$ , and $\mathbf{H}' \in \mathbb{R}^{2D_1 \times C}$ is obtained by concatenating $\mathbf{H}$ with its global representation $\mathbf{h}_g \in \mathbb{R}^{D_1}$ , which is computed by applying global average pooling and then one conv layer to $\mathbf{H}$ . Formally, $\mathbf{H}'$ is defined as
+
+$$
+\mathbf{H}^{\prime} = \left[\left(\mathbf{h}_{1}; \mathbf{h}_{g}\right), \left(\mathbf{h}_{2}; \mathbf{h}_{g}\right), \dots, \left(\mathbf{h}_{C}; \mathbf{h}_{g}\right)\right]. \tag{4}
+$$
+
+It is worth mentioning that the dynamic graph $\mathbf{A}_d$ is specific to each image and can thus capture content-dependent category dependencies. Overall, our D-GCN enhances the content-aware category representations from $\mathbf{V}$ to $\mathbf{Z}$ via the dataset-specific graph and the image-specific graph.
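+The full D-GCN pass (static GCN, global vector $\mathbf{h}_g$, dynamic correlation matrix $\mathbf{A}_d$ of Eq. (3), and the dynamic GCN) can be sketched as follows. All weight names are illustrative assumptions; a real implementation would learn them end-to-end by gradient descent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def dynamic_gcn(V, A_s, W_s, W_g, W_A, W_d):
    """Sketch of D-GCN (Sec. 3.3); weight names are illustrative.

    V   : (C, D)    content-aware category representations
    A_s : (C, C)    learned static correlation matrix (shared by all images)
    W_s : (D, D1)   static-GCN state-update weights
    W_g : (D1, D1)  conv producing the global vector h_g
    W_A : (C, 2*D1) conv producing the dynamic correlation matrix A_d
    W_d : (D1, D2)  dynamic-GCN state-update weights
    """
    # Static GCN: H = LReLU(A_s V W_s), shared graph over the dataset.
    H = leaky_relu(A_s @ V @ W_s)                        # (C, D1)
    # Global representation: average pooling over nodes, then a conv.
    h_g = H.mean(axis=0) @ W_g                           # (D1,)
    # Eq. (4): H' stacks (h_c; h_g) column-wise, shape (2*D1, C).
    Hp = np.concatenate([H.T, np.tile(h_g[:, None], (1, H.shape[0]))], axis=0)
    # Eq. (3): A_d = sigmoid(W_A H'), the image-specific dynamic graph.
    A_d = sigmoid(W_A @ Hp)                              # (C, C)
    # Dynamic GCN: Z = LReLU(A_d H W_d).
    return leaky_relu(A_d @ H @ W_d)                     # (C, D2)
```

+Because `A_d` is recomputed from `H` for every input, each image propagates information along its own graph, which is precisely the property the static graph lacks.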
+
+# 3.4 Final Classification and Loss
+
+Final Classification. As shown in Fig 2, the final category representations $\mathbf{Z} = [\mathbf{z}_1,\mathbf{z}_2,\dots ,\mathbf{z}_C]$ are used for final classification. Since each vector $\mathbf{z}_i$ is aligned with its specific class and contains rich relation information about the others, we simply feed each category vector into a binary classifier to predict its category score. In particular, we concatenate the scores of all categories into the score vector $\mathbf{s}_r = [s_r^1,s_r^2,\ldots ,s_r^C ]$ . In addition, we obtain another set of confidence scores $\mathbf{s}_m = [s_m^1,s_m^2,\dots ,s_m^C ]$ through global spatial pooling on the category-specific activation maps $\mathbf{M}$ estimated by SAM in Section 3.2. We aggregate the two score vectors to predict more reliable results; here we simply average them to produce the final scores $\mathbf{s} = [s^{1},s^{2},\dots ,s^{C}]$ .
+
+Training Loss. We supervise the final score $\mathbf{s}$ and train the whole ADD-GCN with the traditional multi-label classification loss as follows,
+
+$$
+L(\mathbf{y}, \mathbf{s}) = -\sum_{c = 1}^{C}\left[ y^{c} \log\left(\sigma\left(s^{c}\right)\right) + \left(1 - y^{c}\right) \log\left(1 - \sigma\left(s^{c}\right)\right) \right], \tag{5}
+$$
+
+where $\sigma (\cdot)$ denotes the Sigmoid function.
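+Assuming the final scores $\mathbf{s}$ are the average of $\mathbf{s}_r$ and $\mathbf{s}_m$ as described above, Eq. (5) is the standard multi-label binary cross-entropy. A minimal sketch (the added epsilon is for numerical safety only and is not part of the paper's formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_bce(y, s):
    """Multi-label binary cross-entropy of Eq. (5).

    y : (C,) binary ground-truth labels
    s : (C,) final scores, e.g. s = (s_r + s_m) / 2
    """
    p = sigmoid(s)
    eps = 1e-12  # numerical safety, not part of Eq. (5)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

+Each category contributes an independent binary term, so the loss treats multi-label recognition as $C$ parallel binary classification problems over the relation-enhanced scores.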
+
+# 4 Experiments
+
+In this section, we first introduce the evaluation metrics and our implementation details. We then compare ADD-GCN with existing state-of-the-art methods on three public multi-label image recognition datasets, i.e., MS-COCO [20], Pascal VOC 2007 [7], and Pascal VOC 2012 [7]. Finally, we conduct extensive ablation studies and present visualizations of the category-specific activation maps and the dynamic graphs.
+
+# 4.1 Evaluation Metrics
+
+For fair comparison with existing methods, we follow previous works [2, 4, 36] and adopt the average of overall/per-class precision (OP/CP), overall/per-class recall (OR/CR), overall/per-class F1-score (OF1/CF1), and the mean Average Precision (mAP) as evaluation metrics. When measuring precision/recall/F1-score, a label is considered positive if its confidence score is greater than 0.5. Top-3 results of precision/recall/F1-score are also reported. Generally, OF1, CF1, and mAP are more important than the other metrics.
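+For reference, the overall and per-class precision/recall/F1 metrics can be computed as follows. This is a standard sketch under the 0.5-threshold convention above, not tied to any particular codebase.

```python
import numpy as np

def multilabel_prf(Y_true, Y_score, thresh=0.5):
    """Per-class (CP/CR/CF1) and overall (OP/OR/OF1) metrics.

    Y_true  : (N, C) binary labels    Y_score : (N, C) confidence scores
    A label is predicted positive when its confidence exceeds `thresh`.
    """
    Y_pred = (Y_score > thresh).astype(int)
    tp = (Y_pred * Y_true).sum(axis=0).astype(float)   # per-class true positives
    pred_pos = Y_pred.sum(axis=0)                      # per-class predicted positives
    real_pos = Y_true.sum(axis=0)                      # per-class ground-truth positives
    eps = 1e-12
    # Per-class: average precision/recall over classes, then take F1.
    CP = np.mean(tp / (pred_pos + eps))
    CR = np.mean(tp / (real_pos + eps))
    CF1 = 2 * CP * CR / (CP + CR + eps)
    # Overall: pool counts over all classes first, then compute the ratios.
    OP = tp.sum() / (pred_pos.sum() + eps)
    OR = tp.sum() / (real_pos.sum() + eps)
    OF1 = 2 * OP * OR / (OP + OR + eps)
    return CP, CR, CF1, OP, OR, OF1
```

+The top-3 variants reported in the tables follow the same formulas after keeping only each image's three highest-scoring labels as predictions.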
+
+# 4.2 Implementation Details
+
+For the whole ADD-GCN framework, we use ResNet-101 [11] as the backbone. The channel dimension of $\mathbf{V}$ is 1024, and the nonlinear activation function LeakyReLU with a negative slope of 0.2 is adopted in SAM and D-GCN. During training, we adopt the data augmentation suggested in [4] to avoid over-fitting: the input image is randomly cropped and resized to $448 \times 448$ with random horizontal flips. To make the model converge quickly, we follow [2] and use the model trained on MS-COCO as the pre-trained model for Pascal VOC. We choose SGD as the optimizer, with a momentum of 0.9 and a weight decay of $10^{-4}$ . The batch size per GPU is 18. The initial learning rate is set to 0.5 for SAM/D-GCN and 0.05 for the backbone CNN. We train the model for 50 epochs in total, and the learning rate is reduced by a factor of 0.1 at epochs 30 and 40. During testing, we simply resize the input image to $512 \times 512$ for evaluation. All experiments are implemented in PyTorch [22].
+
+Table 1. Comparison of our ADD-GCN and other state-of-the-art methods on the MS-COCO dataset. Columns 2-8 report the "All" metrics; the last six columns report the top-3 metrics.
+
+| Method | mAP | CP | CR | CF1 | OP | OR | OF1 | CP | CR | CF1 | OP | OR | OF1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RARL [1] | - | - | - | - | - | - | - | 78.8 | 57.2 | 66.2 | 84.0 | 61.6 | 71.1 |
| RDAR [30] | - | - | - | - | - | - | - | 79.1 | 58.7 | 67.4 | 84.0 | 63.0 | 72.0 |
| Multi-Evidence [8] | - | 80.4 | 70.2 | 74.9 | 85.2 | 72.5 | 78.4 | 84.5 | 62.2 | 70.6 | 89.1 | 64.3 | 74.7 |
| ResNet-101 [11] | 79.7 | 82.7 | 67.4 | 74.3 | 86.4 | 71.8 | 78.4 | 85.9 | 60.5 | 71.0 | 90.2 | 64.2 | 75.0 |
| DecoupleNet [21] | 82.2 | 83.1 | 71.6 | 76.3 | 84.7 | 74.8 | 79.5 | - | - | - | - | - | - |
| ML-GCN [4] | 83.0 | 85.1 | 72.0 | 78.0 | 85.8 | 75.4 | 80.3 | 89.2 | 64.1 | 74.6 | 90.5 | 66.5 | 76.7 |
| SSGRL [2] | 83.8 | 89.9 | 68.5 | 76.8 | 91.3 | 70.8 | 79.7 | 91.9 | 62.5 | 72.7 | 93.8 | 64.1 | 76.2 |
| Ours | 85.2 | 84.7 | 75.9 | 80.1 | 84.9 | 79.4 | 82.0 | 88.8 | 66.2 | 75.8 | 90.3 | 68.5 | 77.9 |
+
+# 4.3 Comparison with the State of the Art
+
+To demonstrate the scalability and effectiveness of our proposed ADD-GCN, we conduct extensive experiments on three widely used benchmarks, i.e., MS-COCO [20], Pascal VOC 2007 [7], and Pascal VOC 2012 [7].
+
+MS-COCO. Microsoft COCO [20] was primarily built for object segmentation and detection, and has recently been widely used for multi-label recognition as well. It comprises a training set of 82,081 images and a validation set of 40,137 images. The dataset covers 80 common object categories, with about 2.9 object labels per image. The number of labels per image varies considerably, rendering MS-COCO more challenging. Since the labels of the test set are not available, we compare our performance with previous methods on the validation set.
+
+Table 1 shows the comparison between our ADD-GCN and other state-of-the-art methods. In particular, we compare with RARL [1], RDAR [30], Multi-Evidence [8], ResNet-101 [11], DecoupleNet [21], ML-GCN [4], and SSGRL [2]. Our ADD-GCN consistently outperforms the other state-of-the-art approaches in terms of OF1, CF1, and mAP, as well as several less important metrics. Although both ML-GCN and SSGRL also construct graphs for multi-label classification, our ADD-GCN outperforms ML-GCN by $2.2\%$ and SSGRL by $1.4\%$ in mAP. In addition, our ADD-GCN improves the baseline by $5.5\%$ . This demonstrates the superiority of our approach.
+
+VOC 2007. Pascal VOC 2007 [7] is a widely used multi-label dataset containing 9963 images of 20 common object categories, divided into train, validation, and test sets. For fair comparison, following previous works [2,4], we train our model on the trainval set (5011 images) and evaluate on the test set (4952 images). The evaluation metrics are the per-category Average Precision (AP) and the mean Average Precision (mAP).
+
+The comparison between our ADD-GCN and other methods is presented in Table 2. Our method consistently outperforms these methods by a clear margin and improves our baseline from $90.8\%$ to $96.0\%$ . Particularly, compared
+
+Table 2. Comparison of our ADD-GCN and other state-of-the-art methods on Pascal VOC 2007 dataset. The best results are marked as bold.
+
+| Method | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CNN-RNN [27] | 96.7 | 83.1 | 94.2 | 92.8 | 61.2 | 82.1 | 89.1 | 94.2 | 64.2 | 83.6 | 70.0 | 92.4 | 91.7 | 84.2 | 93.7 | 59.8 | 93.2 | 75.3 | 99.7 | 78.6 | 84.0 |
| RMIC [12] | 97.1 | 91.3 | 94.2 | 57.1 | 86.7 | 90.7 | 93.1 | 63.3 | 83.3 | 76.4 | 92.8 | 94.4 | 91.6 | 95.1 | 92.3 | 59.7 | 86.0 | 69.5 | 96.4 | 79.0 | 84.5 |
| RLSD [34] | 96.4 | 92.7 | 93.8 | 94.1 | 71.2 | 92.5 | 94.2 | 95.7 | 74.3 | 90.0 | 74.2 | 95.4 | 96.2 | 92.1 | 97.9 | 66.9 | 93.5 | 73.7 | 97.5 | 87.6 | 88.5 |
| VeryDeep [24] | 98.9 | 95.0 | 96.8 | 95.4 | 69.7 | 90.4 | 93.5 | 96.0 | 74.2 | 86.6 | 87.8 | 96.0 | 96.3 | 93.1 | 97.2 | 70.0 | 92.1 | 80.3 | 98.1 | 87.0 | 89.7 |
| ResNet-101 [11] | 99.1 | 97.3 | 96.2 | 94.7 | 68.3 | 92.9 | 95.9 | 94.6 | 77.9 | 89.9 | 85.1 | 94.7 | 96.8 | 94.3 | 98.1 | 80.8 | 93.1 | 79.1 | 98.2 | 91.1 | 90.8 |
| HCP [31] | 98.6 | 97.1 | 98.0 | 95.6 | 75.3 | 94.7 | 95.8 | 97.3 | 73.1 | 90.2 | 80.0 | 97.3 | 96.1 | 94.9 | 96.3 | 78.3 | 94.7 | 76.2 | 97.9 | 91.5 | 90.9 |
| RDAR [30] | 98.6 | 97.4 | 96.3 | 96.2 | 75.2 | 92.4 | 96.5 | 97.1 | 76.5 | 92.0 | 87.7 | 96.8 | 97.5 | 93.8 | 98.5 | 81.6 | 93.7 | 82.8 | 98.6 | 89.3 | 91.9 |
| FeV+LV [32] | 98.2 | 96.9 | 97.1 | 95.8 | 74.3 | 94.2 | 96.7 | 96.7 | 76.7 | 90.5 | 88.0 | 96.9 | 97.7 | 95.9 | 98.6 | 78.5 | 93.6 | 82.4 | 98.4 | 90.4 | 92.0 |
| RARL [1] | 98.6 | 97.1 | 97.1 | 95.5 | 75.6 | 92.8 | 96.8 | 97.3 | 78.3 | 92.2 | 87.6 | 96.9 | 96.5 | 93.6 | 98.5 | 81.6 | 93.1 | 83.2 | 98.5 | 89.3 | 92.0 |
| RCP [28] | 99.3 | 97.6 | 98.0 | 96.4 | 79.3 | 93.8 | 96.6 | 97.1 | 78.0 | 88.7 | 87.1 | 97.1 | 96.3 | 95.4 | 99.1 | 82.1 | 93.6 | 82.2 | 98.4 | 92.8 | 92.5 |
| ML-GCN [4] | 99.5 | 98.5 | 98.6 | 98.1 | 80.8 | 94.6 | 97.2 | 98.2 | 82.3 | 95.7 | 86.4 | 98.2 | 98.4 | 96.7 | 99.0 | 84.7 | 96.7 | 84.3 | 98.9 | 93.7 | 94.0 |
| SSGRL [2] | 99.7 | 98.4 | 98.0 | 97.6 | 85.7 | 96.2 | 98.2 | 98.8 | 82.0 | 98.1 | 98.7 | 98.8 | 98.7 | 97.0 | 99.0 | 86.9 | 98.1 | 85.8 | 99.0 | 93.7 | 95.0 |
| Ours | 99.8 | 99.0 | 98.4 | 99.0 | 86.7 | 98.1 | 98.5 | 98.3 | 85.8 | 98.3 | 88.9 | 98.8 | 99.0 | 97.4 | 99.2 | 88.3 | 98.7 | 90.7 | 99.5 | 97.0 | 96.0 |
+
+Table 3. Comparison of our ADD-GCN and other state-of-the-art methods on Pascal VOC 2012 dataset. The best results are marked as bold.
+
+| Methods | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RMIC [12] | 98.0 | 85.5 | 92.6 | 88.7 | 64.0 | 86.8 | 82.0 | 94.9 | 72.7 | 83.1 | 73.4 | 95.2 | 91.7 | 90.8 | 95.5 | 58.3 | 87.6 | 70.6 | 93.8 | 83.0 | 84.4 |
| VeryDeep [24] | 99.1 | 88.7 | 95.7 | 93.9 | 73.1 | 92.1 | 84.8 | 97.7 | 79.1 | 90.7 | 83.2 | 97.3 | 96.2 | 94.3 | 96.9 | 63.4 | 93.2 | 74.6 | 97.3 | 87.9 | 89.0 |
| HCP [31] | 99.1 | 92.8 | 97.4 | 94.4 | 79.9 | 93.6 | 89.8 | 98.2 | 78.2 | 94.9 | 79.8 | 97.8 | 97.0 | 93.8 | 96.4 | 74.3 | 94.7 | 71.9 | 96.7 | 88.6 | 90.5 |
| FeV+LV [32] | 98.4 | 92.8 | 93.4 | 90.7 | 74.9 | 93.2 | 90.2 | 96.1 | 78.2 | 89.8 | 80.6 | 95.7 | 96.1 | 95.3 | 97.5 | 73.1 | 91.2 | 75.4 | 97.0 | 88.2 | 89.4 |
| RCP [28] | 99.3 | 92.2 | 97.5 | 94.9 | 82.3 | 94.1 | 92.4 | 98.5 | 83.8 | 93.5 | 83.1 | 98.1 | 97.3 | 96.0 | 98.8 | 77.7 | 95.1 | 79.4 | 97.7 | 92.4 | 92.2 |
| SSGRL [2] | 99.7 | 96.1 | 97.7 | 96.5 | 86.9 | 95.8 | 95.0 | 98.9 | 88.3 | 97.6 | 87.4 | 99.1 | 99.2 | 97.3 | 99.0 | 84.8 | 98.3 | 85.8 | 99.2 | 94.1 | 94.8 |
| Ours | 99.8 | 97.1 | 98.6 | 96.8 | 89.4 | 97.1 | 96.5 | 99.3 | 89.0 | 97.7 | 87.5 | 99.2 | 99.1 | 97.7 | 99.1 | 86.3 | 98.8 | 87.0 | 99.3 | 95.4 | 95.5 |
+
+with the two other current state-of-the-art methods, ML-GCN [4] and SSGRL [2], the gains in overall mAP are $2.0\%$ and $1.0\%$ , respectively.
+
+VOC 2012. Pascal VOC 2012 [7] is another dataset widely used for multi-label image recognition; it consists of 11,540 trainval images and 10,991 test images covering the same 20 common object categories. For fair comparison with previous state-of-the-art methods, we train our model on the trainval set and evaluate on the test set.
+
+We present the AP of each category and the mAP over all categories on VOC 2012 in Table 3. Our ADD-GCN again achieves the best performance among state-of-the-art methods. Concretely, ADD-GCN obtains $95.5\%$ mAP, outperforming the state-of-the-art SSGRL by $0.7\%$ , and its AP for each category is higher than that of the other methods except for "horse". These results demonstrate the effectiveness of our framework.
+
+# 4.4 Ablation Studies
+
+In this section, we conduct ablation experiments on MS-COCO and VOC 2007.
+
+Evaluation of SAM and D-GCN. To investigate the contribution of each module in ADD-GCN, we separately apply SAM and D-GCN with certain adaptations upon a standard ResNet backbone. We evaluate the effectiveness of SAM by removing D-GCN and adding binary classifiers upon the output of SAM (V)
+
+Fig. 3. Evaluation of SAM and D-GCN on MS-COCO and VOC 2007: (a) comparisons on MS-COCO; (b) comparisons on VOC 2007.
+
+Table 4. The performance of different combinations of static and dynamic graph. "S": static graph and "D": dynamic graph. "P": we propagate information through the static and dynamic graph in a parallel way, and fuse them by either addition (add) or element-wise multiplication (mul) or concatenation (cat).
+
+| Methods | mAP (COCO) | OF1 (COCO) | CF1 (COCO) | mAP (VOC 2007) | OF1 (VOC 2007) | CF1 (VOC 2007) |
+| --- | --- | --- | --- | --- | --- | --- |
| ResNet-101 | 79.7 | 78.4 | 74.3 | 90.8 | 84.3 | 83.4 |
| S | 82.9 | 78.3 | 74.7 | 94.5 | 89.3 | 88.3 |
| D | 83.7 | 79.4 | 76.6 | 94.9 | 89.9 | 88.7 |
| P (add) | 84.0 | 79.4 | 76.9 | 94.6 | 88.8 | 88.2 |
| P (mul) | 83.7 | 80.8 | 78.5 | 94.6 | 89.6 | 88.5 |
| P (cat) | 83.3 | 80.0 | 76.9 | 94.9 | 89.7 | 88.8 |
| D→S | 84.5 | 81.4 | 79.3 | 95.0 | 90.1 | 88.8 |
| S→D | 85.2 | 82.0 | 80.1 | 96.0 | 91.0 | 89.9 |
+
+directly, while for evaluating the effectiveness of D-GCN, we simply replace SAM with a Conv-LReLU block. The results are shown in Fig. 3. As can be seen, on both MS-COCO and VOC 2007, SAM and D-GCN individually improve the baseline by large margins. Compared to the baseline, which directly learns classifiers upon global-pooled features, SAM first decomposes the feature map into content-aware category representations and trains classifiers upon them. The improvement from SAM shows that the decomposed representations are more discriminative. The results also show that D-GCN enhances the discriminative ability of features relative to the baseline. Combining SAM and D-GCN further boosts performance, as expected, since the two modules focus on different aspects. Specifically, the gains in mAP, OF1, and CF1 over the baseline are $5.5\%$, $3.6\%$, and $5.8\%$ on MS-COCO, and $5.2\%$, $6.7\%$, and $6.5\%$ on VOC 2007.
+
+Table 5. Comparison of different final representations.
+
+| Methods | mAP (COCO) | OF1 (COCO) | CF1 (COCO) | mAP (VOC 2007) | OF1 (VOC 2007) | CF1 (VOC 2007) |
+| --- | --- | --- | --- | --- | --- | --- |
| Sum | 84.5 | 81.5 | 79.5 | 94.7 | 89.4 | 88.4 |
| Avg | 84.5 | 81.5 | 79.2 | 94.8 | 89.6 | 88.6 |
| Max | 83.9 | 81.2 | 78.8 | 94.7 | 89.6 | 88.8 |
| Bi | 85.2 | 82.0 | 80.1 | 96.0 | 91.0 | 89.9 |
+
+Table 6. Evaluation of activation map generation.
+
+| Methods | mAP (COCO) | OF1 (COCO) | CF1 (COCO) | mAP (VOC 2007) | OF1 (VOC 2007) | CF1 (VOC 2007) |
+| --- | --- | --- | --- | --- | --- | --- |
| GAP→cls | 85.0 | 82.0 | 79.8 | 94.8 | 89.7 | 88.5 |
| GMP→cls | 84.1 | 80.9 | 79.0 | 93.9 | 89.1 | 87.7 |
| cls→GMP | 85.2 | 82.0 | 80.1 | 96.0 | 91.0 | 89.9 |
+
+Static graph vs. dynamic graph. We investigate the effects of the static graph and the dynamic graph in D-GCN. Results are shown in Table 4. First, we study the case with only one graph. Both the static and the dynamic graph achieve better performance than the ResNet-101 baseline, and the dynamic graph performs better on both MS-COCO and VOC 2007. These results show that modeling local (i.e., image-level) category-aware dependencies is more effective than modeling coarse label dependencies over the whole dataset. To further explore whether the static graph is complementary to the dynamic graph, we combine them in the different ways shown in Table 4, where "S" stands for the static graph, "D" denotes the dynamic graph, and "P" denotes propagating information through the static and dynamic graphs in parallel and fusing them by addition (add), element-wise multiplication (mul), or concatenation (cat). Among all settings, "S→D" achieves the best performance.
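
As a rough sketch of the best-performing "S→D" setting, the toy numpy code below propagates category representations through a static graph and then through a dynamic graph predicted from the features. The specific way the dynamic adjacency is predicted here (a learned map of local plus global context, row-softmaxed) and all function, weight, and shape names are our illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sequential_sd(V, A_s, W_s, W_d, W_a):
    """Sketch of 'S->D' (Table 4): category representations V (C, d) pass
    through a fixed static graph A_s (C, C), then through a dynamic graph
    A_d predicted from the features themselves. W_s, W_d are state-update
    weights; W_a maps concatenated local+global context to the dynamic
    adjacency. All of this is an illustrative simplification."""
    H = np.maximum(A_s @ V @ W_s, 0)                 # static-graph step + ReLU
    g = np.tile(H.mean(axis=0, keepdims=True), (H.shape[0], 1))  # global context
    A_d = softmax(np.concatenate([H, g], axis=1) @ W_a, axis=-1)  # (C, C), rows sum to 1
    Z = np.maximum(A_d @ H @ W_d, 0)                 # dynamic-graph step + ReLU
    return Z, A_d
```

Propagating through the static graph first gives the dynamic graph a smoothed, dataset-level prior to refine, which is one plausible reading of why "S→D" beats "D→S" in Table 4.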
+
+Final representations. To demonstrate the effectiveness and rationality of category-specific feature representations, we compare them with image-level feature representations obtained by aggregating the category-specific representations into a single image feature vector. For aggregation, summation (Sum), averaging (Avg), and maximum (Max) are applied to the category-specific representations $\mathbf{Z}$ output by D-GCN. "Bi" means that we use a binary classifier on each category-specific representation to decide whether the corresponding class is present. Table 5 shows that the category-specific representations outperform all aggregated representations on all metrics. We therefore believe that decomposing the feature map into category-specific representations is an effective way to represent an input image for multi-label recognition.
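
The aggregation baselines and the "Bi" setting above can be sketched as follows (numpy; the function names, shapes, and the linear form of the per-category classifiers are illustrative assumptions):

```python
import numpy as np

def aggregate(Z: np.ndarray, mode: str) -> np.ndarray:
    """Fuse category-specific representations Z of shape (C, d) into one
    image-level vector (d,) -- the Sum/Avg/Max baselines of Table 5."""
    return {"sum": Z.sum(axis=0),
            "avg": Z.mean(axis=0),
            "max": Z.max(axis=0)}[mode]

def per_category_scores(Z: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """The 'Bi' setting: one binary classifier (w_c, b_c) per category is
    applied to its own representation z_c, giving one logit per class
    instead of classifying a single pooled vector."""
    return (Z * W).sum(axis=1) + b   # (C,) logits
```

The contrast is that `aggregate` discards which category a representation belongs to before classification, while `per_category_scores` keeps each class tied to its own evidence.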
+
+Evaluation of activation map generation. As mentioned in Section 3.2, we first adopt the standard CAM as the baseline. Here we compare the final performance of ADD-GCN between our method and standard CAM. Specifically, CAM can be denoted as "$\mathrm{GAP} \rightarrow \mathrm{cls}$" or "$\mathrm{GMP} \rightarrow \mathrm{cls}$", while ours is "$\mathrm{cls} \rightarrow \mathrm{GMP}$". "$\mathrm{GAP} \rightarrow \mathrm{cls}$" is equivalent to "$\mathrm{cls} \rightarrow \mathrm{GAP}$" since the classifier is a linear operator. The results are shown in Table 6. Comparing GAP (i.e., $\mathrm{GAP} \rightarrow \mathrm{cls}$) and GMP (i.e., $\mathrm{GMP} \rightarrow \mathrm{cls}$), we conjecture that GMP loses much information because it identifies only one discriminative part. However, our adaptation "$\mathrm{cls} \rightarrow \mathrm{GMP}$" outperforms "$\mathrm{GAP} \rightarrow \mathrm{cls}$", which indicates that applying GMP after the classifier may compensate for the disadvantages that "$\mathrm{GMP} \rightarrow \mathrm{cls}$" brings.
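
The three orderings can be illustrated with a toy linear classifier; since a 1×1-conv classifier is linear, pooling and classification commute for GAP, as noted above. All names and shapes below are illustrative:

```python
import numpy as np

# Toy comparison of the activation-map orderings with a linear
# (1x1-conv-style) classifier Wc over C classes.
rng = np.random.default_rng(0)
d, h, w, C = 16, 7, 7, 4
F = rng.standard_normal((d, h * w))   # feature map, flattened spatially
Wc = rng.standard_normal((C, d))      # linear classifier

maps = Wc @ F                         # per-location class scores, (C, h*w)
gap_cls = Wc @ F.mean(axis=1)         # "GAP -> cls": pool, then classify
cls_gap = maps.mean(axis=1)           # "cls -> GAP": classify, then pool
cls_gmp = maps.max(axis=1)            # "cls -> GMP": the adopted variant

# Linearity makes the first two identical, as the text notes.
assert np.allclose(gap_cls, cls_gap)
```

The only ordering that changes the result is the one involving max-pooling, which is exactly the non-linear step the ablation in Table 6 probes.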
+
+Fig. 4. Visualization results of category-specific activation maps on MS-COCO.
+
+# 4.5 Visualization
+
+In this section, we visualize examples of the category-specific activation maps and the dynamic correlation matrix $\mathbf{A}_d$ to illustrate whether SAM can locate semantic targets and what relations the dynamic graph has learned, respectively.
+
+Visualization of category-specific activation maps. We visualize original images with their corresponding category-specific activation maps to illustrate the ability of our SAM module to capture the semantic region of each category appearing in the image. Some examples are shown in Fig. 4; each row presents the original image, the corresponding category-specific activation maps, and the final score of each category. For the categories present in an image, our model locates their semantic regions accurately. In contrast, the activation maps of categories absent from the image show low activation. For example, the image in the second row is labeled "person", "tie", and "chair", and our ADD-GCN accurately highlights the related semantic regions of these three classes. Moreover, the final scores demonstrate that the category-aware representations are discriminative enough to be accurately recognized by our method.
+
+Visualization of the dynamic graph. As shown in Fig. 5, we visualize an original image with its corresponding dynamic correlation matrix $\mathbf{A}_d$ to illustrate what relations D-GCN has learned. For the input image in Fig. 5(a), the ground-truth labels are "car", "dog", and "person". Fig. 5(b) visualizes the $\mathbf{A}_d$ of this image. We find that $\mathbf{A}_d^{car;dog}$ and $\mathbf{A}_d^{car;person}$ rank near the top (about the top
+
+Fig. 5. Visualization of an example and what its dynamic correlation matrix $\mathbf{A}_d$ looks like on Pascal VOC 2007: (a) input image; (b) dynamic matrix.
+
+$10\%$) in the row of "car". This means that "dog" and "person" are the categories most relevant to "car" in this image. Similar results can be found in the rows of "dog" and "person". From this visualization of the dynamic graph, we believe that D-GCN has the capacity to capture such semantic relations for a specific input image.
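
Reading the dynamic matrix this way amounts to ranking a row of $\mathbf{A}_d$. A toy numpy sketch, where the matrix values and class list are made up for illustration and are not taken from the paper:

```python
import numpy as np

def top_relations(A_d: np.ndarray, classes: list, query: str, k: int = 2):
    """Rank the row of A_d for `query` and return the k most related
    other categories -- the reading applied to Fig. 5(b)."""
    row = A_d[classes.index(query)]
    order = np.argsort(-row)                      # descending by correlation
    return [classes[j] for j in order if classes[j] != query][:k]

classes = ["car", "dog", "person", "sofa"]
A_d = np.array([[0.4, 0.3, 0.2, 0.1],             # illustrative values only
                [0.3, 0.4, 0.2, 0.1],
                [0.2, 0.3, 0.4, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
print(top_relations(A_d, classes, "car"))         # -> ['dog', 'person']
```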
+
+# 5 Conclusion
+
+In this work, we propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) for multi-label image recognition. ADD-GCN first decomposes the input feature map into category-aware representations by the Semantic Attention Module (SAM), and then models the relations of these representations for final recognition by a novel dynamic GCN which captures content-aware category relations for each image. Extensive experiments on public benchmarks (MS-COCO, Pascal VOC 2007, and Pascal VOC 2012) demonstrate the effectiveness and rationality of our ADD-GCN.
+
+Acknowledgements. This work is partially supported by the National Natural Science Foundation of China (U1813218, U1713208), the Science and Technology Service Network Initiative of the Chinese Academy of Sciences (KFJ-STS-QYZX-092), the Guangdong Special Support Program (2016TX03X276), the Shenzhen Basic Research Program (JSGG20180507182100698, CXB201104220032A), and the Shenzhen Institute of Artificial Intelligence and Robotics for Society. We also appreciate Xiaoping Lai and Hao Xing from VIPShop Inc., who cooperated with us on this project and provided validation fashion data.
+
+# References
+
+1. Chen, T., Wang, Z., Li, G., Lin, L.: Recurrent attentional reinforcement learning for multi-label image recognition. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
+2. Chen, T., Xu, M., Hui, X., Wu, H., Lin, L.: Learning semantic-specific graph representation for multi-label image recognition. arXiv preprint arXiv:1908.07325 (2019)
+3. Chen, Y., Rohrbach, M., Yan, Z., Shuicheng, Y., Feng, J., Kalantidis, Y.: Graph-based global reasoning networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 433-442 (2019)
+4. Chen, Z.M., Wei, X.S., Wang, P., Guo, Y.: Multi-label image recognition with graph convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5177-5186 (2019)
+5. Cheng, M.M., Zhang, Z., Lin, W.Y., Torr, P.: Bing: Binarized normed gradients for objectness estimation at 300fps. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3286-3293 (2014)
+6. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)
+7. Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The Pascal visual object classes (voc) challenge. International journal of computer vision 88(2), 303-338 (2010)
+8. Ge, W., Yang, S., Yu, Y.: Multi-evidence filtering and fusion for multi-label classification, object detection and semantic segmentation based on weakly supervised learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1277-1286 (2018)
+9. Ge, Z., Mahapatra, D., Sedai, S., Garnavi, R., Chakravorty, R.: Chest x-rays classification: A multi-label and fine-grained problem. arXiv preprint arXiv:1807.07247 (2018)
+10. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. pp. 1440-1448 (2015)
+11. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
+12. He, S., Xu, C., Guo, T., Xu, C., Tao, D.: Reinforced multi-label image classification by exploring curriculum. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
+13. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735-1780 (1997)
+14. Jaderberg, M., Simonyan, K., Zisserman, A., et al.: Spatial transformer networks. In: Advances in neural information processing systems. pp. 2017-2025 (2015)
+15. Jain, H., Prabhu, Y., Varma, M.: Extreme multi-label loss functions for recommendation, tagging, ranking & other missing label applications. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 935-944. ACM (2016)
+16. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (ICLR) (2017)
+17. Li, Q., Qiao, M., Bian, W., Tao, D.: Conditional graphical lasso for multi-label image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2977-2986 (2016)
+
+18. Li, X., Zhao, F., Guo, Y.: Multi-label image classification with a probabilistic label enhancement model. In: Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence. pp. 430-439. UAI'14, AUAI Press, Arlington, Virginia, United States (2014), http://dl.acm.org/citation.cfm?id=3020751.3020796
+19. Li, Y., Huang, C., Loy, C.C., Tang, X.: Human attribute recognition by deep hierarchical contexts. In: European Conference on Computer Vision. pp. 684-700. Springer (2016)
+20. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014)
+21. Liu, L., Guo, S., Huang, W., Scott, M.: Decoupling category-wise independence and relevance with self-attention for multi-label image classification. In: ICASSP 2019. pp. 1682-1686 (05 2019). https://doi.org/10.1109/ICASSP.2019.8683665
+22. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. In: NIPS Workshop (2017)
+23. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015)
+24. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+25. Tommasi, T., Patricia, N., Caputo, B., Tuytelaars, T.: A deeper look at dataset bias. In: Domain adaptation in computer vision applications, pp. 37-55. Springer (2017)
+26. Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR 2011. pp. 1521-1528. IEEE (2011)
+27. Wang, J., Yang, Y., Mao, J., Huang, Z., Huang, C., Xu, W.: Cnn-rnn: A unified framework for multi-label image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2285-2294 (2016)
+28. Wang, M., Luo, C., Hong, R., Tang, J., Feng, J.: Beyond object proposals: Random crop pooling for multi-label image recognition. IEEE Transactions on Image Processing 25(12), 5678-5688 (2016)
+29. Wang, Y., He, D., Li, F., Long, X., Zhou, Z., Ma, J., Wen, S.: Multi-label classification with label graph superimposing. arXiv preprint arXiv:1911.09243 (2019)
+30. Wang, Z., Chen, T., Li, G., Xu, R., Lin, L.: Multi-label image recognition by recurrently discovering attentional regions. In: Proceedings of the IEEE international conference on computer vision. pp. 464-472 (2017)
+31. Wei, Y., Xia, W., Lin, M., Huang, J., Ni, B., Dong, J., Zhao, Y., Yan, S.: Hcp: A flexible cnn framework for multi-label image classification. IEEE transactions on pattern analysis and machine intelligence 38(9), 1901-1907 (2015)
+32. Yang, H., Tianyi Zhou, J., Zhang, Y., Gao, B.B., Wu, J., Cai, J.: Exploit bounding box annotations for multi-label object recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 280-288 (2016)
+33. Yang, X., Li, Y., Luo, J.: Pinterest board recommendation for twitter users. In: Proceedings of the 23rd ACM international conference on Multimedia. pp. 963-966. ACM (2015)
+34. Zhang, J., Wu, Q., Shen, C., Zhang, J., Lu, J.: Multilabel image classification with regional latent semantic dependencies. IEEE Transactions on Multimedia 20(10), 2801-2813 (2018)
+
+35. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Computer Vision and Pattern Recognition (2016)
+36. Zhu, F., Li, H., Ouyang, W., Yu, N., Wang, X.: Learning spatial regularization with image-level supervisions for multi-label image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5513-5522 (2017)
+37. Zitnick, C.L., Dollar, P.: Edge boxes: Locating object proposals from edges. In: European conference on computer vision. pp. 391-405. Springer (2014)
\ No newline at end of file
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/images.zip b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e8e03aa43d4b90c3c8a2eb09d4bf4239435d6941
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc277d81e2c5fd6424d5ba05134e930ae918efdb2d2ea218150c8f96bef977d1
+size 721897
diff --git a/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/layout.json b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f3c490d77d56c5f2b1272fc44e4875cf0f1525b
--- /dev/null
+++ b/attentiondrivendynamicgraphconvolutionalnetworkformultilabelimagerecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f32fe2b5429d2963c78185d83d4183224fa4d39d96390c0b831b4a0cc4686c6e
+size 450515
diff --git a/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_content_list.json b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cfea4c310c903f48be44030dc95435650b9fe312
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1ee4169509f18381281e7a07413be9544f25cb1f979be19afdb094a4c7c671e
+size 88612
diff --git a/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_model.json b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c58324f1ed3e7b00f2011ba383d334eb4279e933
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b207e8b65166470d9755bcfbc7aa5261c732581edf8cb21a657db96a53a6b71
+size 110143
diff --git a/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_origin.pdf b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..747fb832fad028c36b48809e00b7e02c65bcdc93
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/f4bc1db9-83ee-40cc-817b-a8c25329ecd7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c50c264f65bac35629648864cfaa0271fb6b2645682d49b4c9488ae3be2b9450
+size 4291155
diff --git a/attentionguidedanomalylocalizationinimages/full.md b/attentionguidedanomalylocalizationinimages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5afdc6af5e14896193428fd03d4bfe0282ec43c
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/full.md
@@ -0,0 +1,275 @@
+# Attention Guided Anomaly Localization in Images
+
+Shashanka Venkataramanan$^{\star}$[0000-0003-1096-1342], Kuan-Chuan Peng$^{\dagger}$[0000-0002-2682-9912], Rajat Vikram Singh$^{\ddagger}$[0000-0002-1416-8344], and Abhijit Mahalanobis$^{\star}$[0000-0002-2782-8655]
+
+$^{\star}$ Center for Research in Computer Vision, University of Central Florida, Orlando, FL
+ ${}^{\dagger}$ Mitsubishi Electric Research Laboratories, Cambridge, MA
+ ${}^{\ddagger}$ Siemens Corporate Technology, Princeton, NJ
+shashankv@Knights.ucf.edu, kpeng@merl.com, singh.rajat@siemens.com, amahalan@crcv.ucf.edu
+
+Abstract. Anomaly localization is an important problem in computer vision which involves localizing anomalous regions within images with applications in industrial inspection, surveillance, and medical imaging. This task is challenging due to the small sample size and pixel coverage of the anomaly in real-world scenarios. Most prior works need to use anomalous training images to compute a class-specific threshold to localize anomalies. Without the need of anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information. In the unsupervised setting, we propose an attention expansion loss where we encourage CAVGA to focus on all normal regions in the image. Furthermore, in the weakly-supervised setting we propose a complementary guided attention loss, where we encourage the attention map to focus on all normal regions while minimizing the attention map corresponding to anomalous regions in the image. CAVGA outperforms the state-of-the-art (SOTA) anomaly localization methods on MVTec Anomaly Detection (MVTAD), modified ShanghaiTech Campus (mSTC) and Large-scale Attention based Glaucoma (LAG) datasets in the unsupervised setting and when using only $2\%$ anomalous images in the weakly-supervised setting. CAVGA also outperforms SOTA anomaly detection methods on the MNIST, CIFAR-10, Fashion-MNIST, MVTAD, mSTC and LAG datasets.
+
+Keywords: guided attention, anomaly localization, convolutional adversarial variational autoencoder
+
+# 1 Introduction
+
+Recognizing whether an image is homogeneous with its previously observed distribution or whether it belongs to a novel or anomalous distribution has been identified as an important problem [5]. In this work, we focus on a related task, anomaly localization in images, which involves segmenting the anomalous regions
+
+
+(i) CAVGA main idea
+Fig. 1: (i) CAVGA uses the proposed complementary guided attention loss to encourage the attention map to cover the entire normal regions while suppressing the attention map corresponding to anomalous class in the training image. This enables the trained network to generate the anomalous attention map to localize the anomaly better at testing (ii) CAVGA's improvement over SOTA in the form of (number of outperforming/total categories; improvement $(\%)$ in its metric)
+
+| dataset | task | improvement |
+| --- | --- | --- |
| MVTAD [5] | l | (9/15; 4~85%) |
| MVTAD [5] | d | (9/15; 2~30%) |
| mSTC [31] | l | (7/12; 2~42%) |
| mSTC [31] | d | (8/12; 1~38%) |
| LAG [29] | l | (1/1; 16%) |
| LAG [29] | d | (1/1; 1.1%) |
| MNIST [27] | d | (8/10; 0.1~2.5%) |
| CIFAR-10 [25] | d | (7/10; 3~31%) |
| F-MNIST [57] | d | (8/10; 2~24%) |
+$l$: localization; $d$: detection; F-MNIST: Fashion-MNIST [57]
+
+- metric for $l$: IoU
+- metric for $d$ on MVTAD, mSTC, and LAG: classification accuracy
+- metric for $d$ on MNIST, CIFAR-10, and Fashion-MNIST: area under ROC curve
+
+(ii) improvement summary
+
+within them. Anomaly localization has been applied in industrial inspection settings to segment defective product parts [5], in surveillance to locate intruders [38], and in medical imaging to segment tumors in brain MRI or glaucoma in retina images [4, 29]. There has been increasing work on segmenting potential anomalous regions in images, as acknowledged in [13].
+
+Existing state-of-the-art (SOTA) anomaly localization methods [6, 47] are based on deep learning. However, developing deep learning based algorithms for this task can be challenging due to the small pixel coverage of the anomaly and the lack of suitable data, since images with anomalies are rarely available in real-world scenarios [5]. Existing SOTA methods tackle this challenge using autoencoders [15, 47] and GAN based approaches [3, 43, 59], which use a thresholded pixel-wise difference between the input and reconstructed image to localize anomalies. However, these methods need to determine class-specific thresholds using anomalous training images, which can be unavailable in real-world scenarios.
+
+To tackle these drawbacks of using anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), an unsupervised anomaly localization method which requires no anomalous training images. CAVGA comprises a convolutional latent variable to preserve the spatial relation between the input and latent variable. Since real-world applications may have access to only limited training data [5], we propose to localize the anomalies by using supervision on attention maps. This
+
+is motivated by the finding in [28] that attention based supervision can alleviate the need for large amounts of training data. Intuitively, without any prior knowledge of the anomaly, humans need to look at the entire image to identify the anomalous regions. Based on this idea, we propose an attention expansion loss where we encourage the network to generate an attention map that focuses on all normal regions of the image.
+
+Since annotating segmentation training data can be laborious [22], when the annotator provides a few anomalous training images without ground-truth segmentation of the anomalous regions, we extend CAVGA to a weakly-supervised setting. Here, we introduce a classifier in CAVGA and propose a complementary guided attention loss computed only for the normal images correctly predicted by the classifier. Using this complementary guided attention loss, we expand the normal attention but suppress the anomalous attention on the normal image, where normal/anomalous attention represents the areas affecting the classifier's normal/anomalous prediction identified by existing network visualization methods (e.g. Grad-CAM [49]). Fig. 1 (i) (a) illustrates our attention mechanism during training, and Fig. 1 (i) (b) demonstrates that the resulting normal attention and anomalous attention on the anomalous testing images are visually complementary, which is consistent with our intuition. Furthermore, Fig. 1 (ii) summarizes CAVGA's ability to outperform SOTA methods in anomaly localization on industrial inspection (MVTAD) [5], surveillance (mSTC) [31], and medical imaging (LAG) [29] datasets. We also show CAVGA's ability to outperform SOTA methods in anomaly detection on common benchmarks.
+
+To the best of our knowledge, we are the first in anomaly localization to propose an end-to-end trainable framework with attention guidance which explicitly enforces the network to learn representations from the entire normal image. As compared to the prior works, our proposed approach CAVGA needs no anomalous training images to determine a class-specific threshold to localize the anomaly. Our contributions are:
+
+- An attention expansion loss $(L_{ae})$ , where we encourage the network to focus on the entire normal images in the unsupervised setting.
+- A complementary guided attention loss $(L_{cga})$ , which we use to minimize the anomalous attention and simultaneously expand the normal attention for the normal images correctly predicted by the classifier.
+- New SOTA: In anomaly localization, CAVGA outperforms SOTA methods on the MVTAD and mSTC datasets in IoU and mean Area under ROC curve (AuROC) and also outperforms SOTA anomaly localization methods on LAG dataset in IoU. We also show CAVGA's ability to outperform SOTA methods for anomaly detection on the MVTAD, mSTC, LAG, MNIST [27], CIFAR-10 [25] and Fashion-MNIST [57] datasets in classification accuracy.
+
+# 2 Related Works
+
+Often used interchangeably, the terms anomaly localization and anomaly segmentation involve pixel-accurate segmentation of anomalous regions within an
+
+Table 1: Comparison between CAVGA and other anomaly localization methods in the unsupervised setting in terms of the working properties. Among all the listed methods, only CAVGA satisfies all the listed properties
+
+| Does the method satisfy each property? | [3, 48], [6, 43] | [4] | [47] | [54], [50] | [13, 32], [2] | CAVGA |
+| --- | --- | --- | --- | --- | --- | --- |
| not using anomalous training images | N | N | Y | Y | Y | Y |
| localize multiple modes of anomalies | Y | N | N | N | Y | Y |
| pixel (not patch) based localization | Y | Y | N | Y | Y | Y |
| use convolutional latent variable | N | Y | N | N | N | Y |
+
+image [5]. These techniques have been applied in industrial inspection settings to segment defective product parts [5], in medical imaging to segment glaucoma in retina images [29], etc. Image-based anomaly localization has not been studied as thoroughly as anomaly detection, where methods such as [3, 4, 6, 43, 48] employ a thresholded pixel-wise difference between the input and reconstructed image to segment the anomalous regions. [47] proposes an inpainter-detector network for patch-based localization in images. [13] proposes gradient descent on a regularized autoencoder, while Liu et al. [32] (denoted as ADVAE) generate gradient-based attention maps from the latent space of the trained model. We compare CAVGA with the existing methods relevant to unsupervised anomaly localization in Table 1 and show that, among the listed methods, only CAVGA satisfies all the listed properties.
+
+Anomaly detection involves determining whether an image is normal or anomalous [3]. One-class classification and anomaly detection are related to novelty detection [41], which has been widely studied in computer vision [3, 20, 35, 37, 53] and applied to video analysis [10], remote sensing [36], etc. With the advance of GANs [17], SOTA methods perform anomaly detection by generating realistic normal images during training [21, 22, 42, 46, 48]. [12] proposes to search the latent space of the generator for detecting anomalies. [41] introduces a latent-space-sampling-based network with information-negative mining, while [30] proposes a normality score function based on a capsule network's activation and reconstruction error. [2] proposes a deep autoencoder that learns the distribution of the latent representation through an autoregressive procedure. Unlike [7, 11, 44, 55], where anomalous training images are used for anomaly detection, CAVGA does not need anomalous training images.
+
+# 3 Proposed Approach: CAVGA
+
+# 3.1 Unsupervised Approach: $\mathbf{CAVGA}_u$
+
+Fig. 2 (a) illustrates CAVGA in the unsupervised setting (denoted as $\mathrm{CAVGA}_u$ ). $\mathrm{CAVGA}_u$ comprises a convolutional latent variable to preserve the spatial information between the input and latent variable. Since attention maps obtained from feature maps illustrate the regions of the image responsible for specific
+
+
+
+
+Fig. 2: (a) The framework of $\mathrm{CAVGA}_u$ where the attention expansion loss $L_{ae}$ guides the attention map $A$ computed from the latent variable $z$ to cover the entire normal image. (b) Illustration of $\mathrm{CAVGA}_w$ with the complementary guided attention loss $L_{cga}$ to minimize the anomalous attention $A_x^{c_a}$ and expand the normal attention $A_x^{c_n}$ for the normal images correctly predicted by the classifier
+
+activation of neurons in the feature maps [58], we propose an attention expansion loss such that the feature representation of the latent variable encodes all the normal regions. This loss encourages the attention map generated from the latent variable to cover the entire normal training image as illustrated in Fig. 1 (i) (a). During testing, we localize the anomaly from the areas of the image that the attention map does not focus on.
+
+Convolutional latent variable Variational Autoencoder (VAE) [23] is a generative model widely used for anomaly detection [24, 40]. The loss function for training a vanilla VAE can be formulated as:
+
+$$
+L = L _ {R} (x, \hat {x}) + K L \left(q _ {\phi} (z | x) | | p _ {\theta} (z | x)\right), \tag {1}
+$$
+
+where $L_{R}(x,\hat{x}) = \frac{-1}{N}\sum_{i = 1}^{N}\left[x_{i}\log (\hat{x}_{i}) + (1 - x_{i})\log (1 - \hat{x}_{i})\right]$ is the reconstruction loss between the input $(x)$ and reconstructed images $(\hat{x})$ , and $N$ is the total number of images. The posterior $p_{\theta}(z|x)$ is modeled using a standard Gaussian distribution prior $p(z)$ with the help of the Kullback-Leibler (KL) divergence through $q_{\phi}(z|x)$ . Since the vanilla VAE results in blurry reconstructions [26], we use a discriminator $(D(\cdot))$ to improve the stability of training and generate sharper reconstructed images $\hat{x}$ via adversarial learning [34], formulated as follows:
+
+$$
+L_{adv} = -\frac{1}{N} \sum_{i = 1}^{N} \left[\log\left(D\left(x_{i}\right)\right) + \log\left(1 - D\left(\hat{x}_{i}\right)\right)\right] \tag{2}
+$$
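As an illustration, the two losses in Eqs. (1)-(2) can be sketched in plain Python over flattened pixel lists; the function names and list-based inputs are our own simplifications of what would normally be framework tensor operations:

```python
import math

def reconstruction_loss(x, x_hat, eps=1e-7):
    # Binary cross-entropy between input pixels x and reconstructions x_hat,
    # averaged over the list, matching L_R in Eq. (1).
    n = len(x)
    return -sum(xi * math.log(xh + eps) + (1 - xi) * math.log(1 - xh + eps)
                for xi, xh in zip(x, x_hat)) / n

def kl_divergence(mu, log_var):
    # Closed-form KL(q_phi(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    # the standard term used when training a vanilla VAE.
    return -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, log_var))

def adversarial_loss(d_real, d_fake, eps=1e-7):
    # Eq. (2): the discriminator should score real images D(x) high
    # and reconstructions D(x_hat) low.
    n = len(d_real)
    return -sum(math.log(dr + eps) + math.log(1 - df + eps)
                for dr, df in zip(d_real, d_fake)) / n
```

A perfect reconstruction and a posterior matching the prior drive both $L_R$ and the KL term to (near) zero, while any mismatch increases them.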
+
+Unlike traditional autoencoders [6, 18], where the latent variable is flattened, we follow [4] and use a convolutional latent variable to preserve the spatial relation between the input and the latent variable.
+
+**Attention expansion loss $L_{ae}$.** The main contribution of our work is the use of supervision on attention maps to spatially localize the anomaly in the image. Most methods [3, 48, 53] localize the anomaly via a thresholded pixel-wise difference between the reconstructed image and the input image, where the threshold is determined using anomalous training images. In contrast, $\mathrm{CAVGA}_u$ learns to localize the anomaly using an attention map obtained through end-to-end training, without needing any anomalous training images. We use the feature representation of the latent variable $z$ to compute the attention map $A$. $A$ is computed using Grad-CAM [49] such that $A_{i,j} \in [0,1]$, where $A_{i,j}$ is the $(i,j)$ element of $A$.
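As a rough sketch of how an attention map can be derived from the latent feature maps in the spirit of Grad-CAM [49] (this is our own minimal pure-Python illustration, not the authors' implementation; a real version would obtain the gradients via framework autograd):

```python
def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: K channels, each an HxW nested list.
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel weights = globally average-pooled gradients (Grad-CAM style).
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum of feature maps followed by a ReLU.
    cam = [[max(0.0, sum(wk * feature_maps[k][i][j] for k, wk in enumerate(weights)))
            for j in range(w)] for i in range(h)]
    # Normalize to [0, 1] so each A_ij can be thresholded.
    peak = max(max(row) for row in cam) or 1.0
    return [[v / peak for v in row] for row in cam]
```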
+
+Intuitively, $A$ obtained from feature maps focuses on regions of the image according to the activation of neurons in the feature maps and their respective importance [58, 60]. Lacking prior knowledge about the anomaly, humans generally need to look at the entire image to identify anomalous regions. We use this notion to learn the feature representation of the entire normal image by proposing an attention expansion loss, which encourages the network to generate an attention map covering all the normal regions. The per-image attention expansion loss $L_{ae,1}$ is defined as:
+
+$$
+L _ {a e, 1} = \frac {1}{| A |} \sum_ {i, j} \left(1 - A _ {i, j}\right) \tag {3}
+$$
+
+where $|A|$ is the total number of elements in $A$. The final attention expansion loss $L_{ae}$ is the average of $L_{ae,1}$ over the $N$ images. Since attention mechanisms typically locate only the most salient regions of the image [29], which generally do not cover the entire image, we use $L_{ae}$ as additional supervision so that the trained network generates an attention map covering all the normal regions. Fig. 1 (i) (a) shows that training $\mathrm{CAVGA}_u$ only with adversarial learning $(L_{adv} + L)$, i.e. without $L_{ae}$, does not encode all the normal regions into the latent variable, so the attention map fails to cover the entire image; adding $L_{ae}$ overcomes this. Furthermore, supervising the attention maps prevents the trained model from making inferences based on incorrect areas and alleviates the need for a large amount of training data, as shown in [28], which is not explicitly enforced in existing methods [3, 6, 47].
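Eq. (3) can be sketched directly; the function name is our own illustrative choice:

```python
def attention_expansion_loss(attention_map):
    # Eq. (3): penalize every pixel the attention map fails to cover,
    # pushing A toward covering the entire normal image.
    flat = [a for row in attention_map for a in row]
    return sum(1.0 - a for a in flat) / len(flat)
```

The loss is 0 when the attention map covers the whole image ($A_{i,j} = 1$ everywhere) and grows as coverage shrinks.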
+
+We form the final objective function $L_{final}$ below:
+
+$$
+L _ {f i n a l} = w _ {r} L + w _ {a d v} L _ {a d v} + w _ {a e} L _ {a e}, \tag {4}
+$$
+
+where $w_{r}$ , $w_{adv}$ , and $w_{ae}$ are empirically set as 1, 1, and 0.01 respectively.
+
+During testing, we feed an image $x_{test}$ into the encoder followed by the decoder, which reconstructs an image $\hat{x}_{test}$. As defined in [48], we compute the pixel-wise difference between $\hat{x}_{test}$ and $x_{test}$ as the anomalous score $s_a$. Intuitively, if $x_{test}$ is drawn from the learnt distribution of $z$, then $s_a$ is small. Without using any anomalous training images in the unsupervised setting, we normalize $s_a$ to $[0,1]$ and empirically set 0.5 as the threshold to detect an image as anomalous. The attention map $A_{test}$ is computed from $z$ using Grad-CAM and inverted ( $\mathbf{1} - A_{test}$ ) to obtain an anomalous attention map which localizes the anomaly. Here, $\mathbf{1}$ is a matrix of ones with the same dimensions as $A_{test}$. We empirically choose 0.5 as the threshold on the anomalous attention map to evaluate the localization performance.
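The test-time steps can be sketched as follows (illustrative helpers; the normalization of $s_a$ to $[0,1]$ over the test set is omitted here for brevity):

```python
def anomalous_score(x, x_hat):
    # Mean pixel-wise reconstruction difference, as in [48];
    # small when x_test matches the learnt normal distribution.
    return sum(abs(a - b) for a, b in zip(x, x_hat)) / len(x)

def localize(attention_map, threshold=0.5):
    # Invert the normal attention map (1 - A) and threshold at 0.5
    # to obtain a binary anomalous-region mask.
    return [[1 if (1.0 - a) > threshold else 0 for a in row]
            for row in attention_map]
```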
+
+# 3.2 Weakly Supervised Approach: CAVGA$_w$
+
+$\mathrm{CAVGA}_u$ can be further extended to a weakly supervised setting (denoted as $\mathrm{CAVGA}_w$ ), where we explore the possibility of using a few anomalous training images to improve anomaly localization. Given the labels of the anomalous and normal images, without the pixel-wise annotation of the anomaly during training, we modify $\mathrm{CAVGA}_u$ by introducing a binary classifier $C$ at the output of $z$ as shown in Fig. 2 (b) and train $C$ using the binary cross entropy loss $L_{bce}$. Given an image $x$ and its ground truth label $y$, we define $p \in \{c_a, c_n\}$ as the prediction of $C$, where $c_a$ and $c_n$ are the anomalous and normal classes respectively. As shown in Fig. 2 (b), we clone $z$ into a new tensor, flatten it to form a fully connected layer $z_{fc}$, and add a 2-node output layer to form $C$; $z$ and $z_{fc}$ share parameters. Flattening $z_{fc}$ enables a higher magnitude of gradient backpropagation from $p$ [49].
+
+**Complementary guided attention loss $L_{cga}$.** Although attention maps generated from a trained classifier have been used in weakly supervised semantic segmentation tasks [39, 49], to the best of our knowledge, we are the first to propose supervision on attention maps for anomaly localization in the weakly supervised setting. Since the attention map depends on the performance of $C$ [28], we propose the complementary guided attention loss $L_{cga}$, based on $C$'s prediction, to improve anomaly localization. We use Grad-CAM to compute the attention map for the anomalous class, $A_x^{c_a}$, and the attention map for the normal class, $A_x^{c_n}$, on a normal image $x$ ( $y = c_n$ ). Using $A_x^{c_a}$ and $A_x^{c_n}$, we propose $L_{cga}$, which minimizes the area covered by $A_x^{c_a}$ while simultaneously enforcing $A_x^{c_n}$ to cover the entire normal image. Since the attention map is computed by backpropagating the gradients from $p$, any incorrect $p$ would generate an undesired attention map, leading the network to focus on erroneous areas of the image during training; we avoid this through $L_{cga}$. We compute $L_{cga}$ only for the normal images correctly classified by the classifier, i.e. if $p = y = c_n$. We define $L_{cga,1}$, the per-image complementary guided attention loss in the weakly supervised setting, as:
+
+$$
+L _ {c g a, 1} = \frac {\mathbb {1} (p = y = c _ {n})}{\left| A _ {x} ^ {c _ {n}} \right|} \sum_ {i, j} \left(1 - \left(A _ {x} ^ {c _ {n}}\right) _ {i, j} + \left(A _ {x} ^ {c _ {a}}\right) _ {i, j}\right), \tag {5}
+$$
+
+where $\mathbb{1}(\cdot)$ is an indicator function. $L_{cga}$ is the average of $L_{cga,1}$ over the $N$ images. Our final objective function $L_{final}$ is defined as:
+
+$$
+L _ {f i n a l} = w _ {r} L + w _ {a d v} L _ {a d v} + w _ {c} L _ {b c e} + w _ {c g a} L _ {c g a}, \tag {6}
+$$
+
+Table 2: Our experimental settings. Notations: $u$ : unsupervised; $w$ : weakly supervised; $D_M$ : MNIST [27]; $D_F$ : Fashion-MNIST [57]; $D_C$ : CIFAR-10 [25]
+
+| property \ dataset | MVTAD [5] | MVTAD [5] | mSTC [31] | mSTC [31] | LAG [29] | $D_M$ | $D_F$ | $D_C$ |
+| setting | u | w | u | w | u | u | u | u |
+| # total classes | 15 | 15 | 13 | 13 | 1 | 10 | 10 | 10 |
+| # normal training images | 3629 | 3629 | 244875 | 244875 | 2632 | ~6k | 6k | 5k |
+| # anomalous training images | 0 | 35 | 0 | 1763 | 0 | 0 | 0 | 0 |
+| # normal testing images | 467 | 467 | 21147 | 21147 | 800 | ~1k | 1k | 1k |
+| # anomalous testing images | 1223 | 1223 | 86404 | 86404 | 2392 | ~9k | 9k | 9k |
+
+where $w_{r}$ , $w_{adv}$ , $w_{c}$ , and $w_{cga}$ are empirically set as 1, 1, 0.001, and 0.01 respectively. During testing, we use $C$ to predict the input image $x_{test}$ as anomalous or normal. The anomalous attention map $A_{test}$ of $x_{test}$ is computed when $y = c_{a}$ . We use the same evaluation method as that in Sec. 3.1 for anomaly localization.
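Eq. (5) can be sketched as below; the explicit check implements the indicator $\mathbb{1}(p = y = c_n)$, and the function and argument names are our own:

```python
def cga_loss(pred, label, att_normal, att_anom, c_n="normal"):
    # Eq. (5): only normal images the classifier predicts correctly contribute.
    if not (pred == label == c_n):
        return 0.0
    pairs = list(zip((a for row in att_normal for a in row),
                     (a for row in att_anom for a in row)))
    # Expand the normal-class attention A^{c_n}, suppress the anomalous-class
    # attention A^{c_a}, per element, averaged over the map.
    return sum(1.0 - an + aa for an, aa in pairs) / len(pairs)
```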
+
+# 4 Experimental Setup
+
+Benchmark datasets: We evaluate CAVGA on the MVTAD [5], mSTC [31] and LAG [29] datasets for anomaly localization, and on the MVTAD, mSTC, LAG, MNIST [27], CIFAR-10 [25] and Fashion-MNIST [57] datasets for anomaly detection. Since the STC dataset [31] is designed for video rather than image anomaly detection, we extract every $5^{\text{th}}$ frame of the video from each scene for training and testing, without using any temporal information. We term the modified STC dataset mSTC and summarize the experimental settings in Table 2.
+
+Baseline methods: For anomaly localization, we compare CAVGA with AVID [47], $\mathrm{AE}_{\mathrm{L2}}$ [6], $\mathrm{AE}_{\mathrm{SSIM}}$ [6], AnoGAN [48], CNN feature dictionary (CNNFD) [37], texture inspection (TI) [8], $\gamma$ -VAE grad [13] (denoted as $\gamma$ -VAEg), LSA [2], ADVAE [32] and variation model (VM) [52] based approaches on the MVTAD and mSTC datasets. Since [13] does not provide the code for their method, we adapt the code from [1] and report its best result using our experimental settings. We also compare $\mathrm{CAVGA}_u$ with CAM [60], GBP [51], Smooth-Grad [50] and Patho-GAN [54] on the LAG dataset. In addition, we compare $\mathrm{CAVGA}_u$ with LSA [2], OCGAN [41], ULSLM [56], CapsNet PP-based and CapsNet RE-based [30] (denoted as CapsNetPP and CapsNetRE), AnoGAN [48], ADGAN [12], and $\beta$ -VAE [21] on the MNIST, CIFAR-10 and Fashion-MNIST datasets for anomaly detection.
+
+Architecture details: Based on the framework in Fig. 2 (a), we use the convolution layers of ResNet-18 [19], pretrained on ImageNet [45] and finetuned on each category / scene individually, as our encoder. Inspired by [9], we use the residual generator as our residual decoder, modifying it with a convolution layer interleaved between two upsampling layers. The skip connection from the output of the upsampling layer to the output of the convolution layer increases the mutual information between observations and the latent variable, and also avoids latent variable collapse [14]. We use the discriminator of DC-GAN
+
+Table 3: Performance comparison of anomaly localization in category-specific IoU, mean IoU (IoU), and mean AuROC (AuROC) on the MVTAD dataset. The darker cell color indicates better performance ranking in each row
+
+| Category | AVID [47] | AESSIM [6] | AEL2 [6] | AnoGAN [48] | γ-VAEg [13] | LSA [2] | ADVAE [32] | CAVGA-Du | CAVGA-Ru | CAVGA-Dw | CAVGA-Rw |
| Bottle | 0.28 | 0.15 | 0.22 | 0.05 | 0.27 | 0.27 | 0.27 | 0.30 | 0.34 | 0.36 | 0.39 |
| Hazelnut | 0.54 | 0.00 | 0.41 | 0.02 | 0.63 | 0.41 | 0.44 | 0.44 | 0.51 | 0.58 | 0.79 |
| Capsule | 0.21 | 0.09 | 0.11 | 0.04 | 0.24 | 0.22 | 0.11 | 0.25 | 0.31 | 0.38 | 0.41 |
| Metal Nut | 0.05 | 0.01 | 0.26 | 0.00 | 0.22 | 0.38 | 0.49 | 0.39 | 0.45 | 0.46 | 0.46 |
| Leather | 0.32 | 0.34 | 0.67 | 0.34 | 0.41 | 0.77 | 0.24 | 0.76 | 0.79 | 0.80 | 0.84 |
| Pill | 0.11 | 0.07 | 0.25 | 0.17 | 0.48 | 0.18 | 0.18 | 0.34 | 0.40 | 0.44 | 0.53 |
| Wood | 0.14 | 0.36 | 0.29 | 0.14 | 0.45 | 0.41 | 0.14 | 0.56 | 0.59 | 0.61 | 0.66 |
| Carpet | 0.25 | 0.69 | 0.38 | 0.34 | 0.79 | 0.76 | 0.10 | 0.71 | 0.73 | 0.70 | 0.81 |
| Tile | 0.09 | 0.04 | 0.23 | 0.08 | 0.38 | 0.32 | 0.23 | 0.31 | 0.38 | 0.47 | 0.81 |
| Grid | 0.51 | 0.88 | 0.83 | 0.04 | 0.36 | 0.20 | 0.02 | 0.32 | 0.38 | 0.42 | 0.55 |
| Cable | 0.27 | 0.01 | 0.05 | 0.01 | 0.26 | 0.36 | 0.18 | 0.37 | 0.44 | 0.49 | 0.51 |
| Transistor | 0.18 | 0.01 | 0.22 | 0.08 | 0.44 | 0.21 | 0.30 | 0.30 | 0.35 | 0.38 | 0.45 |
| Toothbrush | 0.43 | 0.08 | 0.51 | 0.07 | 0.37 | 0.48 | 0.14 | 0.54 | 0.57 | 0.60 | 0.63 |
| Screw | 0.22 | 0.03 | 0.34 | 0.01 | 0.38 | 0.38 | 0.17 | 0.42 | 0.48 | 0.51 | 0.66 |
| Zipper | 0.25 | 0.10 | 0.13 | 0.01 | 0.17 | 0.14 | 0.06 | 0.20 | 0.26 | 0.29 | 0.31 |
| IoU | 0.26 | 0.19 | 0.33 | 0.09 | 0.39 | 0.37 | 0.20 | 0.41 | 0.47 | 0.50 | 0.59 |
| AuROC | 0.78 | 0.87 | 0.82 | 0.74 | 0.86 | 0.79 | 0.86 | 0.85 | 0.89 | 0.92 | 0.93 |
+
+[42], pretrained on the Celeb-A dataset [33] and finetuned on our data, as our discriminator, and term this network CAVGA-R. For fair comparisons with the baseline approaches in terms of network architecture, we use the discriminator and generator of DC-GAN pretrained on the Celeb-A dataset as our encoder and decoder respectively. We keep the same discriminator as discussed previously and term this network CAVGA-D. CAVGA-$\mathbf{D}_u$ and CAVGA-$\mathbf{R}_u$ are referred to as $\mathrm{CAVGA}_u$ in the unsupervised setting, and $\mathrm{CAVGA\text{-}D}_w$ and $\mathrm{CAVGA\text{-}R}_w$ as $\mathrm{CAVGA}_w$ in the weakly supervised setting.
+
+Training and evaluation: For anomaly localization and detection on the MVTAD, mSTC and LAG datasets, the network is trained only on normal images in the unsupervised setting. In the weakly supervised setting, since none of the baseline methods report the number of anomalous training images used to compute their thresholds, we randomly choose $2\%$ of the anomalous images along with all the normal training images for training. On the MNIST, CIFAR-10 and Fashion-MNIST datasets, we follow the same procedure as [12] (training/testing uses a single class as normal and the rest of the classes as anomalous; we train CAVGA-$D_u$ using this normal class). For anomaly localization, we report the AuROC [5] and the Intersection-over-Union (IoU) between the generated attention map and the ground truth. Following [5], we use the mean of the accuracy of correctly classified anomalous images and that of correctly classified normal images to evaluate anomaly detection on the MVTAD, mSTC and LAG datasets. On the MNIST, CIFAR-10, and Fashion-MNIST datasets, following [12], we use AuROC for evaluation.
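The IoU metric used for localization can be sketched over flattened binary masks (illustrative helper name):

```python
def iou(pred_mask, gt_mask):
    # Intersection-over-Union between the thresholded anomalous
    # attention map and the pixel-level ground-truth mask.
    inter = sum(p and g for p, g in zip(pred_mask, gt_mask))
    union = sum(p or g for p, g in zip(pred_mask, gt_mask))
    return inter / union if union else 1.0
```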
+
+# 5 Experimental Results
+
+We use the cell color in the quantitative result tables to denote the performance ranking in that row, where darker cell color means better performance.
+
+Performance on anomaly localization: Fig. 3 (a) shows the qualitative results, and Table 3 shows that $\mathrm{CAVGA}_u$ localizes the anomaly better than the baselines on the MVTAD dataset. $\mathrm{CAVGA\text{-}D}_u$ outperforms the best performing baseline method ( $\gamma$-VAE$_g$ ) in mean IoU by $5\%$. Most baselines use anomalous training images to compute a class-specific threshold to localize anomalies; needing no anomalous training images, $\mathrm{CAVGA\text{-}D}_u$ still outperforms all the mentioned baselines in mean IoU. In terms of mean AuROC, $\mathrm{CAVGA\text{-}D}_u$ outperforms CNNFD, TI and VM by $9\%$, $12\%$ and $10\%$ respectively, and achieves results comparable with the best baseline method. Table 3 also shows that $\mathrm{CAVGA\text{-}D}_w$ outperforms $\mathrm{CAVGA\text{-}D}_u$ by $22\%$ and $8\%$ in mean IoU and mean AuROC respectively. $\mathrm{CAVGA\text{-}D}_w$ also outperforms the baselines in mean AuROC. Fig. 4 illustrates that one challenge in anomaly localization is the low contrast between the anomalous regions and their background; in such scenarios, although still outperforming the baselines, CAVGA does not localize the anomaly well.
+
+
+Fig.3: Qualitative results on (a) MVTAD & (b) mSTC datasets respectively. The anomalous attention map (in red) depicts the localization of the anomaly
+
+Fig. 3 (b) illustrates the qualitative results, and Table 4 shows that CAVGA also outperforms the baseline methods in mean IoU and mean AuROC on the mSTC dataset. Table 5 shows that CAVGA outperforms the most competitive baseline, Patho-GAN [54], by $16\%$ in IoU on the LAG dataset. CAVGA is practical to train on a single GTX 1080Ti GPU, with training and testing times comparable to the baseline methods.
+
+Table 4: Performance comparison of anomaly localization in IoU and its mean (IoU) along with anomaly detection in terms of mean of accuracy of correctly classified anomalous images and normal images on the mSTC dataset for each scene ID $s_i$ . For anomaly localization, we also list the mean AuROC (AuROC)
+
+| Task\Method | si | γ-VAEg[13] | AVID[47] | LSA[2] | AESSIM[6] | AEL2[6] | CAVGA-Du | CAVGA-Ru | CAVGA-Dw | CAVGA-Rw |
| Localization | 01 | 0.239 | 0.182 | 0.244 | 0.201 | 0.163 | 0.267 | 0.316 | 0.383 | 0.441 |
| 02 | 0.206 | 0.206 | 0.183 | 0.081 | 0.172 | 0.190 | 0.234 | 0.257 | 0.349 |
| 03 | 0.272 | 0.162 | 0.265 | 0.218 | 0.240 | 0.277 | 0.293 | 0.313 | 0.465 |
| 04 | 0.290 | 0.263 | 0.271 | 0.118 | 0.125 | 0.283 | 0.349 | 0.360 | 0.381 |
| 05 | 0.318 | 0.234 | 0.287 | 0.162 | 0.129 | 0.291 | 0.312 | 0.408 | 0.478 |
| 06 | 0.337 | 0.314 | 0.238 | 0.215 | 0.198 | 0.344 | 0.420 | 0.455 | 0.589 |
| 07 | 0.168 | 0.214 | 0.137 | 0.191 | 0.165 | 0.198 | 0.241 | 0.284 | 0.366 |
| 08 | 0.220 | 0.168 | 0.233 | 0.069 | 0.056 | 0.219 | 0.254 | 0.295 | 0.371 |
| 09 | 0.174 | 0.193 | 0.187 | 0.038 | 0.021 | 0.247 | 0.284 | 0.313 | 0.365 |
| 10 | 0.146 | 0.137 | 0.146 | 0.116 | 0.141 | 0.149 | 0.166 | 0.245 | 0.295 |
| 11 | 0.277 | 0.264 | 0.286 | 0.101 | 0.075 | 0.309 | 0.372 | 0.441 | 0.588 |
| 12 | 0.162 | 0.180 | 0.108 | 0.203 | 0.164 | 0.098 | 0.141 | 0.207 | 0.263 |
| IoU | 0.234 | 0.210 | 0.215 | 0.143 | 0.137 | 0.239 | 0.281 | 0.330 | 0.412 |
| AuROC | 0.82 | 0.77 | 0.81 | 0.76 | 0.74 | 0.83 | 0.85 | 0.89 | 0.90 |
| Detection | 01 | 0.75 | 0.68 | 0.75 | 0.65 | 0.72 | 0.77 | 0.85 | 0.84 | 0.87 |
| 02 | 0.75 | 0.75 | 0.79 | 0.70 | 0.61 | 0.76 | 0.84 | 0.89 | 0.90 |
| 03 | 0.81 | 0.68 | 0.63 | 0.79 | 0.71 | 0.82 | 0.84 | 0.86 | 0.88 |
| 04 | 0.83 | 0.71 | 0.79 | 0.81 | 0.66 | 0.80 | 0.80 | 0.81 | 0.83 |
| 05 | 0.86 | 0.59 | 0.68 | 0.71 | 0.67 | 0.81 | 0.86 | 0.90 | 0.94 |
| 06 | 0.59 | 0.62 | 0.58 | 0.47 | 0.55 | 0.64 | 0.67 | 0.65 | 0.70 |
| 07 | 0.59 | 0.63 | 0.63 | 0.36 | 0.59 | 0.60 | 0.64 | 0.75 | 0.77 |
| 08 | 0.77 | 0.73 | 0.75 | 0.69 | 0.70 | 0.74 | 0.74 | 0.76 | 0.80 |
| 09 | 0.89 | 0.88 | 0.79 | 0.84 | 0.73 | 0.87 | 0.88 | 0.90 | 0.91 |
| 10 | 0.64 | 0.80 | 0.84 | 0.83 | 0.88 | 0.88 | 0.92 | 0.94 | 0.94 |
| 11 | 0.78 | 0.68 | 0.71 | 0.71 | 0.75 | 0.79 | 0.81 | 0.83 | 0.83 |
| 12 | 0.71 | 0.66 | 0.63 | 0.65 | 0.52 | 0.76 | 0.79 | 0.81 | 0.83 |
| avg | 0.75 | 0.70 | 0.71 | 0.68 | 0.67 | 0.77 | 0.80 | 0.83 | 0.85 |
+
+Table 5: Performance comparison of anomaly localization in IoU along with anomaly detection in terms of classification accuracy on the LAG dataset [29]
+
+| Task \ Method | CAM [60] | GBP [51] | SmoothGrad [50] | Patho-GAN [54] | CAVGA-Du |
| Localization | 0.13 | 0.09 | 0.14 | 0.37 | 0.43 |
| Detection | 0.68 | 0.84 | 0.79 | 0.89 | 0.90 |
+
+
+Fig. 4: Examples of incorrect localization of the anomaly on the MVTAD dataset by CAVGA- $\mathbf{R}_u$ and CAVGA- $\mathbf{R}_w$
+
+Table 6: The mean of accuracy of correctly classified anomalous images and normal images in anomaly detection on the MVTAD dataset
+
+| Category | AVID [47] | AESSIM [6] | AEL2 [6] | AnoGAN [48] | γ-VAEg [13] | LSA [2] | CAVGA-Du | CAVGA-Ru | CAVGA-Dw | CAVGA-Rw |
| Bottle | 0.88 | 0.88 | 0.80 | 0.69 | 0.86 | 0.86 | 0.89 | 0.91 | 0.93 | 0.96 |
| Hazelnut | 0.86 | 0.54 | 0.88 | 0.50 | 0.74 | 0.80 | 0.84 | 0.87 | 0.90 | 0.92 |
| Capsule | 0.85 | 0.61 | 0.62 | 0.58 | 0.86 | 0.71 | 0.83 | 0.87 | 0.89 | 0.93 |
| Metal Nut | 0.63 | 0.54 | 0.73 | 0.50 | 0.78 | 0.67 | 0.67 | 0.71 | 0.81 | 0.88 |
| Leather | 0.58 | 0.46 | 0.44 | 0.52 | 0.71 | 0.70 | 0.71 | 0.75 | 0.80 | 0.84 |
| Pill | 0.86 | 0.60 | 0.62 | 0.62 | 0.80 | 0.85 | 0.88 | 0.91 | 0.93 | 0.97 |
| Wood | 0.83 | 0.83 | 0.74 | 0.68 | 0.89 | 0.75 | 0.85 | 0.88 | 0.89 | 0.89 |
| Carpet | 0.70 | 0.67 | 0.50 | 0.49 | 0.67 | 0.74 | 0.73 | 0.78 | 0.80 | 0.82 |
| Tile | 0.66 | 0.52 | 0.77 | 0.51 | 0.81 | 0.70 | 0.70 | 0.72 | 0.81 | 0.86 |
| Grid | 0.59 | 0.69 | 0.78 | 0.51 | 0.83 | 0.54 | 0.75 | 0.78 | 0.79 | 0.81 |
| Cable | 0.64 | 0.61 | 0.56 | 0.53 | 0.56 | 0.61 | 0.63 | 0.67 | 0.86 | 0.97 |
| Transistor | 0.58 | 0.52 | 0.71 | 0.67 | 0.70 | 0.50 | 0.73 | 0.75 | 0.80 | 0.89 |
| Toothbrush | 0.73 | 0.74 | 0.98 | 0.57 | 0.89 | 0.89 | 0.91 | 0.97 | 0.96 | 0.99 |
| Screw | 0.66 | 0.51 | 0.69 | 0.35 | 0.71 | 0.75 | 0.77 | 0.78 | 0.79 | 0.79 |
| Zipper | 0.84 | 0.80 | 0.80 | 0.59 | 0.67 | 0.88 | 0.87 | 0.94 | 0.95 | 0.96 |
| mean | 0.73 | 0.63 | 0.71 | 0.55 | 0.77 | 0.73 | 0.78 | 0.82 | 0.86 | 0.90 |
+
+Performance on anomaly detection: Table 6 shows that $\mathrm{CAVGA}_u$ outperforms the baselines in the mean of the accuracy of correctly classified anomalous and normal images on the MVTAD dataset. $\mathrm{CAVGA\text{-}D}_u$ outperforms the best performing baseline ( $\gamma$-VAE$_g$ ) in mean classification accuracy by $1.3\%$. Tables 4 and 5 show that CAVGA outperforms the baseline methods in classification accuracy on the mSTC and LAG datasets by $2.6\%$ and $1.1\%$ respectively. Furthermore, Table 7 shows that $\mathrm{CAVGA\text{-}D}_u$ outperforms all the baselines in mean AuROC in the unsupervised setting on the MNIST, CIFAR-10 and Fashion-MNIST datasets. $\mathrm{CAVGA\text{-}D}_u$ also outperforms MemAE [16] and $\beta$-VAE [21] by $1.1\%$ and $8\%$ on the MNIST dataset and by $21\%$ and $38\%$ on the CIFAR-10 dataset respectively, and outperforms all the listed baselines in mean AuROC on the Fashion-MNIST dataset.
+
+# 6 Ablation Study
+
+All the ablation studies are performed on the 15 categories of the MVTAD dataset, of which 5 are reported here; the mean over all 15 categories is shown in Table 8. We illustrate the effectiveness of the convolutional $z$ in CAVGA, of $L_{ae}$ in the unsupervised setting, and of $L_{cga}$ in the weakly supervised setting. The qualitative results are shown in Fig. 5. The column IDs refer to the columns in Table 8.
+
+Effect of convolutional latent variable $z$ : To show the effectiveness of the convolutional $z$, we flatten the output of the encoder of CAVGA-$R_u$ and CAVGA-$R_w$ and connect it to a fully connected layer as the latent variable. Following [6], the dimension of the latent variable is chosen as 100. We call these networks CAVGA-$R_u^*$ and CAVGA-$R_w^*$ in the unsupervised and weakly supervised settings respectively. In the unsupervised setting, we train CAVGA-$R_u$ and CAVGA-$R_u^*$ using $L + L_{adv}$ as our objective function and compute the anomalous attention
+
+Table 7: Performance comparison of anomaly detection in terms of AuROC and mean AuROC with the SOTA methods on MNIST $(D_M)$ and CIFAR-10 $(D_C)$ datasets. We also report the mean AuROC on Fashion-MNIST $(D_F)$ dataset
+
+| Dataset | Class | γ-VAEg[13] | LSA[2] | OCGAN[41] | ULSLM[56] | CapsNetPP[30] | CapsNetRE[30] | AnoGAN[48] | ADGAN[12] | CAVGA-Du |
| DM [27] | 0 | 0.991 | 0.993 | 0.998 | 0.991 | 0.998 | 0.947 | 0.990 | 0.999 | 0.994 |
| 1 | 0.996 | 0.999 | 0.999 | 0.972 | 0.990 | 0.907 | 0.998 | 0.992 | 0.997 |
| 2 | 0.983 | 0.959 | 0.942 | 0.919 | 0.984 | 0.970 | 0.888 | 0.968 | 0.989 |
| 3 | 0.978 | 0.966 | 0.963 | 0.943 | 0.976 | 0.949 | 0.913 | 0.953 | 0.983 |
| 4 | 0.976 | 0.956 | 0.975 | 0.942 | 0.935 | 0.872 | 0.944 | 0.960 | 0.977 |
| 5 | 0.972 | 0.964 | 0.980 | 0.872 | 0.970 | 0.966 | 0.912 | 0.955 | 0.968 |
| 6 | 0.993 | 0.994 | 0.991 | 0.988 | 0.942 | 0.909 | 0.925 | 0.980 | 0.988 |
| 7 | 0.981 | 0.980 | 0.981 | 0.939 | 0.987 | 0.934 | 0.964 | 0.950 | 0.986 |
| 8 | 0.980 | 0.953 | 0.939 | 0.960 | 0.993 | 0.929 | 0.883 | 0.959 | 0.988 |
| 9 | 0.967 | 0.981 | 0.981 | 0.967 | 0.990 | 0.871 | 0.958 | 0.965 | 0.991 |
| mean | 0.982 | 0.975 | 0.975 | 0.949 | 0.977 | 0.925 | 0.937 | 0.968 | 0.986 |
| DC [25] | 0 | 0.702 | 0.735 | 0.757 | 0.740 | 0.622 | 0.371 | 0.610 | 0.661 | 0.653 |
| 1 | 0.663 | 0.580 | 0.531 | 0.747 | 0.455 | 0.737 | 0.565 | 0.435 | 0.784 |
| 2 | 0.680 | 0.690 | 0.640 | 0.628 | 0.671 | 0.421 | 0.648 | 0.636 | 0.761 |
| 3 | 0.713 | 0.542 | 0.620 | 0.572 | 0.675 | 0.588 | 0.528 | 0.488 | 0.747 |
| 4 | 0.770 | 0.761 | 0.723 | 0.678 | 0.683 | 0.388 | 0.670 | 0.794 | 0.775 |
| 5 | 0.689 | 0.546 | 0.620 | 0.602 | 0.635 | 0.601 | 0.592 | 0.640 | 0.552 |
| 6 | 0.805 | 0.751 | 0.723 | 0.753 | 0.727 | 0.491 | 0.625 | 0.685 | 0.813 |
| 7 | 0.588 | 0.535 | 0.575 | 0.685 | 0.673 | 0.631 | 0.576 | 0.559 | 0.745 |
| 8 | 0.813 | 0.717 | 0.820 | 0.781 | 0.710 | 0.410 | 0.723 | 0.798 | 0.801 |
| 9 | 0.744 | 0.548 | 0.554 | 0.795 | 0.466 | 0.671 | 0.582 | 0.643 | 0.741 |
| mean | 0.717 | 0.641 | 0.656 | 0.736 | 0.612 | 0.531 | 0.612 | 0.634 | 0.737 |
| DF [57] | mean | 0.873 | 0.876 | - | - | 0.765 | 0.679 | - | - | 0.885 |
+
+Table 8: The ablation study on 5 randomly chosen categories showing anomaly localization in IoU on the MVTAD dataset. The mean of all 15 categories is reported. CAVGA- $\mathbf{R}_u^*$ and CAVGA- $\mathbf{R}_w^*$ are our base architecture with a flattened $z$ in the unsupervised and weakly supervised settings respectively. “conv $z$ ” means using convolutional $z$
+
+| Method \ Category | CAVGA-R*u | CAVGA-R*u + Lae | CAVGA-Ru (conv z) | CAVGA-Ru (conv z) + Lae | CAVGA-R*w | CAVGA-R*w + Lcga | CAVGA-Rw (conv z) | CAVGA-Rw (conv z) + Lcga |
| Column ID | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 |
| Bottle | 0.24 | 0.27 | 0.26 | 0.33 | 0.16 | 0.34 | 0.28 | 0.39 |
| Hazelnut | 0.16 | 0.26 | 0.31 | 0.47 | 0.51 | 0.76 | 0.67 | 0.79 |
| Capsule | 0.09 | 0.22 | 0.14 | 0.31 | 0.18 | 0.36 | 0.27 | 0.41 |
| Metal Nut | 0.28 | 0.38 | 0.34 | 0.45 | 0.25 | 0.38 | 0.28 | 0.46 |
| Leather | 0.55 | 0.71 | 0.64 | 0.79 | 0.72 | 0.79 | 0.75 | 0.84 |
| mean | 0.24 | 0.34 | 0.33 | 0.47 | 0.39 | 0.52 | 0.48 | 0.60 |
+
+map from the feature map of the latent variable during inference. Similarly, in the weakly supervised setting, we train CAVGA- $\mathbf{R}_w$ and CAVGA- $\mathbf{R}_w^*$ using $L + L_{adv} + L_{bce}$ as our objective function and compute the anomalous attention map from the classifier's prediction during inference. Comparing column $c_1$ with
+
+
+Fig. 5: Qualitative results of the ablation study to illustrate the performance of the anomaly localization on the MVTAD dataset
+
+$c_{3}$ and $c_{5}$ with $c_{7}$ in Table 8, we observe that preserving the spatial relation of the input and latent variable through the convolutional $z$ improves the IoU in anomaly localization without the use of $L_{ae}$ in the unsupervised setting and $L_{cga}$ in the weakly supervised setting. Furthermore, comparing column $c_{2}$ with $c_{4}$ and $c_{6}$ with $c_{8}$ in Table 8, we observe that using convolutional $z$ in CAVGA- $\mathbf{R}_u$ and CAVGA- $\mathbf{R}_w$ outperforms using a flattened latent variable even with the help of $L_{ae}$ in the unsupervised setting and $L_{cga}$ in the weakly supervised setting.
+
+Effect of attention expansion loss $L_{ae}$ : To test the effectiveness of using $L_{ae}$ in the unsupervised setting, we train CAVGA- $\mathbf{R}_u^*$ and CAVGA- $\mathbf{R}_u$ with eq. 4. During inference, the anomalous attention map is computed to localize the anomaly. Comparing column $c_1$ with $c_2$ and $c_3$ with $c_4$ in Table 8, we observe that $L_{ae}$ enhances the IoU regardless of a flattened or convolutional latent variable.
+
+Effect of complementary guided attention loss $L_{cga}$ : We show the effectiveness of $L_{cga}$ by training CAVGA- $\mathbf{R}_w^*$ and CAVGA- $\mathbf{R}_w$ using eq. 6. Comparing column $c_5$ with $c_6$ and $c_7$ with $c_8$ in Table 8, we find that using $L_{cga}$ enhances the IoU regardless of a flattened or convolutional latent variable.
+
+# 7 Conclusion
+
+We propose an end-to-end convolutional adversarial variational autoencoder with guided attention, a novel use of this technique for anomaly localization. Applicable to different network architectures, our attention expansion loss and complementary guided attention loss improve the performance of anomaly localization in the unsupervised and weakly supervised (with only $2\%$ extra anomalous images for training) settings respectively. We quantitatively and qualitatively show that CAVGA outperforms the state-of-the-art (SOTA) anomaly localization methods on the MVTAD, mSTC and LAG datasets. We also show CAVGA's ability to outperform SOTA anomaly detection methods on the MVTAD, mSTC, LAG, MNIST, Fashion-MNIST and CIFAR-10 datasets.
+
+Acknowledgments: This work was done while Shashanka was an intern and Kuan-Chuan was a Staff Scientist at Siemens. Shashanka's effort was partially supported by DARPA under Grant D19AP00032.
+
+# Bibliography
+
+[1] Code for iterative energy-based projection on a normal data manifold anomaly localization. https://qiita.com/kogepan102/items/122b2862ad5a51180656, accessed on: 2020-02-29
+[2] Abati, D., Porrello, A., Calderara, S., Cucchiara, R.: Latent space autoregression for novelty detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 481-490 (2019)
+[3] Akcay, S., Atapour-Abarghouei, A., Breckon, T.P.: GANomaly: Semi-supervised anomaly detection via adversarial training. In: Asian Conference on Computer Vision. pp. 622–637. Springer (2018)
+[4] Baur, C., Wiestler, B., Albarqouni, S., Navab, N.: Deep autoencoding models for unsupervised anomaly segmentation in brain mr images. In: International MICCAI Brainlesion Workshop. pp. 161-169. Springer (2018)
+[5] Bergmann, P., Fauser, M., Sattlegger, D., Steger, C.: MVTec AD-a comprehensive real-world dataset for unsupervised anomaly detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9592-9600 (2019)
+[6] Bergmann, P., Löwe, S., Fauser, M., Sattlegger, D., Steger, C.: Improving unsupervised defect segmentation by applying structural similarity to autoencoders. In: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP). vol. 5 (2019)
+[7] Bian, J., Hui, X., Sun, S., Zhao, X., Tan, M.: A novel and efficient cvaegan-based approach with informative manifold for semi-supervised anomaly detection. IEEE Access 7, 88903-88916 (2019)
+[8] Böttger, T., Ulrich, M.: Real-time texture error detection on textured surfaces with compressed sensing. Pattern Recognition and Image Analysis 26(1), 88-94 (2016)
+[9] Brock, A., Donahue, J., Simonyan, K.: Large scale GAN training for high fidelity natural image synthesis. In: International Conference on Learning Representations (2019)
+[10] Cheng, K.W., Chen, Y.T., Fang, W.H.: Abnormal crowd behavior detection and localization using maximum sub-sequence search. In: Proceedings of the 4th ACM/IEEE international workshop on Analysis and retrieval of tracked events and motion in imagery stream. pp. 49-58. ACM (2013)
+[11] Daniel, T., Kurutach, T., Tamar, A.: Deep variational semi-supervised novelty detection. arXiv preprint arXiv:1911.04971 (2019)
+[12] Deecke, L., Vandermeulen, R., Ruff, L., Mandt, S., Kloft, M.: Image anomaly detection with generative adversarial networks. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. pp. 3-17. Springer (2018)
+
+[13] Dehaene, D., Frigo, O., Combrexelle, S., Eline, P.: Iterative energy-based projection on a normal data manifold for anomaly localization. International Conference on Learning Representations (2020)
+[14] Dieng, A.B., Kim, Y., Rush, A.M., Blei, D.M.: Avoiding latent variable collapse with generative skip models. In: The 22nd International Conference on Artificial Intelligence and Statistics. pp. 2397-2405 (2019)
+[15] Dimokranitou, A.: Adversarial autoencoders for anomalous event detection in images. Ph.D. thesis (2017)
+[16] Gong, D., Liu, L., Le, V., Saha, B., Mansour, M.R., Venkatesh, S., Hengel, A.v.d.: Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1705-1714 (2019)
+[17] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. pp. 2672–2680 (2014)
+[18] Gutoski, M., Aquino, N.M.R., Ribeiro, M., Lazzaretti, E., Lopes, S.: Detection of video anomalies using convolutional autoencoders and one-class support vector machines. In: XIII Brazilian Congress on Computational Intelligence, 2017 (2017)
+[19] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016)
+[20] Hendrycks, D., Mazeika, M., Dietterich, T.G.: Deep anomaly detection with outlier exposure. In: International Conference on Learning Representations (2019)
+[21] Higgins, I., Matthew, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., Lerchner, A.: beta-VAE: Learning basic visual concepts with a constrained variational framework. International Conference on Learning Representations 2(5), 6 (2017)
+[22] Kimura, D., Chaudhury, S., Narita, M., Munawar, A., Tachibana, R.: Adversarial discriminative attention for robust anomaly detection. In: The IEEE Winter Conference on Applications of Computer Vision (WACV) (March 2020)
+[23] Kingma, D.P., Welling, M.: Auto-encoding variational bayes. In: International Conference on Learning Representations (2014)
+[24] Kiran, B., Thomas, D., Parakkal, R.: An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos. Journal of Imaging 4(2), 36 (2018)
+[25] Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
+[26] Larsen, A.B.L., Sønderby, S.K., Larochelle, H., Winther, O.: Autoencoding beyond pixels using a learned similarity metric. In: International Conference on Machine Learning (2016)
+[27] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278-2324 (1998)
+
+[28] Li, K., Wu, Z., Peng, K.C., Ernst, J., Fu, Y.: Tell me where to look: Guided attention inference network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9215-9223 (2018)
+[29] Li, L., Xu, M., Wang, X., Jiang, L., Liu, H.: Attention based glaucoma detection: A large-scale database and cnn model. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019)
+[30] Li, X., Kiringa, I., Yeap, T., Zhu, X., Li, Y.: Exploring deep anomaly detection methods based on capsule net. International Conference on Machine Learning 2019 Workshop on Uncertainty and Robustness in Deep Learning (2019)
+[31] Liu, W., Luo, W., Lian, D., Gao, S.: Future frame prediction for anomaly detection-a new baseline. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6536-6545 (2018)
+[32] Liu, W., Li, R., Zheng, M., Karanam, S., Wu, Z., Bhanu, B., Radke, R.J., Camps, O.: Towards visually explaining variational autoencoders. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)
+[33] Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV) (December 2015)
+[34] Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., Frey, B.: Adversarial autoencoders. In: International Conference on Learning Representations (2016)
+[35] Masana, M., Ruiz, I., Serrat, J., van de Weijer, J., Lopez, A.M.: Metric learning for novelty and anomaly detection. In: British Machine Vision Conference (BMVC) (2018)
+[36] Matteoli, S., Diani, M., Theiler, J.: An overview of background modeling for detection of targets and anomalies in hyperspectral remotely sensed imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7(6), 2317-2336 (2014)
+[37] Napoletano, P., Piccoli, F., Schettini, R.: Anomaly detection in nanofibrous materials by CNN-based self-similarity. Sensors 18(1), 209 (2018)
+[38] Nguyen, P., Liu, T., Prasad, G., Han, B.: Weakly supervised action localization by sparse temporal pooling network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6752-6761 (2018)
+[39] Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Is object localization for free? weakly-supervised learning with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 685-694 (2015)
+[40] Pawlowski, N., Lee, M.C., Rajchl, M., McDonagh, S., Ferrante, E., Kamnitsas, K., Cooke, S., Stevenson, S., Khetani, A., Newman, T., et al.: Unsupervised lesion detection in brain CT using bayesian convolutional autoencoders. In: Medical Imaging with Deep Learning (2018)
+[41] Perera, P., Nallapati, R., Xiang, B.: OCGAN: One-class novelty detection using GANs with constrained latent representations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2898-2906 (2019)
+[42] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. In: International Conference on Learning Representations (2016)
+[43] Ravanbakhsh, M., Sangineto, E., Nabi, M., Sebe, N.: Training adversarial discriminators for cross-channel abnormal event detection in crowds. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1896-1904. IEEE (2019)
+[44] Ruff, L., Vandermeulen, R.A., Gornitz, N., Binder, A., Müller, E., Müller, K.R., Kloft, M.: Deep semi-supervised anomaly detection. International Conference on Learning Representations (2020)
+[45] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. International journal of computer vision 115(3), 211-252 (2015)
+[46] Sabokrou, M., Khalooei, M., Fathy, M., Adeli, E.: Adversarially learned one-class classifier for novelty detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3379-3388 (2018)
+[47] Sabokrou, M., Pourreza, M., Fayyaz, M., Entezari, R., Fathy, M., Gall, J., Adeli, E.: Avid: Adversarial visual irregularity detection. In: Asian Conference on Computer Vision. pp. 488-505. Springer (2018)
+[48] Schlegl, T., Seebock, P., Waldstein, S.M., Schmidt-Erfurth, U., Langs, G.: Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In: International Conference on Information Processing in Medical Imaging. pp. 146-157. Springer (2017)
+[49] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 618-626 (2017)
+[50] Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017)
+[51] Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 (2014)
+[52] Steger, C.: Similarity measures for occlusion, clutter, and illumination invariant object recognition. In: Joint Pattern Recognition Symposium. pp. 148-154. Springer (2001)
+[53] Vu, H.S., Ueta, D., Hashimoto, K., Maeno, K., Pranata, S., Shen, S.M.: Anomaly detection with adversarial dual autoencoders. arXiv preprint arXiv:1902.06924 (2019)
+[54] Wang, X., Xu, M., Li, L., Wang, Z., Guan, Z.: Pathology-aware deep network visualization and its application in glaucoma image synthesis. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 423-431. Springer (2019)
+[55] Wang, Z., Fan, M., Muknahallipatna, S., Lan, C.: Inductive multi-view semi-supervised anomaly detection via probabilistic modeling. In: 2019 IEEE International Conference on Big Knowledge (ICBK). pp. 257-264. IEEE (2019)
+[56] Wolf, L., Benaim, S., Galanti, T.: Unsupervised learning of the set of local maxima. International Conference on Learning Representations (2019)
+[57] Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017)
+[58] Zagoruyko, S., Komodakis, N.: Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In: International Conference on Learning Representations (2017)
+[59] Zenati, H., Foo, C.S., Lecouat, B., Manek, G., Chandrasekhar, V.R.: Efficient GAN-based anomaly detection. arXiv preprint arXiv:1802.06222 (2018)
+[60] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921-2929 (2016)
\ No newline at end of file
diff --git a/attentionguidedanomalylocalizationinimages/images.zip b/attentionguidedanomalylocalizationinimages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..603798c6789c7b9de35c8e3e1cb8886f44ea5ec5
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07b151ff232c3d56efc0d6f83e5707484bf3ce9aae6be69b50cda1664035c24c
+size 877324
diff --git a/attentionguidedanomalylocalizationinimages/layout.json b/attentionguidedanomalylocalizationinimages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f80a16db6c1eeaab2d762503b324eb4da94ddc6f
--- /dev/null
+++ b/attentionguidedanomalylocalizationinimages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b842fd8dce8af79a0aa481fee8a7ec60a3903b7bafad991469aa3c30ebaee29
+size 533220
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_content_list.json b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a5b954ac11021bd9d4bc9b83a81ad124c057163f
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26e07f7e9f696049ba26d3cf177c0f625a5e51b75a99c0d409cd8398738d0b6d
+size 79875
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_model.json b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f126806fdb7f7dbbfc6aaf5c2e40e2a7b416effd
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce3672df117cab694203c73e9e031102f827a7971da8c4d52231558152eb83ff
+size 97458
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_origin.pdf b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0df97396eb62a85c1b04079e0667350a91d8910b
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/ee3e9430-d356-4e78-8576-f470fb4f7b13_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13bb2b57f449b94dbcbe0f8ef5ed859c014058f33dbe8e09ddbed4bd3dfd8a68
+size 896911
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/full.md b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..74c8e83872fbd39c660700b8f1092cc819474c60
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/full.md
@@ -0,0 +1,293 @@
+# AttentionNAS: Spatiotemporal Attention Cell Search for Video Classification
+
+Xiaofang Wang $^{2\star}$ , Xuehan Xiong $^{1}$ , Maxim Neumann $^{1}$ , AJ Piergiovanni $^{1}$ , Michael S. Ryoo $^{1}$ , Anelia Angelova $^{1}$ , Kris M. Kitani $^{2}$ , and Wei Hua $^{1}$
+
+1 Google 2 Carnegie Mellon University
+
+Abstract. Convolutional operations have two limitations: (1) they do not explicitly model where to focus, since the same filter is applied at every position, and (2) they are unsuitable for modeling long-range dependencies, since they only operate on a small neighborhood. While both limitations can be alleviated by attention operations, many design choices must still be made in order to use attention, especially when applying attention to videos. Towards a principled way of applying attention to videos, we address the task of spatiotemporal attention cell search. We propose a novel search space for spatiotemporal attention cells, which allows the search algorithm to flexibly explore various design choices in the cell. The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than $2\%$ on both the Kinetics-600 and MiT datasets. The discovered attention cells outperform non-local blocks on both datasets and demonstrate strong generalization across different modalities, backbones, and datasets. Inserting our attention cells into I3D-R50 yields state-of-the-art performance on both datasets.
+
+Keywords: Attention, Video Classification, Neural Architecture Search
+
+# 1 Introduction
+
+One major contributing factor to the success of neural networks in computer vision is the novel design of network architectures. In early work, most network architectures [12, 28, 10] were manually designed by human experts based on their knowledge and intuition of specific tasks. Recent work on neural architecture search (NAS) [41, 42, 16, 15, 21] proposes to directly learn the architecture for a specific task from data and discovered architectures have been shown to outperform human-designed ones.
+
+Convolutional Neural Networks (CNNs) have been the de facto architecture choice. Most work in computer vision uses convolutional operations as the primary building block to construct the network. However, convolutional operations still have their limitations. It has been shown that attention is complementary to convolutional operations, and they can be combined to further improve performance on vision tasks [33, 32, 2].
+
+While attention is complementary to convolution, many design choices must still be made in order to use it. The design becomes more complex when applying attention to videos, where the following questions arise: What is the right dimension along which to apply an attention operation to videos? Should an operation be applied to the temporal, spatial, or spatiotemporal dimension? How should multiple attention operations applied to different dimensions be composed?
+
+Towards a principled way of applying attention to videos, we address the task of spatiotemporal attention cell search, i.e., the automatic discovery of cells that use attention operations as the primary building block. The discovered attention cells can be seamlessly inserted into a wide range of backbone networks, e.g., I3D [5] or S3D [36], to improve the performance on video understanding tasks.
+
+Specifically, we propose a search space for spatiotemporal attention cells, which allows the search algorithm to flexibly explore all of the aforementioned design choices in the cell. The attention cell is constructed by composing several primitive attention operations. Importantly, we consider two types of primitive attention operations: (1) map-based attention [19, 33] and (2) dot-product attention (a.k.a., self-attention) [30, 32, 2]. Map-based attention explicitly models where to focus in videos, compensating for the fact that convolutional operations apply the same filter to all the positions in videos. Dot-product attention enables the explicit modeling of long-range dependencies between distant positions in videos, accommodating the fact that convolutional operations only operate on a small and local neighborhood.
+
+We aim to find an attention cell from the proposed search space such that the video classification accuracy is maximized when adding that attention cell into the backbone network. But the search process can be extremely costly. One significant bottleneck of the search is the need to constantly evaluate different attention cells. Evaluating the performance of an attention cell typically requires training the selected attention cell as well as the backbone network from scratch, which can take days on large-scale video datasets, e.g., Kinetics-600 [4].
+
+To alleviate this bottleneck, we consider two search algorithms: (1) Gaussian Process Bandit (GPB) [26, 25], which judiciously selects the next attention cell for evaluation based on the attention cells evaluated so far, allowing us to find high-performing attention cells within a limited number of trials; (2) differentiable architecture search [16], where we develop a differentiable formulation of the proposed search space, making it possible to jointly learn the attention cell design and network weights through back-propagation, without explicitly sampling and evaluating different cells. The entire differentiable search process only consumes a computational cost similar to fully training one network on the training videos. This formulation also allows us to learn position-specific attention cell designs with zero extra computational cost (see Sec 4.2 for details).
+
+Fig. 1: Illustration of the operation-level search space (left) and cell-level search space (right). The example attention operations use temporal as the attention dimension and the tuple under each feature map denotes its shape.
+
+We conduct extensive experiments on two benchmark datasets: Kinetics-600 [4] and Moments in Time (MiT) [18]. Our discovered attention cells improve the performance of two backbone networks, I3D [5] and S3D [36], by more than $2\%$ on both datasets, and also outperform non-local blocks, the state-of-the-art manually designed attention cells for videos. Inserting our attention cells into I3D-R50 [32] yields state-of-the-art performance on both datasets. Notably, our discovered attention cells also generalize well across modalities (RGB to optical flow), backbones (e.g., I3D to S3D or I3D to I3D-R50), and datasets (MiT to Kinetics-600 or Kinetics-600 to MiT).
+
+Contributions: (1) This is the first attempt to extend NAS beyond discovering convolutional cells to attention cells. (2) We propose a novel search space for spatiotemporal attention cells that use attention operations as the primary building block, which can be seamlessly inserted into existing backbone networks to improve their performance on video classification. (3) We develop a differentiable formulation of the proposed search space, making it possible to learn the attention cell design with back-propagation and learn position-specific attention cell designs with zero extra cost. (4) Our discovered attention cells outperform non-local blocks, on both the Kinetics-600 and MiT dataset. We achieve state-of-the-art performance on both datasets by inserting our discovered attention cells into I3D-R50. Our attention cells also demonstrate strong generalization capability when being applied to different modalities, backbones, or datasets.
+
+# 2 Related Work
+
+Video Classification. Early work on video classification extends image classification CNNs with recurrent networks [6, 38] or two-stream architectures [24, 8] that take both RGB frames and optical flow frames as inputs. Recent work on video classification is mainly based on 3D convolution [29] or its variants to directly learn video representations from RGB frames. I3D [5] proposes to inflate the filters and pooling kernels of a 2D CNN into 3D to leverage successful 2D CNN architecture designs and their ImageNet pretrained weights. S3D [36] improves upon I3D by decomposing a 3D convolution into a 2D spatial convolution and a 1D temporal convolution. A similar idea is also explored in P3D [20].
+
+CPNet [17] learns video representations by aggregating information from potential correspondences. SlowFast [7] proposes an architecture operating at two different frame rates, where spatial semantics are learned on low frame rates, and temporal dynamics are learned on high frame rates. Different from them, we do not focus on proposing novel CNN architecture designs for video classification. Instead, we focus on discovering attention cells using attention operations as the primary building block, which are complementary to CNNs.
+
+Attention in Vision. Both map-based attention and dot-product attention are useful for computer vision tasks. Map-based attention [19, 33] has been used to improve the performance of CNNs on image recognition, where spatial attention maps are learned to scale the features given by convolutional layers. Dot-product attention [30] has been successfully used in sequence modeling and transduction tasks, e.g., machine translation, and has recently been used to augment CNNs, enhancing their performance on image recognition [2]. Non-local blocks [32] are proposed to capture long-range dependencies in videos and can significantly improve the video classification accuracy of CNNs. Non-local blocks can be viewed as applying one single dot-product attention operation to the spatiotemporal dimension. In contrast, our attention cells can contain multiple attention operations applied to different dimensions of videos. Non-local blocks are a special case of our proposed search space, and our attention cells are discovered automatically in a data-driven way instead of being manually designed.
+
+NAS - Search Space. Search space is crucial for NAS. Randwire [35] shows that one random architecture from a carefully designed search space can achieve competitive performance on image recognition. NASNet [42] proposes to search for convolutional cells that can be stacked multiple times to form the entire architecture. Auto-DeepLab [14] proposes a two-level hierarchical architecture search space for semantic image segmentation. AssembleNet [23] proposes to search for the connectivity between multi-stream convolutional blocks for video classification. They all focus on finding convolutional cells or networks for the end task. Different from them, our proposed search space uses attention as the primary building component instead of convolution.
+
+NAS - Search Algorithm. Various search algorithms have been explored in NAS, such as random search [13, 37], reinforcement learning [1, 41, 42, 39], evolutionary algorithms [34, 22, 21], Bayesian optimization (BO) [11, 3], and differentiable methods [16]. We have tried using GPB (belonging to the category of BO) to search for desired attention cells. We also develop a differentiable formulation of our proposed search space. This makes it possible to conduct the search using differentiable methods and greatly improves the search speed.
+
+# 3 Attention Cell Search Space
+
+We aim to search for spatiotemporal attention cells, which can be seamlessly inserted into a wide range of backbone networks, e.g., I3D [5] or S3D [36], to improve the performance on video understanding tasks.
+
+Formally, an attention cell takes a 4D feature map of shape $(T,H,W,C)$ as input and outputs a feature map of the same shape. $T,H$ , and $W$ are the temporal dimension, height, and width of the feature map, respectively. $C$ denotes the number of channels. The output of an attention cell is enforced to have the same shape as its input by design, so that the discovered attention cells can be easily inserted after any layers in any existing backbone networks.
+
+An attention cell is composed of $K$ primitive attention operations. The proposed attention cell search space consists of an operation level search space and a cell level search space (see Fig. 1). The operation level search space contains different choices to instantiate an individual attention operation. The cell level search space consists of different choices to compose the $K$ operations to form a cell, i.e., the connectivity between the $K$ operations within a cell. We first introduce the operation level search space and then the cell level search space.
+
+# 3.1 Operation Level Search Space
+
+An attention operation takes a feature map of shape $(T, H, W, C_{\mathrm{in}})$ as input and outputs an attended feature map of shape $(T, H, W, C_{\mathrm{out}})$ . For an attention operation, $C_{\mathrm{in}}$ and $C_{\mathrm{out}}$ can differ. To construct an attention operation, we need to make two fundamental choices: the dimension along which to compute the attention weights and the type of the attention operation.
+
+Attention Dimension For brevity, we refer to the dimension along which the attention weights are computed as the attention dimension. In CNNs for video classification, previous work [20, 36, 7] has studied when to use temporal convolution (e.g., $3 \times 1 \times 1$ ), spatial convolution (e.g., $1 \times 3 \times 3$ ), and spatiotemporal convolution (e.g., $3 \times 3 \times 3$ ). It is equally valid to ask what the right dimension is for applying an attention operation to videos: temporal, spatial, or spatiotemporal (temporal and spatial together). The choice of the attention dimension is important, as computing attention weights over different dimensions corresponds to focusing on different aspects of the video.
+
+Attention Operation Type We consider two types of attention operations, each of which helps address a specific limitation of convolutional operations, as mentioned in the introduction:
+
+- Map-based attention [19, 33]: Map-based attention learns a weighting factor for each position in the attention dimension and scales the feature map with the learned attention weights. Map-based attention explicitly models what positions in the attention dimension to attend to in videos.
+- Dot-product attention [30, 32, 2]: A dot-product attention operation computes the feature response at a position as a weighted sum of features of all the positions in the attention dimension, where the weights are determined by a similarity function between features of all the positions [32, 2]. Dot-product attention explicitly models the long-range interactions among distant positions in the attention dimension.
+
+We now describe the details of the two types of attention operations. Let $f_{\mathrm{in}}$ denote the input feature map to an attention operation and denote its shape as $(T, H, W, C_{\mathrm{in}})$ . Applying an attention operation consists of three steps: reshaping the input feature map $f_{\mathrm{in}}$ , computing the attention weights, and applying the attention weights.
+
+Reshape $f_{\mathrm{in}}$ . We reshape $f_{\mathrm{in}}$ into a 2D feature map $f_{\mathrm{in}}^{\prime}$ before computing the attention weights. The first dimension of $f_{\mathrm{in}}^{\prime}$ is the attention dimension and the second dimension contains the remaining dimensions. For example, $f_{\mathrm{in}}^{\prime}$ has the shape of $(T, HWC_{\mathrm{in}})$ when temporal is the attention dimension and has the shape of $(THW, C_{\mathrm{in}})$ when spatiotemporal is the attention dimension. We denote this procedure as a function ReshapeTo2D, i.e., $f_{\mathrm{in}}^{\prime} = \text{ReshapeTo2D}(f_{\mathrm{in}})$ .
+
+Spatial attention requires extra handling. As video content changes over time, when applying attention to the spatial dimension, each frame $f_{\mathrm{in}}^{t}$ should have its own spatial attention weights, where $f_{\mathrm{in}}^{t}$ is the $t^{th}$ frame in $f_{\mathrm{in}}$ and has the shape of $(H, W, C_{\mathrm{in}})$ . Therefore, when spatial is the attention dimension, instead of reshaping the entire 4D feature map $f_{\mathrm{in}}$ , we reshape $f_{\mathrm{in}}^{t}$ into a 2D feature map $f_{\mathrm{in}}^{\prime t}$ of shape $(HW, C_{\mathrm{in}})$ for every $t$ , i.e., $f_{\mathrm{in}}^{\prime t} = \mathsf{ReshapeTo2D}(f_{\mathrm{in}}^{t})(1 \leq t \leq T)$ .
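The reshaping step above can be sketched in NumPy. The function name `reshape_to_2d` and the dimension labels are our own illustrative choices; the paper only specifies the resulting shapes, including the per-frame handling of the spatial case.

```python
import numpy as np

def reshape_to_2d(f_in, attention_dim):
    """Flatten a (T, H, W, C) feature map so the first axis is the
    attention dimension and the second axis holds everything else."""
    T, H, W, C = f_in.shape
    if attention_dim == "temporal":        # f'_in has shape (T, HWC)
        return f_in.reshape(T, H * W * C)
    if attention_dim == "spatiotemporal":  # f'_in has shape (THW, C)
        return f_in.reshape(T * H * W, C)
    if attention_dim == "spatial":         # per frame t: f'^t_in is (HW, C)
        return f_in.reshape(T, H * W, C)
    raise ValueError(f"unknown attention dimension: {attention_dim}")

f = np.zeros((4, 8, 8, 16))
assert reshape_to_2d(f, "temporal").shape == (4, 8 * 8 * 16)
assert reshape_to_2d(f, "spatiotemporal").shape == (4 * 8 * 8, 16)
assert reshape_to_2d(f, "spatial").shape == (4, 64, 16)  # T frames of (HW, C)
```

For the spatial case the function keeps a leading frame axis, so each slice `f[t]` plays the role of $f_{\mathrm{in}}^{\prime t}$ in the text.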
+
+Map-based attention. Assuming temporal is the attention dimension, map-based attention generates $T$ attention weights to scale the feature map of each temporal frame. The attention weights are computed as follows:
+
+$$
+W _ {\text {m a p}} = \operatorname {D i a g} \left(\phi \left(G _ {2} \left(\operatorname {A v g P o o l} \left(G _ {1} \left(f _ {\mathrm {i n}} ^ {\prime}\right)\right)\right)\right)\right). \tag {1}
+$$
+
+$G_{1}$ is a 1D convolutional layer with kernel size 1, which reduces the dimension of the feature response of each temporal frame from $HWC_{\mathrm{in}}$ to $C^\prime$ and gives a feature map of shape $(T,C^{\prime})$ . AvgPool denotes an average pooling operation applied to each temporal frame, outputting a $T$ -dim vector. The multilayer perceptron $G_{2}$ and the activation function $\phi$ (e.g., the sigmoid function) further transform the $T$ -dim vector into $T$ attention weights. More details about the activation function are discussed later. Diag rearranges the $T$ attention weights into a $T\times T$ matrix, where the $T$ attention weights are placed on the diagonal. The resulting attention weight matrix $W_{\mathrm{map}}$ is thus a diagonal matrix.
+
+Similarly, when spatiotemporal is the attention dimension, map-based attention gives a $THW \times THW$ diagonal matrix containing the attention weights. When spatial is the attention dimension, we generate one $HW \times HW$ diagonal matrix for every $f_{\mathrm{in}}^{\prime t}$ ( $1 \leq t \leq T$ ) separately, using the above described procedure. Note that while different frames have separate spatial attention weights, $G_{1}$ and $G_{2}$ are shared among different frames when computing attention weights.
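A minimal NumPy sketch of Eq. (1) for the temporal case follows. The learned layers $G_1$ (a kernel-size-1 convolution) and $G_2$ (an MLP) are stood in for by random linear maps, so the sketch illustrates only the shapes and the diagonal structure of $W_{\mathrm{map}}$, not trained behavior.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def map_based_temporal_attention(f_in, c_prime=8, seed=0):
    """Eq. (1), temporal case: produce a (T, T) diagonal weight matrix.
    G1 and G2 here are random stand-ins for the learned layers."""
    rng = np.random.default_rng(seed)
    T, H, W, C = f_in.shape
    f2d = f_in.reshape(T, H * W * C)                    # ReshapeTo2D (temporal)
    G1 = rng.standard_normal((H * W * C, c_prime)) / np.sqrt(H * W * C)
    pooled = (f2d @ G1).mean(axis=1)                    # AvgPool -> T-dim vector
    G2 = rng.standard_normal((T, T)) / np.sqrt(T)       # one-layer MLP stand-in
    weights = sigmoid(G2 @ pooled)                      # phi = sigmoid -> (T,)
    return np.diag(weights)                             # Diag -> diagonal (T, T)

f = np.random.default_rng(1).standard_normal((4, 8, 8, 16))
W_map = map_based_temporal_attention(f)
assert W_map.shape == (4, 4)
assert np.allclose(W_map, np.diag(np.diag(W_map)))     # off-diagonals are zero
assert np.all((np.diag(W_map) > 0) & (np.diag(W_map) < 1))  # sigmoid range
```

For the spatial case one would produce a separate $HW \times HW$ diagonal matrix per frame while reusing the same `G1` and `G2`, mirroring the sharing described above.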
+
+Dot-product attention. When applying dot-product attention to the temporal dimension, a $T \times T$ attention weight matrix is generated as follows:
+
+$$
+W _ {\text {d o t - p r o d}} = \phi \left(G _ {1} \left(f _ {\text {i n}} ^ {\prime}\right) G _ {2} \left(f _ {\text {i n}} ^ {\prime}\right) ^ {T}\right). \tag {2}
+$$
+
+Here, $G_{1}$ and $G_{2}$ are both 1D convolutional layers with kernel size 1, and they both output a feature map of shape $(T,C')$ . Let $Q = G_{1}(f_{\mathrm{in}}^{\prime})$ and $K = G_{2}(f_{\mathrm{in}}^{\prime})$ . $QK^{T}$ computes a similarity matrix between the features of all the temporal frames. We then use $\phi$ , an activation function of our choice, e.g., the softmax function, to convert the similarity matrix into attention weights. Note that different from $W_{\mathrm{map}}$ , $W_{\mathrm{dot - prod}}$ is a full matrix instead of a diagonal matrix.
+
+When being applied to the spatiotemporal dimension, dot-product attention generates a $THW \times THW$ attention weight matrix. When applying dot-product attention to the spatial dimension, each frame has its own attention weights (a $HW \times HW$ matrix), where $G_{1}$ and $G_{2}$ are shared among different frames.
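The temporal case of Eq. (2) can be sketched analogously. Again, the kernel-size-1 convolutions $G_1$ and $G_2$ are replaced by random linear maps for illustration; the point is the full (non-diagonal) $T \times T$ structure of $W_{\mathrm{dot\text{-}prod}}$ and the row-normalization given by the softmax choice of $\phi$.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_temporal_attention(f_in, c_prime=8, seed=0):
    """Eq. (2), temporal case: W = phi(G1(f') G2(f')^T) with phi = softmax.
    G1 and G2 are random stand-ins for the learned kernel-size-1 convs."""
    rng = np.random.default_rng(seed)
    T, H, W, C = f_in.shape
    f2d = f_in.reshape(T, H * W * C)                 # ReshapeTo2D (temporal)
    G1 = rng.standard_normal((H * W * C, c_prime)) / np.sqrt(H * W * C)
    G2 = rng.standard_normal((H * W * C, c_prime)) / np.sqrt(H * W * C)
    Q, K = f2d @ G1, f2d @ G2                        # both (T, C')
    return softmax(Q @ K.T, axis=-1)                 # full (T, T) weight matrix

f = np.random.default_rng(1).standard_normal((4, 8, 8, 16))
W_dot = dot_product_temporal_attention(f)
assert W_dot.shape == (4, 4)
assert np.allclose(W_dot.sum(axis=-1), 1.0)          # each row is a distribution
```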
+
+Apply the attention weights. We apply the attention weight matrix to the input feature map through matrix multiplication to obtain the attended feature map:
+
+$$
+f _ {\text {o u t}} = \text {R e s h a p e T o 2 D} ^ {- 1} \left(W \text {R e s h a p e T o 2 D} \left(G _ {3} \left(f _ {\text {i n}}\right)\right)\right). \tag {3}
+$$
+
+$W$ is the weight matrix generated by map-based attention ( $W_{\mathrm{map}}$ ) or dot-product attention ( $W_{\mathrm{dot - prod}}$ ). $G_{3}$ is a $1 \times 1 \times 1$ convolutional layer to reduce the number of channels of $f_{\mathrm{in}}$ from $C_{\mathrm{in}}$ to $C_{\mathrm{out}}$ . If temporal is the attention dimension, $W$ has the shape of $(T,T)$ and ReshapeTo2D( $G_{3}(f_{\mathrm{in}})$ ) has the shape $(T,HWC_{\mathrm{out}})$ . ReshapeTo2D $^{-1}$ is the inverse function of ReshapeTo2D, reshaping the attended feature map back to the shape of $(T,H,W,C_{\mathrm{out}})$ .
+
+For spatial attention, the attention weights are applied to each frame independently, i.e., $f_{\mathrm{out}}^{t} = \mathsf{ReshapeTo2D}^{-1}(W^{t}\mathsf{ReshapeTo2D}(G_{3}(f_{\mathrm{in}}^{t})))$ , where $W^{t}$ is the spatial attention weights for frame $t$ and $f_{\mathrm{out}}^{t}$ has the shape of $(H,W,C_{\mathrm{out}})$ . We stack $\{f_{\mathrm{out}}^{t} \mid 1 \leq t \leq T\}$ along the temporal dimension to form the attended feature map $f_{\mathrm{out}}$ of shape $(T,H,W,C_{\mathrm{out}})$ . Similar to $G_{1}$ and $G_{2}$ used for computing attention weights, $G_{3}$ is also shared among different frames.
+
+Note that by design $G_{3}$ only changes the number of channels, i.e., it transforms the features at each spatiotemporal position independently; the spatiotemporal structure of the input $f_{\mathrm{in}}$ is preserved. This ensures that after the attention weights are applied, $f_{\mathrm{out}}$ still follows the original spatiotemporal structure of the input $f_{\mathrm{in}}$ .
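A minimal NumPy sketch of Eq. 3 for the temporal case may help make the two reshapes concrete. The $1 \times 1 \times 1$ convolution $G_3$ is represented as a per-position projection matrix (an assumption of this sketch):

```python
import numpy as np

def apply_temporal_attention(W, f_in, G3):
    # Eq. 3 for temporal attention. W: (T, T) attention weights,
    # f_in: (T, H, Wd, C_in), G3: (C_in, C_out) matrix standing in
    # for the 1x1x1 convolution (a per-position channel projection).
    T, H, Wd, _ = f_in.shape
    g = f_in @ G3                          # (T, H, Wd, C_out)
    flat = g.reshape(T, -1)                # ReshapeTo2D: (T, H*Wd*C_out)
    attended = W @ flat                    # mix features across frames
    return attended.reshape(T, H, Wd, -1)  # ReshapeTo2D^{-1}

rng = np.random.default_rng(1)
f_in = rng.standard_normal((3, 2, 2, 5))
G3 = rng.standard_normal((5, 4))
f_out = apply_temporal_attention(np.eye(3), f_in, G3)
```

With $W$ set to the identity, the output reduces to $G_3(f_{\mathrm{in}})$, which confirms that only the channel dimension is transformed.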
+
+Activation function. We empirically find that the activation function $\phi$ (see Eq. 1 and Eq. 2) used in the attention operation can influence the performance. So, we also include the choice of the activation function in the operation level search space and rely on the search algorithm to choose the right one for each attention operation. We consider the following four choices for the activation function: (1) no activation function, (2) ReLU, (3) sigmoid, and (4) softmax.
+
+# 3.2 Cell Level Search Space
+
+We define an attention cell as a cell composed of $K$ attention operations. Let $f_0$ denote the input feature map to the entire attention cell and $(T, H, W, C)$ be the shape of $f_0$ . $f_0$ is usually the output of a stack of convolutional layers. An attention cell takes $f_0$ as input and outputs a feature map of the same shape.
+
+The connectivity between convolutional layers is essential to the performance of CNNs, no matter if the network is manually designed, e.g., ResNet [10] and
+
+Inception [28], or automatically discovered [41, 42, 35]. Similarly, to build an attention cell, another critical design choice is how the $K$ attention operations are connected inside the cell, apart from the design of these attention operations.
+
+As shown in Fig. 1, in an attention cell, the first attention operation always takes $f_{0}$ as input and outputs feature map $f_{1}$ . The $k^{th}(2 \leq k \leq K)$ attention operation chooses its input from $\{f_{0}, f_{1}, \ldots, f_{k-1}\}$ and gives feature map $f_{k}$ based on the selected input. We allow the $k^{th}$ operation to choose multiple feature maps from $\{f_{0}, f_{1}, \ldots, f_{k-1}\}$ and compute a weighted sum of selected feature maps as its input, where the weights are learnable parameters. This process is repeated for all $k$ and allows us to explore all possible connectivities between the $K$ attention operations in the cell.
+
+We combine $\{f_1, f_2, \ldots, f_K\}$ to obtain the output feature map of the entire attention cell. For all attention operations inside the cell, we set their output shape to be $(T, H, W, C_{\mathrm{op}})$ , i.e., $f_k$ has the shape of $(T, H, W, C_{\mathrm{op}})$ for all $k (1 \leq k \leq K)$ . $C_{\mathrm{op}}$ is usually smaller than $C$ to limit the computation in an attention cell with multiple attention operations. We concatenate $\{f_1, f_2, \ldots, f_K\}$ along the channel dimension and then employ a $1 \times 1 \times 1$ convolution to transform the concatenated feature map back to the same shape as the input $f_0$ . We denote the feature map after transformation as $f_{\mathrm{comb}}$ . Similar to non-local blocks [32], we add a residual connection between the input and output of the attention cell. So the final output of the attention cell is the sum of $f_0$ and $f_{\mathrm{comb}}$ . The combination procedure is the same for all attention cells.
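The combination procedure above can be sketched in NumPy as follows; the $1 \times 1 \times 1$ convolution is again represented as a per-position projection matrix, which is an assumption of this sketch rather than the paper's exact layer:

```python
import numpy as np

def combine_cell_outputs(f0, fks, W_proj):
    # f0: cell input of shape (T, H, W, C); fks: list of K feature maps,
    # each (T, H, W, C_op); W_proj: (K*C_op, C) stands in for the
    # 1x1x1 convolution that maps back to C channels.
    cat = np.concatenate(fks, axis=-1)   # concat along the channel dim
    f_comb = cat @ W_proj                # project back to C channels
    return f0 + f_comb                   # residual connection

rng = np.random.default_rng(2)
f0 = rng.standard_normal((2, 3, 3, 6))
fks = [rng.standard_normal((2, 3, 3, 2)) for _ in range(3)]  # K=3, C_op=2
out = combine_cell_outputs(f0, fks, rng.standard_normal((6, 6)))
```

The residual connection means that with a zero projection the cell reduces to the identity, which makes inserting cells into a pretrained backbone less disruptive.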
+
+# 4 Search Algorithm
+
+# 4.1 Gaussian Process Bandit (GPB)
+
+Given $K$ , i.e., the number of attention operations inside the attention cell, the attention cell design can be parameterized by a fixed number of hyper-parameters, including the attention dimension, the type and the activation function of each attention operation, and the input to each attention operation.
+
+We employ GPB [26, 25], a popular hyper-parameter optimization algorithm, to optimize all the hyper-parameters for the attention cell design jointly. Intuitively, GPB can predict the performance of an attention cell at a modest computational cost without actually training the entire network, based on those already evaluated attention cells. Such prediction helps GPB to select promising attention cells to evaluate in the following step and makes it possible to discover high-performing attention cells within a limited number of search steps.
+
+Concretely, in GPB, the performance of an attention cell is modeled as a sample from a Gaussian process. At each search step, GPB selects the attention cell to evaluate by maximizing the Gaussian process upper confidence bound, conditioned on the already evaluated attention cells.
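To illustrate the selection rule, here is a minimal GP-UCB sketch in NumPy. Encoding cell designs as real-valued vectors and using an RBF kernel are assumptions of this sketch; the paper does not specify these details.

```python
import numpy as np

def gp_ucb_select(X_seen, y_seen, X_cand, beta=2.0, length=1.0, noise=1e-6):
    # Pick the candidate with the highest GP upper confidence bound.
    # X_seen: encodings of evaluated cells, y_seen: their accuracies,
    # X_cand: candidate cell encodings (all real-valued vectors).
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))

    K = rbf(X_seen, X_seen) + noise * np.eye(len(X_seen))
    Ks = rbf(X_cand, X_seen)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ y_seen                             # posterior mean
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)  # posterior variance
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))      # acquisition value
    return int(np.argmax(ucb))

X_seen = np.array([[0.0]])
y_seen = np.array([1.0])
X_cand = np.array([[0.0], [5.0]])   # one known-good cell, one unexplored
choice = gp_ucb_select(X_seen, y_seen, X_cand)
```

The unexplored candidate wins here: its posterior variance is high, so the UCB term rewards exploration, which is exactly how GPB balances exploiting good known cells against trying new ones.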
+
+# 4.2 Differentiable Architecture Search
+
+Inspired by recent progress on differentiable architecture search [16], we develop a differentiable formulation of our proposed search space. The formulation makes
+
+
+Fig. 2: Illustration of the supergraph used by the differentiable method.
+
+it possible to jointly learn the attention cell design and network weights with back-propagation, without explicitly sampling and evaluating different cells.
+
+Differentiable Formulation of Search Space We propose to represent the attention cell search space as a supergraph, where all the possible attention cells are different subgraphs of this supergraph. The supergraph representation allows us to parameterize the design of an attention cell with a set of continuous and differentiable connection weights between the nodes in the supergraph.
+
+To be more specific, we define the supergraph to have $m$ levels, where each level has $n$ nodes. Each node is an attention operation of a pre-defined type (map-based or dot-product attention) and a pre-defined attention dimension. Fig. 2 shows an example supergraph with 2 levels, where each level has 4 nodes. The input feature map to the entire attention cell is passed to all the nodes at the first level. Starting from the second level, the input feature map to a node is a weighted sum of the output feature maps of all the nodes at its previous level:
+
+$$
+f _ {i, j} ^ {\text {in}} = \sum_ {k = 1} ^ {n} w _ {i, j, k} ^ {\text {level}} \cdot f _ {i - 1, k} ^ {\text {out}}, \tag {4}
+$$
+
+where $2 \leq i \leq m$ , $1 \leq j \leq n$ , $f_{i,j}^{\mathrm{in}}$ is the input to the $j^{th}$ node at $i^{th}$ level, $f_{i-1,k}^{\mathrm{out}}$ is the output of the $k^{th}$ node at $(i-1)^{th}$ level, and $w_{i,j}^{\mathrm{level}}$ are the connection weights between the $j^{th}$ node at $i^{th}$ level and all the nodes at $(i-1)^{th}$ level. In practice, $w_{i,j}^{\mathrm{level}}$ is a probability distribution obtained by softmax.
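Eq. 4 amounts to a softmax-weighted average of the previous level's outputs. A minimal NumPy sketch (names are our own):

```python
import numpy as np

def node_input(prev_outputs, theta):
    # Eq. 4: the input to one node at level i is a weighted sum of the
    # n outputs of level i-1; theta are raw connection logits turned
    # into a probability distribution by softmax.
    e = np.exp(theta - theta.max())
    w = e / e.sum()                          # softmax -> sums to 1
    return np.tensordot(w, prev_outputs, axes=1)

# Three previous-level outputs of shape (T, H, W, C_op) = (2, 2, 2, 3),
# filled with the constants 0, 1 and 2 for easy checking.
prev = np.stack([np.full((2, 2, 2, 3), v) for v in (0.0, 1.0, 2.0)])
f_in = node_input(prev, np.zeros(3))  # uniform weights -> elementwise mean
```

With all logits equal, the node input is simply the mean of the previous level's outputs; during search, gradient descent shifts the logits to favor useful connections.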
+
+For each node in the supergraph, we also learn a probability distribution over the possible choices of activation functions. The output of a node is a weighted sum of the attended feature map under different activation functions:
+
+$$
+f _ {i, j} ^ {\text {out}} = \sum_ {k = 1} ^ {| \mathcal {A} |} w _ {i, j, k} ^ {\text {activation}} \cdot f _ {i, j} ^ {\text {out}, \phi_ {k}}, \tag {5}
+$$
+
+where $\mathcal{A}$ is the set of available activation functions, $\phi_{k}$ is the $k^{th}$ activation function in $\mathcal{A}$ , $w_{i,j,k}^{\mathrm{activation}}$ is the weighting factor to be learned for $\phi_{k}$ , and $f_{i,j}^{\mathrm{out},\phi_{k}}$
+
+is the attended feature map under the activation function $\phi_{k}$ . The only difference among these attended feature maps $\{f_{i,j}^{\mathrm{out},\phi_k}\}$ is the activation function $\phi$ used in Eq. 1 or Eq. 2. The layers $G_{1}$ , $G_{2}$ and $G_{3}$ are shared by different activation functions within one node.
+
+The supergraph has a sink node, receiving the output feature maps of all the nodes. The sink node is defined as follows:
+
+$$
+f _ {\text {sink}} ^ {\text {out}} = \sum_ {1 \leq i \leq m, \, 1 \leq j \leq n} w _ {i, j} ^ {\text {sink}} \cdot G _ {i, j} \left( f _ {i, j} ^ {\text {out}} \right), \tag {6}
+$$
+
+where $f_{\mathrm{sink}}^{\mathrm{out}}$ is the output of the sink node, $f_{i,j}^{\mathrm{out}}$ is the output of the $j^{th}$ node at $i^{th}$ level, $G_{i,j}$ is a $1 \times 1 \times 1$ convolutional layer changing the number of channels in $f_{i,j}^{\mathrm{out}}$ to $C$ , and $w_{i,j}^{\mathrm{sink}}$ is the weighting factor to be learned. We enforce $f_{\mathrm{sink}}^{\mathrm{out}}$ to have the same shape as the input to the supergraph, so that the supergraph can be inserted into any position of the backbone network. Same as attention cells, a residual connection is added between the input and output of the supergraph.
+
+Attention Cell Design Learning Both the network weights, e.g., weights of convolutional layers in the network, and the connection weights in the supergraph ( $\{w^{\mathrm{level}}, w^{\mathrm{sink}}, w^{\mathrm{activation}}\}$ ) are differentiable. During the search, we insert supergraphs into the backbone network and jointly optimize the network weights and connection weights by minimizing the training loss using gradient descent. The entire search process only consumes a computational cost similar to fully training one network on the training videos. Once the training is completed, we can derive the attention cell design from the learned connection weights.
+
+Note that we insert the supergraphs at positions where the final attention cells will be inserted. In practice, usually multiple supergraphs or attention cells (e.g., 5) are inserted into the backbone network. If we enforce the inserted supergraphs to share the same set of connection weights, we will obtain one single attention cell design, dubbed the position-agnostic attention cell.
+
+One significant advantage of the differentiable method is that we can also learn separate connection weights for supergraphs inserted at different positions, which will give position-specific attention cells (see Table 2). Searching for separate attention cells for different positions results in an exponentially larger search space than searching for one single attention cell. But thanks to the differentiable method, we can learn position-specific attention cells with zero extra cost compared to learning one position-agnostic attention cell.
+
+Attention Cell Design Derivation We derive the attention cell design from the learned continuous connection weights. We first choose the top $\alpha$ nodes with the highest weights in $w^{\mathrm{sink}}$ and add them to the set $S$ . Then for each node in $S$ , we add its top $\beta$ predecessors in its previous level to $S$ , based on the corresponding connection weights in $w^{\mathrm{level}}$ . This process is conducted recursively for every node in $S$ until we reach the first level. $\alpha$ and $\beta$ are two hyper-parameters.
+
+Recall that each node is an attention operation of a pre-defined type and attention dimension. So, $S$ contains a set of selected attention operations. The
+
+construction process of $S$ also determines how these attention operations are connected. For each selected attention operation, we determine its activation function based on the corresponding weighting factors in $w^{\text{activation}}$ .
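The recursive top-$\alpha$/top-$\beta$ selection described above can be sketched as follows. The dictionary encoding of the connection weights is our own assumption for illustration:

```python
def derive_cell(w_sink, w_level, alpha, beta):
    # w_sink: {(level, node): weight} for all nodes in the supergraph.
    # w_level: {(level, node): {pred_node: weight}} for levels >= 2.
    # Returns the set S of selected (level, node) pairs.
    S = set(sorted(w_sink, key=w_sink.get, reverse=True)[:alpha])
    stack = list(S)
    while stack:
        level, node = stack.pop()
        if level == 1:
            continue                  # first level: no predecessors
        preds = w_level[(level, node)]
        for p in sorted(preds, key=preds.get, reverse=True)[:beta]:
            if (level - 1, p) not in S:
                S.add((level - 1, p))
                stack.append((level - 1, p))
    return S

# Tiny 2-level, 2-node example: the sink strongly prefers node 0 at
# level 2, which in turn prefers node 0 at level 1 as its predecessor.
w_sink = {(1, 0): 0.1, (1, 1): 0.05, (2, 0): 0.8, (2, 1): 0.05}
w_level = {(2, 0): {0: 0.7, 1: 0.3}, (2, 1): {0: 0.5, 1: 0.5}}
S = derive_cell(w_sink, w_level, alpha=1, beta=1)
```

With $\alpha = \beta = 1$, the derived cell is a single chain from level 1 to the sink through the highest-weighted connections.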
+
+# 5 Experiments
+
+# 5.1 Experimental Setup
+
+Datasets. We conduct experiments on two benchmark datasets: Kinetics-600 [4] and Moments in Time (MiT) [18]. Top-1 and top-5 classification accuracy are used as the evaluation metrics for both datasets.
+
+Backbones. We conduct the attention cell search on two backbones: I3D [5] and S3D [36]. Both I3D and S3D are constructed based on the Inception [28] network. When examining the generalization of the found cells, we also consider the backbone I3D-R50 [32], which is constructed based on ResNet-50 [10].
+
+Baselines. Non-local blocks [32] are the state-of-the-art manually designed attention cell for video classification and are the most direct competitor of our automatically searched attention cells. We mainly focus on the relative improvement brought by our attention cells after being inserted into backbones. Besides non-local blocks, we also compare with other state-of-the-art methods for video classification, such as TSN [31], TRN [40], and SlowFast [7].
+
+# 5.2 Search Results
+
+Table 1 shows the search results of GPB and Table 2 summarizes the search results using the differentiable method. Notably, attention cells found by the differentiable method can improve the accuracy of both backbones by more than $2\%$ on both datasets, and consistently outperform non-local blocks on all the combinations of backbones and datasets.
+
+In Table 2, 'Pos-Agnostic' means that a single attention cell design is learned for all the positions where the cells are inserted. 'Pos-Specific' means that we learn a separate attention cell design for each position where a cell is inserted, i.e., the cells inserted at different positions can differ. We observe that position-specific attention cells consistently outperform position-agnostic attention cells.
+
+# 5.3 Generalization of Discovered Cells
+
+We examine how well the discovered attention cells can generalize to new settings. We do not perform any search in the following experiments, but directly apply attention cells searched for one setting to a different setting and test whether they still improve classification performance. Concretely, we evaluate whether our discovered attention cells can generalize across different modalities, backbones, and datasets.
+
+Modality. We insert the attention cells discovered on RGB frames into the backbone and train the network on optical flow only. The results are summarized in Table 3. 'GPB' refers to cells discovered by GPB and 'Differentiable'
+
+Table 1: Search results on Kinetics-600 and MiT using GPB. Our attention cells improve the classification accuracy for both backbones and on both datasets.
+
+| Model | Kinetics Top-1 | Kinetics Top-5 | Kinetics ΔTop-1 | MiT Top-1 | MiT Top-5 | MiT ΔTop-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| I3D Backbone [5] | 75.58 | 92.93 | - | 27.38 | 54.29 | - |
+| Non-local [32] | 76.87 | 93.44 | 1.29 | 28.54 | 55.35 | 1.16 |
+| Ours - GPB | 77.39 | 93.63 | 1.81 | 28.41 | 55.49 | 1.03 |
+| S3D Backbone [36] | 76.15 | 93.22 | - | 27.69 | 54.68 | - |
+| Non-local [32] | 77.56 | 93.68 | 1.41 | 29.52 | 56.91 | 1.83 |
+| Ours - GPB | 78.28 | 94.04 | 2.13 | 29.23 | 56.22 | 1.54 |
+
+Table 2: Search results on Kinetics-600 and MiT using the differentiable method. Our attention cells consistently outperform non-local blocks on all the combinations of backbones and datasets. Position-specific attention cells ('Pos-Specific') consistently outperform position-agnostic attention cells ('Pos-Agnostic').
+
+| Model | Kinetics Top-1 | Kinetics Top-5 | Kinetics ΔTop-1 | MiT Top-1 | MiT Top-5 | MiT ΔTop-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| I3D Backbone [5] | 75.58 | 92.93 | - | 27.38 | 54.29 | - |
+| Non-local [32] | 76.87 | 93.44 | 1.29 | 28.54 | 55.35 | 1.16 |
+| Ours - Pos-Agnostic | 77.56 | 93.63 | 1.98 | 28.18 | 55.01 | 0.80 |
+| Ours - Pos-Specific | 77.86 | 93.75 | 2.28 | 29.58 | 56.62 | 2.20 |
+| S3D Backbone [36] | 76.15 | 93.22 | - | 27.69 | 54.68 | - |
+| Non-local [32] | 77.56 | 93.68 | 1.41 | 29.52 | 56.91 | 1.83 |
+| Ours - Pos-Agnostic | 77.82 | 93.72 | 1.67 | 29.19 | 55.96 | 1.50 |
+| Ours - Pos-Specific | 78.51 | 93.88 | 2.36 | 29.82 | 57.02 | 2.13 |
+
+refers to cells discovered by the differentiable method. Our attention cells significantly improve the classification accuracy when applied to optical flow and consistently outperform non-local blocks for both backbones and on both datasets. For example, our attention cells improve the accuracy of I3D by $5.67\%$ on Kinetics-600. Note that the cells are discovered by maximizing their performance on RGB frames, and no optical flow is involved during the search. This demonstrates that our cells discovered on RGB frames can generalize well to optical flow.
+
+Backbone. Table 4 summarizes the results of inserting cells discovered for one backbone into another backbone. The second row shows that cells discovered for S3D can still improve the classification accuracy of I3D by about $2\%$ on both datasets, even though these cells are never optimized to improve the performance of I3D. We observe similar improvement when inserting cells found for I3D into S3D (third row), or cells found for I3D/S3D into I3D-R50 (last row). Notably, our attention cells can still outperform non-local blocks even after being inserted into a different backbone. For example, cells found for S3D achieve $77.81\%$ accuracy on Kinetics-600 after being inserted into I3D, which outperforms non-local blocks $(76.87\%)$ and performs similarly to cells specifically discovered for I3D $(77.86\%)$ .
+
+Table 3: Generalization across different modalities (RGB to Optical flow).
+
+| Model | Kinetics Top-1 | Kinetics Top-5 | Kinetics ΔTop-1 | MiT Top-1 | MiT Top-5 | MiT ΔTop-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| I3D Backbone [5] | 61.14 | 82.77 | - | 20.01 | 42.42 | - |
+| Non-local [32] | 64.88 | 85.77 | 3.74 | 21.86 | 46.59 | 1.85 |
+| Ours - GPB | 65.81 | 87.04 | 4.67 | 21.83 | 45.45 | 1.82 |
+| Ours - Differentiable | 66.81 | 87.85 | 5.67 | 21.94 | 45.57 | 1.93 |
+| S3D Backbone [36] | 62.46 | 84.59 | - | 20.50 | 42.86 | - |
+| Non-local [32] | 65.79 | 86.85 | 3.33 | 22.13 | 46.48 | 1.63 |
+| Ours - GPB | 67.02 | 87.72 | 4.56 | 22.29 | 46.16 | 1.79 |
+| Ours - Differentiable | 66.29 | 86.97 | 3.83 | 22.52 | 46.30 | 2.02 |
+
+Table 4: Generalization across different backbones.
+
+| Model | Kinetics Top-1 | Kinetics Top-5 | Kinetics ΔTop-1 | MiT Top-1 | MiT Top-5 | MiT ΔTop-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| I3D Backbone [5] | 75.58 | 92.93 | - | 27.38 | 54.29 | - |
+| S3D - GPB | 77.47 | 93.67 | 1.89 | 28.92 | 56.09 | 1.54 |
+| S3D - Differentiable | 77.81 | 93.74 | 2.23 | 29.26 | 56.61 | 1.88 |
+| S3D Backbone [36] | 76.15 | 93.22 | - | 27.69 | 54.68 | - |
+| I3D - GPB | 78.23 | 94.07 | 2.08 | 29.45 | 56.50 | 1.76 |
+| I3D - Differentiable | 78.46 | 94.05 | 2.31 | 29.67 | 57.05 | 1.98 |
+| I3D-R50 Backbone [32] | 78.10 | 93.79 | - | 30.63 | 58.15 | - |
+| I3D - Differentiable | 79.83 | 94.37 | 1.73 | 32.48 | 60.31 | 1.85 |
+| S3D - Differentiable | 79.71 | 94.28 | 1.61 | 31.91 | 59.87 | 1.28 |
+
+Dataset. We insert attention cells discovered on MiT into the corresponding backbone, fully train the network on Kinetics-600, and report its accuracy on Kinetics-600 in the middle column ('MiT to Kinetics') of Table 5. We observe that cells discovered on MiT can improve the accuracy on Kinetics-600 by more than $2\%$ , although they are never optimized to improve the Kinetics-600 performance during the search. Similarly, the right column ('Kinetics to MiT') demonstrates that the cells searched on Kinetics-600 can also generalize gracefully to MiT. We conclude that our attention cells generalize well across datasets.
+
+# 5.4 Comparison with State-of-the-art
+
+We insert our attention cells found on I3D into I3D-R50 ('I3D-R50+Cell') and compare with the state-of-the-art methods in Table 6. On Kinetics-600, we obtain performance similar to SlowFast-R50 [7] with fewer inference FLOPs. On MiT, we achieve $32.48\%$ top-1 accuracy and $60.31\%$ top-5 accuracy using only RGB frames. This significantly outperforms the previous state-of-the-art method AssembleNet-50 [23], which uses both RGB frames and optical flow.
+
+Table 5: Generalization across different datasets.
+
+| Model | MiT to Kinetics Top-1 | MiT to Kinetics Top-5 | MiT to Kinetics ΔTop-1 | Kinetics to MiT Top-1 | Kinetics to MiT Top-5 | Kinetics to MiT ΔTop-1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| I3D Backbone [5] | 75.58 | 92.93 | - | 27.38 | 54.29 | - |
+| GPB | 77.34 | 93.47 | 1.76 | 27.62 | 56.70 | 0.24 |
+| Differentiable | 77.85 | 93.89 | 2.27 | 29.45 | 56.83 | 2.07 |
+| S3D Backbone [36] | 76.15 | 93.22 | - | 27.69 | 54.68 | - |
+| GPB | 77.54 | 93.62 | 1.39 | 28.80 | 56.16 | 1.11 |
+| Differentiable | 78.19 | 93.98 | 2.04 | 29.33 | 56.33 | 1.64 |
+
+Table 6: Comparison with the state-of-the-art methods. Our method ('I3D-R50+Cell') obtains performance similar to or higher than the state-of-the-art methods on both Kinetics-600 and MiT.
+(a) Kinetics-600.
+
+| Model | Top-1 | Top-5 | GFLOPs |
+| --- | --- | --- | --- |
+| I3D [5] | 75.58 | 92.93 | 1136 |
+| S3D [36] | 76.15 | 93.22 | 656 |
+| I3D-R50 [32] | 78.10 | 93.79 | 938 |
+| D3D [27] | 77.90 | - | - |
+| I3D+NL [32] | 76.87 | 93.44 | 1305 |
+| S3D+NL [32] | 77.56 | 93.68 | 825 |
+| TSN-IRv2 [31] | 76.22 | - | 411 |
+| StNet-IRv2 [9] | 78.99 | - | 440 |
+| SlowFast-R50 [7] | 79.9 | 94.5 | 1971 |
+| I3D-R50+Cell | 79.83 | 94.37 | 1034 |
+
+(b) MiT.
+
+| Model | Top-1 | Top-5 | Modality |
+| --- | --- | --- | --- |
+| I3D [5] | 27.38 | 54.29 | RGB |
+| S3D [36] | 27.69 | 54.68 | RGB |
+| I3D+NL [32] | 28.54 | 55.35 | RGB |
+| S3D+NL [32] | 29.52 | 56.91 | RGB |
+| R50-ImageNet [18] | 27.16 | 51.68 | RGB |
+| TSN-Spatial [31] | 24.11 | 49.10 | RGB |
+| I3D-R50 [32] | 30.63 | 58.15 | RGB |
+| I3D-R50+Cell | 32.48 | 60.31 | RGB |
+| TSN-2stream [31] | 25.32 | 50.10 | R+F |
+| TRN-Multiscale [40] | 28.27 | 53.87 | R+F |
+| AssembleNet-50 [23] | 31.41 | 58.33 | R+F |
+
+# 6 Conclusions
+
+We propose a novel search space of spatiotemporal attention cells for video classification. We also propose a differentiable formulation of the search space, allowing us to learn position-specific attention cell designs with zero extra cost compared to learning a single position-agnostic attention cell. We show the significance of our discovered attention cells on two large-scale video classification benchmarks. The discovered attention cells outperform non-local blocks and demonstrate strong generalization when applied to different modalities, backbones, or datasets.
+
+Acknowledgement. We thank Guanhang Wu and Yinxiao Li for insightful discussions and the larger Google Cloud Video AI team for the support.
+
+# References
+
+1. Baker, B., Gupta, O., Naik, N., Raskar, R.: Designing neural network architectures using reinforcement learning. In: ICLR (2017)
+2. Bello, I., Zoph, B., Vaswani, A., Shlens, J., Le, Q.V.: Attention augmented convolutional networks. In: ICCV (2019)
+3. Cao, S., Wang, X., Kitani, K.M.: Learnable embedding space for efficient neural architecture compression. In: ICLR (2019)
+4. Carreira, J., Noland, E., Banki-Horvath, A., Hillier, C., Zisserman, A.: A short note about kinetics-600. arXiv preprint arXiv:1808.01340 (2018)
+5. Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the kinetics dataset. In: CVPR (2017)
+6. Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. In: CVPR (2015)
+7. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: ICCV (2019)
+8. Feichtenhofer, C., Pinz, A., Zisserman, A.: Convolutional two-stream network fusion for video action recognition. In: CVPR (2016)
+9. He, D., Zhou, Z., Gan, C., Li, F., Liu, X., Li, Y., Wang, L., Wen, S.: Stnet: Local and global spatial-temporal modeling for action recognition. In: AAAI (2019)
+10. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
+11. Kandasamy, K., Neiswanger, W., Schneider, J., Poczos, B., Xing, E.P.: Neural architecture search with bayesian optimisation and optimal transport. In: NeurIPS (2018)
+12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NeurIPS (2012)
+13. Li, L., Talwalkar, A.: Random search and reproducibility for neural architecture search. In: UAI (2019)
+14. Liu, C., Chen, L.C., Schroff, F., Adam, H., Hua, W., Yuille, A.L., Fei-Fei, L.: Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In: CVPR (2019)
+15. Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L.J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search. In: ECCV (2018)
+16. Liu, H., Simonyan, K., Yang, Y.: DARTS: Differentiable architecture search. In: ICLR (2019)
+17. Liu, X., Lee, J.Y., Jin, H.: Learning video representations from correspondence proposals. In: CVPR (2019)
+18. Monfort, M., Andonian, A., Zhou, B., Ramakrishnan, K., Bargal, S.A., Yan, T., Brown, L., Fan, Q., Gutfreund, D., Vondrick, C., et al.: Moments in time dataset: one million videos for event understanding. TPAMI (2019)
+19. Park, J., Woo, S., Lee, J.Y., Kweon, I.S.: Bam: Bottleneck attention module. In: BMVC (2018)
+20. Qiu, Z., Yao, T., Mei, T.: Learning spatio-temporal representation with pseudo-3d residual networks. In: ICCV (2017)
+21. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: AAAI (2019)
+22. Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J., Le, Q.V., Kurakin, A.: Large-scale evolution of image classifiers. In: ICML (2017)
+
+23. Ryoo, M.S., Piergiovanni, A., Tan, M., Angelova, A.: Assemblenet: Searching for multi-stream neural connectivity in video architectures. In: ICLR (2020)
+24. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: NeurIPS (2014)
+25. Snoek, J., Larochelle, H., Adams, R.P.: Practical bayesian optimization of machine learning algorithms. In: NeurIPS (2012)
+26. Srinivas, N., Krause, A., Kakade, S.M., Seeger, M.W.: Gaussian process optimization in the bandit setting: No regret and experimental design. In: ICML (2009)
+27. Stroud, J., Ross, D., Sun, C., Deng, J., Sukthankar, R.: D3d: Distilled 3d networks for video action recognition. In: WACV (2020)
+28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
+29. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3d convolutional networks. In: ICCV (2015)
+30. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017)
+31. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Van Gool, L.: Temporal segment networks: Towards good practices for deep action recognition. In: ECCV (2016)
+32. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR (2018)
+33. Woo, S., Park, J., Lee, J.Y., So Kweon, I.: Cbam: Convolutional block attention module. In: ECCV (2018)
+34. Xie, L., Yuille, A.: Genetic cnn. In: ICCV (2017)
+35. Xie, S., Kirillov, A., Girshick, R., He, K.: Exploring randomly wired neural networks for image recognition. In: ICCV (2019)
+36. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: ECCV (2018)
+37. Yu, K., Sciuto, C., Jaggi, M., Musat, C., Salzmann, M.: Evaluating the search phase of neural architecture search. In: ICLR (2020)
+38. Yue-Hei Ng, J., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., Toderici, G.: Beyond short snippets: Deep networks for video classification. In: CVPR (2015)
+39. Zhong, Z., Yan, J., Wu, W., Shao, J., Liu, C.L.: Practical block-wise neural network architecture generation. In: CVPR (2018)
+40. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: ECCV (2018)
+41. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2017)
+42. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: CVPR (2018)
\ No newline at end of file
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/images.zip b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5c0b7a4ef186b9c578bc8bdc026a05972f17bf75
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:886869dedcb4b3a7be171c537d2cabc7d73e20d74f6fdf799ababfb70c8b1570
+size 416842
diff --git a/attentionnasspatiotemporalattentioncellsearchforvideoclassification/layout.json b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7a4be91aa93f6247f2194835fd2cd4e28f85bd23
--- /dev/null
+++ b/attentionnasspatiotemporalattentioncellsearchforvideoclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b1ba84b4f3efd0435a6d8429b3191492249e12ab5e6b051e94cafd99ca755ba
+size 471810
diff --git a/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_content_list.json b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..dce0ce8a00a50d0739e26b0cea547d92ce61807c
--- /dev/null
+++ b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15d3be3f6144f2ff3ccf7739b773e39a2331eb133c52f34734ec467819566d6d
+size 93806
diff --git a/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_model.json b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1396290651c609c199f4b98e712bc5946ee8e4a7
--- /dev/null
+++ b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:226c73a21b86a7d4f5b60d758e1a7cfed96deaaed51db330864b30ee22007c09
+size 112754
diff --git a/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_origin.pdf b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9777548cc1d2bda2b1ee29541c392e865f778256
--- /dev/null
+++ b/attentivenormalization/0cbc4da1-ca7c-4341-957f-35bc77522a1f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a0a30c4a3c12eb1fae13e2f79ca3ede58382fa61b78f866821f9237aaaa87c6
+size 757372
diff --git a/attentivenormalization/full.md b/attentivenormalization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5045453785f23270202f4bd7ba73d37c268c89a3
--- /dev/null
+++ b/attentivenormalization/full.md
@@ -0,0 +1,313 @@
+# Attentive Normalization
+
+Xilai Li, Wei Sun, and Tianfu Wu
+
+Department of Electrical and Computer Engineering, NC State University {xli47, wsun12, tianfu_wu}@ncsu.edu
+
+Abstract. In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous. They are usually studied as separate modules, however. In this paper, we propose a light-weight integration between the two schemas and present Attentive Normalization (AN). Instead of learning a single affine transformation, AN learns a mixture of affine transformations and utilizes their weighted-sum as the final affine transformation applied to re-calibrate features in an instance-specific way. The weights are learned by leveraging channel-wise feature attention. In experiments, we test the proposed AN using four representative neural architectures in the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark. AN obtains consistent performance improvement for different neural architectures in both benchmarks with absolute increase of top-1 accuracy in ImageNet-1000 between $0.5\%$ and $2.7\%$ , and absolute increase up to $1.8\%$ and $2.2\%$ for bounding box and mask AP in MS-COCO respectively. We observe that the proposed AN provides a strong alternative to the widely used Squeeze-and-Excitation (SE) module. The source codes are publicly available at the ImageNet Classification Repo and the MS-COCO Detection and Segmentation Repo.
+
+# 1 Introduction
+
+Pioneered by Batch Normalization (BN) [19], feature normalization has become ubiquitous in the development of deep learning. Feature normalization consists of two components: feature standardization and channel-wise affine transformation. The latter is introduced to provide the capability of undoing the standardization (by design), and can be treated as feature re-calibration in general. Many variants of BN have been proposed for practical deployment in terms of variations of training and testing settings with remarkable progress obtained. They can be roughly divided into two categories:
+
+i) Generalizing feature standardization. Different methods are proposed for computing the mean and standard deviation or for modeling/whitening the data distribution in general, within a mini-batch. They include Batch Renormalization [18], Decorrelated BN [16], Layer Normalization (LN) [1], Instance Normalization (IN) [42], Instance-level Meta Normalization [20], Group Normalization (GN) [47], Mixture Normalization [21] and Mode Normalization [5]. Switchable
+
+
+Fig.1: Illustration of the proposed Attentive Normalization (AN). AN aims to harness the best of a base feature normalization (e.g., BN or GN) and channelwise feature attention in a single light-weight module. See text for details.
+
+Normalization (SN) [28] and its sparse variant (SSN) [39] learn to switch between different vanilla schema. These methods adopt the vanilla channel-wise affine transformation after standardization, and are often proposed for discriminative learning tasks.
+
+ii) Generalizing feature re-calibration. Instead of treating the affine transformation parameters directly as model parameters, different types of task-induced conditions (e.g., class labels in conditional image synthesis using generative adversarial networks) are leveraged and encoded as latent vectors, which are then used to learn the affine transformation parameters, including different conditional BNs [6,43,33,29,2], style-adaptive IN [22] or layout-adaptive IN [31,40]. These methods have been mainly proposed in generative learning tasks, except for the recently proposed Instance-level Meta Normalization [20] in discriminative learning tasks.
+
+In the meanwhile, feature attention has also become an indispensable mechanism for improving task performance in deep learning. For computer vision, spatial attention is inherently captured by convolution operations within short-range context, and by non-local extensions [45,17] for long-range context. Channel-wise attention is relatively less exploited. The squeeze-and-excitation (SE) unit [13] is one of the most popular designs, which learns instance-specific channel-wise attention weights to re-calibrate an input feature map. Unlike the affine transformation parameters in feature normalization, the attention weights for re-calibrating a feature map are often directly learned from the input feature map in the spirit of self-attention, and are often instance-specific or pixel-specific.
+
+Although both feature normalization and feature attention have become ubiquitous in state-of-the-art DNNs, they are usually studied as separate modules. Therefore, in this paper we address the following problem: How to learn to re-calibrate feature maps in a way that harnesses the best of feature normalization and feature attention in a single light-weight module? We present Attentive Normalization (AN) as an answer; Fig. 1 illustrates the proposed AN. The basic idea is straightforward. Conceptually, the affine transformation component in feature normalization (Section 3.1) and the re-scaling computation in feature attention play the same role in learning to re-calibrate an input feature map, thus providing the foundation for integration (Section 3.2). More specifically, considering a feature normalization backbone such as BN or GN, our proposed AN keeps the block-wise standardization component unchanged. Unlike the vanilla feature normalization in which the affine transformation parameters ( $\gamma$ 's and $\beta$ 's) are often frozen in testing, we want the affine transformation parameters to be adaptive and dynamic in both training and testing, controlled directly by the input feature map. The intuition behind doing so is that it will be more flexible in accounting for different statistical discrepancies between training and testing in general, and between different sub-populations caused by underlying inter-/intra-class variations in the data.
+
+To achieve the dynamic and adaptive control of affine transformation parameters, the proposed AN utilizes a simple design (Section 3). It learns a mixture of $K$ affine transformations and exploits feature attention mechanism to learn the instance-specific weights for the $K$ components. The final affine transformation used to re-calibrate an input feature map is the weighted sum of the learned $K$ affine transformations. We propose a general formulation for the proposed AN and study how to learn the weights in an efficient and effective way (Section 3.3).
+
+# 2 Related Work
+
+Feature Normalization. There are two types of normalization schema, feature normalization (including raw data) [19,18,1,42,47,28,39,21,5] and weight normalization [36,15]. Unlike the former, the latter is to normalize model parameters to decouple the magnitudes of parameter vectors from their directions. We focus on feature normalization in this paper.
+
+Different feature normalization schema differ in how the mean and variance are computed. BN [19] computes the channel-wise mean and variance in the entire mini-batch, which is driven by improving training efficiency and model generalizability. BN has been deeply analyzed in terms of how it helps optimization [38]. DecorBN [16] utilizes a whitening operation (ZCA) to go beyond the centering and scaling in the vanilla BN. BatchReNorm [18] introduces extra parameters to control the pooled mean and variance to reduce BN's dependency on the batch size. IN [42] focuses on channel-wise and instance-specific statistics, which stems from the task of artistic image style transfer. LN [1] computes the instance-specific mean and variance from all channels, which is designed to help optimization in recurrent neural networks (RNNs). GN [47] stands in the sweet spot between LN and IN, focusing on instance-specific and channel-groupwise statistics, especially when only small batches are applicable in practice. In practice, synchronized BN [32] across multiple GPUs becomes increasingly favorable against GN in some applications. SN [28] leaves the design choices of feature normalization schema to the learning system itself by computing a weighted-sum integration of BN, LN, IN and/or GN via softmax, showing more flexible applicability, followed by SSN [39] which learns to make exclusive selections. Instead of computing one mode (mean and variance), MixtureNorm [21] introduces a mixture of Gaussian densities to approximate the data distribution in a mini-batch. ModeNorm [5] utilizes a general form of multiple-mode computation. Unlike those methods, the proposed AN focuses on generalizing the affine transformation component. Related to our work, Instance-level Meta Normalization (ILM) [20] first utilizes an encoder-decoder sub-network to learn affine transformation parameters and then adds them to the model's affine transformation parameters. Unlike ILM, the proposed AN utilizes a mixture of affine transformations and leverages feature attention to learn the instance-specific attention weights.
+
+On the other hand, conditional feature normalization schema [6,43,33,2,22,31,40] have been developed and have shown remarkable progress in conditional and unconditional image synthesis. Conditional BN learns condition-specific affine transformations in terms of conditions such as class labels, image style, label maps and geometric layouts. Unlike those methods, the proposed AN learns self-attention data-driven weights for mixture components of affine transformations.
+
+Feature Attention. Similar to feature normalization, feature attention is also an important building block in the development of deep learning. Residual Attention Network [44] uses a trunk-and-mask joint spatial and channel attention module in an encoder-decoder style for improving performance. To reduce the computational cost, channel and spatial attention are separately applied in [46]. The SE module [13] further simplifies the attention mechanism by developing a light-weight channel-wise attention method. The proposed AN leverages the idea of SE in learning attention weights, but formulates the idea in a novel way.
+
+Our Contributions. This paper makes three main contributions: (i) It presents Attentive Normalization which harnesses the best of feature normalization and feature attention (channel-wise). To our knowledge, AN is the first work that studies self-attention based conditional and adaptive feature normalization in visual recognition tasks. (ii) It presents a lightweight integration method for deploying AN in different widely used building blocks of ResNets, DenseNets, MobileNetsV2 and AOGNets. (iii) It obtains consistently better results than the vanilla feature normalization backbones by a large margin across different neural architectures in two large-scale benchmarks, ImageNet-1000 and MS-COCO.
+
+# 3 The Proposed Attentive Normalization
+
+In this section, we present details of the proposed attentive normalization. Consider a DNN for 2D images and denote by $\mathbf{x}$ a feature map with axes in the conventional order of $(N,C,H,W)$ (i.e., batch, channel, height and width). $\mathbf{x}$ is represented by a 4D tensor. Let $i = (i_N,i_C,i_H,i_W)$ be the address index in the 4D tensor. $\mathbf{x}_i$ represents the feature response at a position $i$ .
+
+# 3.1 Background on Feature Normalization
+
+Existing feature normalization schema often consist of two components (Fig. 1):
+
+i) Block-wise Standardization. Denote by $B_{j}$ a block (slice) in a given 4-D tensor $\mathbf{x}$ . For example, for BN, we have $j = 1,\dots ,C$ and $B_{j} = \{\mathbf{x}_{i}|\forall i,i_{C} = j\}$ . We first compute the empirical mean and standard deviation in $B_{j}$ , denoted by $\mu_{j}$ and $\sigma_{j}$ respectively: $\mu_{j} = \frac{1}{M}\sum_{x\in B_{j}}x$ , $\sigma_{j} = \sqrt{\frac{1}{M}\sum_{x\in B_{j}}(x - \mu_{j})^{2} + \epsilon}$ , where $M = |B_{j}|$ and $\epsilon$ is a small positive constant to ensure $\sigma_{j} > 0$ for the sake of numeric stability. Then, let $j_{i}$ be the index of the block that the position $i$ belongs to, and we standardize the feature response by,
+
+$$
+\hat {\mathbf {x}} _ {i} = \frac {1}{\sigma_ {j _ {i}}} \left(\mathbf {x} _ {i} - \mu_ {j _ {i}}\right) \tag {1}
+$$
+
+ii) Channel-wise Affine Transformation. Denote by $\gamma_{c}$ and $\beta_{c}$ the scalar coefficient (re-scaling) and offset (re-shifting) parameter respectively for the $c$ -th channel. The re-calibrated feature response at a position $i$ is then computed by,
+
+$$
+\tilde {\mathbf {x}} _ {i} = \gamma_ {i _ {C}} \cdot \hat {\mathbf {x}} _ {i} + \beta_ {i _ {C}}, \tag {2}
+$$
+
+where $\gamma_c$ 's and $\beta_c$ 's are shared by all the instances in a mini-batch across the spatial domain. They are usually frozen in testing and fine-tuning.
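To make Eqns. 1 and 2 concrete, the two components can be sketched in a few lines of NumPy. This is a minimal sketch; the function name and shapes are ours, not from the paper.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """BN-style feature normalization on an (N, C, H, W) tensor:
    per-channel standardization (Eqn. 1) followed by the channel-wise
    affine transformation (Eqn. 2)."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)   # per-channel mean over N, H, W
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)        # Eqn. 1
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)  # Eqn. 2

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 3, 8, 8))
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
# With gamma = 1 and beta = 0, each channel of y is (near) zero-mean and unit-variance.
```

For BN the block $B_j$ is one channel pooled over the whole mini-batch; GN, IN or LN would only change the axes over which the statistics are computed, leaving the affine step unchanged.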
+
+# 3.2 Background on Feature Attention
+
+We focus on channel-wise attention and briefly review the Squeeze-Excitation (SE) module [13]. SE usually takes the feature normalization result (Eqn. 2) as its input (the bottom-right of Fig. 1), and learns channel-wise attention weights:
+
+i) The squeeze module encodes the inter-dependencies between feature channels in a low dimensional latent space with the reduction rate $r$ (e.g., $r = 16$ ),
+
+$$
+S (\tilde {\mathbf {x}}; \theta_ {S}) = v, v \in \mathbb {R} ^ {N \times \frac {C}{r} \times 1 \times 1}, \tag {3}
+$$
+
+which is implemented by a sub-network consisting of a global average pooling layer (AvgPool), a fully-connected (FC) layer and rectified linear unit (ReLU) [23]. $\theta_{S}$ collects all the model parameters.
+
+ii) The excitation module computes the channel-wise attention weights, denoted by $\lambda$ , by decoding the learned latent representations $v$ ,
+
+$$
+E (v; \theta_ {E}) = \lambda , \lambda \in \mathbb {R} ^ {N \times C \times 1 \times 1}, \tag {4}
+$$
+
+which is implemented by a sub-network consisting of a FC layer and a sigmoid layer. $\theta_{E}$ collects all model parameters.
+
+Then, the input, $\tilde{\mathbf{x}}$ is re-calibrated by,
+
+$$
+\tilde {\mathbf {x}} _ {i} ^ {S E} = \lambda_ {i _ {N}, i _ {C}} \cdot \tilde {\mathbf {x}} _ {i} = \left(\lambda_ {i _ {N}, i _ {C}} \cdot \gamma_ {i _ {C}}\right) \cdot \hat {\mathbf {x}} _ {i} + \lambda_ {i _ {N}, i _ {C}} \cdot \beta_ {i _ {C}}, \tag {5}
+$$
+
+where the second step is obtained by plugging in Eqn. 2. It is thus straightforward to see the foundation facilitating the integration between feature normalization and channel-wise feature attention. However, the SE module often entails a significant number of extra parameters (e.g., $\sim 2.5\mathrm{M}$ extra parameters for ResNet50 [10], which originally consists of $\sim 25\mathrm{M}$ parameters, a $10\%$ increase). We aim to design a more parsimonious integration that can further improve performance.
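The squeeze/excitation pipeline of Eqns. 3-5 can be sketched in NumPy as follows. This is illustrative only: the two weight matrices stand in for the FC layers, and all names and shapes are our assumptions.

```python
import numpy as np

def se_recalibrate(x, w_squeeze, w_excite):
    """Illustrative SE module: AvgPool -> FC -> ReLU (squeeze, Eqn. 3),
    then FC -> sigmoid (excitation, Eqn. 4), then channel-wise
    re-scaling of the input (Eqn. 5)."""
    z = x.mean(axis=(2, 3))                      # global average pooling: (N, C)
    v = np.maximum(z @ w_squeeze, 0.0)           # squeeze into C/r latent dims
    lam = 1.0 / (1.0 + np.exp(-(v @ w_excite)))  # excitation: weights in (0, 1)
    return lam[:, :, None, None] * x             # re-calibrate the feature map

rng = np.random.default_rng(1)
C, r = 16, 4
x = rng.normal(size=(2, C, 4, 4))
w_squeeze = 0.1 * rng.standard_normal((C, C // r))
w_excite = 0.1 * rng.standard_normal((C // r, C))
y = se_recalibrate(x, w_squeeze, w_excite)
```

Because the sigmoid keeps every $\lambda$ in (0, 1), SE can only attenuate the magnitude of each channel of its input; the mixture of affine transformations introduced next removes this restriction.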
+
+# 3.3 Attentive Normalization
+
+Our goal is to generalize Eqn. 2 in re-calibrating feature responses to enable dynamic and adaptive control in both training and testing. At the same time, we aim to simplify Eqn. 5 into a single light-weight module, rather than, for example, the two-module setup of BN+SE. In general, we have,
+
+$$
+\tilde {\mathbf {x}} _ {i} ^ {A N} = \Gamma (\mathbf {x}; \theta_ {\Gamma}) _ {i} \cdot \hat {\mathbf {x}} _ {i} + \mathbb {B} (\mathbf {x}; \theta_ {\mathbb {B}}) _ {i}, \tag {6}
+$$
+
+where both $\Gamma(\mathbf{x};\theta_{\Gamma})$ and $\mathbb{B}(\mathbf{x};\theta_{\mathbb{B}})$ are functions of the entire input feature map (without standardization $^1$ ) with parameters $\theta_{\Gamma}$ and $\theta_{\mathbb{B}}$ respectively. They both compute 4D tensors of the same size as the input feature map and can be parameterized by attention-guided light-weight DNNs. The subscript in $\Gamma(\mathbf{x};\theta_{\Gamma})_i$ and $\mathbb{B}(\mathbf{x};\theta_{\mathbb{B}})_i$ represents the learned re-calibration weights at a position $i$ .
+
+In this paper, we focus on learning instance-specific channel-wise affine transformations. To that end, we have three components as follows.
+
+i) Learning a Mixture of $K$ Channel-wise Affine Transformations. Denote by $\gamma_{k,c}$ and $\beta_{k,c}$ the re-scaling and re-shifting (scalar) parameters respectively for the $c$ -th channel in the $k$ -th mixture component. They are model parameters learned end-to-end via back-propagation.
+
+ii) Learning Attention Weights for the $K$ Mixture Components. Denote by $\lambda_{n,k}$ the instance-specific mixture component weight ( $n \in [1, N]$ and $k \in [1, K]$ ), and by $\lambda$ the $N \times K$ weight matrix. $\lambda$ is learned via some attention-guided function from the entire input feature map,
+
+$$
+\lambda = A (\mathbf {x}; \theta_ {\lambda}), \tag {7}
+$$
+
+where $\theta_{\lambda}$ collects all the parameters.
+
+iii) Computing the Final Affine Transformation. With the learned $\gamma_{k,c}$ , $\beta_{k,c}$ and $\lambda$ , the re-calibrated feature response is computed by,
+
+$$
+\tilde {\mathbf {x}} _ {i} ^ {A N} = \sum_ {k = 1} ^ {K} \lambda_ {i _ {N}, k} \left[ \gamma_ {k, i _ {C}} \cdot \hat {\mathbf {x}} _ {i} + \beta_ {k, i _ {C}} \right], \tag {8}
+$$
+
+where $\lambda_{i_N,k}$ is shared by the re-scaling parameter and the re-shifting parameter for simplicity. Since the attention weights $\lambda$ are adaptive and dynamic in both training and testing, the proposed AN realizes adaptive and dynamic feature re-calibration. Compared to the general form (Eqn. 6), we have,
+
+$$
+\Gamma (\mathbf {x}) _ {i} = \sum_ {k = 1} ^ {K} \lambda_ {i _ {N}, k} \cdot \gamma_ {k, i _ {C}}, \mathbb {B} (\mathbf {x}) _ {i} = \sum_ {k = 1} ^ {K} \lambda_ {i _ {N}, k} \cdot \beta_ {k, i _ {C}}. \tag {9}
+$$
+
+Based on the formulation, there are a few advantages of the proposed AN in training, fine-tuning and testing a DNN:
+
+- The channel-wise affine transformation parameters, $\gamma_{k,i_C}$ 's and $\beta_{k,i_C}$ 's, are shared across spatial dimensions and by data instances, which can learn population-level knowledge in a more fine-grained manner than a single affine transformation in the vanilla feature normalization.
+
+- $\lambda_{i_N,k}$ 's are instance-specific and learned from features that are not standardized. Combining them with $\gamma_{k,i_C}$ 's and $\beta_{k,i_C}$ 's (Eqn. 8) enables AN to pay attention to both the population (what the common and useful information is) and the individuals (what the specific yet critical information is). The latter is particularly useful for testing samples slightly "drifted" from the training population, thus improving generalizability. Their weighted sum encodes more direct and "actionable" information for re-calibrating standardized features (Eqn. 8), without being delayed until back-propagation updates as in the vanilla feature normalization.
+
+- In fine-tuning, especially between different tasks (e.g., from image classification to object detection), $\gamma_{k,i_C}$ 's and $\beta_{k,i_C}$ 's are usually frozen as done in the vanilla feature normalization. They carry information from a source task. But, $\theta_{\lambda}$ (Eqn. 7) are allowed to be fine-tuned, thus potentially better realizing transfer learning for a target task. This is a desirable property since we can decouple training correlation between tasks. For example, when GN [47] is applied in object detection in MS-COCO, it is fine-tuned from a feature backbone with GN trained in ImageNet, instead of the one with BN that usually has better performance in ImageNet. As we shall show in experiments, the proposed AN facilitates a smoother transition. We can use the proposed AN (with BN) as the normalization backbone in pre-training in ImageNet, and then use AN (with GN) as the normalization backbone for the head classifiers in MS-COCO with significant improvement.
+
+Details of Learning Attention Weights. We present a simple method for computing the attention weights $A(\mathbf{x};\theta_{\lambda})$ (Eqn. 7). Our goal is to learn a weight coefficient for each component from each individual instance in a mini-batch (i.e., a $N\times K$ matrix). The question of interest is how to characterize the underlying importance of a channel $c$ from its realization across the spatial dimensions $(H,W)$ in an instance, such that we will learn a more informative instance-specific weight coefficient for a channel $c$ in re-calibrating the feature map $\mathbf{x}$ .
+
+In realizing Eqn. 7, the proposed method is similar in spirit to the squeeze module in SENets [13] to maintain light-weight implementation. To show the difference, let's first rewrite the vanilla squeeze module (Eqn. 3),
+
+$$
+v = S (\mathbf {x}; \theta_ {S}) = R e L U (f c (A v g P o o l (\mathbf {x}); \theta_ {S})), \tag {10}
+$$
+
+where the mean of a channel $c$ (via global average pooling, $AvgPool(\cdot)$ ) is used to characterize its underlying importance. We generalize this assumption by taking into account both mean and standard deviation empirically computed for a channel $c$ , denoted by $\mu_c$ and $\sigma_c$ respectively. More specifically, we compare three different designs using:
+
+i) The mean $\mu_c$ only as done in SENets.
+
+ii) The concatenation of the mean and standard deviation, $(\mu_c,\sigma_c)$ .
+iii) The coefficient of variation or the relative standard deviation (RSD), $\frac{\sigma_c}{\mu_c}$ . RSD measures the dispersion of an underlying distribution (i.e., the extent to which the distribution is stretched or squeezed) which intuitively conveys more information in learning attention weights for re-calibration.
+
+RSD is indeed observed to work better in our experiments. Eqn. 7 is then expanded with two choices,
+
+$$
+\text{Choice 1: } A_1(\mathbf{x}; \theta_{\lambda}) = Act(fc(RSD(\mathbf{x}); \theta_{\lambda})), \tag{11}
+$$
+
+$$
+\text{Choice 2: } A_2(\mathbf{x}; \theta_{\lambda}) = Act(BN(fc(RSD(\mathbf{x}); \theta_{fc}); \theta_{BN})),
+$$
+
+where $Act(\cdot)$ represents a non-linear activation function for which we compare four designs:
+
+i) The vanilla $ReLU(\cdot)$ as used in the squeeze module of SENets.
+ii) The vanilla $sigmoid(\cdot)$ as used in the excitation module of SENets.
+iii) The channel-wise $softmax(\cdot)$.
+iv) The piece-wise linear hard analog of the sigmoid function, the so-called hsigmoid function [12], $hsigmoid(a) = \min(\max(a + 3.0, 0), 6.0) / 6.0$.
+
+The $hsigmoid(\cdot)$ is observed to work better in our experiments. In Choice 2 (Eqn. 11), we apply the vanilla BN [19] after the FC layer, which normalizes the learned attention weights across all the instances in a mini-batch with the hope of better balancing the instance-specific attention weights. Choice 2 improves performance in our ImageNet experiments.
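Putting the pieces together, the Choice 2 path (RSD, then FC, then BN over the mini-batch, then hsigmoid) can be sketched as follows. This is a minimal sketch: the single weight matrix `w_fc` stands in for the FC layer, and the `eps` guarding against near-zero channel means is our assumption.

```python
import numpy as np

def hsigmoid(a):
    # Piece-wise linear hard sigmoid [12]: min(max(a + 3, 0), 6) / 6.
    return np.minimum(np.maximum(a + 3.0, 0.0), 6.0) / 6.0

def attention_weights(x, w_fc, eps=1e-5):
    """Choice 2 of Eqn. 11: RSD -> FC -> BN -> hsigmoid, producing
    the (N, K) attention matrix lambda of Eqn. 7."""
    mu = x.mean(axis=(2, 3))                                 # per-instance channel means: (N, C)
    sigma = x.std(axis=(2, 3))
    rsd = sigma / (mu + eps)                                 # relative standard deviation
    a = rsd @ w_fc                                           # FC: (N, C) -> (N, K)
    a = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)  # BN across the mini-batch
    return hsigmoid(a)

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=0.5, size=(8, 16, 7, 7))
lam = attention_weights(x, w_fc=0.1 * rng.standard_normal((16, 10)))
```

Because hsigmoid is bounded, every entry of the resulting attention matrix lies in [0, 1].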
+
+In AN, we have another hyper-parameter, $K$ . For stage-wise building block based neural architectures such as the four neural architectures tested in our experiments, we use different $K$ 's for different stages with smaller values for early stages. For example, for the 4-stage setting, we typically use $K = 10, 10, 20, 20$ for the four stages respectively based on our ablation study. The underlying assumption is that early stages often learn low-to-middle level features which are considered to be shared more between different categories, while later stages learn more category-specific features which may entail larger mixtures.
+
+# 4 Experiments
+
+In this section, we first show the ablation study verifying the design choices in the proposed AN. Then, we present detailed comparisons and analyses.
+
+Data and Evaluation Metric. We use two benchmarks, the ImageNet-1000 classification benchmark (ILSVRC2012) [35] and the MS-COCO object detection and instance segmentation benchmark [26]. The ImageNet-1000 benchmark consists of about 1.28 million images for training, and 50,000 for validation, from 1,000 classes. We apply a single-crop with size $224 \times 224$ in evaluation. Following the common protocol, we report the top-1 and top-5 classification error rates tested using a single model on the validation set. For the MS-COCO benchmark,
+
+
+Fig. 2: Illustration of integrating the proposed AN in different building blocks. The first two show the vanilla residual block and the SE-residual block. The remaining four are: the Basicblock and Bottleneck design of a residual block, the inverted residual block (used in MobileNetV2), and the DenseBlock. For the residual block and its variants, the proposed AN is used to replace the vanilla BN(s) following the last $3 \times 3$ convolution in different blocks. This potentially enables jointly integrating local spatial attention (conveyed by the $3 \times 3$ convolution) in learning the instance-specific attention weights, which is also observed to be helpful in [30] and is shown beneficial for the SE module itself in our experiments (Table 3). For the dense block, we replace the second vanilla BN (after the $1 \times 1$ convolution applied to the concatenated features) with our AN.
+
+there are 80 categories of objects. We use train2017 in training and evaluate the trained models using val2017. We report the standard COCO metrics of Average Precision (AP) at different intersection-over-union (IoU) thresholds, e.g., $\mathrm{AP}_{50}$ and $\mathrm{AP}_{75}$ , for bounding box detection $(\mathrm{AP}_{IoU}^{bb})$ and instance segmentation $(\mathrm{AP}_{IoU}^{m})$ , and the mean AP over IoU=0.5:0.05:0.95, $\mathrm{AP}^{bb}$ and $\mathrm{AP}^m$ , for bounding box detection and instance segmentation respectively.
+
+Neural Architectures and Vanilla Feature Normalization Backbones. We use four representative neural architectures: (i) ResNets [10] (ResNet50 and ResNet101), which are the most widely used architectures in practice, (ii) DenseNets [14], which are popular alternatives to ResNets, (iii) MobileNetV2 [37], a popular architecture under mobile settings that uses inverted residuals and linear bottlenecks, and (iv) AOGNets [24], which are grammar-guided networks and represent an interesting direction of network architecture engineering with better performance than ResNets and DenseNets. The improvement by our AN will thus be broadly useful for existing ResNet-, DenseNet- and MobileNet-based deployments in practice, and potentially insightful for ongoing and future development of more advanced and more powerful DNNs in the community.
+
+In classification, we use BN [19] as the feature normalization backbone for our proposed AN, denoted by AN (w/ BN). We compare with the vanilla BN, GN [47] and SN [28]. In object detection and instance segmentation, we use the Mask-RCNN framework [8] and its cascade variant [3] in the MMDetection code platform [4]. We fine-tune feature backbones pretrained on the ImageNet-1000 dataset. We also test the proposed AN using GN as the feature normalization backbone, denoted by AN (w/ GN) in the head classifier of Mask-RCNN.
+
+Where to Apply AN? Fig. 2 illustrates the integration of our proposed AN in different building blocks. At first thought, it seems straightforward to replace all vanilla feature normalization modules (e.g., BN) in a DNN. However, it may not be necessary to do so, similar in spirit to the SE-residual block which re-calibrates the residual part once per building block. As we shall see, our ablation study supports the design choice shown in Fig. 2.
+
+Initialization of our AN. The initialization of $\gamma_{k,c}$ 's and $\beta_{k,c}$ 's (Eqn. 8) is based on, $\gamma_{k,c} = 1.0 + \mathcal{N}(0,1)\times 0.1$ and $\beta_{k,c} = \mathcal{N}(0,1)\times 0.1$ , where $\mathcal{N}(0,1)$ represents the standard Gaussian distribution. This type of initialization is also adopted for conditional BN used in the BigGAN [2].
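The stated initialization is one line of NumPy per parameter tensor ($K$ and $C$ below are illustrative values, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
K, C = 10, 64
gamma = 1.0 + 0.1 * rng.standard_normal((K, C))  # re-scaling: small noise around the identity
beta = 0.1 * rng.standard_normal((K, C))         # re-shifting: small noise around zero
```

The small noise breaks the symmetry between the $K$ mixture components while keeping every component close to the standard "identity" affine transformation at the start of training.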
+
+# 4.1 Ablation Study
+
+We compare different design choices in our proposed AN using ResNet50 in ImageNet-1000. Table 1 summarizes the results. There are four categories of design choices: the first three are related to the realization of learning the attention weights (Eqn. 7): three types of inputs, two architectural choices and four activation function choices. The last one refers to the number $K$ of components in the mixture of affine transformations, which is used for each of the four stages in ResNet50; we empirically select three options for simplicity. All the models are trained using the same settings (the vanilla setup in Section 4.2).
+
+| Design Choices in AN (w/ BN) | #Params | FLOPS | top-1 | top-5 |
+| --- | --- | --- | --- | --- |
+| mean + A2(·) + hsigmoid + K = (10/20) | 25.76M | 4.09G | 21.85 | 5.92 |
+| (mean, std) + A2(·) + hsigmoid + K = (10/20) | 25.82M | 4.09G | 21.73 | 5.85 |
+| RSD + A1(·) + hsigmoid + K = (10/20) | 25.76M | 4.09G | 21.76 | 6.05 |
+| RSD + A2(·) + softmax + K = (10/20) | 25.76M | 4.09G | 21.72 | 5.90 |
+| RSD + A2(·) + relu + K = (10/20) | 25.96M | 4.09G | 21.89 | 6.04 |
+| RSD + A2(·) + sigmoid + K = (10/20) | 25.76M | 4.09G | 21.96 | 5.91 |
+| RSD + A2(·) + hsigmoid + K = (5/10) | 25.76M | 4.09G | 21.92 | 5.93 |
+| RSD + A2(·) + hsigmoid + K = (20/40) | 25.96M | 4.09G | 21.62 | 5.63 |
+| RSD + A2(·) + hsigmoid + K = (10/20) | 25.76M | 4.09G | 21.59 | 5.58 |
+| * RSD + A2(·) + hsigmoid + K = (10/20) | 26.96M | 4.10G | 22.15 | 6.24 |
+
+Table 1: Ablation study on different design choices in AN with BN as feature normalization backbone using ResNet50+Bottleneck in ImageNet-1000. * means AN is applied to all the BNs of the network.
+
+The best combination is $\mathrm{RSD} + A_2(\cdot) + \mathrm{hsigmoid} + K = (10/20)$ . During our development, we first observed the best combination based on our intuitive reasoning and small experiments (a few epochs), and then designed this ablation study to verify the design choices. Based on the observed best combination, we further verify that replacing all vanilla BNs is not helpful (the last row in Table 1). One explanation is that we may not need to re-calibrate the features using our AN (as well as other channel-wise feature attention methods) both before and after a $1\times 1$ convolution, since channel-wise re-calibration can be handled by the $1\times 1$ convolution kernel and the vanilla feature normalization themselves in training. The ablation study supports the intuitions and design choices discussed in Section 3.3.
+
+# 4.2 Image Classification in ImageNet-1000
+
+Common Training Settings. We use 8 GPUs (NVIDIA V100) to train models using the same settings for apple-to-apple comparisons. The method proposed in [9] is used to initialize all convolutions for all models. The batch size is 128 per GPU, with FP16 optimization used in training to reduce the training time. The mean and standard deviation for block-wise standardization are computed within each GPU. The initial learning rate is 0.4, and the cosine learning rate scheduler [27] is used with 5 warm-up epochs, weight decay $1 \times 10^{-4}$ and momentum 0.9. For AN, the best practice observed in our ablation study (Table 1) is used. AN is not used in the stem layer in all the models. In addition to the common settings, we have two different setups in experimental comparisons:
+
+i) The Vanilla Setup. We adopt the basic data augmentation scheme (random crop and horizontal flip) in training as done in [10]. We train the models for 120 epochs. All ResNets [10] use the vanilla stem layer with a $7 \times 7$ convolution. MobileNetV2 uses a $3 \times 3$ convolution in the stem layer. The AOGNets use two consecutive $3 \times 3$ convolutions in the stem layer. All the $\gamma$ and $\beta$ parameters of the feature normalization backbones are initialized to 1 and 0 respectively.
+
+**The Vanilla Setup**
+
+| Method | #Params | FLOPS | top-1 | top-5 |
+| --- | --- | --- | --- | --- |
+| ResNet34-BN | 21.80M | 3.68G | 25.58↓(1.15) | 8.19↓(0.76) |
+| ResNet34-AN | 21.92M | 3.68G | 24.43 | 7.43 |
+| ResNet50-BN | 25.56M | 4.09G | 23.01↓(1.42) | 6.68↓(0.80) |
+| †ResNet50-GN [47] | 25.56M | 4.09G | 23.52↓(1.93) | 6.83↓(0.97) |
+| †ResNet50-SN [28] | 25.56M | - | 22.43↓(0.83) | 6.35↓(0.47) |
+| †ResNet50-SE [13] | 28.09M | 4.12G | 22.37↓(0.78) | 6.36↓(0.48) |
+| ResNet50-SE | 28.09M | 4.12G | 22.35↓(0.76) | 6.09↓(0.21) |
+| ResNet50-AN | 25.76M | 4.09G | 21.59 | 5.88 |
+| ResNet101-BN | 44.57M | 8.12G | 21.33↓(0.72) | 5.85↓(0.44) |
+| ResNet101-AN | 45.00M | 8.12G | 20.61 | 5.41 |
+| DenseNet121-BN | 7.98M | 2.86G | 25.35↓(2.73) | 7.83↓(1.41) |
+| DenseNet121-AN | 8.34M | 2.86G | 22.62 | 6.42 |
+| MobileNetV2-BN | 3.50M | 0.34G | 28.69↓(2.02) | 9.33↓(0.77) |
+| MobileNetV2-AN | 3.56M | 0.34G | 26.67 | 8.56 |
+| AOGNet12M-BN | 12.26M | 2.19G | 22.22↓(0.94) | 6.06↓(0.30) |
+| AOGNet12M-AN | 12.37M | 2.19G | 21.28 | 5.76 |
+| AOGNet40M-BN | 40.15M | 7.51G | 19.84↓(0.51) | 4.94↓(0.22) |
+| AOGNet40M-AN | 40.39M | 7.51G | 19.33 | 4.72 |
+
+**The State-of-the-Art Setup**
+
+| Method | #Params | FLOPS | top-1 | top-5 |
+| --- | --- | --- | --- | --- |
+| ResNet50-BN | 25.56M | 4.09G | 21.08↓(1.16) | 5.56↓(0.52) |
+| ResNet50-AN | 25.76M | 4.09G | 19.92 | 5.04 |
+| ResNet101-BN | 44.57M | 8.12G | 19.71↓(0.86) | 4.89↓(0.26) |
+| ResNet101-AN | 45.00M | 8.12G | 18.85 | 4.63 |
+| AOGNet12M-BN | 12.26M | 2.19G | 21.63↓(1.06) | 5.60↓(0.22) |
+| AOGNet12M-AN | 12.37M | 2.19G | 20.57 | 5.38 |
+| AOGNet40M-BN | 40.15M | 7.51G | 18.70↓(0.57) | 4.47↓(0.21) |
+| AOGNet40M-AN | 40.39M | 7.51G | 18.13 | 4.26 |
+
+Table 2: Comparisons between BN and our AN (w/ BN) in terms of the top-1 and top-5 error rates $(\%)$ in the ImageNet-1000 validation set using the vanilla setup and the state-of-the-art setup. $\dagger$ means the model is not trained by us. All other models are trained from scratch under the same settings.
+
+ii) The State-of-the-Art Setup. Several aspects of the vanilla setup have better-performing variants [11]. We want to address whether the improvements from our proposed AN are truly fundamental or will disappear when more advanced tips and tricks are added in training ConvNets. First, on top of the basic data augmentation, we also use label smoothing [41] (with rate 0.1) and mixup [48] (with rate 0.2). We increase the total number of epochs to 200. We use the same stem layer with two consecutive $3 \times 3$ convolutions for all models. For ResNets, we add the zero-$\gamma$ initialization trick, which initializes the last normalization layer of each residual block to zero so that the block's initial state is the identity mapping.
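The zero-$\gamma$ trick can be illustrated with a toy residual block (a numpy sketch; the block structure below is schematic, not actual ResNet code):

```python
import numpy as np

def residual_block(x, w, gamma, beta):
    """Toy residual block: out = x + affine(relu(x @ w)).
    The channel-wise affine stands in for the LAST normalization layer."""
    h = np.maximum(x @ w, 0.0)        # conv + ReLU stand-in
    return x + h * gamma + beta       # last norm's channel-wise affine

def zero_gamma_init(c):
    """Zero-gamma trick: set the last norm's scale (and shift) to 0 so the
    residual block computes the identity mapping at the start of training."""
    return np.zeros(c, dtype=np.float32), np.zeros(c, dtype=np.float32)
```

At initialization the residual branch contributes nothing, which is known to ease optimization of deep residual networks.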
+
+Results Summary. Table 2 shows the comparison results for the two setups. Our proposed AN consistently obtains the best top-1 and top-5 accuracy results, with more than $0.5\%$ absolute top-1 accuracy increase (up to $2.7\%$) in all models without bells and whistles. The improvement is often obtained with negligible extra parameters (e.g., a 0.06M parameter increase in MobileNetV2 for a $2.02\%$ absolute top-1 accuracy increase, and a 0.2M parameter increase in ResNet50 for a $1.42\%$ absolute top-1 accuracy increase) at almost no extra computational cost (up to the precision used in measuring FLOPs). With ResNet50, our AN also outperforms GN [47] and SN [28] by $1.93\%$ and $0.83\%$ in top-1 accuracy, respectively. For GN, it is known that it works (slightly) worse than BN under the normal (big) mini-batch setting [47]. For SN, our result shows that it is more beneficial to improve the re-calibration component than to learn to switch between different feature normalization schema. We observe that the proposed AN is more effective for small ConvNets in terms of performance gain. Intuitively, this makes sense: small ConvNets usually learn less expressive features. With the mixture of affine transformations and the instance-specific channel-wise feature re-calibration, the proposed AN offers the flexibility to better cluster intra-class data and better separate inter-class data in training.
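For concreteness, the recalibration AN performs can be sketched roughly as follows (a simplified numpy illustration: the standardization here is per-instance rather than batch-based, and the single-layer attention subnetwork `w` is a hypothetical stand-in for the paper's design):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attentive_norm(x, gammas, betas, w, eps=1e-5):
    """Schematic Attentive Normalization over (N, C, H, W) features:
    1) standardize per channel (a stand-in for BN's standardization),
    2) compute K attention weights per instance from globally pooled
       features (w: C x K, a hypothetical one-layer attention net),
    3) recalibrate with the attention-weighted mixture of K affine
       transforms (gammas, betas: K x C)."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    sd = x.std(axis=(2, 3), keepdims=True)
    x_std = (x - mu) / (sd + eps)
    pooled = x.mean(axis=(2, 3))          # (N, C) global average pooling
    lam = softmax(pooled @ w)             # (N, K) attention weights
    gamma = lam @ gammas                  # (N, C) instance-specific scale
    beta = lam @ betas                    # (N, C) instance-specific shift
    return x_std * gamma[:, :, None, None] + beta[:, :, None, None]
```

Because the attention weights sum to one, setting all mixture scales to 1 and shifts to 0 recovers plain standardization, mirroring the identity-like initialization discussed above.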
+
+| Method | #Params | FLOPs | top-1 | top-5 |
|---|---|---|---|---|
| ResNet50-SE (BN3) | 28.09M | 4.12G | 22.35↓(0.76) | 6.09↓(0.21) |
| ResNet50-SE (BN2) | 26.19M | 4.12G | 22.10↓(0.55) | 6.02↓(0.14) |
| ResNet50-SE (All) | 29.33M | 4.13G | 22.13↓(0.52) | 5.96↓(0.08) |
| ResNet50-AN (w/ BN3) | 26.35M | 4.11G | 21.78↓(0.19) | 5.98↓(0.10) |
| ResNet50-AN (w/ BN2) | 25.76M | 4.09G | 21.59 | 5.88 |
| ResNet50-AN (All) | 25.92M | 4.10G | 21.85↓(0.26) | 6.06↓(0.18) |
+
+Table 3: Comparisons between SE and our AN (w/ BN) in terms of the top-1 and top-5 error rates $(\%)$ on the ImageNet-1000 validation set using the vanilla setup. "(All)" means SE or AN is applied to all three BNs in a bottleneck block.
+
+Comparisons with the SE module. Our proposed AN provides a strong alternative to the widely used SE module. Table 3 shows the comparisons. We observe that applying SE after the second BN of the bottleneck in ResNet50 is also beneficial, giving better performance with fewer extra parameters.
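For reference, the SE module being compared against can be sketched as follows (a minimal numpy version of squeeze, excitation, and channel-wise rescaling; the weight shapes are illustrative, with `w1: C x C/r` and `w2: C/r x C` for reduction ratio `r`):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_module(x, w1, w2):
    """Minimal Squeeze-and-Excitation block over (N, C, H, W) features:
    squeeze by global average pooling, excite with a two-layer bottleneck,
    then rescale each channel by the resulting sigmoid gate."""
    s = x.mean(axis=(2, 3))                     # squeeze: (N, C)
    g = sigmoid(np.maximum(s @ w1, 0.0) @ w2)   # excitation gate: (N, C)
    return x * g[:, :, None, None]              # channel-wise recalibration
```

Note the contrast with AN above: SE multiplies features by a gate in (0, 1), whereas AN replaces the normalization layer's affine transform with an attention-weighted mixture.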
+
+# 4.3 Object Detection and Segmentation in COCO
+
+In object detection and segmentation, high-resolution input images are beneficial and often required for detecting medium to small objects, but they limit the batch size in training (often 1 or 2 images per GPU). GN [47] and SN [28] have shown significant progress in handling the applicability discrepancies of feature normalization schema from ImageNet to MS-COCO. We test our AN in MS-COCO following the standard protocol, as done in GN [47], building on the MMDetection code platform [4]. We observe further performance improvement.
+
+We first summarize the details of implementation. Following the terminologies used in MMDetection [4], there are four modular components in the R-CNN detection framework [7,34,8]: i) Feature Backbones. We use the pre-trained net-
+
+| Architecture | Backbone | Head | #Params | \( AP^{bb} \) | \( AP_{50}^{bb} \) | \( AP_{75}^{bb} \) | \( AP^{m} \) | \( AP_{50}^{m} \) | \( AP_{75}^{m} \) |
|---|---|---|---|---|---|---|---|---|---|
| MobileNetV2 | \( \mathbb{BN} \) | - | 22.72M | 34.2↓(1.8) | 54.6↓(2.4) | 37.1↓(1.8) | 30.9↓(1.6) | 51.1↓(2.7) | 32.6↓(1.9) |
| | AN (w/ BN) | - | 22.78M | 36.0 | 57.0 | 38.9 | 32.5 | 53.8 | 34.5 |
| ResNet50 | \( \mathbb{BN} \) | - | 45.71M | 39.2↓(1.6) | 60.0↓(2.1) | 43.1↓(1.4) | 35.2↓(1.2) | 56.7↓(2.2) | 37.6↓(1.1) |
| | \( \mathbb{BN} \)+SE(BN$_3$) | - | 48.23M | 40.1↓(0.7) | 61.2↓(0.9) | 43.8↓(0.7) | 35.9↓(0.5) | 57.9↓(1.0) | 38.1↓(0.6) |
| | \( \mathbb{BN} \)+SE(BN$_2$) | - | 46.34M | 40.1↓(0.7) | 61.2↓(0.9) | 43.8↓(0.7) | 35.9↓(0.5) | 57.9↓(1.0) | 38.4↓(0.3) |
| | AN (w/ BN) | - | 45.91M | 40.8 | 62.1 | 44.5 | 36.4 | 58.9 | 38.7 |
| | †GN | GN [47] | 45.72M | 40.3↓(1.3) | 61.0↓(1.0) | 44.0↓(1.7) | 35.7↓(1.7) | 57.9↓(1.6) | 37.7↓(2.2) |
| | †SN | SN [28] | - | 41.0↓(0.6) | 62.3↓(-0.3) | 45.1↓(0.6) | 36.5↓(0.9) | 58.9↓(0.6) | 38.7↓(1.2) |
| | AN (w/ BN) | AN (w/ GN) | 45.96M | 41.6 | 62.0 | 45.7 | 37.4 | 59.5 | 39.9 |
| ResNet101 | \( \mathbb{BN} \) | - | 64.70M | 41.4↓(1.7) | 62.0↓(2.1) | 45.5↓(1.8) | 36.8↓(1.4) | 59.0↓(2.0) | 39.1↓(1.6) |
| | AN (w/ BN) | - | 65.15M | 43.1 | 64.1 | 47.3 | 38.2 | 61.0 | 40.7 |
| | †GN | GN [47] | 64.71M | 41.8↓(1.4) | 62.5↓(1.5) | 45.4↓(1.9) | 36.8↓(2.0) | 59.2↓(2.1) | 39.0↓(2.6) |
| | AN (w/ BN) | AN (w/ GN) | 65.20M | 43.2 | 64.0 | 47.3 | 38.8 | 61.3 | 41.6 |
| AOGNet12M | \( \mathbb{BN} \) | - | 33.09M | 40.7↓(1.3) | 61.4↓(1.7) | 44.6↓(1.5) | 36.4↓(1.4) | 58.4↓(1.7) | 38.8↓(1.6) |
| | AN (w/ BN) | - | 33.21M | 42.0↓(1.0) | 63.1↓(1.1) | 46.1↓(0.7) | 37.8↓(0.9) | 60.1↓(1.0) | 40.4↓(1.3) |
| | AN (w/ BN) | AN (w/ GN) | 33.26M | 43.0 | 64.2 | 46.8 | 38.7 | 61.1 | 41.7 |
| AOGNet40M | \( \mathbb{BN} \) | - | 60.73M | 43.4↓(0.7) | 64.2↓(0.9) | 47.5↓(0.7) | 38.5↓(0.5) | 61.0↓(1.0) | 41.4↓(0.4) |
| | AN (w/ BN) | - | 60.97M | 44.1↓(0.8) | 65.1↓(1.1) | 48.2↓(0.9) | 39.0↓(1.2) | 62.0↓(1.2) | 41.8↓(1.5) |
| | AN (w/ BN) | AN (w/ GN) | 61.02M | 44.9 | 66.2 | 49.1 | 40.2 | 63.2 | 43.3 |
+
+Table 4: Detection and segmentation results in MS-COCO val2017 [26]. All models use 2x lr scheduling (180k iterations). \( \mathbb{BN} \) means BN is frozen in fine-tuning for object detection. $\dagger$ means the models are not trained by us. All other models are trained from scratch under the same settings. The sequential improvements for the two AOGNet models indicate the importance of adding our AN in the backbone and the head respectively.
+
+works in Table 2 (with the vanilla setup) for fair comparisons in detection, since we compare with some models that are not trained by us from scratch and that use feature backbones pre-trained in a way similar to our vanilla setup, with on-par top-1 accuracy. In fine-tuning a network with AN (w/ BN) pre-trained on ImageNet, such as ResNet50-AN (w/ BN) in Table 2, we freeze the stem layer and the first stage, as commonly done in practice. For the remaining stages, we freeze the standardization component only (the learned mixture of affine transformations and the learned running mean and standard deviation), but allow the attention-weight sub-network to be fine-tuned. ii) Neck Backbones. We test the feature pyramid network (FPN) [25], which is widely used in practice. iii) Head Classifiers. We test two setups: (a) the vanilla setup, as done in GN [47] and SN [28]. Here we further have two settings: without vs. with feature normalization in the bounding box head classifier. The former is denoted by "-" in Table 4, and the latter is denoted by the corresponding type of feature normalization scheme in Table 4 (e.g., GN, SN and AN (w/ GN)). We experiment with AN (w/ GN) in the bounding box head classifier while keeping the GN in the mask head unchanged for simplicity; adding AN (w/ GN) in the mask head classifier may further improve performance. When adding AN (w/ GN) in the bounding box head, we adopt the same design choices except for "Choice 1, $A_{1}(\cdot)$" (Eqn. 11) used in learning attention weights. (b) The state-of-the-art setup, which is based on the cascade generalization of head classifiers [3] and does not include a feature normalization scheme, also denoted by "-" in Table 5. iv) RoI Operations. We test the RoIAlign operation [8].
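The selective-freezing scheme for fine-tuning can be sketched over a name-to-trainable-flag mapping (the parameter names below are hypothetical, chosen only for illustration of the rule: freeze running statistics and the mixture affine parameters, keep the attention subnetwork trainable):

```python
def set_finetune_trainability(params):
    """Given a dict mapping parameter names to trainable flags, mark the
    standardization statistics and the learned mixture of affine
    transformations as frozen, while leaving the attention-weight
    subnetwork trainable (schematic; names are illustrative)."""
    frozen_keys = ("running_mean", "running_var", "mixture_gamma", "mixture_beta")
    for name in params:
        params[name] = not any(k in name for k in frozen_keys)
    return params
```

In a real framework this would translate to setting `requires_grad` (or the equivalent) per parameter and keeping normalization layers in evaluation mode for the frozen statistics.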
+
+| Architecture | Backbone | Head | #Params | \( AP^{bb} \) | \( AP_{50}^{bb} \) | \( AP_{75}^{bb} \) | \( AP^{m} \) | \( AP_{50}^{m} \) | \( AP_{75}^{m} \) |
|---|---|---|---|---|---|---|---|---|---|
| ResNet101 | BN | - | 96.32M | 44.4↓(1.4) | 62.5↓(1.8) | 48.4↓(1.4) | 38.2↓(1.4) | 59.7↓(2.0) | 41.3↓(1.4) |
| | AN (w/ BN) | - | 96.77M | 45.8 | 64.3 | 49.8 | 39.6 | 61.7 | 42.7 |
| AOGNet40M | BN | - | 92.35M | 45.6↓(0.9) | 63.9↓(1.1) | 49.7↓(1.1) | 39.3↓(0.7) | 61.2↓(1.1) | 42.7↓(0.4) |
| | AN (w/ BN) | - | 92.58M | 46.5 | 65.0 | 50.8 | 40.0 | 62.3 | 43.1 |
+
+Table 5: Results in MS-COCO using the cascade variant [3] of Mask R-CNN.
+
+Result Summary. The results are summarized in Table 4 and Table 5. Compared with the vanilla BN that is frozen in fine-tuning, our AN (w/ BN) improves performance by a large margin in terms of both bounding box AP and mask AP (1.8% & 1.6% for MobileNetV2, 1.6% & 1.2% for ResNet50, 1.7% & 1.4% for ResNet101, 1.3% & 1.4% for AOGNet12M, and 0.7% & 0.5% for AOGNet40M). This shows the advantages of the self-attention based dynamic and adaptive control of the mixture of affine transformations (even though the transformations themselves are frozen) in fine-tuning.
+
+With AN further integrated in the bounding box head classifier of Mask R-CNN and trained from scratch, we also obtain better performance than GN and SN. Compared with the vanilla GN [47], our AN (w/ GN) improves bounding box and mask AP by $1.3\%$ and $1.7\%$ for ResNet50, and by $1.4\%$ and $2.2\%$ for ResNet101. Compared with SN [28], which outperforms the vanilla GN in ResNet50, our AN (w/ GN) is also better, by $0.6\%$ bounding box AP and $0.9\%$ mask AP respectively. Slightly smaller improvements are observed with AOGNets. Similar in spirit to the ImageNet experiments, we want to verify whether the advantages of our AN disappear when state-of-the-art head classifier designs for R-CNN, such as the widely used cascade R-CNN [3], are adopted. Table 5 shows that similar improvements are obtained with ResNet101 and AOGNet40M.
+
+# 5 Conclusion
+
+This paper presents Attentive Normalization (AN), which aims to harness the best of feature normalization and feature attention in a single lightweight module. AN learns a mixture of affine transformations and uses their weighted sum, via a self-attention module, to re-calibrate standardized features in a dynamic and adaptive way. AN provides a strong alternative to the Squeeze-and-Excitation (SE) module. In experiments, AN is tested with BN and GN as the feature normalization backbones, on both ImageNet-1000 and MS-COCO, using four representative networks (ResNets, DenseNets, MobileNetsV2 and AOGNets). It consistently obtains better performance, often by a large margin, than the vanilla feature normalization schema and some state-of-the-art variants.
+
+# Acknowledgement
+
+This work is supported in part by NSF IIS-1909644, ARO Grant W911NF1810295, NSF IIS-1822477 and NSF IUSE-2013451. The views presented in this paper are those of the authors and should not be interpreted as representing any funding agencies.
+
+# References
+
+1. Ba, L.J., Kiros, R., Hinton, G.E.: Layer normalization. CoRR abs/1607.06450 (2016), http://arxiv.org/abs/1607.06450 1, 3
+2. Brock, A., Donahue, J., Simonyan, K.: Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 (2018) 2, 4, 10
+3. Cai, Z., Vasconcelos, N.: Cascade R-CNN: delving into high quality object detection. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. pp. 6154-6162 (2018). https://doi.org/10.1109/CVPR.2018.00644, http://openaccess.thecvf.com/content_cvpr_2018/html/Cai_Cascade_R-CNN_Delving_CVPR_2018_paper.html 9, 13, 14
+4. Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., Zhang, Z., Cheng, D., Zhu, C., Cheng, T., Zhao, Q., Li, B., Lu, X., Zhu, R., Wu, Y., Dai, J., Wang, J., Shi, J., Ouyang, W., Loy, C.C., Lin, D.: MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155 (2019) 9, 12
+5. Deecke, L., Murray, I., Bilen, H.: Mode normalization. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 (2019), https://openreview.net/forum?id=HyN-M2Rctm 1, 3, 4
+6. Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., Courville, A.C.: Adversarially learned inference. CoRR abs/1606.00704 (2016), http://arxiv.org/abs/1606.00704 2, 4
+7. Girshick, R.: Fast R-CNN. In: Proceedings of the International Conference on Computer Vision (ICCV) (2015) 12
+8. He, K., Gkioxari, G., Dollar, P., Girshick, R.B.: Mask R-CNN. In: IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. pp. 2980-2988 (2017). https://doi.org/10.1109/ICCV.2017.322, https://doi.org/10.1109/ICCV.2017.322 9, 12, 13
+9. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. pp. 1026-1034 (2015). https://doi.org/10.1109/ICCV.2015.123, https://doi.org/10.1109/ICCV.2015.123 11
+10. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016) 5, 9, 11
+11. He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., Li, M.: Bag of tricks for image classification with convolutional neural networks. CoRR abs/1812.01187 (2018), http://arxiv.org/abs/1812.01187 11
+12. Howard, A., Sandler, M., Chu, G., Chen, L., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q.V., Adam, H.: Searching for mobilenetv3. CoRR abs/1905.02244 (2019), http://arxiv.org/abs/1905.02244 8
+13. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. CoRR abs/1709.01507 (2017), http://arxiv.org/abs/1709.01507 2, 4, 5, 7, 11
+14. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017) 9
+15. Huang, L., Liu, X., Lang, B., Yu, A.W., Wang, Y., Li, B.: Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds
+
+in deep neural networks. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018. pp. 3271-3278 (2018), https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17072 3
+16. Huang, L., Yang, D., Lang, B., Deng, J.: Decorrelated batch normalization. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. pp. 791-800 (2018) 1, 3
+17. Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., Liu, W.: Ccnet: Crisscross attention for semantic segmentation. CoRR abs/1811.11721 (2018), http://arxiv.org/abs/1811.11721 2
+18. Ioffe, S.: Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In: Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA. pp. 1945-1953 (2017) 1, 3
+19. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Blei, D., Bach, F. (eds.) Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pp. 448-456. JMLR Workshop and Conference Proceedings (2015), http://jmlr.org/proceedings/papers/v37/ioffe15.pdf 1, 3, 8, 9
+20. Jia, S., Chen, D., Chen, H.: Instance-level meta normalization. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. pp. 4865-4873 (2019), http://openaccess.thecvf.com/content_CVPR_2019/html/Jia_Instance-Level_Meta_Normalization_CVPR_2019_paper.html 1, 2, 4
+21. Kalayeh, M.M., Shah, M.: Training faster by separating modes of variation in batch-normalized models. IEEE Transactions on Pattern Analysis and Machine Intelligence pp. 1-1 (2019). https://doi.org/10.1109/TPAMI.2019.2895781 1, 3, 4
+22. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948 (2018) 2, 4
+23. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Neural Information Processing Systems (NIPS). pp. 1106-1114 (2012) 5
+24. Li, X., Song, X., Wu, T.: Aognets: Compositional grammatical architectures for deep learning. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. pp. 6220-6230 (2019) 9
+25. Lin, T., Dollár, P., Girshick, R.B., He, K., Hariharan, B., Belongie, S.J.: Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. pp. 936-944 (2017). https://doi.org/10.1109/CVPR.2017.106 13
+26. Lin, T., Maire, M., Belongie, S.J., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: common objects in context. CoRR abs/1405.0312 (2014), http://arxiv.org/abs/1405.0312 8, 13
+27. Loshchilov, I., Hutter, F.: SGDR: stochastic gradient descent with restarts. CoRR abs/1608.03983 (2016), http://arxiv.org/abs/1608.03983 11
+28. Luo, P., Ren, J., Peng, Z.: Differentiable learning-to-normalize via switchable normalization. CoRR abs/1806.10779 (2018), http://arxiv.org/abs/1806.10779 2, 3, 9, 11, 12, 13, 14
+
+29. Miyato, T., Koyama, M.: cgans with projection discriminator. arXiv preprint arXiv:1802.05637 (2018) 2
+30. Pan, X., Zhan, X., Shi, J., Tang, X., Luo, P.: Switchable whitening for deep representation learning. In: 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. pp. 1863-1871. IEEE (2019). https://doi.org/10.1109/ICCV.2019.00195, https://doi.org/10.1109/ICCV.2019.00195 9
+31. Park, T., Liu, M., Wang, T., Zhu, J.: Semantic image synthesis with spatially-adaptive normalization. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. pp. 2337-2346 (2019) 2, 4
+32. Peng, C., Xiao, T., Li, Z., Jiang, Y., Zhang, X., Jia, K., Yu, G., Sun, J.: Megdet: A large mini-batch object detector. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. pp. 6181-6189 (2018) 3
+33. Perez, E., de Vries, H., Strub, F., Dumoulin, V., Courville, A.C.: Learning visual reasoning without strong priors. CoRR abs/1707.03017 (2017), http://arxiv.org/abs/1707.03017 2, 4
+34. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Neural Information Processing Systems (NIPS) (2015) 12
+35. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vision (IJCV) 115(3), 211-252 (2015). https://doi.org/10.1007/s11263-015-0816-y 8
+36. Salimans, T., Kingma, D.P.: Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. p. 901 (2016) 3
+37. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4510-4520 (2018) 9
+38. Santurkar, S., Tsipras, D., Ilyas, A., Madry, A.: How does batch normalization help optimization? In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada. pp. 2488-2498 (2018), http://papers.nips.cc/paper/7515-how-does-batch-normalization-help-optimization 3
+39. Shao, W., Meng, T., Li, J., Zhang, R., Li, Y., Wang, X., Luo, P.: Ssn: Learning sparse switchable normalization via sparsestmax. CoRR abs/1903.03793 (2019), http://arxiv.org/abs/1903.03793 2, 3, 4
+40. Sun, W., Wu, T.: Image synthesis from reconfigurable layout and style. In: International Conference on Computer Vision, ICCV (2019) 2, 4
+41. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CoRR abs/1512.00567 (2015), http://arxiv.org/abs/1512.00567 11
+42. Ulyanov, D., Vedaldi, A., Lempitsky, V.S.: Instance normalization: The missing ingredient for fast stylization. CoRR abs/1607.08022 (2016), http://arxiv.org/abs/1607.08022 1, 3
+43. de Vries, H., Strub, F., Mary, J., Larochelle, H., Pietquin, O., Courville, A.C.: Modulating early visual processing by language. In: Advances in
+
+Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA. pp. 6597-6607 (2017), http://papers.nips.cc/paper/7237-modulating-early-visual-processing-by-language 2, 4
+44. Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Wang, X., Tang, X.: Residual attention network for image classification. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. pp. 6450-6458 (2017). https://doi.org/10.1109/CVPR.2017.683, https://doi.org/10.1109/CVPR.2017.683 4
+45. Wang, X., Girshick, R.B., Gupta, A., He, K.: Non-local neural networks. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. pp. 7794-7803 (2018). https://doi.org/10.1109/CVPR.2018.00813, http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Non-Local_Neural_Networks_CVPR_2018_paper.html 2
+46. Woo, S., Park, J., Lee, J., Kweon, I.S.: CBAM: convolutional block attention module. In: Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII. pp. 3-19 (2018). https://doi.org/10.1007/978-3-030-01234-2_1, https://doi.org/10.1007/978-3-030-01234-2_1 4
+47. Wu, Y., He, K.: Group normalization. In: Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIII. pp. 3-19 (2018). https://doi.org/10.1007/978-3-030-01261-8_1, https://doi.org/10.1007/978-3-030-01261-8_1 1, 3, 7, 9, 11, 12, 13, 14
+48. Zhang, H., Cissé, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings (2018), https://openreview.net/forum?id=r1Ddp1-Rb 11
+# Attentive Prototype Few-shot Learning with Capsule Network-based Embedding
+
+Fangyu Wu$^{1,2}$[0000-0001-9618-8965], Jeremy S. Smith$^{2}$[0000-0002-0212-2365], Wenjin Lu$^{1}$, Chaoyi Pang$^{3}$[0000-0001-7038-3789], and Bailing Zhang[0000-0001-5762-5763]
+
+$^{1}$ Department of Computer Science and Software Engineering, Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu Province, China
+{fangyu.wu,wenjin.lu}@xjtlu.edu.cn
+
+$^{2}$ Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, United Kingdom
+J.S.Smith@liverpool.ac.uk
+
+$^{3}$ School of Computer and Data Engineering, Zhejiang University Ningbo Institute of Technology, Ningbo, Zhejiang Province, China
+{chaoyi.pang,bailing.zhang}@nit.zju.edu.cn
+
+Abstract. Few-shot learning, namely recognizing novel categories from a very small number of training examples, is a challenging area of machine learning research. Traditional deep learning methods require massive training data to tune their huge number of parameters, which is often impractical and prone to over-fitting. In this work, we build on the well-known prototypical network to achieve better performance. Our contributions include (1) a new embedding structure that encodes relative spatial relationships between features by applying a capsule network; (2) a new triplet loss designed to enhance the semantic feature embedding, pulling similar samples close to each other while pushing dissimilar samples farther apart; and (3) an effective non-parametric classifier, termed the attentive prototype, in place of the simple prototypes used in current few-shot learning. The proposed attentive prototype aggregates all of the instances in a support class, weighted by their importance as defined by the reconstruction error for a given query. The reconstruction error allows the classification posterior probability to be estimated, which corresponds to a classification confidence score. Extensive experiments on three benchmark datasets demonstrate that our approach is effective for the few-shot classification task.
+
+Keywords: Few-shot learning $\cdot$ Meta learning $\cdot$ Capsule network $\cdot$ Feature embedding $\cdot$ Attentive prototype learning
+
+# 1 Introduction
+
+Deep learning has been greatly advanced in recent years, with many successful applications in image processing, speech processing, natural language processing
+
+and other fields. However, these successes usually rely on access to a large training dataset. If the amount of training data is not large enough, a deep neural network cannot be sufficiently trained. It is therefore important to develop deep learning methods for image recognition with a small number of samples, and to enhance the adaptability of deep learning models across problem domains.
+
+Few-shot learning is one of the most promising research areas targeting deep learning models for various tasks with a very small amount of training data [24], [29], [31], [34], [37], [39], i.e., classifying unseen data instances (query examples) into a set of new categories, given just a small number of labeled instances in each class (support examples). The common scenario is a support set with only $1 \sim 10$ labeled examples per class. In stark contrast, general classification problems with deep learning models [15], [38] often require thousands of examples per class. Moreover, in few-shot learning the classes of the training and testing sets are drawn from two disjoint sets, while in traditional classification problems they are the same. A key challenge in few-shot learning is to make the best use of the limited data available in the support set in order to find the right generalizations as required by the task.
+
+Few-shot learning is often formulated as a meta-learning problem, with an emphasis on learning prior knowledge shared across a distribution of tasks [39], [21], [34]. There are two sub-tasks in meta-learning: an embedding that maps the input into a feature space, and a base learner that maps the feature space to task variables. As a simple, efficient and widely used few-shot learning algorithm, the prototypical network [34] tries to solve the problem by learning a metric space in which to perform classification. A query point (new point) is classified based on its distance to the prototypical representation created for each class. While the approach is extensively applied, it has a number of limitations that we would like to address with better solutions.
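The vanilla prototypical-network classification rule described above can be sketched as follows (a numpy illustration using squared Euclidean distance, one common choice of metric):

```python
import numpy as np

def prototypical_classify(query, support, labels, n_classes):
    """Vanilla prototypical-network classifier: each class prototype is
    the mean of its support embeddings; the query is assigned to the class
    with the nearest (squared Euclidean) prototype."""
    protos = np.stack([support[labels == c].mean(axis=0)
                       for c in range(n_classes)])
    d2 = ((protos - query) ** 2).sum(axis=1)   # distance to each prototype
    return int(d2.argmin())
```

In an actual episode, `support` and `query` would be embeddings produced by the learned feature extractor rather than raw inputs.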
+
+Firstly, the prototypical representations [34], [39], generated by deep convolutional neural networks, cannot account for the spatial relations between the parts of an image and are too sensitive to orientation. Secondly, a prototypical network [34] divides the output metric space into disjoint polygons in which the nearest neighbor of any interior point is the polygon's pivot. This partition is too coarse to reflect the various noise effects in the data, compromising the discrimination and expressiveness of the prototype. It is well known that the performance of such simple distance-based classification is severely degraded by outliers, especially when the training sample size is small [7].
+
+Motivated by these observations, we improve the prototypical network by proposing a capsule network [32] based embedding model and reconstruction-based prototypical learning within the framework of meta-learning. The proposed scheme has two main components: a capsule network-based embedding module that creates feature representations, and an improved non-parametric classification scheme with an attentive prototype for each class in the support set, obtained by attentive aggregation over the representations of its support instances, where the weights are calculated from the reconstruction errors for the query instance.
+
+The proposed network is trained with a metric learning algorithm using an improved triplet-like loss, which generalizes the triplet network [33] to allow joint comparison with $K$ negative prototypes in each mini-batch. This makes the feature embedding learning process more consistent with the few-shot classification problem. We further propose a semi-hard mining technique to sample informative hard triplets, which speeds up convergence and stabilizes the training procedure.
+
+In summary, we propose a new embedding approach for few-shot learning based on a capsule network, which is capable of encoding the part-whole relationships between visual entities. An improved routing procedure using the DeepCaps mechanism [27] is designed to implement the embedding. With a class-specific output capsule, the proposed network better preserves the semantic feature representation and reduces disturbance from irrelevant noisy information. The proposed attentive prototype scheme is query-dependent, rather than simply averaging the feature points of a class as in the vanilla prototypical network: all feature points from the support set are attentively weighted, with the weights determined entirely by the affinity relations between feature points in the support set and the query. By using reconstruction error as an efficient expression of this affinity relation, support points near the query feature point acquire more attention in the calculation of the weights.
+
+The proposed approach has been experimentally evaluated on few-shot image classification tasks using three benchmark datasets, i.e. the miniImageNet, tieredImageNet and Fewshot-CIFAR100 datasets. The empirical results verify the superiority of our method over the state-of-the-art approaches. The main contributions of our work are two-fold:
+
+- We put forward a new few-shot classification approach with a capsule-based model, which combines 3D convolution with the dynamic routing procedure to obtain a semantic feature representation while preserving the spatial information between visual entities.
+- We propose a novel attentive prototype concept to take account of all the instances in a given support class, with each instance being weighted by the reconstruction errors between the query and prototype candidates from the support set. The attentive prototype is robust to outliers by design and also allows the performance to be improved by refraining from making predictions in the absence of sufficient confidence.
+
+# 2 Related work
+
+# 2.1 Few-shot learning
+
+Few-shot learning aims to classify novel visual classes when very few labeled samples are available [3], [4]. Current methods usually tackle the challenge using meta-learning or metric-learning approaches, with representative works elaborated below.
+
+Metric learning methods aim to learn a task-invariant metric, which provides an embedding space for learning from few-shot examples. Vinyals et al. [39] introduced the concept of episodic training in few-shot learning, where metric learning-based approaches learn a distance metric between a test example and the training examples. Prototypical networks [34] learn a metric space in which classification can be performed by computing distances to prototype representations of each class. The learned embedding model maps images of the same class close to each other while keeping different classes far apart; the mean of the embedded support samples is used as the prototype to represent each class. The work in [18] goes beyond this by incorporating the context of the entire support set, looking across the classes to identify task-relevant features.
+
+There are also interesting works that explore different metrics for the embedding space to provide more complex comparisons between support and query features. For example, the relation module proposed in [37] calculates relation scores between query and support images to classify unlabeled query images. Kim et al. [12] proposed an edge-labeling graph neural network (EGNN) for few-shot classification. Metric-based task-specific feature representation learning has also been presented in many related works. Our work is a further exploration of the prototype-based approaches [34], [37], aiming to enhance the learned embedding space by encoding the spatial relationships between features; the embedding space then generates attentive prototype representations in a query-dependent scheme.
+
+# 2.2 Capsule Networks
+
+The capsule network [11] is a new type of neural network architecture proposed by Geoffrey Hinton, with the main motivation of addressing some of the shortcomings of convolutional neural networks (CNNs). For example, the pooling layers of CNNs lose the location information of relevant features, one of the so-called instantiation parameters that characterize an object. Other instantiation parameters, such as scale and rotation, are also poorly represented in CNNs. A capsule network handles these instantiation parameters explicitly, with each capsule representing an object or a part of an object. More specifically, a capsule network replaces the convolution-kernel mechanism of CNNs with groups of neurons that encode the spatial information and the probability of the existence of objects. The length of a capsule's output vector represents the probability that the corresponding feature exists in the image, while the orientation of the vector represents its instantiation parameters.
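As a concrete illustration of how a capsule's vector length can be read as an existence probability, the widely used "squash" non-linearity from Sabour et al. [32] maps a raw capsule vector so that its length lies in $(0, 1)$ while its orientation is preserved. The NumPy sketch below is our own illustration (the function name and values are hypothetical), not code from the paper:

```python
import numpy as np

def squash(s, eps=1e-9):
    """Squash non-linearity from Sabour et al.: shrinks a capsule vector so
    its length lies in (0, 1) and can act as an existence probability,
    while its orientation (direction) is preserved."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = squash(np.array([3.0, 4.0]))  # raw length 5
# squashed length = 25 / (1 + 25) ~ 0.96: a confident "feature present"
```

A short input vector is squashed toward length 0 (feature likely absent), while a long vector approaches, but never reaches, length 1.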
+
+Sabour et al. [32] first proposed a dynamic routing algorithm for capsule networks in 2017 for the bottom-up feature integration, the essence of which is the realization of a clustering algorithm for the information transmission in the model. In [32], a Gaussian mixture model (GMM) was integrated into the feature integration process to adjust network parameters through EM routing.
+
+Since the seminal works [11], [32], a number of approaches have been proposed to implement and improve the capsule architecture [13], [17], [27], [43].
+
+Many applications of capsule networks have been attempted, for example, intent detection [40], text classification [25] and computer vision [41], [42]. A sparse, unsupervised capsule network [28] was shown to generalize better than supervised masking, while potentially enabling deeper capsule networks. Rajasegaran et al. [27] proposed a deep capsule network architecture called DeepCaps that adapts the original routing algorithm to 3D convolutions and improves its performance on more complex datasets.
+
+# 3 Method
+
+# 3.1 Approach Details
+
+In this section, we first revisit the DeepCaps network [27], which is designed for more complex image datasets. We then extend it to the scenario of few-shot learning and describe the proposed algorithm in detail.
+
+DeepCaps Revisited DeepCaps is a deep capsule network architecture proposed in [27] to improve the performance of capsule networks on more complex image datasets. It extends the dynamic routing algorithm in [32] to stacked multiple layers, essentially using 3D convolution to learn the spatial information between capsules. The model consists of four main modules: skip-connected CapsCells, 3D convolutional CapsCells, a fully-connected capsule layer and a decoder network. The skip-connected CapsCells contain three ConvCaps layers; the first layer's output is convolved and skip-connected to the last layer's output. The motivation behind the skip connections, borrowed from residual networks, is to sustain a sound gradient flow in a deep model. An element-wise layer combines the outputs of the two capsule layers after the skip connection.
+
+DeepCaps has a unit with a ConvCaps3D layer, in which the number of routing iterations is kept at 3. Then, before dynamic routing, the output of ConvCaps is flattened and concatenated with the capsule outputs, followed by 3D routing (in CapsCell 3). Intuitively, this step helps extend the model to a wide range of datasets. For a dataset of images with less rich information, such as MNIST, the low-level capsules from cell 1 or cell 2 are sufficient, while for a more complex dataset, the deeper 3D ConvCaps are needed to capture the rich information content. Once all capsules are collected and concatenated, they are routed to the class capsules through the fully-connected capsule layer.
+
+Network Architecture As explained in the Introduction, our proposed model has two parts: (1) a modified DeepCaps network with an improved triplet-like loss that learns the deep embedding space, and (2) a non-parametric classification scheme that produces a prototype vector for each class candidate, which is
+
+
+Fig. 1. Framework of the proposed method for few-shot learning. We perform joint end-to-end training of the Embedding Module (modified DeepCaps) together with the Prototypical Learning via an improved triplet-like loss from the training dataset. The well-learned embedding features are used to compute the distances among the query images and the attentive prototype generated from the support set. The final classification is performed by calculating the posterior probability for the query instance.
+
+derived from the attentive aggregation over the representations of its support instances, where the weights are calculated using the reconstruction errors for the query instance from respective support instances in the embedding space. The final classification is performed by calculating the posterior probability for the query instance based on the distances between the embedding vectors of the query and the attentive prototype. Figure 1 schematically illustrates an overview of our approach to few-shot image classification. Each of the parts is described in detail below.
+
+Embedding module. We follow the practice of episodic training in [39] which is the most popular and effective meta learning methodology [34], [37]. We construct support set $S$ and query set $Q$ from $D_{train}$ in each episode to train the model.
+
+$$
+\begin{array}{l} S = \left\{ s_{1}, s_{2}, \dots, s_{K} \right\}, \\ Q = \left\{ q_{1}, q_{2}, \dots, q_{N} \right\}, \tag{1} \end{array}
+$$
+
+where $K$ and $N$ denote the number of samples per class in the support set and the query set, respectively. As shown in Fig. 2, we first feed the samples in $S$ and $Q$ into the convolution layer and CapsCells; the collected capsules are then routed to the class capsules after the Flat Caps layer. Here, the decision making happens via the $L_{2}$ norm, and the input image is encoded into the final capsule vector. The length of a capsule's output vector represents the probability that the object represented by the capsule exists in the current input. We denote the class capsules by $P \in Y^{b \times d}$, which consists of the activity vectors for all classes, where $b$ and $d$ represent the number of classes in the final class capsule and the capsule dimension, respectively. Then, we only feed the activity vector of the predicted class, $P_{m} \in Y^{1 \times d}$, into the final embedding space in our setting, where
+
+
+Fig. 2. The architecture of the embedding module, which obtains only the activity vector of the predicted class.
+
+$m = \operatorname{argmax}_i(\| P_i\| _2^2)$ . The embedding space acts as a better regularizer for the capsule network, since it is forced to learn the activity vectors jointly within a constrained $Y^{d}$ space. The margin loss used in DeepCaps enhances the class probability of the true class while suppressing the class probabilities of the other classes. In this paper, we instead propose an improved triplet-like loss based on the attentive prototype to train the embedding module and learn more discriminative features.
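As a minimal NumPy sketch (our own illustration; the function name is hypothetical), selecting the activity vector of the predicted class $m = \operatorname{argmax}_i \| P_i \|_2^2$ amounts to:

```python
import numpy as np

def predicted_activity_vector(P):
    """Given the class capsules P (b x d), return the index and activity
    vector of the predicted class m = argmax_i ||P_i||_2^2; only this
    vector is fed into the final embedding space."""
    m = int(np.argmax(np.sum(P ** 2, axis=1)))
    return m, P[m]

P = np.array([[0.1, 0.2],    # class 0, squared length 0.05
              [0.6, 0.7],    # class 1, squared length 0.85 (largest)
              [0.3, 0.1]])   # class 2, squared length 0.10
m, v = predicted_activity_vector(P)  # m == 1, v == [0.6, 0.7]
```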
+
+Attentive prototype. The prototypical network in [34] computes an $M$-dimensional feature representation $p_i \in \mathbb{R}^M$ , or prototype, of each class through an embedding function $f_\phi: \mathbb{R}^D \to \mathbb{R}^M$ with learnable parameters $\phi$ . Each prototype is the mean vector of the embedded support points belonging to its class:
+
+$$
+p _ {i} = \frac {1}{\left| s _ {i} \right|} \sum_ {\left(x _ {i}, y _ {i}\right) \in s _ {i}} f _ {\phi} \left(x _ {i}\right) \tag {2}
+$$
+
+where each $x_{i} \in s_{i}$ is the $D$ -dimensional input feature vector of an example from class $i$ . Given a distance function $d: \mathbb{R}^{M} \times \mathbb{R}^{M} \to [0, +\infty)$ , prototypical networks produce a distribution over classes for a query point $x$ based on a softmax over distances to the prototypes in the embedding space:
+
+$$
+p _ {\phi} (y = t | x) = \frac {\exp \left(- d \left(f _ {\phi} (x) , p _ {t}\right)\right)}{\sum_ {t ^ {\prime}} \exp \left(- d \left(f _ {\phi} (x) , p _ {t ^ {\prime}}\right)\right)} \tag {3}
+$$
+
+Learning proceeds by minimizing the negative log-probability $J(\phi) = -\log p_{\phi}(y = t|x)$ of the true class $t$ via stochastic gradient descent (SGD). Most prototypical networks for few-shot learning use simple non-parametric classifiers, such as kNN. It is well known that non-parametric classifiers are affected by outliers [6], a problem that is particularly serious when the number of samples is small, which is exactly the scenario addressed by few-shot learning. A practical and reliable classifier should be robust to outliers. Motivated by this observation, we propose an improved algorithm based on the local mean classifier [22]. Given all prototype instances of a class, we calculate their reconstruction errors for the query instance, which are then used to compute a weighted average of the prototype instances. The new prototype aggregates attentive contributions from all of the instances. The reconstruction error between the new prototype and the query instance not only provides a discrimination criterion for the classes, but also serves as a reference for the reliability of the classification.
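For reference, the vanilla prototype computation and classification of Eqs. 2 and 3 can be sketched in a few lines of NumPy (our own illustration, using squared Euclidean distance; all names are hypothetical):

```python
import numpy as np

def prototypes(support_emb):
    """Eq. 2: each class prototype is the mean of its embedded support points.
    support_emb maps class label -> (K, M) array of embedded samples."""
    return {c: e.mean(axis=0) for c, e in support_emb.items()}

def class_probs(query_emb, protos):
    """Eq. 3: softmax over negative distances to the prototypes."""
    classes = sorted(protos)
    logits = np.array([-np.sum((query_emb - protos[c]) ** 2) for c in classes])
    p = np.exp(logits - logits.max())          # numerically stable softmax
    return dict(zip(classes, p / p.sum()))

support = {0: np.array([[0.0, 0.0], [0.2, 0.0]]),
           1: np.array([[1.0, 1.0], [1.2, 1.0]])}
probs = class_probs(np.array([0.1, 0.1]), prototypes(support))
# the query lies near the class-0 prototype, so probs[0] > probs[1]
```

Note that an outlier in the support set shifts the mean prototype directly, which is exactly the weakness the attentive prototype is designed to fix.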
+
+More specifically, with $K$ support samples $\{x_{i1}, x_{i2}, \ldots, x_{iK}\}$ selected for class $i$ , a membership $\gamma_{ij}$ can be defined for a query instance $q$ by employing normalized Gaussian functions with the samples in support sets, e.g.,
+
+$$
+\gamma_{ij} = \frac{\exp\left(-\frac{\|q - x_{ij}\|^{2}}{2 \sigma_{i}^{2}}\right)}{\sum_{l=1}^{K} \exp\left(-\frac{\|q - x_{il}\|^{2}}{2 \sigma_{i}^{2}}\right)}, \quad j = 1, \dots, K, \quad i = 1, \dots, M \tag{4}
+$$
+
+where $x_{ij}$ is the $j$ -th sample in class $i$ , and $\sigma_i$ is the width of the Gaussian defined for class $i$ ; we set $\sigma_i$ to a relatively small value (e.g., $\sigma_i = 0.1$ ).
+
+Then, for each class $i$ , an attentive prototype pattern $\hat{q}_i$ can be defined for a query sample $q$
+
+$$
+\hat{q}_{i} = \frac{\sum_{j=1}^{K} \gamma_{ij} x_{ij}}{\sum_{l=1}^{K} \gamma_{il}}, \quad i = 1, \dots, M \tag{5}
+$$
+
+where $\gamma_{ij}$ is defined in Eq. 4, and $\hat{q}_i$ can be considered the generalized support sample from class $i$ for the query instance $q$ . We want to ensure that an image $q^a$ (anchor) of a specific class in the query set is closer to the attentive prototype of the positive class, $\hat{q}^p$ (positive), than to any of the multiple negative attentive prototypes $\hat{q}^n$ (negative).
+
+$$
+\left\| q ^ {a} - \hat {q} ^ {p} \right\| _ {2} ^ {2} + \alpha < \left\| q ^ {a} - \hat {q} ^ {n} \right\| _ {2} ^ {2}, \forall q ^ {a} \in Q. \tag {6}
+$$
+
+where $\alpha$ is a margin enforced between positive and negative pairs, and the query set $Q$ has cardinality $MN$ . The loss to be minimized is then:
+
+$$
+\sum_{m=1}^{MN} \left[ \left\| f\left(q_{m}^{a}\right) - f\left(\hat{q}_{m}^{p}\right) \right\|_{2}^{2} - \left\| f\left(q_{m}^{a}\right) - f\left(\hat{q}_{m}^{n}\right) \right\|_{2}^{2} + \alpha \right]_{+} \tag{7}
+$$
+
+For image classification, a query image can be classified based on the comparison of the errors between the reconstructed vectors and the presented image. That is, a query image $q$ is assigned to class $m^*$ if
+
+$$
+m ^ {*} = \underset {m} {\operatorname {a r g m i n}} \operatorname {e r r} _ {m} \tag {8}
+$$
+
+where $err_{m} = ||q - \hat{q}_{m}||, m = 1,\dots,M.$
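Putting Eqs. 4, 5 and 8 together, the attentive prototype and the reconstruction-error classification can be sketched as follows (a NumPy illustration with hypothetical names; the Gaussian memberships automatically down-weight outliers within each support class):

```python
import numpy as np

def attentive_prototype(q, X, sigma=0.1):
    """Eqs. 4-5: query-dependent prototype for one class.
    q: (M,) embedded query; X: (K, M) embedded support samples of the class.
    Support points near the query receive exponentially larger weights."""
    d2 = np.sum((X - q) ** 2, axis=1)
    gamma = np.exp(-d2 / (2.0 * sigma ** 2))
    gamma = gamma / gamma.sum()        # normalised memberships (Eq. 4)
    return gamma @ X                   # attentive aggregation (Eq. 5)

def classify(q, support):
    """Eq. 8: assign q to the class whose attentive prototype has the
    smallest reconstruction error ||q - q_hat_m||."""
    errs = {c: np.linalg.norm(q - attentive_prototype(q, X))
            for c, X in support.items()}
    return min(errs, key=errs.get)

support = {0: np.array([[0.0, 0.0], [0.1, 0.1], [0.9, 0.9]]),  # one outlier
           1: np.array([[1.0, 1.0], [1.1, 0.9], [0.95, 1.05]])}
q = np.array([0.05, 0.05])
# the outlier (0.9, 0.9) gets a near-zero weight, so q is assigned class 0
```

The reconstruction error of the winning class also gives a confidence estimate: a large minimum error signals that the prediction should be treated with caution.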
+
+Improved Triplet-like loss. To ensure fast convergence, it is crucial to select triplets that violate the triplet constraint in Eq. 7. The traditional triplet loss interacts with only one negative sample (and equivalently one negative class) per network update, whereas few-shot classification requires comparing the query image with multiple different classes. Hence, the triplet loss may not be effective for feature embedding learning, particularly when there are several classes to handle in the few-shot setting. Inspired by [1], [35], we generalize the traditional triplet loss to allow joint comparison with $E$ negative prototypes, instead of just one, in each mini-batch. This extension makes the feature comparison more effective and faithful to the few-shot learning procedure, since in each update the network can compare a sample with multiple negative classes.
+
+In particular, we randomly choose $E$ negative prototypes $\hat{q}^{n_e}$ , $e = \{1,2,\dots,E\}$ , to form a triplet. Accordingly, the optimization objective becomes:
+
+$$
+\mathcal{L}\left(q_{m}^{a}, \hat{q}_{m}^{p}, \hat{q}_{m}^{n_{e}}\right) = \sum_{m=1}^{MN} \frac{1}{E} \sum_{e=1}^{E} \left[ \left\| f\left(q_{m}^{a}\right) - f\left(\hat{q}_{m}^{p}\right) \right\|_{2}^{2} - \left\| f\left(q_{m}^{a}\right) - f\left(\hat{q}_{m}^{n_{e}}\right) \right\|_{2}^{2} + \alpha \right]_{+} \tag{9}
+$$
+
+For each sample $q_{m}^{a}$ in the query set, the optimization drives the distance to the negative prototypes $\hat{q}_{m}^{n_e}$ to be larger than the distance to the positive prototype $\hat{q}_{m}^{p}$ in the feature space. For each anchor sample $q_{m}^{a}$ , we learn the positive prototype $\hat{q}_{m}^{p}$ from the support set of the same class as $q_{m}^{a}$ , and further randomly select $E$ negative prototypes whose classes differ from that of $q_{m}^{a}$ . Compared with the traditional triplet loss, each forward update of our improved triplet-like loss includes more inter-class variation, making the learnt feature embedding more discriminative for samples from different classes.
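Under these assumptions (hypothetical names, NumPy for clarity), the per-anchor term of Eq. 9 with $E$ negative prototypes can be sketched as:

```python
import numpy as np

def multi_negative_triplet_loss(f_a, f_p, f_negs, alpha=0.2):
    """Inner term of Eq. 9 for one anchor: average hinge over E negatives.
    f_a: (M,) anchor embedding; f_p: (M,) positive attentive prototype;
    f_negs: (E, M) stacked negative attentive prototypes."""
    d_pos = np.sum((f_a - f_p) ** 2)
    d_negs = np.sum((f_a - f_negs) ** 2, axis=1)
    return float(np.mean(np.maximum(d_pos - d_negs + alpha, 0.0)))

f_a, f_p = np.zeros(2), np.array([0.1, 0.0])
far_negs = np.array([[1.0, 0.0], [0.0, 1.0]])
multi_negative_triplet_loss(f_a, f_p, far_negs)  # margin satisfied: loss 0.0
```

The full loss of Eq. 9 is this term summed over all $MN$ query anchors in the episode.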
+
+Mining hard triplets is an important part of metric learning with the triplet loss, as otherwise training soon stagnates [10]. This is because, once the model begins to converge, the embedding space relatively quickly learns to map the triplets correctly, so most triplets that already satisfy the margin contribute nothing to the gradient. To speed up convergence and stabilize the training procedure, we propose a new hard-triplet mining strategy that samples more informative hard triplets in each episode. Specifically, triplets are randomly selected in each episode as described above; we then check whether the sampled triplets satisfy the margin. Triplets that already meet the margin are removed, and network training proceeds with the remaining triplets.
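The mining step above reduces to a simple filter over the sampled triplets; a minimal sketch (our own illustration, hypothetical names) is:

```python
import numpy as np

def filter_informative_triplets(triplets, alpha=0.2):
    """Keep only triplets that still violate the margin of Eq. 6.
    Triplets already satisfying d_pos + alpha < d_neg contribute zero
    gradient and are dropped; training proceeds on the remainder.
    triplets: list of (anchor, positive, negative) embedding arrays."""
    kept = []
    for a, p, n in triplets:
        d_pos = np.sum((a - p) ** 2)
        d_neg = np.sum((a - n) ** 2)
        if d_pos + alpha >= d_neg:   # margin violated: still informative
            kept.append((a, p, n))
    return kept

a, p = np.array([0.0, 0.0]), np.array([0.1, 0.0])
easy = (a, p, np.array([2.0, 0.0]))   # negative already far: dropped
hard = (a, p, np.array([0.2, 0.0]))   # negative too close: kept
batch = filter_informative_triplets([easy, hard])  # keeps only `hard`
```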
+
+# 4 Experiments
+
+Extensive experiments have been conducted to evaluate and compare the proposed method for few-shot classification on three challenging few-shot learning benchmark datasets: miniImageNet [39], tieredImageNet [29] and Fewshot-CIFAR100 (FC100) [24]. All experiments are implemented in PyTorch and run on NVIDIA 2080ti GPUs.
+
+# 4.1 Datasets
+
+miniImageNet is the most popular few-shot learning benchmark, proposed by [39] and derived from the original ILSVRC-12 dataset [30]. It contains 100 randomly sampled categories, each with 600 images of size $84 \times 84$ pixels. tieredImageNet [29] is a larger subset of ILSVRC-12 [30], with 608 classes and 779,165 images in total. The classes in tieredImageNet are grouped into 34 categories corresponding to higher-level nodes in the ImageNet hierarchy curated by humans [2]. Each hierarchical category contains 10 to 20 classes, divided into 20 training (351 classes), 6 validation (97 classes) and 8 test (160 classes) categories. Fewshot-CIFAR100 (FC100) is based on the popular object classification dataset CIFAR100 [14]; Oreshkin et al. [24] offer a more challenging class split of CIFAR100 for few-shot learning. FC100 groups the 100 classes into 20 superclasses: the training set has 60 classes belonging to 12 superclasses, while the validation and test sets each consist of 20 classes belonging to 5 superclasses.
+
+# 4.2 Implementation Details
+
+Following the general few-shot learning experiment settings [34], [37], we conducted 5-way 5-shot and 5-way 1-shot classifications. The Adam optimizer is exploited with an initial learning rate of 0.001. The total training episodes on miniImageNet, tieredImageNet and FC100 are 600,000, 1,000,000 and 1,000,000, respectively. The learning rate is dropped by $10\%$ every 100,000 episodes or when the loss enters a plateau. The weight decay is set to 0.0003. We report the mean accuracy $(\%)$ over 600 randomly generated episodes from the test set.
+
+# 4.3 Results Evaluation
+
+Comparison with the baseline model. Using the training/testing data split and the procedure described in Section 3, the baseline in Table 1, Table 2 and Table 3 evaluates a model with the modified DeepCaps but without the attentive prototype. The baseline accuracy is $75.21 \pm 0.43\%$ , $78.41 \pm 0.34\%$ and $57.3 \pm 0.8\%$ in the 5-way 5-shot setting on miniImageNet, tieredImageNet and FC100, respectively. Our baseline results are on a par with those reported in [37], [34]. As shown in Table 1, Table 2 and Table 3, using the attentive prototype strategy in model training with the improved triplet-like loss, our method significantly improves the accuracy on all three datasets. There are clear improvements of approximately $+4.96\%$ (from $75.21\%$ to $80.17\%$ ), $+4.83\%$ (from $78.41\%$ to $83.24\%$ ) and $+2.5\%$ (from $57.3\%$ to $59.8\%$ ) under the 5-way 5-shot setting on miniImageNet, tieredImageNet and FC100, respectively. These results indicate that the proposed approach is tolerant to large intra- and inter-class variations and produces marked improvements over the baseline.
+
+Comparison with the state-of-the-art methods. We also compare our method with some state-of-the-art methods on miniImageNet, tieredImageNet in Table 1 and Table 2, respectively. On miniImageNet, we achieve a 5-way
+
+| Few-shot learning method | 5-Way 1-Shot | 5-Way 5-Shot |
| --- | --- | --- |
| Matching Networks [39] | 43.56 ± 0.84 | 55.31±0.73 |
| MAML [5] | 48.70±1.84 | 63.11±0.92 |
| Relation Net [37] | 50.44±0.82 | 65.32±0.70 |
| REPTILE [23] | 49.97±0.32 | 65.99±0.58 |
| Prototypical Net [34] | 49.42±0.78 | 68.20±0.66 |
| Predict Params [26] | 59.60±0.41 | 73.74 ± 0.19 |
| LwoF [8] | 60.06±0.14 | 76.39 ± 0.11 |
| TADAM [24] | 58.50±0.30 | 76.70±0.30 |
| EGNN [12] | - | 66.85 |
| EGNN+Transduction [12] | - | 76.37 |
| CTM [18] | 62.05±0.55 | 78.63±0.06 |
| wDAE-GNN [9] | 62.96±0.15 | 78.85±0.10 |
| MetaOptNet-SVM-trainval [16] | 64.09±0.62 | 80.00±0.45 |
| CTM, data augment [18] | 64.12±0.82 | 80.51±0.13 |
| Baseline | 59.71±0.35 | 75.21±0.43 |
| Ours | 63.23±0.26 | 80.17±0.33 |
| Ours, data augment | 66.43±0.26 | 82.13±0.21 |
+
+Table 1. Few-shot classification accuracies (%) on miniImageNet.
+
+1-shot accuracy of $63.23 \pm 0.26\%$ and a 5-way 5-shot accuracy of $80.17 \pm 0.33\%$ with the proposed method, which is highly competitive with the state-of-the-art. On tieredImageNet, we reach a 5-way 1-shot accuracy of $65.53 \pm 0.21\%$ and a 5-way 5-shot accuracy of $83.24 \pm 0.18\%$ , which is also very competitive. The previous best results were produced by introducing a Category Traversal Module [18] together with data augmentation, which can be inserted as a plug-and-play module into most metric-learning based few-shot learners. We further investigate whether data augmentation also helps our model. Training a version of our model with basic data augmentation improves the 5-way 5-shot accuracy to $82.13 \pm 0.21\%$ on miniImageNet and to $86.35 \pm 0.41\%$ on tieredImageNet.
+
+For the FC100 dataset, our proposed method is superior to all the other methods [5], [24], [36] in accuracy. These comparisons consistently confirm the competitiveness of the proposed method for few-shot image classification. In terms of size and computational cost, the proposed model trained on miniImageNet has only 7.22 million parameters, while the ResNet-18 used in the existing SOTA approach has 33.16 million. We also tested both models' inference time: ResNet-18 takes $3.65\,\mathrm{ms}$ for a $64 \times 64 \times 3$ image, while our model takes only $1.67\,\mathrm{ms}$ . In summary, our proposed attentive prototype learning scheme improves over previous methods, mainly due to the better embedding space provided by the capsule network and the attentive prototyping scheme. The importance value used to weight the support set instances is completely dependent on the affinity
+
+| Few-shot learning method | 5-Way 1-Shot | 5-Way 5-Shot |
| --- | --- | --- |
| MAML [5] | 51.67±1.81 | 70.30±0.08 |
| Meta-SGD [19], reported by [31] | 62.95±0.03 | 79.34±0.06 |
| LEO [31] | 66.33±0.05 | 81.44±0.09 |
| Relation Net [37] | 54.48±0.93 | 71.32±0.78 |
| Prototypical Net [34] | 53.31±0.89 | 72.69±0.74 |
| EGNN [12] | - | 70.98 |
| EGNN+Transduction [12] | - | 80.15 |
| CTM [18] | 64.78±0.11 | 81.05±0.52 |
| MetaOptNet-SVM-trainval [16] | 65.81±0.74 | 81.75±0.53 |
| CTM, data augmentation [18] | 68.41±0.39 | 84.28±1.73 |
| Baseline | 63.25±0.31 | 78.41±0.34 |
| Ours | 65.53±0.21 | 83.24±0.18 |
| Ours, data augmentation | 69.87±0.32 | 86.35±0.41 |
+
+relationship between the two feature points from the support set and the query. The importance weighting values vary exponentially, with larger values for nearby pairs of feature points and smaller values for distant pairs. This confirms that feature points from the support set that are nearer to the query feature point are given more attention.
+
+Table 2. Few-shot classification accuracies (%) on tieredImageNet.
+
+| Few-shot learning method | 5-Way 1-Shot | 5-Way 5-Shot | 5-Way 10-Shot |
| --- | --- | --- | --- |
| MAML [5] | 38.1±1.7 | 50.4±1.0 | 56.2±0.8 |
| TADAM [24] | 40.1±0.4 | 56.1±0.4 | 61.6±0.5 |
| MTL [36] | 45.1±1.8 | 57.6±0.9 | 63.4±0.8 |
| Baseline | 44.2±1.3 | 57.3±0.8 | 62.8±0.6 |
| Ours | 47.5±0.9 | 59.8±1.0 | 65.4±0.5 |
+
+Table 3. Few-shot classification accuracies (%) on the FC100 dataset.
+
+Ablation study: To verify the effectiveness of components in the proposed method, we conducted ablation experiments on the miniImageNet and tieredImageNet datasets. First, to investigate the contribution of the designed attentive prototype method, we compare the performance of the proposed method with vanilla prototypical networks [34]. Then, we verify the effectiveness of our proposed feature embedding module by embedding it into the metric-based algorithm Relation Net [37]. Table 4 summarizes the performance of the different variants of our method.
+
+1) Attentive prototype: In vanilla prototypical networks [34], the prototypes are defined as the averages of the embedded features of each class in the support set.
+
+| Few-shot learning method | miniImageNet 5-Way 5-Shot | miniImageNet 10-Way 5-Shot | tieredImageNet 5-Way 5-Shot | tieredImageNet 10-Way 5-Shot |
| --- | --- | --- | --- | --- |
| Prototypical Net [34] | 68.20 | - | 72.69 | - |
| Ours (average mechanism) | 76.32 | 58.41 | 80.31 | 62.17 |
| Ours (attentive prototype) | 80.17 | 63.12 | 83.24 | 66.33 |
| Relation Net [37] | 65.32 | - | 71.32 | - |
| Relation Net [37] (our implementation) | 80.91 | 64.34 | 83.98 | 67.86 |
+
+Table 4. Ablation study on the attentive prototype and embedding module.
+
+Such a simple class-wise feature treats all instances equally. Our attentive prototype scheme is a better replacement. A variant of DeepCaps with the improved triplet-like loss is applied to learn the feature embedding, instead of a shallow CNN. To further verify the effectiveness of our attentive prototype, we also compared against average-based prototypes created from our embedding framework. The experimental results on miniImageNet and tieredImageNet are summarized in Table 4. It can be observed that the attentive prototype gains an increase of approximately $3\%$ to $4\%$ over the average mechanism. This shows that attentive prototypes can be more 'typical' than the original average vectors, as they give different weights to different instances.
+
+Fig. 3. The t-SNE visualization [20] of the improved feature embeddings learnt by our proposed approach, comparing Relation Net with the improved Relation Net on miniImageNet and tieredImageNet, under (a) the 5-way 5-shot setting and (b) the 10-way 5-shot setting.
+
+2) Embedding module: The embedding is switched from the four convolutional blocks in Relation Net [37] to the modified DeepCaps model, and the supervision loss is changed to the improved triplet-like loss. Table 4 shows the results of these improvements over Relation Net. We find that the improved Relation Net exceeds the original model by approximately $+10\%$ , showing the ability of the proposed capsule network-based embedding network to improve the performance of the metric-based method. Fig. 3 visualizes the feature distributions using t-SNE [20] for the features computed in the 5-way 5-shot and 10-way 5-shot settings. As can be clearly observed, the improved Relation Net model produces more compact and separable clusters, indicating that the features are more discriminative for the task, which we attribute to the design of the embedding module.
+3) Improved Triplet-like loss: To help analyze our model and show the benefit of the improved triplet-like loss, we compare several settings: Setting-1: the baseline model (modified DeepCaps); Setting-2: using the attentive prototype strategy in model training; Setting-3: Setting-2 plus the improved triplet-like loss to make the feature comparison more effective. With the help of the improved triplet-like loss, we observed an improvement
+
+| Few-shot learning method | 5-Way 1-Shot | 5-Way 5-Shot |
+| --- | --- | --- |
+| Setting-1 | 59.71±0.35 | 75.21±0.43 |
+| Setting-2 | 61.76±0.12 | 78.45±0.23 |
+| Setting-3 | 63.23±0.26 | 80.17±0.33 |
+
+Table 5. Few-shot classification accuracies (%) on miniImageNet.
+
+of approximately $+1.5\%$, as shown in Table 5, making the learnt feature embedding more discriminative for samples from different classes.
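The exact form of the improved triplet-like loss is defined in the method section of the paper; as a reference point, the standard triplet margin loss [33] that it builds on can be sketched as follows (the margin value here is a hypothetical placeholder).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embeddings: pull the anchor
    toward the positive and push it at least `margin` farther from
    the negative; the loss is zero once the constraint is satisfied."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(d_ap - d_an + margin, 0.0)
```

Minimizing this objective over many triplets is what makes embeddings of different classes comparable by distance, which is the property the ablation above measures.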
+
+# 5 Conclusion
+
+In this paper, we proposed a new few-shot learning scheme aiming to improve the metric learning-based prototypical network. Our proposed scheme has the following novel characteristics: (1) a new embedding space created by a capsule network, which is unique in its capability to encode the relative spatial relationship between features. The network is trained with a novel triple-loss designed to learn the embedding space; (2) an effective and robust non-parameter classification scheme, named attentive prototypes, to replace the simple feature average for prototypes. The instances from the support set are taken into account to generate prototypes, with their importance being calculated by the reconstruction error for a given query. Experimental results showed that the proposed method outperforms the other few-shot learning algorithms on all of the miniImageNet, tieredImageNet and FC100 datasets.
+
+# References
+
+1. Arik, S.O., Pfister, T.: Attention-based prototypical learning towards interpretable, confident and robust deep neural networks. arXiv preprint arXiv:1902.06292 (2019)
+2. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 248-255 (2009)
+3. Fe-Fei, L., et al.: A bayesian approach to unsupervised one-shot learning of object categories. In: IEEE International Conference on Computer Vision (ICCV). pp. 1134-1141 (2003)
+4. Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence (TPAMI) 28(4), 594-611 (2006)
+5. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning (ICML). pp. 1126–1135 (2017)
+6. Fukunaga, K.: Introduction to statistical pattern recognition. Elsevier (2013)
+7. Gao, T., Han, X., Liu, Z., Sun, M.: Hybrid attention-based prototypical networks for noisy few-shot relation classification. In: AAAI Conference on Artificial Intelligence (AAAI) (2019)
+8. Gidaris, S., Komodakis, N.: Dynamic few-shot visual learning without forgetting. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4367-4375 (2018)
+9. Gidaris, S., Komodakis, N.: Generating classification weights with gnn denoising autoencoders for few-shot learning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+10. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737 (2017)
+11. Hinton, G.E., Krizhevsky, A., Wang, S.D.: Transforming auto-encoders. In: International Conference on Artificial Neural Networks. pp. 44-51. Springer (2011)
+12. Kim, J., Kim, T., Kim, S., Yoo, C.D.: Edge-labeling graph neural network for few-shot learning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 11-20 (2019)
+13. Kosiorek, A.R., Sabour, S., Teh, Y.W., Hinton, G.E.: Stacked capsule autoencoders. arXiv preprint arXiv:1906.06818 (2019)
+14. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
+15. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems (NIPS). pp. 1097-1105 (2012)
+16. Lee, K., Maji, S., Ravichandran, A., Soatto, S.: Meta-learning with differentiable convex optimization. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+17. Lenssen, J.E., Fey, M., Libuschewski, P.: Group equivariant capsule networks. In: Advances in neural information processing systems (NIPS). pp. 8844-8853 (2018)
+18. Li, H., Eigen, D., Dodge, S., Zeiler, M., Wang, X.: Finding task-relevant features for few-shot learning by category traversal. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1-10 (2019)
+19. Li, Z., Zhou, F., Chen, F., Li, H.: Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835 (2017)
+
+20. Maaten, L.v.d., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(Nov), 2579-2605 (2008)
+21. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. arXiv preprint arXiv:1707.03141 (2017)
+22. Mitani, Y., Hamamoto, Y.: A local mean-based nonparametric classifier. Pattern Recognition Letters 27(10), 1151-1159 (2006)
+23. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 (2018)
+24. Oreshkin, B., López, P.R., Lacoste, A.: Tadam: Task dependent adaptive metric for improved few-shot learning. In: Advances in neural information processing systems (NIPS). pp. 721-731 (2018)
+25. Peng, H., Li, J., Gong, Q., Wang, S., He, L., Li, B., Wang, L., Yu, P.S.: Hierarchical taxonomy-aware and attentional graph capsule rcnns for large-scale multi-label text classification. arXiv preprint arXiv:1906.04898 (2019)
+26. Qiao, S., Liu, C., Shen, W., Yuille, A.L.: Few-shot image recognition by predicting parameters from activations. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7229-7238 (2018)
+27. Rajasegaran, J., Jayasundara, V., Jayasekara, S., Jayasekara, H., Seneviratne, S., Rodrigo, R.: Deepcaps: Going deeper with capsule networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10725-10733 (2019)
+28. Rawlinson, D., Ahmed, A., Kowadlo, G.: Sparse unsupervised capsules generalize better. arXiv preprint arXiv:1804.06094 (2018)
+29. Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. In: International Conference on Learning Representations (ICLR) (2018)
+30. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International journal of computer vision 115(3), 211-252 (2015)
+31. Rusu, A.A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., Hadsell, R.: Meta-learning with latent embedding optimization. In: International Conference on Learning Representations (ICLR) (2018)
+32. Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules. In: Advances in neural information processing systems (NIPS). pp. 3856-3866 (2017)
+33. Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 815-823 (2015)
+34. Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: Advances in neural information processing systems (NIPS). pp. 4077-4087 (2017)
+35. Sohn, K.: Improved deep metric learning with multi-class n-pair loss objective. In: Advances in neural information processing systems (NIPS). pp. 1857-1865 (2016)
+36. Sun, Q., Liu, Y., Chua, T.S., Schiele, B.: Meta-transfer learning for few-shot learning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 403-412 (2019)
+37. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1199-1208 (2018)
+38. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1-9 (2015)
+
+39. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Advances in neural information processing systems (NIPS). pp. 3630-3638 (2016)
+40. Xia, C., Zhang, C., Yan, X., Chang, Y., Yu, P.S.: Zero-shot user intent detection via capsule neural networks. arXiv preprint arXiv:1809.00385 (2018)
+41. Zhang, W., Tang, P., Zhao, L.: Remote sensing image scene classification using cnn-capsnet. Remote Sensing 11(5), 494 (2019)
+42. Zhang, X., Zhao, S.G.: Cervical image classification based on image segmentation preprocessing and a capsnet network model. International Journal of Imaging Systems and Technology 29(1), 19-28 (2019)
+43. Zhao, Y., Birdal, T., Deng, H., Tombari, F.: 3d point capsule networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1009-1018 (2019)
\ No newline at end of file
diff --git a/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/images.zip b/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1ea552228680b9443fbaa9134e012c078d04b4fe
--- /dev/null
+++ b/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dae8191f50a35162e3af405f5168f9ffdd6832da54440e26ab2544129a6b6f7e
+size 387461
diff --git a/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/layout.json b/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4f1906616d1899b449451f732150d9e8c69dad9
--- /dev/null
+++ b/attentiveprototypefewshotlearningwithcapsulenetworkbasedembedding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8de5a961efbebb4fe41d6dd558ac8fd6658f904284c3696a606bdd84aaaa146
+size 401994
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_content_list.json b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..67e37d36b76796a8a03a36583a9ea50e196802e4
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fbb0c6dc196a74f5f168ecfd22bb0de38e7ece15cf61bcc984d5ff19efefd5e
+size 80213
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_model.json b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..96418a7b231a0f9ed02fe86bb238a4be23400840
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c9994e003129088d3c51ac06dc43ac536bcb9d21cf5a9155dc1170c87dcfa6be
+size 96508
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_origin.pdf b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1de1a9ff56ed3fddb116fb70ab6710578c978d45
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/21259404-2e0f-4acc-87e8-7507783d6bb5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5fb629240b076b9dbd335d6469f069eb46874f3efdca043d8dcd136ecfd6f68
+size 3589267
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/full.md b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..037735238c249bd746bb94590a44f15d95ec3d52
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/full.md
@@ -0,0 +1,284 @@
+# Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation
+
+Taekyung Kim[0000-0001-7401-098X] and Changick Kim
+
+Korea Advanced Institute of Science and Technology, Daejeon, South Korea {tkkim93, changick}@kaist.ac.kr
+
+Abstract. Although unsupervised domain adaptation methods have been widely adopted across several computer vision tasks, it is more desirable if we can exploit a few labeled data from new domains encountered in a real application. The novel setting of the semi-supervised domain adaptation (SSDA) problem shares the challenges with the domain adaptation problem and the semi-supervised learning problem. However, a recent study shows that conventional domain adaptation and semi-supervised learning methods often result in less effective or negative transfer in the SSDA problem. In order to interpret the observation and address the SSDA problem, in this paper, we raise the intra-domain discrepancy issue within the target domain, which has never been discussed so far. Then, we demonstrate that addressing the intra-domain discrepancy leads to the ultimate goal of the SSDA problem. We propose an SSDA framework that aims to align features via alleviation of the intra-domain discrepancy. Our framework mainly consists of three schemes, i.e., attraction, perturbation, and exploration. First, the attraction scheme globally minimizes the intra-domain discrepancy within the target domain. Second, we demonstrate the incompatibility of the conventional adversarial perturbation methods with SSDA. Then, we present a domain adaptive adversarial perturbation scheme, which perturbs the given target samples in a way that reduces the intra-domain discrepancy. Finally, the exploration scheme locally aligns features in a class-wise manner complementary to the attraction scheme by selectively aligning unlabeled target features complementary to the perturbation scheme. We conduct extensive experiments on domain adaptation benchmark datasets such as DomainNet, Office-Home, and Office. Our method achieves state-of-the-art performances on all datasets.
+
+Keywords: Domain Adaptation $\cdot$ Semi-supervised Learning
+
+# 1 Introduction
+
+Despite the promising success of deep neural networks in several computer vision tasks, these networks often show performance degradation when tested beyond the training environment. One way to mitigate this problem is to collect
+
+
+Fig. 1: Conceptual descriptions of the feature alignment approaches. The top row describes the different feature alignment behaviors between the UDA and SSDA problem. Supervision on labeled target samples attracts the corresponding features and their neighborhood toward the source feature cluster, which causes the intra-domain discrepancy. The bottom row describes the proposed attraction, perturbation, and exploration schemes, which are explained in Section 4 in detail.
+
+large amounts of data from the new domain and train the network. Such heavy demands on data annotation cause great interest in domain adaptation and semi-supervised learning on deep neural networks. However, most recent studies on deep domain adaptation are focused on unsupervised approaches, and deep semi-supervised learning is still concentrated on addressing the identical domain problem. Though these methods can be directly applied to the semi-supervised domain adaptation (SSDA) problem only with an additional supervision on the extra labeled samples, a recent study [26] reveals that unsupervised domain adaptation (UDA) methods and semi-supervised learning (SSL) methods often show less effective or even worse performances than just training on the labeled source and target samples in the SSDA problem.
+
+In this paper, we introduce a new concept called intra-domain discrepancy to analyze the failure of the UDA and SSL methods and address the SSDA problems. Intra-domain discrepancy is a chronic issue in the SSDA problem that occurs during labeled sample supervision, but has never been discussed so far. In the UDA problem, supervision on the labeled source samples does not critically affect the target domain distribution in general but implicitly attracts some alignable target features similar to the source features. Thus, aligning the source and target domains by reducing their inter-domain discrepancy is reasonable. However, in the SSDA problem, supervision on the labeled target samples enforces the corresponding features and their neighborhood to be attracted toward source feature clusters, which guarantees partial alignment between two domain distributions. Besides, unlabeled target samples that less correlate with the labeled target samples are less affected by the supervision and eventually remain unaligned (Top row in Fig. 1). Thus, the target domain distribution is separated into an aligned target subdistribution and an unaligned target subdistribution, causing the intra-domain discrepancy within the target domain. The failure of the UDA and SSL methods will be discussed in Section 3 in detail.
+
+Motivated by this insight, we propose an SSDA framework that aligns cross-domain features by addressing the intra-domain discrepancy within the target domain. Our framework focuses on enhancing the discriminability on the unaligned target samples and modulating the class prototypes, the representative features of each class. It consists of three schemes, i.e., attraction, perturbation, and exploration, as shown in Fig. 1. First, the attraction scheme aligns the unaligned target subdistribution to the aligned target subdistribution through intra-domain discrepancy minimization. Second, we discuss why conventional adversarial perturbation methods are ineffective in the SSDA problem. Unlike these approaches, our perturbation scheme perturbs target subdistributions into their intermediate region to propagate labels to the unaligned target subdistribution. Note that our perturbation scheme does not ruin the already aligned target features since it additionally generates perturbed features temporarily for regularization. Finally, the exploration scheme locally modulates the prototypes in a class-aware manner complementary to the attraction and perturbation schemes. We perform extensive experiments to evaluate the proposed method on domain adaptation datasets such as DomainNet, Office-Home, and Office, achieving state-of-the-art performance. We also analyze our method in detail.
+
+Our contributions can be summarized as follows:
+
+- We introduce the intra-domain discrepancy issue within the target domain in the SSDA problem.
+- We propose an SSDA framework that addresses the intra-domain discrepancy issues via three schemes, i.e., attraction, perturbation, and exploration.
+
+- The attraction scheme aligns the unaligned target subdistribution to the aligned target subdistribution through the intra-domain discrepancy minimization.
+- The perturbation scheme perturbs target subdistributions into their intermediate region to propagate labels to the unaligned target subdistribution.
+- The exploration scheme locally modulates the prototypes in a class-aware manner complementary to the attraction and perturbation schemes.
+
+- We conduct extensive experiments on DomainNet, Office-Home, and Office. We achieve state-of-the-art performances among various methods, including vanilla deep neural networks, UDA, SSL, and SSDA methods.
+
+# 2 Related Work
+
+# 2.1 Unsupervised Domain Adaptation
+
+The recent success of deep learning-based approaches and the following enormous demand for massive amounts of data attract great interest in domain adaptation
+
+(DA). Even in the midst of significant interest, most recent works are focused on unsupervised domain adaptation (UDA). Recent UDA methods can be categorized into three approaches. The first approach is to reduce the cross-domain divergence. This can be achieved by minimizing an estimated domain divergence such as MMD [9] or by assimilating feature distributions through adversarial confusion using a domain classifier [7, 17-19]. The second approach is to translate the appearance of one domain into the opposite domain so that the translated data can be regarded as sampled from the opposite domain [13, 12, 10]. The last approach is to consider the source domain as partially labeled data and utilize semi-supervised learning schemes. For example, Drop-to-Adapt [16] exploits a virtual adversarial perturbation scheme [21]. These approaches have recently been widely adopted across several computer vision tasks beyond image classification, such as object detection [4, 28, 14], semantic segmentation [30, 11], person re-identification [35], and even depth estimation [34].
+
+# 2.2 Semi-supervised Learning
+
+Similar to domain adaptation (DA), semi-supervised learning (SSL) has also attracted great attention as a way to overcome the shortage of labeled data. The difference between DA and SSL is that domain adaptation assumes data sampled from two distributions with significant domain discrepancy, while SSL assumes labeled and unlabeled data sampled from the identical distribution. With the rise of deep learning approaches, several methods have been recently proposed for deep SSL. Some works add data augmentation and regularize the model by enforcing consistency between the given and the augmented data [15, 29]. Miyato et al. [21] extend this scheme by adversarially searching for the small, bounded perturbation that leads the model to its most unstable state. Laine and Aila [15] ensemble the predictions of the model by averaging them throughout the training phase, while Tarvainen and Valpola [29] ensemble the parameters of the model itself. A few other works use self-training schemes with a memory module or a regularization through convergence speed [3, 5]. Recently, Wang et al. [32] propose an augmentation distribution alignment approach to explicitly address the empirical distribution mismatch problem in semi-supervised learning.
+
+# 2.3 Semi-supervised Domain Adaptation
+
+Semi-supervised domain adaptation (SSDA) is an important task that bridges the well-organized source distribution toward the target distribution via partially labeled target samples, though only a few works have explored it so far [1, 6, 33, 26]. Donahue et al. [6] address the domain discrepancy by optimizing auxiliary constraints on the labeled data. Yao et al. [33] learn a subspace that can reduce the data distribution mismatch. Ao et al. [1] estimate the soft label of a given labeled target sample with the source model and interpolate it with the hard label for target model supervision. Saito et al. [26] minimize the distance between the unlabeled target samples and the class prototypes through minimax training on
+
+
+Fig. 2: (a)-(f) The t-SNE visualization of the source and target features in (a) the UDA problem and (b) the SSDA problem with three target labels for each class. We adopted the Real to Sketch scenario of the DomainNet dataset on the AlexNet backbone and visualized partial classes. (c) and (e) visualize the source distributions of the UDA and SSDA problems, and (d) and (f) visualize the target distributions of the UDA and SSDA problems, respectively. Even only three labeled target samples per class can attract their neighborhoods to this degree and separate the target domain into the aligned and unaligned subdistributions.
+
+entropy. However, none of these methods discuss and mitigate the intra-domain discrepancy issue in the SSDA problem. Different from previous works, we address the SSDA problem with a new perspective of the intra-domain discrepancy.
+
+# 3 Intra-domain Discrepancy
+
+Intra-domain discrepancy of a domain is an internal distribution gap among subdistributions within the domain. Though we demonstrate the intra-domain discrepancy issue in the semi-supervised domain adaptation (SSDA) problem, such subdistributions can also appear in the unsupervised domain adaptation (UDA) problem, since there usually exist target samples alignable to the source clusters. However, since each domain generally has a unique correlation among its samples, the target domain distribution is not easily separated into distinctive subdistributions, so the intra-domain discrepancy remains insignificant. Thus, the conventional inter-domain discrepancy minimization approaches have been effectively applied to the UDA problem. In contrast, in the
+
+SSDA problem, supervision on the labeled target samples deterministically forces the target domain to be separated into the aligned subdistribution and the unaligned subdistribution. More specifically, as shown in the top row of Fig. 1, the presence of the label pulls the target samples and their neighborhoods toward the source feature cluster of each corresponding label. Besides, the unlabeled target samples which correlate less with the given labeled target samples remain located distant from the source feature clusters, producing inaccurate and even meaningless inference results. Figure 2 demonstrates the existence of the intra-domain discrepancy within the target domain. Though only three target labels per class are given, a significant number of target samples are aligned, while wrongly predicted target samples (red circle in Fig. 2 (f)) are still located far from the source domain.
+
+The presence of the intra-domain discrepancy makes the conventional domain adaptation methods less suitable for the SSDA problem. The ultimate goal of domain adaptation is to enhance the discriminability on the target domain, and in this case most of the error occurs on the unaligned target subdistribution. Thus, solving SSDA problems depends on how well the unaligned subdistribution is aligned. However, common domain adaptation methods focus on reducing the inter-domain discrepancy between the source and target domains regardless of the intra-domain discrepancy within the target domain. Since the existence of the aligned target subdistribution causes underestimation of the inter-domain discrepancy, the inter-domain discrepancy reduction approaches work less effectively in the SSDA problem. Moreover, since the aligned target subdistribution is aligned in a class-aware manner, such approaches can even have a negative effect.
+
+Similarly, conventional semi-supervised learning (SSL) methods also suffer from the intra-domain discrepancy issue in the SSDA problem. It stems from the different assumptions between the SSDA and SSL problems. Since the SSL problem assumes to sample labeled and unlabeled data from the identical distribution, SSL methods mainly focus on propagating the correct labels to their neighbors. In contrast, SSDA problems assume that there is a significant distribution divergence between the source and target domains, and that labeled samples are dominated by the source domain. Since correctly predicted target samples are highly aligned with the source distribution, whereas incorrectly predicted target samples are located far from them, we can no longer assume that these target samples share the same distribution. Thus, the SSL methods only propagate errors within the wrongly predicted subdistribution, and the propagation is also meaningless in the correctly predicted subdistribution due to the rich distribution of the source domain. Motivated by the interpretation, we propose a framework that addresses the intra-domain discrepancy.
+
+# 4 Method
+
+# 4.1 Problem Formulation
+
+Let us denote the set of source domain samples by $\mathcal{D}_s = \left\{(\mathbf{x}_i^s, y_i^s)\right\}_{i=1}^{m_s}$. For the target domain, $\mathcal{D}_t = \left\{(\mathbf{x}_i^t, y_i^t)\right\}_{i=1}^{m_t}$ and $\mathcal{D}_u = \left\{\mathbf{x}_i^u\right\}_{i=1}^{m_u}$ denote the sets of
+
+
+
+
+
+
+Fig. 3: An overall framework of the proposed method. Our framework consists of the feature extractor, trainable class prototypes, supervision module, and each module for the proposed schemes. The class prototypes and all normalized features of the samples are embedded in the same spherical feature space.
+
+
+
+
+
+labeled and unlabeled target samples, respectively. SSDA aims to enhance the target domain discriminability through training on $\mathcal{D}_s$ , $\mathcal{D}_t$ , and $\mathcal{D}_u$ .
+
+# 4.2 Spherical Feature Space with Prototypes
+
+When aligning feature distributions, it is crucial to determine which feature space to adapt: even if the same method is used, performance may not improve depending on the feature space it is applied to. Thus, we adopt the similarity-based prototypical classifier of [2] to prepare a feature space suitable for adaptation. Briefly, the prototypical classifier takes a normalized feature as input and compares its similarities to all class-wise prototypes, which reduces intra-class variation as a result. For classifier training, we use a cross-entropy loss as our classification loss to train an embedding function $f_{\theta}(\cdot)$ with parameters $\theta$ and the prototypes $\mathbf{p}_k$ ($k = 1, \dots, K$) on the source domain samples and the labeled target samples:
+
+$$
+\begin{aligned} \mathcal{L}_{cls} &= \mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}_s\cup\mathcal{D}_t}\left[-\log p(y\mid\mathbf{x},\mathbf{p})\right] \\ &= \mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}_s\cup\mathcal{D}_t}\left[-\log\left(\frac{\exp\left(\mathbf{p}_y\cdot f_\theta(\mathbf{x})/T\right)}{\sum_{i=1}^{K}\exp\left(\mathbf{p}_i\cdot f_\theta(\mathbf{x})/T\right)}\right)\right]. \end{aligned} \tag{1}
+$$
+
+While the prototypical classifier reduces the intra-class variation of the labeled sample features, the proposed schemes focus on aligning the distributions of the normalized features on the spherical feature space.
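Eq. (1) can be sketched in NumPy as follows. This is an illustrative mock-up rather than the paper's implementation: the temperature value is a hypothetical placeholder, and the prototypes are treated as plain arrays instead of trainable parameters.

```python
import numpy as np

def normalize(v):
    """Project features or prototypes onto the unit sphere."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def prototype_log_prob(x, prototypes, T=0.05):
    """Log posterior of Eq. (1): softmax over similarities between a
    normalized feature x and the K normalized prototypes, scaled by T."""
    logits = prototypes @ x / T       # shape: (K,)
    logits = logits - logits.max()    # numerical stability
    return logits - np.log(np.exp(logits).sum())

def classification_loss(x, y, prototypes, T=0.05):
    """Cross-entropy term of Eq. (1) for one labeled sample (x, y)."""
    return -prototype_log_prob(normalize(x), normalize(prototypes), T)[y]
```

Because both features and prototypes are normalized, the dot product is a cosine similarity, so minimizing the loss pulls each labeled feature toward its class prototype on the sphere.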
+
+# 4.3 Attraction Scheme
+
+The attraction scheme aims to globally align the unaligned target subdistribution to the aligned target subdistribution at the subdistribution level by minimizing the estimated intra-domain discrepancy. The scheme measures the feature distribution divergence between the target subdistributions to estimate the intra-domain discrepancy within the target domain. However, the limited number of labeled target samples is not sufficient to represent the feature distribution of the aligned target subdistribution. Thus, motivated by the observation that the features of the aligned target subdistribution are highly aligned with those of the source domain in a class-aware manner, we instead use the combined distribution of the labeled source and target data. For the empirical estimation of the intra-domain discrepancy, we adopt Maximum Mean Discrepancy (MMD) [9], a kernel two-sample test that measures the distribution difference. We exploit a mixture $k(\cdot,\cdot)$ of Gaussian Radial Basis Function (RBF) kernels with multiple kernel widths $\sigma_{i}$ $(i = 1,\dots,N)$. Thus, the estimated intra-domain discrepancy on the spherical feature space can be written as:
+
+$$
+\begin{aligned} d\left(\mathcal{D}_s\cup\mathcal{D}_t,\mathcal{D}_u\right) &= \mathbb{E}_{(\mathbf{x},y),(\mathbf{x}',y')\in\mathcal{D}_s\cup\mathcal{D}_t}\left[k\left(f_\theta(\mathbf{x}),f_\theta(\mathbf{x}')\right)\right] \\ &\quad+ \mathbb{E}_{(\mathbf{z},w),(\mathbf{z}',w')\in\mathcal{D}_u}\left[k\left(f_\theta(\mathbf{z}),f_\theta(\mathbf{z}')\right)\right] \\ &\quad- 2\,\mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}_s\cup\mathcal{D}_t,\,(\mathbf{z},w)\in\mathcal{D}_u}\left[k\left(f_\theta(\mathbf{x}),f_\theta(\mathbf{z})\right)\right], \end{aligned} \tag{2}
+$$
+
+where $\mathbf{x}'$ , $\mathbf{z}$ , and $\mathbf{z}'$ denote samples, and $y'$ , $w$ , and $w'$ denote the corresponding labels. Since our attraction scheme directly minimizes the intra-domain discrepancy, the attraction loss can be written as:
+
+$$
+L _ {a} = d \left(\mathcal {D} _ {s} \cup \mathcal {D} _ {t}, \mathcal {D} _ {u}\right). \tag {3}
+$$
+
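As a concrete illustration, the multi-kernel MMD estimate of Eq. (2) can be sketched as follows (a minimal NumPy version; the kernel widths and the use of precomputed feature matrices are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def mix_rbf_kernel(a, b, sigmas=(1.0, 2.0, 4.0)):
    """Mixture of Gaussian RBF kernels with multiple widths sigma_i."""
    # pairwise squared distances between rows of a (n, d) and b (m, d)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

def mmd2(labeled, unlabeled, sigmas=(1.0, 2.0, 4.0)):
    """Biased empirical estimate of Eq. (2): squared MMD between features of
    the labeled data (D_s U D_t) and the unlabeled target data (D_u)."""
    k_ll = mix_rbf_kernel(labeled, labeled, sigmas).mean()
    k_uu = mix_rbf_kernel(unlabeled, unlabeled, sigmas).mean()
    k_lu = mix_rbf_kernel(labeled, unlabeled, sigmas).mean()
    return k_ll + k_uu - 2.0 * k_lu
```

Minimizing this quantity as the attraction loss $\mathcal{L}_a$ pulls the unaligned target features toward the combined labeled distribution.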
+# 4.4 Perturbation scheme
+
+Conventional adversarial perturbation, one of the semi-supervised learning (SSL) approaches, turns out to be ineffective or even to cause negative transfer in the SSDA problem. In the same context as discussed in Section 3, the labeled target samples and their neighborhoods are aligned to the source domain while being separated from the inaccurate target samples, causing the intra-domain discrepancy. The aligned features are already confident thanks to the rich information of the source domain, while the unaligned features can only propagate inaccurate predictions. Thus, perturbing either the aligned or the unaligned target subdistribution alone is less meaningful.
+
+Unlike common adversarial perturbation approaches, our scheme perturbs the target subdistributions toward their intermediate region for 1) accurate prediction propagation from the aligned subdistribution to the unaligned subdistribution and 2) modulation of the class prototypes toward that region. Such perturbation can be achieved by searching for the direction of anisotropically high entropy of the target features, since the element-wise entropy increases as a feature moves away from the prototypes, while a feature far from the prototypes can be attracted toward them. Note that the perturbation scheme does not ruin the already aligned subdistribution, since it temporarily generates additional perturbed copies of the aligned features for regularization. To achieve this, we first perturb the class prototypes in the entropy-maximizing direction. Then, we optimize a small, bounded perturbation toward the perturbed prototypes. Finally, we regularize the predictions of the perturbed data and the given data through the Kullback-Leibler divergence. In summary, the perturbation loss can be formulated as follows:
+
+$$
+\begin{aligned} H_{\mathbf{p}}(\mathbf{x}) &= -\sum_{i = 1}^{K} p(y = i | \mathbf{x}, \mathbf{p}) \log p(y = i | \mathbf{x}, \mathbf{p}) \\ r_{\mathbf{x}} &= \operatorname*{argmin}_{\| r \| < \epsilon} \max_{\mathbf{p}} H_{\mathbf{p}}(\mathbf{x} + r) \\ \mathcal{L}_{p} &= \mathbb{E}_{\mathbf{x} \in \mathcal{D}_{u}}\left[ \sum_{i = 1}^{K} D_{KL}\left[ p(y = i | \mathbf{x}, \mathbf{p}), p(y = i | \mathbf{x} + r_{\mathbf{x}}, \mathbf{p}) \right] \right] \\ &\quad + \mathbb{E}_{(\mathbf{z}, w) \in \mathcal{D}_{t}}\left[ \sum_{i = 1}^{K} D_{KL}\left[ p(y = i | \mathbf{z}, \mathbf{p}), p(y = i | \mathbf{z} + r_{\mathbf{z}}, \mathbf{p}) \right] \right], \tag{4} \end{aligned}
+$$
+
+where $H_{\mathbf{p}}(\cdot)$ is an element-wise entropy function defined upon similarities between the given feature and the prototypes, $\mathbf{x}$ and $\mathbf{z}$ denote samples, and $y$ denotes the corresponding label.
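For intuition, the prototype-conditioned prediction and the element-wise entropy $H_{\mathbf{p}}(\cdot)$ can be sketched as follows (a NumPy toy; the cosine-similarity classifier and the temperature value are illustrative assumptions):

```python
import numpy as np

def proto_probs(feat, protos, T=0.05):
    """Class probabilities p(y | x, p) from similarities to prototypes."""
    f = feat / np.linalg.norm(feat)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = p @ f / T                      # cosine similarity / temperature
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

def entropy_p(feat, protos, T=0.05):
    """Element-wise entropy H_p(x): low near a prototype, high in between."""
    q = proto_probs(feat, protos, T)
    return -(q * np.log(q + 1e-12)).sum()
```

A feature sitting on a prototype has near-zero entropy, while a feature midway between two prototypes has entropy near $\log 2$; the perturbation $r_{\mathbf{x}}$ moves features along this entropy landscape toward the intermediate region.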
+
+# 4.5 Exploration scheme
+
+The exploration scheme locally modulates the prototypes in a class-aware manner, complementary to the attraction scheme, while selectively aligning the unlabeled target features via a suitable criterion, complementary to the perturbation scheme. Though the attraction scheme globally aligns the target subdistributions on the feature space regardless of the prototypes, it does not explicitly enforce prototype modulation, which can be complemented by local, class-aware alignment. On the other hand, since the perturbation scheme regularizes the perturbed features of anisotropically high entropy, the entropy of a perturbed feature and its neighborhood gradually becomes low. The exploration scheme aligns these features so that their entropy becomes isotropic, and thus the aligned features can be perturbed farther toward the unaligned subdistribution. In practice, we selectively collect unlabeled target data whose element-wise entropy is less than a certain threshold, then apply a cross-entropy loss with the class of the nearest prototype. The objective function of the exploration scheme can be written as follows:
+
+$$
+M _ {\epsilon} = \left\{\mathbf {x} \in \mathcal {D} _ {u} | H _ {\mathbf {p}} (\mathbf {x}) < \epsilon \right\}
+$$
+
+$$
+\hat{y}_{\mathbf{x}} = \operatorname*{argmax}_{i \in \{1, \dots, K\}} p(y = i | \mathbf{x}, \mathbf{p}) \tag{5}
+$$
+
+$$
+\mathcal{L}_{e} = \mathbb{E}_{\mathcal{D}_{u}}\left[ -\mathbf{1}_{M_{\epsilon}}(\mathbf{x}) \log p(y = \hat{y}_{\mathbf{x}} | \mathbf{x}, \mathbf{p}) \right].
+$$
+
+Table 1: Classification accuracy (\%) on the DomainNet dataset on the AlexNet and ResNet-34 backbone networks. The performance comparisons were done for seven scenarios with one or three labeled target samples for each class.
+
+| Net | Method | R to C 1-shot | R to C 3-shot | R to P 1-shot | R to P 3-shot | P to C 1-shot | P to C 3-shot | C to S 1-shot | C to S 3-shot | S to P 1-shot | S to P 3-shot | R to S 1-shot | R to S 3-shot | P to R 1-shot | P to R 3-shot | MEAN 1-shot | MEAN 3-shot |
+| AlexNet | S+T | 43.3 | 47.1 | 42.4 | 45.0 | 40.1 | 44.9 | 33.6 | 36.4 | 35.7 | 38.4 | 29.1 | 33.3 | 55.8 | 58.7 | 40.0 | 43.4 |
+| | DANN | 43.3 | 46.1 | 41.6 | 43.8 | 39.1 | 41.0 | 35.9 | 36.5 | 36.9 | 38.9 | 32.5 | 33.4 | 53.6 | 57.3 | 40.4 | 42.4 |
+| | ADR | 43.1 | 46.2 | 41.4 | 44.4 | 39.3 | 43.6 | 32.8 | 36.4 | 33.1 | 38.9 | 29.1 | 32.4 | 55.9 | 57.3 | 39.2 | 42.7 |
+| | CDAN | 46.3 | 46.8 | 45.7 | 45.0 | 38.3 | 42.3 | 27.5 | 29.5 | 30.2 | 33.7 | 28.8 | 31.3 | 56.7 | 58.7 | 39.1 | 41.0 |
+| | ENT | 37.0 | 45.5 | 35.6 | 42.6 | 26.8 | 40.4 | 18.9 | 31.1 | 15.1 | 29.6 | 18.0 | 29.6 | 52.2 | 60.0 | 29.1 | 39.8 |
+| | MME | 48.9 | 55.6 | 48.0 | 49.0 | 46.7 | 51.7 | 36.3 | 39.4 | 39.4 | 43.0 | 33.3 | 37.9 | 56.8 | 60.7 | 44.2 | 48.2 |
+| | SagNet | 45.8 | 49.1 | 45.6 | 46.7 | 42.7 | 46.3 | 36.1 | 39.4 | 37.1 | 39.8 | 34.2 | 37.5 | 54.0 | 57.0 | 42.2 | 45.1 |
+| | Ours | 47.7 | 54.6 | 49.0 | 50.5 | 46.9 | 52.1 | 38.5 | 42.6 | 38.5 | 42.2 | 33.8 | 38.7 | 57.5 | 61.4 | 44.6 | 48.9 |
+| ResNet | S+T | 55.6 | 60.0 | 60.6 | 62.2 | 56.8 | 59.4 | 50.8 | 55.0 | 56.0 | 59.5 | 46.3 | 50.1 | 71.8 | 73.9 | 56.9 | 60.0 |
+| | DANN | 58.2 | 59.8 | 61.4 | 62.8 | 56.3 | 59.6 | 52.8 | 55.4 | 57.4 | 59.9 | 52.2 | 54.9 | 70.3 | 72.2 | 58.4 | 60.7 |
+| | ADR | 57.1 | 60.7 | 61.3 | 61.9 | 57.0 | 60.7 | 51.0 | 54.4 | 56.0 | 59.9 | 49.0 | 51.1 | 72.0 | 74.2 | 57.6 | 60.4 |
+| | CDAN | 65.0 | 69.0 | 64.9 | 67.3 | 63.7 | 68.4 | 53.1 | 57.8 | 63.4 | 65.3 | 54.5 | 59.0 | 73.2 | 78.5 | 62.5 | 66.5 |
+| | ENT | 65.2 | 71.0 | 65.9 | 69.2 | 65.4 | 71.1 | 54.6 | 60.0 | 59.7 | 62.1 | 52.1 | 61.1 | 75.0 | 78.6 | 62.6 | 67.6 |
+| | MME | 70.0 | 72.2 | 67.7 | 69.7 | 69.0 | 71.7 | 56.3 | 61.8 | 64.8 | 66.8 | 61.0 | 61.9 | 76.1 | 78.5 | 66.4 | 68.9 |
+| | SagNet | 59.4 | 62.0 | 61.9 | 62.9 | 59.1 | 61.5 | 54.0 | 57.1 | 56.6 | 59.0 | 49.7 | 54.4 | 72.2 | 73.4 | 59.0 | 61.5 |
+| | Ours | 70.4 | 76.6 | 70.8 | 72.1 | 72.9 | 76.7 | 56.7 | 63.1 | 64.5 | 66.1 | 63.0 | 67.8 | 76.6 | 79.4 | 67.6 | 71.7 |
+
+where $M_{\epsilon}$ is the set of unlabeled target data with entropy less than a hyperparameter $\epsilon$ , and $\mathbf{1}_{M_{\epsilon}}(\cdot)$ is an indicator function that selects the alignable samples from the given unlabeled target samples.
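Putting Eq. (5) together, the exploration loss can be sketched as follows (a minimal NumPy version operating on precomputed prototype-based probabilities; the threshold value in the test is illustrative):

```python
import numpy as np

def exploration_loss(probs, eps):
    """probs: (N, K) predictions p(y | x, p) for unlabeled target samples.
    Samples whose entropy is below eps form M_eps; they are pseudo-labeled
    with the nearest prototype's class and trained with cross-entropy."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)          # H_p per sample
    mask = (ent < eps).astype(float)                            # indicator 1_{M_eps}
    yhat = probs.argmax(axis=1)                                 # nearest prototype
    nll = -np.log(probs[np.arange(len(probs)), yhat] + 1e-12)   # cross-entropy
    return (mask * nll).mean()
```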
+
+# 4.6 Overall framework and training objective
+
+The overall training objective of our method is the weighted sum of the supervision loss, the attraction loss, the perturbation loss, and the exploration loss. The optimization problem can be formulated as follows:
+
+$$
+\min _ {\mathbf {p}, \theta} \mathcal {L} _ {c l s} + \alpha \mathcal {L} _ {a} + \beta \mathcal {L} _ {e} + \gamma \mathcal {L} _ {p}. \tag {6}
+$$
+
+We integrated all the schemes into one framework, as shown in Fig. 3.
+
+# 5 Experiments
+
+# 5.1 Experimental Setup
+
+Datasets. DomainNet [24] is a recently released large-scale domain adaptation benchmark that contains six domains and approximately 0.6 million images across 345 classes. Office-Home [31] and Office [25] are standard benchmarks for domain adaptation. Office-Home consists of the Art, Clipart, Product, and Real-world domains with 65 classes. Office consists of the Amazon, Webcam, and DSLR domains with 31 classes.
+
+Evaluation tasks. For a fair comparison with the state-of-the-art SSDA method [26], we performed experiments on 7 adaptation scenarios on the four domains
+
+Table 2: Classification accuracy (%) on the Office-Home dataset with the AlexNet and ResNet-34 backbone networks. The performance comparisons were done for a total of 12 scenarios on three-shot setting.
+
+| Net | Method | R to C | R to P | R to A | P to R | P to C | P to A | A to P | A to C | A to R | C to R | C to A | C to P | MEAN |
+| AlexNet | S+T | 44.6 | 66.7 | 47.7 | 57.8 | 44.4 | 36.1 | 57.6 | 38.8 | 57.0 | 54.3 | 37.5 | 57.9 | 50.0 |
+| | DANN | 47.2 | 66.7 | 46.6 | 58.1 | 44.4 | 36.1 | 57.2 | 39.8 | 56.6 | 54.3 | 38.6 | 57.9 | 50.3 |
+| | ADR | 45.0 | 66.2 | 46.9 | 57.3 | 38.9 | 36.3 | 57.5 | 40.0 | 57.8 | 53.4 | 37.3 | 57.7 | 49.5 |
+| | CDAN | 41.8 | 69.9 | 43.2 | 53.6 | 35.8 | 32.0 | 56.3 | 34.5 | 53.5 | 49.3 | 27.9 | 56.2 | 46.2 |
+| | ENT | 44.9 | 70.4 | 47.1 | 60.3 | 41.2 | 34.6 | 60.7 | 37.8 | 60.5 | 58.0 | 31.8 | 63.4 | 50.9 |
+| | MME | 51.2 | 73.0 | 50.3 | 61.6 | 47.2 | 40.7 | 63.9 | 43.8 | 61.4 | 59.9 | 44.7 | 64.7 | 55.2 |
+| | Ours | 51.9 | 74.6 | 51.2 | 61.6 | 47.9 | 42.1 | 65.5 | 44.5 | 60.9 | 58.1 | 44.3 | 64.8 | 55.6 |
+| ResNet | S+T | 55.7 | 80.8 | 67.8 | 73.1 | 53.8 | 63.5 | 73.1 | 54.0 | 74.2 | 68.3 | 57.6 | 72.3 | 66.2 |
+| | DANN | 57.3 | 75.5 | 65.2 | 69.2 | 51.8 | 56.6 | 68.3 | 54.7 | 73.8 | 67.1 | 55.1 | 67.5 | 63.5 |
+| | CDAN | 61.4 | 80.7 | 67.1 | 76.8 | 58.1 | 61.4 | 74.1 | 59.2 | 74.1 | 70.7 | 60.5 | 74.5 | 68.2 |
+| | ENT | 62.6 | 85.7 | 70.2 | 79.9 | 60.5 | 63.9 | 79.5 | 61.3 | 79.1 | 76.4 | 64.7 | 79.1 | 71.9 |
+| | MME | 64.6 | 85.5 | 71.3 | 80.1 | 64.6 | 65.5 | 79.0 | 63.6 | 79.7 | 76.6 | 67.2 | 79.3 | 73.1 |
+| | Ours | 66.4 | 86.2 | 73.4 | 82.0 | 65.2 | 66.1 | 81.1 | 63.9 | 80.2 | 76.8 | 66.6 | 79.9 | 74.0 |
+
+(Real, Clipart, Painting, Sketch) with 126 classes for DomainNet, 12 adaptation scenarios on all the domains for Office-Home, and two challenging adaptation scenarios on Office. One or three labeled target samples are given per class in these scenarios. Additionally, we compared the performance with 5, 10, and 20 labeled target samples per class.
+
+Implementation details. We adopted AlexNet and ResNet-34 as the backbone networks. Every mini-batch consists of the same number of labeled source and target samples, together with twice as many unlabeled target samples. We used 32 and 24 samples per split of the mini-batch for AlexNet and ResNet-34, respectively. We used the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.01, a momentum of 0.9, and a weight decay of 0.0005. All implementations were done in PyTorch [23] on a single GeForce Titan XP GPU.
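The described mini-batch composition (equal labeled source and target splits, plus twice as many unlabeled target samples) can be sketched as follows (index sampling only; the dataset sizes used in the test are placeholders):

```python
import numpy as np

def sample_minibatch(n_src, n_tgt_labeled, n_tgt_unlabeled, split=32, rng=None):
    """Index arrays for one mini-batch: `split` labeled source samples,
    `split` labeled target samples, and 2 * `split` unlabeled target samples
    (split = 32 for AlexNet, 24 for ResNet-34)."""
    if rng is None:
        rng = np.random.default_rng()
    src = rng.choice(n_src, size=split, replace=False)
    tgt = rng.choice(n_tgt_labeled, size=split, replace=True)   # few labels: repeats allowed
    unl = rng.choice(n_tgt_unlabeled, size=2 * split, replace=False)
    return src, tgt, unl
```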
+
+Baselines. We compared our method with semi-supervised domain adaptation (SSDA), unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and no-adaptation methods. More specifically, the baselines are MME [26], SagNet [22], DANN [7], ADR [27], CDAN [18], ENT [8], and a non-adapted model. For the UDA methods (DANN, ADR, and CDAN), the labeled target samples were supervised during training. S+T is a vanilla model trained on all labeled samples. DANN confuses the cross-domain distributions through adversarial learning. ADR adopts a dropout scheme that modifies the decision boundary for feature alignment. CDAN adversarially aligns features by fooling a conditional domain discriminator. ENT is an SSL method that minimizes the entropy of the unlabeled target data. For a fair comparison, all the methods use the same backbone architecture as our method.
+
+Table 3: Classification accuracy (%) on the Office dataset with three-shot setting.
+
+| Net | Method | W to A | D to A | MEAN |
+| AlexNet | S+T | 61.2 | 62.4 | 61.8 |
+| | DANN | 64.4 | 65.2 | 64.8 |
+| | ADR | 61.2 | 61.4 | 61.3 |
+| | CDAN | 60.3 | 61.4 | 60.8 |
+| | ENT | 64.0 | 66.2 | 65.1 |
+| | MME | 67.3 | 67.8 | 67.6 |
+| | Ours | 67.6 | 69.0 | 68.3 |
+
+
+Fig. 4: Trend in classification accuracy (\%) with varying number of labeled target samples per class, for (a) AlexNet and (b) ResNet-34 backbones. The experiments are conducted on the Real to Clipart scenario of the DomainNet dataset.
+
+# 5.2 Results
+
+Performance Comparison on DomainNet. We summarize the classification accuracies for the 7 scenarios of the DomainNet dataset in Table 1. On average, our method outperformed the best-performing baseline by $2.8\%$ in the three-shot setting and $1.2\%$ in the one-shot setting on ResNet-34, and by $0.7\%$ and $0.4\%$ , respectively, on AlexNet. Moreover, our method performed best in most cases, except for a few adaptation tasks. On the other hand, though UDA methods like DANN and ADR performed slightly better than $\mathrm{S + T}$ when only one labeled target sample per class is given, these methods become less effective or even cause negative transfer as the number of labeled target samples increases. This verifies our claim that conventional domain adaptation methods are often less beneficial than the partial alignment effect from the given target labels. ENT showed significant improvement on ResNet-34, while showing degenerate performance on AlexNet. Moreover, its performance gap widened as the number of labeled target samples increased, which is discussed in more detail in Section 5.3.
+
+Performance Comparison on Office-Home and Office. The comparison results of our method with the baselines on the Office-Home dataset are reported in Table 2. Our method outperformed all the baselines regardless of the backbone network on average. Similar to DomainNet, our method showed the best
+
+Table 4: Ablation study results of the proposed schemes on the DomainNet dataset with the three-shot setting.
+
+| Net | Method | Attract | Explore | Perturb | R to C | R to P | P to C | C to S | S to P | R to S | P to R | MEAN |
+| AlexNet | S+T | | | | 47.1 | 45.0 | 44.9 | 36.4 | 38.4 | 33.3 | 58.7 | 43.4 |
+| | DANN | | | | 46.1 | 43.8 | 41.0 | 36.5 | 38.9 | 33.4 | 57.3 | 42.4 |
+| | MMD | | | | 47.9 | 45.5 | 44.6 | 38.1 | 38.4 | 35.5 | 56.6 | 43.8 |
+| | VAT | | | | 46.1 | 43.8 | 44.3 | 35.6 | 38.2 | 31.8 | 57.7 | 42.5 |
+| | Ours | ✓ | | | 50.2 | 46.2 | 47.5 | 40.8 | 41.3 | 37.2 | 59.8 | 46.1 |
+| | Ours | ✓ | ✓ | | 53.9 | 49.8 | 50.5 | 42.0 | 41.9 | 38.0 | 60.7 | 48.3 |
+| | Ours | | | ✓ | 57.2 | 47.5 | 54.1 | 38.8 | 39.7 | 38.5 | 59.2 | 47.9 |
+| | Ours | ✓ | ✓ | ✓ | 54.6 | 50.5 | 52.1 | 42.6 | 42.2 | 38.7 | 61.4 | 48.9 |
+
+performance in most of the scenarios. While DANN performed at least on par with $\mathrm{S + T}$ on the AlexNet backbone, it showed degenerate performance on the ResNet-34 backbone. This demonstrates that the capacity of the backbone network affects the degree of target label exploitation, and DANN was less effective than the exploitation achieved by ResNet-34. The considerable results of ENT are reasonable, since the three-shot setting provides approximately $5\sim 10\%$ of the target labels for training, a ratio that is quite rich from the perspective of the SSL problem. Table 3 shows the performance comparison on the Office dataset, where our method also outperformed the other baselines.
+
+# 5.3 Analysis
+
+Performance comparison with varying numbers of target labels. We compared the behavior of the methods by varying the number of labeled target samples per class from 0 to 20. As shown in Fig. 4, our method showed superior performance for a large number of target labels, even in the scenario where it worked less effectively in the one-shot or three-shot setting. Moreover, it outperformed the other baselines throughout all the cases on the ResNet-34 backbone. On the other hand, ENT also significantly improved in accuracy for a large number of target labels; it even outperformed the state-of-the-art SSDA methods when more than twenty (AlexNet) or five (ResNet-34) target labels are given per class. This is reasonable, since increasing the labeled target sample ratio assimilates the SSDA problem to the SSL problem, which suits SSL methods.
+
+Ablation study on the proposed schemes. We conducted an ablation study of our schemes. To verify their effectiveness, we additionally evaluated DANN, MMD, and VAT [21] on the DomainNet dataset. As shown in Table 4, DANN and MMD rarely worked or even caused negative transfer, while our attraction scheme showed meaningful improvement on average. This verifies that conventional UDA methods that focus on reducing the inter-domain discrepancy suffer from the intra-domain discrepancy issue, and it can be addressed by the intra-domain
+
+
+Fig. 5: (a)-(f) The t-SNE visualization of the feature alignment progress through our method during the training phase, at iterations (a) 0, (b) 1k, (c) 2k, (d) 5k, (e) 30k, and (f) 70k.
+
+discrepancy minimization. Moreover, VAT also caused a degenerative effect, while our perturbation scheme significantly enhanced the performance. This demonstrates that conventional adversarial perturbation methods are not suitable for the SSDA problem, and that our perturbation scheme addresses it by modulating the perturbation direction toward the intermediate region of the target subdistributions. The exploration scheme also worked complementarily to the other schemes.
+
+Convergence analysis. To analyze the convergence of our method, we depict the t-SNE visualization [20] of the cross-domain features over the training progress in Fig. 5. We conducted the experiment on the Real to Sketch scenario of DomainNet. All 126 classes were used for the experiment, but we chose 20 classes for better visualization; note that we did not specifically pick the classes with the top-20 accuracies. Figure 5 (a) clearly shows the initial domain divergence between the source and the target domain. Moreover, the feature depictions of the early stages often show many unaligned source and target clusters. As training goes on, our method aligns the corresponding source and target clusters and finally obtains well-clustered target features, as shown in Fig. 5 (f).
+
+# 6 Conclusions
+
+In this work, we demonstrated the intra-domain discrepancy issue of the target domain in the SSDA problem. Motivated by this, we proposed an SSDA framework that aligns the cross-domain feature distributions by addressing the intra-domain discrepancy through the attraction, exploration, and perturbation schemes. The attraction scheme directly minimizes the estimated intra-domain discrepancy within the target domain. The perturbation scheme perturbs the well-aligned and unaligned target features into the intermediate region of the target subdistributions. The exploration scheme locally aligns features in a selective, class-wise manner complementary to the attraction and perturbation schemes. The experiments conducted on the DomainNet, Office-Home, and Office datasets validate the effectiveness of our method, which outperformed the conventional UDA and SSL methods on all the datasets.
+
+# References
+
+1. Ao, S., Li, X., Ling, C.X.: Fast generalized distillation for semi-supervised domain adaptation. In: AAAI (2017)
+2. Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y.C., Huang, J.B.: A closer look at few-shot classification. In: International Conference on Learning Representations (ICLR) (2019)
+3. Chen, Y., Zhu, X., Gong, S.: Semi-supervised deep learning with memory. In: Proceedings of the European Conference on Computer Vision (ECCV) (2018)
+4. Chen, Y., Li, W., Sakaridis, C., Dai, D., Van Gool, L.: Domain adaptive faster r-cnn for object detection in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+5. Cicek, S., Fawzi, A., Soatto, S.: SaaS: Speed as a supervisor for semi-supervised learning. In: Proceedings of the European Conference on Computer Vision (ECCV) (2018)
+6. Donahue, J., Hoffman, J., Rodner, E., Saenko, K., Darrell, T.: Semi-supervised domain adaptation with instance constraints. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2013)
+7. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: Proceedings of the International Conference on Machine Learning (ICML) (2015)
+8. Grandvalet, Y., Bengio, Y.: Semi-supervised learning by entropy minimization. In: Advances in Neural Information Processing Systems (NeurIPS) (2005)
+9. Gretton, A., Borgwardt, K.M., Rasch, M.J., Scholkopf, B., Smola, A.: A kernel two-sample test. Journal of Machine Learning Research (JMLR) (2012)
+10. Hoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., Efros, A.A., Darrell, T.: Cycada: Cycle-consistent adversarial domain adaptation. In: Proceedings of the International conference on machine learning (ICML) (2018)
+11. Hong, W., Wang, Z., Yang, M., Yuan, J.: Conditional generative adversarial network for structured domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+12. Hu, L., Kan, M., Shan, S., Chen, X.: Duplex generative adversarial network for unsupervised domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+13. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+14. Kim, T., Jeong, M., Kim, S., Choi, S., Kim, C.: Diversify and match: A domain adaptive representation learning paradigm for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+15. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. International Conference on Learning Representations (ICLR) (2017)
+16. Lee, S., Kim, D., Kim, N., Jeong, S.G.: Drop to adapt: Learning discriminative features for unsupervised domain adaptation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
+17. Long, M., Cao, Y., Wang, J., Jordan, M.: Learning transferable features with deep adaptation networks. In: Proceedings of the International Conference on Machine Learning (ICML) (2015)
+
+18. Long, M., Cao, Z., Wang, J., Jordan, M.I.: Conditional adversarial domain adaptation. In: Advances in Neural Information Processing Systems (NeurIPS) (2018)
+19. Long, M., Zhu, H., Wang, J., Jordan, M.I.: Unsupervised domain adaptation with residual transfer networks. In: Advances in Neural Information Processing Systems (NeurIPS) (2016)
+20. van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR) (2008)
+21. Miyato, T., Maeda, S.i., Ishii, S., Koyama, M.: Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2018)
+22. Nam, H., Lee, H., Park, J., Yoon, W., Yoo, D.: Reducing domain gap via style-agnostic networks (2019)
+23. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch (2017)
+24. Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., Wang, B.: Moment matching for multi-source domain adaptation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
+25. Saenko, K., Kulis, B., Fritz, M., Darrell, T.: Adapting visual category models to new domains. In: Proceedings of the European Conference on Computer Vision (ECCV) (2010)
+26. Saito, K., Kim, D., Sclaroff, S., Darrell, T., Saenko, K.: Semi-supervised domain adaptation via minimax entropy. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
+27. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Adversarial dropout regularization. In: Proceedings of the International Conference on Learning Representations (ICLR) (2018)
+28. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
+29. Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Advances in neural information processing systems (NeurIPS) (2017)
+30. Tsai, Y.H., Hung, W.C., Schulter, S., Sohn, K., Yang, M.H., Chandraker, M.: Learning to adapt structured output space for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
+31. Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
+32. Wang, Q., Li, W., Gool, L.V.: Semi-supervised learning by augmented distribution alignment. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
+33. Yao, T., Pan, Y., Ngo, C.W., Li, H., Mei, T.: Semi-supervised domain adaptation with subspace learning for visual recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
+34. Zheng, C., Cham, T.J., Cai, J.: T2Net: Synthetic-to-realistic translation for solving single-image depth estimation tasks. In: Proceedings of the European Conference on Computer Vision (ECCV) (2018)
+35. Zheng, Z., Yang, X., Yu, Z., Zheng, L., Yang, Y., Kautz, J.: Joint discriminative and generative learning for person re-identification. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
\ No newline at end of file
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/images.zip b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fd187ff9dfdc4bb706f3def8953d995e97b593cf
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92ae27b72579a45c218149ec375d6babe74fa4485f9af1c5bb1206c25509e785
+size 625566
diff --git a/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/layout.json b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0637a2d89a92fdde10f389749a4b5525122b926a
--- /dev/null
+++ b/attractperturbandexplorelearningafeaturealignmentnetworkforsemisuperviseddomainadaptation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:369583f2d1e752ad51eb5e181f770ddd88bb3312e207c770845d9709ef679551
+size 337059
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_content_list.json b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e3e13d163c529750ca5d4cab022911abb0fe237f
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f942efb3cf4202f7474211ef96e041ec3fc2c987596e76d5c5376b4abbd8cb7f
+size 83495
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_model.json b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d2775017e8738a5e5c61cc4bb54c1e2f600773af
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72ec9907848228f060e26c353e13c06d06ffc142c8bddbe45c1b8a51d0435010
+size 108280
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_origin.pdf b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5bbb7a465d8b265a9ab113edce7ae643a751ec4a
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/b93e6872-285f-4272-814b-334f5c49043e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22629348830dd5dafa8338717b3dc202566d5511caebc25fcddf21710364790d
+size 2862639
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/full.md b/attributionalrobustnesstrainingusinginputgradientspatialalignment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..01b2eceb570dd34ddd86f10a5c7d7f1ae886c75b
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/full.md
@@ -0,0 +1,335 @@
+# Attributional Robustness Training using Input-Gradient Spatial Alignment
+
+Mayank Singh1*, Nupur Kumari1*, Puneet Mangla2, Abhishek Sinha1*, Vineeth N Balasubramanian2, and Balaji Krishnamurthy1
+
+1 Media and Data Science Research Lab, Adobe, India {msingh,nupkumar}@adobe.com, abhishek.sinha94@gmail.com, kbalaji@adobe.com
+
+2 IIT Hyderabad, India {cs17btech11029,vineethnb}@iith.ac.in
+
+Abstract. Interpretability is an emerging area of research in trustworthy machine learning. Safe deployment of machine learning systems mandates that the prediction and its explanation be reliable and robust. Recently, it has been shown that explanations can be manipulated easily by adding visually imperceptible perturbations to the input while keeping the model's prediction intact. In this work, we study the problem of attributional robustness (i.e., models having robust explanations) by showing an upper bound for attributional vulnerability in terms of the spatial correlation between the input image and its explanation map. We propose a training methodology that learns robust features by minimizing this upper bound using a soft-margin triplet loss. Our methodology of robust attribution training $(ART)$ achieves a new state-of-the-art attributional robustness measure by a margin of $\approx 6 - 18\%$ on several standard datasets, i.e., SVHN, CIFAR-10, and GTSRB. We further show the utility of the proposed robust training technique $(ART)$ in the downstream task of weakly supervised object localization by achieving new state-of-the-art performance on the CUB-200 dataset.
+
+Keywords: Attributional robustness; Adversarial robustness; Explainable deep learning
+
+# 1 Introduction
+
+Attribution methods [9, 45, 51, 48, 47, 54, 46] are an increasingly popular class of explanation techniques that aim to highlight the relevant input features responsible for a model's prediction. These techniques are extensively used with deep learning models in risk-sensitive and safety-critical applications such as healthcare [4, 32, 56, 24], where they provide a human user with visual validation of the features used by the model for predictions. In computer-assisted diagnosis, [56] showed that predictions with attribution maps increased the accuracy of retina specialists
+
+
+Fig. 1: Illustration of targeted manipulation [12] of attribution maps on CUB-200 [61] using the target attribution of (a). Here, (b) Integrated Gradients [54], (c) GradCAM++ [9] and (d) GradSHAP [29] blocks show : Top (b), (c), (d) original image and its attribution map; Bottom (b), (c), (d) perturbed image and its attribution map.
+
+above that of an unassisted reader or the model alone. In [24], the authors improve the analysis of skin lesions by leveraging explanation maps of predictions.
+
+It has been recently demonstrated that one could construct targeted [12] and un-targeted perturbations [16, 10] that arbitrarily manipulate attribution maps without affecting the model's prediction. This issue further weakens the case for the safe application of machine learning algorithms. We show an illustrative example of attribution-based attacks on image classifiers over different attribution methods in Fig. 1. This vulnerability poses new challenges for attribution methods, as well as for robust training techniques. The intuition behind attributional robustness is that if two inputs are visually indistinguishable and receive the same model prediction, then their interpretation maps should also remain the same.
+
+As one of the first efforts, [10] recently proposed a training methodology that aims to obtain models having robust integrated gradient [54] attributions. In addition to being an early effort, the instability of this training methodology, as discussed in [10], limits its usability in the broader context of robust training in computer vision. In this paper, we build upon this work by obtaining an upper bound for attributional vulnerability as a function of the spatial correlation between the input image and its explanation map. Furthermore, we introduce a training technique that minimizes this upper bound to provide attributional robustness. In particular, we introduce a training methodology for attributional robustness that uses a soft-margin triplet loss to increase the spatial correlation of the input with its attribution map. The triplet loss considers the input image as the anchor, the gradient of the correct class logit with respect to the input as the positive, and the gradient of the incorrect class with the highest logit value with respect to the input as the negative. We show empirically how this choice results in the learning of robust and interpretable features that help in other downstream weakly supervised tasks.
+
+Existing related efforts in deep learning research are largely focused on robustness to adversarial perturbations [17, 55]: imperceptible perturbations which, when added to the input, drastically change the neural network's prediction. While adversarial robustness has been explored significantly in recent years, there has been limited progress on the front of attributional robustness, which we seek to highlight in this work. Our main contributions can be summarized as:
+
+- We tackle the problem of attributional vulnerability and provide an upper bound for it as a function of the spatial correlation between the input and its attribution map [48]. We then propose $ART$, a new training method that aims to minimize this bound to learn an attributionally robust model.
+- Our method outperforms prior work and achieves state-of-the-art attributional robustness on the Integrated Gradients [54] attribution method.
+- We show that the proposed methodology also induces robustness to adversarial perturbations and common perturbations [20] on standard vision datasets that is comparable to the state-of-the-art adversarial training technique [31].
+- We show the utility of $ART$ for other computer vision tasks such as weakly supervised object localization (WSOL) and segmentation. Specifically, $ART$ achieves state-of-the-art performance on the WSOL task on the CUB-200 [61] dataset.
+
+# 2 Related Work
+
+Our work is associated with various recent developments in the fields of explanation methods, robustness to input distribution shifts and weakly supervised object localization. We hence describe earlier efforts in these directions below.
+
+Visual Explanation Methods: Various explanation methods have been proposed that focus on producing post-hoc explanations for a model's decisions. A popular approach is to attribute the predictions to a set of input features [48, 52, 47, 54, 46, 6]. [69, 13] provide surveys of interpretation techniques. The class of explanation methods commonly referred to as attribution techniques can be broadly divided into three categories: gradient/back-propagation, propagation and perturbation based methods. Gradient-based methods attribute an importance score to each pixel using the derivative of a class score with respect to the input features [48, 47, 54]. Propagation-based techniques [6, 46, 67] leverage layer-wise propagation of feature importance to calculate the attribution maps. Perturbation-based interpretation methods generate attribution maps by examining the change in the model's prediction when the input image is perturbed [65, 40, 41]. In this work, we primarily report results on the attribution method of Integrated Gradients ($IG$) [54], which satisfies desirable axiomatic properties and was also used in the previous work [10].
+
+Robustness of Attribution Maps: Recently, there have been a few efforts [70, 16, 12, 10, 3] that have explored the robustness of attribution maps, which we call attributional robustness in this work. The authors of [16, 12, 70] study the robustness of a network's attribution maps and show that the attribution maps can be significantly manipulated via imperceptible input perturbations while preserving the classifier's prediction. Recently, Chen et al. [10] proposed a robust attribution training methodology, which is one of the first attempts at making an image classification model attributionally robust and is the current state of the art. The method minimizes the norm of the difference in Integrated Gradients [54] between an original and a perturbed image during training. In this work, we approach the problem from the different perspective of maintaining spatial alignment between an image and its saliency map.
+
+Adversarial Perturbation and Robustness: Adversarial attacks can be broadly categorized into two types: white-box [33, 31, 8, 62] and black-box attacks [22, 58, 2, 39]. Several proposed defense techniques have been shown to be ineffective against adaptive adversarial attacks [5, 28, 8, 7]. Adversarial training [18, 31, 50], a defense technique that continuously augments the data with adversarial examples during training, is largely considered the current state of the art for achieving adversarial robustness. [66] characterizes the trade-off between accuracy and robustness for classification problems and proposes a regularized adversarial training method. Prior works have also attempted to improve adversarial robustness using gradient regularization that minimizes the Frobenius norm of the Hessian of the classification loss with respect to the input [42, 34, 30] or the weights [23]. For a comprehensive review of the work in the area of adversarial examples, please refer to [63, 1].
+
+We show in our work that, in addition to providing attributional robustness, our proposed method helps achieve performance gains on the downstream task of WSOL. We hence briefly discuss earlier efforts on this task below.
+
+Weakly Supervised Object Localization (WSOL): The problem of WSOL aims to identify the location of an object in a scene using only image-level labels, without any location annotations. Richly labeled data is generally scarce, and its collection is expensive and time-consuming. Learning from weak supervision is hence promising, as it requires less richly annotated data and has the potential to scale. A common problem with most previous approaches is that the model only identifies the most discriminative part of the object rather than the complete object. For example, in the case of a bird, the model may rely on the beak region for classification rather than the entire bird's shape. For the WSOL task, ADL [11], the current state-of-the-art method, uses an attention-based dropout layer during training that encourages the classification model to also focus on less discriminative parts of the image. To obtain the bounding box from the model, ADL and similar techniques in this domain first extract attribution maps, generally CAM-based [71], for each image and then fit a bounding box as described in [71]. We now present our methodology.
+
+# 3 Attributional Robustness Training: Methodology
+
+Given an input image $x \in [0,1]^n$ with true label $y \in \{1\dots k\}$ , we consider a neural network model $f_{\theta}: \mathbb{R}^n \to \mathbb{R}^k$ with ReLU activation function that classifies $x$ into one of $k$ classes as $\arg \max f(x)_i$ where $i \in \{1\dots k\}$ . Here, $f(x)_i$ is the $i^{th}$ logit of $f(x)$ . An attribution map $A(x,f(x)_i): \mathbb{R}^n \to \mathbb{R}^n$ with respect to a given class $i$ assigns an importance score to each input pixel of $x$ based on its relevance to the model for predicting class $i$ .
+
+# 3.1 Attribution Manipulation
+
+It was shown recently [12, 16] that for standard models $f_{\theta}$ , it is possible to manipulate the attribution map $A(x,f(x)_y)$ (denoted as $A(x)$ for simplicity in the rest of the paper) with a visually imperceptible perturbation $\delta$ of the input by optimizing the following objective:
+
+$$
+\underset{\delta \in B_{\epsilon}}{\arg \max}\; D \left[ A(x + \delta, f(x + \delta)_{y}),\, A(x, f(x)_{y}) \right] \tag{1}
+$$
+
+$$
+\text{subject to } \arg \max (f(x)) = \arg \max (f(x + \delta)) = y
+$$
+
+where $B_{\epsilon}$ is an $l_{p}$ ball of radius $\epsilon$ (so that the perturbed input $x + \delta$ stays within an $\epsilon$-neighborhood of $x$) and $D$ is a dissimilarity function that measures the change between attribution maps. Such manipulation has been shown for various perturbation-based and gradient-based attribution methods.
+
+This vulnerability of neural network-based classification models suggests that the model relies for its prediction on features different from what humans perceive as important. The goal of attributional robustness is to mitigate this vulnerability and ensure that the attribution maps of two visually indistinguishable images are also nearly identical. In the next section, we propose a new training methodology for attributional robustness, motivated by the observation that, for robust models, feature importance in image space has a high spatial correlation with the input image [57, 15].
+
+# 3.2 Attributional Robustness Training (ART)
+
+Given an input image $x \in \mathbb{R}^n$ with ground truth label $y \in \{1 \dots k\}$ and a classification model $f_{\theta}$ , the gradient-based feature importance score is defined as $\nabla_x f(x)_i,\; i \in \{1 \dots k\}$ , and denoted as $g^{i}(x)$ in the rest of the paper. To achieve attributional robustness, we need to minimize the attributional vulnerability to attacks as defined in Equation 1. If $A$ is taken to be the gradient attribution method [48] and $D$ is a distance measure in some norm $\|\cdot\|$ , the attributional vulnerability can be formulated as the maximum possible change in $g^{y}(x)$ within an $\epsilon$-neighborhood of $x$ , i.e.
+
+$$
+\max_{\delta \in B_{\epsilon}} \left\| g^{y}(x + \delta) - g^{y}(x) \right\| \tag{2}
+$$
+
+We show that Equation 2 is upper bounded in terms of the maximum distance between $g^y (x + \delta)$ and $x + \delta$ over the $\epsilon$-neighbourhood of $x$ :
+
+$$
+\begin{aligned}
+\left\| g^{y}(x + \delta) - g^{y}(x) \right\| &= \left\| g^{y}(x + \delta) - (x + \delta) - \left( g^{y}(x) - x \right) + \delta \right\| \\
+&\leq \left\| g^{y}(x + \delta) - (x + \delta) \right\| + \left\| g^{y}(x) - x \right\| + \left\| \delta \right\| \\
+&\leq \left\| g^{y}(x + \delta) - (x + \delta) \right\| + \max_{\delta \in B_{\epsilon}} \left\| g^{y}(x + \delta) - (x + \delta) \right\| + \left\| \delta \right\|
+\end{aligned} \tag{3}
+$$
+
+Taking max on both sides:
+
+$$
+\max_{\delta \in B_{\epsilon}} \left\| g^{y}(x + \delta) - g^{y}(x) \right\| \leq 2 \max_{\delta \in B_{\epsilon}} \left\| g^{y}(x + \delta) - (x + \delta) \right\| + \epsilon \tag{4}
+$$
+
+
+Fig. 2: Block diagram summarizing our training technique for $ART$ . Dashed lines represent backward gradient flow, and bold lines denote the forward pass of the neural network.
+
+Leveraging the existing understanding [44, 21] that minimizing the distance between two quantities can benefit from a negative anchor, we use a triplet loss formulation as defined in Equation 5, with the image $x$ as the anchor, $g^{y}(x)$ as the positive sample and $g^{i^{*}}(x)$ as the negative sample. More details about the selection of the optimization objective in Equation 5 and the choice of the negative sample can be found in the supplementary section 1.1. Hence, to achieve attributional robustness, we propose a training technique, $ART$ , that encourages high spatial correlation between $g^{y}(x)$ and $x$ by optimizing $L_{attr}$ , a soft-margin triplet loss [21] on the cosine distance between $g^{i}(x)$ and $x$ , i.e.
+
+$$
+L_{attr}(x, y) = \log \left( 1 + \exp \left( - \left( d \left( g^{i^{*}}(x), x \right) - d \left( g^{y}(x), x \right) \right) \right) \right)
+$$
+
+$$
+\text{where } d \left( g^{i}(x), x \right) = 1 - \frac{g^{i}(x) \cdot x}{\left\| g^{i}(x) \right\|_{2} \left\| x \right\|_{2}}; \quad i^{*} = \underset{i \neq y}{\arg \max}\, f(x)_{i} \tag{5}
+$$
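As a concrete reference, the soft-margin triplet objective of Equation 5 can be written in a few lines. The sketch below is ours (NumPy, operating on flattened image and gradient vectors); in training, `g_pos` and `g_neg` would be the network gradients $g^{y}(x)$ and $g^{i^{*}}(x)$:

```python
import numpy as np

def cosine_distance(g, x):
    # d(g, x) = 1 - <g, x> / (||g||_2 ||x||_2), as in Equation 5
    return 1.0 - float(g @ x) / (np.linalg.norm(g) * np.linalg.norm(x))

def l_attr(x, g_pos, g_neg):
    # Soft-margin triplet loss with anchor x, positive g^y(x), negative g^{i*}(x):
    # L = log(1 + exp(-(d(g_neg, x) - d(g_pos, x))))
    gap = cosine_distance(g_neg, x) - cosine_distance(g_pos, x)
    return float(np.log1p(np.exp(-gap)))
```

When the true-class gradient aligns with $x$ better than the negative-class gradient, the gap is positive and the loss is small; the loss grows as the alignment reverses, which is exactly the pressure the training objective exerts.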
+
+Hence, the classification training objective for $ART$ methodology is:
+
+$$
+\underset{\theta}{\text{minimize}}\; \mathbb{E}_{(x, y)} \left[ L_{ce}(x + \delta, y) + \lambda L_{attr}(x + \delta, y) \right] \tag{6}
+$$
+
+$$
+\text{where } \delta = \underset{\|\delta\|_{\infty} < \epsilon}{\arg \max}\, L_{attr}(x + \delta, y)
+$$
+
+Here $L_{ce}$ is the standard cross-entropy loss. The optimization of $L_{attr}$ involves computing the gradient of $f(x)_i$ with respect to the input $x$ , which suffers from the problem of a vanishing second derivative in the case of ReLU activations, i.e. $\partial^2 f_i / \partial x^2 \approx 0$ . To alleviate this, following previous works [12, 10], we replace ReLU with softplus non-linearities while optimizing $L_{attr}$ , as softplus has a well-defined second derivative. Softplus approaches ReLU as the value of $\beta$ in $\text{softplus}_\beta(x) = \frac{\log(1 + e^{\beta x})}{\beta}$ increases. Note that the optimization of $L_{ce}$ follows the usual ReLU activation pathway. Thus, our training methodology consists of two steps: first, we calculate a perturbed image $\tilde{x} = x + \delta$ that maximizes $L_{attr}$ through iterative projected gradient descent; second, we use $\tilde{x}$ as the training point on which $L_{ce}$ and $L_{attr}$ are minimized, with their relative weightage controlled by the hyper-parameter $\lambda$ .
+
+Note that the square root of the cosine distance for unit $l_{2}$ -norm vectors, as used in our formulation of $L_{attr}$ , is a valid distance metric and is related to the Euclidean distance; details can be found in the supplementary section 1.2. Through experiments, we empirically show that minimizing the upper bound in Equation 4 as our training objective increases the attributional robustness of the model by a significant margin. The block diagram for our training methodology is shown in Fig 2, and its pseudo-code is given in Algorithm 1.
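The relation noted above follows directly: for unit $l_2$-norm vectors $u$ and $v$,

$$
\|u - v\|_2^2 = \|u\|_2^2 + \|v\|_2^2 - 2\, u \cdot v = 2\,(1 - u \cdot v) = 2\, d(u, v),
$$

so $\sqrt{d(u, v)} = \|u - v\|_2 / \sqrt{2}$ inherits the metric properties of the Euclidean distance.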
+
+# 3.3 Connection to Adversarial Robustness
+
+For a given input image $x$ , an adversarial example is a slightly perturbed image $x'$ such that $\|x - x'\|$ is small in some norm but the model $f_{\theta}$ classifies $x'$ incorrectly. Adversarial examples are calculated by optimizing a loss function $L$ that is large when $\arg \max f(x') \neq y$ :
+
+$$
+x _ {a d v} = \underset {x ^ {\prime}: | | x ^ {\prime} - x | | _ {p} < \epsilon} {\arg \max } L (\theta , x ^ {\prime}, y) \tag {7}
+$$
+
+where $L$ can be the cross-entropy loss, for example. For an axiomatic attribution function $A$ which satisfies the completeness axiom, i.e. $\sum_{j=1}^{n} A(x)_j = f(x)_y$ , it can be shown that $|f(x)_y - f(x')_y| \leq \|A(x) - A(x')\|_1$ , as below:
+
+$$
+\begin{aligned}
+\left| f(x)_{y} - f(x')_{y} \right| &= \left| \sum_{j=1}^{n} A(x)_{j} - \sum_{j=1}^{n} A(x')_{j} \right| \\
+&\leq \sum_{j=1}^{n} \left| A(x)_{j} - A(x')_{j} \right| \\
+&= \left\| A(x) - A(x') \right\|_{1}
+\end{aligned} \tag{8}
+$$
+
+The above relationship connects adversarial robustness to attributional robustness, as the maximum change in $f(x)_y$ is upper bounded by the maximum change in the attribution map of $x$ in its $\epsilon$ -neighborhood. It was also shown recently [57] that for an adversarially robust model, the gradient-based feature importance map $g^y (x)$ has a high spatial correlation with the image $x$ and highlights the perceptually relevant features of the image. For classifiers with a locally affine approximation, such as a DNN with ReLU activations, Etmann et al. [15] establish a theoretical connection between adversarial robustness and the correlation of $g^{y}(x)$ with the image $x$ : for a given image $x$ , its distance to the nearest decision boundary is upper-bounded by the dot product between $x$ and $g^{y}(x)$ . The authors of [15] showed that increasing adversarial robustness increases the correlation between $g^{y}(x)$ and $x$ . Moreover, this correlation is related to the increase in the attributional robustness of the model, as we show in Section 3.2.
+
+# 3.4 Downstream Task: Weakly Supervised Object Localization (WSOL)
+
+As an additional benefit of our approach, we show its improved performance on a downstream task - Weakly supervised Object localization (WSOL), in this case. The problem of WSOL deals with detecting objects where only class label information of images is available, and the ground truth bounding box location is inaccessible. Generally, the pipeline for obtaining bounding box locations in
+
+Algorithm 1: Attributional Robustness Training (ART)
+1 Input: Classification model $f_{\theta}$ , training data $X = \{(x_i,y_i)\}$ , batch size $b$ , number of epochs $E$ , number of attack steps $a$ , step-size for iterative perturbation $\alpha$ , softplus parameter $\beta$ , weight of $L_{attr}$ loss $\lambda$
+2 for epoch $\in \{1,2,\dots,E\}$ do
+3 Get mini-batch $x,y = \{(x_1,y_1)\dots (x_b,y_b)\}$
+4 $\tilde{x} = x + \text{Uniform} [-\epsilon , + \epsilon ]$
+5 for $i = 1,2,\ldots ,a$ do
+6 $\tilde{x} = \tilde{x} + \alpha \cdot \mathrm{sign}(\nabla_{x} L_{attr}(\tilde{x}, y)); \quad \tilde{x} = \mathrm{Proj}_{\ell_{\infty}}(\tilde{x})$
+7 end
+8 $i^{*}=\arg\max _{i\neq y}f(x)_{i}$
+9 Calculate $g^{y}(\tilde{x}) = \nabla_{x}f(\tilde{x})_{y}$
+10 Calculate $g^{i^{*}}(\tilde{x}) = \nabla_{x}f(\tilde{x})_{i^{*}}$ // We calculate $g^{y}(\tilde{x})$ and $g^{i^{*}}(\tilde{x})$ using softplus $\beta$ activation as described in Section 3.2
+12 loss $= L_{ce}(\tilde{x},y) + \lambda \cdot L_{attr}(\tilde{x},y)$
+13 Update $\theta$ using loss
+14 end
+15 return $f_{\theta}$
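Lines 4-7 of Algorithm 1 are a standard sign-gradient $\ell_{\infty}$ PGD loop. A minimal NumPy sketch is below, with `grad_fn` standing in for $\nabla_{x} L_{attr}$ , which in practice comes from backpropagation through the softplus network:

```python
import numpy as np

def perturb_linf(x, grad_fn, eps, alpha, steps, rng):
    # Line 4: random start inside the l_inf eps-ball around x
    x_t = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Line 6: ascent step on L_attr via the sign of its input gradient,
        # then projection back onto the eps-ball and the valid pixel range
        x_t = x_t + alpha * np.sign(grad_fn(x_t))
        x_t = np.clip(x_t, x - eps, x + eps)
        x_t = np.clip(x_t, 0.0, 1.0)
    return x_t
```

The returned $\tilde{x}$ is the point on which both $L_{ce}$ and $\lambda L_{attr}$ are then minimized (lines 12-13).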
+
+WSOL relies on attribution maps. The task of object detection is also widely used to empirically validate the quality of attribution maps. Since our proposed training methodology $ART$ encourages the attribution map to be invariant to small perturbations of the input, it leads to better attribution maps that identify the complete object instead of focusing only on its most discriminative part. We validate this empirically by using attribution maps obtained from our model for bounding-box detection on the CUB dataset, obtaining new state-of-the-art localization results.
+
+# 4 Experiments and Results
+
+In this section, we first describe the implementation details of $ART$ and evaluation setting for measuring the attributional and adversarial robustness. We then show the performance of $ART$ on the downstream WSOL task.
+
+# 4.1 Attributional and Adversarial Robustness
+
+Baselines: We compare our training methodology with the following approaches:
+
+- Natural: Standard training with cross entropy classification loss.
+- $PGD-n$ : Adversarially trained model with an $n$ -step PGD attack as in [31], which is typically used as a baseline in this area [10].
+- IG Norm and IG-SUM Norm [10]: Current state-of-the-art robust attribution training technique.
+
+Table 1: Attributional and adversarial robustness of different approaches on various datasets. Hyper-parameters for the attributional attack are the same as in [10]. Similarity measures used are IN: Top-k intersection; K: Kendall's tau rank order correlation. The values denote the similarity between attribution maps of original and perturbed examples [16] based on the Integrated Gradients method.
+
+| Dataset | Approach | IN | K | Natural Acc. | PGD-40 Acc. |
+| --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | Natural | 40.25 | 49.17 | 95.26 | 0.0 |
+| | PGD-10 [31] | 69.00 | 72.27 | 87.32 | 44.07 |
+| | ART | 92.90 | 91.76 | 89.84 | 37.58 |
+| SVHN | Natural | 60.43 | 56.50 | 95.66 | 0.0 |
+| | PGD-7 [31] | 39.67 | 55.56 | 92.84 | 50.12 |
+| | ART | 61.37 | 72.60 | 95.47 | 43.56 |
+| GTSRB | Natural | 68.74 | 76.48 | 99.43 | 19.9 |
+| | IG Norm [10] | 74.81 | 75.55 | 97.02 | 75.24 |
+| | IG-SUM Norm [10] | 74.04 | 76.84 | 95.68 | 77.12 |
+| | PGD-7 [31] | 86.13 | 88.42 | 98.36 | 87.49 |
+| | ART | 91.96 | 89.34 | 98.47 | 84.66 |
+| Flower | Natural | 38.22 | 56.43 | 93.91 | 0.0 |
+| | IG Norm [10] | 64.68 | 75.91 | 85.29 | 24.26 |
+| | IG-SUM Norm [10] | 66.33 | 79.74 | 82.35 | 47.06 |
+| | PGD-7 [31] | 80.84 | 84.14 | 92.64 | 69.85 |
+| | ART | 79.84 | 84.87 | 93.21 | 33.08 |
+
+Datasets and Implementation Details: To study the efficacy of our methodology, we benchmark on the following standard vision datasets: CIFAR-10 [27], SVHN [35], GTSRB [53] and Flower [36]. For the CIFAR-10, GTSRB and Flower datasets, we use the WideResNet-28-10 [64] model architecture for Natural, PGD-10 and $ART$ . For SVHN, we use the WideResNet-40-2 [64] architecture. We use a perturbation of $\epsilon = 8 / 255$ in $\ell_{\infty}$ -norm for $ART$ and $PGD-n$ as in [31, 10]. We use $\lambda = 0.5$ , $a = 3$ and $\beta = 50$ for all experiments in the paper. For training, we use the SGD optimizer with a step-wise learning rate schedule. More details about training hyper-parameters can be found in the supplementary section 1.3.
+
+Evaluation: For evaluating attributional robustness, we follow [10] and present our results with Integrated Gradients (IG)-based attribution maps. We show the attributional robustness of $ART$ on other attribution methods in Section 5. IG satisfies several theoretical properties desirable for an attribution method, e.g. the sensitivity and completeness axioms, and is defined as:
+
+$$
+I G (x, f (x) _ {i}) = (x - \bar {x}) \odot \int_ {t = 0} ^ {1} \nabla_ {x} f (\bar {x} + t (x - \bar {x})) _ {i} d t \tag {9}
+$$
+
+where $\overline{x}$ is a suitable baseline at which the model's prediction is neutral. For computing a perturbed image $\tilde{x}$ for which $IG(\tilde{x})$ changes drastically from $IG(x)$ , we perform the Iterative Feature Importance Attack (IFIA) proposed by Ghorbani et al. [16] with an $\ell_{\infty}$ bound of $\epsilon = 8 / 255$ , as used in previous work [10].
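In practice, the path integral in Equation 9 is approximated by a Riemann sum over interpolation steps. A minimal sketch of ours (with `grad_fn` denoting $\nabla_x f(\cdot)_i$ as supplied by the model, and a midpoint rule with `m` steps):

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, m=50):
    # Midpoint Riemann-sum approximation of the integral in Eq. 9
    total = np.zeros_like(x)
    for t in (np.arange(m) + 0.5) / m:
        total += grad_fn(baseline + t * (x - baseline))
    return (x - baseline) * total / m
```

For example, with $f(x) = \sum_j x_j^2$ (gradient $2x$) and a zero baseline, this recovers $IG_j = x_j^2$, and the completeness axiom $\sum_j IG_j = f(x) - f(\overline{x})$ holds.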
+
+
+Fig. 3: Examples of gradient attribution map [48] for different models on CIFAR-10. Top to bottom: Image; attribution maps for Natural, PGD-10 and ART models
+
+
+Fig. 4: Random samples (of resolution $32 \times 32$ ) generated using a CIFAR-10 robustly trained ART classifier
+
+For assessing the similarity between $A(x)$ of the original and $A(\tilde{x})$ of the perturbed image, we use Top-k intersection (IN) and Kendall's tau coefficient (K), similar to [10]. Kendall's tau coefficient measures the similarity of orderings when ranked by values, and is therefore a suitable metric for comparing attribution maps. Top-k intersection measures the percentage of common indices among the top-k values of the attribution maps of $x$ and $\tilde{x}$ . We report the average of the IN and K metrics over 1000 random samples of the test-set. More details about the attack methodology and evaluation parameters can be found in supplementary section 1.3. For evaluating adversarial robustness, we perform a 40-step PGD attack [31] using cross-entropy loss with an $\ell_{\infty}$ bound of $\epsilon = 8 / 255$ and report the model accuracy on adversarial examples. Table 1 compares attributional and adversarial robustness across different datasets and training approaches. $ART$ achieves state-of-the-art attributional robustness against attribution attacks [16] when compared with the baselines. We also observe that $ART$ consistently achieves higher test accuracy than [31] and has adversarial robustness significantly greater than that of the Natural model.
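The two similarity measures can be sketched as follows (our illustrative implementation: an $O(n^2)$ Kendall tau without tie correction; actual evaluations would typically use a library routine):

```python
import numpy as np

def topk_intersection(a, b, k):
    # Fraction of shared pixel indices among the k largest attribution values
    ta = set(np.argsort(a.ravel())[-k:])
    tb = set(np.argsort(b.ravel())[-k:])
    return len(ta & tb) / k

def kendall_tau(a, b):
    # Kendall rank correlation between two attribution maps (no tie handling)
    a, b = a.ravel(), b.ravel()
    n = a.size
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(a[i] - a[i + 1:]) * np.sign(b[i] - b[i + 1:]))
    return 2.0 * s / (n * (n - 1))
```

Both measures equal 1 when the perturbed attribution map preserves the original ranking, which is why higher values in Table 1 indicate greater attributional robustness.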
+
+Qualitative study of input-gradients for ART: Motivated by [57], which claims that adversarially trained models exhibit human-aligned gradients (agreeing with human saliency), we studied the same with $ART$ ; the results are shown in Fig 3. A qualitative study of input-gradients shows a high degree of spatial alignment between the object and the gradient. We also show image generation from random seeds in Fig 4 using the robust $ART$ model, as done in [43]. The image generation process involves maximization of the class score of the
+
+
+
+
+Fig. 5: Comparison of heatmap and estimated bounding box by VGG model trained via ART (top row) and ADL (bottom row) on CUB dataset; The red bounding box is ground truth and green bounding box corresponds to the estimated box
+
+
+
+Table 2: Weakly Supervised Localization on the CUB dataset. Bold text refers to the best GT-Known Loc and Top-1 Loc for each model. * denotes values reported directly from the paper. # denotes our implementation based on the official code released by ADL [11]
+
+| Model | Method | GT-Known Loc (Grad) | Top-1 Loc (Grad) | GT-Known Loc (CAM) | Top-1 Loc (CAM) | Top-1 Acc |
+| --- | --- | --- | --- | --- | --- | --- |
+| ResNet50-SE | ADL [11] | - | - | - | 62.29* | 80.34* |
+| ResNet50 | ADL# | 52.93 | 43.78 | 56.85 | 47.53 | 80.0 |
+| | Natural | 50.2 | 42.0 | 60.37 | 50.0 | 81.12 |
+| | PGD-7 [31] | 66.73 | 47.48 | 55.24 | 39.45 | 70.3 |
+| | ART | 82.65 | 65.22 | 58.87 | 46.02 | 77.51 |
+| VGG-GAP | ADL# | 63.18 | 43.59 | 69.36 | 50.88 | 70.31 |
+| | Natural | 72.54 | 53.81 | 48.75 | 35.03 | 72.94 |
+| | ART | 76.50 | 57.74 | 52.88 | 40.75 | 74.51 |
+
+desired class starting from a random seed which is sampled from some class-conditional seed distribution as defined in [43].
+
+# 4.2 Weakly Supervised Image Localization
+
+This task relies on the attribution map obtained from the classification model to estimate a bounding box for objects. We compare our approach with ADL [11] on the CUB dataset, which has ground truth bounding boxes for 5794 bird images. We adopt similar processing steps as ADL for predicting bounding boxes, except that we use the gradient attribution map $\nabla_x f(x)_y$ instead of CAM [71]. As a post-processing step, we convert the attribution map to grayscale, normalize it and then apply mean filtering with a $3 \times 3$ kernel. A bounding box is then fit over this heatmap to localize the object.
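The post-processing pipeline above can be sketched as follows (a minimal NumPy version of ours; the relative threshold used to fit the box is our illustrative choice, not a value from the paper):

```python
import numpy as np

def bbox_from_attribution(attr, thresh=0.2):
    # Grayscale magnitude, normalized to [0, 1]
    h = np.abs(attr).astype(float)
    h = (h - h.min()) / (h.max() - h.min() + 1e-12)
    # 3x3 mean filtering as the smoothing step
    p = np.pad(h, 1, mode='edge')
    sm = sum(p[dy:dy + h.shape[0], dx:dx + h.shape[1]]
             for dy in range(3) for dx in range(3)) / 9.0
    # Tightest box around pixels above a relative threshold
    ys, xs = np.where(sm >= thresh * sm.max())
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The returned `(x1, y1, x2, y2)` box is then compared against the ground truth via IoU, as in the GT-Known Loc and Top-1 Loc metrics.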
+
+We perform experiments on ResNet-50 [19] and VGG [49] architectures. We use an $\ell_{\infty}$ bound of $\epsilon = 2 / 255$ for $ART$ and $PGD-7$ training on the CUB dataset. For evaluation, we use the same metrics as [11], i.e. GT-Known Loc: the Intersection over Union (IoU) of the estimated box and the ground truth bounding box is at least 0.5, given the ground truth class; Top-1 Loc: the prediction is correct and the IoU of the bounding box is at least 0.5; Top-1 Acc: top-1 classification accuracy. More details about
+
+Table 3: Top-1 accuracy of different models on perturbed variants of test-set (GN:Gaussian noise; SN: Shot noise; IN: Impulse noise; DB: Defocus blur; Gl-B: Glass blur; MB: Motion blur; ZB: Zoom blur; S: Snow; F: Fog; B: Brightness; C: Contrast; E: Elastic transform; P: Pixelation noise; J: JPEG compression; Sp-N: Speckle Noise)
+
+| Models | GN | SN | IN | DB | Gl-B | MB | ZB | S | F | B | C | E | P | J | Sp-N |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Natural | 49.16 | 61.42 | 59.22 | 83.55 | 53.84 | 79.16 | 79.18 | 84.53 | 91.6 | 94.37 | 87.63 | 84.44 | 74.12 | 79.76 | 65.04 |
+| PGD-10 | 83.32 | 84.33 | 73.73 | 83.09 | 81.27 | 79.60 | 82.07 | 82.68 | 68.81 | 85.97 | 57.86 | 81.68 | 85.56 | 85.56 | 83.64 |
+| ART | 85.44 | 86.41 | 77.07 | 86.07 | 81.70 | 83.14 | 85.54 | 84.99 | 71.04 | 89.42 | 56.69 | 84.72 | 87.64 | 87.89 | 86.02 |
+
+Table 4: Attributional Robustness on CIFAR-10 for other attribution methods
+
+| Model | IN (Gradient [48]) | K (Gradient [48]) | IN (GradSHAP [29]) | K (GradSHAP [29]) |
+| --- | --- | --- | --- | --- |
+| Natural | 13.72 | 9.5 | 4.5 | 16.52 |
+| PGD-10 [31] | 54.8 | 54.06 | 45.05 | 59.80 |
+| ART | 76.07 | 70.31 | 48.31 | 62.35 |
+
+
+Fig. 6: $\operatorname{Cosine}(x, \nabla_x f(x)_y)$ for different models over test-set of CIFAR-10
+
+dataset and hyper-parameters can be found in the supplementary section 2.1. Our approach results in higher GT-Known Loc and Top-1 Loc for both the ResNet-50 and VGG-GAP [11] models, as shown in Table 2. We also show a qualitative comparison of the bounding boxes estimated by our approach with [11] in Fig 5.
+
+# 5 Discussion and Ablation Studies
+
+To understand the scope and impact of the proposed training approach $ART$ , we perform various experiments and report these findings in this section. These studies were carried out on the CIFAR-10 dataset.
+
+Robustness to targeted attribution attacks: In targeted attribution attacks, the aim is to find perturbations that minimize the dissimilarity between the attribution map of a given image and that of a target image. We evaluate the robustness of the $ART$ model using the targeted attribution attack proposed in [12] with the $IG$ attribution method on a batch of 1000 test examples. To obtain the target attribution maps, we randomly shuffle the examples and then evaluate the $ART$ and $PGD-10$ trained models on them. The Kendall's tau coefficient and top-$k$ intersection similarity measures between original and perturbed images were 64.76 and 70.64 for $ART$ , compared to 36.29 and 31.81 for the $PGD-10$ adversarially trained model.
+
+Attributional robustness for other attribution methods: We evaluate $ART$ against the attribution attack [16] using the gradient [48] and GradSHAP [29] attribution methods in Table 4. We observe that $ART$ achieves higher attributional robustness than the Natural and PGD-10 models on the Top-k intersection (IN) and Kendall's tau coefficient (K) measures. We also compare the cosine similarity between $x$ and $g^{y}(x)$ for all models trained on the CIFAR-10 dataset and show its variance plot in Fig. 6. We can see that the $ART$ -trained model achieves higher cosine similarity than Natural and PGD-10. This empirically validates that our optimization is effective in increasing the spatial correlation between $x$ and $g^{y}(x)$ .
+
+Robustness against gradient-free and stronger attacks: To show the absence of gradient masking and obfuscation [5, 7], we evaluate our model against a gradient-free adversarial optimization algorithm [58] and stronger PGD attacks with a larger number of steps. We observe similar adversarial robustness when we increase the number of steps in the PGD attack: for 100-step and 500-step PGD attacks, $ART$ achieves $37.42\%$ and $37.18\%$ accuracy respectively. Against the gradient-free SPSA [58] attack, $ART$ obtains $44.7\%$ adversarial accuracy, evaluated over 1000 random test samples.
+
+Robustness to common perturbations [20] and spatial adversarial perturbations [14]: We compare $ART$ with the $PGD-10$ adversarially trained model on the common perturbations dataset [20] for CIFAR-10. The dataset consists of images perturbed by 15 commonplace visual perturbations at five levels of severity, resulting in 75 distinct corruptions. We report the mean accuracy over severity levels for all 15 types of perturbations and observe that $ART$ performs better than the other models on a majority of these perturbations, as shown in Table 3. On a $PGD-40$ $\ell_{2}$ -norm attack with $\epsilon = 1.0$ and the spatial attack [14], we observe robustness of $39.65\%$ and $11.13\%$ for $ART$ versus $29.68\%$ and $6.76\%$ for the $PGD-10$ trained model, highlighting the improved robustness provided by our method. More results for varying $\epsilon$ in adversarial attacks and for combining $PGD$ adversarial training [31] with $ART$ can be found in the supplementary section 3.
+
+Image Segmentation: Data collection for the image segmentation task is time-consuming and costly. Hence, recent efforts [26, 59, 60, 25, 38, 68, 37] have focused on weakly supervised segmentation models, where image labels are leveraged instead of segmentation masks. Since models trained via our approach perform well on WSOL, we further evaluate them on the weakly supervised image segmentation task on the Flowers dataset [36], for which we have access to segmentation masks for 849 images. Samples of weakly supervised segmentation masks obtained from the attribution maps of various models are shown in Fig. 7. We observe that the attribution maps of $ART$ serve as a better prior for segmentation masks than those of the other baselines. We evaluate our results using the Top-1 Seg metric, which considers an answer correct when the model prediction is correct and the IoU between the ground-truth mask and the estimated mask is at least 0.5. We compare $ART$ against Natural and PGD-7 trained models using gradient [48] and IG [54] attribution maps. Attribution maps are converted into gray-scale heatmaps and a smoothing filter is applied as a post-processing step. We obtain a Top-1 Seg performance of 0.337, 0.422 and 0.604 via IG attribution maps, and 0.244, 0.246 and 0.317 via gradient maps, for Natural, PGD-7 and $ART$ respectively.
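The Top-1 Seg criterion can be sketched as follows (a minimal illustration with hypothetical names; masks are assumed to be boolean arrays):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def top1_seg(pred_labels, true_labels, pred_masks, true_masks, thresh=0.5):
    """Top-1 Seg: a sample counts as correct only when the class prediction
    is right AND the estimated mask overlaps the ground truth with IoU >= thresh."""
    correct = 0
    for pl, tl, pm, tm in zip(pred_labels, true_labels, pred_masks, true_masks):
        if pl == tl and iou(pm, tm) >= thresh:
            correct += 1
    return correct / len(true_labels)
```

A sample is thus penalized both for a wrong class prediction and for a poorly localized mask, which is why the metric rewards attribution maps that are spatially faithful priors.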
+
+Effect of $\beta$ , $\lambda$ and $a$ on performance: We perform experiments to study the effect of $\beta$ , $\lambda$ and $a$ (as used in Algorithm 1) on model performance by varying one parameter while fixing the others at their best-performing values, i.e. 50, 0.5 and 3 respectively. Fig. 8a shows the plots of attributional robustness. Fig. 8b shows the plots of test accuracy and adversarial accuracy under $\ell_{\infty}$ PGD-40 perturbations with $\epsilon = 8/255$ . We observe that adversarial and attributional robustness
+
+
+Fig. 7: Example images of weakly supervised segmentation masks obtained from different models via different attribution methods
+
+
+Fig. 8: (a): Top-k Intersection (IN) and Kendall correlation (K) measure of attributional robustness; (b): Test accuracy and adversarial accuracy (PGD-40 perturbations) on varying $\beta$ , $\lambda$ and attack steps in our training methodology on CIFAR-10
+
+initially increase with increasing $\beta$ , but the trend reverses for higher values of $\beta$ . On varying $\lambda$ , we find that the attributional and adversarial robustness of the model increase with increasing $\lambda$ and saturate beyond 0.75. For the attack-steps parameter $a$ , we find that performance in terms of test accuracy, adversarial accuracy and attributional robustness saturates after 3 attack steps, as shown in the right-most plots of Figs. 8a and 8b.
+
+# 6 Conclusion
+
+We propose a new method for the problem of attributional robustness, based on the observation that increasing the alignment between the object in an input image and the attribution map generated from the network's prediction improves attributional robustness. We empirically showed this for both untargeted and targeted attribution attacks over several benchmark datasets. We also showed that attributional robustness brings other improvements to the network, such as reduced vulnerability to adversarial attacks and common perturbations. On other vision tasks such as weakly supervised object localization, our attributionally robust model achieves a new state-of-the-art accuracy even without being explicitly trained for that objective. We hope that our work opens a broader discussion around notions of robustness and the application of robust features to other downstream tasks.
+
+Acknowledgements. This work was partly supported by the Ministry of Human Resource Development and Department of Science and Technology, Govt of India through the UAY program.
+
+# References
+
+1. Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access (2018)
+2. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. ICLR Workshop (2017)
+3. Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods. ICML 2018 Workshop (2018)
+4. Ardila, D., Kiraly, A.P., Choi, B., Reicher, J.J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D.P., Shetty, S.: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine 25, 954--961 (2019)
+5. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. ICML (2018)
+6. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7): e0130140 (2015)
+7. Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., Kurakin, A.: On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705 (2019)
+8. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP) (2017)
+9. Chattopadhyay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.: Gradcam++: Generalized gradient-based visual explanations for deep convolutional networks. arXiv preprint arXiv:1710.11063 (2017)
+10. Chen, J., Wu, X., Rastogi, V., Liang, Y., Jha, S.: Robust attribution regularization. arXiv preprint arXiv:1905.09957 (2019)
+11. Choe, J., Shim, H.: Attention-based dropout layer for weakly supervised object localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2219-2228 (2019)
+12. Dombrowski, A.K., Alber, M., Anders, C., Ackermann, M., Müller, K.R., Kessel, P.: Explanations can be manipulated and geometry is to blame. In: Advances in Neural Information Processing Systems. pp. 13567-13578 (2019)
+13. Du, M., Liu, N., Hu, X.: Techniques for interpretable machine learning. arXiv preprint arXiv:1808.00033 (2018)
+14. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: International Conference on Machine Learning. pp. 1802-1811 (2019)
+15. Etmann, C., Lunz, S., Maass, P., Schonlieb, C.B.: On the connection between adversarial robustness and saliency map interpretability. arXiv preprint arXiv:1905.04172 (2019)
+16. Ghorbani, A., Abid, A., Zou, J.: Interpretation of neural networks is fragile. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 3681-3688 (2019)
+17. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)
+18. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. ICLR (2015)
+19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
+
+20. Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations (2019)
+21. Hermans, A., Beyer, L., Leibe, B.: In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737 (2017)
+22. Ilyas, A., Engstrom, L., Athalye, A., Lin, J.: Black-box adversarial attacks with limited queries and information. ICML (2018)
+23. Jakubovitz, D., Giryes, R.: Improving dnn robustness to adversarial attacks using jacobian regularization. ECCV (2018)
+24. Jia, X., Shen, L.: Skin lesion classification using class activation map. arXiv preprint arXiv:1703.01053 (2017)
+25. Jiang, Q., Tawose, O.T., Pei, S., Chen, X., Jiang, L., Wang, J., Zhao, D.: Weakly-supervised image semantic segmentation based on superpixel region merging. Big Data and Cognitive Computing 3(2), 31 (2019)
+26. Kolesnikov, A., Lampert, C.H.: Seed, expand and constrain: Three principles for weakly-supervised image segmentation. CoRR abs/1603.06098 (2016), http://arxiv.org/abs/1603.06098
+27. Krizhevsky, A., Nair, V., Hinton, G.: Cifar-10. URL http://www.cs.toronto.edu/kriz/cifar.html (2010)
+28. Engstrom, L., Ilyas, A., Athalye, A.: Evaluating and understanding the robustness of adversarial logit pairing. NeurIPS SECML (2018)
+29. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) NeurIPS (2017), http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
+30. Lyu, C., Huang, K., Liang, H.N.: A unified gradient regularization family for adversarial examples. ICDM (2015)
+31. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)
+32. Mitani, A., Huang, A., Venugopalan, S., Corrado, G.S., Peng, L., Webster, D.R., Hammel, N., Liu, Y., Varadarajan, A.V.: Detection of anaemia from retinal fundus images via deep learning. Nature Biomedical Eng. 4, 18-27 (2020)
+33. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599v3 (2016)
+34. Moosavi-Dezfooli, S.M., Fawzi, A., Uesato, J., Frossard, P.: Robustness via curvature regularization, and vice versa. CVPR (2019)
+35. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: NIPS Workshop on Deep Learning and Unsupervised Feature Learning (2011)
+36. Nilsback, M.E., Zisserman, A.: A visual vocabulary for flower classification. In: IEEE Conference on Computer Vision and Pattern Recognition. vol. 2, pp. 1447-1454 (2006)
+37. Nilsback, M.E., Zisserman, A.: Delving deeper into the whorl of flower segmentation. Image Vision Comput. 28(6), 1049-1062 (Jun 2010). https://doi.org/10.1016/j.imavis.2009.10.001
+38. Oh, S.J., Benenson, R., Khoreva, A., Akata, Z., Fritz, M., Schiele, B.: Exploiting saliency for object segmentation from image level labels. CoRR abs/1701.08261 (2017), http://arxiv.org/abs/1701.08261
+39. Papernot, N., McDaniel, P., Goodfellow, I.J., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. ACM (2017)
+
+40. Petsiuk, V., Das, A., Saenko, K.: Rise: Randomized input sampling for explanation of black-box models. In: BMVC (2018)
+41. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: Explaining the predictions of any classifier. In: ACM SIGKDD (2016)
+42. Ross, A.S., Doshi-Velez, F.: Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. AAAI (2018)
+43. Santurkar, S., Ilyas, A., Tsipras, D., Engstrom, L., Tran, B., Madry, A.: Image synthesis with a single (robust) classifier. In: NeurIPS (2019)
+44. Schroff, F., Kalenichenko, D., Philbin, J.: Facenet: A unified embedding for face recognition and clustering. CVPR (2015)
+45. Selvaraju, R.R., Das, A., Vedantam, R., Cogswell, M., Parikh, D., Batra, D.: Gradcam: Visual explanations from deep networks via gradient-based localization (2016)
+46. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: ICML. pp. 3145-3153 (2017)
+47. Shrikumar, A., Greenside, P., Shcherbina, A., Kundaje, A.: Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713 (2016)
+48. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013)
+49. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
+50. Sinha, A., Singh, M., Kumari, N., Krishnamurthy, B., Machiraju, H., Balasubramanian, V.: Harnessing the vulnerability of latent layers in adversarially trained models. arXiv preprint arXiv:1905.05186 (2019)
+51. Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: Smoothgrad: removing noise by adding noise. Workshop on Visualization for Deep Learning, ICML (2017)
+52. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. ICLR workshop (2015)
+53. Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In: IEEE International Joint Conference on Neural Networks. pp. 1453-1460 (2011)
+54. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. ICML (2017)
+55. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. ICLR (2014)
+56. Taly, A., Joseph, A., Sood, A., Webster, D., Coz, D.D., Wu, D., Rahimy, E., Corrado, G., Smith, J., Krause, J., Blumer, K., Peng, L., Shumski, M., Hammel, N., Sayres, R.A., Barb, S., Rastegar, Z.: Using a deep learning algorithm and integrated gradient explanation to assist grading for diabetic retinopathy. Ophthalmology (2019)
+57. Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., Madry, A.: Robustness may be at odds with accuracy. In: ICLR (2019)
+58. Uesato, J., O'Donoghue, B., Kohli, P., van den Oord, A.: Adversarial risk and the dangers of evaluating against weak attacks. ICML (2018)
+59. Vasconcelos, M., Vasconcelos, N., Carneiro, G.: Weakly supervised top-down image segmentation. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). vol. 1, pp. 1001-1006 (June 2006). https://doi.org/10.1109/CVPR.2006.333
+
+60. Vezhnevets, A., Buhmann, J.M.: Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 3249-3256 (June 2010). https://doi.org/10.1109/CVPR.2010.5540060
+61. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD Birds-200-2011 Dataset. Tech. Rep. CNS-TR-2011-001, California Institute of Technology (2011)
+62. Xu, K., Liu, S., Zhao, P., Chen, P.Y., Zhang, H., Fan, Q., Erdogmus, D., Wang, Y., Lin, X.: Structured adversarial attack: Towards general implementation and better interpretability. ICLR (2019)
+63. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems (2019)
+64. Zagoruyko, S., Komodakis, N.: Wide residual networks. CoRR abs/1605.07146 (2016), http://arxiv.org/abs/1605.07146
+65. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: ECCV (2014)
+66. Zhang, H., Yu, Y., Jiao, J., Xing, E.P., Ghaoui, L.E., Jordan, M.I.: Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573 (2019)
+67. Zhang, J., Lin, Z., Brandt, J., Shen, X., Sclaroff, S.: Top-down neural attention by excitation backprop. ECCV (2016)
+68. Zhang, L., Song, M., Liu, Z., Liu, X., Bu, J., Chen, C.: Probabilistic graphlet cut: Exploiting spatial structure cue for weakly supervised image segmentation. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1908-1915 (June 2013). https://doi.org/10.1109/CVPR.2013.249
+69. Zhang, Q., Zhu, S.C.: Visual interpretability for deep learning: a survey. arXiv preprint arXiv:1802.00614 (2018)
+70. Zhang, X., Wang, N., Shen, H., Ji, S., Luo, X., Wang, T.: Interpretable deep learning under fire. arXiv preprint arXiv:1812.00891 (2018)
+71. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: CVPR (2016)
\ No newline at end of file
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/images.zip b/attributionalrobustnesstrainingusinginputgradientspatialalignment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bd367f46e491f5dfa594a471a48863971a1090cc
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05ab367274f2da5c8f8a36e58e4eeb11a77d46445a27f2bf99885a1693ea03f9
+size 532998
diff --git a/attributionalrobustnesstrainingusinginputgradientspatialalignment/layout.json b/attributionalrobustnesstrainingusinginputgradientspatialalignment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fae432949958a2b8ea29d8f1129c54e3e76fe542
--- /dev/null
+++ b/attributionalrobustnesstrainingusinginputgradientspatialalignment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fa1bb905cd1fb8166108093826af07f52b4cb7fa7eca8a8ce2957759b2c56a8
+size 539211