SlowGuess committed on
Commit f3d0558 · verified · 1 Parent(s): 2c277bc

Add Batch cff15c10-1030-4d9d-ba35-9a0d3a630544

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_content_list.json +3 -0
  2. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_model.json +3 -0
  3. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_origin.pdf +3 -0
  4. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/full.md +361 -0
  5. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/images.zip +3 -0
  6. 360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/layout.json +3 -0
  7. 3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_content_list.json +3 -0
  8. 3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_model.json +3 -0
  9. 3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_origin.pdf +3 -0
  10. 3dbaframeworkfordebuggingcomputervisionmodels/full.md +418 -0
  11. 3dbaframeworkfordebuggingcomputervisionmodels/images.zip +3 -0
  12. 3dbaframeworkfordebuggingcomputervisionmodels/layout.json +3 -0
  13. 3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_content_list.json +3 -0
  14. 3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_model.json +3 -0
  15. 3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_origin.pdf +3 -0
  16. 3dconceptgroundingonneuralfields/full.md +493 -0
  17. 3dconceptgroundingonneuralfields/images.zip +3 -0
  18. 3dconceptgroundingonneuralfields/layout.json +3 -0
  19. 3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_content_list.json +3 -0
  20. 3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_model.json +3 -0
  21. 3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_origin.pdf +3 -0
  22. 3dilgirregularlatentgridsfor3dgenerativemodeling/full.md +378 -0
  23. 3dilgirregularlatentgridsfor3dgenerativemodeling/images.zip +3 -0
  24. 3dilgirregularlatentgridsfor3dgenerativemodeling/layout.json +3 -0
  25. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_content_list.json +3 -0
  26. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_model.json +3 -0
  27. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_origin.pdf +3 -0
  28. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/full.md +251 -0
  29. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/images.zip +3 -0
  30. 3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/layout.json +3 -0
  31. 4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_content_list.json +3 -0
  32. 4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_model.json +3 -0
  33. 4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_origin.pdf +3 -0
  34. 4dunsupervisedobjectdiscovery/full.md +334 -0
  35. 4dunsupervisedobjectdiscovery/images.zip +3 -0
  36. 4dunsupervisedobjectdiscovery/layout.json +3 -0
  37. abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_content_list.json +3 -0
  38. abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_model.json +3 -0
  39. abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_origin.pdf +3 -0
  40. abenchmarkforcompositionalvisualreasoning/full.md +219 -0
  41. abenchmarkforcompositionalvisualreasoning/images.zip +3 -0
  42. abenchmarkforcompositionalvisualreasoning/layout.json +3 -0
  43. abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_content_list.json +3 -0
  44. abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_model.json +3 -0
  45. abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_origin.pdf +3 -0
  46. abestofbothworldsalgorithmforbanditswithdelayedfeedback/full.md +405 -0
  47. abestofbothworldsalgorithmforbanditswithdelayedfeedback/images.zip +3 -0
  48. abestofbothworldsalgorithmforbanditswithdelayedfeedback/layout.json +3 -0
  49. aboostingapproachtoreinforcementlearning/e2e16445-7251-410e-8e14-fb6768630d8b_content_list.json +3 -0
  50. aboostingapproachtoreinforcementlearning/e2e16445-7251-410e-8e14-fb6768630d8b_model.json +3 -0
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1301f108920bf7cfe297e34269e72e97be3a640e876f05ffd1d0607020479e2b
+ size 85318
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02838c42b246635cabbe67b20f3b7ad16cc78ee1f11cd62af146b0847f3b5958
+ size 108484
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/6e3469cb-722d-49d1-a6bf-8a1dd18c912a_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a064bd54831775325d9d15503baf707256abcc8eb3eec64e45f89ac70283d82a
+ size 7878630
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/full.md ADDED
@@ -0,0 +1,361 @@
1
+ # 360-MLC: Multi-view Layout Consistency for Self-training and Hyper-parameter Tuning
2
+
3
+ Bolivar Solarte\*, Chin-Hsuan Wu\*, Yueh-Cheng Liu, Yi-Hsuan Tsai†, Min Sun
4
+
5
+ $^{1}$ National Tsing Hua University, $^{2}$ Phiar Technologies
6
+
7
+ https://enriquesolarte.github.io/360-mlxc
8
+
9
+ # Abstract
10
+
11
+ We present 360-MLC, a self-training method based on multi-view layout consistency for finetuning monocular room layout models using unlabeled 360-images only. This can be valuable in practical scenarios where a pre-trained model needs to be adapted to a new data domain without using any ground truth annotations. Our simple yet effective assumption is that multiple layout estimations in the same scene must define a consistent geometry regardless of their camera positions. Based on this idea, we leverage a pre-trained model to project estimated layout boundaries from several camera views into the 3D world coordinate. Then, we re-project them back to the spherical coordinate and build a probability function, from which we sample the pseudo-labels for self-training. To handle unconfident pseudo-labels, we evaluate the variance in the re-projected boundaries as an uncertainty value to weight each pseudo-label in our loss function during training. In addition, since ground truth annotations are not available during training nor in testing, we leverage the entropy information in multiple layout estimations as a quantitative metric to measure the geometry consistency of the scene, allowing us to evaluate any layout estimator for hyper-parameter tuning, including model selection without ground truth annotations. Experimental results show that our solution achieves favorable performance against state-of-the-art methods when self-training from three publicly available source datasets to a unique, newly labeled dataset consisting of multi-view images of the same scenes.
12
+
13
+ # 1 Introduction
14
+
15
+ Room-Layout geometry is one of the fundamental geometry representations for an indoor scene, which can be parameterized with points and lines describing corners and wall boundaries. Therefore, this geometry has been largely used as a primary stage for challenging tasks like robot localization [5, 42], scene understanding [24], floor plan estimation [29], etc. Several methods have been proposed for estimating layout geometry from imagery [12, 13, 45], while the current state-of-the-art methods [19, 10, 17, 47, 15] leverage deep learning approaches to regress the wall-ceiling and floor boundaries directly from monocular 360-images in a supervised manner.
16
+
17
+ However, deploying a room layout model using 360-images in a new target domain remains a challenging problem. For instance, a pre-trained layout model may estimate inconsistent geometry, due to novel view positions, different lighting conditions, or severe object occlusions in the scene, especially for large and complex rooms. To handle these issues, ground truth annotations in the new target domain are usually required to finetune the model, which involves a cumbersome data labeling process. Moreover, considering the large variety of indoor scene styles, a room layout model may
18
+
19
+ ![](images/81238020470f94242e1bccf83afc47d2bd13a8480b3b5d5dd884be1fbbb80d2e.jpg)
20
+ Figure 1: 360-MLC pipeline. (a) Our method uses a pre-trained room layout model and collects the layout estimations from multiple views in the same scenes; (b) we re-project multi-view estimates to the target view (the image with the orange bounding box); (c) 360-MLC then generates the pseudolabel and the uncertainty to compute the consistency loss as self-supervisions without requiring ground truth labels in the target scene.
21
+
22
+ demand more new labeled data to adapt to another target domain. In this paper, we aim to make a pre-trained room layout model self-trainable in the new target domain without using any annotated data during model finetuning; such a setting is practical but has not been widely studied for room layout estimation.
23
+
24
+ To this end, we present 360-MLC, a self-training method based on Multi-view Layout Consistency (MLC), capable of adapting a pre-trained layout model into a new data domain using solely registered 360-images. Our main assumption is that multiple layout estimates must together define a consistent scene geometry regardless of their camera positions. Based on this idea, we project several estimated layout boundaries into the world coordinate in Euclidean space and re-project them back to a target view (spherical coordinate) for building a probability function, from which we sample the most likely points as pseudo-labels. In addition, we evaluate the variance in the re-projected boundaries as an uncertainty measure to weight our pseudo-labels for unconfident boundary regions. We combine both our pseudo-label and its uncertainty into the proposed weighted boundary consistency loss $(\mathcal{L}_{\mathrm{WBC}})$ , which allows us to finetune a layout model in a self-training manner without requiring any ground truth annotations. Therefore, pseudo-labels generated via our MLC have higher quality than pseudo-labels predicted from a single view, which is crucial for achieving satisfactory performance when no annotations are available in the new domain.
25
+
26
+ We further propose the multi-view layout consistency metric based on entropy information $(\mathcal{H}_{\mathrm{MLC}})$ that quantitatively evaluates any layout predictor without requiring ground truth labels. This metric can be used to monitor the quality of the predicted layouts in both training and testing. As a result, our $\mathcal{H}_{\mathrm{MLC}}$ metric can be used for hyper-parameter tuning and model selection, which is practical when no ground truths are available in the new domain. Our key idea behind $\mathcal{H}_{\mathrm{MLC}}$ is to evaluate the entropy information from multiple layouts projected into a discrete grid as a 2D density function, where a better geometry alignment of layouts would yield a lower entropy evaluation. We leverage our $\mathcal{H}_{\mathrm{MLC}}$ metric across different experiments and show its versatility and reliability for the multi-view layout setting.
27
+
28
+ In experiments, we leverage the MP3D-FPE [29] multi-view dataset as our unlabeled new domain. Since layout ground truths are not available in this dataset, we manually annotate 7 scenes with 700 frames solely for performance evaluation purposes. For model pre-training, we validate our framework on three real-world room layout estimation datasets: MatterportLayout [36, 48], Zillow Indoor Dataset (ZInD) [9], and the dataset used in LayoutNet [47]. We show that our method is able to handle two practical settings during training: 1) having both the labeled pre-trained data and the unlabeled data in the target domain, 2) only providing the unlabeled data with the pre-trained model. Moreover, we demonstrate the usefulness of the proposed modules, including the weighted boundary consistency loss with pseudo-labels, and the multi-view layout consistency metric that facilitates hyper-parameter tuning without the need for ground truth labels.
29
+
30
+ Our contributions based on the idea of Multi-view Layout Consistency are summarized as follows:
31
+
32
+ 1. Based on multi-view geometry consistency, we propose a self-training framework for room layout estimation, requiring only registered 360-images as the input.
33
+ 2. We propose the weighted boundary consistency loss function $(\mathcal{L}_{\mathrm{WBC}})$ that uses multiple estimated layout boundaries with their uncertainty to improve the self-training process using pseudo-labels.
34
+ 3. We introduce the multi-view layout consistency metric $(\mathcal{H}_{\mathrm{MLC}})$ for measuring multi-view layout geometry consistency based on entropy information. This metric allows us to evaluate any layout estimator quantitatively without requiring ground truth labels, which enables hyper-parameter tuning and model selection.
35
+
36
+ # 2 Related Work
37
+
38
+ Indoor room layout estimation. Estimating the layout structure of a room for cluttered indoor environments is a challenging task. Early methods estimate plane surfaces and their orientations based on points or edge features to construct the spatial layouts for perspective images [12, 13, 35] or panorama images (i.e. 360-images) [45, 41]. On the other hand, some approaches [3, 14, 7, 40] leverage semantic cues in the scene, such as objects or humans, to improve the layout estimation.
39
+
40
+ Deep learning approaches [19, 10, 17, 47, 15] leverage convolutional neural networks (CNN) to extract the geometric cues (e.g., corners or edges) and semantic cues (e.g., pixel-wise segmentation), which largely enhance the performance. Recent state-of-the-art methods [43, 30, 31, 37] are able to robustly estimate the layout of the whole room from a single panorama image. CFL [11] and AtlantaNet [21] avoid the commonly-used Manhattan world assumption [8], enabling the ability to handle complex room shapes. In addition to monocular approaches, MVLayoutNet [16] and PSMNet [38] explore the usage of multi-view 360-images as inputs to further improve the layout estimations. However, training deep neural networks to perform layout estimation requires large-scale datasets with manual annotations. In this paper, we aim to mitigate this problem by learning from unlabeled data.
41
+
42
+ Self-training. To incorporate unlabeled data during training, self-training [27, 44, 23] uses a pretrained teacher model to generate pseudo-labels for the data without ground truth annotations. Then, both the labeled data and the unlabeled data with pseudo-labels are used to train a better model (i.e., student model). Due to its simplicity, many attempts have been made for the field of semi-supervised learning [18, 2, 4, 28] and unsupervised domain adaptation [49, 50]. Moreover, recent methods [39, 46] show that self-training can surpass state-of-the-art fully-supervised models on large-scale datasets such as ImageNet.
43
+
44
+ However, most of these works focus on self-training for classification or object detection tasks, while self-training for tasks considering geometric predictions (e.g., room-Layout estimation studied in this paper) is rarely explored. SSLayout360 [34] trains a layout estimation model with pseudolabels produced by the Mean Teacher [33]. Yet, the simple extension from classification to layout estimation considers each image independently, ignoring the geometry information coming from other camera views. In addition, another challenge is that pseudo-labels are usually noisy. Previous methods attempt to reduce the noise by assembling multiple predictions for an image under different augmentations [4, 22] or by selecting only the pseudo-labels with high confidence [28]. In this paper, we leverage multi-view consistency from the layout estimations and measure their uncertainty to construct reliable pseudo-labels.
45
+
46
+ Unsupervised model validation. Without ground truth annotations, how to evaluate a machine learning model remains an open issue. Traditional unsupervised learning algorithms (e.g., clustering) can be evaluated without external labels through computing cohesion and separation [32]. For unsupervised domain adaptations, where the problem assumes no labeled data is available in the target domain, unsupervised validation is more practical for hyper-parameter tuning [20, 25]. [20] evaluates the confidence of the predictions of the classifier using entropy. [25] proposes the soft neighborhood density, measuring the local similarity between data samples (the higher, the better). However, these metrics are designed for classification or segmentation tasks and cannot easily be adopted by our tasks. Therefore, in this paper, we study unsupervised validation from another perspective: evaluating the geometry consistency between multiple views, requiring no ground truth annotations.
47
+
48
+ # 3 Our Approach: 360-MLC
49
+
50
+ Our primary goal is to deploy a model pre-trained on a source domain into a new dataset (target domain), where the data distribution may differ from the one used to pre-train the model. We assume that several images in the new scene are captured and registered by their camera poses, but their layout ground truth annotations are not available. Under this practical scenario, we present 360-MLC, a self-training method that is based on multi-view layout consistency.
51
+
52
+ For illustration purposes, Fig. 1 presents the overview of our method. First, we begin with a set of 360-images and a pre-trained room layout model that can generate a set of estimated layout boundaries (see green lines in Fig. 1-(a)). Then, by leveraging the corresponding camera poses of every image, all layout boundaries are projected into the world coordinate and then re-projected back into a target view (see yellow lines in Fig. 1-(b)). Details of these projections are described in §3.1.
53
+
54
+ Assuming that the projected boundaries from all the views describe the same scene geometry, we can compute the most likely layout boundary positions as pseudo-labels for self-training along with their uncertainty based on the variance (see Fig. 1-(c)). Upon the estimated pseudo-labels, we define our weighted boundary consistency loss $\mathcal{L}_{\mathrm{WBC}}$ , allowing us to define a reliable regularization used for self-training. More details are presented in §3.2.
55
+
56
+ A challenging step towards our goal is the lack of a metric for evaluating a layout estimator when no ground truth annotations are available. To tackle this issue, §3.3 describes our multi-view layout consistency metric $\mathcal{H}_{\mathrm{MLC}}$ , which allows us to evaluate multiple layout estimates without requiring any ground truth. We leverage the proposed metric throughout all the experiments described in §4 as the quantitative metric for model selection and hyper-parameter tuning.
57
+
58
+ # 3.1 Multi-view Layout Re-projection
59
+
60
+ In this section, we describe the multi-view layout re-projection process of all estimated layout boundaries from different camera positions in the scene. To this end, we define the set of 360-images and boundaries in the scene as follows:
61
+
62
+ $$
63
+ \left\{\left(I _ {i}, Y _ {i}\right) \right\} _ {i = 1: N}, \quad I _ {i} \in \mathbb {R} ^ {H \times W}, \quad Y _ {i} \in \mathbb {R} ^ {W}, \tag {1}
64
+ $$
65
+
66
+ where $I_{i}$ is the i-th 360-image with the size of $W$ columns times $H$ rows pixels, $Y_{i}$ is the layout boundary of image $I_{i}$ , and $N$ is the number of images. $Y_{i} \in \mathbb{R}^{W}$ is a vector of boundaries at all columns, where $Y_{i}(\theta) = \phi$ specifies that the layout boundary is at row $\phi$ for column $\theta$ in the pixel coordinate.
67
+
68
+ For supervised training, $Y_{i}$ is given as a ground truth label. However, in our proposed self-training framework, we aim to assemble pseudo-labels through geometry re-projection of multiple views. To begin with, we describe the process of projecting the boundary $Y_{i}$ into the $j$ -th target view as the re-projected boundary $Y_{i \rightarrow j}$ . The projection of $Y_{i}$ into the world coordinate using the Euclidean space is described as follows:
69
+
70
+ $$
71
+ X _ {i} = \operatorname {P r o j} \left(Y _ {i}, T _ {i}, h _ {i}\right), \tag {2}
72
+ $$
73
+
74
+ where $X_{i}$ is the projected layout boundary in the world coordinate; $h_i$ is the camera height; $T_{i} \in \mathrm{SE}(3)$ is the camera pose with respect to the world coordinate; $\operatorname{Proj}(\cdot)$ is the layout projection for spherical cameras that maps spherical coordinates $[\theta, \phi]^\top \in \mathbb{R}^2$ into the 3D world coordinate $[x, y, z]^\top \in \mathbb{R}^3$ . Details of this projection function $\operatorname{Proj}(\cdot)$ and how we handle unknown camera heights with estimated poses are presented in the supplementary material.
75
+
76
+ Then, $X_{i}$ in the world coordinate can be re-projected into a j-th target view $(Y_{i\rightarrow j})$ as follows:
77
+
78
+ $$
79
+ Y _ {i \rightarrow j} = \operatorname {P r o j} ^ {- 1} \left(T _ {j}, X _ {i}\right), \tag {3}
80
+ $$
81
+
82
+ where $\mathrm{Proj}^{-1}(\cdot)$ is the inverse of the projection function presented in Eq. (2). We collect all reprojected boundaries into $\mathbf{Y}_j = [Y_{1\rightarrow j};\ldots ;Y_{N\rightarrow j}]\in \mathbb{R}^{W\times N}$ , where $\mathbf{Y}_j(\theta)\in \mathbb{R}^N$ is a vector of row positions on boundaries from $N$ views at column $\theta$ . As shown in the yellow lines in Fig. 1-(b), the visualization of $\mathbf{Y}_j$ reveals the underlying geometry of the scene. More visualization results are presented in §4.2.
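
To make Eqs. (2)-(3) concrete, below is a minimal NumPy sketch of the forward and inverse projections, under the simplifying assumption that $Y_i$ encodes the floor-wall boundary (rows below the horizon) and that the floor is a horizontal plane at the camera height; the function names (`proj_boundary`, `reproject_boundary`) and the coordinate conventions are illustrative, not the released implementation.

```python
import numpy as np

def pixel_to_angles(cols, rows, W, H):
    """Equirectangular pixel -> (longitude theta, latitude phi); phi > 0 is below the horizon."""
    theta = (cols / W - 0.5) * 2.0 * np.pi
    phi = (rows / H - 0.5) * np.pi
    return theta, phi

def proj_boundary(Y, T, cam_height, H):
    """Eq. (2): lift a floor-boundary Y (W,) of pixel rows to 3D world points (W, 3)."""
    W = Y.shape[0]
    theta, phi = pixel_to_angles(np.arange(W), Y, W, H)
    d = cam_height / np.tan(phi)                      # horizontal distance camera -> wall (assumes phi > 0)
    pts_cam = np.stack([d * np.sin(theta),            # x (right)
                        np.full(W, cam_height),       # y (down), points lie on the floor
                        d * np.cos(theta)], axis=1)   # z (forward)
    R, t = T[:3, :3], T[:3, 3]
    return pts_cam @ R.T + t                          # camera -> world

def reproject_boundary(X, T, W, H):
    """Eq. (3): re-project world points X (M, 3) into the view with pose T as pixel coordinates."""
    R, t = T[:3, :3], T[:3, 3]
    pts_cam = (X - t) @ R                             # world -> camera
    x, y, z = pts_cam.T
    theta = np.arctan2(x, z)
    phi = np.arcsin(y / np.linalg.norm(pts_cam, axis=1))
    cols = (theta / (2.0 * np.pi) + 0.5) * W
    rows = (phi / np.pi + 0.5) * H
    return cols, rows
```

Stacking the `rows` output over the $N$ source views at a given target view then gives the matrix $\mathbf{Y}_j$ described above.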
83
+
84
+ ![](images/2dd8971e19e4344044ba68eb97f60686871e3a11bf59527d846e94d7c4014fff.jpg)
85
+ (a) Pre-trained model
86
+
87
+ ![](images/2c75e0ef81c9d086bd5d48a7aeb5092e7eae2873a548c17127e2a340cbeb8829.jpg)
88
+ (b) After our self-training
89
+ Figure 2: 2D density function of multi-view layouts. We project multiple layout boundaries into a top-view 2D density map that reveals the geometry consistency in the scene. (a) We present the projection of multi-view layout using a pre-trained model before self-training. (b) We observe a better layout consistency of the scene after our self-training process. In §3.3, we use such a top-view map for calculating our layout consistency metric $\mathcal{H}_{\mathrm{MLC}}$ .
90
+
91
+ # 3.2 Weighted Boundary Consistency Loss
92
+
93
+ In this section, we describe how multiple re-projected boundaries can be used to estimate pseudo-label boundaries and their uncertainty as reliable supervision for our self-training formulation. First, we obtain a set of predicted boundaries $\{Y_{i} = \mathcal{M}(I_{i})\}_{i\in 1:N}$ for all views using the pre-trained model $\mathcal{M}$ . Following the process in §3.1, we re-project the estimated boundaries into $\mathbf{Y}_j$ for each view. We define the pseudo-label $(\bar{Y_j})$ and its uncertainty $(\sigma_j)$ as follows:
94
+
95
+ $$
96
+ \bar {Y} _ {j} = \left[ \operatorname {M e d i a n} \left(\mathbf {Y} _ {j} (\theta)\right) \right] _ {\theta = 1: W} \in \mathbb {R} ^ {W}, \quad \sigma_ {j} = \left[ \operatorname {S T D} \left(\mathbf {Y} _ {j} (\theta)\right) \right] _ {\theta = 1: W} \in \mathbb {R} ^ {W}, \tag {4}
97
+ $$
98
+
99
+ where $\mathrm{Median}(\cdot)$ and $\mathrm{STD}(\cdot)$ are the median and standard deviation both applied to $\mathbf{Y}_j(\theta)$ (i.e., $N$ re-projections of row positions on boundary at column $\theta$ ). The pseudo-labels are computed for each target view. By leveraging Eq. (4), we define our geometry loss as follows:
100
+
101
+ $$
102
+ \mathcal {L} _ {\mathrm {W B C}} = \sum_ {j = 1} ^ {N} \sum_ {\theta = 1} ^ {W} \frac {\left| \left| Y _ {j} (\theta) - \bar {Y} _ {j} (\theta) \right| \right| _ {1}}{\sigma_ {j} ^ {2} (\theta)}. \tag {5}
103
+ $$
104
+
105
+ $\mathcal{L}_{\mathrm{WBC}}$ is our proposed weighted boundary consistency loss for finetuning the room layout model $\mathcal{M}$ . Note that the denominator weights the pseudo-label according to its uncertainty at each column $\theta$ . This design aims to reduce the effect of unstable predictions in the scene (e.g., drastic occlusions) that generally incur a larger variance in the re-projection due to the inconsistency between multiple views.
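
As a reference, a minimal PyTorch sketch of Eqs. (4)-(5) for one target view could look as follows, assuming `reproj_Y` is an $(N, W)$ tensor holding the $N$ boundaries re-projected into view $j$ and `pred_Y` is the $(W,)$ boundary currently predicted by the model; the small `eps` term is our own addition for numerical stability, and the names are illustrative.

```python
import torch

def wbc_loss(pred_Y, reproj_Y, eps=1e-6):
    """Weighted boundary consistency loss for one target view j (a sketch of Eqs. (4)-(5))."""
    pseudo_label = reproj_Y.median(dim=0).values     # (W,)  Eq. (4): robust per-column pseudo-label
    sigma = reproj_Y.std(dim=0)                      # (W,)  per-column uncertainty
    residual = torch.abs(pred_Y - pseudo_label)      # L1 distance per column
    return (residual / (sigma ** 2 + eps)).sum()     # Eq. (5): down-weight uncertain columns
```

The total training loss is then the sum of this term over all $N$ views in the scene.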
106
+
107
+ # 3.3 Multi-view Layout Consistency Metric
108
+
109
+ In practice, one challenge in adapting a pre-trained model to a completely unlabeled new data domain is that there is no labeled hold-out dataset for tuning hyper-parameters such as the learning rate. Hence, we need a metric to measure the performance based on unlabeled data. To this end, we propose the $\mathcal{H}_{\mathrm{MLC}}$ metric to measure multi-view layout consistency, which relies not on ground truth labels but on model outputs only.
110
+
111
+ First, we collect all estimated layout boundaries in the scene projected by Eq. (2) to build a top-view 2D density map. This can be described as follows:
112
+
113
+ $$
114
+ \mathbf {X} = \left\{X _ {i} \right\} _ {i = 1: N}, \quad \Phi (\mathbf {X}) \in \mathbb {R} ^ {U \times V}, \tag {6}
115
+ $$
116
+
117
+ where $\mathbf{X}$ is the set of all estimated layouts in the scene, and $\Phi (\cdot)$ is a top-down projection function that maps $\mathbf{X}$ into a discrete 2D-grid with the size $U\times V$ as a normalized histogram. Our key idea is to evaluate the entropy in this discrete grid as follows:
118
+
119
+ $$
120
+ \mathcal {H} _ {\mathrm {M L C}} = \sum_ {u, v} - \Phi_ {u, v} (\mathbf {X}) \cdot \log \Phi_ {u, v} (\mathbf {X}). \tag {7}
121
+ $$
122
+
123
+ The intuition behind this evaluation comes from the fact that better alignment of layout boundaries yields a lower entropy value, while a higher entropy value reflects poor alignment of
124
+
125
+ Table 1: Dataset statistics.
126
+
127
+ <table><tr><td>Source Dataset</td><td>Number of Frames</td><td>Target Dataset</td><td>Number of Frames</td></tr><tr><td>MatterportLayout [48]</td><td>2094</td><td>MP3D-FPE [29]</td><td>2094</td></tr><tr><td>ZInD [9]</td><td>2094</td><td>MP3D-FPE</td><td>2094</td></tr><tr><td>LayoutNet [47]</td><td>817</td><td>MP3D-FPE</td><td>817</td></tr></table>
128
+
129
+ layout geometry between multiple views. As a result, we can use $\mathcal{H}_{\mathrm{MLC}}$ to select hyper-parameters and early stop the model training, without using any ground truth annotations. For illustration purposes, Fig. 2 presents two projected scenes that show how the geometry consistency correlates with a less disordered 2D projection. More details are presented in the supplementary material.
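
A minimal sketch of Eqs. (6)-(7) is shown below; the grid resolution and the use of `numpy.histogram2d` for the top-down projection $\Phi(\cdot)$ are assumptions for illustration.

```python
import numpy as np

def mlc_entropy(X, bins=512):
    """X: (M, 3) world-coordinate boundary points stacked from all views (a sketch of Eqs. (6)-(7))."""
    hist, _, _ = np.histogram2d(X[:, 0], X[:, 2], bins=bins)   # top-down view keeps x and z
    density = hist / hist.sum()                                # normalized histogram Phi(X)
    p = density[density > 0]                                   # convention: 0 * log(0) = 0
    return float(-(p * np.log(p)).sum())                       # lower entropy = more consistent layouts
```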
130
+
131
+ # 4 Experiments
132
+
133
+ # 4.1 Experimental Setup
134
+
135
+ Datasets. We conduct extensive experiments using publicly available 360-image layout datasets: Matterport3D Floor Plan Estimation (MP3D-FPE) [29] as the target dataset, and three real-world datasets as the pre-training datasets, including MatterportLayout [36, 48], Zillow Indoor Dataset (ZInD) [9], and the dataset used in LayoutNet [47] that combines PanoContext [45] and Stanford2D3D [1] (referred to as the LayoutNet dataset for simplicity). MP3D-FPE is collected using the MINOS simulator [26] to render sequences of 360-images within each scene/room from the Matterport3D dataset [6]. Note that it is the only dataset containing multi-view 360-images and thus we consider it as the target dataset.
136
+
137
+ In experiments, we aim to pre-train the layout model using each of the pre-training datasets, and then utilize the target MP3D-FPE dataset for self-training and evaluation. We follow the standard training split released with each pre-training dataset. For the target dataset MP3D-FPE, we use 2094 frames as the training set and 700 frames as the testing set, from 32 and 7 scenes, respectively, where those scenes are not included in the training set of the Matterport3D dataset. On average, there are 10.46 views within each room. Note that the target MP3D-FPE dataset is challenging since it includes many complex scenes that do not follow the Manhattan assumption [8], and we carefully label the ground truth 2D and 3D layouts for the testing set. As a result, compared to other datasets, the performance scores on MP3D-FPE are lower when evaluated using state-of-the-art methods.
138
+
139
+ Evaluation metrics. We follow Zou et al. [48] and use four standard evaluation protocols. To evaluate the layout boundary, we use 2D and 3D intersection-over-union (IoU). To evaluate layout depth, we use the root-mean-square error (RMSE) by setting the camera height to 1.6 meters, and $\delta_{1}$ , which describes the percentage of pixels where the ratio between the estimated and the ground truth depth is within the threshold of 1.25. In addition, we introduce a new metric $\mathcal{H}_{\mathrm{MLC}}$ outlined in §3.3 to analyze the consistency between multiple layout estimations. We show in §4.3 that $\mathcal{H}_{\mathrm{MLC}}$ is highly correlated with 2D/3D IoUs on the hold-out dataset. Hence, $\mathcal{H}_{\mathrm{MLC}}$ is suitable for hyper-parameter tuning when ground truths are not available.
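
For the depth-based metrics, a minimal sketch could be as follows, assuming `pred` and `gt` are per-pixel layout depth maps rendered with the camera height fixed at 1.6 m; the IoU computation over layout polygons is omitted here.

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between predicted and ground-truth layout depth."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def delta_1(pred, gt, thr=1.25):
    """Fraction of pixels whose depth ratio max(pred/gt, gt/pred) is below the threshold."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < thr).mean())
```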
140
+
141
+ Implementation details. We adopt HorizonNet [30] as our layout estimation backbone due to its state-of-the-art performance. Note that, different from the original HorizonNet, our model is trained without the supervision on the corner channel since our pseudo-label contains only the boundary information. Common data augmentation techniques for 360-images are also applied during training, including left-right flipping, panoramic horizontal rotation, and luminance augmentation. We use the Adam optimizer to train the model for 300 epochs by setting the learning rate as 0.0001 and the batch size as 4. We save the model every 5 epochs and the early-stopped model is selected based on the lowest $\mathcal{H}_{\mathrm{MLC}}$ score on the hold-out test dataset for all methods. All models are trained on a single NVIDIA Titan X GPU with 12 GB of memory. We will make our models, codes, and dataset available to the public.
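
The resulting unsupervised model-selection loop can be summarized by the sketch below, where `train_one_epoch` (minimizing $\mathcal{L}_{\mathrm{WBC}}$ on pseudo-labels) and `compute_h_mlc` (evaluating Eq. (7) on the hold-out scenes) are hypothetical callables supplied by the user; the schedule mirrors the settings above (300 epochs, evaluation every 5 epochs).

```python
import copy
import torch

def self_train(model, loader, holdout_scenes, train_one_epoch, compute_h_mlc,
               epochs=300, lr=1e-4, eval_every=5):
    """Finetune with L_WBC and keep the checkpoint with the lowest H_MLC (no labels needed)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_h, best_state = float("inf"), copy.deepcopy(model.state_dict())
    for epoch in range(1, epochs + 1):
        train_one_epoch(model, loader, optimizer)         # self-training step on pseudo-labels
        if epoch % eval_every == 0:
            h_mlc = compute_h_mlc(model, holdout_scenes)  # multi-view consistency, Eq. (7)
            if h_mlc < best_h:                            # early stopping by consistency
                best_h, best_state = h_mlc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```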
142
+
143
+ Table 2: Evaluation results of Setting 1 on MP3D-FPE [29].
144
+
145
+ <table><tr><td>Pre-trained Dataset</td><td>Method</td><td>2D IoU (%) ↑</td><td>3D IoU (%) ↑</td><td>RMSE ↓</td><td>δ1 ↑</td><td>HMLC ↓</td></tr><tr><td rowspan="3">MatterportLayout [48]</td><td>Pre-trained</td><td>65.38</td><td>62.28</td><td>0.58</td><td>0.78</td><td>8.18</td></tr><tr><td>SSLayout360* [34]</td><td>70.53</td><td>66.74</td><td>0.48</td><td>0.82</td><td>8.15</td></tr><tr><td>Ours</td><td>71.50</td><td>67.70</td><td>0.46</td><td>0.82</td><td>8.10</td></tr><tr><td rowspan="3">ZInD [9]</td><td>Pre-trained</td><td>45.43</td><td>42.17</td><td>1.02</td><td>0.61</td><td>8.32</td></tr><tr><td>SSLayout360*</td><td>62.62</td><td>58.27</td><td>0.66</td><td>0.74</td><td>8.34</td></tr><tr><td>Ours</td><td>65.60</td><td>60.70</td><td>0.56</td><td>0.75</td><td>8.19</td></tr><tr><td rowspan="3">LayoutNet [47]</td><td>Pre-trained</td><td>64.34</td><td>58.92</td><td>0.61</td><td>0.70</td><td>8.50</td></tr><tr><td>SSLayout360*</td><td>67.48</td><td>62.89</td><td>0.56</td><td>0.77</td><td>8.29</td></tr><tr><td>Ours</td><td>69.40</td><td>65.22</td><td>0.55</td><td>0.72</td><td>8.32</td></tr></table>
146
+
147
+ Table 3: Evaluation results of Setting 2 on MP3D-FPE [29].
148
+
149
+ <table><tr><td>Pre-trained Dataset</td><td>Method</td><td>2D IoU (%) ↑</td><td>3D IoU (%) ↑</td><td>RMSE ↓</td><td>δ1 ↑</td><td>HMLC ↓</td></tr><tr><td rowspan="2">MatterportLayout [48]</td><td>SSLayout360-ST*</td><td>70.58</td><td>66.64</td><td>0.52</td><td>0.81</td><td>8.10</td></tr><tr><td>Ours</td><td>71.50</td><td>67.20</td><td>0.49</td><td>0.78</td><td>8.14</td></tr><tr><td rowspan="2">ZInD [9]</td><td>SSLayout360-ST*</td><td>56.52</td><td>52.12</td><td>0.72</td><td>0.74</td><td>8.20</td></tr><tr><td>Ours</td><td>64.45</td><td>59.53</td><td>0.59</td><td>0.75</td><td>8.17</td></tr><tr><td rowspan="2">LayoutNet [47]</td><td>SSLayout360-ST*</td><td>66.14</td><td>61.51</td><td>0.57</td><td>0.76</td><td>8.30</td></tr><tr><td>Ours</td><td>69.12</td><td>64.62</td><td>0.57</td><td>0.69</td><td>8.31</td></tr></table>
150
+
151
+ # 4.2 Experimental Results
152
+
153
+ Baselines. The proposed framework is compared against our re-implementation of SSLayout360 [34]. We ensure that our re-implemented SSLayout360* has similar performance compared to the reported results in [34]. Note that we use the same backbone, i.e., HorizonNet, for both our 360-MLC and SSLayout360*, and we train models using only the boundary supervision as stated in §4.1. We implement an extended self-training version based on SSLayout360*, namely SSLayout360-ST*, in which we disable the supervised loss and train without any ground truth annotations. We initialize all models with the same pre-trained weights from the official HorizonNet release. In addition, different from our approach, both SSLayout360* and SSLayout360-ST* are trained on two NVIDIA Titan X GPUs due to the requirement of larger memory for their teacher-student architecture.
154
+
155
+ Setting 1: labeled pre-training data + unlabeled target data. In this setting, we sample the same amount of training data from both the pre-training dataset and the target dataset, as shown in Table 1. We start from the pre-trained model, and then generate pseudo-labels for the unlabeled data. Then, during model finetuning, we include both the labeled pre-training data and the pseudo-labeled data from MP3D-FPE. In Table 2, we show that our method consistently performs favorably against SSLayout360* across most metrics; unlike our approach, SSLayout360* does not leverage multi-view layout consistency.
156
+
157
+ Setting 2: pre-trained model + unlabeled target data. A more practical and challenging setting is that only the pre-trained model and unlabeled data in the target domain are available during model finetuning. In Table 3, similar to Setting 1, our method shows consistent performance improvements over SSLayout360-ST*. We observe that our performance gains are larger (especially on ZInD and LayoutNet) compared to Setting 1. For MatterportLayout, since its data distribution is closer to MP3D-FPE (both are derived from Matterport3D), the performance difference is smaller.
158
+
159
+ Moreover, when we remove the labeled pre-trained data from Setting 1, we do not observe a significant performance drop in Setting 2 across all dataset settings, while SSLayout360-ST* is more sensitive to the labeled pre-trained data, e.g., on ZInD, its 2D/3D IoU decreases by $6.1\% / 6.15\%$ . This demonstrates the
160
+
161
+ ![](images/1c770ecf00458a6d1acb6d0563c150d25ccc5d5bc49e8cf944e24f664a0c3993.jpg)
162
+ (a)
163
+
164
+ ![](images/11f62fb5da768267755ace006e1c659e1d209a8f546a756c255df947103b6230.jpg)
165
+ (b)
166
+
167
+ ![](images/aa39e5ab03d8b3ce16209b6fa6dfb5a746a101599245750fe34303ead183db22.jpg)
168
+ (c)
169
+
170
+ ![](images/3f8bde94d5cb3d88c4b5bfe3e47038f970df181922a34d80ac65cacad5e1f2b2.jpg)
171
+
172
+ ![](images/ba299b97fc48cf8eb8f091952c266f5bec7927847838606feda926cce23c3b24.jpg)
173
+
174
+ ![](images/0180987e59bca55120ab6c5a19b524b7855b9806b2373887266ff1c233779d1a.jpg)
175
+
176
+ ![](images/c38c80a40348d434c71220833b4db7281acd6f5461281c677d5ac91a45549636.jpg)
177
+ Figure 3: Multi-view pseudo-labeling. We show the qualitative visualization of our proposed 360-MLC. In (a), all re-projected layout boundaries from different camera views are presented as yellow lines. In (b), the corresponding pseudo-labels are depicted in magenta. In (c), the uncertainty in the pseudo-labels is shown as 2D maps. Our proposed 360-MLC estimates plausible pseudo-labels from the estimations across the scene.
178
+
179
+ ![](images/d11ef6a76974c3937b16660f35db8b7e9d0cac347400fb53ad3f5ddf14bbe64e.jpg)
180
+
181
+ ![](images/111ad9563243d8520b3d75f93aae5102aa5a2344884b3123f973ec60769b9a19.jpg)
182
+
183
+ ![](images/b2061b21b2700ddee509d1ad7aa6cdb6a8258b644df9bbac747131a9d41c185d.jpg)
184
+
185
+ ![](images/49e455ddc7a4eab6c412e0419aedb74801a4b81760d4547d8fa5618a7fe39418.jpg)
186
+
187
+ ![](images/436c97a812cc46bfe3a8f3dd574d0e0722f9f621802bdbdce9eed2781531a1e7.jpg)
188
+ Ground Truth
189
+ Ours
190
+
191
+ ![](images/32ae59fc597bd8601777906c8e7da36504492951cf14b7b27ccd2648c62330cb.jpg)
192
+
193
+ ![](images/84fa3d6f6dbf43a9ae3c617a4c80a330560339300303ea9dea7acffcb2da9650.jpg)
194
+ SSLayout360* [34]
195
+
196
+ ![](images/1a96aca821f378e2a976c9dc33397a0381d40d225a09da9f284521249cce8058.jpg)
197
+
198
+ ![](images/95b4ff6de602201e0a029c34e2eefb19b668b3aabe29c41920c1cee65166b6ea.jpg)
199
+ Pre-trained
200
+
201
+ ![](images/7511619621e8067a1dacde2f1481af54a5c72c6ee6d9b756187c81f003c57cf1.jpg)
202
+ Figure 4: Qualitative results in the 2D top-view. We project the estimated layouts registered by the corresponding camera poses into 2D. Compared to our baseline SSLayout360* [34] and the pre-trained model, our model produces more consistent layouts (highlighted with red circles).
203
+
204
+ effectiveness of considering multi-view layout consistency that generates more reliable pseudo-labels, even when no labeled data is provided during training.
205
+
206
+ Qualitative results. For illustration purposes, we visualize the proposed multi-view pseudo-labeling in Fig. 3. In Fig. 4, we show qualitative results of multiple layouts projected into the 3D world coordinate for two test scenes evaluated on MP3D-FPE [29], following Setting 2. It can be observed that our proposed 360-MLC produces sharper and clearer layout boundaries for those scenes, demonstrating better performance compared with the baselines. Lastly, we analyze the qualitative layout predictions on 360-images in Fig. 5.
207
+
208
+ # 4.3 Ablation Study
209
+
210
+ In this section, we present our ablation study to validate the effectiveness of the proposed components. We conduct experiments using the ZInD pre-trained weights under the Setting 2 mentioned in $\S 4.2$ .
211
+
212
+ ![](images/f38a66f9f04e9ddbe4cdd84bdf01a2ca42fbb270a1d813d768787e9e708c3a38.jpg)
213
+
214
+ ![](images/c1494b88e6bb096a32b8fc4c39773cd6c2b22f81c20f20a56fe85e5313c2dc05.jpg)
215
+
216
+ ![](images/5b8d6a16aec9dce3a28800e9439ba99bf170353c676d9fe909815c2ea5383f29.jpg)
217
+ Figure 5: Qualitative comparisons on 360-images. We compare our model with SSLayout360* on 360-images. The cyan, yellow, and magenta lines are the ground truth, SSLayout360*, and 360-MLC, respectively. We observe that our model predicts layout boundaries closer to the ground truth than SSLayout360*, and is more robust to furniture (e.g., tables, couches) and large rooms. The dashed white bounding boxes highlight erroneous predictions from the baseline.
218
+
219
+ ![](images/431f70848a1a656cdfdaba5850e035d8d8b1bbddb4163ee098d0e05c7a88086d.jpg)
220
+
221
+ Table 4: Ablation study with Setting 2 on MP3D-FPE [29], pre-trained on ZInD [9].
222
+
223
+ <table><tr><td></td><td>Frames (%)</td><td>Median</td><td>Mean</td><td>LWBC</td><td>2D IoU (%)↑</td><td>3D IoU (%)↑</td><td>RMSE ↓</td><td>δ1 ↑</td><td>HMLC ↓</td></tr><tr><td>(a)</td><td>10</td><td>✓</td><td>-</td><td>✓</td><td>57.29</td><td>53.44</td><td>0.71</td><td>0.68</td><td>8.23</td></tr><tr><td>(b)</td><td>50</td><td>✓</td><td>-</td><td>✓</td><td>59.55</td><td>55.46</td><td>0.65</td><td>0.69</td><td>8.22</td></tr><tr><td>(c)</td><td>100</td><td>-</td><td>✓</td><td>✓</td><td>46.17</td><td>42.63</td><td>0.82</td><td>0.63</td><td>8.22</td></tr><tr><td>(d)</td><td>100</td><td>✓</td><td>-</td><td>-</td><td>62.81</td><td>58.05</td><td>0.58</td><td>0.76</td><td>8.18</td></tr><tr><td>(e)</td><td>100</td><td>✓</td><td>-</td><td>✓</td><td>64.45</td><td>59.53</td><td>0.59</td><td>0.75</td><td>8.17</td></tr></table>
224
+
225
+ Results are presented in Table 4, covering four main experiments. First, we investigate how the number of views used to generate our pseudo-labels affects the performance of our proposed solution. Second, we replace the Median function in Eq. (4) with Mean, aiming to demonstrate the effect of outliers on the quality of our pseudo-labels. Then, we compare the proposed $\mathcal{L}_{\mathrm{WBC}}$ against the vanilla $\mathcal{L}_1$ loss function. Lastly, we show how $\mathcal{H}_{\mathrm{MLC}}$ can be used for hyper-parameter tuning, such as selecting learning rates and models.
226
+
227
+ Multi-view pseudo-labeling. In this experiment, we aim to verify whether using estimations from more views yields higher-quality pseudo-labels. Therefore, we train three models with the same amount of data but using pseudo-labels created from $10\%$ , $50\%$ and $100\%$ of the frames, as shown in rows (a), (b), and (e) of Table 4, respectively. The result demonstrates the contribution of our proposed method: using the layout estimations from more views for self-training helps assemble more robust training signals.
228
+
229
+ Median and mean in Eq. (4). In this experiment, we investigate the impact of the median operator for pseudo-label generation. We compare the mean and median functions in row (c) and (e) in Table 4. It can be observed that the median function has a better performance since it is capable of ignoring outliers in the re-projected boundaries from multiple views, increasing the robustness of pseudo-labels.
230
+
231
+ $\mathcal{L}_{\mathrm{WBC}}$ and $\mathcal{L}_1$ . The results in row (d) and (e) of Table 4 show that adding the uncertainty $\sigma$ in $\mathcal{L}_{\mathrm{WBC}}$ performs better than the simple $\mathcal{L}_1$ loss. This is because our proposed loss function down-weights the part of the layout boundaries that are noisy (i.e., high variance), avoiding the effect of unreliable pseudo-labels during self-training.
232
+
233
+ $\mathcal{H}_{\mathrm{MLC}}$ for hyper-parameter tuning. We present an example of using our proposed metric $\mathcal{H}_{\mathrm{MLC}}$ for hyper-parameter tuning. In Fig. 6, we test three different settings using the same model training
234
+
235
+ (a)
236
+ ![](images/d9b4f0ca5c7efa8095836e143ae9af021a1e290794b9efe9d5b43c658079454e.jpg)
237
+ Legend: lr = 1e-5, data = 1k; lr = 1e-4, data = 1k; lr = 1e-5, data = 2k
238
+
239
+ ![](images/dfd1b2d9d6175afebde8b602484ab67928d6549c0362a2cde5c663bed569b315.jpg)
240
+ (b)
241
+ Figure 6: An example of hyper-parameter tuning. We show that the layout evaluation metrics, (a) 2D IoU and (b) 3D IoU, evaluated using ground truth labels, are inversely correlated with the proposed metric (c) $\mathcal{H}_{\mathrm{MLC}}$ under three different settings. In this way, we are able to select better models with lower values of $\mathcal{H}_{\mathrm{MLC}}$ . We conduct the experiment under two conditions: different learning rates and different amounts of training data.
242
+
243
+ ![](images/99ec51dc2f86e257610318cf49eda8761a7c1648c28148a7c9e73023d983bb8e.jpg)
244
+ (c)
245
+
246
+ Table 5: Evaluation results of Setting 2 on MP3D-FPE [29] using estimated poses.
247
+
248
+ <table><tr><td>Pre-trained Dataset</td><td>Method</td><td>2D IoU (%) ↑</td><td>3D IoU (%) ↑</td><td>RMSE ↓</td><td>δ1 ↑</td></tr><tr><td rowspan="3">MatterportLayout [48]</td><td>Ours + ground truth poses</td><td>71.50</td><td>67.20</td><td>0.49</td><td>0.78</td></tr><tr><td>Ours + estimated poses</td><td>70.85</td><td>66.91</td><td>0.48</td><td>0.78</td></tr><tr><td>Ours + noisy poses</td><td>66.05</td><td>61.41</td><td>0.71</td><td>0.65</td></tr></table>
249
+
250
+ process. Among them, using learning rate $1 \times 10^{-5}$ and $1K$ training samples (green lines) performs the worst, while adopting learning rate $1 \times 10^{-5}$ and $2K$ training samples (blue lines) performs the best. We show that the trend of our proposed unsupervised metric (Fig. 6-(c)) is consistent with the two supervised metrics, 2D/3D IoU (Fig. 6-(a) and (b)). Therefore, even without ground truth labels, our metric can serve as a robust indicator for validation.
251
+
252
+ # 5 Limitations
253
+
254
+ Several views from the same scene with their registered camera poses are required to formulate the proposed 360-MLC. Although the registration of multiple camera poses can be accomplished accurately by external sensors, structure from motion (SfM), or Simultaneous Localization and Mapping (SLAM) solutions, any error in this registration may lead to poor performance. To complement the experiments in §4, Table 5 shows that our proposed method maintains similar performance when using estimated poses under mild noise conditions. However, under severe noise conditions, the performance of our solution degrades noticeably.
255
+
256
+ # 6 Conclusions
257
+
258
+ We present 360-MLC, a self-training method based on multi-view layout consistency for finetuning monocular 360-Layout models using unlabeled data only. Our method tackles a practical scenario where a pre-trained model needs to be adapted to a new data domain without using any ground truth annotations. In addition, we leverage the entropy information in multiple layout estimations as a quantitative metric to measure the geometry consistency of the scene, allowing us to evaluate any layout estimator for hyper-parameter tuning and model selection in an unsupervised fashion. Experimental results show that our self-training solution achieves favorable performance against state-of-the-art methods from three publicly available source datasets to the newly labeled multi-view MP3D-FPE dataset.
259
+
260
+ # 7 Acknowledgements
261
+
262
+ This work is supported in part by the Ministry of Science and Technology of Taiwan (MOST 110-2634-F-002-051). We thank the National Center for High-performance Computing (NCHC) for computational and storage resources.
263
+
264
+ # References
265
+
266
+ [1] Iro Armeni, Sasha Sax, Amir R Zamir, and Silvio Savarese. Joint 2d-3d-semantic data for indoor scene understanding. arXiv preprint arXiv:1702.01105, 2017. 6
267
+ [2] Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. Advances in neural information processing systems, 27, 2014. 3
268
+ [3] Sid Yingze Bao, Min Sun, and Silvio Savarese. Toward coherent object detection and scene layout understanding. Image and Vision Computing, 29(9):569-579, 2011. 3
269
+ [4] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32, 2019. 3
270
+ [5] Federico Boniardi, Abhinav Valada, Rohit Mohan, Tim Caselitz, and Wolfram Burgard. Robot localization in floor plans using a room layout edge extraction network. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5291-5297. IEEE, 2019. 1
271
+ [6] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. International Conference on 3D Vision (3DV), 2017. 6
272
+ [7] Yu-Wei Chao, Wongun Choi, Caroline Pantofaru, and Silvio Savarese. Layout estimation of highly cluttered indoor scenes using geometric and semantic cues. In International Conference on Image Analysis and Processing, pages 489–499. Springer, 2013. 3
273
+ [8] James M Coughlan and Alan L Yuille. Manhattan world: Compass direction from a single image by bayesian inference. In Proceedings of the seventh IEEE international conference on computer vision, volume 2, pages 941-947. IEEE, 1999. 3, 6
274
+ [9] Steve Cruz, Will Hutchcroft, Yuguang Li, Naji Khosravan, Ivaylo Boyadzhiev, and Sing Bing Kang. Zillow indoor dataset: Annotated floor plans with 360deg panoramas and 3d room layouts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2133-2143, 2021. 2, 6, 7, 9
275
+ [10] Saumitro Dasgupta, Kuan Fang, Kevin Chen, and Silvio Savarese. Delay: Robust spatial layout estimation for cluttered indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 616-624, 2016. 1, 3
276
+ [11] Clara Fernandez-Labrador, Jose M Facil, Alejandro Perez-Yus, Cédric Demonceaux, Javier Civera, and Jose J Guerrero. Corners for layout: End-to-end layout recovery from 360 images. IEEE Robotics and Automation Letters, 5(2):1255-1262, 2020. 3
277
+ [12] Alex Flint, Christopher Mei, David Murray, and Ian Reid. A dynamic programming approach to reconstructing building interiors. In European conference on computer vision, pages 394-407. Springer, 2010. 1, 3
278
+ [13] Alex Flint, David Murray, and Ian Reid. Manhattan scene understanding using monocular, stereo, and 3d features. In 2011 International Conference on Computer Vision, pages 2228-2235. IEEE, 2011. 1, 3
279
+ [14] Abhinav Gupta, Martial Hebert, Takeo Kanade, and David Blei. Estimating spatial layout of rooms using volumetric reasoning about objects and surfaces. Advances in neural information processing systems, 23, 2010. 3
280
+ [15] Martin Hirzer, Vincent Lepetit, and PETER ROTH. Smart hypothesis generation for efficient and robust room layout estimation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2912-2920, 2020. 1, 3
281
+ [16] Zhihua Hu, Bo Duan, Yanfeng Zhang, Mingwei Sun, and Jingwei Huang. Mvlayoutnet: 3d layout reconstruction with multi-view panoramas. arXiv preprint arXiv:2112.06133, 2021. 3
282
+ [17] Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, and Andrew Rabinovich. Roomnet: End-to-end room layout estimation. In Proceedings of the IEEE international conference on computer vision, pages 4865-4874, 2017. 1, 3
283
+ [18] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 896, 2013. 3
284
+ [19] Arun Mallya and Svetlana Lazebnik. Learning informative edge maps for indoor scene layout prediction. In Proceedings of the IEEE international conference on computer vision, pages 936-944, 2015. 1, 3
285
+ [20] Pietro Morerio, Jacopo Cavazza, and Vittorio Murino. Minimal-entropy correlation alignment for unsupervised deep domain adaptation. International Conference on Learning Representations, 2018. 3
286
+ [21] Giovanni Pintore, Marco Agus, and Enrico Gobbetti. Atlantanet: Inferring the 3d indoor layout from a single $360^{\circ}$ image beyond the Manhattan world assumption. In European Conference on Computer Vision, pages 432-448. Springer, 2020. 3
287
+ [22] Ilija Radosavovic, Piotr Dollár, Ross Girshick, Georgia Gkioxari, and Kaiming He. Data distillation: Towards omni-supervised learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4119-4128, 2018. 3
288
+ [23] Ellen Riloff. Automatically generating extraction patterns from untagged text. In Proceedings of the national conference on artificial intelligence, pages 1044-1049, 1996. 3
289
+
290
+ [24] Antoni Rosinol, Arjun Gupta, Marcus Abate, Jingnan Shi, and Luca Carlone. 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans. In Proceedings of Robotics: Science and Systems, July 2020. 1
291
+ [25] Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9184-9193, 2021. 3
292
+ [26] Manolis Savva, Angel X Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. Minos: Multimodal indoor simulator for navigation in complex environments. arXiv preprint arXiv:1712.03931, 2017. 6
293
+ [27] Henry Scudder. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 11(3):363-371, 1965. 3
294
+ [28] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596-608, 2020. 3
295
+ [29] Bolivar Solarte, Yueh-Cheng Liu, Chin-Hsuan Wu, Yi-Hsuan Tsai, and Min Sun. 360-dfpe: Leveraging monocular 360-Layouts for direct floor plan estimation. IEEE Robotics and Automation Letters, pages 1–1, 2022. 1, 2, 6, 7, 8, 9, 10
296
+ [30] Cheng Sun, Chi-Wei Hsiao, Min Sun, and Hwann-Tzong Chen. Horizonnet: Learning room layout with 1d representation and pano stretch data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1047–1056, 2019. 3, 6
297
+ [31] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Hohonet: 360 indoor holistic understanding with latent horizontal features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2573-2582, 2021. 3
298
+ [32] Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. Introduction to Data Mining. Pearson Education, Inc., New Delhi, 2006. 3
299
+ [33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in neural information processing systems, 30, 2017. 3
300
+ [34] Phi Vu Tran. Sslayout360: Semi-supervised indoor layout estimation from $360^{\circ}$ panorama. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15348-15357. IEEE Computer Society, 2021. 3, 7, 8
301
+ [35] Grace Tsai, Shanghai Xu, Jingen Liu, and Benjamin Kuipers. Real-time indoor scene understanding using bayesian filtering with motion cues. In 2011 International Conference on Computer Vision, pages 121-128. IEEE, 2011. 3
302
+ [36] Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, and Yi-Hsuan Tsai. Layoutmp3d: Layout annotation of matterport3d. arXiv preprint arXiv:2003.13516, 2020. 2, 6
303
+ [37] Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, and Yi-Hsuan Tsai. Led2-net: Monocular 360deg layout estimation via differentiable depth rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12956–12965, 2021. 3
304
+ [38] Haiyan Wang, Will Hutchcroft, Yuguang Li, Zhiqiang Wan, Ivaylo Boyadzhiev, Yingli Tian, and Sing Bing Kang. Psmnet: Position-aware stereo merging network for room layout estimation. arXiv preprint arXiv:2203.15965, 2022. 3
305
+ [39] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10687-10698, 2020. 3
306
+ [40] Jiu Xu, Björn Stenger, Tommi Kerola, and Tony Tung. Pano2cad: Room layout from a single panorama image. In 2017 IEEE winter conference on applications of computer vision (WACV), pages 354-362. IEEE, 2017. 3
307
+ [41] Hao Yang and Hui Zhang. Efficient 3d room shape recovery from a single panorama. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5422-5430, 2016. 3
308
+ [42] Shichao Yang and Sebastian Scherer. Monocular object and plane slam in structured environments. IEEE Robotics and Automation Letters, 4(4):3145-3152, 2019. 1
309
+ [43] Shang-Ta Yang, Fu-En Wang, Chi-Han Peng, Peter Wonka, Min Sun, and Hung-Kuo Chu. Dula-net: A dual-projection network for estimating room layouts from a single rgb panorama. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3363–3372, 2019. 3
310
+ [44] David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196, 1995. 3
311
+ [45] Yinda Zhang, Shuran Song, Ping Tan, and Jianxiong Xiao. Panocontext: A whole-room 3d context model for panoramic scene understanding. In European conference on computer vision, pages 668-686. Springer, 2014. 1, 3, 6
312
+ [46] Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. Rethinking pre-training and self-training. Advances in neural information processing systems, 33:3833-3845, 2020. 3
313
+
314
+ [47] Chuhang Zou, Alex Colburn, Qi Shan, and Derek Hoiem. Layoutnet: Reconstructing the 3d room layout from a single rgb image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2051-2059, 2018. 1, 2, 3, 6, 7
315
+ [48] Chuhang Zou, Jheng-Wei Su, Chi-Han Peng, Alex Colburn, Qi Shan, Peter Wonka, Hung-Kuo Chu, and Derek Hoiem. 3d Manhattan room layout reconstruction from a single 360 image. 2019. 2, 6, 7, 10
316
+ [49] Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European conference on computer vision (ECCV), pages 289-305, 2018. 3
317
+ [50] Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jinsong Wang. Confidence regularized self-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5982-5991, 2019. 3
318
+
319
+ # Checklist
320
+
321
+ The checklist follows the references. Please read the checklist guidelines carefully for information on how to answer these questions. For each question, change the default [TODO] to [Yes], [No], or [N/A]. You are strongly encouraged to include a justification to your answer, either by referencing the appropriate section of your paper or providing a brief inline description. For example:
322
+
323
+ - Did you include the license to the code and datasets? [Yes]
324
+ - Did you include the license to the code and datasets? [No] The code and the data are proprietary.
325
+ - Did you include the license to the code and datasets? [N/A]
326
+
327
+ Please do not modify the questions and only use the provided macros for your answers. Note that the Checklist section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions/answers below.
328
+
329
+ 1. For all authors...
330
+
331
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
332
+ (b) Did you describe the limitations of your work? [No] The limitations are clear from the problem definition.
333
+ (c) Did you discuss any potential negative societal impacts of your work? [No] There is no such issue.
334
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
335
+
336
+ 2. If you are including theoretical results...
337
+
338
+ (a) Did you state the full set of assumptions of all theoretical results? [N/A]
339
+ (b) Did you include complete proofs of all theoretical results? [N/A]
340
+
341
+ 3. If you ran experiments...
342
+
343
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We will release them after publication.
344
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
345
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No]
346
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
347
+
348
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
349
+
350
+ (a) If your work uses existing assets, did you cite the creators? [Yes]
351
+ (b) Did you mention the license of the assets? [No] Please refer to the cited papers.
352
+ (c) Did you include any new assets either in the supplemental material or as a URL? [No] We will release them after publication.
353
+
354
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [No] Please refer to the cited papers.
355
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] Please refer to the cited papers.
356
+
357
+ 5. If you used crowdsourcing or conducted research with human subjects...
358
+
359
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
360
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
361
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:629dd37d05eab354c1b12e35c43c18f01820dee83dc31b2983dbadae26b83b83
3
+ size 552474
360mlcmultiviewlayoutconsistencyforselftrainingandhyperparametertuning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:484fb539957786c535cecab24f3f80db445e2309d652bdad1560dfacfc3457ec
3
+ size 453498
3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9937c6590881fd63058406a73abc1f90b68d3b36c522dc9d0fa485fa7dabb389
3
+ size 85223
3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9424af76b0d4c15d17fd89bc3583db4e24964d5b284308f27a08883c08e3b534
3
+ size 113704
3dbaframeworkfordebuggingcomputervisionmodels/cd93b6f4-1282-48aa-a291-7bb9dc9637ba_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4eb70849d99dd9a7821adad873418d48b736e713122f8d41be8eda4ccb1f9aa6
3
+ size 7919132
3dbaframeworkfordebuggingcomputervisionmodels/full.md ADDED
@@ -0,0 +1,418 @@
1
+ # 3DB: A Framework for Debugging Computer Vision Models
2
+
3
+ Guillaume Leclerc†
4
+
5
+ LECLERC@MIT.EDU MIT*
6
+
7
+ Hadi Salman†
8
+
9
+ HADY@MIT.EDU
10
+ MIT*
11
+
12
+ Andrew Ilyas†
13
+
14
+ AILYAS@MIT.EDU MIT
15
+
16
+ Sai Vemprala
17
+
18
+ SAIHV@MICROSOFT.COM
19
+
20
+ Microsoft Research
21
+
22
+ Logan Engstrom
23
+
24
+ ENGSTROM@MIT.EDU MIT
25
+
26
+ Vibhav Vineet
27
+
28
+ VIVINEET@MICROSOFT.COM
29
+
30
+ Microsoft Research
31
+
32
+ Kai Xiao
33
+
34
+ KAIX@MIT.EDU
35
+ MIT
36
+
37
+ Pengchuan Zhang
38
+
39
+ PENZHAN@MICROSOFT.COM
40
+
41
+ Microsoft Research
42
+
43
+ Shibani Santurkar
44
+
45
+ SHIBANI@MIT.EDU MIT
46
+
47
+ Greg Yang
48
+
49
+ GE.YANG@MICROSOFT.COM Microsoft Research
50
+
51
+ Ashish Kapoor
52
+
53
+ AKAPOOR@MICROSOFT.COM Microsoft Research
54
+
55
+ Aleksander Madry
56
+
57
+ MADRY@MIT.EDU MIT
58
+
59
+ # Abstract
60
+
61
+ We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation. We demonstrate, through a wide range of use cases, that 3DB allows users to discover vulnerabilities in computer vision systems and gain insights into how models make decisions. 3DB captures and generalizes many robustness analyses from prior work, and enables one to study their interplay. Finally, we find that the insights generated by the system transfer to the physical world. We are releasing 3DB as a library<sup>1</sup> alongside a set of examples<sup>2</sup>, guides<sup>3</sup>, and documentation<sup>4</sup>.
62
+
63
+ # 1 Introduction
64
+
65
+ Modern machine learning models turn out to be remarkably brittle under distribution shift. Indeed, in the context of computer vision, models exhibit an abnormal sensitivity to slight input rotations and translations [18, 37], synthetic image corruptions [32, 38], and changes to the data collection pipeline [49, 19]. Still, while brittleness is widespread, it is often hard to understand its root causes, or even to characterize the precise situations in which this behavior arises.
66
+
67
+ How do we then comprehensively diagnose model failure modes? Stakes are often too high to simply deploy models and collect "real-world" failure cases.
68
+
69
+ ![](images/b53a40e08fb141556ae6e97b6fe57b7764785028db7f4b45349a021c6151261e.jpg)
70
+ Figure 1: Examples of vulnerabilities of computer vision systems identified through prior in-depth robustness studies. Figures reproduced from [25, 5, 32, 38, 3, 18, 69, 52].
71
+
72
+ ![](images/fb3a6fd377b58d936015261cb3216a8125e300c733a42ed15b174501ab25dd8f.jpg)
73
+
74
+ ![](images/14e074ffb18e10513001f8bedf5a8093b39975db731ba3e3e4ab8e8daee12cab.jpg)
75
+
76
+ ![](images/5b3778bec055b575ca038f61ec4d3309e81a03d7b9b09bbf69f19d5665dc3d2b.jpg)
77
+
78
+ ![](images/534d0e4dc4e04a07674ef5f5b9be5fd6ef021f39a8e5157dcf6b2cbb23387378.jpg)
79
+ Figure 2: The 3DB framework is modular enough to facilitate—among other tasks—efficient rediscovery of all the types of brittleness shown in Figure 1. It also allows users to realistically compose transformations (right) while still being able to disentangle the results.
80
+
81
+ There has thus been a line of work in computer vision focused on identifying systematic sources of model failure such as unfamiliar object orientations [3], misleading backgrounds [74, 69], or shape-texture conflicts [25, 5]. These analyses—a selection of which is visualized in Figure 1—reveal patterns or situations that degrade the performance of vision models, providing invaluable insights into model robustness. Still, carrying out each such analysis requires its own set of (often complex) tools, usually accompanied by a significant amount of manual labor (e.g., image editing, style transfer), expertise, and data cleaning. This prompts the question:
82
+
83
+ Can we support reliable discovery of model failures in a systematic, automated, and unified way?
84
+
85
+ Contributions. In this work, we propose 3DB, a framework for automatically identifying and analyzing the failure modes of computer vision models. This framework makes use of a 3D simulator to render realistic scenes that can be fed into any computer vision system. Users can specify a set of transformations to apply to the scene—such as pose changes, background changes, or camera effects—and can also customize and compose them. The system then performs a guided search, evaluation, and aggregation over these user-specified configurations and presents the user with an interactive, user-friendly summary of the model's performance and vulnerabilities. 3DB is general enough to enable users to, with minimal effort, re-discover insights from prior work on pose, background, and texture bias (cf. Fig. 2), among others. Further, while prior studies have largely been focused on examining model sensitivities along a single axis, 3DB allows users to compose various transformations and understand the interplay between them, while still being able to disentangle their individual effects.
86
+
87
+ The remainder of this paper is structured into the following parts: in Section 2 we discuss the design of $3DB$ , including the motivating principles, design goals, and concrete architecture used. We highlight how the implementation of $3DB$ allows users to quickly experiment, stress-test, and analyze their vision models. Then, in Section 3 we illustrate the utility of $3DB$ through a series of case studies uncovering biases in an ImageNet-pretrained classifier. Finally, we show (in Section 4) that the vulnerabilities uncovered with $3DB$ correspond to actual failure modes in the physical world (i.e., they are not specific to simulation).
88
+
89
+ # 2 Designing 3DB
90
+
91
+ The goal of 3DB is to leverage photorealistic simulation to effectively diagnose failure modes of computer vision models. To this end, the following set of principles guide the design of 3DB:
92
+
93
+ Generality. 3DB should support any type of computer vision model (i.e., not necessarily a neural network) trained on any dataset and task (i.e., not necessarily classification). Furthermore, the framework should support diagnosing non-robustness with respect to any parameterizable three-dimensional scene transformation.
94
+
95
+ Compositionality. Corruptions and transformations rarely occur in isolation—3DB should allow users to investigate robustness along many different axes simultaneously.
96
+
97
+ Physical realism. The vulnerabilities extracted from 3DB should correspond to models' behavior in the real (physical) world, and, in particular, not depend on artifacts of the simulation process itself.
98
+
99
+ User-friendliness. 3DB should be simple to use and should relay insights to the user in an easy-to-understand manner. Even non-experts should be able to look at the result of a 3DB experiment and easily understand what the weak points of their model are, as well as gain insight into how the model behaves more generally.
100
+
101
+ Scalability. 3DB should be performant and parallel.
102
+
103
+ # 2.1 Capabilities and workflow
104
+
105
+ To achieve the goals articulated above, we design 3DB modularly, i.e., as a combination of swappable components. This combination allows the user to specify transformations they want to test, search over the space of these transformations, and aggregate the results of this search in a concise way. More specifically, the 3DB workflow revolves around five steps (visualized in Figure 3):
106
+
107
+ Setup. The user collects one or more 3D meshes that correspond to objects the model is trained to recognize, as well as a set of environments to test against.
108
+
109
+ Search space design. The user defines a search space by specifying a set of transformations (which 3DB calls controls) that they expect the computer vision model to be robust to (e.g., rotations, translations, zoom, etc.). Controls are grouped into "rendered controls" (applied during the rendering process) and "post-processor controls" (applied after the rendering as a 2D image transformation).
110
+
111
+ Policy-guided search. After the user has specified a set of controls, 3DB instantiates and renders a myriad of object configurations derived from compositions of the given transformations. It records the behavior of the ML model on each constructed scene for later analysis. A user-specified search policy over the space of all possible combinations of transformations determines the scenes for 3DB to render.
112
+
113
+ ![](images/5db3d27850b880eb1d9c3b10a5d008c32cf88182fdef28e7bdd51412434edaeb.jpg)
114
+ Figure 3: An overview of the 3DB workflow: First, the user specifies a set of 3D object models and environments to use for debugging. The user also enumerates a set of (in-built or custom) transformations, known as controls, to be applied by 3DB while rendering the scene. Based on a user-specified search policy over all these controls (and their compositions), 3DB then selects the exact scenes to render. The computer vision model is finally evaluated on these scenes and the results are logged in a user-friendly manner in a custom dashboard.
115
+
116
+ Model loading. The only remaining step before running a 3DB analysis is loading the model that the user wants to analyze (e.g., a pre-trained classifier or object detector).
117
+
118
+ Analysis and insight extraction. Finally, 3DB is equipped with a model dashboard (cf. Appendix C) that can read the generated log files and produce a user-friendly visualization of the generated insights. By default, the dashboard has three panels. The first of these is failure mode display, which highlights configurations, scenes, and transformations that caused the model to misbehave. The per-object analysis pane allows the user to inspect the model's performance on a specific 3D mesh (e.g., accuracy, robustness, and vulnerability to groups of transformations). Finally, the aggregate analysis pane extracts insights about the model's performance averaged over all the objects and environments collected and thus allows the user to notice consistent trends and vulnerabilities in their model.
119
+
120
+ Each of the aforementioned components (the controls, policy, renderer, inference module, and logger) is fully customizable and can be extended or replaced by the user without altering the core code of 3DB. For example, while 3DB supports more than 10 types of controls out-of-the-box, users can add custom ones (e.g., geometric transformations) by implementing an abstract function that maps a 3D state and a set of parameters to a new state. Similarly, 3DB supports debugging classification and object detection models by default, and by implementing a custom evaluator module, users can extend support to a wide variety of other vision tasks and models. We refer to Appendix B for more on 3DB design principles, implementation, and scalability.
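+ 
+ As a concrete illustration of the pieces described above, the sketch below shows what a user-defined control and a minimal random-search policy over its parameters could look like. The class layout, attribute names, and function signatures are assumptions made for exposition, not the actual 3DB interfaces (those are defined in the released code and documentation).
+ 
+ ```python
+ # Illustrative sketch (not the actual 3DB API): a "control" declares the
+ # parameters it exposes and maps a scene state plus parameter values to a new
+ # scene state; a search policy samples parameter settings, renders each
+ # resulting scene, and records the model's behavior.
+ import random
+ from dataclasses import dataclass, replace
+ 
+ @dataclass
+ class SceneState:
+     orientation: tuple = (0.0, 0.0, 0.0)  # (yaw, pitch, roll) in degrees
+     zoom: float = 1.0
+ 
+ class OrientationControl:
+     # Declared ranges let a policy enumerate or sample this control's parameters.
+     continuous_dims = {"yaw": (0.0, 360.0), "pitch": (-90.0, 90.0), "roll": (0.0, 360.0)}
+ 
+     def apply(self, state, yaw, pitch, roll):
+         """Return a new state with the object rotated; rendering happens downstream."""
+         return replace(state, orientation=(yaw, pitch, roll))
+ 
+ def random_search(control, render_fn, evaluate_fn, budget=1000):
+     """Simplest possible policy: uniform random search over the control's ranges.
+     `render_fn` and `evaluate_fn` are caller-provided (renderer and model)."""
+     results = []
+     for _ in range(budget):
+         params = {k: random.uniform(lo, hi) for k, (lo, hi) in control.continuous_dims.items()}
+         state = control.apply(SceneState(), **params)
+         results.append((params, evaluate_fn(render_fn(state))))  # e.g. predicted label
+     return results
+ ```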
121
+
122
+ # 3 Debugging and analyzing models with 3DB
123
+
124
+ In this section, we illustrate through case studies how to analyze and debug vision models with 3DB. In each case, we follow the workflow outlined in Section 2.1—importing the relevant objects, selecting the desired transformations (or constructing custom ones), selecting a search policy, and finally analyzing the results.
125
+
126
+ In all our experiments, we analyze a ResNet-18 [30] trained on the ImageNet [53] classification task (its validation set accuracy is $69.8\%$ ). Note that 3DB is classifier-agnostic (i.e., ResNet-18 can be replaced with any PyTorch classification module), and even supports object detection tasks. For our analysis, we collect 3D models for 16 ImageNet classes (see Appendix F for more details on each experiment). We ensure that in "clean" settings, i.e., when rendered in simple poses on a plain white background, the 3D models are correctly classified at a reasonable rate (cf. Table 1) by our pre-trained ResNet.
127
+
128
+ Table 1: Accuracy of a pre-trained ResNet-18, for each of the 16 ImageNet classes considered, on the corresponding 3D model we collected, rendered at an unchallenging pose on a white background ("Simulated" row); and the subset of the ImageNet validation set corresponding to the class ("ImageNet" row).
129
+
130
+ | | banana | baseball | bowl | drill | golf ball | hammer | lemon | mug |
+ |---|---|---|---|---|---|---|---|---|
+ | Simulated accuracy (%) | 96.8 | 100.0 | 17.5 | 63.3 | 95.0 | 65.6 | 100.0 | 13.4 |
+ | ImageNet accuracy (%) | 82.0 | 66.0 | 84.0 | 40.0 | 82.0 | 54.0 | 76.0 | 42.0 |
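+ 
+ For reference, the classifier under test can be any callable that maps a rendered image to a prediction; a standard torchvision ResNet-18 wrapped for single-image inference looks roughly as follows. The wrapper itself is a generic sketch rather than the specific inference module shipped with 3DB.
+ 
+ ```python
+ # Generic sketch: a pre-trained ImageNet ResNet-18 wrapped for inference on
+ # rendered images. Only the torchvision/PyTorch calls are standard; the
+ # wrapper shape is an assumption for illustration.
+ import torch
+ from torchvision import models, transforms
+ 
+ preprocess = transforms.Compose([
+     transforms.Resize(256),
+     transforms.CenterCrop(224),
+     transforms.ToTensor(),
+     transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ ])
+ model = models.resnet18(weights="IMAGENET1K_V1").eval()  # pretrained=True on older torchvision
+ 
+ @torch.no_grad()
+ def predict(pil_image):
+     """Return the predicted ImageNet class index for one rendered image."""
+     x = preprocess(pil_image).unsqueeze(0)  # add a batch dimension
+     return model(x).argmax(dim=1).item()
+ ```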
131
+
132
+ # 3.1 Sensitivity to image backgrounds
133
+
134
+ We begin our exploration by using 3DB to confirm ImageNet classifiers' reliance on background signal, as pinpointed by several recent in-depth studies [72, 74, 69]. Out-of-the-box, 3DB can render 3D models onto HDRI files using image-based lighting; we downloaded 408 such background environments from hdrihaven.com. We then used the pre-packaged "camera" and "orientation" controls to render (and evaluate our classifier on) scenes of the pre-collected 3D models at random poses, orientations, and scales on each background. Figure 4 shows random example scenes generated by 3DB for the "coffee mug" model.
135
+
136
+ Analyzing a subset of backgrounds. In Figure 6, we visualize the performance of a ResNet-18 classifier on the 3D models from 16 different ImageNet classes—in random positions, orientations, and scales—rendered onto 20 of the collected HDRI backgrounds.
137
+
138
+ ![](images/a24cf30358be9e6eeb8c1ece83e3de310e9254195d1bfffc82f651c0323c67a6.jpg)
139
+ bucket $(90.4\%)$
140
+
141
+ ![](images/8a5ccbc89179bd4b221c8b8887258b0733887993ae6a8648d32d56da510936ac.jpg)
142
+ coffee mug $(42.6\%)$
143
+
144
+ ![](images/69be4f3479217daf25fbf6d2e3d33d5b6f7f3f33934fd2ea9fe8b7052d14e4f7.jpg)
145
+ cup (15.2%)
146
+
147
+ ![](images/3c9267cf006370938628e1ff16afc8393288a6ce1bc63b09893e7c4c4323dbab.jpg)
148
+ plunger (14.3%)
149
+ Figure 4: Renderings of the mug 3D model in different environments, labeled with a pretrained model's top prediction.
150
+
151
+ ![](images/d1030fe21793e3a648a1e06596fce1bb7de89965f635725dcef3f4aadc6a3c8e.jpg)
152
+ coffeepot $(49.5\%)$
153
+
154
+ ![](images/6dc2c6acae6a96a492b0fd19e531f58e0c13f49ecddf51244c0368386a0fa9b0.jpg)
155
+ bucket $(61.9\%)$
156
+
157
+ ![](images/e4df666623d69ebc6763a3019db2e34604532a1d2d38e0b9b6d2c2a75660b752.jpg)
158
+
159
+ ![](images/4b31f4f8e2c6f5a36c6ad092d8757475bfe97af7a87c95c3bedc07672d241f16.jpg)
160
+
161
+ ![](images/257bd9fa689f01520f70ceac864fe07f352f1f82a095c45009961296640c2ba7.jpg)
162
+
163
+ ![](images/5385dc8d6ce83af4e679c33e6ae3d3ad785a023e5a868fddb7568152f49db6f9.jpg)
164
+ Figure 5: (Top) Best and (Bottom) worst background environments for classification of the coffee mug, and their respective accuracies (averaged over camera positions and zoom factors).
165
+
166
+ ![](images/6567a1b92f13406f982390a2a6fdb76f18350023036890906e26c1b9d04309eb.jpg)
167
+
168
+ ![](images/8acc69604b508440ae2eb481bfa9bdcbc9fa96a2158070ea4b1c8f91d5c1ae3e.jpg)
169
+
170
+ ![](images/f8f247d81499769428f4d5b31d0d63a0dd53d4bbaea9f4c5b0d58870f4030c73.jpg)
171
+ Figure 6: Visualization of accuracy on controls from Section 3.1. (Left) We compute the accuracy of the model conditioned on each object-environment pair. For each environment on the x-axis, we plot the variation in accuracy (over the set of possible objects) using a boxplot. We visualize the per-object accuracy spread by including the median line, the first and third quartiles box edges (the interval between which is called the inter-quartile range, IQR), the range, and the outliers (points that are outside the IQR by $\frac{3}{2} |IQR|$ ). (Right) Using the same format, we track how the classified object (x-axis) impacts variation in accuracy (over different environments) on the y-axis.
172
+
173
+ ![](images/3dd1f5b8b53ac819f9c9a6b17121cd74b94f5469a3f18a17915c0c24070a9db5.jpg)
174
+
175
+ One can observe that background dependence indeed varies widely across different objects—for example, the “orange” and “lemon” 3D models depend much more on background than the “tennis ball.” We also find that certain backgrounds yield systematically higher or lower accuracy; for example, average accuracy on “gray pier” is five times lower than that of “factory yard.”
176
+
177
+ Analyzing all backgrounds with the mug model. The previous study broadly characterizes the classifier's sensitivity to different models and environments. Now, to gain a deeper understanding of this sensitivity, we focus our analysis only on a single 3D model (a "coffee mug") rendered in all 408 environments. The highest-accuracy backgrounds had tags such as skies, field, and mountain, while the lowest-accuracy backgrounds had tags indoor, city, and building.
178
+
179
+ At first, this observation seems to be at odds with the idea that the classifier relies heavily on context clues to make decisions. After all, the backgrounds where the classifier seems to perform well (poorly) are places that we would expect a coffee mug to be rarely (frequently) present in the real world. Visualizing the best and worst backgrounds in terms of accuracy (Figure 5) suggests a possible explanation for this: the best backgrounds tend to be clean and distraction-free. Conversely, complicated backgrounds (e.g., some indoor scenes) often contain context clues that make the mug difficult for models to detect. Comparing a "background complexity" metric (based on the number of edges in the image) to accuracy (Figure 7) supports this explanation: mugs overlaid on more complex backgrounds are more frequently misclassified by the model. In fact, some specific backgrounds even result in the model "hallucinating" objects; for example, the second-most frequent predictions for the pond and sidewalk backgrounds were birdhouse and traffic light respectively, despite the fact that neither object is present in the environment.
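+ 
+ The edge-based complexity score used in Figure 7 below can be computed in a few lines; the sketch here uses OpenCV's Canny detector, which is one plausible realization rather than necessarily the exact filter used for the figure.
+ 
+ ```python
+ # One possible "background complexity" score: the mean pixel value of an edge
+ # map (cf. Figure 7's caption). The specific detector and thresholds are
+ # assumptions; the text only specifies "average pixel value after applying an
+ # edge detection filter".
+ import cv2
+ import numpy as np
+ 
+ def background_complexity(image_path: str) -> float:
+     gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
+     edges = cv2.Canny(gray, 100, 200)     # binary edge map with values 0 or 255
+     return float(np.mean(edges) / 255.0)  # fraction of edge pixels, in [0, 1]
+ ```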
180
+
181
+ Zoom/background interactions case study: the advantage of composable controls. Finally, we leverage 3DB's composability to study interactions between controls. In Figure 8, we plot the mean classification accuracy of our "orange" model while varying background and scale factor.
182
+
183
+ ![](images/fae413d3d7ed1f7d9b56c875d0ca02dda88b95c7d8744e821bcbc1a544902e36.jpg)
184
+ Figure 7: Relation between the complexity of a background and its average accuracy. Here complexity is defined as the average pixel value of the image after applying an edge detection filter.
185
+
186
+ ![](images/56208cbacb541bbc05c6dc05a82c28fa1f5fbb6e523b8fc41bab2cf868f3a7d1.jpg)
187
+ Figure 8: 3DB's focus on composability enables us to study robustness along multiple axes simultaneously. Here we study average model accuracy (computed over pose randomization) as a function of both zoom level and background.
188
+
189
+ We, for example, find that while the model is highly accurate at classifying "orange" at $2 \times$ zoom, the same zoom factor induces failure in a well-lit mountainous environment ("kiara late-afternoon")—a fine-grained failure mode that we would not catch without explicitly capturing the interaction between background choice and zoom.
190
+
191
+ # 3.2 Texture-shape bias
192
+
193
+ ![](images/b646755803918f749230f203c4acad79fbc3a6fcffae4a763e9d03af7244000f.jpg)
194
+ (a) Texture image $81.4\%$ Indian elephant
195
+
196
+ ![](images/1846590ae52b0ee22a98297cdd660516ec61f0147127a57e4fb1becd507c3d9d.jpg)
197
+ (b) Content image 71.1% tabby cat
198
+
199
+ ![](images/47d906afbc8880425191f7e8313439110954835d70db1ecd0fdf897a785af000.jpg)
200
+ (c) Texture-shape cue conflict $63.9\%$ Indian elephant
201
+
202
+ ![](images/0e8353b142982cff7322a1a063594871ba3e0bfb5fe7784bead7105054dc3d77.jpg)
203
+ Figure 9: Cue-conflict images generated by Geirhos et al. [25] (top) and 3DB (bottom).
204
+
205
+ ![](images/bee0710557b3956f4fd90a469a4d37daa6853ac3de7d24586f278e3971690372.jpg)
206
+
207
+ ![](images/7c0a987e9715326f92c9927e0fd84912b3c94a954ceb5c5a542cb4899d5302b8.jpg)
208
+
209
+ ![](images/d03a6b77d143f725fec5a0414b2933560a19a139d713e99decb99e9f36cfb599.jpg)
210
+ Figure 10: Model accuracy on previously correctly-classified images after their texture is altered via $3DB$ , as a function of texture-type.
211
+
212
+ We now demonstrate how $3DB$ can be straightforwardly extended to discover more complex failure modes in computer vision models. Specifically, we will show how to rediscover the "texture bias" exhibited by ImageNet-trained convolutional neural networks (CNNs) [25] in a systematic and (near) photorealistic way. Geirhos et al. [25] fuse pairs of images—combining texture information from one with shape and edge information from the other—to create so-called "cue-conflict" images. They then demonstrate that on these images (cf. Figure 9), ImageNet-trained CNNs typically predict the class corresponding to the texture component, while humans typically predict based on shape.
213
+
214
+ Cue-conflict images identify a concrete difference between human and CNN decision mechanisms. However, the fused images are unrealistic and can be cumbersome to generate (e.g., even the simplest approach uses style transfer [24]). 3DB gives us an opportunity to rediscover the influence of texture in a more streamlined fashion.
215
+
216
+ Specifically, we implement a control (now pre-packaged with 3DB) that replaces an object's texture with a random (or user-specified) one.
217
+
218
+ ![](images/122882e6c662aaebc91d5df3ed41d75c7dc74c1fb99468f6553060a844ff1514.jpg)
219
+ Figure 11: (Left) We compute the accuracy of the model for each object-orientation pair. For each object on the x-axis, we plot the variation in accuracy (over the set of possible orientations) using a boxplot. We visualize the per-orientation accuracy spread by including the median line, the first and third quartiles box edges, the range, and the outliers. (Right) Using the same format as the left hand plot, we plot how the classified object (on the x-axis) impacts variation in accuracy (over different zoom values) on the y-axis.
220
+
221
+ ![](images/409a55ab77951c7bf1b0f2199430908000291c648692765b7c5bdc45775b4482.jpg)
222
+
223
+ We use this control to create cue-conflict objects out of eight 3D models $^5$ and seven animal-skin texture images $^6$ (i.e., 56 objects in total). We test our pre-trained ResNet-18 on images of these objects rendered in a variety of poses and camera locations. Figure 9 displays sample cue-conflict images generated using 3DB.
224
+
225
+ Our study confirms the findings of Geirhos et al. [25] and indicates that texture bias indeed extends to (near-)realistic settings. For images that were originally correctly classified (i.e., when rendered with the original texture), changing the texture reduced accuracy by $90 - 95\%$ uniformly across textures (Figure 10). Furthermore, we observe that the model predictions usually align better with the texture of the objects than with their geometry (see Figure 21 in the Appendix).
226
+
227
+ # 3.3 Orientation and scale dependence
228
+
229
+ Image classification models are brittle to object orientation in both real and simulated settings [37, 18, 6, 3]. As was the case for both background and texture sensitivity, reproducing and extending such observations is straightforward with $3DB$ . Once again, we use the built-in controls to render objects at varying poses, orientations, scales, and environments before stratifying on properties of interest. Indeed, we find that classification accuracy is highly dependent on object orientation (Figure 11 left) and scale (Figure 11 right). However, this dependence is not uniform across objects. As one would expect, the classifier's accuracy is less sensitive to orientation on more symmetric objects (like "tennis ball" or "baseball"), but can vary widely on more uneven objects (like "drill").
230
+
231
+ For a more fine-grained look at the importance of object orientation, we can measure the classifier accuracy conditioned on a given part of each 3D model being visible. This analysis is once again straightforward in $3DB$ , since each rendering is (optionally) accompanied by a UV map which maps pixels in the scene back to locations on the object surface. Combining these UV maps with accuracy data allows one to construct the "accuracy heatmaps" shown in Figure 12, wherein each part of an object's surface corresponds to classifier accuracy on renderings in which the part is visible. The results confirm that atypical viewpoints adversely impact model performance, and also allow users to draw up a variety of testable hypotheses regarding performance on specific 3D models (e.g., for the coffee mug, the bottom rim is highlighted in red—is it the case that mugs are more accurately classified when viewed from the bottom?).
232
+
233
+ ![](images/416b135bf77c7423cb9b39b9b4bb1a6cf25a838995964ad38cbe126b6f6ecb55.jpg)
234
+
235
+ ![](images/4143885d87707c8449c3cbc7469862fc17d8d71bd71271ed95ea0a2666b5410a.jpg)
236
+ Figure 12: Model sensitivity to pose. The heatmaps denote the accuracy of the model in predicting the correct label, conditioned on a specific part of the object being visible in the image. Here, red and blue denote high and low accuracy, respectively.
237
+
238
+ ![](images/d741bcef0e7ab43e54348eef11bd9fe6106a479d747d6973fc5a2963da6fd926.jpg)
239
+
240
+ ![](images/c505004e520c4ba1598b38da96a628eaf9701f52b5ae38aba61286d63184be64.jpg)
241
+
242
+ ![](images/66c00da7482e964dc1d81249af0fd8e1ea844e47281c2f8392486bd1fd5cc4cf.jpg)
243
+
244
+ ![](images/10e52561f7a2fba9f77533f7ccbbfd1aa41b6824a33e464307bacaf1aedaa671.jpg)
245
+ (a)
246
+
247
+ ![](images/a736c89809140a078a26ccbe92ca32ddc398a648342c91c99eafcc68424ad220.jpg)
248
+ (b)
249
+ Figure 13: Testing classifier sensitivity to context: Figure (a) shows the correlation between the liquid mixture in the mug and the model's prediction, averaged over random viewpoints (see Figure 20b for the raw frequencies). Figure (b) shows that for a fixed viewpoint, model predictions are unstable with respect to the liquid. Figure (c) shows examples of rendered liquids (water, black coffee, milk, and mixtures).
250
+
251
+ ![](images/79f4a9be3fab842510b3409fde212500e5a6877c675ee165b811b45c38c62d64.jpg)
252
+ (c)
253
+
254
+ These hypotheses can then be investigated further through natural data collection, or—as we discuss in the upcoming section—through additional experimentation with 3DB.
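+ 
+ The aggregation behind the accuracy heatmaps of Figure 12 reduces to scattering per-rendering correctness into texture space via each rendering's UV map. The numpy sketch below illustrates one way to do this; the assumed UV-map array layout is ours, not a documented 3DB format.
+ 
+ ```python
+ # Sketch: accumulate per-rendering correctness into a UV-space accuracy
+ # heatmap. Assumed layout: uv_map is (H, W, 2) with u, v in [0, 1] where the
+ # object is visible and NaN elsewhere; `correct` says whether the model
+ # classified that rendering correctly.
+ import numpy as np
+ 
+ RES = 64                        # heatmap resolution in texture space
+ hits = np.zeros((RES, RES))     # texel visible and rendering classified correctly
+ visible = np.zeros((RES, RES))  # texel visible at all
+ 
+ def accumulate(uv_map: np.ndarray, correct: bool) -> None:
+     mask = ~np.isnan(uv_map[..., 0])
+     u = np.clip((uv_map[..., 0][mask] * RES).astype(int), 0, RES - 1)
+     v = np.clip((uv_map[..., 1][mask] * RES).astype(int), 0, RES - 1)
+     texels = np.unique(np.stack([u, v], axis=1), axis=0)  # each texel once per image
+     visible[texels[:, 0], texels[:, 1]] += 1
+     if correct:
+         hits[texels[:, 0], texels[:, 1]] += 1
+ 
+ # After all renderings: accuracy_heatmap = hits / np.maximum(visible, 1)
+ ```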
255
+
256
+ # 3.4 Case study: using 3DB to dive deeper
257
+
258
+ Our heatmap analysis in the previous section (cf. Figure 12) showed that classification accuracy for the mug decreases when its interior is visible. What could be causing this effect? One hypothesis is that in the ImageNet training set, objects are captured in context, and thus ImageNet-trained classifiers rely on this context to make decisions. Inspecting the ImageNet dataset, we notice that coffee mugs in context usually contain coffee. Thus, the aforementioned hypothesis would suggest that the model relies, at least partially, on the contents of the mug to correctly classify it. Can we leverage 3DB to confirm or refute this hypothesis?
259
+
260
+ To test this, we implement a custom control that can render a liquid inside the "coffee mug" model. Specifically, this control takes water:milk:coffee ratios as parameters, then uses a parametric Blender shader (cf. Appendix G) to render a corresponding mixture of the liquids into the mug. We used the pre-packaged grid search policy, (programmatically) restricting the search space to viewpoints from which the interior of the mug was visible.
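+ 
+ For intuition, the grid over water:milk:coffee ratios can be produced by enumerating integer subdivisions of the simplex, as in the short helper below; the actual control consumes such ratios as shader parameters, which we do not reproduce here.
+ 
+ ```python
+ # Illustrative sketch: enumerate water:milk:coffee mixtures on a simplex grid.
+ # Each triple sums to 1; a (hypothetical) liquid control would map it to the
+ # parameters of the mug-content shader.
+ def mixture_grid(steps: int = 4):
+     """Yield (water, milk, coffee) ratio triples with `steps` subdivisions."""
+     for w in range(steps + 1):
+         for m in range(steps + 1 - w):
+             c = steps - w - m
+             yield (w / steps, m / steps, c / steps)
+ 
+ # steps=4 yields 15 mixtures, including pure water, pure milk, and pure coffee.
+ ```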
261
+
262
+ The results of the experiment are shown in Figure 13. It turns out that the model is indeed sensitive to changes in liquid, supporting our hypothesis: model predictions stayed constant (over all liquids) for only $20.7\%$ of the rendered viewpoints (cf. Figure 13b). The 3DB experiment provides further support for the hypothesis when we look at the correlation between the liquid mixture and the predicted class: Figure 13a visualizes this correlation in a normalized heatmap (for the unnormalized version, see Figure 20b in the Appendix G). We find that the model is most likely to predict "coffee mug" when coffee is added to the interior (unsurprisingly); as the coffee is mixed with water or milk, the predicted label distribution shifts towards "bucket" and "cup" or "pill bottle," respectively. Overall, our experiment suggests that current ResNet-18 classifiers are indeed sensitive to object context—in this case, the fluid composition of the mug interior. More broadly, this illustration highlights how a system designer can quickly go from hypothesis to empirical verification with minimal effort using 3DB. (In fact, going from the hypothesis to Figure 13 took less than a day of work for one author.)
263
+
264
+ # 4 Physical realism
265
+
266
+ The previous sections have demonstrated various ways in which we can use 3DB to obtain insights into model behavior in simulation. Our overarching goal, however, is to understand when models will fail in the physical world. Thus, we would like for the insights extracted by 3DB to correspond to naturally-arising model behavior, and not just artifacts of the simulation itself. To this end, we now test the physical realism of 3DB: can we understand model performance (and uncover vulnerabilities) on real photos using only a high-fidelity simulation?
267
+
268
+ To answer this question, we collected a set of physical objects corresponding to 3D models, and set up a physical room with a corresponding 3D environment. We used 3DB to identify strong points and vulnerabilities of an ImageNet classifier in this environment, mirroring our methodology from Section 3.
269
+
270
+ ![](images/5a308dec7ea58a029cc3fbe2413ccca3fcc904867aa45b00ff762f73a29fc4dd.jpg)
271
+ Figure 14: (Top) Agreement, in terms of model correctness, between model predictions within 3DB and model predictions in the real world. For each object, we selected five rendered scenes found by 3DB that were misclassified in simulation, and five that were correctly classified; we recreated each scene in the physical world and deployed the model on it. The positive (resp., negative) predictive value is the rate at which correctly (resp., incorrectly) classified examples in simulation were also correctly (resp., incorrectly) classified in the physical world. (Bottom) Comparison between example simulated scenes generated by 3DB (first row) and their recreated physical counterparts (second row). Border color indicates whether the model was correct on this specific image.
272
+
273
+ We recreated each scenario found by $3DB$ in the physical room, and took photographs that matched the simulation as closely as possible. Finally, we evaluated the physical realism of $3DB$ by comparing models' performance on the photos to what $3DB$ predicted.
274
+
275
+ Setup. We used a studio room shown in Appendix Figure 18b for which we obtained a fairly accurate 3D model (cf. Appendix Figure 18a). We leverage the YCB [13] dataset to guide our selection of real-world objects, for which 3D models are available. We supplement these by sourcing additional objects (from amazon.com) and using a 3D scanner to obtain corresponding meshes.
276
+
277
+ We next used 3DB to analyze the performance of a pre-trained ImageNet ResNet-18 on the collected objects in simulation, varying over a set of realistic object poses, locations, and orientations. For each object, we selected 10 rendered situations: five where the model made the correct prediction, and five where the model predicted incorrectly. We then tried to recreate each rendering in the physical world. First we roughly placed the main object in the location and orientation specified in the rendering, then we used a custom-built iOS application (see Appendix D) to more precisely match the rendering with the physical setup.
278
+
279
+ Results. Figure 14 visualizes a few samples of renderings with their recreated physical counterparts, annotated with model correctness. Overall, we found an $85\%$ agreement rate between the model's correctness on the real photos and the synthetic renderings—agreement rates per class are shown in Figure 14. Thus, despite imperfections in our physical reconstructions, the vulnerabilities identified by 3DB turned out to be physically realizable vulnerabilities (and conversely, the positive examples found by 3DB are usually also classified correctly in the real world). We found that objects with simpler/non-metallic materials (e.g., the bowl, mug, and sandal) tended to be more reliable than metallic objects such as the hammer and drill. It is thus possible that more precise texture tuning of the 3D models could increase agreement further (although a more comprehensive study would be needed to verify this).
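+ 
+ Given paired correctness labels for each simulated scene and its physical recreation, the agreement statistics reported in Figure 14 reduce to simple counting, as in the following self-contained sketch (variable names are ours).
+ 
+ ```python
+ # Sketch: agreement rate, positive predictive value (PPV), and negative
+ # predictive value (NPV) between simulation and the physical recreation.
+ # sim_correct[i] / real_correct[i]: was the model correct on scene i?
+ def agreement_stats(sim_correct, real_correct):
+     pairs = list(zip(sim_correct, real_correct))
+     agreement = sum(s == r for s, r in pairs) / len(pairs)
+     pos = [r for s, r in pairs if s]           # correct in simulation
+     neg = [not r for s, r in pairs if not s]   # incorrect in simulation
+     ppv = sum(pos) / len(pos) if pos else float("nan")
+     npv = sum(neg) / len(neg) if neg else float("nan")
+     return agreement, ppv, npv
+ ```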
280
+
281
+ # 5 Related work
282
+
283
+ In this section, we give a brief overview of existing work in robustness, interpretability, and simulation that provide the context for our work. We refer the reader to Appendix A for a detailed discussion of prior work.
284
+
285
+ Model Robustness. The brittleness of current ML models has drawn attention to analyzing their robustness and reliability. A long line of research focuses on analyzing model robustness to adversarial examples [61, 20, 70, 21, 12, 5, 68, 47, 44].
286
+
287
+ Another line of research involves analyzing robustness to non-adversarial corruptions [18, 25, 32, 38, 74, 69, 23, 52]. A line of research more closely related to ours analyzes the impact of factors such as object pose and geometry by applying synthetic perturbations in three-dimensional space [28, 57, 29, 3, 35].
288
+
289
+ Interpretability and model debugging. 3DB can be cast as a method for debugging vision models that provides users with fine-grained control over the rendered scenes and thus enables them to find specific modes of failure (cf. Sections 3 and 4). Model debugging is also a common goal in interpretability, where methods generally seek to provide justification for model decisions based on either local features (e.g., saliency maps) [58, 14, 60, 50, 22, 74, 27] or global ones (i.e., general biases of the model) [7, 41, 71, 63].
290
+
291
+ Simulated environments. Finally, there has been a long line of work on developing simulation platforms as a source of additional training data [11, 8, 36, 73, 15, 31, 51, 55, 59, 17, 42, 64, 48, 65, 66, 67, 54]. 3DB shares some components with many of these works (e.g., a rendering engine), but has a very different goal and set of applications, i.e., diagnosing specific failures in existing models.
292
+
293
+ # 6 Conclusion
294
+
295
+ In this work, we introduced 3DB, a unified framework for diagnosing failure modes in vision models based on high-fidelity rendering. We demonstrate the utility of 3DB by applying it to a number of model debugging use cases—such as understanding classifier sensitivities to realistic scene and object perturbations, and discovering model biases. Further, we show that the debugging analysis done using 3DB in simulation is actually predictive of model behavior in the physical world. Finally, we note that 3DB was designed with extensibility as a priority; we encourage the community to build upon the framework so as to uncover new insights into the vulnerabilities of vision models.
296
+
297
+ Limitations. One limitation of 3DB is the need for high-quality 3D models for objects of interest in order to achieve photorealistic images. This requires 3D model artists and/or effective photogrammetry techniques. Additionally, creating fully realistic scenes may require more complexity than just combining a single object with a background, which is what we focus on in this paper. 3DB does support multiple objects, and the user can programmatically specify how different objects are located relative to each other; we hope to explore this more in the future.
298
+
299
+ # Acknowledgements
300
+
301
+ Work supported in part by the NSF grants CCF-1553428 and CNS-1815221, the Google PhD Fellowship, the Open Philanthropy Project AI Fellowship, the NDSEG PhD Fellowship, and the Microsoft Corporation. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0015.
302
+
303
+ Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
304
+
305
+ # References
306
+
307
+ [1] Julius Adebayo et al. "Sanity checks for saliency maps". In: Neural Information Processing Systems (NeurIPS). 2018.
308
+ [2] Julius Adebayo et al. "Debugging Tests for Model Explanations". In: 2020.
309
+ [3] Michael A Alcorn et al. "Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects". In: Conference on Computer Vision and Pattern Recognition (CVPR). 2019.
310
+ [4] David Alvarez-Melis and Tommi S Jaakkola. "On the robustness of interpretability methods". In: arXiv preprint arXiv:1806.08049 (2018).
311
+ [5] Anish Athalye et al. "Synthesizing Robust Adversarial Examples". In: International Conference on Machine Learning (ICML). 2018.
312
+ [6] Andrei Barbu et al. "ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models". In: Neural Information Processing Systems (NeurIPS). 2019.
313
+ [7] David Bau et al. "Network dissection: Quantifying interpretability of deep visual representations". In: Computer Vision and Pattern Recognition (CVPR). 2017.
314
+ [8] Charles Beattie et al. “Deepmind lab”. In: arXiv preprint arXiv:1612.03801 (2016).
315
+ [9] Harkirat Singh Behl et al. "Autosimulate: (quickly) learning synthetic data generation". In: European Conference on Computer Vision. Springer. 2020, pp. 255-271.
316
+ [10] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation. Stichting Blender Foundation, Amsterdam, 2020. URL: http://www.blender.org.
317
+ [11] Greg Brockman et al. "Openai gym". In: arXiv preprint arXiv:1606.01540 (2016).
318
+ [12] Tom B. Brown et al. Adversarial Patch. 2018. arXiv: 1712.09665 [cs.CV].
319
+ [13] Berk Calli et al. "Benchmarking in manipulation research: The YCB object and model set and benchmarking protocols". In: arXiv preprint arXiv:1502.03143 (2015).
320
+ [14] Piotr Dabkowski and Yarin Gal. "Real time image saliency for black box classifiers". In: Neural Information Processing Systems (NeurIPS). 2017.
321
+ [15] Maximilian Denninger et al. "BlenderProc". In: arXiv preprint arXiv:1911.01911 (2019).
322
+ [16] Jeevan Devaranjan, Amlan Kar, and Sanja Fidler. "Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation". In: European Conference on Computer Vision. Springer. 2020, pp. 715-733.
323
+ [17] Alexey Dosovitskiy et al. "CARLA: An open urban driving simulator". In: arXiv preprint arXiv:1711.03938 (2017).
324
+ [18] Logan Engstrom et al. "Exploring the Landscape of Spatial Robustness". In: International Conference on Machine Learning (ICML). 2019.
325
+ [19] Logan Engstrom et al. "Identifying Statistical Bias in Dataset Replication". In: International Conference on Machine Learning (ICML). 2020.
326
+ [20] Kevin Eykholt et al. "Physical Adversarial Examples for Object Detectors". In: CoRR (2018).
327
+ [21] Volker Fischer et al. "Adversarial examples for semantic image segmentation". In: arXiv preprint arXiv:1703.01101. 2017.
328
+ [22] Ruth C Fong and Andrea Vedaldi. "Interpretable explanations of black boxes by meaningful perturbation". In: International Conference on Computer Vision (ICCV). 2017.
329
+ [23] Nic Ford et al. "Adversarial Examples Are a Natural Consequence of Test Error in Noise". In: arXiv preprint arXiv:1901.10513. 2019.
330
+ [24] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. "Image style transfer using convolutional neural networks". In: computer vision and pattern recognition (CVPR). 2016.
331
+ [25] Robert Geirhos et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." In: International Conference on Learning Representations (ICLR). 2019.
332
+ [26] Amirata Ghorbani, Abubakar Abid, and James Zou. "Interpretation of neural networks is fragile". In: AAAI Conference on Artificial Intelligence (AAAI). 2019.
333
+ [27] Yash Goyal et al. "Counterfactual visual explanations". In: arXiv preprint arXiv:1904.07451 (2019).
334
+
335
+ [28] Abdullah Hamdi and Bernard Ghanem. "Towards Analyzing Semantic Robustness of Deep Neural Networks". In: arXiv preprint arXiv:1904.04621 (2019).
336
+ [29] Abdullah Hamdi, Matthias Muller, and Bernard Ghanem. "SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications". In: arXiv preprint arXiv:1812.02132 (2018).
337
+ [30] Kaiming He et al. Deep Residual Learning for Image Recognition. 2015.
338
+ [31] Christoph Heindl et al. "BlendTorch: A Real-Time, Adaptive Domain Randomization Library". In: arXiv preprint arXiv:2010.11696 (2020).
339
+ [32] Dan Hendrycks and Thomas G. Dietterich. "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations". In: International Conference on Learning Representations (ICLR). 2019.
340
+ [33] Dan Hendrycks et al. "Natural adversarial examples". In: arXiv preprint arXiv:1907.07174 (2019).
341
+ [34] Sandy Huang et al. "Adversarial Attacks on Neural Network Policies". In: arXiv preprint arXiv:1702.02284. 2017.
342
+ [35] Lakshya Jain et al. "Analyzing and Improving Neural Networks by Generating Semantic Counterexamples through Differentiable Rendering". In: arXiv preprint arXiv:1910.00727 (2020).
343
+ [36] Arthur Juliani et al. Unity: A General Platform for Intelligent Agents. 2020. arXiv: 1809.02627 [cs.LG].
344
+ [37] Can Kanbak, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. "Geometric robustness of deep networks: analysis and improvement". In: Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
345
+ [38] Daniel Kang et al. "Testing Robustness Against Unforeseen Adversaries". In: arXiv preprint arXiv:1908.08016. 2019.
346
+ [39] Amlan Kar et al. "Meta-sim: Learning to generate synthetic datasets". In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019, pp. 4551-4560.
347
+ [40] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. "Neural 3D Mesh Renderer". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
348
+ [41] Been Kim et al. "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)". In: International conference on machine learning (ICML). 2018.
349
+ [42] Eric Kolve et al. "Ai2-thor: An interactive 3d environment for visual ai". In: arXiv preprint arXiv:1712.05474 (2017).
350
+ [43] Jernej Kos, Ian Fischer, and Dawn Song. "Adversarial examples for generative models". In: IEEE Security and Privacy Workshops (SPW). 2018.
351
+ [44] Juncheng Li, Frank R. Schmidt, and J. Zico Kolter. "Adversarial camera stickers: A physical camera-based attack on deep learning systems". In: Arxiv preprint arXiv:1904.00759. 2019.
352
+ [45] Tzu-Mao Li et al. "Differentiable Monte Carlo Ray Tracing through Edge Sampling". In: SIGGRAPH Asia 2018 Technical Papers. 2018.
353
+ [46] Zachary C Lipton. "The Myth of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery." In: (2018).
354
+ [47] Hsueh-Ti Derek Liu et al. "Beyond Pixel Norm-Balls: Parametric Adversaries Using An Analytically Differentiable Renderer". In: International Conference on Learning Representations (ICLR). 2019.
355
+ [48] Xavier Puig et al. "Virtualhome: Simulating household activities via programs". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
356
+ [49] Benjamin Recht et al. "Do ImageNet Classifiers Generalize to ImageNet?" In: International Conference on Machine Learning (ICML). 2019.
357
+ [50] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier". In: International Conference on Knowledge Discovery and Data Mining (KDD). 2016.
358
+ [51] Mike Roberts and Nathan Paczan. Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding. arXiv 2020.
359
+ [52] Amir Rosenfeld, Richard Zemel, and John K. Tsotsos. "The Elephant in the Room". In: arXiv preprint arXiv:1808.03305. 2018.
360
+
361
+ [53] Olga Russakovsky et al. "ImageNet Large Scale Visual Recognition Challenge". In: International Journal of Computer Vision (IJCV). 2015.
362
+ [54] Manolis Savva et al. "Habitat: A platform for embodied ai research". In: Proceedings of the IEEE International Conference on Computer Vision. 2019.
363
+ [55] Shital Shah et al. "Airsim: High-fidelity visual and physical simulation for autonomous vehicles". In: Field and service robotics. Springer. 2018, pp. 621-635.
364
+ [56] Vaishaal Shankar et al. "Do Image Classifiers Generalize Across Time?" In: arXiv preprint arXiv:1906.02168 (2019).
365
+ [57] Michelle Shu et al. "Identifying Model Weakness with Adversarial Examiner". In: AAAI Conference on Artificial Intelligence (AAAI). 2020.
366
+ [58] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep inside convolutional networks: Visualising image classification models and saliency maps". In: arXiv preprint arXiv:1312.6034 (2013).
367
+ [59] Yunlong Song et al. "Flightmare: A Flexible Quadrotor Simulator". In: arXiv preprint arXiv:2009.00563 (2020).
368
+ [60] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks". In: International Conference on Machine Learning (ICML). 2017.
369
+ [61] Christian Szegedy et al. "Intriguing properties of neural networks". In: International Conference on Learning Representations (ICLR). 2014.
370
+ [62] Antonio Torralba and Alexei A Efros. "Unbiased look at dataset bias". In: CVPR 2011. 2011.
371
+ [63] Eric Wong, Shibani Santurkar, and Aleksander Madry. "Leveraging Sparse Linear Layers for Debuggable Deep Networks". In: International Conference on Machine Learning (ICML). 2021.
372
+ [64] Yi Wu et al. "Building generalizable agents with a realistic and rich 3d environment". In: arXiv preprint arXiv:1801.02209 (2018).
373
+ [65] Fei Xia et al. "Gibson env: Real-world perception for embodied agents". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
374
+ [66] Fei Xia et al. "Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments". In: IEEE Robotics and Automation Letters (2020).
375
+ [67] Fanbo Xiang et al. "SAPIEN: A simulated part-based interactive environment". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
376
+ [68] Chaowei Xiao et al. "MeshAdv: Adversarial Meshes for Visual Recognition". In: Computer Vision and Pattern Recognition (CVPR). 2019.
377
+ [69] Kai Xiao et al. "Noise or signal: The role of image backgrounds in object recognition". In: arXiv preprint arXiv:2006.09994 (2020).
378
+ [70] Cihang Xie et al. "Adversarial examples for semantic segmentation and object detection". In: Proceedings of the IEEE International Conference on Computer Vision. 2017, pp. 1369-1378.
379
+ [71] Chih-Kuan Yeh et al. "On Completeness-aware Concept-Based Explanations in Deep Neural Networks". In: Advances in Neural Information Processing Systems (NeurIPS) (2020).
380
+ [72] Jianguo Zhang et al. "Local features and kernels for classification of texture and object categories: A comprehensive study". In: International journal of computer vision. 2007.
381
+ [73] Yuke Zhu et al. "robosuite: A modular simulation framework and benchmark for robot learning". In: arXiv preprint arXiv:2009.12293 (2020).
382
+ [74] Zhuotun Zhu, Lingxi Xie, and Alan Yuille. "Object Recognition with and without Objects". In: International Joint Conference on Artificial Intelligence. 2017.
383
+
384
+ # Checklist
385
+
386
+ 1. For all authors...
387
+
388
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
389
+ (b) Did you describe the limitations of your work? [Yes] See conclusion.
390
+ (c) Did you discuss any potential negative societal impacts of your work? [Yes] See conclusion.
391
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
392
+
393
+ 2. If you are including theoretical results...
394
+
395
+ (a) Did you state the full set of assumptions of all theoretical results? [N/A]
396
+ (b) Did you include complete proofs of all theoretical results? [N/A]
397
+
398
+ 3. If you ran experiments...
399
+
400
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplementary material.
401
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
402
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] See Appendix F.
403
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix B.
404
+
405
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
406
+
407
+ (a) If your work uses existing assets, did you cite the creators? [Yes]
408
+ (b) Did you mention the license of the assets? [N/A]
409
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
410
+
411
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
412
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
413
+
414
+ 5. If you used crowdsourcing or conducted research with human subjects...
415
+
416
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
417
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
418
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
3dbaframeworkfordebuggingcomputervisionmodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cddaca078e0cfc68cfde934b77b7aa83d5974aa5882ef40420df1e42282daf8
3
+ size 494101
3dbaframeworkfordebuggingcomputervisionmodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:776525be2654f64598a43ca2c6026fe9a30d8b13a2aa982262c4d033e362ca98
3
+ size 454635
3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ce81f8daaa33e5e768e6e7189c3faf12f28e7d2b2d91c1948d809c691cd818cb
3
+ size 101002
3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bc23f631c1acf24659a0a5d748efaa4f9f55dedd88b3319c9cfff9867de5b87
3
+ size 122728
3dconceptgroundingonneuralfields/71b2d5b1-3719-44fc-8a2c-7914d08740a4_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a12acb3e5f81366b288984065c2628674b5febac9781eaf9cc913e16ed40b5b
3
+ size 5878715
3dconceptgroundingonneuralfields/full.md ADDED
@@ -0,0 +1,493 @@
 
 
 
 
1
+ # 3D Concept Grounding on Neural Fields
2
+
3
+ Yining Hong
4
+
5
+ University of California, Los Angeles
6
+
7
+ Yilun Du
8
+
9
+ Massachusetts Institute of Technology
10
+
11
+ Chunru Lin
12
+
13
+ Shanghai Jiao Tong University
14
+
15
+ Joshua B. Tenenbaum
16
+
17
+ MIT BCS, CBMM, CSAIL
18
+
19
+ Chuang Gan
20
+
21
+ UMass Amherst and MIT-IBM Watson AI Lab
22
+
23
+ # Abstract
24
+
25
+ In this paper, we address the challenging problem of 3D concept grounding (i.e. segmenting and learning visual concepts) by looking at RGBD images and reasoning about paired questions and answers. Existing visual reasoning approaches typically utilize supervised methods to extract 2D segmentation masks on which concepts are grounded. In contrast, humans are capable of grounding concepts on the underlying 3D representation of images. However, traditionally inferred 3D representations (e.g., point clouds, voxelgrids and meshes) cannot capture continuous 3D features flexibly, thus making it challenging to ground concepts to 3D regions based on the language description of the object being referred to. To address both issues, we propose to leverage the continuous, differentiable nature of neural fields to segment and learn concepts. Specifically, each 3D coordinate in a scene is represented as a high dimensional descriptor. Concept grounding can then be performed by computing the similarity between the descriptor vector of a 3D coordinate and the vector embedding of a language concept, which enables segmentations and concept learning to be jointly learned on neural fields in a differentiable fashion. As a result, both 3D semantic and instance segmentations can emerge directly from question answering supervision using a set of defined neural operators on top of neural fields (e.g., filtering and counting). Experimental results on our collected PARTNET-REASONING dataset show that our proposed framework outperforms unsupervised / language-mediated segmentation models on semantic and instance segmentation tasks, as well as outperforms existing models on the challenging 3D aware visual reasoning tasks. Furthermore, our framework can generalize well to unseen shape categories and real scans*.
26
+
27
+ # 1 Introduction
28
+
29
+ Visual reasoning, the ability to utilize compositional operators to perform complex visual question answering tasks (e.g., counting, comparing and logical reasoning), has become a challenging problem in recent years, since such tasks go beyond pattern recognition and bias exploitation [19, 22, 2]. Consider an image of a table pictured in Figure 1. We wish to construct a method that is able to accurately ground concepts and reason about the image, such as counting the number of legs the pictured table has. Existing works typically address this problem by utilizing a supervised segmentation model for legs, and then applying a count operator on extracted segmentation masks [28]. However, as illustrated in Figure 1, in many visual reasoning problems, the correct answer depends on a very small portion of
30
+
31
+ ![](images/163e68919589b9232b5b737623273ce09788a01446210f5344503e7e9aec3207.jpg)
32
+ Figure 1: Issues with 2D Concept Grounding and Visual Reasoning. Existing methods typically answer questions by relying on 2D segmentation masks obtained from supervised models. However, heavy occlusion leads to incorrect segmentations and answers.
33
+
34
+ a given image, such as a highly occluded back leg, which many existing 2D segmentation systems may neglect. Humans are able to accurately ground the concepts from images and answer such questions by reasoning on the underlying 3D representation of the image [43]. In this underlying 3D representation, the individual legs of a table are roughly similar in size and shape, without underlying issues of occlusion of different legs, making reasoning significantly easier and more flexible. In addition, such an intermediate 3D representation further abstracts confounding factors towards reasoning, such as the underlying viewing direction from which we see the input shape. To resemble more human-like reasoning capability, in this paper, we propose a novel concept grounding framework by utilizing an intermediate 3D representation.
35
+
36
+ A natural question arises – what makes a good intermediate 3D representation for concept grounding? Such an intermediate representation should be discovered in a weakly-supervised manner and efficient to infer, as well as maintain correspondences between 3D coordinates and feature maps in a fully differentiable and flexible way, so that segmentation and concept learning can directly emerge from this inferred 3D representation with supervision of question answering. While traditional 3D representations (e.g., point clouds, voxelgrids and meshes) are efficient to infer, they can not provide continuous 3D feature correspondences flexibly, thus making it challenging to ground concepts to 3D coordinates based on the language descriptions of objects being referred to. To address both issues, in this work, we propose to leverage the continuous, differentiable nature of neural fields as the intermediate 3D representation, which could be easily used for segmentation and concept learning through question answering.
37
+
38
+ To parameterize a neural field for reasoning, we utilize recently proposed Neural Descriptor Fields (NDFs) [41] which assign each 3D point in a scene a high dimensional descriptor. Such descriptors are learned in a weakly-supervised manner, and implicitly capture the hierarchical nature of a scene, by leveraging implicit feature hierarchy learned by a neural network. Portions of a scene relevant to a particular concept could be then differentiably extracted through a vector similarity computation between each descriptor and a corresponding vector embedding of the concept, enabling concept segmentations on NDFs to be differentiably learned. In contrast, existing reasoning approaches utilize supervised segmentation models to pre-defined concepts, which prevents reasoning on concepts unseen by the segmentation model [28], and further prevents models from adapting segmentations based on language descriptions.
39
+
40
+ On top of NDFs, we define a set of neural operators for visual reasoning. First, we construct a filter operator, and predict the existence of a particular concept in an image. We also have a query operator which queries about an attribute of the image. In addition, we define a neural counting operator, which quantifies the number of instances of a particular concept.
41
+
42
+ To evaluate the performances of our 3D Concept Grounding (3D-CG) framework, we conduct experiments on our collected PARTNET-REASONING dataset, which contains approximately 3K images of 3D shapes and 20K question-answer pairs. We find that by integrating our neural operators with NDF, our framework is able to effectively and robustly answer a set of visual reasoning questions. Simultaneously, we observe the emergence of 3D segmentations of concepts, at the semantic and instance level, directly from the underlying supervision of downstream visual question answering.
43
+
44
+ Our contributions can be summed up as follows: 1) we propose 3D-CG, which utilizes the differentiable nature of neural descriptor fields (NDF) to ground concepts and perform segmentations; 2) we define a set of neural operators, including a neural counting operator on top of the NDF; 3) with
45
+
46
+ 3D-CG, semantic and instance segmentations can emerge from question answering supervision; 4) our 3D-CG outperforms baseline models in both segmentation and reasoning tasks; 5) it can also generalize well to unseen shape categories and real scans.
47
+
48
+ # 2 Related works
49
+
50
+ Visual Reasoning. There have been different tasks focusing on learning visual concepts from natural language, such as visually-grounded question answering [11, 12] and text-image retrieval [45]. Visual question answering that evaluates machine's reasoning abilities stands out as a challenging task as it requires human-like understanding of the visual scene. Numerous visual reasoning models have been proposed in recent years. Specifically, MAC [16] combined multi-modal attention for compositional reasoning. LCGN [15] built contextualized representations for objects to support relational reasoning. These methods model the reasoning process implicitly with neural networks. Neural-symbolic methods [53, 28, 3] explicitly perform symbolic reasoning on the objects representations and language representations. Specifically, they use perception models to extract 2D masks for 3D shapes as a first step, and then execute operators and ground concepts on these pre-segmented masks, but are limited to a set of pre-defined concepts. In this paper, we present an approach towards grounding 3D concepts in a fully differentiable manner, with which 3D segmentations can emerge from question answering supervision.
51
+
52
+ Neural Fields. Our approach utilizes neural fields also known as neural implicit representations, to parameterize an underlying 3D geometry of a shape for reasoning. Implicit fields have been shown to accurately represent shape topology and 3D geometry [1, 20, 32]. Recent works improve traditional shape fields by using continuous neural networks [33, 5, 36, 38, 39, 51, 13]. These works are typically used for reconstruction [33, 36, 29] or geometry processing [51, 37, 8]. We use a neural descriptor field (NDF) similar to [29] to infer the 3D representations from 2D images, and leverage the features learnt from the NDF for visual reasoning and language grounding. Neural fields have also been used to represent dynamic scenes [34, 10], appearance [42, 30, 35, 52, 40], physics [23], robotics [18], acoustics [27] and more general multi-modal signals [9]. Conditional neural field such as [25, 48] allows us to encode instance-specific information into the field, which is also utilized by our framework to encode the RGBD data of 3D shapes. There are also some works that integrate semantics or language in neural fields [17, 46]. However, they mainly focus on using language for manipulation, editing or generation. In this work, we utilize descriptors defined on neural fields [41] to differentiably reason in 3D given natural language questions. We refer readers to recent surveys [49, 44] for more related works on neural fields.
53
+
54
+ Language-driven Segmentation. Recent works have focused on leveraging language for segmentation. Specifically, ViLD [14] distills the knowledge from a pre-trained image classification model into a two-stage detector. MDETR [21] is an end-to-end modulated detector that detects objects in an image conditioned on a raw text query, like a caption or a question. LSeg [24] uses a text encoder to compute embeddings of descriptive input labels together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. GroupViT [50] learns to group image regions into progressively larger arbitrary-shaped segments with a text encoder. These methods typically use an encoder to encode the text and do not have the ability to modify the segmentations based on the question answering loss. LORL [47] uses the part-centric concepts derived from language to facilitate the learning of part-centric representations. However, it can only improve existing segmentations and cannot generate segmentations from scratch with language supervision.
55
+
56
+ # 3 3D Concept Grounding
57
+
58
+ In Figure 2, we show an overall framework of our 3D Concept Grounding (3D-CG), which seamlessly bridges the gap between 3D perception and reasoning by using Neural Descriptor Field (NDF). Specifically, we first use NDF to assign a high dimensional descriptor vector for each 3D coordinate, and run a semantic parsing module to translate the question into neural operators to be executed on the neural field. The concepts, also the parameters of the operators, possess concept embeddings as well. Upon executing the operators and grounding concepts in the neural field, we perform dot product attention between the descriptor vector of each coordinate and the concept embedding vector to calculate the score (or mask probability) of a certain concept being grounded on a coordinate. We also propose a count operator which assigns point coordinates into different slots for counting the number of instances of a part category. Both the NDF and the neural operator have a fully differentiable
59
+
60
+ ![](images/04d247fb596db394671bba3e48794817bb23c01aa4febd25c3147a4c77626320.jpg)
61
+ Figure 2: Our 3D Concept Grounding (3D-CG) framework. Given an input image and a question, 3D-CG first utilizes a Neural Descriptor Field (NDF) to extract a descriptor vector for each 3D point coordinate (here we take 5 coordinates with 5 different colors as examples), and a semantic parsing network to parse the question into operators. The mask probabilities of the coordinates are initialized with occupancy values. Concepts are also mapped to embedding vectors. Attention is calculated between the descriptors and concept embeddings to yield new mask probabilities and filter the set of coordinates referred to by the concept. We also define a count operator which segments a part category into different slots. The sum of the maxpooled mask probabilities of all slots can be used for counting, and the loss can be backpropagated to optimize the NDF as well as the embeddings. Semantic and instance segmentations emerge from question-answering supervision with 3D-CG.
62
+
63
+ design. Therefore, the entire framework can be end-to-end trained, and the gradient from the question answering loss can be utilized to optimize both the NDF perception module and the visual reasoning module and jointly learn all the embeddings and network parameters.
64
+
65
+ # 3.1 Model Details
66
+
67
+ # 3.1.1 Neural Descriptor Fields
68
+
69
+ In this paper, we utilize a neural field to map a 3D coordinate $\mathbf{x}$ using a descriptor function $f(\mathbf{x})$ and extract the feature descriptor of that 3D coordinate:
70
+
71
+ $$
72
+ f(\mathbf{x}): \mathbb{R}^{3} \rightarrow \mathbb{R}^{n}, \quad \mathbf{x} \mapsto f(\mathbf{x}) = \mathbf{v} \tag{1}
73
+ $$
74
+
75
+ where $\mathbf{v}$ is the descriptor representation which encodes information about shape properties (e.g., geometry, appearance, etc).
76
+
77
+ In our setting, we condition the neural field on the partial point cloud $\mathcal{P}$ derived from RGB-D images. We use a PointNet-based encoder $\mathcal{E}$ to encode $\mathcal{P}$ . The descriptor function then becomes $f(\mathbf{x},\mathcal{E}(P)) = \mathbf{v}$ . This continuous and differentiable representation maps each 3D coordinate $\mathbf{x}$ to a descriptor vector $\mathbf{v}$ . Since the concepts to be grounded in most reasoning tasks focus on geometry and appearance, we leverage two pre-training tasks to parameterize $f$ and learn the features concerning these properties.
78
+
79
+ Shape Reconstruction. Inspired by recent works on using occupancy networks [29] to reconstruct shapes and learn shape representations, we also use an MLP decoder $\mathcal{D}_1$ which maps each descriptor vector $\mathbf{v}$ to an occupancy value: $\mathcal{D}_1:\mathbf{v}\mapsto \mathbf{o}\in [0,1]$ , where the occupancy value indicates whether the 3D coordinate is at the surface of a shape.
80
+
81
+ Color Reconstruction. We use another MLP $\mathcal{D}_2$ to decode an RGB color of each coordinate: $\mathcal{D}_2:\mathbf{v}\mapsto \mathbf{c}\in \mathbb{R}^3$ , where the color value indicates the color of the 3D coordinate on the shape.
82
+
83
+ After learning decoders $D_{i}$ through these pre-training tasks, to construct the Neural Descriptor Field (NDF) $f(\pmb{x})$ , we concatenate all the intermediate activations of $D_{i}$ when decoding a 3D point $\mathbf{x}$ [41]. The resultant descriptor vector $\mathbf{v}$ can then be used for concept grounding. Since $\mathbf{v}$ consists of intermediate activations of $D_{i}$ , they implicitly capture the hierarchical nature of a scene, with earlier activations corresponding to lower level features and later activations corresponding to higher level features.
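+
+ As a concrete illustration, below is a minimal PyTorch-style sketch of such a conditional descriptor field (this is our own illustrative code, not the authors' implementation; the class name, layer sizes, and the simplified PointNet-style encoder are assumptions): a point-cloud encoder produces a latent code $\mathcal{E}(\mathcal{P})$, occupancy and color decoders are queried at 3D coordinates, and the concatenated intermediate activations serve as the descriptor $\mathbf{v}$.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class DescriptorField(nn.Module):
+     """Illustrative conditional descriptor field (hypothetical names and sizes)."""
+     def __init__(self, latent_dim=128, hidden=256):
+         super().__init__()
+         # PointNet-style encoder E: shared per-point MLP followed by max pooling.
+         self.encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
+                                      nn.Linear(128, latent_dim))
+         # D1 (occupancy) and D2 (color) decoders, conditioned on [x, E(P)].
+         self.occ_hidden = nn.ModuleList([nn.Linear(3 + latent_dim, hidden),
+                                          nn.Linear(hidden, hidden)])
+         self.occ_out = nn.Linear(hidden, 1)
+         self.rgb_hidden = nn.ModuleList([nn.Linear(3 + latent_dim, hidden),
+                                          nn.Linear(hidden, hidden)])
+         self.rgb_out = nn.Linear(hidden, 3)
+
+     def forward(self, coords, pcd):
+         # coords: (M, 3) query points; pcd: (N, 3) partial point cloud.
+         z = self.encoder(pcd).max(dim=0).values            # latent code E(P)
+         h0 = torch.cat([coords, z.expand(coords.size(0), -1)], dim=-1)
+         acts, x = [], h0
+         for layer in self.occ_hidden:                      # D1 activations
+             x = torch.relu(layer(x))
+             acts.append(x)
+         occ = torch.sigmoid(self.occ_out(x))               # occupancy in [0, 1]
+         x = h0
+         for layer in self.rgb_hidden:                      # D2 activations
+             x = torch.relu(layer(x))
+             acts.append(x)
+         rgb = torch.sigmoid(self.rgb_out(x))               # RGB color
+         descriptor = torch.cat(acts, dim=-1)               # v = f(x, E(P))
+         return descriptor, occ, rgb
+ ```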
84
+
85
+ # 3.1.2 Concept Quantization
86
+
87
+ Visual reasoning requires determining the attributes (e.g., color, category) of a shape or shape part, where each attribute contains a set of visual concepts (e.g., blue, leg). As shown in Figure 2, visual concepts, such as legs, are represented as vectors in the embedding space of part categories. These concept vectors are also learned by our framework. To ground concepts in the neural fields, we calculate the dot products $\langle \cdot ,\cdot \rangle$ between the concept vectors and the descriptor vectors from the neural field. For example, to calculate the score of a 3D coordinate belonging to the category leg, we take its descriptor vector $\mathbf{v}$ and compute $p = \langle \mathbf{v},v^{leg}\rangle$, where $v^{leg}$ is the concept embedding of leg.
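+
+ As a rough sketch (our own illustration; the embedding dimension, concept vocabulary, and the sigmoid squashing are assumptions), this grounding score is a simple batched dot product between per-point descriptors and a learnable concept embedding:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ concepts = ["leg", "back", "seat", "blue", "yellow"]   # hypothetical vocabulary
+ embed = nn.Embedding(len(concepts), 512)               # 512 = descriptor dimension
+
+ def concept_score(descriptors, concept):
+     """Score of each 3D coordinate belonging to `concept`: p_i = <v_i, v^c>."""
+     c = embed(torch.tensor(concepts.index(concept)))
+     # Squashing to [0, 1] is an assumption, so the scores can be used as masks.
+     return torch.sigmoid(descriptors @ c)
+
+ p = concept_score(torch.randn(100, 512), "leg")        # (100,) per-point scores
+ ```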
88
+
89
+ # 3.1.3 Semantic Parsing
90
+
91
+ To transform natural language questions into a set of primitive operators that can be executed on the neural field, a semantic parsing module similar to that in [53] is incorporated into our framework. This module utilizes an LSTM-based Seq2Seq model to translate questions into a set of fundamental operators for reasoning. These operators take concepts and attributes as their parameters (e.g., Filter(leg), Query(color)).
92
+
93
+ # 3.1.4 Neural operators on Neural Descriptor Fields
94
+
95
+ The operators extracted from natural language questions can then be executed on the neural field. Due to the differentiable nature of the neural field, the whole execution process is also fully-differentiable. We represent the intermediate results in a probabilistic manner: for the $i$ -th sampled coordinate $\mathbf{x}_i$ , it is represented by a descriptor vector $\mathbf{v}_i$ and has an attention mask $\mathbf{m}_i \in [0,1]$ . $\mathbf{m}_i$ denotes the probability that a coordinate belongs to a certain set, and is initialized using the occupancy value output by $\mathcal{D}_1$ : $\mathbf{m}_i = \mathbf{o}_i$ .
96
+
97
+ There are three kinds of operators output by the semantic parsing module, we will illustrate how each operator executes on the descriptor vectors to yield the outputs.
98
+
99
+ Filter Operator. The filter operator "selects out" a set of coordinates belonging to the concept $c$ by outputting new mask probabilities:
100
+
101
+ $$
102
+ \operatorname{Filter}(c): \mathbf{m}_{i}^{c} = \min (\mathbf{m}_{i}, \langle \mathbf{v}_{i}, v^{c} \rangle) \tag{2}
103
+ $$
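+
+ A minimal sketch of this update (illustrative only; `concept_score` denotes the dot-product attention from Section 3.1.2 squashed to [0, 1]):
+
+ ```python
+ import torch
+
+ def filter_op(mask, descriptors, concept_emb):
+     """Filter(c): keep only the points referred to by concept c (Eq. 2)."""
+     score = torch.sigmoid(descriptors @ concept_emb)   # <v_i, v^c>, squashed
+     return torch.minimum(mask, score)                  # m_i^c = min(m_i, score_i)
+ ```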
104
+
105
+ Query Operator. The query operator asks about an attribute $a$ of the selected coordinates and outputs the concept $\hat{c}$ with the maximum probability:
106
+
107
+ $$
108
+ \operatorname{Query}(a): \hat{c} = \arg \max \min \left(\mathbf{m}_{i}, \left\langle \mathbf{v}_{i}, v^{c} \right\rangle\right), \quad c \in a \tag{3}
109
+ $$
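+
+ A hedged sketch of this step (illustrative; taking the maximum over points within each concept before the argmax over concepts is one natural reading of Eq. 3):
+
+ ```python
+ import torch
+
+ def query_op(mask, descriptors, attribute_embs):
+     """Query(a): return the concept of attribute a with the highest score (Eq. 3).
+
+     attribute_embs: dict mapping each concept name of attribute a to its embedding.
+     """
+     best, best_score = None, float("-inf")
+     for name, emb in attribute_embs.items():
+         score = torch.minimum(mask, torch.sigmoid(descriptors @ emb)).max()
+         if score.item() > best_score:
+             best, best_score = name, score.item()
+     return best
+ ```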
110
+
111
+ Count Operator. The count operator segments the coordinates of the same part category into instances and counts the number of instances. Inspired by previous unsupervised segmentation methods which output different parts in various slots [26, 4], we also allocate a set of slots for different instances of a category. The $j$-th slot has its own embedding vector $v^{s_j}$. The score $s_{ij}$ of the $i$-th coordinate belonging to the $j$-th slot is also computed by a dot product $s_{ij} = \langle \mathbf{v}_i, v^{s_j} \rangle$. We further take a softmax to normalize the probabilities across slots. Since we want to count the instances of a part that was previously selected out, we also take the minimum of the previous mask probabilities and the mask probabilities of each slot. The result of counting (i.e., the number of part instances $n$) is obtained by summing up the maximum probability in each slot.
112
+
113
+ $$
114
+ \operatorname{Count}(c): \mathbf{m}_{ij} = \min \left(\mathbf{m}_{i}^{c}, \frac{s_{ij}}{\sum s_{i'j}}\right) \tag{4}
115
+ $$
116
+
117
+ $$
118
+ n = \sum_{j} \max_{i} \mathbf{m}_{ij} \tag{5}
119
+ $$
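+
+ In code, a minimal sketch looks as follows (illustrative only; following the text, the softmax here normalizes the slot scores across slots):
+
+ ```python
+ import torch
+
+ def count_op(mask_c, descriptors, slot_embs):
+     """Count(c): soft instance count of a filtered part category (Eqs. 4-5)."""
+     s = torch.softmax(descriptors @ slot_embs.t(), dim=1)    # (M, S) slot scores
+     m = torch.minimum(mask_c.unsqueeze(1), s)                # m_ij = min(m_i^c, s_ij)
+     return m.max(dim=0).values.sum()                         # n = sum_j max_i m_ij
+ ```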
120
+
121
+ # 3.1.5 Segmentation
122
+
123
+ Note that based on the above operators, not only visual reasoning can be performed, but we can also do semantic segmentation and instance segmentation.
124
+
125
+ Semantic Segmentation. We can achieve semantic segmentation by executing Query(category) on each coordinate and outputting the category $\hat{c}$ with the maximum probability for each point.
126
+
127
+ Instance Segmentation. We can perform instance segmentation by first doing semantic segmentation, and then executing Count($\hat{c}$) for each output category $\hat{c}$.
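+
+ For instance, semantic segmentation reduces to a per-point argmax over category concepts (a sketch reusing the illustrative pieces above; the names and the sigmoid squashing are assumptions):
+
+ ```python
+ import torch
+
+ def semantic_segmentation(descriptors, occupancy, category_embs):
+     """Label each occupied point with the part category of highest grounding score."""
+     names = list(category_embs.keys())
+     scores = torch.stack(
+         [torch.minimum(occupancy, torch.sigmoid(descriptors @ category_embs[n]))
+          for n in names], dim=1)                      # (M, num_categories)
+     return [names[i] for i in scores.argmax(dim=1).tolist()]
+
+ # Instance segmentation then applies the slot assignment used by Count(c)
+ # to the points of each predicted category.
+ ```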
128
+
129
+ # 3.2 Training Paradigm
130
+
131
+ Optimization Objective. During training, we jointly optimize the NDF and the concept embeddings by minimizing the loss as:
132
+
133
+ $$
134
+ \mathcal{L} = \alpha \cdot \mathcal{L}_{\mathrm{NDF}} + \beta \cdot \mathcal{L}_{\text{reasoning}} \tag{6}
135
+ $$
136
+
137
+ Specifically, the NDF loss consists of two parts: the binary cross-entropy classification loss between the ground-truth occupancy and the predicted occupancy, and the MSE loss between the ground-truth RGB value and the predicted RGB value.
138
+
139
+ We also define three kinds of losses for three types of questions: 1) For questions that ask about whether a part exist, we use an MSE loss $\|a - \max(\mathbf{m}_i)\|^2$ where $a$ is the ground-truth answer and $\max(\mathbf{m}_i)$ takes the maximum mask probability among all sampled 3D coordinates; 2) For questions that query about an attribute, we take the cross entropy classification loss between the predicted concept category $\hat{c}$ and the ground-truth concept category; 3) For counting questions, we first use an MSE loss $\|a - n\|^2$ between the answer and the number output by summing up the maxpool values of all the slots, and further add a loss to ensure that the mask probabilities of the top $K$ values in the top $a$ slots (i.e., the slots with the maximum maxpool values) should be close to 1, where $K$ is a hyper-parameter. During training, we first train the NDF module only for $N_1$ epochs and then jointly optimize the NDF module and the concept embeddings. The Seq2seq model for semantic parsing is trained independently with supervision.
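+
+ A hedged sketch of the combined objective (illustrative only; the loss weights and the exact form of the counting regularizer are not specified beyond the description above):
+
+ ```python
+ import torch.nn.functional as F
+
+ def total_loss(pred_occ, gt_occ, pred_rgb, gt_rgb, reasoning_loss,
+                alpha=1.0, beta=1.0):
+     """L = alpha * L_NDF + beta * L_reasoning (Eq. 6); alpha/beta are placeholders."""
+     l_ndf = F.binary_cross_entropy(pred_occ, gt_occ) + F.mse_loss(pred_rgb, gt_rgb)
+     return alpha * l_ndf + beta * reasoning_loss
+
+ # Reasoning losses per question type (as described above):
+ #   exist: F.mse_loss(answer, mask.max())
+ #   query: F.cross_entropy(concept_logits, gt_concept)
+ #   count: F.mse_loss(answer, n) plus a term pushing the top-K probabilities
+ #          in the top-a slots toward 1 (K is a hyper-parameter).
+ ```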
140
+
141
+ Curriculum Learning. Motivated by previous curriculum strategies for visual reasoning [28], we employ a curriculum learning strategy for training. We split the questions according to the length of the operator programs they are parsed into. Therefore, we start with questions that involve only one neural operator (e.g., "is there a yellow part of the chair" can be parsed into Filter(Color)).
142
+
143
+ # 4 Experiments
144
+
145
+ # 4.1 Experimental Setup
146
+
147
+ # 4.1.1 Dataset
148
+
149
+ Instead of object-based visual reasoning [19, 53, 28] where objects are spatially disentangled, which makes segmentations quite trivial, we seek to explore part-based visual reasoning where segmentations and reasoning are both harder. To this end, we collect a new dataset, PartNet-Reasoning, which focuses on visual question answering on the PartNet [31] dataset. Specifically, we render approximately 3K RGB-D images from shapes of 4 categories: Chair, Table, Bag and Cart, with 8 question-answer pairs on average for each shape. We have three question types: exist_part queries about whether a certain part exists by having the filter operator as the last operator; query_part uses query operator to query about an attribute (e.g., part category or color) of a filtered part; count_part utilizes the count operator to count the number of instances of a filtered part. We are interested in whether the reasoning tasks can benefit the segmentations of the fine-grained parts, as well as whether the latent descriptor vectors by NDF can result in better visual reasoning.
150
+
151
+ # 4.1.2 Evaluation Tasks
152
+
153
+ Reasoning. We report the visual question answering accuracy on the PartNet-Reasoning dataset w.r.t the three types of questions on all four categories.
154
+
155
+ Segmentation. We further evaluate the performance of both semantic segmentation and instance segmentation on our dataset. As stated in 3.1.5, semantic segmentation can be performed by querying the part category with the maximum mask probability of each coordinate and filtering the coordinates according to category labels, and instance segmentation can be achieved by using the count operator on each part category. We report the mean per-label Intersection Over Union (IOU).
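+
+ For reference, a minimal sketch of the mean per-label IoU metric (our own illustrative implementation, not necessarily the evaluation script used for the tables):
+
+ ```python
+ import numpy as np
+
+ def mean_per_label_iou(pred, gt, labels):
+     """Mean Intersection-over-Union over part labels for one shape."""
+     ious = []
+     for label in labels:
+         inter = np.logical_and(pred == label, gt == label).sum()
+         union = np.logical_or(pred == label, gt == label).sum()
+         if union > 0:
+             ious.append(inter / union)
+     return float(np.mean(ious)) if ious else 0.0
+ ```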
156
+
157
+ # 4.1.3 Baselines
158
+
159
+ We compare our approach to a set of different baselines, with details provided in the supp. material.
160
+
161
+ <table><tr><td colspan="2"></td><td>PointNet+LSTM</td><td>MAC</td><td>NDF+LSTM</td><td>CVX+L</td><td>BAE+L</td><td>3D-CG</td></tr><tr><td rowspan="3">Chair</td><td>exist_part</td><td>52.3</td><td>65.2</td><td>55.7</td><td>71.9</td><td>72.4</td><td>85.4</td></tr><tr><td>query_part</td><td>41.6</td><td>53.9</td><td>54.2</td><td>72.3</td><td>70.5</td><td>68.7</td></tr><tr><td>count_part</td><td>63.4</td><td>78.1</td><td>71.6</td><td>48.8</td><td>68.0</td><td>92.2</td></tr><tr><td rowspan="3">Table</td><td>exist_part</td><td>55.1</td><td>66.4</td><td>61.3</td><td>68.5</td><td>71.0</td><td>80.3</td></tr><tr><td>query_part</td><td>43.6</td><td>51.2</td><td>54.4</td><td>69.7</td><td>73.6</td><td>75.1</td></tr><tr><td>count_part</td><td>35.5</td><td>55.3</td><td>50.1</td><td>30.7</td><td>45.7</td><td>90.9</td></tr><tr><td rowspan="3">Bag</td><td>exist_part</td><td>65.4</td><td>85.4</td><td>69.2</td><td>87.3</td><td>73.8</td><td>89.2</td></tr><tr><td>query_part</td><td>53.1</td><td>74.8</td><td>64.0</td><td>88.1</td><td>70.6</td><td>85.2</td></tr><tr><td>count_part</td><td>51.2</td><td>70.9</td><td>72.3</td><td>53.4</td><td>31.4</td><td>68.3</td></tr><tr><td rowspan="3">Cart</td><td>exist_part</td><td>49.7</td><td>75.3</td><td>61.7</td><td>79.1</td><td>91.0</td><td>90.1</td></tr><tr><td>query_part</td><td>50.1</td><td>64.0</td><td>57.0</td><td>72.3</td><td>81.5</td><td>86.3</td></tr><tr><td>count_part</td><td>41.6</td><td>82.1</td><td>67.3</td><td>46.6</td><td>47.3</td><td>74.8</td></tr></table>
162
+
163
+ Table 1: Visual question answering accuracies. exist_part queries about whether a certain part exists by having the filter operator as the last operator; query_part uses the query operator to query about an attribute of a filtered part; count_part utilizes the count operator to count the number of instances of a filtered part. PointNet+LSTM, MAC and NDF+LSTM are methods based on neural networks. CVX+L and BAE+L are neural-symbolic methods which pre-segment the masks and ground concepts on the masks. 3D-CG outperforms baseline models by a large margin.
164
+
165
+ Unsupervised Segmentation Approaches. We compare 3D-CG with two additional 3D unsupervised segmentation models, and further consider how we may integrate such approaches with concept grounding. First, we consider CVXNet [7], which decomposes a solid object into a collection of convex polytopes using neural implicit fields. In this baseline, a number of primitives are selected and a convex part is generated in each primitive. Next, we consider BAENet [4], which decomposes a shape into a set of pre-assigned branches, each specifying a portion of a whole shape. In practice, we tuned the original code of BAENet on our dataset and found that nearly all coordinates would be assigned to a single branch. This is probably because the original model is trained on voxel models, while ours is trained on partial point clouds of more complex shapes, thus posing more challenges. Therefore, for BAENet we use an additional loss which enforces that at least some of the branches should have multiple point coordinates assigned a positive occupancy value. The primitives in CVXNet and the branches in BAENet are similar to the slots in our paper.
166
+
167
+ Reasoning Baselines. We compare our approach to a set of different reasoning baselines. First, we consider PointNet+LSTM and NDF+LSTM, which use the features from PointNet or our NDF module, concatenated with the features of the questions encoded by an LSTM model. The final answer distribution is predicted with an MLP. Next, we compare with MAC, a commonly-used attention-based model for visual reasoning. We add a depth channel to the input to the model. Finally, we compare with CVX+L and BAE+L, neural-symbolic models that use a language-mediated segmentation framework similar to that of [47]. Specifically, they first initialize the segmentations in the slots, and each slot has a slot feature vector. The operators from the questions are executed symbolically on the slot features. The question answering loss can also be propagated back to fine-tune the slot segmentations and features.
168
+
169
+ Segmentation Baselines. For the segmentation tasks, we compare our approach with the unsupervised segmentation approaches CVX and BAE described above. We further construct the language-mediated variants CVX+L and BAE+L. For semantic segmentation by CVX and BAE, we use the manual grouping methods as in [7]. For semantic segmentation by the language-mediated models, we use the filter operator to filter the slots belonging to the part categories.
170
+
171
+ # 4.2 Experimental Results
172
+
173
+ # 4.2.1 Reasoning & Concept Grounding
174
+
175
+ Table 1 shows the VQA accuracy of all models on our dataset. We can see that overall, our 3D-CG outperforms baseline models by a large margin. Language-mediated neural-symbolic methods (CVX+L & BAE+L) are better than neural methods in the exist_part and query_part question types, but in general they are far worse than our method. This is because the slot-based representations pre-select the parts and make it easier for the concepts to attend to specific regions. However, wrong pre-segmentations are also hard to correct during the training of reasoning.
176
+
177
+ ![](images/2464135cbedd09e1dfc6f260ffdff4791105ef9d4c42e03f6b97d50d6bcfc196.jpg)
178
+
179
+ # Question:
180
+
181
+ Is there any other part of the same color as the legs?
182
+
183
+ Step1: Filter (Leg)
184
+
185
+ Step2: Query (Color)
186
+ Step3: Other &
187
+
188
+ Filter (Dark Blue)
189
+
190
+ ![](images/a84c7edf2a67a52861bc8a4ab77e358a4a3bb2d0d0e9409715b4b311c01a096d.jpg)
191
+
192
+ Dark Blue
193
+
194
+ ![](images/4e86f0033720fe9a853329918229c34a3af406b630daf38d91af675de2ca3936.jpg)
195
+
196
+ # Answer: Yes
197
+
198
+ Filter (Yellow)
199
+
200
+ ![](images/14956e6cb25ddb336cde1b1a13b0aa0e234367399d9ab9258fb15dc7856010a6.jpg)
201
+
202
+ Answer: Yes
203
+
204
+ ![](images/49eb2cce1eb112bb4a2b8dc98a81c970fe53d37cd7307fb4e8d5d8fabca79263.jpg)
205
+
206
+ # Question:
207
+
208
+ What is the category of the green part of the chair?
209
+
210
+ Step1: Filter (Green)
211
+
212
+ ![](images/dfc0d1db48cc8d20f1c27f82d1251a9993bdb27ba71808d63c8c393130045ba1.jpg)
213
+
214
+ Step2: Query (Category) Seat
215
+
216
+ ![](images/188dcd5e513d37708e47a9c14791114e846b06ff623513e28ea0befb4edfc5dd.jpg)
217
+
218
+ # Question:
219
+
220
+ How many legs does this table have?
221
+
222
+ Step1: Filter (Leg)
223
+
224
+ Step2: Count
225
+
226
+ Answer: 6
227
+
228
+ ![](images/6e9279932678c7e344f77fdfa747f5ddee36eba56ad9f5b77c9054cd785b2df5.jpg)
229
+ Figure 3: Qualitative Illustration. Examples of the reasoning process of our proposed 3D-CG. Given an input image of a shape and a question, we first parse the question into steps of operators. We visualize the set of points being referred to by the operator via highlighting the regions where mask probabilities $>0.5$ . As is shown, our 3D-CG can make reference to the right set of coordinates, thus correctly answering the questions.
230
+
231
+ ![](images/7ea4f38f323bcf5c6ddb0b9b243d828d47c1d24e4208ec655cb5e9bf392715bf.jpg)
232
+
233
+ Question: Is there a yellow part in the cart?
234
+
235
+ The accuracies for the count_part type further demonstrate this point, where language-mediated baselines have extremely low accuracies, and segmentations from Figure 4a also show that even with supervision from the question answering loss, BAE and CVX cannot segment the instances of a part category (e.g., legs and wheels). Neural methods such as MAC and NDF+LSTM perform well in the count_part type in some categories, probably due to some shortcuts of the PartNet dataset (e.g., most bags have two shoulder straps). In comparison to both neural and neural-symbolic methods, our method performs well in all three question types. This is because the regions attended by concepts are dynamically improved with the question answering loss, while the advantage of explicit disentanglement of the concept learning and reasoning processes is maintained.
236
+
237
+ Figure 3 shows some qualitative examples of the reasoning process of our method. Specifically, the questions are parsed into a set of neural operators. For each step, the neural operator takes the output of the last step as its input. For the filter operator, we visualize the attended regions with mask probabilities greater than 0.5. From the figure, we can see that our method can attend to the correct region to answer the questions, as well as segment the instances correctly for counting problems. This attention-based reasoning pipeline is also closer than previous neural-symbolic methods to the way that humans perform reasoning. For example, when asked the question "What is the category of the green part of the chair?", humans would directly pay attention to the green region regardless of the segmentations of the rest of the chair. However, previous neural symbolic methods [53, 28, 47] segment the chair into different parts first and then select the green part, which is very unnatural.
238
+
239
+ # 4.2.2 Segmentation
240
+
241
+ Table 2 shows the semantic segmentation and instance segmentation results, and Figure 4a visualizes some qualitative examples. We can see that overall, our 3D-CG can better segment parts than unsupervised or language-mediated methods. Other methods experience some common problems, such as failing to segment the instances within a part category (e.g., one of the legs is always merged with the seat), or producing unclear boundaries and splitting one part into multiple parts (e.g., segmenting the chair back or the tabletop into two parts). In general, language-mediated methods are better than pure unsupervised methods, which indicates that the supervision of question answering does benefit the segmentation modules. However, many of the wrong segmentations cannot be corrected by reasoning, resulting in poor results for counting questions for these models. In contrast, our 3D-CG can learn segmentations well using the question answering loss, mainly because the segmentations emerge from question-answering supervision without any restrictions.
242
+
243
+ For fair comparison with baselines, we first control the reconstruction part and use the ground-truth voxels for the segmentation evaluations only. We also provide additional experimental results when the segmentations are performed on the reconstructed voxels, which is shown in Table 3.
244
+
245
+ ![](images/e9486f4c10da52a79b46c92f42696caa9971184a6b7417f39d19661c8ba85156.jpg)
246
+ Shapes
247
+
248
+ ![](images/3f80f1c4694dda4c53a9f74163b1ea5476991ae33bcee88f6f6ca0dcfc76db04.jpg)
249
+
250
+ ![](images/a640957760f68664f6c78dc20bd28322936182e88004e9ffdfc8a16aa20f00e9.jpg)
251
+
252
+ ![](images/0ecd30cc941627e6033cd54ab03bae4a0781466d62a8cfe9a852fb248e68dc35.jpg)
253
+ Semantic Segmentation
254
+
255
+ ![](images/db6b85fcb1131c4f951a6e7312d9cefab1aa49091b4ea905f5819fe96a161131.jpg)
256
+
257
+ ![](images/2aaf7b788e072fa10b5a884916db7fe6477470da92201a621a01d9fc39830476.jpg)
258
+
259
+ ![](images/901ea62ff38fb878306cd4cbb4246c0d97c97b9a8873da0fdfcf3b2e2e5a28a5.jpg)
260
+
261
+ ![](images/3a53e69a8888e86a9880c20180edb501a75e4d4c0413c0dc43aab102101ed685.jpg)
262
+
263
+ ![](images/32c97fb8c37f43502ab69fd33e6dea0b39da1efba4451067c0a96198df68870a.jpg)
264
+ CVX+L
265
+
266
+ ![](images/cb38ee9aa08a3265006bf375e9d7762a775259cae3491e08556947f1af93c591.jpg)
267
+ $\mathrm{BAE + L}$
268
+
269
+ ![](images/3858df2e025f358d22f52fd84194d274008adeebc6f8c201ed8cabd7c5071eb5.jpg)
270
+
271
+ ![](images/1db726c4f2d6d2835cb2e9a21d83124082be4f4fa958b5a993199db08d4d731a.jpg)
272
+
273
+ ![](images/2af5a1415103e9bf47f99b533500ac69b173fdc37e13ead8d9c9d880c9e97746.jpg)
274
+
275
+ ![](images/085335504d4f32b3e6bd3ae8f7220fcb9c19ce9c3bfbe13f733bdca8719cc7af.jpg)
276
+ BAE
277
+
278
+ ![](images/f2c2807b62b7e03e2338a1bc850e3bc7449dc4beda1e40a36fca897230de73fa.jpg)
279
+
280
+ ![](images/4346907cc9df03becfe443276b546289a77e153ae64df79724cf30dee2c3cef0.jpg)
281
+
282
+ ![](images/400c63b71c588c8b22398817b69e4b7f183098b5e5281b3228c8968449dab2e9.jpg)
283
+
284
+ ![](images/f92d3a63fd9b9fb513966010f12d2e7dd9dde58fa86e3f62d6ddf88d4f5f83ba.jpg)
285
+
286
+ ![](images/66db31bd4aba2e48e2946a157695363b61aa950797918273daf9e12ada237797.jpg)
287
+ 3D-CG
288
+
289
+ ![](images/1b6bcc7f4796c2112fa129099e5f4f301a80bdb8fb70ae3af40abbf7f653085b.jpg)
290
+ Instance Segmentation
291
+
292
+ ![](images/f11c31a30063f3837ba3a3cdf073bc320b7fa88967e1faab01837a6dc7b4cdaa.jpg)
293
+
294
+ ![](images/d3350bbf6cad3e39df2eb1fda1b727e20be614965e3f2a5d4734c5852390c794.jpg)
295
+
296
+ ![](images/01b2cb2ce32dcb40936d0da34c614c553e2728ae624df36e9e8e484cc39d47f2.jpg)
297
+
298
+ ![](images/6dc9a4083a47561aa270757a5f76f585122a7fc25950c38d938d8f3205f7897c.jpg)
299
+ CVX+L
300
+
301
+ ![](images/f20a94fac38192ac0f101cc602918299ce709fff570e1360fb648db8a779a48b.jpg)
302
+ BAE
303
+
304
+ ![](images/1ad600c1076b4592ffb8429e5e277d40a5e51e96368586b22622b6dc56cfa5f5.jpg)
305
+
306
+ ![](images/9be0a1203af84f7e02ac9a443bbac03a715025137d04b0c68fa52e60e094861a.jpg)
307
+
308
+ ![](images/c7bae5536b86b371f4727f30c0c665380c559dbe1a0079f9eaebbe0714a7c57e.jpg)
309
+
310
+ ![](images/5d5c80ef9f5074d1ac70d706ddcfaf9c99e76d0122cf8f647e9c2cfcdab58e2c.jpg)
311
+ $\mathrm{BAE + L}$
312
+
313
+ ![](images/1d45affca44c582ecccf7aabed7711d8c732ff01d327841c9bb47b2d5c68b3ad.jpg)
314
+
315
+ ![](images/4b3b428ead50426a6feac7aa80bc6ea9369024846177dde96831aa5b16bb7fdc.jpg)
316
+
317
+ ![](images/80b1335c5586990efe763198b288cb5e64de785b67b4f9359368addcc0985f85.jpg)
318
+
319
+ ![](images/d57738d8270a05604ffbe63fc21971dfffc9559d5032d0cbdfa12ecbe020916b.jpg)
320
+
321
+ ![](images/04d2577b90c1e6646a69aa65ec170af8112b6001308c58960f6c23100387a1f6.jpg)
322
+ 3D-CG
323
+
324
+ ![](images/76c31482a63cb33025c1ffcad40b46b004c242872c474d825daab1c58cea6577.jpg)
325
+
326
+ ![](images/9337d1c16f1872d99185c7829097ec959da673b17e4d35faa85559dc03fe42fa.jpg)
327
+
328
+ ![](images/30a5000bc808f3a96fd020a4fe093a3e9f2fc65e3a687b2f8f6d3dd8b3bfc78a.jpg)
329
+
330
+ ![](images/ef70ba7b3609e356cb9e7d283bd2507e5142cbf3517bd33e3ab9e5f68685af0e.jpg)
331
+
332
+ ![](images/66f098696704e2cb3a3c1134541704b00a80716fe80fea2ad3686de8055be4b8.jpg)
333
+ (a) Visualization of segmentation results.
334
+
335
+ ![](images/8d4092ee0d09379bde1464440fa4ccfefa1348522b41e9bcaddecf383829c849.jpg)
336
+ Figure 4: Visualization of segmentation as well as generalization. Here we use ground-truth voxel values for better visualization. CVX and BAE are unsupervised segmentation methods. $\mathrm{CVX + L}$ and $\mathrm{BAE + L}$ are language-mediated methods which utilize the question answering loss to fine-tune the segmentation module. 3D-CG performs better in both semantic and instance segmentation, while other methods suffer from merging parts or segmenting a part into multiple parts. It can also generalize well to unseen categories and real scans.
337
+
338
+ ![](images/476f5d8b63ff9827a4799754da359d7f3a060d4997566aa44929983657b51e95.jpg)
339
+
340
+ ![](images/f138c91c449cbd53c5354c9785b4894dfac1cfd3169b11598e479ef31fc98d3b.jpg)
341
+
342
+ ![](images/b8fec541ef6efda93c46cde41dda1e5b0113fc3aa2182ec7d131a160870ec02b.jpg)
343
+
344
+ # Generalization to New Categories
345
+
346
+ ![](images/ae1f51a1f7ef37c045d4750be730decb2a0e6ede645aeaa0b7d3e0e72ce7c5f9.jpg)
347
+
348
+ ![](images/82b22cfd9176fa41645ff8b086df1cf72d3835c8b240c38ecf5db81f0487533b.jpg)
349
+ Count (Wheel)
350
+
351
+ ![](images/9b7af56eb246bbef2f3a637d02ccc10b03efc9b39ef8c7a833f219221f382cde.jpg)
352
+ Filter (Leg)
353
+
354
+ ![](images/d845f04870006d5dc0216a5fc72472fbc9fc52cea9b7668fc4499114aea81acc.jpg)
355
+
356
+ ![](images/6ea5c8be0aa010c80be5fd5384488e64e5bbdc7b2bce9caf7df71f92c11a4492.jpg)
357
+ Generalization to Real Scans
358
+
359
+ ![](images/4e628167f30fe24f6be2e09992bad008a779b015704c8ccb51d40c0e4eb125e7.jpg)
360
+ Instance Segmentation
361
+
362
+ ![](images/8e4f0b91d289c6961bd9e238be1686490d27d63fcf41529c34391b714dc3d314.jpg)
363
+ Filter (Back)
364
+
365
+ ![](images/9b5c008210f944875e64ef20ecc228b13a1f7e7ddd3c2dea46029a44f0bd2138.jpg)
366
+ (b) Generalization.
367
+
368
+ # 4.2.3 Generalization
369
+
370
+ In Figure 4b, we show some qualitative examples of how our 3D-CG trained on seen categories can be generalized directly to unseen categories and real scans. We show results of semantic and instance segmentations, as well as visualize the parts that are referred to in the questions.
371
+
372
+ <table><tr><td></td><td>Chair</td><td>Table</td><td>Bag</td><td>Cart</td><td>Mean</td><td></td><td>Chair</td><td>Table</td><td>Bag</td><td>Cart</td><td>Mean</td></tr><tr><td>CVX</td><td>62.3</td><td>74.2</td><td>66.0</td><td>44.2</td><td>61.7</td><td>CVX</td><td>44.1</td><td>40.6</td><td>31.2</td><td>22.5</td><td>34.6</td></tr><tr><td>CVX+L</td><td>64.6</td><td>74.5</td><td>70.2</td><td>51.3</td><td>65.2</td><td>CVX+L</td><td>42.6</td><td>51.2</td><td>41.3</td><td>20.4</td><td>38.9</td></tr><tr><td>BAE</td><td>49.5</td><td>71.0</td><td>64.1</td><td>49.7</td><td>58.6</td><td>BAE</td><td>53.3</td><td>32.2</td><td>39.8</td><td>34.0</td><td>39.9</td></tr><tr><td>BAE+L</td><td>56.3</td><td>72.3</td><td>69.8</td><td>46.9</td><td>61.3</td><td>BAE+L</td><td>50.1</td><td>47.0</td><td>34.1</td><td>36.6</td><td>42.0</td></tr><tr><td>3D-CG</td><td>76.6</td><td>79.3</td><td>67.3</td><td>54.2</td><td>69.4</td><td>3D-CG</td><td>68.5</td><td>71.2</td><td>47.2</td><td>40.5</td><td>56.9</td></tr></table>
373
+
374
+ (a) Semantic Segmentation IOU
375
+ (b) Instance Segmentation IOU
376
+ Table 2: Segmentation Results on Ground-Truth Occupancy Values. 3D-CG outperforms all unsupervised / language-mediated baseline models.
377
+
378
+ <table><tr><td></td><td>Chair</td><td>Table</td><td>Bag</td><td>Cart</td><td>Mean</td></tr><tr><td>CVX</td><td>38.1</td><td>49.5</td><td>40.3</td><td>26.0</td><td>38.5</td></tr><tr><td>CVX+L</td><td>34.0</td><td>39.0</td><td>38.3</td><td>29.2</td><td>35.1</td></tr><tr><td>BAE</td><td>36.3</td><td>50.1</td><td>50.0</td><td>38.5</td><td>43.8</td></tr><tr><td>BAE+L</td><td>41.1</td><td>51.6</td><td>51.7</td><td>34.5</td><td>44.8</td></tr><tr><td>3D-CG</td><td>69.3</td><td>65.7</td><td>56.9</td><td>48.2</td><td>60.0</td></tr></table>
379
+
380
+ (a) Semantic Segmentation IOU
381
+
382
+ <table><tr><td></td><td>Chair</td><td>Table</td><td>Bag</td><td>Cart</td><td>Mean</td></tr><tr><td>CVX</td><td>29.4</td><td>26.4</td><td>18.2</td><td>13.4</td><td>21.9</td></tr><tr><td>CVX+L</td><td>22.9</td><td>28.2</td><td>22.6</td><td>13.8</td><td>21.9</td></tr><tr><td>BAE</td><td>38.7</td><td>22.7</td><td>28.1</td><td>24.8</td><td>28.6</td></tr><tr><td>BAE+L</td><td>37.3</td><td>34.5</td><td>25.1</td><td>26.7</td><td>30.9</td></tr><tr><td>3D-CG</td><td>57.1</td><td>58.7</td><td>42.0</td><td>35.4</td><td>48.3</td></tr></table>
383
+
384
+ (b) Instance Segmentation IOU
385
+
386
+ Table 3: Segmentation Results on Predicted Occupancy Values.
387
+
388
+ For generalizing to new categories, we use the model trained on carts to ground and count the instances of the concept "wheel" in cars. We can see that the instance segmentation results on the wheels are not perfect because one wheel is in the wrong position. However, most of the wheels are detected and the model manages to output the right count. We also use the model trained on chairs to filter the legs of a bed. All the legs are successfully selected out by our 3D-CG.
389
+
390
+ We also use real 3D scans from the RedWood dataset [6] to evaluate 3D-CG's ability to generalize to real scenes. We use a single-view scan to reconstruct a partial point cloud and remove the background, such as the floor. We input the partial point cloud into our 3D-CG and output the segmentations on the ground-truth voxels. For generalizing to bicycles, we use the 3D-CG model trained on carts. We can see that 3D-CG can find all instances in the bicycle and detect both wheels. Furthermore, 3D-CG trained on chairs can also be generalized to chairs in real scans.
391
+
392
+ # 5 Discussion
393
+
394
+ Conclusion. In this paper, we propose 3D-CG, which leverages the continuous and differentiable nature of neural descriptor fields to segment and learn concepts in the 3D space. We define a set of neural operators on top of the neural field, with which not only can semantic and instance segmentations emerge from question-answering supervision, but visual reasoning can also be well performed.
395
+
396
+ Limitations and Future Work. A limitation of our 3D-CG framework is that, while we show transfer results on real scenes, our approach is only trained on synthetic scenes and questions. While we believe our proposed operators are general-purpose, an interesting direction for future work would be scaling our framework to directly train on complex real-world scenes and ground more concepts from natural language in these real scenes. Another promising direction would be to explore the combination of large pre-trained visual-language models and volumetric rendering on multi-view images.
397
+
398
+ # Acknowledgments and Disclosure of Funding
399
+
400
+ This work was supported by MIT-IBM Watson AI Lab and its member company Nexplore, Amazon Research Award, ONR MURI, DARPA Machine Common Sense program, ONR (N00014-18-1-2847), and Mitsubishi Electric.
401
+
402
+ # References
403
+
404
+ [1] J. Bloomenthal and B. Wyvill. Introduction to implicit surfaces. 1997. 3
405
+ [2] L. Bottou. From machine learning to machine reasoning. Machine Learning, 94:133-149, 2013. 1
406
+ [3] Z. Chen, J. Mao, J. Wu, K.-Y. K. Wong, J. B. Tenenbaum, and C. Gan. Grounding physical concepts of objects and events through dynamic visual reasoning. *ICLR*, 2021. 3
407
+ [4] Z. Chen, K. Yin, M. Fisher, S. Chaudhuri, and H. Zhang. Bae-net: Branched autoencoder for shape co-segmentation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8489-8498, 2019. 5, 7
408
+ [5] Z. Chen and H. Zhang. Learning implicit fields for generative shape modeling. In Proc. CVPR, pages 5939-5948, 2019. 3
409
+ [6] S. Choi, Q.-Y. Zhou, S. D. Miller, and V. Koltun. A large dataset of object scans. ArXiv, abs/1602.02481, 2016. 10
410
+ [7] B. Deng, K. Genova, S. Yazdani, S. Bouaziz, G. E. Hinton, and A. Tagliasacchi. Cvxnet: Learnable convex decomposition. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 31-41, 2020. 7
411
+ [8] Y. Deng, J. Yang, and X. Tong. Deformed implicit field: Modeling 3d shapes with learned dense correspondence. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10281-10291, 2021. 3
412
+ [9] Y. Du, M. K. Collins, B. J. Tenenbaum, and V. Sitzmann. Learning signal-agnostic manifolds of neural fields. In Advances in Neural Information Processing Systems, 2021. 3
413
+ [10] Y. Du, Y. Zhang, H.-X. Yu, J. B. Tenenbaum, and J. Wu. Neural radiance flow for 4d view synthesis and video processing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. 3
414
+ [11] C. Gan, Y. Li, H. Li, C. Sun, and B. Gong. Vqs: Linking segmentations to questions and answers for supervised attention in vqa and question-focused semantic segmentation. In ICCV, pages 1811–1820, 2017. 3
415
+ [12] S. Ganju, O. Russakovsky, and A. K. Gupta. What's in a question: Using visual questions as a form of supervision. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6422-6431, 2017. 3
416
+ [13] K. Genova, F. Cole, D. Vlasic, A. Sarna, W. T. Freeman, and T. A. Funkhouser. Learning shape templates with structured implicit functions. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7153-7163, 2019. 3
417
+ [14] X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Zero-shot detection via vision and language knowledge distillation. ArXiv, abs/2104.13921, 2021. 3
418
+ [15] R. Hu, A. Rohrbach, T. Darrell, and K. Saenko. Language-conditioned graph networks for relational reasoning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10293-10302, 2019. 3
419
+ [16] D. A. Hudson and C. D. Manning. Compositional attention networks for machine reasoning. *ICLR*, 2018. 3
420
+ [17] A. Jain, B. Mildenhall, J. T. Barron, P. Abbeel, and B. Poole. Zero-shot text-guided object generation with dream fields. arXiv, December 2021. 3
421
+ [18] Z. Jiang, Y. Zhu, M. Svetlik, K. Fang, and Y. Zhu. Synergies between affordance and geometry: 6-dof grasp detection via implicit representations. *ArXiv*, abs/2104.01542, 2021. 3
422
+
423
+ [19] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988–1997, 2017. 1, 6
424
+ [20] M. W. Jones, J. A. Bærentzen, and M. Sránek. 3d distance fields: a survey of techniques and applications. IEEE Transactions on Visualization and Computer Graphics, 12:581-599, 2006. 3
425
+ [21] A. Kamath, M. Singh, Y. LeCun, I. Misra, G. Synnaeve, and N. Carion. Mdetr - modulated detection for end-to-end multi-modal understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1760-1770, 2021. 3
426
+ [22] C. Kervadec, T. Jaunet, G. Antipov, M. Baccouche, R. Vuillemot, and C. Wolf. How transferable are reasoning patterns in vqa? 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4205-4214, 2021. 1
427
+ [23] S. Kollmannsberger, D. D'Angella, M. Jokeit, and L. A. Herrmann. Physics-informed neural networks. Deep Learning in Computational Mechanics, 2021. 3
428
+ [24] B. Li, K. Q. Weinberger, S. Belongie, V. Koltun, and R. Ranftl. Language-driven semantic segmentation, 2022. 3
429
+ [25] S. Liu, X. Zhang, Z. Zhang, R. Zhang, J. Zhu, and B. C. Russell. Editing conditional radiance fields. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5753-5763, 2021. 3
430
+ [26] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. Object-centric learning with slot attention. *ArXiv*, abs/2006.15055, 2020. 5
431
+ [27] A. Luo, Y. Du, M. J. Tarr, J. B. Tenenbaum, A. Torralba, and C. Gan. Learning neural acoustic fields. arXiv preprint arXiv:2204.00628, 2022. 3
432
+ [28] J. Mao, C. Gan, P. Kohli, J. B. Tenenbaum, and J. Wu. The neuro-symbolic concept learner: Interpreting scenes words and sentences from natural supervision. ArXiv, abs/1904.12584, 2019. 1, 2, 3, 6, 8
433
+ [29] L. M. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3d reconstruction in function space. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4455-4465, 2019. 3, 4
434
+ [30] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 3
435
+ [31] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 909–918, 2019. 6
436
+ [32] K. Museth, D. E. Breen, R. T. Whitaker, and A. H. Barr. Level set surface editing operators. Proceedings of the 29th annual conference on Computer graphics and interactive techniques, 2002. 3
437
+ [33] M. Niemeyer, L. Mescheder, M. Oechsle, and A. Geiger. Occupancy flow: 4d reconstruction by learning particle dynamics. In Proc. ICCV, 2019. 3
438
+ [34] M. Niemeyer, L. Mescheder, M. Oechsle, and A. Geiger. Occupancy flow: 4d reconstruction by learning particle dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pages 5379-5389, 2019. 3
439
+ [35] M. Niemeyer, L. Mescheder, M. Oechsle, and A. Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proc. CVPR, 2020. 3
440
+ [36] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proc. CVPR, 2019. 3
441
+
442
+ [37] K. Park, U. Sinha, J. T. Barron, S. Bouaziz, D. B. Goldman, S. M. Seitz, and R. M. Brualla. Deformable neural radiance fields. ArXiv, abs/2011.12948, 2020. 3
443
+ [38] S. Peng, M. Niemeyer, L. Mescheder, M. Pollefeys, and A. Geiger. Convolutional occupancy networks. In Proc. ECCV, 2020. 3
444
+ [39] D. Rebain, K. Li, V. Sitzmann, S. Yazdani, K. M. Yi, and A. Tagliasacchi. Deep medial fields. arXiv preprint arXiv:2106.03804, 2021. 3
445
+ [40] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proc. ICCV, pages 2304-2314, 2019. 3
446
+ [41] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural descriptor fields: Se (3)-equivariant object representations for manipulation. arXiv preprint arXiv:2112.05124, 2021. 2, 3, 4
447
+ [42] V. Sitzmann, M. Zollhöfer, and G. Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Proc. NeurIPS 2019, 2019. 3
448
+ [43] E. S. Spelke, K. Breinlinger, K. Jacobson, and A. Phillips. Gestalt Relations and Object Perception: A Developmental Study. Perception, 22(12):1483-1501, 1993. 2
449
+ [44] A. Tewari, J. Thies, B. Mildenhall, P. Srinivasan, E. Tretschk, W. Yifan, C. Lassner, V. Sitzmann, R. Martin-Brualla, S. Lombardi, et al. Advances in neural rendering. In Computer Graphics Forum, volume 41, pages 703-735. Wiley Online Library, 2022. 3
450
+ [45] I. Vendrov, R. Kiros, S. Fidler, and R. Urtasun. Order-embeddings of images and language. CoRR, abs/1511.06361, 2016. 3
451
+ [46] C. Wang, M. Chai, M. He, D. Chen, and J. Liao. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. ArXiv, abs/2112.05139, 2021. 3
452
+ [47] R. Wang, J. Mao, S. J. Gershman, and J. Wu. Language-mediated, object-centric representation learning. In FINDINGS, 2021. 3, 7, 8
453
+ [48] S. Wang, L. Li, Y. Ding, C. Fan, and X. Yu. Audio2head: Audio-driven one-shot talking-head generation with natural head motion. In IJCAI, 2021. 3
454
+ [49] Y. Xie, T. Takikawa, S. Saito, O. Litany, S. Yan, N. Khan, F. Tombari, J. Tompkin, V. Sitzmann, and S. Sridhar. Neural fields in visual computing and beyond. Computer Graphics Forum, 41, 2022. 3
455
+ [50] J. Xu, S. D. Mello, S. Liu, W. Byeon, T. Breuel, J. Kautz, and X. Wang. Groupvit: Semantic segmentation emerges from text supervision. ArXiv, abs/2202.11094, 2022. 3
456
+ [51] G. Yang, S. Belongie, B. Hariharan, and V. Koltun. Geometry processing with neural fields. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, 2021. 3
457
+ [52] L. Yariv, Y. Kasten, D. Moran, M. Galun, M. Atzmon, B. Ronen, and Y. Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. Proc. NeurIPS, 2020. 3
458
+ [53] K. Yi, J. Wu, C. Gan, A. Torralba, P. Kohli, and J. B. Tenenbaum. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In NeurIPS, 2018. 3, 5, 6, 8
459
+
460
+ # Checklist
461
+
462
+ 1. For all authors...
463
+
464
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
465
+ (b) Did you describe the limitations of your work? [Yes]
466
+ (c) Did you discuss any potential negative societal impacts of your work? [Yes]
467
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
468
+
469
+ 2. If you are including theoretical results...
470
+
471
+ (a) Did you state the full set of assumptions of all theoretical results? [N/A]
472
+ (b) Did you include complete proofs of all theoretical results? [N/A]
473
+
474
+ 3. If you ran experiments...
475
+
476
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
477
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
478
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No]
479
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
480
+
481
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
482
+
483
+ (a) If your work uses existing assets, did you cite the creators? [Yes]
484
+ (b) Did you mention the license of the assets? [Yes]
485
+ (c) Did you include any new assets either in the supplemental material or as a URL? [No]
486
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes]
487
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes]
488
+
489
+ 5. If you used crowdsourcing or conducted research with human subjects...
490
+
491
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
492
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
493
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
3dconceptgroundingonneuralfields/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13aa20de6ea01778e9ebe1a1614f8d10969183397fa6c5ed69429bf20266b17a
3
+ size 508690
3dconceptgroundingonneuralfields/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4db4a977c6167daaef94d31e93e9a8f8ff4f6f9c01d090067e978880d2a5e2fb
3
+ size 522850
3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e67302163d35961a7c57f443346676b104d3996552c62298ab92dae63ac778e5
3
+ size 83035
3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6a2c52fb3126b35653566cca3b9a96224d25dec9b0fa18344db56f8472c5788
3
+ size 108331
3dilgirregularlatentgridsfor3dgenerativemodeling/8fbd01f6-2eac-4b93-b500-5dd2007d58c3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6947f7672a0c3e3f9d63feeeda130183b70e87c77983b40724f085bcc42ebbb
3
+ size 7221014
3dilgirregularlatentgridsfor3dgenerativemodeling/full.md ADDED
@@ -0,0 +1,378 @@
1
+ # 3DILG: Irregular Latent Grids for 3D Generative Modeling
2
+
3
+ Biao Zhang
4
+
5
+ KAUST
6
+
7
+ biao.zhang@kaust.edu.sa
8
+
9
+ Matthias Nießner
10
+
11
+ Technical University of Munich
12
+
13
+ niessner@tum.de
14
+
15
+ Peter Wonka
16
+
17
+ KAUST
18
+
19
+ pwonka@gmail.com
20
+
21
+ # Abstract
22
+
23
+ We propose a new representation for encoding 3D shapes as neural fields. The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation. Existing works on neural fields are grid-based representations with latents defined on a regular grid. In contrast, we define latents on irregular grids, enabling our representation to be sparse and adaptive. In the context of shape reconstruction from point clouds, our shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy. For shape generation, our representation promotes high-quality shape generation using auto-regressive probabilistic models. We show different applications that improve over the current state of the art. First, we show results for probabilistic shape reconstruction from a single higher resolution image. Second, we train a probabilistic model conditioned on very low resolution images. Third, we apply our model to category-conditioned generation. All probabilistic experiments confirm that we are able to generate detailed and high quality shapes to yield the new state of the art in generative 3D shape modeling.
24
+
25
+ # 1 Introduction
26
+
27
+ ![](images/db45a56d0fecb2f1e76d28db09c40973c50b075164e8a8883cc8b2a4f9f53ad5.jpg)
28
+ Figure 1: Irregular Latent Grids enable many applications: (1) shape reconstruction, (2) high-resolution-image-conditioned generation, (3) low-resolution-image-conditioned generation, (4) point-cloud-conditioned generation, and (5) category-conditioned generation. The data structure is especially suited for auto-regressive modeling (applications 2-5). For each of these applications the probabilistic approach enables sampling of many plausible models for a single query.
29
+
30
+ Neural fields for 3D shapes are popular in machine learning because they are generally easier to process with neural networks than other alternative representations, e.g., triangle meshes or spline surfaces. Earlier works represent a shape with a single global latent [29, 13, 7, 36, 66]. This already gives promising results in shape autoencoding and shape reconstruction. However, shape details are often missing and hard to recover from a global latent. Later, local latent grids for shapes were proposed [39, 23, 3, 48]. A local latent mainly influences the shape (surface) in a local 3D neighborhood and can thus capture shape details. However, in contrast to global latents, local latents
31
+
32
+ ![](images/6b2e3edc9afd7985396e5926e2911fd9cf21f762888ce2bf7888e3e6fb7cd637.jpg)
33
+ Figure 2: Latent grids. From left to right, we show single global latent (e.g., OccNet [29]), latent grid (e.g. ConvONet [39]), multiscale latent grids (e.g. IF-Net [9]), and our irregular latent grid.
34
+
35
+ require positional information about their location, e.g., as implicitly defined by a regular grid. Furthermore, multiscale latent pyramid grids [9, 6] provide some performance improvement over basic grids. We illustrate the three current ways of modeling neural fields in Fig. 2.
36
+
37
+ In this paper, we set out to study irregular grids as 3D shape representation for neural fields. This extends previous grid-based representations, but allows each latent to have an arbitrary position in space, rather than a position that is pre-determined on a regular grid. We do not want to give up the advantages of a fixed length representation, and therefore propose to encode shapes as a fixed length sequence of tuples $(\mathbf{x}_i,\mathbf{z}_i)_{i\in \mathcal{M}}$ , where $\mathbf{x}_i$ are the 3D positions and $\mathbf{z}_i$ are the latents. The overall advantages of this representation are that it is fixed length, works well with transformer-based architectures using established building blocks, and that it can adapt to the 3D shape it encodes by placing latents at the most important positions. The representation also scales better than grid-based (and especially pyramid-based) representations to a larger number of latents. The number of latents in a representation is a factor that greatly influences the training time of transformer architectures, as the computation time is quadratic in the number of latents.
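+
+ For concreteness, a minimal container for such an irregular latent grid could look as follows; the class name, the latent width $C = 256$, and $M = 512$ are illustrative assumptions rather than anything prescribed by the paper.
+
+ ```python
+ from dataclasses import dataclass
+ import torch
+
+ @dataclass
+ class IrregularLatentGrid:
+     positions: torch.Tensor  # (M, 3) float: latent locations x_i, anywhere in space
+     latents: torch.Tensor    # (M, C) float: latent vectors z_i
+
+ grid = IrregularLatentGrid(
+     positions=torch.rand(512, 3),   # M = 512 latents at arbitrary positions
+     latents=torch.zeros(512, 256),  # C = 256 channels per latent
+ )
+ ```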
38
+
39
+ While our proposed representation brings some improvement for 3D shape reconstruction from point clouds, the improvement is very significant for 3D generative models that have been less explored. We believe that the most promising avenue for 3D shape generation is to follow recent image generative methods based on vector quantization and auto-regressive models using transformers, e.g., VQGAN [16]. These models operate on discrete latent vectors represented by indices [53]. During training, an autoregressive probabilistic model is trained to predict discrete indices. When doing inference (sampling), discrete indices are sampled from the autoregressive model and are decoded to images with a learned decoder. These models work well in the image domain, because a single image can comfortably be represented by a medium-size latent grid, e.g., a $16 \times 16$ grid of 256 latents [16]. This does not directly scale to 3D shapes, as a $16 \times 16 \times 16$ grid is too large to be comfortably trained on 4-8 GPUs due to the quadratic complexity of the transformer architecture [55]. As a result, generative models based on low-resolution regular grids lead to artifacts in the details, whereas our representation yields a nice and clean surface (e.g. Fig. 9).
40
+
41
+ We show the following applications of our proposed representations (see Fig. 1). For 3D shape reconstruction, we show results for 3D reconstruction from a point cloud. For generative modeling, we show results for image-conditioned 3D shape generation, object category-conditioned 3D shape generation, and point-cloud-conditioned 3D shape generation. The generative model is probabilistic and can generate multiple different 3D shapes for the same conditioning information.
42
+
43
+ We summarize the contributions of our work as follows: 1) We propose irregular latent grids as 3D shape representation for neural fields. Our shape representation thereby extends existing works using a global latent, a local latent grid or a multi-scale grid. 2) We improve upon grid-based SOTA methods for 3D shape reconstruction for point-clouds. 3) We improve state-of-the-art generative models for 3D shape generation, including image-conditioned generation, object category-conditioned generation and point cloud-conditioned generation.
44
+
45
+ # 2 Related Work
46
+
47
+ # 2.1 Neural shape representations
48
+
49
+ Shape analysis with neural networks processes shapes in different representations. Common representations include voxels [27, 10, 12, 44] and point clouds [40, 41, 57, 58, 51]. More recently, researchers study shapes represented with neural fields [63], e.g., signed distance functions (SDFs) or occupancy (indicator) functions of shapes modeled by neural networks. Subsequently, meshes can be extracted by contouring methods such as marching cubes [28]. The methods have been called
50
+
51
+ neural implicit representations [29, 30, 36, 18, 13, 7, 66] or coordinate-based representations [47]. We decided to use the term neural fields in this paper [63].
52
+
53
+ Point cloud processing. Earlier works for point cloud processing rely on multilayer perceptrons (MLPs), e.g., PointNet [40]. PointNet++ [41] and DGCNN [57] extended the idea by employing a hierarchical structure to capture local information. Inspired by ViT [15], which treats images as a set of patches, PCT [20] and PT [67] propose a transformer backbone for point cloud processing. Both works introduce extra modules which are not standard transformers [55] anymore. Since our main goal is not to develop a general backbone for point clouds, we simply work with a standard transformer for shape autoencoding (first-stage training). Similarly, both PointBERT[65] and PointMAE [34] use standard transformers for point cloud self-supervised learning.
54
+
55
+ Neural fields for 3D shapes. One possible approach is to represent a shape with a single global latent, e.g., OccNet [29], CvxNet [13] and Zhang et al. [66]. While this method is very simple, it is not suitable to encode surface details or shapes with complex topology. Later works studied the use of local latents. Compared to global latents, these representations represent shapes with multiple local latents. Each latent is responsible for reconstruction in a local region. ConvONet [39], as a follow-up work of OccNet, learns latents on a grid (2d or 3d). IF-Net [9] trains a model which outputs 3d latent grids of several different resolutions. The grids are then concatenated into a final representation. Some more recent works [17, 26, 49] learn to put latents on 3D positions.
56
+
57
+ # 2.2 Generative models
58
+
59
+ Generative models include generative adversarial networks (GANs) [19], variational autoencoders (VAEs) [24], energy-based models [25, 59, 61, 62, 60], normalizing flows (NFs) [43, 14] and auto-regressive models (ARs) [54]. Our shape representation is designed to be compatible with auto-regressive generative models. Thus we will mainly discuss related work in this area.
60
+
61
+ In the image domain, earlier AR works generate pixels one-by-one, e.g., PixelRNN [54] and its follow-up works [52, 45]. Combining this idea with autoregressive transformers, similar approaches are applied to 3D data, e.g., PointGrow [46] for point cloud generation and PolyGen [33] for polygonal mesh generation. Other use cases include floor plan generation [35] and indoor scene generation [56, 38].
62
+
63
+ An important ingredient of many auto-regressive models is vector quantization (VQ) that was originally integrated into VAEs. VQVAE [53] adopts a two-stage training strategy. The first stage is an autoencoder with vector quantization, where bottleneck latents are quantized to discrete indices. In the second stage, an autoregressive model is trained to predict the discrete indices. This idea is further developed in VQGAN [16], which builds the autoregressive model with a transformer [55]. Vector quantization has been shown to be an efficient way to generate high-resolution images. We study how VQ can be used in the 3D domain, specifically, neural fields for 3D shapes.
64
+
65
+ There are some works adapting VQVAE to the 3D domain. Canonical Mapping [8] proposes point cloud generation models with VQVAE. In concurrent work, AutoSDF [32] generalizes VQVAE to 3D voxels. However, the bottleneck latent grid resolution is low, making it difficult to capture details as ConvONet and IF-Net do. Increasing the resolution is the key to detailed 3D shape reconstruction. On the other hand, higher resolution leads to more difficult training of autoregressive transformers. Another concurrent work, ShapeFormer [64], shows a solution by sparsifying latent grids. However, ShapeFormer still only accepts voxel grids as input (point clouds need to be voxelized). Additionally, the sequence length varies across shapes, so the method must scan the whole dataset to determine a dataset-dependent maximum sequence length. In contrast to existing works, our method processes point clouds directly, learns fixed-size latents and outputs neural fields. A comparison summary can be found in Table 1.
66
+
67
+ # 3 Shape Reconstruction
68
+
69
+ Here we build a shape autoencoding method. Given a shape surface, we want to output the same shape surface. We preprocess a shape surface via uniform sampling into a point cloud $\{\mathbf{x}_i\colon \mathbf{x}_i\in \mathbb{R}^3\}_{i\in \mathcal{N}}$ (also in matrix form $\mathbf{X}\in \mathbb{R}^{N\times 3}$ ), and our goal is to predict the indicator function $\mathcal{O}(\cdot):\mathbb{R}^3\to [0,1]$ corresponding to the volume represented by the point cloud. Our representation is generated in
70
+
71
+ Table 1: Method comparison. All methods here take point clouds as input. If the column Requiring Voxelization says yes, the method requires the point cloud to be voxelized first. The numbers shown in parentheses are alternative choices of hyperparameters.
72
+
73
+ <table><tr><td></td><td>Local Latents</td><td>Requiring Voxelization</td><td>Output</td><td>Latent Resolution</td><td>Vector Quantization</td><td>Sequence Length</td><td>Comments</td></tr><tr><td>OccNet [29]</td><td>X</td><td>X</td><td>Fields</td><td>1</td><td>X</td><td>1</td><td></td></tr><tr><td>ConvONet [39]</td><td>✓</td><td>✓</td><td>Fields</td><td>32<sup>3</sup> (64<sup>3</sup>)</td><td>X</td><td>32<sup>3</sup> (64<sup>3</sup>)</td><td></td></tr><tr><td>IF-Net [9]</td><td>✓</td><td>✓</td><td>Fields</td><td>128<sup>3</sup></td><td>X</td><td>128<sup>3</sup></td><td>Multiscale</td></tr><tr><td>AutoSDF [32]</td><td>✓</td><td>✓</td><td>Voxels</td><td>8<sup>3</sup></td><td>✓</td><td>8<sup>3</sup></td><td></td></tr><tr><td>CanMap [8]</td><td>✓</td><td>X</td><td>Point Cloud</td><td>128</td><td>✓</td><td>128</td><td></td></tr><tr><td>ShapeFormer [64]</td><td>✓</td><td>✓</td><td>Fields</td><td>16<sup>3</sup> (32<sup>3</sup>)</td><td>✓</td><td>variable</td><td>Sparsification</td></tr><tr><td>3DILG (Ours)</td><td>✓</td><td>X</td><td>Fields</td><td>512</td><td>✓</td><td>512</td><td></td></tr></table>
74
+
75
+ ![](images/75e474971ffbe75086baadb80aa1b1633af1ac43c52b77b8f89afebfb34d88dc.jpg)
76
+ Figure 3: Shape reconstruction from point clouds. Left: the main pipeline. The framework can be used with (Bottom Right) or without (Top Right) vector quantization.
77
+
78
+ ![](images/7ed8b402e9a93df94d6e918d7542f1208af1be4a34ab7d7fee88f69b51e545f2.jpg)
79
+
80
+ ![](images/892604d62c33cfcc75d11e1c2d39c89d5d48adb99cf46acbf64c25f9d418b1ef.jpg)
81
+
82
+ three steps (See Fig. 3): patch construction (Sec. 3.1), patch information processing (Sec. 3.2), and reconstruction (Sec. 3.3).
83
+
84
+ # 3.1 Patch Construction
85
+
86
+ We sub-sample the input point cloud via Farthest Point Sampling (FPS),
87
+
88
+ $$
89
+ \operatorname{FPS}\left(\{\mathbf{x}_i\}_{i\in\mathcal{N}}\right) = \{\mathbf{x}_i\}_{i\in\mathcal{M}} = \mathbf{X}_{\mathcal{M}} \in \mathbb{R}^{M\times 3}, \tag{1}
90
+ $$
91
+
92
+ where $\mathcal{M} \subset \mathcal{N}$ and $|\mathcal{M}| = M$ . Next, for each point in the sub-sampled point set $\{\mathbf{x}_i\}_{i \in \mathcal{M}}$ , we apply the K-nearest neighbor (KNN) algorithm to find $K - 1$ points to form a point patch of size $K$ ,
93
+
94
+ $$
95
+ \forall i \in \mathcal{M}, \quad \operatorname{KNN}(\mathbf{x}_i) = \{\mathbf{x}_j\}_{j\in\mathcal{N}_i}, \tag{2}
96
+ $$
97
+
98
+ where $\mathcal{N}_i$ is the neighbor index set for point $\mathbf{x}_i$ and $|\mathcal{N}_i| = K - 1$ . Thus we have a collection of point patches,
99
+
100
+ $$
101
+ \left\{\left(\mathbf {x} _ {i}, \left\{\mathbf {x} _ {j} \right\} _ {j \in \mathcal {N} _ {i}}\right): | \mathcal {N} _ {i} | = K - 1 \right\} _ {i \in \mathcal {M}} = \mathbf {X} _ {\mathcal {M}, K} \in \mathbb {R} ^ {M \times K \times 3}. \tag {3}
102
+ $$
103
+
104
+ We project each patch with a mini-PointNet-like [40] module to an embedding vector
105
+
106
+ $$
107
+ \forall i \in \mathcal{M}, \quad \operatorname{PointNet}\left(\mathbf{x}_i, \{\mathbf{x}_j\}_{j\in\mathcal{N}_i}\right) = \mathbf{e}_i \in \mathbb{R}^{C}, \tag{4}
108
+ $$
109
+
110
+ where $C$ is the embedding dimension for patches.
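+
+ A rough PyTorch sketch of this patch construction (Eqs. 1-4) is given below; the helper names, the random FPS seed point, and the exact mini-PointNet layout are illustrative assumptions, not the released implementation.
+
+ ```python
+ import torch
+
+ def farthest_point_sampling(x, m):
+     """x: (N, 3). Greedily pick m point indices by farthest point sampling (Eq. 1)."""
+     n = x.shape[0]
+     idx = torch.zeros(m, dtype=torch.long)
+     idx[0] = torch.randint(n, (1,)).item()
+     dist = torch.full((n,), float("inf"))
+     for i in range(1, m):
+         dist = torch.minimum(dist, ((x - x[idx[i - 1]]) ** 2).sum(-1))
+         idx[i] = torch.argmax(dist)
+     return idx
+
+ def knn_patches(x, centers, k):
+     """Group the k nearest input points around each center (Eqs. 2-3)."""
+     nn_idx = torch.cdist(centers, x).topk(k, largest=False).indices  # (M, K)
+     return x[nn_idx]                                                 # (M, K, 3)
+
+ class MiniPointNet(torch.nn.Module):
+     """Per-patch embedding (Eq. 4): a pointwise MLP followed by max-pooling."""
+     def __init__(self, c=256):
+         super().__init__()
+         self.mlp = torch.nn.Sequential(
+             torch.nn.Linear(3, 128), torch.nn.ReLU(), torch.nn.Linear(128, c))
+     def forward(self, patches, centers):
+         local = patches - centers[:, None, :]     # coordinates relative to the patch center
+         return self.mlp(local).max(dim=1).values  # (M, C)
+
+ x = torch.rand(2048, 3)                   # N = 2048 input points
+ centers = x[farthest_point_sampling(x, 512)]
+ patches = knn_patches(x, centers, 32)     # K = 32 points per patch
+ e = MiniPointNet(256)(patches, centers)   # patch embeddings e_i
+ ```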
111
+
112
+ # 3.2 Transformer
113
+
114
+ Furthermore, we build a transformer to learn local latents $\{\mathbf{z}_i\}_{i\in \mathcal{M}}$ . The point coordinates $\{\mathbf{x}_i\}_{i\in \mathcal{M}}$ are converted to positional embeddings (PEs) [31], $\mathbf{p}_i = \mathrm{PE}(\mathbf{x}_i)\in \mathbb{R}^C$ ,
115
+
116
+ $$
117
+ \operatorname{PE}(\mathbf{x}) = \left[\sin(2^0\mathbf{x}), \cos(2^0\mathbf{x}), \sin(2^1\mathbf{x}), \cos(2^1\mathbf{x}), \dots, \sin(2^7\mathbf{x}), \cos(2^7\mathbf{x})\right]. \tag{5}
118
+ $$
119
+
120
+ Our transformer takes as input the sequence of patch-PE pairs,
121
+
122
+ $$
123
+ \operatorname{Transformer}\left(\{(\mathbf{e}_i, \mathbf{p}_i)\}_{i\in\mathcal{M}}\right) = \{\mathbf{z}_i : \mathbf{z}_i \in \mathbb{R}^{C}\}_{i\in\mathcal{M}}. \tag{6}
124
+ $$
125
+
126
+ The transformer is composed of $L$ blocks. We denote the intermediate outputs of all blocks as $\mathbf{z}_i^{(0)},\mathbf{z}_i^{(1)},\dots ,\mathbf{z}_i^{(l)},\dots ,\mathbf{z}_i^{(L)}$ , where $\mathbf{z}_i^{(0)}$ is the input $\mathbf{e}_i$ and $\mathbf{z}_i^{(L)}$ is the output $\mathbf{z}_i$ .
127
+
128
+ ![](images/04c257c2bf39d70364a9b7d3533f7b91063b9721a1bcc4c2546a0a395f2a90eb.jpg)
129
+ Figure 4: Autoregressive Generative Models with Unidirectional Transformer. Left: sequence prediction. Right: detailed visualizations with sequence element components.
130
+
131
+ ![](images/9051eda1f652eb492df2ee667ceed66365df824ba3ff4bfeea203b705098e856.jpg)
132
+ Figure 5: Autoregressive Generative Models with Bidirectional Transformer. Cubes (□) are predicted tokens. From left to right, we show 8 decoding steps.
133
+
134
+ Vector Quantization. This model can be used with vector quantization [53]. We simply replace the intermediate output at block $l$ with its closest vector in a dictionary $\mathcal{D}$ ,
135
+
136
+ $$
137
+ \forall i \in \mathcal {M}, \quad \underset {\hat {\mathbf {z}} _ {i} ^ {(l)} \in \mathcal {D}} {\arg \min } \left\| \hat {\mathbf {z}} _ {i} ^ {(l)} - \mathbf {z} _ {i} ^ {(l)} \right\|. \tag {7}
138
+ $$
139
+
140
+ The dictionary $\mathcal{D}$ contains $|\mathcal{D}| = D$ vectors. As in VQVAE [53], the gradient with respect to $\mathbf{z}_i^{(l)}$ is approximated with the straight-through estimator [1].
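+
+ A minimal sketch of this quantization step (Eq. 7) with the straight-through gradient trick of VQVAE [53] follows; the dictionary size and latent width are placeholders.
+
+ ```python
+ import torch
+
+ def vector_quantize(z, dictionary):
+     """z: (M, C) latents, dictionary: (D, C) codebook. Returns quantized latents and indices."""
+     idx = torch.cdist(z, dictionary).argmin(dim=-1)  # nearest code per latent (Eq. 7)
+     z_q = dictionary[idx]                            # (M, C)
+     # Straight-through estimator [1]: the forward pass uses z_q, gradients flow back to z.
+     z_q = z + (z_q - z).detach()
+     return z_q, idx
+
+ codebook = torch.nn.Parameter(torch.randn(1024, 256))   # D = 1024 entries
+ z = torch.randn(512, 256, requires_grad=True)
+ z_q, idx = vector_quantize(z, codebook)
+ ```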
141
+
142
+ # 3.3 Reconstruction
143
+
144
+ We expect that each latent $\mathbf{z}_i$ is responsible for shape reconstruction near the point $\mathbf{x}_i$ . However, our goal is to estimate $\mathcal{O}(\cdot)$ for an arbitrary $\mathbf{x} \in \mathbb{R}^3$ . We interpolate latent $\mathbf{z}_{\mathbf{x}}$ for point $\mathbf{x}$ with the Nadaraya-Watson estimator,
145
+
146
+ $$
147
+ \mathbf {z} _ {\mathbf {x}} = \frac {\sum_ {i \in \mathcal {M}} \exp (- \beta \| \mathbf {x} - \mathbf {x} _ {i} \| ^ {2}) \mathbf {z} _ {i}}{\sum_ {i \in \mathcal {M}} \exp (- \beta \| \mathbf {x} - \mathbf {x} _ {i} \| ^ {2})}, \tag {8}
148
+ $$
149
+
150
+ where $\beta$ controls the smoothness of the interpolation and can be fixed or learned. The final indicator is predicted by an MLP with a sigmoid activation,
151
+
152
+ $$
153
+ \hat{\mathcal{O}}(\mathbf{x}) = \operatorname{Sigmoid}\left(\operatorname{MLP}\left(\mathbf{x}, \mathbf{z}_{\mathbf{x}}\right)\right). \tag{9}
154
+ $$
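+
+ The interpolation of Eq. 8 and the occupancy decoder of Eq. 9 can be written compactly as below; the MLP width, the fixed $\beta$, and the tensor sizes are illustrative assumptions.
+
+ ```python
+ import torch
+
+ def interpolate_latents(query, centers, latents, beta=10.0):
+     """Eq. 8: Nadaraya-Watson (kernel-weighted) average of the latents z_i.
+     query: (Q, 3), centers: (M, 3), latents: (M, C) -> (Q, C)."""
+     sq_dist = torch.cdist(query, centers) ** 2        # (Q, M)
+     weights = torch.softmax(-beta * sq_dist, dim=-1)  # each row sums to 1
+     return weights @ latents
+
+ class OccupancyDecoder(torch.nn.Module):
+     """Eq. 9: an MLP applied to (x, z_x), followed by a sigmoid."""
+     def __init__(self, c=256, hidden=256):
+         super().__init__()
+         self.mlp = torch.nn.Sequential(
+             torch.nn.Linear(3 + c, hidden), torch.nn.ReLU(),
+             torch.nn.Linear(hidden, 1))
+     def forward(self, query, z_query):
+         return torch.sigmoid(self.mlp(torch.cat([query, z_query], dim=-1)).squeeze(-1))
+
+ centers, latents = torch.rand(512, 3), torch.randn(512, 256)  # x_i and z_i
+ query = torch.rand(4096, 3)                                   # points where occupancy is evaluated
+ occupancy = OccupancyDecoder(256)(query, interpolate_latents(query, centers, latents))
+ ```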
155
+
156
+ Loss Function. We optimize the estimated $\hat{\mathcal{O}} (\cdot)$ by comparing to ground-truth $\mathcal{O}(\cdot)$ via a binary cross entropy loss (BCE) and an additional commitment loss term [53],
157
+
158
+ $$
159
+ \mathcal{L} = \mathcal{L}_{\text{recon}} + \lambda \mathcal{L}_{\text{commit}} = \mathbb{E}_{\mathbf{x}\in\mathbb{R}^3}\left[\operatorname{BCE}\left(\hat{\mathcal{O}}(\mathbf{x}), \mathcal{O}(\mathbf{x})\right)\right] + \lambda\, \mathbb{E}_{\mathbf{x}\in\mathbb{R}^3}\left[\mathbb{E}_{i\in\mathcal{M}}\left\|\operatorname{sg}\left(\hat{\mathbf{z}}_i^{(l)}\right) - \mathbf{z}_i^{(l)}\right\|^2\right], \tag{10}
160
+ $$
161
+
162
+ where $\mathrm{sg}(\cdot)$ is the stop-gradient operation. Setting $\lambda = 0$ corresponds to training the model without vector quantization.
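+
+ The resulting training objective (Eq. 10) reduces to a few lines; the sampling of evaluation points and the value of $\lambda$ are implementation choices left out of this sketch.
+
+ ```python
+ import torch.nn.functional as F
+
+ def training_loss(pred_occ, gt_occ, z, z_quantized, lam=1.0):
+     """Eq. 10: BCE reconstruction term plus the commitment term.
+     pred_occ/gt_occ: (Q,) occupancies in [0, 1]; z/z_quantized: (M, C) latents at block l."""
+     recon = F.binary_cross_entropy(pred_occ, gt_occ)
+     commit = F.mse_loss(z, z_quantized.detach())  # sg(.) applied to the quantized latents
+     return recon + lam * commit
+ ```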
163
+
164
+ # 4 Autoregressive Generative Modeling
165
+
166
+ With Vector Quantization (Eq. 7), we compress each intermediate latent in $\{\hat{\mathbf{z}}_i^{(l)}\}_{i\in \mathcal{M}}$ to $\log_2 D$ bits, where $D$ is the size of the dictionary $\mathcal{D}$ . We denote the compressed index as $z_{i}\in \{0,1,\ldots ,D - 1\}$ . We also quantize point coordinates $\mathbf{x}_i$ to $(x_{i,1},x_{i,2},x_{i,3})$ , where each entry is an 8-bit integer in $\{0,1,\dots ,255\}$ . As a result, we obtain a discrete representation of the 3D shape $\{(x_{i,1},x_{i,2},x_{i,3},z_i)\}_{i\in \mathcal{M}}$ .
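+
+ A small sketch of this discretization step is shown below; the assumption that coordinates are normalized to $[-1, 1]$ is ours, the text only states that each component becomes an 8-bit integer.
+
+ ```python
+ import torch
+
+ def discretize(positions, codes, n_bins=256):
+     """Map the irregular grid to integer quadruplets (x1, x2, x3, z).
+     positions: (M, 3) floats, assumed normalized to [-1, 1]; codes: (M,) dictionary indices."""
+     x = ((positions + 1.0) / 2.0 * (n_bins - 1)).round().long().clamp(0, n_bins - 1)
+     return torch.cat([x, codes[:, None]], dim=-1)  # (M, 4)
+
+ quads = discretize(torch.rand(512, 3) * 2 - 1, torch.randint(0, 1024, (512,)))
+ ```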
167
+
168
+ # 4.1 Unidirectional Transformer
169
+
170
+ Autoregressive generation with a unidirectional transformer is the more classical approach. Since the latents form an unordered set, we first need to sequentialize them. Specifically, we re-order the representations in ascending order by the first component $x_{i,1}$ , then by the second component $x_{i,2}$ , and finally by the third component $x_{i,3}$ ,
171
+
172
+ $$
173
+ \mathcal {S} = \left\{\left(x _ {0, 1}, x _ {0, 2}, x _ {0, 3}, z _ {0}\right), \left(x _ {1, 1}, x _ {1, 2}, x _ {1, 3}, z _ {1}\right), \dots , \left(x _ {M - 1, 1}, x _ {M - 1, 2}, x _ {M - 1, 3}, z _ {M - 1}\right) \right\}. \tag {11}
174
+ $$
175
+
176
+ Our goal is to predict sequence elements one-by-one. A common way is to flatten the sequence $\mathcal{S}$ by concatenating the quadruplets, as done in, for example, PointGrow [46] and PolyGen [33]. However, here we consider an approach that generates the sequence quadruplet-by-quadruplet. We write $\mathbf{o}_i = (x_{i,1}, x_{i,2}, x_{i,3}, z_i)$ . The likelihood of generating $\mathcal{S}$ autoregressively is
177
+
178
+ $$
179
+ p (\mathcal {S} \mid \mathcal {C}) = \prod_ {i = 0} ^ {i = M - 1} p \left(\mathbf {o} _ {i} \mid \mathbf {o} _ {< i}, \mathcal {C}\right), \tag {12}
180
+ $$
181
+
182
+ where $\mathcal{C}$ is the conditioning context. Similar to ATISS [38], we predict the components of $\mathbf{o}_i = (x_{i,1}, x_{i,2}, x_{i,3}, z_i)$ autoregressively:
183
+
184
+ $$
185
+ \begin{array}{l} p (\mathbf {o} _ {i} \mid \mathbf {o} _ {< i}, \mathcal {C}) \\ = p \left(x _ {i, 1} \mid \mathbf {o} _ {< i}, \mathcal {C}\right) \cdot p \left(x _ {i, 2} \mid x _ {i, 1}, \mathbf {o} _ {< i}, \mathcal {C}\right) \cdot p \left(x _ {i, 3} \mid x _ {i, 2}, x _ {i, 1}, \mathbf {o} _ {< i}, \mathcal {C}\right) \cdot p \left(z _ {i} \mid x _ {i, 3}, x _ {i, 2}, x _ {i, 1}, \mathbf {o} _ {< i}, \mathcal {C}\right). \tag {13} \\ \end{array}
186
+ $$
187
+
188
+ The model is shown in Fig. 4. Unlike ATISS [38], which uses MLPs to decode the different components, we continue to apply transformer blocks for component decoding.
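+
+ The ordering of Eq. 11 and the component-wise factorization of Eq. 13 can be sketched as follows; `model` is a stand-in for the GPT-style transformer (the paper decodes components with additional transformer blocks), so this illustrates the factorization rather than the actual architecture.
+
+ ```python
+ import torch
+
+ quadruplets = torch.cat([torch.randint(0, 256, (512, 3)),        # quantized coordinates
+                          torch.randint(0, 1024, (512, 1))], -1)  # dictionary indices z_i
+
+ def lexicographic_order(q):
+     """Sort the (x1, x2, x3, z) tuples by x1, then x2, then x3 (Eq. 11)."""
+     key = (q[:, 0] * 256 + q[:, 1]) * 256 + q[:, 2]  # 8-bit components -> single sort key
+     return q[key.argsort()]
+
+ seq = lexicographic_order(quadruplets)
+
+ def sample_quadruplet(model, prefix_tokens, context):
+     """Eq. 13: the four components of o_i are sampled one after another, each
+     conditioned on everything sampled so far. `model` is assumed to return logits
+     over the next token given the partial sequence and the context C."""
+     o_i = []
+     for _ in range(4):  # x_{i,1}, x_{i,2}, x_{i,3}, then z_i
+         logits = model(prefix_tokens + o_i, context)
+         o_i.append(int(torch.distributions.Categorical(logits=logits).sample()))
+     return o_i
+ ```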
189
+
190
+ # 4.2 Bidirectional Transformer
191
+
192
+ Using a bidirectional transformer for autoregressive generation was recently proposed in MaskGIT [5]. Here we show that our model can also be combined with a bidirectional transformer, but for a different task: we generate $\{z_i\}_{i\in \mathcal{M}}$ conditioned on $\{(x_{i,1},x_{i,2},x_{i,3})\}_{i\in \mathcal{M}}$ . In the training phase, we sample a subset $\{z_i\}_{i\in \mathcal{V}\subset \mathcal{M}}$ of $\{z_i\}_{i\in \mathcal{M}}$ as the input of the bidirectional transformer and aim to predict $\{z_i\}_{i\in \mathcal{M}\backslash \mathcal{V}}$ . The coordinates $\{(x_{i,1},x_{i,2},x_{i,3})\}_{i\in \mathcal{M}}$ are converted to positional embeddings (either learned or fixed) that serve as the condition. The likelihood of generating $\{z_i\}_{i\in \mathcal{M}\setminus \mathcal{V}}$ is as follows,
193
+
194
+ $$
195
+ \prod_ {\mathcal {V} \subset \mathcal {M}} p \left(\left\{z _ {i} \right\} _ {i \in \mathcal {M} \backslash \mathcal {V}} \mid \left\{z _ {i} \right\} _ {i \in \mathcal {V}}, \left\{\left(x _ {i, 1}, x _ {i, 2}, x _ {i, 3}\right) \right\} _ {i \in \mathcal {M}}\right). \tag {14}
196
+ $$
197
+
198
+ In practice, the bidirectional transformer takes all tokens as input, with $\{z_i\}_{i\in \mathcal{M}\setminus \mathcal{V}}$ replaced by a special mask token $[m]$. When decoding (inference), we iteratively predict multiple tokens at a time; tokens are sampled based on their predicted probabilities (the transformer output). See [5] for a detailed explanation. We show a visualization of the decoding steps in Fig. 5.
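+
+ A sketch of this iterative parallel decoding (in the spirit of MaskGIT [5]) is given below; the linear unmasking schedule and the `model(tokens, positions)` interface are assumptions, not the paper's exact scheme.
+
+ ```python
+ import torch
+
+ MASK = -1  # placeholder id for the special mask token [m]
+
+ def iterative_decode(model, positions, num_tokens=512, steps=8, codebook_size=1024):
+     """Start fully masked, repeatedly predict all tokens in parallel and keep the
+     most confident ones, until every token is decoded. `model(tokens, positions)`
+     is assumed to return logits of shape (num_tokens, codebook_size)."""
+     tokens = torch.full((num_tokens,), MASK, dtype=torch.long)
+     for step in range(steps):
+         probs = model(tokens, positions).softmax(dim=-1)
+         sampled = torch.distributions.Categorical(probs=probs).sample()
+         confidence = probs.gather(-1, sampled[:, None]).squeeze(-1)
+         confidence[tokens != MASK] = float("inf")    # already-decoded tokens stay fixed
+         keep = int(num_tokens * (step + 1) / steps)  # simple linear unmasking schedule
+         top = confidence.topk(keep).indices
+         new_tokens = tokens.clone()
+         new_tokens[top] = torch.where(tokens[top] != MASK, tokens[top], sampled[top])
+         tokens = new_tokens
+     return tokens
+ ```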
199
+
200
+ # 5 Reconstruction Experiments
201
+
202
+ We set the size of the input point cloud to $N = 2048$ . The number and the size of point patches are $M = 512$ and $K = 32$ , respectively. In the case of Vector Quantization, there are $D = 1024$ vectors in the dictionary $\mathcal{D}$ . Other details of the implementation can be found in the Appendix. We use the dataset ShapeNet-v2 [4] for shape reconstruction. We split samples into train/val/test (48597/1283/2592) sets. Following the evaluation protocol of [13, 66], we report three metrics: volumetric IoU, the Chamfer-L1 distance, and F-Score [50]. We also show reconstruction results on another object-level dataset ABO [11], a real-world dataset D-FAUST [2], and a synthetic scene-level dataset in the appendix.
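+
+ For reference, two of the metrics can be sketched roughly as below; the choice of evaluation points, the number of surface samples, and the F-Score threshold are assumptions following common practice ([50] and the appendix give the exact protocol).
+
+ ```python
+ import torch
+
+ def volumetric_iou(pred_occ, gt_occ):
+     """IoU between predicted and ground-truth occupancy (boolean tensors over sampled points)."""
+     inter = (pred_occ & gt_occ).float().sum()
+     union = (pred_occ | gt_occ).float().sum()
+     return (inter / union).item()
+
+ def f_score(pred_pts, gt_pts, tau=0.01):
+     """F-Score at threshold tau between predicted and ground-truth surface samples."""
+     d = torch.cdist(pred_pts, gt_pts)                       # (P, G) pairwise distances
+     precision = (d.min(dim=1).values < tau).float().mean()  # predicted points close to GT
+     recall = (d.min(dim=0).values < tau).float().mean()     # GT points close to prediction
+     return (2 * precision * recall / (precision + recall + 1e-8)).item()
+ ```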
203
+
204
+ Results. We compare our method with three existing works: OccNet [29], ConvONet [39] and IF-Net [9]. The results can be found in Table 2. We select the 7 categories with the largest training sets among the 55 categories (table, car, chair, airplane, sofa, rifle, lamp). Detailed results can be found in the Appendix. We show different choices of $M$ . As we can see in this table, increasing $M$ from 64 to 512 gives a performance boost. Even with the simplest model, $M = 64$ , our results outperform ConvONet. The best results are achieved when setting $M = 512$ . In this case, our results lead in most categories
205
+
206
+ Table 2: Shape reconstruction. We train all models on ShapeNet-v2 (55 categories). All baseline methods are trained with the corresponding officially released code. For our model, we set $N = 2048$ and $K = 32$ . We show the 7 categories with the largest training sets among the 55 categories, as well as metrics averaged over all categories. The numbers shown in parentheses are results with vector quantization.
207
+
208
+ <table><tr><td rowspan="2" colspan="2"></td><td rowspan="2">OccNet</td><td rowspan="2">ConvONet</td><td rowspan="2">IF-Net</td><td colspan="4">3DILG (Ours)</td></tr><tr><td>M=64</td><td>M=128</td><td>M=256</td><td>M=512</td></tr><tr><td rowspan="2">IoU ↑</td><td>mean (selected)</td><td>0.822</td><td>0.881</td><td>0.929</td><td>0.922(0.904)</td><td>0.936(0.929)</td><td>0.945(0.943)</td><td>0.952(0.950)</td></tr><tr><td>mean (all)</td><td>0.825</td><td>0.888</td><td>0.934</td><td>0.923(0.907)</td><td>0.937(0.929)</td><td>0.946(0.943)</td><td>0.953(0.950)</td></tr><tr><td rowspan="2">Chamfer ↓</td><td>mean (selected)</td><td>0.058</td><td>0.040</td><td>0.034</td><td>0.038(0.038)</td><td>0.035(0.036)</td><td>0.034(0.034)</td><td>0.032(0.030)</td></tr><tr><td>mean (all)</td><td>0.072</td><td>0.052</td><td>0.041</td><td>0.048(0.052)</td><td>0.044(0.046)</td><td>0.041(0.042)</td><td>0.040(0.040)</td></tr><tr><td rowspan="2">F-Score ↑</td><td>mean (selected)</td><td>0.898</td><td>0.951</td><td>0.975</td><td>0.959(0.948)</td><td>0.968(0.964)</td><td>0.972(0.969)</td><td>0.976(0.975)</td></tr><tr><td>mean (all)</td><td>0.858</td><td>0.933</td><td>0.967</td><td>0.942(0.926)</td><td>0.955(0.948)</td><td>0.963(0.958)</td><td>0.966(0.965)</td></tr></table>
209
+
210
+ when compared to IF-Net. The results of IF-Net are better than ours in terms of F-Score in some categories. However, we argue that in this case the F-Score metric is saturated (values are close to 1), which makes the comparison difficult. We also show results after introducing vector quantization to our model in the same table. We can see that vector quantization harms the performance slightly.
211
+
212
+ Qualitative Comparison. Qualitative results are shown in Fig. 6. From the visualization, it can be seen that the reconstruction quality increases as $M$ increases, particularly for shapes with complex topology. OccNet, as a global latent method, often fails to recover complex structures. ConvONet can recover better structures than OccNet, due to its localized latents. By learning multiscale latents, IF-Net improves further upon ConvONet. Our method with $M = 64$ outperforms ConvONet, and with $M = 128$ , the results are comparable with IF-Net.
213
+
214
+ # 6 Generative Experiments
215
+
216
+ We introduce three experiments to show how our model can be combined with auto-regressive transformers for generative modeling. In Sec. 6.1, we show image-conditioned generation as probabilistic shape reconstruction from a single image. In Sec. 6.2, we show how we generate samples given a category label (using the 55 ShapeNet categories). In Sec. 6.3, we further show generation conditioned on downsampled point clouds. In contrast to the first two tasks, the point-cloud-conditioned generation is based on the bidirectional transformer described in Sec. 4.2. More generative results on the additional datasets ABO and D-FAUST can be found in the appendix.
217
+
218
+ # 6.1 Probabilistic Shape Reconstruction from a Single Image
219
+
220
+ We train a uni-directional transformer for this task (see Sec. 4.1). The context $\mathcal{C}$ is an image. To train this model, we render 40 images $(224 \times 224)$ of different views for each shape in ShapeNet. The implementation of the uni-directional transformer is based on GPT [42]. It contains 24 blocks, where each block has an attention layer with 16 heads and an embedding dimension of 1024. When sampling, nucleus sampling [22] with top-$p$ (0.85) and top-$k$ (100) is applied to the predicted token probabilities.
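+
+ The quoted top-$k$ and top-$p$ values can be applied to the logits of each predicted token roughly as follows; the filtering function is a generic sketch, not the paper's code.
+
+ ```python
+ import torch
+
+ def filter_logits(logits, top_k=100, top_p=0.85):
+     """logits: (V,) for one token. Keep the top-k logits, then keep the smallest
+     nucleus whose probability mass reaches top_p; everything else is set to -inf."""
+     logits = logits.clone()
+     kth = logits.topk(top_k).values[-1]
+     logits[logits < kth] = float("-inf")
+     sorted_logits, sorted_idx = logits.sort(descending=True)
+     cumprobs = sorted_logits.softmax(dim=-1).cumsum(dim=-1)
+     remove = cumprobs > top_p
+     remove[1:] = remove[:-1].clone()  # shift so the token crossing the boundary is kept
+     remove[0] = False
+     logits[sorted_idx[remove]] = float("-inf")
+     return logits
+
+ next_token = torch.distributions.Categorical(logits=filter_logits(torch.randn(1024))).sample()
+ ```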
221
+
222
+ High resolution images. We show generated samples for which $\mathcal{C}$ is a high-resolution image in Fig. 7. We compare our results with a deterministic method, OccNet [29]. As we can see in the results, the deterministic method (OccNet) tends to create blurred meshes, whereas our probabilistic reconstruction is able to output detailed meshes.
223
+
224
+ Low resolution images. We also consider another more challenging task. The input images are downsampled to low resolution $(16 \times 16)$ . In this case, the generative model has more freedom to find multiple plausible interpretations, including variations with different topology (see Fig. 8).
225
+
226
+ ![](images/50c5ae517029d5e2b1a8bdbc070b9c8ef134b6331f2cc96571b3aa571b36e008.jpg)
227
+ Figure 6: Shape reconstruction. The column Input shows input point clouds of size 2048. The column GT shows ground-truth meshes. We compare our results with different $M$ to OccNet, ConvONet and IF-Net. We also show $\{\mathbf{x}_i\}_{i\in \mathcal{M}}$ obtained via Farthest Point Sampling.
228
+
229
+ # 6.2 Category-Conditioned Generation
230
+
231
+ To generate shapes given a category label, we use the context $\mathcal{C}$ to encode the category label. We employ the uni-directional transformer based on GPT as in Sec. 6.1. We compare to our proposed baseline model that encodes shapes as a latent grid of resolution $8^{3}$ . To make a fair comparison, we also extend ViT [15] to the 3D voxel domain in the first stage of training. In the second stage, we use the same uni-directional transformer for sequence prediction. Here the sequence length is $8^{3} = 512$ , which is the same as in our proposed model. The baseline model is named Grid-8<sup>3</sup>. The comparison to the baseline model is in Fig. 9. We can see that the baseline is unable to generate high quality shapes. We argue that this is because the representation is not expressive enough to capture surface details. Furthermore, we show more generated samples of our model in Fig. 10. The three selected categories are bookshelf, bench and chair. For both the baseline and our model, we render 10 images of predicted shapes, and calculate the Fréchet Inception Distance (FID) [21, 37] between predictions and test sets. The metrics can be found in Table 3. A perceptual study on the quality of generated samples can be found in the appendix.
232
+
233
+ # 6.3 Point Cloud Conditioned Generation
234
+
235
+ We train a bidirectional transformer described in Sec 4.2. The model takes as input $\{\mathbf{x}_i\}_{i\in \mathcal{M}}$ . The results can be found in Fig. 11.
236
+
237
+ ![](images/b7d25cfb8a0ffaae67e2368f523b2be2fd57b48f2ded7fa2be0817bea0385f80.jpg)
238
+ Figure 7: Image-conditioned generation $(224 \times 224)$ . We sample 2 shapes for each input image, and compare them with OccNet.
239
+
240
+ ![](images/2a6387706a80fa57c36073ce3d9a5c000c6b3b08a331e0c925d32a47c23d953b.jpg)
241
+ Figure 8: Image-conditioned generation $(16 \times 16)$ . We sample 8 shapes for each input image.
242
+
243
+ # 7 Conclusion
244
+
245
+ We have studied neural fields for shapes and presented a new representation. In contrast to common approaches which define latents on a regular grid or multiple regular grids, we position latents on an irregular grid. Compared to existing works, the representation scales better to larger models, because the irregular grid is sparse and adapts to the underlying 3D shape of the object. In our results, we demonstrated an improvement over alternative grid-based methods in 3D shape reconstruction from point clouds and in generative modeling conditioned on images, object category, or point clouds. In future work, we suggest exploring other applications of our proposed representation, e.g., shape completion, and extensions to textured 3D shapes and 3D scenes.
246
+
247
+ # Broader impact
248
+
249
+ We introduce a new 3d shape representation for generative modeling and shape analysis. This shape representation is designed to be compatible with the transformer architecture. We demonstrate some example applications in the paper including shape generation conditioned on images, point clouds, and shape class, and 3d shape reconstruction. However, we envision our proposed shape representation to be general and it could be employed in all shape processing tasks.
250
+
251
+ Potential societal impacts of generative modeling in general exist. Future iterations of our work could possibly be used to generate high fidelity 3D virtual humans. However, we do not see an important negative societal impact that is specific to our work and that would constitute an immediate concern.
252
+
253
+ ![](images/ee832b131a73f33be5ac7847fb864798844b980eaff45a1994664bd03eec50ff.jpg)
254
+ Figure 9: Comparison of category-conditioned generation. We compare our results (Left) with an $8^3$ latent grid baseline (Right).
255
+
256
+ Table 3: FID ↓ for category-conditioned generation. We compare our results with the baseline Grid-8³. The 7 categories are the largest categories in ShapeNet.
257
+
258
+ <table><tr><td rowspan="2"></td><td colspan="7">Categories</td><td rowspan="2">mean</td></tr><tr><td>table</td><td>car</td><td>chair</td><td>airplane</td><td>sofa</td><td>rifle</td><td>lamp</td></tr><tr><td>Grid-8<sup>3</sup></td><td>72.396</td><td>95.566</td><td>58.649</td><td>42.009</td><td>58.092</td><td>59.456</td><td>87.319</td><td>67.641</td></tr><tr><td>3DILG (Ours)</td><td>68.016</td><td>92.597</td><td>45.333</td><td>30.957</td><td>53.244</td><td>40.500</td><td>72.672</td><td>57.617</td></tr></table>
259
+
260
+ ![](images/0f870472740da780b05340b10ba0a0e770b67ea93520b8ff91c24c633730624e.jpg)
261
+ Figure 10: Category-conditioned generation. We choose 3 categories to show (bookshelf, bench and chair). We show 100 samples for each category.
262
+
263
+ ![](images/5b417da50a48b73bac74c344698d5102a2caa1dd39ade0d479763c56a904e171.jpg)
264
+ Figure 11: Point cloud conditioned generation. We show 8 decoding steps.
265
+
266
+ # Acknowledgements
267
+
268
+ We would like to acknowledge support from the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence.
269
+
270
+ # References
271
+
272
+ [1] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. (Cited on 5)
273
+ [2] Federica Bogo, Javier Romero, Gerard Pons-Moll, and Michael J Black. Dynamic FAUST: Registering human bodies in motion. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6233-6242, 2017. (Cited on 6)
274
+ [3] Rohan Chabra, Jan E Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, and Richard Newcombe. Deep local shapes: Learning local sdf priors for detailed 3d reconstruction. In European Conference on Computer Vision, pages 608-625. Springer, 2020. (Cited on 1)
275
+ [4] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. (Cited on 6)
276
+ [5] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. arXiv preprint arXiv:2202.04200, 2022. (Cited on 6)
277
+ [6] Zhang Chen, Yinda Zhang, Kyle Genova, Sean Fanello, Sofien Bouaziz, Christian Hane, Ruofei Du, Cem Keskin, Thomas Funkhouser, and Danhang Tang. Multiresolution deep implicit functions for 3d shape representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13087-13096, 2021. (Cited on 2)
278
+ [7] Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. Bsp-net: Generating compact meshes via binary space partitioning. arXiv preprint arXiv:1911.06971, 2019. (Cited on 1, 3)
279
+ [8] An-Chieh Cheng, Xueting Li, Sifei Liu, Min Sun, and Ming-Hsuan Yang. Autoregressive 3d shape generation via canonical mapping. arXiv preprint arXiv:2204.01955, 2022. (Cited on 3, 4)
280
+ [9] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6970-6981, 2020. (Cited on 2, 3, 4, 6)
281
+ [10] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision, pages 628-644. Springer, 2016. (Cited on 2)
282
+ [11] Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, et al. Abo: Dataset and benchmarks for real-world 3d object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21126-21136, 2022. (Cited on 6)
283
+ [12] Angela Dai, Charles Ruizhongtai Qi, and Matthias Nießner. Shape completion using 3d-encoder-predictor cnns and shape synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5868–5877, 2017. (Cited on 2)
284
+ [13] Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi. Cvxnet: Learnable convex decomposition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 31-44, 2020. (Cited on 1, 3, 6)
285
+ [14] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. In *Yoshua Bengio and Yann LeCun*, editors, International Conference on Learning Representations (ICLR), 2015. (Cited on 3)
286
+ [15] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. (Cited on 3, 8)
287
+
288
+ [16] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12873-12883, 2021. (Cited on 2, 3)
289
+ [17] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3d shape. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4857-4866, 2020. (Cited on 3)
290
+ [18] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Proceedings of the IEEE International Conference on Computer Vision, pages 7154-7164, 2019. (Cited on 3)
291
+ [19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27:2672–2680, 2014. (Cited on 3)
292
+ [20] Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R Martin, and Shi-Min Hu. Pct: Point cloud transformer. Computational Visual Media, 7(2):187-199, 2021. (Cited on 3)
293
+ [21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. (Cited on 8)
294
+ [22] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019. (Cited on 7)
295
+ [23] Chiyu Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, Thomas Funkhouser, et al. Local implicit grid representations for 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6001-6010, 2020. (Cited on 1)
296
+ [24] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun, editors, International Conference on Learning Representations (ICLR), 2014. (Cited on 3)
297
+ [25] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006. (Cited on 3)
298
+ [26] Tianyang Li, Xin Wen, Yu-Shen Liu, Hua Su, and Zhizhong Han. Learning deep implicit functions for 3d shapes with dynamic code clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12840-12850, 2022. (Cited on 3)
299
+ [27] Yangyan Li, Soeren Pirk, Hao Su, Charles R Qi, and Leonidas J Guibas. Fpnn: Field probing neural networks for 3d data. Advances in neural information processing systems, 29, 2016. (Cited on 2)
300
+ [28] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. ACM siggraph computer graphics, 21(4):163-169, 1987. (Cited on 2)
301
+ [29] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4460-4470, 2019. (Cited on 1, 2, 3, 4, 6, 7)
302
+ [30] Mateusz Michalkiewicz, Jhony K Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Deep level sets: Implicit surface representations for 3d shape inference. arXiv preprint arXiv:1901.06802, 2019. (Cited on 3)
303
+ [31] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405-421. Springer, 2020. (Cited on 4)
304
+
305
+ [32] Paritosh Mittal, Yen-Chi Cheng, Maneesh Singh, and Shubham Tulsiani. Autosdf: Shape priors for 3d completion, reconstruction and generation. arXiv preprint arXiv:2203.09516, 2022. (Cited on 3, 4)
306
+ [33] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. *Polygen: An autoregressive generative model of 3d meshes*. In *International Conference on Machine Learning*, pages 7220-7229. PMLR, 2020. (Cited on 3, 6)
307
+ [34] Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. arXiv preprint arXiv:2203.06604, 2022. (Cited on 3)
308
+ [35] Wamiq Para, Paul Guerrero, Tom Kelly, Leonidas J Guibas, and Peter Wonka. Generative layout modeling using constraint graphs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6690-6700, 2021. (Cited on 3)
309
+ [36] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 165-174, 2019. (Cited on 1, 3)
310
+ [37] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In CVPR, 2022. (Cited on 8)
311
+ [38] Despoina Paschalidou, Amlan Kar, Maria Shugrina, Karsten Kreis, Andreas Geiger, and Sanja Fidler. Atiss: Autoregressive transformers for indoor scene synthesis. Advances in Neural Information Processing Systems, 34, 2021. (Cited on 3, 6)
312
+ [39] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In European Conference on Computer Vision, pages 523-540. Springer, 2020. (Cited on 1, 2, 3, 4, 6)
313
+ [40] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. (Cited on 2, 3, 4)
314
+ [41] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pages 5099-5108, 2017. (Cited on 2, 3)
315
+ [42] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. (Cited on 7)
316
+ [43] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, pages 1530–1538, 2015. (Cited on 3)
317
+ [44] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3577-3586, 2017. (Cited on 2)
318
+ [45] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the pixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. (Cited on 3)
319
+ [46] Yongbin Sun, Yue Wang, Ziwei Liu, Joshua Siegel, and Sanjay Sarma. Pointgrow: Autoregressively learned point cloud generation with self-attention. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 61–70, 2020. (Cited on 3, 6)
320
+ [47] Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned initializations for optimizing coordinate-based neural representations. In CVPR, 2021. (Cited on 3)
321
+ [48] Jiapeng Tang, Jiabao Lei, Dan Xu, Feiying Ma, Kui Jia, and Lei Zhang. Sa-convonet: Sign-agnostic optimization of convolutional occupancy networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6504–6513, 2021. (Cited on 1)
322
+
323
+ [49] Jiapeng Tang, Markhasin Lev, Wang Bi, Thies Justus, and Matthias Nießner. Neural shape deformation priors. In Advances in Neural Information Processing Systems, 2022. (Cited on 3)
324
+ [50] Maxim Tatarchenko, Stephan R Richter, René Ranftl, Zhuwen Li, Vladlen Koltun, and Thomas Brox. What do single-view 3d reconstruction networks learn? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3405-3414, 2019. (Cited on 6)
325
+ [51] Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatrix Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pages 6411–6420, 2019. (Cited on 2)
326
+ [52] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. Advances in Neural Information Processing Systems, 29:4790-4798, 2016. (Cited on 3)
327
+ [53] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. (Cited on 2, 3, 5)
328
+ [54] Aaron Van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning, pages 1747-1756, 2016. (Cited on 3)
329
+ [55] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. (Cited on 2, 3)
330
+ [56] Xinpeng Wang, Chandan Yeshwanth, and Matthias Nießner. Sceneformer: Indoor scene generation with transformers. In 2021 International Conference on 3D Vision (3DV), pages 106-115. IEEE, 2021. (Cited on 3)
331
+ [57] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1-12, 2019. (Cited on 2, 3)
332
+ [58] Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9621-9630, 2019. (Cited on 2)
333
+ [59] Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. A theory of generative convnet. In International Conference on Machine Learning, pages 2635-2644. PMLR, 2016. (Cited on 3)
334
+ [60] Jianwen Xie, Yifei Xu, Zilong Zheng, Song-Chun Zhu, and Ying Nian Wu. Generative pointnet: Deep energy-based learning on unordered point sets for 3d generation, reconstruction and classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14976-14985, 2021. (Cited on 3)
335
+ [61] Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Learning descriptor networks for 3d shape synthesis and analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8629-8638, 2018. (Cited on 3)
336
+ [62] Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Generative voxelnet: learning energy-based models for 3d shape synthesis and analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. (Cited on 3)
337
+ [63] Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. Computer Graphics Forum, 2022. (Cited on 2, 3)
338
+ [64] Xingguang Yan, Liqiang Lin, Niloy J Mitra, Dani Lischinski, Danny Cohen-Or, and Hui Huang. Shapeformer: Transformer-based shape completion via sparse representation. arXiv preprint arXiv:2201.10326, 2022. (Cited on 3, 4)
339
+ [65] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Pointbert: Pre-training 3d point cloud transformers with masked point modeling. arXiv preprint arXiv:2111.14819, 2021. (Cited on 3)
340
+
341
+ [66] Biao Zhang and Peter Wonka. Training data generating networks: Shape reconstruction via bi-level optimization. In International Conference on Learning Representations, 2022. (Cited on 1, 3, 6)
342
+ [67] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16259-16268, 2021. (Cited on 3)
343
+
344
+ # Checklist
345
+
346
+ 1. For all authors...
347
+
348
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
349
+ (b) Did you describe the limitations of your work? [Yes] See Appendix.
350
+ (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix.
351
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
352
+
353
+ 2. If you are including theoretical results...
354
+
355
+ (a) Did you state the full set of assumptions of all theoretical results? [N/A]
356
+ (b) Did you include complete proofs of all theoretical results? [N/A]
357
+
358
+ 3. If you ran experiments...
359
+
360
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix.
361
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix.
362
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We specify the random seed. The results are reproducible.
363
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix.
364
+
365
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
366
+
367
+ (a) If your work uses existing assets, did you cite the creators? [Yes]
368
+ (b) Did you mention the license of the assets? [N/A]
369
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
370
+
371
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
372
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
373
+
374
+ 5. If you used crowdsourcing or conducted research with human subjects...
375
+
376
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [Yes] See Appendix.
377
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
378
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [Yes]
3dilgirregularlatentgridsfor3dgenerativemodeling/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:109760de2c8e54b9f6bd456fc1fc44b7bcca006f853c3e90d7d6a541a428ffe2
3
+ size 881567
3dilgirregularlatentgridsfor3dgenerativemodeling/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0dd81e58b41d9f7a3a77d272b0d650f5bbc57857029356c31b28a8d3cdbb66a
3
+ size 448932
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48a3f7d8ac4ef4641ef0ce2696aad648b5e322b49cde124817ca8e1317774e16
3
+ size 78252
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:adf14f414600f44dbba01c61828873bf426e9ad8b02dd295accba9e79dcc6253
3
+ size 96139
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/1c7b48e2-458c-49c5-8f42-bda21790c830_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:11be7c6294695fcae465150a79945484bdb7a96bb9a2dbcefd405d0f0fab34e6
3
+ size 23638352
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/full.md ADDED
@@ -0,0 +1,251 @@
 
 
 
 
1
+ # 3DOS: Towards 3D Open Set Learning – Benchmarking and Understanding Semantic Novelty Detection on Point Clouds
2
+
3
+ Antonio Alliegro*, Francesco Cappio Borlino*, Tatiana Tommasi
4
+ Department of Control and Computer Engineering, Politecnico di Torino, Italy
5
+ Italian Institute of Technology, Italy
6
+ {antonio.alliegro, francesco.cappio, tatiana.tommasi}@polito.it
7
+
8
+ # Abstract
9
+
10
+ In recent years there has been significant progress in the field of 3D learning on classification, detection and segmentation problems. The vast majority of the existing studies focus on canonical closed-set conditions, neglecting the intrinsic open nature of the real world. This limits the abilities of robots and autonomous systems involved in safety-critical applications that require managing novel and unknown signals. In this context, exploiting 3D data can be a valuable asset since it provides rich information about the geometry of perceived objects and scenes. With this paper we provide the first broad study on 3D Open Set learning. We introduce 3DOS: a novel testbed for semantic novelty detection that considers several settings with increasing difficulties in terms of semantic (category) shift, and covers both in-domain (synthetic-to-synthetic, real-to-real) and cross-domain (synthetic-to-real) scenarios. Moreover, we investigate the related 2D Open Set literature to understand if and how its recent improvements are effective on 3D data. Our extensive benchmark positions several algorithms in the same coherent picture, revealing their strengths and limitations. The results of our analysis may serve as a reliable foothold for future tailored 3D Open Set methods.
11
+
12
+ # 1 Introduction
13
+
14
+ Most existing machine learning models rely on the assumption that train and test data are drawn i.i.d. from the same distribution. While reasonable for lab experiments, this assumption frequently fails to hold when models are deployed in the open world, where a variety of distributional shifts with respect to the training data can emerge. For example, new object categories may induce a semantic shift, or data from new domains may give rise to a covariate shift [53, 54, 36]. Such cases can occur separately or jointly, and the test samples that differ from what was observed during training are generally referred to as out-of-distribution (OOD) data. These data may become extremely dangerous for autonomous agents, as evidenced by the numerous accidents involving self-driving cars that misbehaved when encountering anomalous objects in the streets<sup>2</sup>. To avoid similar risks it is of paramount importance to build robust models capable of maintaining their discrimination ability over the closed set of known classes while rejecting unknown categories. Solving this task is challenging for existing deep models: their exceptional closed set performance hides miscalibration [16] and over-confidence issues [33]. In other words, their output score cannot be regarded as a reliable measure of prediction correctness. This drawback has been largely discussed in the 2D visual
15
+
16
+ ![](images/8c3527a3b10e02a41423f204a369e7f173aae885fd3e3ec3d831c5cdb1ea2a67.jpg)
17
+ Figure 1: Schematic illustration of the OOD detection, semantic novelty detection and Open Set tasks on 3D data.
18
+
19
+ learning literature [41, 7, 57, 45] as its solution would enable the use of powerful deep models for many real-world tasks. In this context, and particularly for many safety-critical applications such as self-driving cars, 3D sensing is a valuable asset, providing detailed information about the geometry of sensed objects that 2D images cannot capture. However, the 3D literature in this field is still in its infancy, with only a small number of works which have just started to scratch the surface of the problem by focusing on particular sub-settings [31, 3]. With this work, we draw the community's attention to 3D Open Set learning, which entails developing models designed to process 3D point clouds that can recognize test samples from a set of known categories while avoiding prediction for samples from unknown classes. Our contributions are: 1) we propose 3DOS, the first benchmark for 3D Open Set learning, considering several settings with increasing levels of difficulty. It includes three main tracks: Synthetic, Real to Real, and Synthetic to Real. The first is meant to investigate the behavior of existing Open Set methods on 3D data, the other two are designed to simulate real-world deployment conditions; 2) we build a coherent picture by putting together the existing literature from OOD detection and Open Set recognition in 2D and 3D; 3) we analyze the performance of these methods to discover which is the state of the art for 3D Open Set learning. We highlight their advantages and limitations and show that often a simple representation learning approach is enough to outperform sophisticated state-of-the-art methods.
20
+
21
+ Our code and data are available at https://github.com/antoalli/3D_0S.
22
+
23
+ # 2 Related Work
24
+
25
+ We provide an overview of existing literature on OOD detection and Open Set learning. The difference between these two tasks is often neglected, but it is important to point it out (see Fig. 1). In OOD detection it is sufficient to identify and reject samples with any distribution shift with respect to the training data. In the particular case of semantic novelty detection, the concept of novelty is limited to the categories not seen during training, regardless of the specific domain appearance of the observed samples. Besides separating data of known classes from those of unknown classes, Open Set recognition requires performing a class prediction over the known categories.
26
+
27
+ Discriminative Methods. By training a model with multi-class supervision we expect to get low uncertainty on in-distribution (ID) data and high uncertainty for OOD samples. Thus, a baseline approach may consist in using the maximum softmax probability (MSP) as a normality score to separate known and unknown instances [18]. However, deep models suffer from over-confidence [33] and their prediction outputs need some re-calibration to be considered as uncertainty scoring functions. ODIN [27] exploited temperature scaling and input pre-processing to better separate ID from OOD samples. In [28] the authors showed how to derive Energy scores from the prediction output, demonstrating that they are better aligned with the probability density of the inputs and are less prone to over-confidence. Instead of considering the output, GradNorm [21] focused on the network's gradients showing that their norm carries distinctive signatures to amplify the ID/OOD separability. ReAct [41] proposed to further increase this separability by rectifying the internal network activations. Finally, a very recent work has discussed how the normalized softmax probabilities can be replaced by the maximum logit scores (MLS), resulting in an approach competitive with other more complex
28
+
29
+ strategies [45]. We highlight that all the methods of this discriminative family are applied post-hoc on the closed set classifier, meaning that the original training procedure and objective are not modified. Thus, the models maintain their ability to distinguish among the known classes and are suitable for Open Set recognition.
30
+
31
+ Density and Reconstruction Based Methods. Density-based methods are trained to model the distribution of known data. Input samples are then identified as unknown if lying in low-likelihood regions. Several works have exploited generative models for OOD detection with novelty metrics that range from basic sample reconstruction [1, 10] to more complex likelihood ratio and regret [35, 51]. Still, generative models can be difficult to train, and their performance is frequently lower than that of discriminative ones. Recently a hybrid approach proposed to combine discriminative and probabilistic flow-based learning with promising results [57].
32
+
33
+ Outlier Exposure. Another line of OOD approaches exploits outlier data available at training time. They are used to regularize the model by applying conditions on the prediction entropy [19, 56] or running outlier mining, re-sampling and filtering [9, 26, 53].
34
+
35
+ OOD Data Generation. In many practical cases, it is not possible to access outlier samples at training time. Thus, unknown sample synthesis is used to prepare the model for the deployment conditions [32, 14, 7, 58]. Some recent OOD approaches have also combined real outlier mining and fake outlier generation [23].
36
+
37
+ Representation and Distance Based Methods. Enhancing data representation may help to better characterize known data and consequently ease the identification of unknown samples. In a reliable embedding space, OOD samples should be far away from ID classes so that the distance from stored exemplars or prototypes can be used as a scoring function. Existing approaches focus on two aspects: how to learn a good representation and how to measure distances. Self-supervised, contrastive and prototype learning methods are of the first kind and generally rely on cosine similarity [43, 8, 40]. Other solutions build on discriminative models, but instead of considering the prediction output, they focus on the learned features and evaluate sample distances by using different metrics like $L^2$ norm, layer-wise Mahalanobis, or similarity metrics based on Gram matrices [20, 25, 38].
38
+
39
+ All the references mentioned above come from the 2D literature. To the best of our knowledge, 3D OOD detection and Open Set problems have been studied only by a handful of works. A VAE approach for reconstruction-based 3D OOD detection is provided in [31], together with an analysis on seven classes of the ShapeNet dataset [6], each used in turn as unknown. The study considers different VAE normality scores but does not compare with other baselines. In [3] the authors distilled knowledge from a large teacher network while also adding data produced by mixing training samples to define an unknown class. The authors focus on building a lightweight model and do not include comparisons with other Open Set methods. Moreover, the classes in the known/unknown datasets used (ModelNet10/40 [49]) differ significantly in pose and heading, which makes their separation trivial [24]. Two other works refer to 3D Open Set object detection and segmentation, but their objective is mainly clustering to aggregate points into object instances [5, 48].
40
+
41
+ # 3 3DOS Benchmark
42
+
43
+ Object recognition on 3D data is much more challenging than on 2D samples, with the main issues originating from the lack of color, texture, and of the general context in which the objects usually appear in images (see Fig. 2). When the goal is to evaluate whether a certain instance belongs to a known or novel class, all these cues provide crucial pieces of evidence, but without them, the task becomes really difficult. Data rescaling, resolution and noise can also significantly influence the final prediction. We dedicate our work to investigating all these aspects within the task of 3D Open Set learning: in the following we formalize the problem, introduce several testbeds and present an extensive experimental analysis.
44
+
45
+ # 3.1 Preliminaries
46
+
47
+ Problem formulation. We consider the labeled set $\mathcal{S} = \{\pmb{x}^s, y^s\}_{s=1}^N$ drawn from the training distribution $p_{\mathcal{S}}$ , and we indicate as known all the classes $y^s \in \mathcal{Y}^s$ covered by this set. A model trained on $\mathcal{S}$ is later evaluated on the test set $\mathcal{T} = \{\pmb{x}^t\}_{t=1}^M$ drawn from the distribution $p_{\mathcal{T}}$ . In the Open Set scenario train and test distributions differ in terms of semantic content: for the test data labels
48
+
49
+ ![](images/2a41ee8e255f24c21f239986ec97f0623f35ceadfd66e8791cb0a64da86b6789.jpg)
50
+ Figure 2: By looking at the point clouds of a dishwasher and microwave it might be very difficult to understand if they are the same object or not. Unlike images, point clouds capture the object geometry, but they miss the original scale as well as color, texture, and object context, which are naturally present in images.
51
+
52
+ ![](images/207466ec1f1fabc6eb889f04633bc5467d95fbc4bf068b50d27882ddae849f4d.jpg)
53
+
54
+ $y^{t} \in \mathcal{Y}^{t}$ it holds $\mathcal{Y}^s \neq \mathcal{Y}^t$. More specifically, we consider a partial overlap between the two sets, $\mathcal{Y}^s \subset \mathcal{Y}^t$, and the test classes that do not appear in the known class set $\mathcal{Y}^s$ are therefore unknown. A reliable semantic novelty detection model trained on $\mathcal{S}$ should output for each test sample $\boldsymbol{x}^t$ a normality score representing its probability of belonging to any of the known classes in $\mathcal{Y}^s$. An Open Set model must also provide an output probability distribution over the classes in $\mathcal{Y}^s$.
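+
+ To make the protocol concrete, the following minimal sketch (our own illustration, not code from any of the benchmarked methods) shows how a normality score and a rejection threshold turn a closed set classifier into an Open Set predictor; the threshold value used here is a placeholder that would normally be tuned or swept, as in the AUROC computation below.
+
+ ```python
+ import numpy as np
+
+ def open_set_predict(logits: np.ndarray, normality_score: float, threshold: float) -> int:
+     """Return a known-class index, or -1 to reject the sample as unknown."""
+     if normality_score < threshold:
+         return -1  # predicted as belonging to an unknown category
+     return int(np.argmax(logits))  # closed set prediction over the known classes
+
+ # toy usage with a 3-class known set
+ logits = np.array([2.1, 0.3, -1.0])
+ print(open_set_predict(logits, normality_score=0.92, threshold=0.5))  # -> 0 (accepted)
+ print(open_set_predict(logits, normality_score=0.10, threshold=0.5))  # -> -1 (rejected)
+ ```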
55
+
56
+ Performance Metrics. We evaluate the ability to detect unknown samples in test data by exploiting two metrics: AUROC and FPR95. Given that the detection of unknown samples is a binary task, both metrics are based on the concepts of True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). The AUROC (the higher the better) is the Area Under the Receiver Operating Characteristic Curve. The ROC curve is a graph showing the TP rate (TPR) and the FP rate (FPR) plotted against each other [18] when varying the normality threshold. As a result, the AUROC is a threshold-free metric, and it can be interpreted as the probability that a known test sample has a greater normality score than an unknown one. The FPR95 (the lower the better) is the FP Rate at TP Rate $95\%$, sometimes referred to as FPR@TPRx with $x = 95\%$. This metric is based on the choice of a normality threshold so that $95\%$ of positive samples are predicted as positives $(\mathrm{TPR} = \mathrm{TP} / (\mathrm{TP} + \mathrm{FN}))$. Then the false positive rate $(\mathrm{FPR} = \mathrm{FP} / (\mathrm{FP} + \mathrm{TN}))$ is computed using this threshold. For Open Set methods we also evaluate their ability to correctly classify known data by computing their classification accuracy (ACC). Further metrics are reported in the supplementary material.
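+
+ For reference, both metrics can be computed from per-sample normality scores as in the sketch below (a minimal illustration assuming scikit-learn is available; the benchmark's released evaluation code may differ in detail):
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_auc_score
+
+ def auroc_and_fpr95(scores_known: np.ndarray, scores_unknown: np.ndarray):
+     """AUROC and FPR95 from normality scores (higher score = more 'known')."""
+     labels = np.concatenate([np.ones(len(scores_known)), np.zeros(len(scores_unknown))])
+     scores = np.concatenate([scores_known, scores_unknown])
+     auroc = roc_auc_score(labels, scores)
+     # threshold chosen so that 95% of known samples are accepted (TPR = 0.95)
+     thr = np.percentile(scores_known, 5)
+     fpr95 = float(np.mean(scores_unknown >= thr))
+     return auroc, fpr95
+ ```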
57
+
58
+ Datasets. We build the 3DOS Benchmark on top of three well known 3D objects datasets: ShapeNet-Core [6], ModelNet40 [49] and ScanObjectNN [44].
59
+
60
+ ShapeNetCore contains 51,127 meshes from synthetic instances of 55 common object categories. In our analysis we adopt ShapeNetCore v2 and use the official training $(70\%)$ , validation $(10\%)$ and test $(20\%)$ splits. All objects are consistently aligned in pose. Having consistent alignment between different semantic categories is fundamental to avoid any bias that could lead to trivial inter-categories discrimination. The point clouds are obtained by uniformly sampling points from the mesh surface and normalized to fit within a unit cube centered at the origin. In our analysis we merge telephone and cellphone categories since they share similar semantic content, thus obtaining a total of 54 categories. ModelNet40 [49] contains 12311 3D CAD models from 40 man-made object categories. We use the official dataset split, consisting of 9843 train and 2468 test shapes according to [34]. We obtain a point cloud from each CAD model by uniformly sampling points from the faces of the synthetic mesh. Each point cloud is then centered in the origin and scaled to fit within a unit cube.
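+
+ The normalization step mentioned above can be implemented as in the following sketch (one common convention; the exact formula is not spelled out in the text, so treat the details as an assumption):
+
+ ```python
+ import numpy as np
+
+ def normalize_to_unit_cube(points: np.ndarray) -> np.ndarray:
+     """Center an (N, 3) point cloud at the origin and scale it to fit in a unit cube."""
+     center = (points.max(axis=0) + points.min(axis=0)) / 2.0
+     points = points - center
+     half_extent = np.abs(points).max()           # half of the longest bounding-box side
+     return points / (2.0 * half_extent + 1e-8)   # coordinates end up in [-0.5, 0.5]
+ ```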
61
+
62
+ ScanObjectNN [44] contains 2902 3D scans of real-world objects from 15 categories. Specifically, we consider the original OBJ_BG split in which 3D scans are affected by acquisition artifacts such as vertex noise, non-uniform density, missing parts and occlusions. Data samples are already in the form of point clouds with 2048 points each and include the foreground object as well as background and other interacting objects which are absent in the synthetic instances of ModelNet and ShapeNet.
63
+
64
+ # 3.2 Benchmark Tracks
65
+
66
+ 3DOS includes three main Open Set tracks. The Synthetic Benchmark is designed to assess the performance of existing methods in the presence of semantic shift, while the more challenging Synthetic to Real Benchmark covers both semantic and domain shift, with train and test samples that are respectively drawn from synthetic data (ModelNet40) and real-world data (ScanObjectNN). Finally, the Real to Real Benchmark represents an intermediate case with semantic shift among training and test data and noisy samples (from ScanObjectNN) in both sets.
67
+
68
+ ![](images/49fdfdd85ae16f9886523991f0346a13181ccfe789ae8ec8dba23bf768b3a4fb.jpg)
69
+ Figure 3: Visualization of the object categories in each of the sets of the Synthetic Benchmark. SN1: mug, lamp, bed, washer, loudspeaker, telephone, dishwasher, camera, birdhouse, jar, bowl, bookshelf, stove, bench, display, keyboard, clock, piano. SN2: earphone, knife, chair, pillow, table, laptop, mailbox, basket, file cabinet, sofa, printer, flowerpot, microphone, tower, bag, trash bin. SN3: can, microwave, skateboard, faucet, train, guitar, pistol, helmet, watercraft, airplane, bottle, cap, rocket, rifle, remote, car, bus, motorbike.
70
+
71
+ ![](images/808121d2b34e9c5d8eb132319fd2da38a68054f01c5c78be146bcebc94a3797a.jpg)
72
+ Figure 4: Visualization of the object categories in each of the sets of the Synthetic to Real Benchmark. SR1: chair, shelf, door, sink, sofa. SR2: bed, toilet, desk, table, display. SR3: bag, bin, box, pillow, cabinet.
73
+
74
+ Synthetic Benchmark. For our synthetic testbed we employ the ShapeNetCore [6] dataset and split it into three non-overlapping (i.e., semantically different) category sets of 18 categories each. We dub them SN1, SN2 and SN3 (see Fig. 3 for the list of categories belonging to each set).
75
+
76
+ We obtain three scenarios of increasing difficulty by simply selecting each of the SN sets in turn as the known class set and considering the remaining two category sets as unknown. For this track models are trained on the train split of the known classes set and evaluated on the test split of both known and unknown classes.
77
+
78
+ Synthetic to Real Benchmark. To define our Synthetic to Real-World cross-domain scenario, we employ synthetic point clouds from ModelNet40 [49] for training while we test on real-world point clouds from ScanObjectNN [44]. We choose to adopt ModelNet40 (instead of ShapeNetCore) because it has a better overlap with ScanObjectNN and previous works already considered the same cross-domain scenario in the context of point cloud object classification [2, 4]. We define three different category sets: SR1, SR2, and SR3 as described in Fig. 4. The first two sets are composed of matching classes of ModelNet40 and ScanObjectNN. The third set (SR3) is instead composed of ScanObjectNN classes without such a one-to-one mapping with ModelNet40. Overall we have two scenarios with either SR1 or SR2 used as known and the other two considered as unknown. For this track, models are trained on ModelNet40 samples of the known classes set and evaluated on the ScanObjectNN samples of both known and unknown classes.
79
+
80
+ Real to Real Benchmark. For this last case we exploit the same SR category sets created from ScanObjectNN described above. Specifically, each of them is used as unknown in the test set, while the other two are divided into train and test and used as known classes.
81
+
82
+ # 3.3 Evaluated Methods
83
+
84
+ We consider several approaches from the families of methods described in Sec. 2.
85
+
86
+ Discriminative Methods. All these methods are built on top of a standard closed set classifier trained with cross-entropy. For our analysis we select the MSP [18] baseline, as well as its maximum logit score variant (MLS) [45]. We further consider ODIN [27], Energy [28], GradNorm [21] and ReAct [41].
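+
+ For reference, the sketch below shows how such post-hoc normality scores are typically derived from the logits of the trained classifier (a simplified PyTorch illustration of MSP, MLS and Energy only; ODIN, GradNorm and ReAct additionally modify inputs, gradients or activations and are omitted here):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def posthoc_scores(logits: torch.Tensor, temperature: float = 1.0) -> dict:
+     """Normality scores from a (B, C) logit tensor; higher means more 'known'."""
+     msp = F.softmax(logits / temperature, dim=-1).max(dim=-1).values       # MSP [18]
+     mls = logits.max(dim=-1).values                                        # MLS [45]
+     # negative energy, used as a normality score in [28]
+     energy = temperature * torch.logsumexp(logits / temperature, dim=-1)
+     return {"MSP": msp, "MLS": mls, "Energy": energy}
+ ```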
87
+
88
+ Density and Reconstruction Based Methods. We select two methods from this group. We test a VAE model with reconstruction based scoring by following one of the few existing works on 3D anomaly detection [31]. This is the only unsupervised approach in our analysis, and thus it performs only OOD detection without providing predictions over known classes. The second approach is based on Normalizing Flow (NF). We take inspiration from the 2D Open Set state-of-the-art OpenHybrid [57] and the anomaly detection method DifferNet [37]. Specifically, we train an NF model consisting of 8 coupling blocks [13] on top of the same feature embedding used by a cross-entropy classifier. The training objective is the maximization of the log-likelihood of training samples, and the
89
+
90
+ Table 1: Results on the Synthetic Benchmark track. Each column title indicates the chosen known class set, the other two sets serve as unknown.
91
+
92
+ <table><tr><td colspan="9">Synthetic Benchmark - DGCNN [47]</td><td colspan="8">Synthetic Benchmark - PointNet++ [34]</td></tr><tr><td>Method</td><td colspan="2">SN1 (hard)AUROC↑ FPR95↓</td><td colspan="2">SN2 (med)AUROC↑ FPR95↓</td><td colspan="2">SN3 (easy)AUROC↑ FPR95↓</td><td colspan="2">AvgAUROC↑ FPR95↓</td><td colspan="2">SN1 (hard)AUROC↑ FPR95↓</td><td colspan="2">SN2 (med)AUROC↑ FPR95↓</td><td colspan="2">SN3 (easy)AUROC↑ FPR95↓</td><td colspan="2">AvgAUROC↑ FPR95↓</td></tr><tr><td>MSP [18]</td><td>74.0</td><td>83.9</td><td>88.6</td><td>62.4</td><td>92.9</td><td>43.2</td><td>85.2</td><td>63.2</td><td>74.3</td><td>82.8</td><td>80.0</td><td>78.1</td><td>89.7</td><td>52.2</td><td>81.3</td><td>71.0</td></tr><tr><td>MLS [45]</td><td>75.1</td><td>77.7</td><td>91.1</td><td>42.6</td><td>92.4</td><td>35.2</td><td>86.2</td><td>51.8</td><td>72.0</td><td>80.8</td><td>83.9</td><td>64.1</td><td>89.8</td><td>40.5</td><td>81.9</td><td>61.8</td></tr><tr><td>ODIN [27]</td><td>75.4</td><td>76.5</td><td>91.1</td><td>42.9</td><td>92.5</td><td>34.4</td><td>86.3</td><td>51.3</td><td>74.2</td><td>79.4</td><td>79.4</td><td>71.7</td><td>87.8</td><td>41.8</td><td>80.5</td><td>64.3</td></tr><tr><td>Energy [28]</td><td>75.2</td><td>77.0</td><td>91.2</td><td>41.6</td><td>92.3</td><td>36.4</td><td>86.2</td><td>51.7</td><td>72.1</td><td>81.2</td><td>84.0</td><td>64.7</td><td>89.8</td><td>39.4</td><td>82.0</td><td>61.8</td></tr><tr><td>GradNorm [21]</td><td>66.2</td><td>88.1</td><td>80.9</td><td>64.0</td><td>71.6</td><td>77.7</td><td>72.9</td><td>76.6</td><td>72.1</td><td>81.8</td><td>57.7</td><td>88.9</td><td>57.8</td><td>79.0</td><td>62.6</td><td>83.3</td></tr><tr><td>ReAct [41]</td><td>76.4</td><td>74.6</td><td>92.5</td><td>37.9</td><td>96.4</td><td>19.3</td><td>88.4</td><td>43.9</td><td>73.7</td><td>79.4</td><td>89.6</td><td>52.1</td><td>95.0</td><td>27.2</td><td>86.1</td><td>52.9</td></tr><tr><td>VAE [31]</td><td>67.2</td><td>76.9</td><td>69.5</td><td>83.4</td><td>94.3</td><td>32.4</td><td>77.0</td><td>64.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>NF</td><td>82.0</td><td>74.8</td><td>86.1</td><td>53.8</td><td>97.4</td><td>11.5</td><td>88.5</td><td>46.7</td><td>81.5</td><td>72.5</td><td>71.1</td><td>78.0</td><td>91.0</td><td>49.6</td><td>81.2</td><td>66.7</td></tr><tr><td>OE+mixup [19]</td><td>73.7</td><td>78.9</td><td>90.4</td><td>44.7</td><td>91.4</td><td>46.0</td><td>85.2</td><td>56.5</td><td>72.7</td><td>78.9</td><td>80.3</td><td>68.8</td><td>87.3</td><td>62.2</td><td>80.1</td><td>69.9</td></tr><tr><td>ARPL+CS [7]</td><td>72.9</td><td>84.2</td><td>90.7</td><td>47.1</td><td>89.5</td><td>89.5</td><td>84.4</td><td>73.6</td><td>74.8</td><td>80.3</td><td>80.7</td><td>72.4</td><td>85.4</td><td>50.8</td><td>80.3</td><td>67.8</td></tr><tr><td>Cosine proto</td><td>84.3</td><td>59.1</td><td>88.8</td><td>39.7</td><td>86.4</td><td>48.0</td><td>86.5</td><td>48.9</td><td>80.3</td><td>68.3</td><td>88.7</td><td>60.8</td><td>91.9</td><td>38.0</td><td>86.9</td><td>55.7</td></tr><tr><td>CE (L2)</td><td>80.4</td><td>75.5</td><td>90.1</td><td>40.9</td><td>96.7</td><td>14.4</td><td>89.1</td><td>43.6</td><td>83.4</td><td>66.8</td><td>89.5</td><td>37.7</td><td>92.9</td><td>28.1</td><td>88.6</td><td>44.2</td></tr><tr><td>SupCon [22]</td><td>80.3</td><td>75.7</td><td>84.6</td><td>73.6</td><td>87.9</td><td>44.3</td><td>84.3</td><td>64.5</td><td>80.9</td><td>75.5</td><td>83.5</td><td>68.2</td><td>85.1</td><td>45.1</td><td>83.2</td><td>62.9</td></tr><tr><td>SubArcface 
[11]</td><td>81.2</td><td>73.4</td><td>91.9</td><td>44.0</td><td>94.9</td><td>26.5</td><td>89.3</td><td>48.0</td><td>79.0</td><td>81.2</td><td>82.9</td><td>60.3</td><td>89.1</td><td>32.8</td><td>83.7</td><td>58.1</td></tr></table>
93
+
94
+ Table 2: Relationship between closed and open set performance when training a discriminative model via the addition of Label Smoothing (LS). We show the results on the hard SN1 set.
95
+
96
+ <table><tr><td colspan="7">SN1 (hard) - Synthetic Benchmark - DGCNN [47]</td><td colspan="7">SN1 (hard) - Synthetic Benchmark - PointNet++ [34]</td></tr><tr><td></td><td>MSP +LS</td><td>MLS +LS</td><td colspan="2">CE (L2) +LS</td><td colspan="2">Closed set Acc +LS</td><td>MSP MSP+LS</td><td colspan="2">MLS MLS+LS</td><td colspan="2">CE (L2) +LS</td><td colspan="2">Closed set Acc +LS</td></tr><tr><td>AUROC↑</td><td>74.0</td><td>77.4</td><td>75.1</td><td>77.5</td><td>80.4</td><td>80.2</td><td>74.3</td><td>72.7</td><td>72.0</td><td>69.6</td><td>83.4</td><td>79.1</td><td rowspan="2">85.9</td></tr><tr><td>FPR95↓</td><td>83.9</td><td>73.7</td><td>77.7</td><td>71.8</td><td>75.5</td><td>66.7</td><td>82.8</td><td>78.6</td><td>80.8</td><td>77.6</td><td>66.8</td><td>77.4</td></tr></table>
97
+
98
+ predicted log-likelihood is later used to distinguish ID and OOD samples. Unlike the VAE, this model also includes a closed set classifier and is thus applicable to the Open Set task.
99
+
100
+ Outlier Exposure with OOD Generated Data. Our analysis focuses on the setting where training data does not include unknown samples; thus, we assess the performance of the OE approach presented in [19] by exploiting fake OOD data produced via point cloud mixup [24] (OE+mixup).
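+
+ One simple way to synthesize such fake outliers is to merge points drawn from two training clouds of different classes, as in the sketch below (a hypothetical illustration; the exact mixup recipe of [24] may differ):
+
+ ```python
+ import numpy as np
+
+ def point_cloud_mixup(pc_a: np.ndarray, pc_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
+     """Combine two (N, 3) clouds of different classes into one fake OOD sample."""
+     n = pc_a.shape[0]
+     n_a = int(round(lam * n))
+     idx_a = np.random.choice(pc_a.shape[0], n_a, replace=False)
+     idx_b = np.random.choice(pc_b.shape[0], n - n_a, replace=False)
+     return np.concatenate([pc_a[idx_a], pc_b[idx_b]], axis=0)
+ ```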
101
+
102
+ Representation and Distance Based Methods. To evaluate the effect of a carefully learned feature embedding on the identification of novel categories, we consider the state-of-the-art 2D Open Set method ARPL+CS [7]. It learns reciprocal points that represent the otherness with respect to each known class: the distance from these points is considered proportional to the probability that a sample belongs to a certain class. Moreover, the method includes confusing samples (CS) generated in an adversarial manner to represent samples of unseen classes which are equidistant from all the reciprocal points. We also test the embedding of a cosine classifier (Cosine proto) as done in [15]; this method learns class prototypes by maximizing the cosine similarity between each training sample and the prototype of its class. At inference time the highest cosine similarity with a known class prototype is used as a normality score. To learn feature representations on closed set data it is also possible to use standard losses like supervised cross-entropy or supervised contrastive [22]. In the first case we rely on the Euclidean distance between the feature of the test sample and the training samples $(\mathbf{CE}(L^2))$, while in the second (SupCon) we use the cosine distance which better reflects the contrastive training objective. In this analysis of distance based methods we also include a seemingly unconnected technique originally proposed for face recognition. The SubArcFace [11] approach belongs to the family of margin-based softmax methods that aim at simultaneously achieving maximal intra-class compactness and inter-class discrepancy without the drawbacks (negative sampling and large data batches) that affect the triplet and the contrastive losses. With respect to other similar strategies [12, 46], for each known class, SubArcFace identifies multiple sub-centers, and a training sample only needs to be close to one of them rather than to a single class prototype. To get the normality score we follow the same procedure adopted for SupCon. It should be noted that the last three methods listed $(\mathrm{CE}(L^2)$, SupCon, SubArcFace) require training data to be available also at test time since they compute the test sample normality score as the distance to the nearest training sample.
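+
+ The two scoring rules used by these distance based methods can be summarized with the following sketch (a simplified illustration of the Cosine proto and CE $(L^2)$ scores; feature extraction and prototype learning are assumed to have already happened, and the function names are ours):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def cosine_proto_score(feat: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
+     """Highest cosine similarity between a test feature (D,) and class prototypes (C, D)."""
+     sims = F.cosine_similarity(feat.unsqueeze(0), prototypes, dim=-1)
+     return sims.max()
+
+ def euclidean_nn_score(feat: torch.Tensor, train_feats: torch.Tensor) -> torch.Tensor:
+     """CE (L2)-style score: negated Euclidean distance to the nearest training feature (M, D)."""
+     dists = torch.cdist(feat.unsqueeze(0), train_feats).squeeze(0)
+     return -dists.min()
+ ```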
103
+
104
+ # 4 Experiments
105
+
106
+ We perform our experimental analysis with the goal of answering a set of research questions, each discussed below in separate paragraphs for the Synthetic and Synthetic to Real benchmarks. Given that the 3D point cloud literature counts a large number of backbones with no dominant one, we perform all main experiments with two reliable backbones: DGCNN [47] and PointNet++ [34].
107
+
108
+ Figure 5: Correlation between AUROC and ACC performance when changing the backbone on the Synthetic Benchmark SN1 case (left), the Synthetic to Real Benchmark SR1 case (middle) and the Real to Real Benchmark SR3 case (right).
109
+ ![](images/38c72811029bc81bf2f60a51f62bffd57177f9d271982a2d26a8a8d2271b87d3.jpg)
+ ![](images/4ea6166f2d3a2475270611cc6c391dca886865711eee2d6ea52394e29e06083f.jpg)
+ ![](images/6077007c00a7f28e033a1d467e5428af92b03af3ee8c6ba459c290ae0a61dba4.jpg)
+ Backbones compared in Figure 5: DGCNN, PointNet++, CurveNet, GDANet, RSCNN, PointMLP, PCT.
117
+
118
+ # 4.1 Implementation details
119
+
120
+ Unless otherwise specified, we use 1024 points for synthetic point clouds (ShapeNet and ModelNet) and 2048 points for real-world point clouds (ScanObjectNN). In the Synthetic Benchmark we augment training data with scale and translation transformations, while for the Synthetic to Real case we also augment it through random rotation around the up-axis. All models are trained with a batch size of 64 for 250 epochs, with the exception of SubArcFace, which is trained for 500 epochs on the synthetic sets (SN1, SN2, SN3). For DGCNN experiments we use the SGD optimizer with an initial learning rate of 0.1; momentum and weight decay are set to 0.9 and 0.0001, respectively. With PointNet++ we employ the Adam optimizer and set the initial learning rate to 0.001. Each experiment is repeated with three different seeds; we take results from the last-epoch model and average across runs. Our code is implemented in PyTorch 1.9, and experiments run on an HPC cluster with NVIDIA V100 GPUs. All models are trained on a single GPU except for the SupCon and ARPL methods. To facilitate reproducibility we provide a complete list and discussion of the analyzed methods' hyperparameters in the supplementary material.
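+
+ The optimizer settings above roughly translate into PyTorch as follows (an illustrative sketch; the function and backbone identifiers are ours, not taken from the released code):
+
+ ```python
+ import torch
+
+ def build_optimizer(model: torch.nn.Module, backbone: str) -> torch.optim.Optimizer:
+     """Per-backbone optimizer configuration, as reported in the implementation details."""
+     if backbone == "dgcnn":
+         return torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
+     if backbone == "pointnet2":
+         return torch.optim.Adam(model.parameters(), lr=0.001)
+     raise ValueError(f"unsupported backbone: {backbone}")
+ ```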
121
+
122
+ # 4.2 Synthetic Benchmark
123
+
124
+ How do OOD detection methods perform on 3D semantic novelty detection? In these experiments we analyze the performance of OOD detection and Open Set methods on the Synthetic Benchmark. We report results in Tab. 1, following the same four-group organization of Sec. 3.3.
125
+
126
+ Discriminative Methods. We consider MSP as the main baseline, but we also include its variant MLS [45]. We can see that the latter is a strong baseline, as it is often on par with or better than more complex state-of-the-art methods (e.g. ODIN, Energy, GradNorm). In general, all methods of this group manage to improve the MSP baseline results both in terms of AUROC and FPR95, with the only exception of GradNorm. The peculiarity of this approach is that it relies on gradients extracted at test time from the network layers to compute the normality score. We hypothesize that the substantial difference between 2D (for which the method was originally designed) and 3D network architectures may be responsible for the observed poor performance. ReAct consistently outperforms all the others with both the DGCNN and PointNet++ backbones.
127
+
128
+ Density and Reconstruction Based Methods. VAE results are far below the MSP baseline; this could be expected since it is the only unsupervised method in the table. It should be noted that its encoder matches neither DGCNN nor PointNet++: it is composed of graph convolutional layers, while the decoder is inspired by FoldingNet [55]. We still include VAE results in Tab. 1 and Tab. 3, regardless of its peculiarities, but we report its numbers in a different color. On the other hand, NF is quite sensitive to the backbone choice; it performs well when trained on top of the semantically rich embedding extracted by DGCNN, but it underperforms when trained on the local feature embedding of PointNet++.
129
+
130
+ Outlier Exposure with OOD Generated Data. Comparing with the MSP baseline we observe that for both backbones the OE finetuning produces a slight improvement in terms of FPR95 while the AUROC does not show gains.
131
+
132
+ ![](images/5ad49ee02137c8026c82caec32d9769e6c672dfd1f8614e4e1c019cce6817cec.jpg)
133
+ Figure 6: Analysis of the sampling rate influence on the hard SN1 set. Left: the blue telephone and red cap are known and unknown test samples for a DGCNN classifier trained on SN1. We show the class with the highest probability (p) assignment predicted by the classifier. The known object is correctly recognized as the sampling rate increases, while the confidence in the unknown object decreases, supporting rejection. Right: AUROC and ACC trends when varying the number of sampled points.
134
+
135
+ ![](images/29528aee02165b72b9d8e394ddca8c5d7391f8c23fe76dc3c570cb06fbb0f02c.jpg)
136
+
137
+ Representation and Distance Based Methods. The ARPL+CS [7] approach is the current state-of-the-art 2D Open Set method; however, the results indicate that it does not work as well on 3D data and is easily outperformed by the much simpler Cosine proto. Interestingly, the simple CE $(L^2)$ method built on a standard cross-entropy classifier obtains promising results for both backbones, outperforming all the competing methods. The same considerations made for ARPL+CS hold for SupCon, which has already been successfully used for 2D OOD detection [43, 39]. We believe this is due to the fact that synthetic data are very clean and lack the variability required to build a good contrastive embedding. A better result can be obtained through SubArcFace, which builds a similar feature embedding but is less computationally expensive and converges more easily. Given its state-of-the-art performance, for the following analyses we will primarily focus on CE $(L^2)$, along with MSP and MLS as baselines.
138
+
139
+ What is the effect of improving closed-set classification on 3D semantic novelty detection? A recent work has put under the spotlight the correlation between the closed set accuracy of discriminative methods and their open set recognition performance on images [45]. To verify this trend on 3D data we run two sets of experiments.
140
+
141
+ A first analysis is done by exploiting a standard regularization technique such as label smoothing [42]. Tab. 2 shows that LS provides a small closed set accuracy improvement for both backbones, as well as some improvement on AUROC and FPR95 when using the DGCNN backbone. It surpasses the results of ReAct (AUROC: 76.4, FPR95: 74.6), but remains worse than Cosine proto, which has the best performance on this set. With PointNet++ the advantage is evident only in FPR95 for MLS and MSP, while the open set performance decreases both in terms of AUROC and FPR95 for CE $(L^2)$. A second evaluation is done by changing the network backbone. We experiment with a range of distinct architectures beyond the already considered DGCNN and PointNet++. Specifically, we test CurveNet [50], GDANet [52], RSCNN [29], PointMLP [30] and PCT [17]. In particular, the latter exploits Transformer blocks for point cloud learning and has recently achieved state-of-the-art performance for 3D object classification and segmentation. The results in Fig. 5 (left) show that for various 3D backbones the open set performance is not strictly linked to the closed set one. In particular, while RSCNN reaches one of the top closed set accuracy results, its MSP AUROC is the worst one.
142
+
143
+ Is 3D semantic novelty detection affected by the point cloud density? We investigate the impact of the point cloud sampling rate. We run experiments with different point cloud sizes: 512, 1024, 2048, and 4096. For each experiment we fix the number of points (e.g. 512) at both training and evaluation. Point clouds with higher sampling rates are more detailed, and fine-grained structures become visible at the cost of higher computational complexity. We show in Fig. 6 (left) some visualizations of the influence of the sampling rate on the visibility of local details, which are important for both known vs unknown discrimination and closed set classification. In the right part of Fig. 6, we report closed and open set performance for MSP and CE ($L^2$). While the closed set accuracy grows as the number of sampled points increases, there is no corresponding increase in the open set performance of CE ($L^2$).
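+
+ In practice, changing the sampling rate amounts to drawing a different number of points per object; a minimal resampling sketch is shown below (the benchmark samples points directly from mesh surfaces, so this is only an illustrative approximation for clouds that have already been sampled):
+
+ ```python
+ import numpy as np
+
+ def resample_point_cloud(points: np.ndarray, n_points: int) -> np.ndarray:
+     """Randomly subsample (or resample with replacement) an (N, 3) cloud to n_points."""
+     replace = points.shape[0] < n_points
+     idx = np.random.choice(points.shape[0], n_points, replace=replace)
+     return points[idx]
+ ```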
144
+
145
+ Table 3: Results on the Synthetic to Real Benchmark track. Each column title indicates the chosen known class set, the other two sets serve as unknown.
146
+
147
+ <table><tr><td colspan="7">Synth to Real Benchmark - DGCNN [47]</td><td colspan="6">Synth to Real Benchmark - PointNet++ [34]</td></tr><tr><td>Method</td><td>SR 1 (easy) AUROC↑</td><td>FPR95↓</td><td>SR 2 (hard) AUROC↑</td><td>FPR95↓</td><td>Avg AUROC↑</td><td>FPR95↓</td><td>SR 1 (easy) AUROC↑</td><td>FPR95↓</td><td>SR 2 (hard) AUROC↑</td><td>FPR95↓</td><td>Avg AUROC↑</td><td>FPR95↓</td></tr><tr><td>MSP [18]</td><td>72.2</td><td>91.0</td><td>61.2</td><td>90.3</td><td>66.7</td><td>90.6</td><td>81.0</td><td>79.6</td><td>70.3</td><td>86.7</td><td>75.6</td><td>83.2</td></tr><tr><td>MLS</td><td>69.0</td><td>92.2</td><td>62.4</td><td>88.9</td><td>65.7</td><td>90.5</td><td>82.1</td><td>76.6</td><td>67.6</td><td>86.8</td><td>74.8</td><td>81.7</td></tr><tr><td>ODIN [27]</td><td>69.0</td><td>92.2</td><td>62.4</td><td>89.0</td><td>65.7</td><td>90.6</td><td>81.7</td><td>77.3</td><td>70.2</td><td>84.4</td><td>76.0</td><td>80.8</td></tr><tr><td>Energy [28]</td><td>68.8</td><td>92.7</td><td>62.4</td><td>88.9</td><td>65.6</td><td>90.8</td><td>81.9</td><td>77.5</td><td>67.7</td><td>87.3</td><td>74.8</td><td>82.4</td></tr><tr><td>GradNorm [21]</td><td>67.0</td><td>93.5</td><td>59.8</td><td>89.4</td><td>63.4</td><td>91.5</td><td>77.6</td><td>80.1</td><td>68.4</td><td>86.3</td><td>73.0</td><td>83.2</td></tr><tr><td>ReAct [41]</td><td>68.4</td><td>92.1</td><td>62.8</td><td>88.8</td><td>65.6</td><td>90.5</td><td>81.7</td><td>75.6</td><td>67.6</td><td>87.2</td><td>74.6</td><td>81.4</td></tr><tr><td>VAE [31]</td><td>68.6</td><td>77.0</td><td>57.9</td><td>92.3</td><td>63.3</td><td>84.6</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>NF</td><td>72.5</td><td>81.6</td><td>70.2</td><td>83.0</td><td>71.3</td><td>82.3</td><td>78.0</td><td>84.4</td><td>74.7</td><td>84.2</td><td>76.4</td><td>84.3</td></tr><tr><td>OE+mixup [19]</td><td>71.1</td><td>89.6</td><td>59.5</td><td>92.0</td><td>65.3</td><td>90.8</td><td>71.2</td><td>89.7</td><td>60.3</td><td>93.5</td><td>65.7</td><td>91.6</td></tr><tr><td>ARPL+CS [7]</td><td>71.5</td><td>90.2</td><td>62.8</td><td>89.5</td><td>67.1</td><td>89.8</td><td>82.8</td><td>74.9</td><td>68.0</td><td>89.3</td><td>75.4</td><td>82.1</td></tr><tr><td>Cosine proto</td><td>58.6</td><td>90.6</td><td>57.3</td><td>91.3</td><td>57.9</td><td>91.0</td><td>79.9</td><td>74.5</td><td>76.5</td><td>77.8</td><td>78.2</td><td>76.1</td></tr><tr><td>CE (L2)</td><td>67.5</td><td>87.4</td><td>64.6</td><td>91.0</td><td>66.1</td><td>89.2</td><td>79.7</td><td>84.5</td><td>75.7</td><td>80.2</td><td>77.7</td><td>82.3</td></tr><tr><td>SubArcFace [11]</td><td>74.5</td><td>86.7</td><td>68.7</td><td>86.6</td><td>71.6</td><td>86.7</td><td>78.7</td><td>84.3</td><td>75.1</td><td>83.4</td><td>76.9</td><td>83.8</td></tr></table>
148
+
149
+ Table 4: Results on the Real to Real Benchmark track. Each column title indicates the chosen unknown class set, the other two sets serve as known.
150
+
151
+ <table><tr><td colspan="9">Real to Real Benchmark - DGCNN [47]</td><td colspan="7">Real to Real Benchmark - PointNet++ [34]</td><td></td></tr><tr><td>Method</td><td colspan="2">SR3 (easy)AUROC† FPR95↓</td><td colspan="2">SR2 (med)AUROC† FPR95↓</td><td colspan="2">SR1 (hard)AUROC† FPR95↓</td><td colspan="2">AvgAUROC† FPR95↓</td><td colspan="2">SR3 (easy)AUROC† FPR95↓</td><td colspan="2">SR2 (med)AUROC† FPR95↓</td><td colspan="2">SR1 (hard)AUROC† FPR95↓</td><td>AvgAUROC† FPR95↓</td><td></td></tr><tr><td>MSP [18]</td><td>83.0</td><td>69.4</td><td>72.0</td><td>88.7</td><td>57.5</td><td>90.3</td><td>70.8</td><td>82.8</td><td>88.1</td><td>67.3</td><td>80.6</td><td>84.0</td><td>73.7</td><td>80.3</td><td>80.8</td><td>77.2</td></tr><tr><td>MLS [45]</td><td>84.9</td><td>58.2</td><td>79.0</td><td>81.0</td><td>54.0</td><td>92.8</td><td>72.6</td><td>77.3</td><td>89.4</td><td>53.8</td><td>83.4</td><td>73.1</td><td>76.4</td><td>75.3</td><td>83.0</td><td>67.4</td></tr><tr><td>ODIN [27]</td><td>84.9</td><td>58.2</td><td>79.0</td><td>80.9</td><td>54.0</td><td>92.8</td><td>72.6</td><td>77.3</td><td>90.2</td><td>47.9</td><td>83.3</td><td>71.7</td><td>76.3</td><td>76.8</td><td>83.3</td><td>65.5</td></tr><tr><td>Energy [28]</td><td>84.8</td><td>59.7</td><td>79.1</td><td>81.4</td><td>53.8</td><td>93.2</td><td>72.6</td><td>78.1</td><td>89.5</td><td>50.6</td><td>81.6</td><td>75.8</td><td>76.6</td><td>75.5</td><td>82.6</td><td>67.3</td></tr><tr><td>GradNorm [21]</td><td>77.5</td><td>73.3</td><td>73.3</td><td>87.4</td><td>51.0</td><td>92.9</td><td>67.2</td><td>84.5</td><td>88.5</td><td>50.7</td><td>77.4</td><td>75.3</td><td>75.2</td><td>76.8</td><td>80.4</td><td>67.6</td></tr><tr><td>ReAct [41]</td><td>87.6</td><td>54.0</td><td>79.0</td><td>78.6</td><td>58.9</td><td>93.1</td><td>75.1</td><td>75.3</td><td>90.3</td><td>48.9</td><td>82.4</td><td>75.8</td><td>75.4</td><td>77.6</td><td>82.7</td><td>67.4</td></tr><tr><td>NF</td><td>76.9</td><td>77.3</td><td>71.7</td><td>82.7</td><td>61.8</td><td>86.2</td><td>70.2</td><td>82.1</td><td>88.0</td><td>47.7</td><td>80.6</td><td>68.2</td><td>75.6</td><td>81.4</td><td>81.4</td><td>65.8</td></tr><tr><td>OE+mixup [19]</td><td>76.8</td><td>77.8</td><td>74.9</td><td>87.2</td><td>57.6</td><td>89.9</td><td>69.8</td><td>85.0</td><td>72.6</td><td>83.5</td><td>72.0</td><td>88.5</td><td>62.5</td><td>87.8</td><td>69.0</td><td>86.6</td></tr><tr><td>Cosine proto</td><td>90.0</td><td>43.7</td><td>78.5</td><td>75.3</td><td>65.5</td><td>85.7</td><td>78.0</td><td>68.2</td><td>91.0</td><td>41.0</td><td>82.1</td><td>78.2</td><td>77.6</td><td>75.6</td><td>83.6</td><td>64.9</td></tr><tr><td>CE (L2)</td><td>83.1</td><td>59.3</td><td>74.5</td><td>77.2</td><td>67.1</td><td>86.8</td><td>74.9</td><td>74.4</td><td>85.1</td><td>64.4</td><td>78.9</td><td>83.9</td><td>73.2</td><td>79.1</td><td>79.1</td><td>75.8</td></tr><tr><td>SubArcface [11]</td><td>86.7</td><td>58.5</td><td>78.4</td><td>76.1</td><td>65.0</td><td>84.0</td><td>76.7</td><td>72.9</td><td>87.1</td><td>61.3</td><td>78.9</td><td>76.9</td><td>73.7</td><td>81.4</td><td>79.9</td><td>73.2</td></tr></table>
152
+
153
+ # 4.3 Synthetic to Real Benchmark
154
+
155
+ Training on synthetic data is fundamental, especially when only a few real-world samples are available for a given task. This is often the case for 3D point cloud learning, for which ScanObjectNN [44] is one of the largest publicly available real-world object datasets despite containing fewer than 3k samples. We thus analyze how models trained on synthetic data perform when tested on real-world data.
156
+
157
+ How does the OOD detection performance trend change when testing on real-world data? Table 3 provides an overview of the results on the Synthetic to Real benchmark. With respect to Table 1 we notice a general degradation in performance due to the domain shift between train (synthetic) and test (real-world) data. Interestingly, the PointNet++ backbone seems to be more robust to the domain shift than DGCNN. For example, the MSP baseline with PointNet++ outperforms the DGCNN counterpart by 8.9 pp in terms of AUROC and 7.4 pp in terms of FPR95. We include a more comprehensive analysis of the impact of the backbone used in the Synthetic to Real benchmark in Fig. 5 (middle). Most of the methods that performed well in the Synthetic benchmark turn out to be less robust than the simple MSP baseline, and thus can no longer outperform it. This is true for the vast majority of discriminative methods. For the VAE, a discussion similar to the one for its synthetic counterpart holds. NF performs consistently on both backbones, with a clear AUROC improvement of 4.6 pp over MSP when applied on top of DGCNN and only a slight improvement for PointNet++. In the case of OE+mixup, the generated outliers used to finetune the classifier model are not representative of the real-world test domain and thus do not allow for improvement over the MSP baseline.
158
+
159
+ The results of distance based methods are highly dependent on the specific backbone chosen. Cosine proto performs particularly well on PointNet++, but fails on DGCNN, most likely because DGCNN prototypes trained on synthetic data are not representative of the real-world test distribution. A similar consideration can be made for CE $(L^2)$, confirming the robustness of PointNet++ to the domain shift. Finally, SubArcFace demonstrates its reliability, as it achieves good results on both backbones and the best overall results on average.
160
+
161
+ For both MSP and SubArcFace we studied the impact of several backbones also considering the closed set performance as shown in the middle part of Fig. 5. The plot shows a linearly growing
162
+
163
+ ![](images/d2ea8947eb22582c9037ea16ac30efa2e10144262776cc5bc72cea1ea57290f0.jpg)
164
+
165
+ ![](images/3a5865391073bea960461123a8530cc305d28484c041e5b6c9b118dd746ac669.jpg)
166
+
167
+ ![](images/776642ed17ddbedbcc8ddfc7ea04689f22ff4ead300abd029007e9976076a8ee.jpg)
168
+
169
+ ![](images/f473e81dae66853ef8bd239489e660967da091afc489170b93575e7d7298c189.jpg)
170
+ Figure 7: AUROC scores across methods and backbones for the 3DOS benchmark tracks indicated by the respective titles.
171
+
172
+ trend for both methods and the results also confirm the advantage of PointNet++ over more complex networks.
173
+
174
+ # 4.4 Real to Real Benchmark
175
+
176
+ To complete our analysis we run the most relevant methods on the Real to Real Benchmark and present the results in Table 4. Overall, the trend for all approaches is consistent with what was observed in the Synthetic to Real case. The Cosine proto approach, which already demonstrated effectiveness with PointNet++ in the Synthetic to Real benchmark, now ranks first for both DGCNN and PointNet++. We also highlight that PointNet++ maintains better performance than DGCNN, confirming its robustness when dealing with noisy and corrupted real-world data.
177
+
178
+ For both MSP and Cosine proto we studied the impact of several backbones also considering the closed set performance as shown in the right part of Fig. 5.
179
+
180
+ # 5 Conclusions
181
+
182
+ We presented 3DOS, the first benchmark for 3D Open Set learning that takes into account several settings and three scenarios with different types of distributional shifts. Our analysis reveals that cutting-edge 2D Open Set methods do not easily transfer their state-of-the-art performance to 3D data, with simple representation learning approaches such as CE $(L^2)$, SubArcFace and Cosine proto often outperforming them. Furthermore, the performance of 3D Open Set methods depends on the chosen backbone: PointNet++ has proven to be extremely robust in processing real-world data, even across domains, outperforming more recent and complex networks. The point density may be an issue for baseline approaches but has a minimal impact on distance-based strategies such as CE $(L^2)$. Finally, Open Set learning on 3D data becomes extremely difficult when dealing with the combination of semantic and domain shift.
183
+
184
+ Figure 7 depicts a summary overview of the three studied scenarios, indicating that the Synthetic to Real is the most challenging case, followed by the Real to Real and finally by the Synthetic Benchmark. This confirms that the domain shift between synthetic and real data adds extra challenges over the semantic shift. Moreover, it is interesting to notice that the improvement provided by the best Open Set methods over the MLS/MSP baselines is quite visible in the Synthetic Benchmark (DGCNN SubArcFace > MLS, +3.1 AUROC), but is reduced in the Synthetic to Real (PointNet++ Cosine Proto > MSP, +2.6 AUROC) and Real to Real cases (PointNet++ Cosine Proto > MLS, +0.6 AUROC), which clearly calls for new approaches and reveals room for improvement.
185
+
186
+ We hope that this benchmark will serve as a solid foundation for future research in this area, pushing for the development of Open Set methods tailored for 3D data and able to exploit their peculiarity.
187
+
188
+ Acknowledgements We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. We also acknowledge the support of the European H2020 Elise project (www.elise-ai.eu).
189
+
190
+ # References
191
+
192
+ [1] D. Abati, A. Porrello, S. Calderara, and R. Cucchiara. Latent Space Autoregression for Novelty Detection. In CVPR, 2019.
193
+ [2] A. Alliegro, D. Boscaini, and T. Tommasi. Joint supervised and self-supervised learning for 3d real world challenges. In ICPR, 2021.
194
+ [3] A. Bhardwaj, S. Pimpale, S. Kumar, and B. Banerjee. Empowering knowledge distillation via open set recognition for robust 3d point cloud classification. Pattern Recognition Letters, 151:172-179, 2021.
195
+ [4] A. Cardace, R. Spezialetti, P. Z. Ramirez, S. Salti, and L. D. Stefano. Refrec: Pseudo-labels refinement via shape reconstruction for unsupervised 3d domain adaptation. In 3DV, 2021.
196
+ [5] J. Cen, P. Yun, J. Cai, M. Wang, and M. Liu. Open-set 3d object detection. In 3DV, 2021.
197
+ [6] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. preprint arXiv:1512.03012, 2015.
198
+ [7] G. Chen, P. Peng, X. Wang, and Y. Tian. Adversarial reciprocal points learning for open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
199
+ [8] G. Chen, L. Qiao, Y. Shi, P. Peng, J. Li, T. Huang, S. Pu, and Y. Tian. Learning open set network with discriminative reciprocal points. In ECCV, 2020.
200
+ [9] J. Chen, Y. Li, X. Wu, Y. Liang, and S. Jha. Atom: Robustifying out-of-distribution detection using outlier mining. In ECML, 2021.
201
+ [10] S. Choi and S.-Y. Chung. Novelty detection via blurring. In ICLR, 2020.
202
+ [11] J. Deng, J. Guo, T. Liu, M. Gong, and S. Zafeiriou. Sub-center arcface: Boosting face recognition by large-scale noisy web faces. In ECCV, 2020.
203
+ [12] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019.
204
+ [13] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using Real NVP. In ICLR, 2017.
205
+ [14] X. Du, Z. Wang, M. Cai, and Y. Li. VOS: Learning What You Don't Know by Virtual Outlier Synthesis. In ICLR, 2022.
206
+ [15] D. Fontanel, F. Cermelli, M. Mancini, and B. Caputo. Detecting anomalies in semantic segmentation with prototypes. In CVPR-W, 2021.
207
+ [16] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In ICML, 2017.
208
+ [17] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. Pct: Point cloud transformer. Computational Visual Media, 7(2):187-199, 2021.
209
+ [18] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017.
210
+ [19] D. Hendrycks, M. Mazeika, and T. Dietterich. Deep anomaly detection with outlier exposure. In ICLR, 2019.
211
+ [20] H. Huang, Z. Li, L. Wang, S. Chen, B. Dong, and X. Zhou. Feature space singularity for out-of-distribution detection. In SafeAI-W, 2021.
212
+ [21] R. Huang, A. Geng, and Y. Li. On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS, 2021.
213
+ [22] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan. Supervised contrastive learning. In NeurIPS, 2020.
214
+
215
+ [23] S. Kong and D. Ramanan. OpenGAN: Open-Set Recognition via Open Data Generation. In ICCV, 2021.
216
+ [24] D. Lee, J. Lee, J. Lee, H. Lee, M. Lee, S. Woo, and S. Lee. Regularization strategy for point cloud via rigidly mixed sample. In CVPR, 2021.
217
+ [25] K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS, 2018.
218
+ [26] Y. Li and N. Vasconcelos. Background data resampling for outlier-aware classification. In CVPR, 2020.
219
+ [27] S. Liang, Y. Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018.
220
+ [28] W. Liu, X. Wang, J. Owens, and Y. Li. Energy-based out-of-distribution detection. In NeurIPS, 2020.
221
+ [29] Y. Liu, B. Fan, S. Xiang, and C. Pan. Relation-shape convolutional neural network for point cloud analysis. In CVPR, 2019.
222
+ [30] X. Ma, C. Qin, H. You, H. Ran, and Y. Fu. Rethinking network design and local geometry in point cloud: A simple residual MLP framework. In ICLR, 2022.
223
+ [31] M. Masuda, R. Hachiuma, R. Fujii, H. Saito, and Y. Sekikawa. Toward unsupervised 3d point cloud anomaly detection using variational autoencoder. In ICIP, 2021.
224
+ [32] L. Neal, M. Olson, X. Fern, W.-K. Wong, and F. Li. Open set learning with counterfactual images. In ECCV, 2018.
225
+ [33] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.
226
+ [34] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017.
227
+ [35] J. Ren, P. J. Liu, E. Fertig, J. Snoek, R. Poplin, M. Depristo, J. Dillon, and B. Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In NeurIPS, 2019.
228
+ [36] Y. Ruan, Y. Dubois, and C. J. Maddison. Optimal representations for covariate shift. In ICLR, 2022.
229
+ [37] M. Rudolph, B. Wandt, and B. Rosenhahn. Same same but differnet: Semi-supervised defect detection with normalizing flows. In WACV, 2021.
230
+ [38] C. S. Sastry and S. Oore. Detecting out-of-distribution examples with Gram matrices. In ICML, 2020.
231
+ [39] V. Sehwag, M. Chiang, and P. Mittal. SSD: A Unified Framework for Self-Supervised Outlier Detection. In ICLR, 2021.
232
+ [40] Y. Shu, Y. Shi, Y. Wang, T. Huang, and Y. Tian. P-ODN: Prototype-based open deep network for open set recognition. Nature Scientific Reports, 10(1):1-13, 2020.
233
+ [41] Y. Sun, C. Guo, and Y. Li. ReAct: Out-of-distribution Detection With Rectified Activations. In NeurIPS, 2021.
234
+ [42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
235
+ [43] J. Tack, S. Mo, J. Jeong, and J. Shin. CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances. In NeurIPS, 2020.
236
+ [44] M. A. Uy, Q.-H. Pham, B.-S. Hua, D. T. Nguyen, and S.-K. Yeung. Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data. In ICCV, 2019.
237
+
238
+ [45] S. Vaze, K. Han, A. Vedaldi, and A. Zisserman. Open-Set Recognition: A Good Closed-Set Classifier is All You Need. In ICLR, 2022.
239
+ [46] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu. Cosface: Large margin cosine loss for deep face recognition. In CVPR, 2018.
240
+ [47] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic Graph CNN for Learning on Point Clouds. ACM Transactions on Graphics (TOG), 38(5):1-12, 2019.
241
+ [48] K. Wong, S. Wang, M. Ren, M. Liang, and R. Urtasun. Identifying unknown instances for autonomous driving. In CoRL, 2019.
242
+ [49] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In CVPR, 2015.
243
+ [50] T. Xiang, C. Zhang, Y. Song, J. Yu, and W. Cai. Walk in the cloud: Learning curves for point clouds shape analysis. In ICCV, 2021.
244
+ [51] Z. Xiao, Q. Yan, and Y. Amit. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS, 2020.
245
+ [52] M. Xu, J. Zhang, Z. Zhou, M. Xu, X. Qi, and Y. Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In AAAI, 2021.
246
+ [53] J. Yang, H. Wang, L. Feng, X. Yan, H. Zheng, W. Zhang, and Z. Liu. Semantically coherent out-of-distribution detection. In ICCV, 2021.
247
+ [54] J. Yang, K. Zhou, Y. Li, and Z. Liu. Generalized out-of-distribution detection: A survey. preprint arXiv:2110.11334, 2021.
248
+ [55] Y. Yang, C. Feng, Y. Shen, and D. Tian. FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation. In CVPR, 2018.
249
+ [56] Q. Yu and K. Aizawa. Unsupervised out-of-distribution detection by maximum classifier discrepancy. In ICCV, 2019.
250
+ [57] H. Zhang, A. Li, J. Guo, and Y. Guo. Hybrid models for open set recognition. In ECCV, 2020.
251
+ [58] J. Zhang, N. Inkawich, Y. Chen, and H. Li. Fine-grained out-of-distribution detection with mixup outlier exposure. preprint arXiv:2106.03917, 2021.
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c1a6fded605d3d742dd85f01a5604b739f2c90f3570e17b385db53a33abeac0e
3
+ size 619132
3dostowards3dopensetlearningbenchmarkingandunderstandingsemanticnoveltydetectiononpointclouds/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2aaec03a7fd31a3da6fb8aedca33917c98c5756856d3199fe3e8f37a91871f7c
3
+ size 313355
4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:88efa0805f2e5ef3a4231ee8df9f5fac70197ccec2f070eb798867af304d4fd2
3
+ size 83016
4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2260a381184b721b762b51ae980740378d2d0e4abc5a0451fbd75e6792e316f9
3
+ size 98822
4dunsupervisedobjectdiscovery/6ad6a509-61be-4729-9c43-0ccd04335532_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8914fef373acc58a09dda24633d8897495326385af3cdccf5ca49efa5b49e44b
3
+ size 9744933
4dunsupervisedobjectdiscovery/full.md ADDED
@@ -0,0 +1,334 @@
1
+ # 4D Unsupervised Object Discovery
2
+
3
+ Yuqi Wang $^{1,2}$ Yuntao Chen $^{3}$ Zhaoxiang Zhang $^{1,2,3}$
4
+
5
+ <sup>1</sup> Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
6
+
7
+ $^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $^{3}$ Centre for Artificial Intelligence and Robotics, HKISI_CAS
8
+
9
+ {wangyuqi2020, zhaoxiang.zhang}@ia.ac.cn chenyuntao08@gmail.com
10
+
11
+ # Abstract
12
+
13
+ Object discovery is a core task in computer vision. While rapid progress has been made in supervised object detection, its unsupervised counterpart remains largely unexplored. As data volumes grow, the high cost of annotation is the major limitation hindering further study. Therefore, discovering objects without annotations has great significance. However, this task seems impractical on still images or point clouds alone due to the lack of discriminative information. Previous studies overlook the crucial temporal information and the constraints that naturally come with multi-modal inputs. In this paper, we propose 4D unsupervised object discovery, jointly discovering objects from 4D data - 3D point clouds and 2D RGB images with temporal information. We present the first practical approach for this task by proposing a ClusterNet on 3D point clouds, which is jointly and iteratively optimized with a 2D localization network. Extensive experiments on the large-scale Waymo Open Dataset suggest that the localization network and ClusterNet achieve competitive performance on both class-agnostic 2D object detection and 3D instance segmentation, bridging the gap between unsupervised methods and fully supervised ones. Codes and models will be made available at https://github.com/Robertwyq/LSMOL.
14
+
15
+ # 1 Introduction
16
+
17
+ Computer vision researchers have been trying to locate objects in complex scenes without human annotations for a long time. Current supervised methods achieve remarkable performance on 2D detection [31, 15, 30, 38, 6] and 3D detection [49, 27, 34, 33, 47], benefiting from high-capacity models and massive annotated data, but tend to fail for scenarios that lack training data. Therefore, unsupervised object discovery is critical for relieving the demand for training labels in deep networks, where raw data are infinite and cheap, but annotations are limited and expensive.
18
+
19
+ However, unsupervised object discovery in complex scenes was long believed to be impractical. Only a few studies have addressed this problem, achieving limited performance in simple scenarios that is far inferior to supervised models. Recent methods [35, 42] discover objects in 2D still images by using self-supervised learning [7, 43] to distinguish primary objects from the background, and then fine-tune a localization network using the pseudo labels. Although these methods outperform the previous generation of object proposal methods [39, 2, 50], their detection results are still far behind supervised models. Furthermore, contrastive-learning-guided methods have difficulty distinguishing different instances within the same category. Alternatively, the 3D point cloud can be
20
+
21
+ decomposed into different class-agnostic instances based on proximity cues [5, 4], but, due to the lack of semantic information, it is difficult to identify the foreground instances. These problems can be mitigated by the complementary characteristics of 2D RGB images and 3D point clouds: the point cloud data provides accurate location information, while the RGB data contains rich texture and color information. Therefore, [37] proposed to aid unsupervised object detection with LiDAR clues, but it still depends on self-supervised models [14] to identify foreground objects. In summary, all previous methods rely heavily on self-supervised learning models and overlook the important information in the time dimension.
22
+
23
+ To these ends, we propose a new task named 4D unsupervised object discovery: discovering objects utilizing 4D data - 3D point clouds and 2D RGB images with temporal information [25]. The task requires jointly discovering objects on RGB images, as in 2D object detection, and objects on 3D point clouds, as in 3D instance segmentation. Thanks to the popularization of LiDAR sensors in autonomous driving and consumer electronics (e.g., iPad Pro), such 4D data has become much more readily available, indicating the great potential of this task for general application.
24
+
25
+ In this paper, we present the first practical solution for 4D unsupervised object discovery. We propose a joint iterative optimization of a ClusterNet on the 3D point cloud and a localization network on RGB images, exploiting the spatio-temporal consistency across modalities. Specifically, the ClusterNet is initially trained with supervision from motion cues, which can be obtained from temporally consecutive point clouds. The 3D instance segmentation output by ClusterNet can be further projected to the 2D image as supervision for the localization network. Conversely, 2D detection can also help to refine the 3D instance segmentation by utilizing appearance information. In this way, the 2D localization network and 3D ClusterNet can benefit from each other through joint optimization, with temporal information serving as a constraint in the optimization.
26
+
27
+ Our main contributions are as follows: (1) we propose a new task termed 4D Unsupervised Object Discovery, aiming at jointly discovering objects in 2D images and 3D point clouds without manual annotations. (2) We propose a ClusterNet on 3D point clouds for 3D instance segmentation, which is jointly and iteratively optimized with a 2D localization network. (3) Experiments on the Waymo Open Dataset [36] suggest the feasibility of the task and the effectiveness of our approach. We outperform state-of-the-art unsupervised object discovery by a significant margin, surpass supervised methods trained with limited annotations, and are even comparable to supervised methods with full annotations.
28
+
29
+ # 2 Related work
30
+
31
+ # 2.1 Supervised object detection
32
+
33
+ Object detection from Image. 2D object detection has made great progress in recent years. Two-stage methods represented by the R-CNN family [13, 31, 15] extract region proposals first and refine them with deep neural networks. One-stage methods like YOLO [30], SSD [22] and RetinaNet [21] predict class-wise bounding boxes in one shot based on anchors. FCOS [38] and CenterNet [48] further detect objects without predefined anchors.
34
+
35
+ Object detection from Point Cloud. LiDAR-based 3D object detection has developed rapidly along with autonomous driving. Point-based methods [28, 27, 46] directly estimate 3D bounding boxes from point clouds. Their computational efficiency depends on the number of points, so these methods are usually suitable for indoor scenes. Voxel-based methods [49, 45, 33], which operate on the voxelized point cloud, are capable of handling large outdoor scenes. However, voxel resolution can greatly affect performance and is limited by computational constraints. SECOND [45] and PV-RCNN [33] further apply sparse 3D convolutions to reduce computation. CenterPoint [47] extends the anchor-free idea from 2D detection and proposes a center-based bird's-eye view (BEV) representation.
36
+
37
+ # 2.2 Unsupervised object discovery
38
+
39
+ Bottom-up clustering. Clustering methods combine similar elements based on proximity cues and are applicable to both point clouds and image data. Selective Search [39], MCG [2] and Edge Box [50] can propose a large number of candidate objects with the help of appearance cues, but they struggle to separate objects from the background. Similarly, point cloud data can be decomposed into distinct segments by density-based methods [10, 5, 32], but these methods are unable to determine which segments are foreground.
40
+
41
+ ![](images/dec8861adb755dfa0c15e40ce28a395c9b43fde14241f67a251ce095a7bc28c9.jpg)
+
+ Figure 1: The pipeline of 4D unsupervised object discovery. The input is the corresponding 2D frames $I^{t}$ and 3D point clouds $P^{t}$. The task needs to discover objects on both images and point clouds without manual annotations. The overall process can be divided into two steps: (1) 3D instance initialization and (2) joint iterative optimization. (1) 3D instance initialization: motion cues serve as the initial cues for training the ClusterNet. (2) Joint iterative optimization: the localization network (outputting $b_1^t,\dots,b_k^t$) and ClusterNet (outputting $\xi_1^t,\dots,\xi_n^t$) are optimized jointly by 2D-3D cues and temporal cues.
81
+
82
+ Top-down learning. Recently, self-supervised learning methods [8, 14, 7, 43] have become capable of learning discriminative features without labels. Therefore, many methods attempt to exploit such properties to discriminate foreground objects without manual annotations. LOST [35] utilized a pre-trained DINO [7] to extract the primary object's attention from the background as a pseudo label and then fine-tuned an object detector. FreeSOLO [42] further proposed a unified framework for generating pseudo masks and iterative training. However, the performance relies heavily on the pre-trained self-supervised model, which determines the upper limit of such methods. Furthermore, such attention-based methods learned by contrastive learning also have difficulty distinguishing different instances within the same category. Our approach adopts top-down learning as well, but instead of relying on an external self-supervised model, we exploit geometric information to discover objects in the scene naturally.
83
+
84
+ # 3 Algorithm
85
+
86
+ # 3.1 Task definition and algorithm overview
87
+
88
+ The task of 4D Unsupervised Object Discovery is defined as follows. As shown in Figure 1, the input is a set of video clips recorded as both 2D video frames $I^{t}$ and 3D point clouds $P^{t}$ at frame $t$ during training. Since the point cloud and image data provide complementary information about location and appearance, they serve as natural cues that mutually guide the training process. During inference, the trained localization network $L_{\theta_1}$ is applied to still images for 2D object detection, and the trained ClusterNet $N_{\theta_2}$ is applied to the point cloud for 3D instance segmentation.
89
+
90
+ $$
91
+ \left\{b _ {1} ^ {t}, \dots , b _ {k} ^ {t} \right\} = L _ {\theta_ {1}} \left(I ^ {t}\right), \quad \left\{\xi_ {1} ^ {t}, \dots , \xi_ {n} ^ {t} \right\} = N _ {\theta_ {2}} \left(P ^ {t}\right) \tag {1}
92
+ $$
93
+
94
+ $\{b_1^t,\dots,b_k^t\}$ are the 2D bounding box predictions by the localization network $L_{\theta_1}$ at frame $t$ . $\{\xi_1^t,\dots,\xi_n^t\}$ are the 3D instance segments output by ClusterNet $N_{\theta_2}$ . $k$ and $n$ denote the number of predicted boxes and segments, respectively.
95
+
96
+ $$
97
+ \theta_ {1} ^ {*}, \theta_ {2} ^ {*} = \underset {\theta_ {1}, \theta_ {2}} {\arg \min } f \left(L _ {\theta_ {1}} \left(I ^ {t}\right), N _ {\theta_ {2}} \left(P ^ {t}\right), t\right) \tag {2}
98
+ $$
99
+
100
+ Our solution exploits the spatio-temporal consistency between 2D video frames and 3D point clouds. The algorithm can be formulated as the joint optimization function $f$ in Eq. 2. $\theta_{1}$ and $\theta_{2}$ are the network parameters to be optimized. The temporal information $t$ serves as a natural constraint in the function $f$ . The localization network $L_{\theta_1}$ uses Faster R-CNN by default. We propose a ClusterNet $N_{\theta_2}$ for 3D instance segmentation; its implementation details are discussed in Section 3.2. The major challenge is optimizing the function $f$ without annotations. To overcome this challenge, we rely on motion cues, 2D-3D cues and temporal cues as supervision. All of these cues are extracted naturally from the informative 4D data. (1) Motion cues, represented as 3D scene flow, can distinguish movable segments from the background and are used to train the ClusterNet $N_{\theta_2}$ initially. (2) 2D-3D cues, reflecting the mapping between LiDAR points and RGB pixels, can be used as a bridge to optimize $L_{\theta_1}$ and $N_{\theta_2}$ iteratively: the output of either network can be used to further optimize the other. (3) Temporal cues, encouraging temporally consistent discovery in the 2D and 3D views, serve as constraints in the joint optimization. More details are introduced in Section 3.3.
101
+
102
+ # 3.2 ClusterNet
103
+
104
+ ClusterNet generates 3D instance segmentation from raw point clouds. As shown in Figure 2, given a point cloud $P \in \{(x,y,z)_i,i = 1,\dots,N\}$ , the network is able to give each point a class type $y_{i} \in \{1,0\}$ (indicating foreground or background) and instance ID $d_{i} \in \{1,\dots,n\}$ . Thus we can obtain $n$ candidate segments $\xi_{i} = \{(x,y,z)_{j}|y_{j} = 1,d_{j} = i\}$ on the point cloud. $n$ represents the number of instance segments in one frame of point cloud, and it is different in each frame.
105
+
106
+ Network design. The model first voxelized 3D points $(x,y,z)_i$ and extract voxelized features by a transformer-based feature extractor [11]. We further project these voxelized features back to each point. The feature dimensions of points become $3 + C$ (3 means $XYZ$ and $C$ denotes the embedding dim). Inspired by the VoteNet [27], we leverage a voting module to predict the class type and center offset for each point. Specifically, the voting module is realized with a multi-layer perception (MLP) network. The voting module takes point feature $f_{i}\in \mathcal{R}^{3 + C}$ and outputs the Euclidean space offset $\Delta x_{i}\in \mathcal{R}^{3}$ and class type prediction $y_{i}$ . The final loss is the weighted sum of the class prediction and center regression:
107
+
108
+ $$
109
+ \mathcal {L} = \mathcal {L} _ {\text {c e n t e r}} + \lambda \mathcal {L} _ {\text {c l s}} \tag {3}
110
+ $$
111
+
112
+ The class prediction loss $\mathcal{L}_{cls}$ choose the focal loss [21] to balance the points of foreground and background. The predicted 3D offset $\Delta x_{i}$ is supervised by a regression loss:
113
+
114
+ $$
115
+ \mathcal {L} _ {\text {c e n t e r}} = \frac {1}{M} \Sigma_ {i} \| \Delta \boldsymbol {x} _ {i} - \Delta \boldsymbol {x} _ {i} ^ {*} \| _ {1} \mathbb {I} \left[ y _ {i} ^ {*} = 1 \right] \tag {4}
116
+ $$
117
+
118
+ where $\mathbb{1}[y_i^* = 1]$ indicates whether a point belongs to the foreground according to the ground truth $y_{i}^{*}$ . $M$ is the total number of foreground points. $\Delta x_{i}^{*}$ is the ground truth offset from the point position $x_{i}$ to the instance center it belongs to. According to spatial proximity, we could further group the points into candidate instance segments with the predicted class type and center offset.
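+
+ The following is a minimal PyTorch-style sketch of this loss (Eq. 3-4): a binary focal loss over the per-point foreground logits plus an $L_1$ center-offset regression averaged over foreground points, combined with the weight $\lambda$ . The tensor shapes and variable names are assumptions; the values $\gamma = 2.0$ , $\alpha = 0.8$ and $\lambda = 5$ from the implementation details in Section 4.1 are used as defaults.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def clusternet_loss(cls_logits, offsets, fg_labels, gt_offsets,
+                     lam=5.0, gamma=2.0, alpha=0.8):
+     """Sketch of Eq. 3-4: L = L_center + lam * L_cls.
+     cls_logits: (N,) foreground logits, offsets: (N, 3) predicted center offsets,
+     fg_labels: (N,) {0, 1} pseudo labels, gt_offsets: (N, 3) offsets to instance centers."""
+     p = torch.sigmoid(cls_logits)
+     # Binary focal loss balancing foreground and background points.
+     pt = torch.where(fg_labels > 0, p, 1.0 - p)
+     at = torch.where(fg_labels > 0, torch.full_like(p, alpha), torch.full_like(p, 1 - alpha))
+     l_cls = (-at * (1.0 - pt) ** gamma * torch.log(pt.clamp(min=1e-6))).mean()
+     # L1 center regression averaged over foreground points only (Eq. 4).
+     fg = fg_labels > 0
+     l_center = F.l1_loss(offsets[fg], gt_offsets[fg]) if fg.any() else offsets.sum() * 0.0
+     return l_center + lam * l_cls
+ ```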
119
+
120
+ 3D instance initialization. Obtaining the supervision signal without manual annotation is more challenging than the network design itself. The model is initially trained with motion cues. Specifically, motion provides strong cues for identifying foreground points and grouping parts into objects, since moving points generally belong to objects and share the same motion pattern when they belong to the same instance. We estimate the 3D scene flow $S^t$ from the sequence of point clouds $P^t$ using the unsupervised method [19] at frame $t$ . The 3D scene flow describes the motion of all the 3D points in the scene, represented as $S^t = \{(\nu_x, \nu_y, \nu_z)_i^t, i = 1, \dots, N\}$ .
121
+
122
+ Combining the scene flow $(\nu_{x},\nu_{y},\nu_{z})_{i}$ and point location in 2D $(u,\nu)_i$ and 3D $(x,y,z)_i$ , we can obtain $(u,\nu ,x,y,z,\nu_{x},\nu_{y},\nu_{z})_{i}$ for each point $p_i$ in $P^t$ . Then, we cluster the points with HDBSCAN [5] to divide the scan into $m$ segments, which will be the instance candidates $\xi_1,\ldots ,\xi_m$ . However, these instance candidates contain both foreground and background segments. We further assign each point $p_i$ of segment $\xi_j$ a binary label $y_{i}^{*}$ to distinguish foreground points using the motion cues (3D scene flow), as shown in Eq. 5.
123
+
124
+ $$
125
+ y _ {i} ^ {*} = \mathbb {1} \left[ \left\{\frac {1}{| \xi_ {j} |} \sum_ {p _ {i} \in \xi_ {j}} \mathbb {1} \left[ \| S ^ {t} \left(p _ {i}\right) \| _ {2} > \sigma \right] \right\} > \eta \right] \tag {5}
126
+ $$
127
+
128
+ ![](images/26fcce5a50f16113443106297d26ceb25063e6cf0cc1ce5504f495304892f92b.jpg)
129
+ Figure 2: Overview of the ClusterNet architecture. A backbone extracts voxelized features for point clouds, given an input point cloud of $N$ points with $XYZ$ coordinates. Each point predicts a class type and center through a voting module. Then the points are clustered into instance segmentation.
130
+
131
+ in which $\mathbb{1}[]$ is the indicator function. $|\xi_j|$ represents the total number of points belonging to segment $\xi_{j}$ . $p_i\in \xi_j$ is a point in segment $\xi_{j}$ . $\| S^t (p_i)\| _2$ represents the velocity of the point $p_i$ , and $\sigma$ denotes the threshold for velocity. $\eta$ determines the ratio of being a foreground object. $\sigma = 0.05$ and $\eta = 0.8$ by default. $y_{i}^{*} = 1$ means the point belongs to foreground segments. When the proportion of moving points in the segment is greater than the threshold $\eta$ , we regard it as a foreground object, and all the points it contains are labelled as foreground. These foreground segments selected by motion serve as the pseudo ground truth to train the ClusterNet initially.
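+
+ A minimal sketch of this initialization step is shown below: points are clustered with HDBSCAN on the concatenated $(u,\nu ,x,y,z,\nu_{x},\nu_{y},\nu_{z})$ features, and a cluster is marked foreground when the fraction of its points moving faster than $\sigma$ exceeds $\eta$ (Eq. 5). The array layout and the omission of any feature scaling are assumptions of this sketch; $\sigma = 0.05$ , $\eta = 0.8$ and a min cluster size of 15 follow the paper's settings, and the hdbscan package is assumed to be installed.
+
+ ```python
+ import numpy as np
+ import hdbscan
+
+ def motion_pseudo_labels(points_uvxyz, scene_flow, sigma=0.05, eta=0.8,
+                          min_cluster_size=15):
+     """points_uvxyz: (N, 5) pixel + 3D coords, scene_flow: (N, 3) per-point flow.
+     Returns per-point foreground labels y* and cluster ids (Eq. 5)."""
+     feats = np.concatenate([points_uvxyz, scene_flow], axis=1)   # (u, v, x, y, z, vx, vy, vz)
+     cluster_ids = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(feats)
+     moving = np.linalg.norm(scene_flow, axis=1) > sigma           # per-point motion test
+     fg = np.zeros(len(feats), dtype=np.int64)
+     for cid in np.unique(cluster_ids):
+         if cid < 0:                                               # HDBSCAN noise points
+             continue
+         member = cluster_ids == cid
+         if moving[member].mean() > eta:                           # fraction of moving points
+             fg[member] = 1
+     return fg, cluster_ids
+ ```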
132
+
133
+ # 3.3 Joint iterative optimization
134
+
135
+ ClusterNet trained with the motion cues provides the initial weights for $\theta_{2}$ , which is the initialization (iter 0) of the joint iterative optimization. Although movable objects can be separated from the background with the motion cues, there are many static objects (e.g., parked cars or pedestrians waiting for traffic lights) in the scenes. Discovering both movable and static objects relies on further joint optimization by 2D-3D cues and temporal cues. In Section 3.3.1, we introduce the specific process of joint iterative optimization. Specifically, the 3D segments output by $N_{\theta_2}$ can be projected to the 2D image to train $L_{\theta_1}$ , and the 2D proposals output by $L_{\theta_1}$ can be lifted back to the 3D view to train $N_{\theta_2}$ . Temporal consistency ensures that objects appear continuously in the 2D and 3D views, which is a critical constraint in the optimization. The joint optimization can be iterated several times since the 2D localization network and 3D ClusterNet benefit from each other. In Section 3.3.2, we introduce the technical design for static object discovery.
136
+
137
+ # 3.3.1 Model training
138
+
139
+ In Eq. 6, our goal is to optimize $\theta_{1}$ and $\theta_{2}$ without annotations. $I^{t}$ and $P^{t}$ denote the RGB image and point cloud at frame $t$ . It is challenging to optimize both sets of parameters simultaneously, so we divide the optimization process into two iterative steps: a 2D step and a 3D step.
140
+
141
+ $$
+ \begin{aligned} \theta_{1}^{*}, \theta_{2}^{*} &= \underset{\theta_{1}, \theta_{2}}{\arg\min}\, f\left(L_{\theta_{1}}(I^{t}), N_{\theta_{2}}(P^{t}), t\right) \\ \text{2D Step: } \theta_{1}^{*} &= \underset{\theta_{1}}{\arg\min}\, f\left(L_{\theta_{1}}(I^{t}), N_{\theta_{2}}(P^{t}), t\right) \\ \text{3D Step: } \theta_{2}^{*} &= \underset{\theta_{2}}{\arg\min}\, f\left(L_{\theta_{1}}(I^{t}), N_{\theta_{2}}(P^{t}), t\right) \end{aligned} \tag{6}
+ $$
148
+
149
+ 2D step. In this step, $\theta_{2}$ is fixed and $\theta_{1}$ is optimized. Since the ClusterNet $N_{\theta_2}$ generates 3D instance segments $\xi_{1},\ldots ,\xi_{n}$ in 3D space, we can further project the 3D instance segments to the 2D image plane through the transformation $T_{cl}$ (from the LiDAR sensor to the camera) and the projection matrix $P_{pc}$ (from camera to pixels) defined by the camera intrinsics.
150
+
151
+ $$
152
+ \binom {\boldsymbol {u}} {1} = P _ {p c} T _ {c l} \binom {\boldsymbol {x}} {1} \tag {7}
153
+ $$
154
+
155
+ in which $\pmb{u}$ denotes the pixel location in the 2D image plane, and $\pmb{x}$ represents the 3D position of the LiDAR points. Hence we can obtain the object point sets $\{\omega_1,\dots,\omega_n\}$ in the 2D image plane by projecting the LiDAR points of the 3D instance segments $\{\xi_1,\dots,\xi_n\}$ . The 2D bounding boxes $\{b_1^*,\dots,b_n^*\}$ derived from the projected object point sets $\{\omega_1,\dots,\omega_n\}$ can then be used to optimize the weights of the localization network $L_{\theta_1}$ .
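+
+ A minimal sketch of this target generation, under assumed calibration conventions (a homogeneous $4 \times 4$ LiDAR-to-camera transform $T_{cl}$ and a $3 \times 4$ projection matrix $P_{pc}$ as in Eq. 7), projects each segment's points and takes the tight box around the pixels that land inside the image; the minimum-visible-point filter is illustrative.
+
+ ```python
+ import numpy as np
+
+ def segments_to_2d_boxes(segments, T_cl, P_pc, img_h, img_w):
+     """segments: list of (M_i, 3) arrays of LiDAR points, T_cl: (4, 4) LiDAR-to-camera
+     transform, P_pc: (3, 4) camera projection matrix (Eq. 7). Returns [x1, y1, x2, y2] boxes."""
+     boxes = []
+     for pts in segments:
+         homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)   # (M, 4)
+         cam = (P_pc @ T_cl @ homo.T).T                                  # (M, 3)
+         uv = cam[:, :2] / cam[:, 2:3]                                   # perspective divide
+         valid = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < img_w) \
+                 & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
+         if valid.sum() < 5:                                             # illustrative visibility filter
+             continue
+         u, v = uv[valid, 0], uv[valid, 1]
+         boxes.append([u.min(), v.min(), u.max(), v.max()])
+     return np.array(boxes)
+ ```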
156
+
157
+ 3D step. In this step, $\theta_{1}$ is fixed and $\theta_{2}$ is optimized. The localization network $L_{\theta_1}$ outputs 2D bounding box predictions $\{b_1^a,\dots,b_k^a\}$ based on image appearance information. This enables us to discover more objects in the scene (e.g., parked cars regarded as background by motion cues). We obtain the updated 2D object set $b^{*}$ by Eq. 8 (with the box IoU threshold set to 0.3 in Non-Maximum Suppression).
158
+
159
+ $$
160
+ b ^ {*} = N M S \left(\left\{b _ {1} ^ {*}, \dots , b _ {n} ^ {*} \right\} \cup \left\{b _ {1} ^ {a}, \dots , b _ {k} ^ {a} \right\}\right) \tag {8}
161
+ $$
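+
+ A minimal sketch of this merging step (Eq. 8): the boxes projected from the current 3D segments are pooled with the detector's boxes and deduplicated by class-agnostic NMS at an IoU threshold of 0.3. Giving all boxes equal score (and hence an arbitrary suppression order) is a simplifying assumption of this sketch.
+
+ ```python
+ import numpy as np
+
+ def box_iou(a, b):
+     """IoU of two [x1, y1, x2, y2] boxes."""
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+     area = lambda t: max(0.0, t[2] - t[0]) * max(0.0, t[3] - t[1])
+     union = area(a) + area(b) - inter
+     return inter / union if union > 0 else 0.0
+
+ def merge_boxes_nms(proj_boxes, det_boxes, iou_thr=0.3):
+     """Eq. 8 sketch: pool projected boxes with detector boxes and run class-agnostic NMS."""
+     boxes = np.concatenate([proj_boxes, det_boxes], axis=0)
+     keep, order = [], list(range(len(boxes)))
+     while order:
+         i = order.pop(0)
+         keep.append(i)
+         order = [j for j in order if box_iou(boxes[i], boxes[j]) < iou_thr]
+     return boxes[keep]
+ ```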
162
+
163
+ Since many 3D instance segments may previously have been labelled as background by the motion cues, we refine their labels with the help of the 2D object set $b^{*}$ . Although the projection from LiDAR to the image is non-invertible without dense depth maps, we can still utilize the mapping between LiDAR points and image pixels. This suggests using the LiDAR points within a 2D bounding box to relabel the 3D instance segments. However, a bounding box may contain LiDAR points corresponding to several different 3D instance segments. In practice, we only consider the primary segment $\xi_{j}$ (the one with the most points) inside the bounding box and relabel this primary segment as a foreground object. With the refined labels, we obtain the updated 3D object set $\xi^{*}$ of 3D instance segments $\{\xi_1^*,\dots,\xi_n^*\}$ , which is further used to optimize the weights of the ClusterNet $N_{\theta_2}$ .
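+
+ The relabeling described above can be sketched as follows: for each merged box, count the projected points of every candidate segment that fall inside it and promote the segment with the most points to foreground. The input layout (projected pixel coordinates plus a per-point candidate segment id, with $-1$ for background/noise) is an assumption of this sketch.
+
+ ```python
+ import numpy as np
+
+ def relabel_primary_segments(uv, segment_ids, boxes):
+     """uv: (N, 2) projected pixels of LiDAR points; segment_ids: (N,) candidate segment
+     id per point (-1 = background/noise); boxes: (K, 4) merged 2D boxes [x1, y1, x2, y2].
+     Returns the ids of segments promoted to foreground (the 'primary' segment per box)."""
+     foreground = set()
+     for x1, y1, x2, y2 in boxes:
+         inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
+         ids, counts = np.unique(segment_ids[inside], return_counts=True)
+         valid = ids >= 0
+         ids, counts = ids[valid], counts[valid]
+         if len(ids) > 0:
+             foreground.add(int(ids[np.argmax(counts)]))  # segment with the most points wins
+     return foreground
+ ```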
164
+
165
+ Temporal cues. Temporal information can be integrated into the 2D step and 3D step as extra constraints. As shown in Eq. 9, $b^{t}, \xi^{t}$ represent the predicted 2D bounding box set and predicted 3D segments set for frame $t$ by $L_{\theta_1}$ and $N_{\theta_2}$ . $b_{*}^{t}, \xi_{*}^{t}$ denote the pseudo annotation for 2D bounding boxes $(\{b_{1}^{*}, \dots, b_{n}^{*}\})$ and 3D instance segments $(\{\xi_{1}^{*}, \dots, \xi_{n}^{*}\})$ from previous 2D step and 3D step. $\mathcal{L}_{2D}$ is the loss for the localization network, and $\mathcal{L}_{3D}$ is the loss for ClusterNet, which is introduced in Eq. 3. $\mathcal{L}_{smooth}$ encourages that the same object has consistent object labels across frames. The constraint can be added to both 2D views and 3D views. Therefore, it can help find new potential objects and filter out wrong annotations across time. More details are illustrated in Appendix C.
166
+
167
+ $$
168
+ f \left(L _ {\theta_ {1}} \left(I ^ {t}\right), N _ {\theta_ {2}} \left(P ^ {t}\right), t\right) = \underbrace {\mathcal {L} _ {2 D} \left(b ^ {t} , b _ {*} ^ {t}\right) + \mathcal {L} _ {s m o o t h} \left(b _ {*} ^ {t}\right)} _ {\text {2 D s t e p}} + \underbrace {\mathcal {L} _ {3 D} \left(\xi^ {t} , \xi_ {*} ^ {t}\right) + \mathcal {L} _ {s m o o t h} \left(\xi_ {*} ^ {t}\right)} _ {\text {3 D s t e p}} \tag {9}
169
+ $$
170
+
171
+ # 3.3.2 Static object discovery
172
+
173
+ Static object discovery is crucial in the joint iterative training, since the motion-based initialization handles movable objects well but misses static ones. During the joint iterative training, two technical designs are important for static object discovery: one concerns visual appearance, the other temporal information.
174
+
175
+ Discover static objects by visual appearance. The 2D localization network learns object representations from visual appearance, which suggests good generalization to static objects, since movable and static objects usually have similar visual appearances. However, a critical design choice is the selection of positive and negative samples during training. Initially, the 2D pseudo annotations generated by motion cues mainly come from moving objects. It is crucial to avoid static objects becoming negative samples, so that the model generalizes better to static objects. Table 4 compares different sampling strategies for the training.
176
+
177
+ Discover static objects by temporal information. Temporal information is also beneficial for static object discovery. Due to occlusions in the 2D view, it is more practical to discover potential new objects by tracking in the 3D view. In practice, we use Kalman filtering for 3D tracking and rediscover new objects in static tracklets (center offset between the start and end frames of less than 3 meters). Since we only focus on static objects, the mean center of the tracklet is a good prediction for lost objects.
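+
+ A minimal sketch of the static-tracklet test is shown below, assuming tracklets are given as per-frame center observations (the Kalman tracker itself is omitted): a tracklet whose start-to-end center displacement stays below 3 meters is treated as static, and its mean center is reused as the prediction for frames where the object was lost.
+
+ ```python
+ import numpy as np
+
+ def static_tracklet_centers(tracklets, max_offset=3.0):
+     """tracklets: dict track_id -> list of (frame_idx, center_xyz) observations.
+     A tracklet whose start-to-end center offset is below `max_offset` metres is
+     treated as static; its mean center is reused for frames where detection was lost."""
+     static = {}
+     for tid, obs in tracklets.items():
+         obs = sorted(obs, key=lambda o: o[0])
+         centers = np.array([c for _, c in obs])
+         if np.linalg.norm(centers[-1] - centers[0]) < max_offset:
+             static[tid] = centers.mean(axis=0)   # predicted center for missed frames
+     return static
+ ```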
178
+
179
+ # 4 Experiments
180
+
181
+ # 4.1 Dataset and implementation details
182
+
183
+ We evaluate our method on the challenging Waymo Open Dataset (WOD) [36], which provides 3D point clouds and 2D RGB image data that is suitable for our task setting. It is of great significance to verify our unsupervised method under such a real and large-scale complex scene.
184
+
185
+ Dataset. Waymo Open Dataset [36] is a recently released large-scale dataset for autonomous driving. We utilize point clouds from the 'top' LiDAR (64 channels, a maximum distance of 75 meters),
186
+
187
+ and video frames (at a resolution of $1280 \times 1920$ pixels) from the 'front' camera. The training and validation sets contain around 158k and 40k frames, respectively. All training frames and validation frames are manually annotated with 2D bounding boxes and 3D bounding boxes, which are capable of evaluating the performance of 2D object detection and 3D instance segmentation. Furthermore, WOD also provides the scene flow annotation in the latest version [17], which can illustrate the upper potential of our method.
188
+
189
+ Evaluation protocol. Evaluation is conducted on the annotated validation set of WOD. We evaluate the performance of 2D object detection and 3D instance segmentation. The dataset contains four annotated object categories ('vehicles', 'pedestrians', 'cyclists', and 'sign'). We test the class-agnostic average precision (AP) score for vehicles, pedestrians, and cyclists. For 2D object detection, the AP score is reported at the box intersection-over-union (IoU) threshold of 0.5. For better analysis, results are also evaluated on small (area $< 32^2$ pixels), medium ( $32^2$ pixels $<$ area $< 96^2$ pixels) and large objects (area $>96^2$ pixels). We also calculated the average recall (AR) to measure the ability of object discovery. For 3D instance segmentation, no previous metrics have been proposed on WOD. Referring to the 2D AP metrics, we propose to compute 3D AP score based on the IoU between predicted instance point sets and the ground truth. The ground truth for the instance segmentation can be obtained by labelling the point within 3D bounding boxes. The 3D AP score is reported at the point sets IoU threshold of 0.7 and 0.9, denoted as $\mathrm{AP}^{70}$ and $\mathrm{AP}^{90}$ , respectively. We also calculated the recall and foreground IoU for better analysis, which can measure the ability of object discovery from more perspectives. Note here the $\mathrm{AP}^{2D}$ denotes 2D object detection $\mathrm{AP}^{50}$ score. $\mathrm{AP}^{3D}$ denotes the 3D instance segmentation $\mathrm{AP}^{70}$ score.
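+
+ As a small illustration of the proposed metric, the sketch below computes the point-set IoU between a predicted instance segment and a ground-truth segment, each represented as a set of point indices; the matching procedure and AP accumulation are omitted.
+
+ ```python
+ def point_set_iou(pred_indices, gt_indices):
+     """IoU between two instance segments represented as sets of point indices."""
+     pred, gt = set(pred_indices), set(gt_indices)
+     union = len(pred | gt)
+     return len(pred & gt) / union if union else 0.0
+
+ # A predicted segment counts as a true positive at AP^70 if its point-set IoU with an
+ # unmatched ground-truth segment is at least 0.7 (and at least 0.9 for AP^90).
+ ```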
190
+
191
+ Implementation details. Our implementation is based on the open-sourced code of mmdetection3d [9] for 3D detection and detectron2 [44] for 2D detection. For the 2D localization network, we use Faster R-CNN [31] with FPN [20] by default, with ResNet-50 [16] as the backbone. The network is trained on 8 GPUs (A100) with 2 images per GPU for 12k iterations. The learning rate is initialized to 0.02 and is divided by 10 at the 6k and 9k iterations. The weight decay and momentum parameters are set to $10^{-4}$ and 0.9, respectively. For the 3D ClusterNet, the raw input point clouds first have ground points removed by [4], and only the points visible from the front camera are kept. The cluster range is $[0m,74.88m]$ for the X-axis, $[-37.44m,37.44m]$ for the Y-axis and $[-2m,4m]$ for the Z-axis. The voxel size is $(0.32m,0.32m,6m)$ . The feature extractor for voxelized points is [11], and the embedding dim $C$ is 128. In the focal loss for class prediction, we set $\gamma = 2.0$ , $\alpha = 0.8$ . The balance weight $\lambda$ in Eq. 3 is set to 5. During inference, we set the minimum number of points to 5 for clustering. The ClusterNet is trained on 8 GPUs (A100) with 2 point clouds per GPU for 12 epochs. The learning rate is initialized to $10^{-5}$ and follows a cyclic cosine schedule (target learning rate $10^{-3}$ ). For the hyper-parameters of HDBSCAN [5], we set the min cluster size to 15 and keep the default values for the others. For more implementation details, please refer to Appendix A.
192
+
193
+ # 4.2 Main results
194
+
195
+ 2D object detection. Table 1 compares the results between manual annotation and annotation from our method (termed ClusterNet) for class-agnostic 2D object detection. All the experiments in Table 1 use the same model (Faster R-CNN [31]) for fair evaluation. Among the 2D bounding box annotations, distant boxes are LiDAR-invisible, so the manually annotated supervised baseline is trained with LiDAR-visible 2D box annotations for a fair comparison. Even though our method still has a gap to the supervised baseline using full manual annotation (43.2 vs. 54.4 AP with 1137k bounding boxes), we outperform the supervised model when its annotation is limited, which is a frequent situation in real-world applications. Compared with $10\%$ manual annotation (127k bounding boxes), our method achieves 43.2 AP without any manual annotations and beats the 33.8 AP baseline by a large margin. Since our method relies on motion cues estimated in an unsupervised manner, we also show that the performance increases to 51.8 AP with ground truth scene flow, which is very close to the performance of full manual annotation but without any bounding box annotation. Since previous unsupervised methods only focus on still 2D images and cannot extract objects from the background accurately, it is no surprise that they achieve poor results in such challenging scenes. LOST [35] can only extract one primary object from the background, which does not apply to driving scenes. FreeSOLO [42] often generates a single large mask for a row of cars and cannot distinguish specific instances. Felzenszwalb segmentation [12] generates potential proposals by graph-based segmentation but lacks the ability to identify foreground objects. Our method has great performance advantages over these still-image methods.
196
+
197
+ Table 1: Class-agnostic 2D object detection
198
+
199
+ <table><tr><td>annotation setting</td><td>#images</td><td>#bboxes</td><td>network weights initialized from</td><td>AP50</td><td>AP50S</td><td>AP50M</td><td>AP50L</td><td>AR50</td><td>AR50S</td><td>AR50M</td><td>AR50L</td></tr><tr><td colspan="12">supervised</td></tr><tr><td>fully manual annotation</td><td>158k</td><td>1137k</td><td>ImageNet</td><td>54.4</td><td>20.5</td><td>72.4</td><td>90.9</td><td>62.8</td><td>35.5</td><td>80.8</td><td>94.0</td></tr><tr><td>fully manual annotation</td><td>158k</td><td>1137k</td><td>scratch</td><td>52.5</td><td>23.5</td><td>67.6</td><td>86.3</td><td>62.3</td><td>34.9</td><td>80.0</td><td>93.3</td></tr><tr><td>10% manual annotation</td><td>15k</td><td>127k</td><td>ImageNet</td><td>33.8</td><td>5.5</td><td>45.3</td><td>74.9</td><td>36.1</td><td>9.7</td><td>48.6</td><td>76.7</td></tr><tr><td>10% manual annotation</td><td>15k</td><td>127k</td><td>scratch</td><td>31.6</td><td>5.7</td><td>42.5</td><td>72.2</td><td>35.9</td><td>8.6</td><td>47.7</td><td>75.3</td></tr><tr><td colspan="12">unsupervised</td></tr><tr><td>Felzenszwalb [12]</td><td>158k</td><td>0</td><td>ImageNet</td><td>0.4</td><td>0.0</td><td>0.5</td><td>1.1</td><td>11.1</td><td>0.6</td><td>14.5</td><td>30.7</td></tr><tr><td>LOST [35]</td><td>158k</td><td>0</td><td>ImageNet</td><td>1.9</td><td>0.0</td><td>1.0</td><td>7.6</td><td>5.0</td><td>0.0</td><td>0.4</td><td>27.9</td></tr><tr><td>FreeSolo [42]</td><td>158k</td><td>0</td><td>ImageNet</td><td>1.0</td><td>0.2</td><td>1.0</td><td>1.9</td><td>2.2</td><td>0.0</td><td>0.1</td><td>12.7</td></tr><tr><td>ClusterNet (w/ gt sceneflow)</td><td>158k</td><td>0</td><td>scratch</td><td>51.8</td><td>21.3</td><td>70.2</td><td>89.5</td><td>60.8</td><td>30.2</td><td>81.2</td><td>94.8</td></tr><tr><td>ClusterNet</td><td>158k</td><td>0</td><td>scratch</td><td>43.2</td><td>18.4</td><td>56.5</td><td>81.8</td><td>55.4</td><td>26.7</td><td>71.9</td><td>93.1</td></tr></table>
200
+
201
+ 3D instance segmentation. Table 2 illustrates the effectiveness of our ClusterNet on 3D instance segmentation. Our ClusterNet achieved $26.2\mathrm{AP}^{70}$ and $19.2\mathrm{AP}^{90}$ without any annotation, superior to $10\%$ supervised baseline $23.6\mathrm{AP}^{70}$ and $15.5\mathrm{AP}^{90}$ with $397\mathrm{k}$ 3D bounding boxes annotation. We proved that our method with accurate motion cues (ground truth scene flow) could achieve $42.0\mathrm{AP}^{70}$ and $33.2\mathrm{AP}^{90}$ , even comparable to that supervised baseline with fully manual annotation (4268k 3D bounding boxes). No previous method can achieve such high performance under an unsupervised setting. Figure 3 illustrates the object prediction of our approach on the WOD validation set.
202
+
203
+ Table 2: Class-agnostic 3D instance segmentation
204
+
205
+ <table><tr><td>annotation setting</td><td>#point clouds</td><td>#3D bboxes</td><td>AP70</td><td>AP90</td><td>Recall70</td><td>Recall90</td><td>IoU</td></tr><tr><td colspan="8">supervised</td></tr><tr><td>fully manual annotation</td><td>158k</td><td>4268k</td><td>45.7</td><td>37.3</td><td>75.1</td><td>65.1</td><td>92.2</td></tr><tr><td>10% manual annotation</td><td>15k</td><td>397k</td><td>23.6</td><td>15.5</td><td>61.8</td><td>48.7</td><td>81.6</td></tr><tr><td colspan="8">unsupervised</td></tr><tr><td>ClusterNet (w/ gt sceneflow)</td><td>158k</td><td>0</td><td>42.0</td><td>33.2</td><td>61.7</td><td>52.3</td><td>88.1</td></tr><tr><td>ClusterNet</td><td>158k</td><td>0</td><td>26.2</td><td>19.2</td><td>40.0</td><td>32.8</td><td>64.9</td></tr></table>
206
+
207
+ ![](images/efd7ed142008afaf13304a5d808191f8824c2bfd6d26d30463ef07c293d4f307.jpg)
208
+
209
+ ![](images/a6b5c3bed9935d74e0fc8d0a9f4a51b50dedb13b018eed77ed44327220ad7a91.jpg)
210
+
211
+ ![](images/dd9221272c39023b42284969faee2d9f7f082a65df6c2398f331187f7362dd8e.jpg)
212
+
213
+ ![](images/04b52d8b5bf1a4f036386c39f85501d6fd7beeedae565b686078b533e2ac8a9c.jpg)
214
+
215
+ ![](images/de02d69881eea4659179f9e1a65278e07d1fa4a542b6d42a0a368207869ed9eb.jpg)
216
+ Figure 3: Visualization for 2D object detection and 3D instance segmentation on the WOD validation set. Our approach could achieve such incredible results without any annotations.
217
+
218
+ 2D instance segmentation. We can also conduct instance segmentation by projecting the LiDAR points of the 3D instance segments to the 2D image plane. The key difference is that instance segmentation masks, rather than object bounding boxes, are derived from the 3D instance segments as pseudo annotations. We utilize alpha shapes [1] to generate the mask of the object points (LiDAR points projected to the 2D image). The localization network can be conveniently changed to Mask R-CNN [15] for instance segmentation without manual annotations. Some predictions on the validation set are illustrated in Figure 4 and Appendix D. Because the Waymo Open Dataset does not provide instance mask annotations, the performance of instance segmentation cannot be evaluated quantitatively.
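+
+ A minimal sketch of turning projected points into a pseudo mask is shown below. For simplicity it rasterizes the convex hull of the projected points with OpenCV instead of the alpha shape [1] used in the paper, which is an explicit simplification of this sketch.
+
+ ```python
+ import numpy as np
+ import cv2
+
+ def mask_from_projected_points(uv, img_h, img_w):
+     """uv: (M, 2) pixel coords of one instance's projected LiDAR points.
+     Rasterizes the convex hull of the points as a binary mask (a simplification of
+     the alpha-shape construction used in the paper)."""
+     mask = np.zeros((img_h, img_w), dtype=np.uint8)
+     if len(uv) >= 3:
+         hull = cv2.convexHull(uv.astype(np.int32))   # (H, 1, 2) polygon vertices
+         cv2.fillPoly(mask, [hull], 1)
+     return mask
+ ```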
219
+
220
+ ![](images/d600f44050dfc2b42ae80d8e2a82592bf6222736d743a325fe705fb4fa9d4030.jpg)
221
+
222
+ ![](images/68bc873b704d65de5df2d1818f87e60675fec336d171992bba2024ebddfa151c.jpg)
223
+
224
+ ![](images/0e8d5664c8a4175af31c6849fa01ad05751d870d252d8dc771864b5d5077eae2.jpg)
225
+
226
+ ![](images/a5e4fc78b61f7f9adbf41e9a14a0a0eb4383a47980c1ef898231c13d3d41ff66.jpg)
227
+
228
+ ![](images/93ddf7ccad3996c9f429412779ddcb4374f897a508aa5f06a01894a08ba44055.jpg)
229
+ Figure 4: Instance segmentation by our approach when using Mask R-CNN [15] as our localization network, without manual annotations. Our method can generate high-quality instance masks.
230
+
231
+ ![](images/f690f291dbc6140226b907bf7f7b9e641c5731342cba4b24054504a256006796.jpg)
232
+
233
+ ![](images/79a53582bedf0aaa537dcc69222dd1959f0e498425829bb5783fa8fbcbd0921c.jpg)
234
+
235
+ ![](images/730126b9a2f0d104e67eb60b161329741c3245615b84516262dd47e242ee4dc9.jpg)
236
+
237
+ # 4.3 Ablation studies
238
+
239
+ Analysis of multi cues for training. Table 3 analyzes the contributions of the different cues in our approach on the WOD validation set. The final results are obtained after three iterations of joint optimization. $\mathrm{AP^{2D}}$ denotes the $\mathrm{AP^{50}}$ score for 2D object detection, and $\mathrm{AP^{3D}}$ denotes the $\mathrm{AP^{70}}$ score for 3D instance segmentation. For its first training round, the ClusterNet is trained with the pseudo annotations obtained by HDBSCAN clustering [5]. A simple baseline is therefore to use HDBSCAN directly for 3D instance segmentation and project its segments to 2D for localization network training. In comparison, we demonstrate the effectiveness of using ClusterNet: the performance increases by $4.9\mathrm{AP^{2D}}$ (from 25.1 to 30.0) and $2.2\mathrm{AP^{3D}}$ (from 4.6 to 6.8). Furthermore, the performance improves significantly by jointly optimizing the ClusterNet and the localization network with 2D-3D cues and temporal cues.
240
+
241
+ Table 3: Analysis of multi cues for training.
242
+
243
+ <table><tr><td>Method</td><td>point cloud</td><td>motion cues</td><td>2D-3D cues</td><td>temporal cues</td><td>AP2D↑</td><td>AP3D↑</td></tr><tr><td rowspan="2">HDBSCAN [5]</td><td>✓</td><td></td><td></td><td></td><td>14.9</td><td>2.1</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>25.1</td><td>4.6</td></tr><tr><td rowspan="3">ClusterNet</td><td>✓</td><td>✓</td><td></td><td></td><td>30.0</td><td>6.8</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>40.4</td><td>25.7</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>43.2</td><td>26.2</td></tr></table>
244
+
245
+ Table 4: Ablation on sampling strategy.
246
+
247
+ <table><tr><td>sampling strategy</td><td>AP50</td><td>AP50S</td><td>AP50M</td><td>AP50L</td></tr><tr><td>(a) IoU+&gt;0.7, IoU-&lt;0.3</td><td>27.8</td><td>3.2</td><td>37.2</td><td>70.0</td></tr><tr><td>(b) IoU+&gt;0.6, IoU-&lt;0.4</td><td>28.2</td><td>3.3</td><td>36.9</td><td>71.7</td></tr><tr><td>(c) IoU+&gt;0.6, 0.1&lt;IoU-&lt;0.4</td><td>30.0</td><td>4.3</td><td>39.4</td><td>73.2</td></tr></table>
248
+
249
+ Table 5: Ablation on early stopping.
250
+
251
+ <table><tr><td>iterations</td><td>AP50</td><td>AR50</td></tr><tr><td>3000</td><td>30.0</td><td>43.3</td></tr><tr><td>6000</td><td>27.9</td><td>40.4</td></tr><tr><td>12000</td><td>25.3</td><td>35.0</td></tr></table>
252
+
253
+ Training strategy for localization network. Training the localization network with the pseudo annotations generated by ClusterNet is quite different from training with manual annotations. Specifically, the pseudo annotations are noisy and incomplete before the joint iterative training. We therefore found two points crucial for the initial training: (1) the sampling strategy in the RPN (Region Proposal Network) and (2) early stopping. The following ablation experiments are conducted for the first-time training of the localization network. Table 4 compares three different strategies: (a) sample anchors with box IoU $>0.7$ as positive examples and box IoU $<0.3$ as negative examples, as in the standard Faster R-CNN; (b) sample anchors with box IoU $>0.6$ as positive examples and box IoU $<0.4$ as negative examples; (c) sample anchors with box IoU $>0.6$ as positive examples, and box $0.1 < \mathrm{IoU} < 0.4$ as negative examples. In this way, the strategy considerably reduces the chance
254
+
255
+ of sampling static objects as negative examples. Table 5 compares different training iterations and shows that early stopping improves the generalization performance. Since the pseudo annotations have noise, training for a long time may overfit the noise in the training set, leading to the degradation of generalization performance.
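+
+ A minimal sketch of the anchor labeling rule of strategy (c) is given below, assuming the per-anchor maximum IoU with the pseudo boxes has already been computed: positives need IoU above 0.6, negatives are drawn only from the 0.1-0.4 band, and anchors with lower overlap are ignored so that unlabeled static objects are less likely to be pushed towards the background class.
+
+ ```python
+ import numpy as np
+
+ def assign_anchor_labels(anchor_gt_iou, pos_thr=0.6, neg_lo=0.1, neg_hi=0.4):
+     """anchor_gt_iou: (A,) max IoU of each anchor with any pseudo box.
+     Strategy (c): positives above pos_thr, negatives only inside (neg_lo, neg_hi);
+     anchors with IoU <= neg_lo are ignored, since they may cover unlabeled static
+     objects that should not be treated as background.
+     Returns labels: 1 = positive, 0 = negative, -1 = ignored."""
+     labels = np.full(anchor_gt_iou.shape, -1, dtype=np.int64)
+     labels[anchor_gt_iou > pos_thr] = 1
+     labels[(anchor_gt_iou > neg_lo) & (anchor_gt_iou < neg_hi)] = 0
+     return labels
+ ```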
256
+
257
+ Joint iterative optimization. Table 6 presents the effectiveness of our joint iterative optimization for the 2D localization network and 3D ClusterNet. Iteration 0 represents the initial performance of ClusterNet trained by motion cues (estimated scene flow). Next, each iteration means a 2D step and a 3D step. Even though the model did not perform well at the beginning, with joint iterative optimization, both $\mathrm{AP^{2D}}$ and $\mathrm{AP^{3D}}$ improved rapidly. Applying more than one iteration improves the results, indicating that the 2D localization network and 3D ClusterNet can benefit from each other. We set the iteration number as 3 by default.
258
+
259
+ Minimum points for ClusterNet. The minimum number of points determines the smallest allowed size of a 3D instance segment. Table 7 analyzes the model performance under different values of this parameter during inference. We set the minimum number of points to 5 by default.
260
+
261
+ Table 6: Joint iterative optimization.
262
+
263
+ <table><tr><td>iterations</td><td>AP2D</td><td>AP3D</td></tr><tr><td>0</td><td>/</td><td>6.8</td></tr><tr><td>1</td><td>30.0</td><td>20.2</td></tr><tr><td>2</td><td>37.4</td><td>25.4</td></tr><tr><td>3</td><td>43.2</td><td>26.2</td></tr><tr><td>4</td><td>42.8</td><td>25.9</td></tr></table>
264
+
265
+ Table 7: Minimum points for ClusterNet.
266
+
267
+ <table><tr><td>min points</td><td>AP3D</td></tr><tr><td>2</td><td>25.3</td></tr><tr><td>5</td><td>26.2</td></tr><tr><td>10</td><td>26.0</td></tr><tr><td>20</td><td>25.5</td></tr></table>
268
+
269
+ # 5 Discussion and conclusions
270
+
271
+ Discussion. Unsupervised object discovery was long believed to be infeasible due to the ambiguity of objects and the complexity of scenarios. However, 4D data, with its sequences of image frames and point clouds, provides enough cues to discover movable objects even without manual annotation. The complementary information in 3D LiDAR points and 2D images, together with temporal constraints, is the critical factor behind the success of unsupervised object discovery. With 4D sensor data readily available onboard, our approach shows extraordinary potential for scenarios with limited or no annotation. The main limitation is that our method is suitable for movable objects (vehicles, pedestrians); things that never move, such as beds or chairs, cannot be discovered.
272
+
273
+ Conclusions. In this work, we propose a new task named 4D Unsupervised Object Discovery. The task requires discovering objects in both images and point clouds without manual annotations. We present the first practical approach for this task by proposing a ClusterNet for 3D instance segmentation together with a joint iterative optimization. Extensive experiments on the large-scale Waymo Open Dataset demonstrate the effectiveness of our approach. To the best of our knowledge, ours is the first work to achieve such high performance for unsupervised 2D object detection and 3D instance segmentation, bridging the gap between unsupervised and supervised methods. Our work sheds light on a new perspective for the future study of unsupervised object discovery.
274
+
275
+ Societal Impacts. The development of unsupervised object discovery requires large datasets, introducing privacy issues. The technology of unsupervised detection dramatically reduces the labelling cost; it may affect the people engaged in the labelling industry in the future. The elimination of human intervention may also cause some data annotators to lose their current jobs. Our approach only tests in driving scenes for effectiveness, which may lead to some wrong detection in other scenes.
276
+
277
+ # Acknowledgments and Disclosure of Funding
278
+
279
+ The authors thank the anonymous reviewers for their constructive comments. This work was supported in part by the Major Project for New Generation of AI (No.2018AAA0100400), the National Natural Science Foundation of China (No. 61836014, No. U21B2042, No. 62072457, No. 62006231), and the InnoHK program. The authors would like to thank Xizhou Zhu and Jifeng Dai for conceiving an early idea of this work. Our sincere appreciation also goes to Jiawei He, Lue Fan and Yuxi Wang, who polished our paper and offered many valuable suggestions.
280
+
281
+ # References
282
+
283
+ [1] Nataraj Akkiraju, Herbert Edelsbrunner, Michael Facello, Ping Fu, EP Mucke, and Carlos Varela. Alpha shapes: definition and software. In Proceedings of International Computational Geometry Software Workshop, 1995. 8
284
+ [2] Pablo Arbeláez, Jordi Pont-Tuset, Jonathan T Barron, Ferran Marques, and Jitendra Malik. Multiscale combinatorial grouping. In CVPR, 2014. 1, 2, 15
285
+ [3] Alex Bewley, Zongyuan Ge, Lionel Ott, Fabio Ramos, and Ben Upcroft. Simple online and realtime tracking. In ICIP, 2016. 16
286
+ [4] Igor Bogoslavskyi and Cyrill Stachniss. Efficient online segmentation for sparse 3d laser scans. PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science, 2017. 2, 7, 14
287
+ [5] Ricardo JGB Campello, Davoud Moulavi, and Jörg Sander. Density-based clustering based on hierarchical density estimates. In Pacific-Asia conference on knowledge discovery and data mining, 2013. 2, 4, 7, 9, 14, 15
288
+ [6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 1
289
+ [7] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021. 1, 3, 14
290
+ [8] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020. 3
291
+ [9] MMDetection3D Contributors. MMDetection3D: OpenMMLab next-generation platform for general 3D object detection. https://github.com/open-mmlab/mmdetection3d, 2020. 7
292
+ [10] Bertrand Douillard, James Underwood, Noah Kuntz, Vsevolod Vlaskine, Alastair Quadros, Peter Morton, and Alon Frenkel. On the segmentation of 3d lidar point clouds. In ICRA, 2011. 2
293
+ [11] Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, and Zhaoxiang Zhang. Embracing single stride 3d object detector with sparse transformer. arXiv preprint arXiv:2112.06375, 2021. 4, 7, 14
294
+ [12] Pedro F Felzenszwalb and Daniel P Huttenlocher. Efficient graph-based image segmentation. IJCV, 2004. 7, 8, 14, 15
295
+ [13] Ross Girshick. Fast r-cnn. In ICCV, 2015. 2
296
+ [14] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020. 2, 3
297
+ [15] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017. 1, 2, 8, 9
298
+ [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 7
299
+ [17] Philipp Jund, Chris Sweeney, Nichola Abdo, Zhifeng Chen, and Jonathon Shlens. Scalable scene flow from point clouds in the real world. RAL, 2021. 7
300
+ [18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 14
301
+ [19] Xueqian Li, Jhony Kaesemodel Pontes, and Simon Lucey. Neural scene flow prior. NeurIPS, 2021. 4, 14
302
+ [20] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017. 7
303
+
304
+ [21] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017. 2, 4
305
+ [22] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In ECCV, 2016. 2
306
+ [23] Xiankai Lu, Wenguan Wang, Chao Ma, Jianbing Shen, Ling Shao, and Fatih Porikli. See more, know more: Unsupervised video object segmentation with co-attention siamese networks. In CVPR, 2019. 15
307
+ [24] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016. 15
308
+ [25] AJ Piergiovanni, Vincent Casser, Michael S Ryoo, and Anelia Angelova. 4d-net for learned multi-modal alignment. In ICCV, 2021. 2
309
+ [26] Jordi Pont-Tuset, Federico Perazzi, Sergi Caelles, Pablo Arbeláez, Alex Sorkine-Hornung, and Luc Van Gool. The 2017 davis challenge on video object segmentation. arXiv preprint arXiv:1704.00675, 2017. 15
310
+ [27] Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas. Deep hough voting for 3d object detection in point clouds. In ICCV, 2019. 1, 2, 4
311
+ [28] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 2
312
+ [29] Hazem Rashed, Mohamed Ramzy, Victor Vaquero, Ahmad El Sallab, Ganesh Sistu, and Senthil Yogamani. Fusemodnet: Real-time camera and lidar based moving object detection for robust low-light autonomous driving. In ICCV Workshops, 2019. 15
313
+ [30] Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In CVPR, 2017. 1, 2
314
+ [31] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NeurIPS, 2015. 1, 2, 7, 14
315
+ [32] Radu Bogdan Rusu and Steve Cousins. 3d is here: Point cloud library (pcl). In ICRA, 2011. 2
316
+ [33] Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In CVPR, 2020. 1, 2
317
+ [34] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, 2019. 1
318
+ [35] Oriane Simeoni, Gilles Puy, Huy V Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, Patrick Pérez, Renaud Marlet, and Jean Ponce. Localizing objects with self-supervised transformers and no labels. arXiv preprint arXiv:2109.14279, 2021. 1, 3, 7, 8, 14, 15
319
+ [36] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In CVPR, 2020. 2, 6
320
+ [37] Hao Tian, Yuntao Chen, Jifeng Dai, Zhaoxiang Zhang, and Xizhou Zhu. Unsupervised object detection with lidar clues. In CVPR, 2021. 2
321
+ [38] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In ICCV, 2019. 1, 2
322
+ [39] Jasper RR Uijlings, Koen EA Van De Sande, Theo Gevers, and Arnold WM Smeulders. Selective search for object recognition. IJCV, 2013. 1, 2
323
+ [40] Carles Ventura, Miriam Bellver, Andreu Girbau, Amaia Salvador, Ferran Marques, and Xavier Giro-i Nieto. Rvos: End-to-end recurrent network for video object segmentation. In CVPR, 2019. 15
324
+
325
+ [41] Qitai Wang, Yuntao Chen, Ziqi Pang, Naiyan Wang, and Zhaoxiang Zhang. Immortal tracker: Tracklet never dies. arXiv preprint arXiv:2111.13672, 2021. 16
326
+ [42] Xinlong Wang, Zhiding Yu, Shalini De Mello, Jan Kautz, Anima Anandkumar, Chunhua Shen, and Jose M Alvarez. Freesolo: Learning to segment objects without annotations. arXiv preprint arXiv:2202.12181, 2022. 1, 3, 7, 8
327
+ [43] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In CVPR, 2021. 1, 3
328
+ [44] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019. 7, 14
329
+ [45] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 2018. 2
330
+ [46] Zetong Yang, Yanan Sun, Shu Liu, and Jiaya Jia. 3dssd: Point-based 3d single stage object detector. In CVPR, 2020. 2
331
+ [47] Tianwei Yin, Xingyi Zhou, and Philipp Krahenbuhl. Center-based 3d object detection and tracking. In CVPR, 2021. 1, 2
332
+ [48] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. arXiv preprint arXiv:1904.07850, 2019. 2
333
+ [49] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In CVPR, 2018. 1, 2
334
+ [50] C Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014. 1, 2
4dunsupervisedobjectdiscovery/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e2a9a2a8eb2db1f33f376d5ab7ff0f54415db348376a8d76c74e1a4f23d7cf5
3
+ size 468012
4dunsupervisedobjectdiscovery/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c467a30496e6a8e8528b9703f22bf4d30de70f1c9bc0737de51b2b3265f38b08
3
+ size 475556
abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dab0849b7d99918258c9c2cbd46f77f414e055edf179703ffebe0328afa0c14a
3
+ size 69426
abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a52ec83d676ee091e6ea7a54e9cdeb38a2fe8e48c9ba09d50479b26f9846e066
3
+ size 84341
abenchmarkforcompositionalvisualreasoning/67b372e6-264d-4e8c-8738-917bc01af149_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e46907e202227062aac4ef442c27f02cc13c7fa6e878b621be518c1bbe4f859
3
+ size 867903
abenchmarkforcompositionalvisualreasoning/full.md ADDED
@@ -0,0 +1,219 @@
 
 
 
 
1
+ # A Benchmark for Compositional Visual Reasoning
2
+
3
+ Aimen Zerroug $^{1,2,3}$ , Mohit Vaishnav $^{1,2,3}$ , Julien Colin $^{2}$ , Sebastian Musslick $^{2}$ , Thomas Serre $^{1,2}$
4
+
5
+ <sup>1</sup> Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France
6
+
7
+ $^{2}$ Carney Institute for Brain Science, Dept. of Cognitive Linguistic & Psychological Sciences Brown University, Providence, RI 02912
8
+
9
+ <sup>3</sup> Centre de Recherche Cerveau et Cognition, CNRS, Université de Toulouse, France
10
+
11
+ {aimen_zerroug, mohit_vaishnav, julien_colin, sebastian_musslick, thomas_serre}@brown.edu
12
+
13
+ # Abstract
14
+
15
+ A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years with state-of-the-art systems now reaching human accuracy on some of these benchmarks. Yet, there remains a major gap between humans and AI systems in terms of the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been at least partially attributed to their ability to harness compositionality – allowing them to efficiently take advantage of previously gained knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress towards the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and generating image datasets corresponding to these rules at scale. Our proposed benchmark includes measures of sample efficiency, generalization, compositionality, and transfer across task rules. We systematically evaluate modern neural architectures and find that convolutional architectures surpass transformer-based architectures across all performance measures in most data regimes. However, all computational models are much less data efficient than humans, even after learning informative visual representations using self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning.
16
+
17
+ # 1 Introduction
18
+
19
+ Visual reasoning is a complex ability requiring a high level of abstraction over high-dimensional sensory input. It highlights humans' capacity to manipulate concepts and relations as symbols extracted from visual input. The efficiency with which humans learn new visual concepts and relations, as exemplified by fluid intelligence and non-verbal reasoning tests, is equally fascinating. In the pursuit of human-level artificial intelligence, a growing body of research is attempting to emulate this skill in machines, and deep neural networks are at the forefront of the field.
20
+
21
+ Deep learning approaches are prime candidates as models of human intelligence due to their success at learning from data while relying on simple design principles. However, these architectures are imperfect models of human intelligence, as shown by their lack of sample efficiency, the inability to generalize to unfamiliar situations [13] and the lack of robustness [14]. Their ability to perform well
22
+
23
+ in large-data regimes has skewed research towards scaling up datasets and architectures with little consideration for the sample efficiency of these systems.
24
+
25
+ Only a few benchmarks address these aspects of human intelligence. One such benchmark, ARC [9] provides diverse visual reasoning problems. However, the extreme scarcity of training samples, only 3 samples per task, renders the benchmark difficult for all methods, especially neural networks. Other benchmarks have led to the development of new neural network-based models that address particular gaps between human and machine intelligence [3, 43, 12]. Some focus on evaluating the task's perceptual requirements [12], which include detecting features, recognizing objects, perceptual grouping and spatial reasoning. Others evaluate logical reasoning requirements [3, 43], such as symbolic reasoning, making analogies and causal reasoning. However, they lack either the variety of abstract relations present in the scene or the semantic and structural variety of scenes over which they instantiate these abstract relations.
26
+
27
+ Creating novel visual reasoning tasks can be challenging. In this benchmark, we standardize a process for creating tasks compositionally based on an elementary set of relations and abstractions. This process allows us to exploit a wide range of visual relations as well as abstract rules, thus, making it possible to evaluate both the perceptual and logical requirements of visual reasoning. The compositional nature of the tasks provides an opportunity to investigate the learning strategies wielded by existing methods. Among these methods, we focus on state-of-the-art abstract visual reasoning models and standard vision models. These models have been
28
+
29
+ ![](images/28b043765faef234a051dfc48e5fea62dcae02c3418fc42a65ed0286bf534d3e.jpg)
30
+ Figure 1: Visual reasoning benchmarks: State-of-the-art models achieve super-human accuracy [40, 38] on several visual-reasoning benchmarks such as RAVEN [43], PGM [3] and SVRT [12]. However, some benchmarks, such as ARC [9], continue to pose a challenge for current models. The fundamental difference between these benchmarks lies in the number of unique task rules composed out of their priors and the number of samples available for training architectures on individual rules. This difference sheds light on two poorly researched aspects of human intelligence: learning in low-sample regimes and harnessing compositionality. The proposed CVR challenge aims to fill the gap between current benchmarks to encourage the development of more sample-efficient and more versatile neural architectures for visual reasoning.
31
+
32
+ shown to reach high performance on several visual reasoning tasks in previous works [40, 38], but they always require large amounts of data. This paper's subject of interest is quantifying these models' sample efficiency.
33
+
34
+ # Contributions Our contributions can be summarized as follows:
35
+
36
+ - A novel visual reasoning benchmark called Compositional Visual Relations (CVR) with 103 unique tasks over distinct scene structures.
37
+ - A novel method for generating visual reasoning problems with a compositionality prior.
38
+ - A systematic analysis of the sample efficiency of baseline visual reasoning architectures.
39
+ - An empirical study of models' capacity at using compositionality to solve complex problems.
40
+
41
+ Our large-scale experiments capture a multitude of setups, including multi-task and individual task training, pre-training with self-supervision on dataset images to contrast learning of visual representations vs. abstract visual reasoning rules, training over a range of data regimes, and testing transfer learning between dataset tasks. We present an in-depth analysis of task difficulty, which provides insights into the strengths and weaknesses of current models. Overall, we find that the best baselines trained in the most favorable conditions fall short of human sample efficiency for learning those same tasks. While models appear to be capable of transferring knowledge across tasks, we show that they do not leverage compositionality to efficiently learn task components. We hope to inspire research on more efficient visual reasoning models by releasing our dataset. The code for generating the full dataset and training models is available here.
42
+
43
+ ![](images/2d3aa562b720caa137052b2eb6c18c348449e668eefedecc4ea4be21da751f88.jpg)
44
+ Figure 2: Scene Generation: A scene in our image dataset is composed of objects. (a) An object is a closed contour with several attributes. (b) A relation is a constraint for the generation process over scene attributes. (c) The elementary relations control unique scene attributes. They are used for building task rules in a compositional manner. Each task uses a Reference rule and an Odd-One-Out rule to generate images. (d) Odd-One-Out problems are randomly generated using a program. Three images are generated following the Reference rule, and a fourth image (highlighted in red) is generated following the Odd-One-Out rule.
45
+
46
+ ![](images/87a5c1d3cf152a82bb63c7ff039a260308bc248393d586e4a7dcd9d0f44eedb7.jpg)
47
+
48
+ ![](images/5143c9a8768c89ef748bdff445b8ea3d4e05cd519e7ec3d619d5e5b132b78a9e.jpg)
49
+
50
+ # 2 Compositional Visual Relations Dataset
51
+
52
+ CVR is a synthetic visual reasoning dataset that builds on prior AI benchmarks [12, 9] and is inspired by a cognitive science literature [37] on visual reasoning. In the following, we will describe the generation process of the dataset.
53
+
54
+ Odd-One-Out The odd one out task has been employed in prior work to test visual reasoning [27]. A sample problem consists of 4 images generated such that one of them is an outlier according to a rule. The goal of the task is to select the outlier. The learner is expected to test several hypotheses in order to detect the outlier. This process requires them to infer the hidden scene structure and relationships between the objects.
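+
+ As an illustration of the selection problem (not the paper's evaluation code), a learner that embeds each of the four images could pick the outlier as the image whose embedding is least similar to the other three:
+
+ ```python
+ import numpy as np
+
+ # Toy odd-one-out selection over 4 image embeddings (illustrative only).
+ def pick_odd_one_out(embeddings):
+     """embeddings: array of shape (4, d); returns the index of the outlier."""
+     e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
+     sims = e @ e.T                     # pairwise cosine similarities
+     np.fill_diagonal(sims, 0.0)
+     mean_sim = sims.sum(axis=1) / 3.0  # mean similarity to the other three images
+     return int(np.argmin(mean_sim))    # least similar image is predicted as the outlier
+ ```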
55
+
56
+ Scene generation Each image contains one scene composed of multiple objects as shown in Figure 2. An object is defined as a closed contour with a set of object attributes: shape, position, size, color, rotation and flip. Other attributes describe the scene or low-level relations between objects. Count corresponds to the number of objects, groups of objects or relations. Insideness indicates
57
+
58
+ that an object contains another object within its contour. Contact indicates that two object contours are touching. These 9 attributes are the basis for the 9 elementary relations. For example, a "size" relation is a constraint on the sizes of certain objects in the scene. Relations are expressed with natural language or logical, relational and arithmetic operators over scene attributes. Relations and objects are represented as nodes in the scene graph. Relations define groups of objects and can have attributes of their own. Thus, it is possible to create abstract relations over these relations' attributes. A scene can be generated from a template that we call a structure. The concepts of structure, scene graph and relations are used for formalizing the process behind designing a task. In practice, the
59
+
60
+ Algorithm 1: Problem Generation Program: Generates problem samples of the shape-size task in Figure 2
61
+
62
+ $n\gets 4$ //Number of objects
63
+ for $i\gets 1$ to 4 do
64
+ $s \gets$ sample_size(); $s' \gets s \times \mathrm{rand}([2/3, 1/4])$
65
+ if $i = 4$ then // Odd-One-Out: $[s_i]_{1\dots n} \gets [s, s', s, s']$
66
+ else $[s_i]_{1\dots n} \gets [s, s, s', s']$
67
+ end
68
+ $[o, o'] \gets$ sample_shapes(n=2); $[o_i]_{1\dots n} \gets [o, o, o', o']$; $[p_i]_{1\dots n} \gets$ sample_position($[s_i]_{1\dots n}$)
69
+ $[c_i]_{1\dots n} \gets$ sample_color(n=1)
70
+ end
71
+ $[\text{scene}]_{1\dots 4} \gets [[o_i, p_i, s_i, c_i]_{1\dots n}]_{1\dots 4}$; $[\text{image}]_{1\dots 4} \gets [\text{render}(\text{scene})]_{1\dots 4}$
72
+
73
+ ![](images/a7f7c371a8c44ca7c3340b98e3809e17a527915106048a672ccc87bd3e5ee7e6.jpg)
74
+ Figure 4: Examples of task rules that are composed of a pair of relations. More examples of tasks and algorithms are provided in the SI.
75
+
76
+ generation process is a program implemented by the task designer to generate problem samples of one task randomly. The Pseudo-code for an example program is detailed in Alg. 1.
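+
+ A rough Python sketch of such a program for the shape-size task of Figure 2, following the logic of Alg. 1 (the samplers and attribute ranges here are simplified placeholders, not the released generator):
+
+ ```python
+ import random
+
+ # Simplified sketch of the shape-size problem generator (cf. Alg. 1).
+ def generate_shape_size_problem(n_images=4):
+     o, o_prime = random.sample(["circle", "square", "triangle", "star"], 2)
+     problem = []
+     for i in range(n_images):
+         s = random.uniform(0.2, 0.5)                 # base size (placeholder range)
+         s_prime = s * random.uniform(1 / 4, 2 / 3)   # second, smaller size
+         if i == n_images - 1:                        # Odd-One-Out image
+             sizes = [s, s_prime, s, s_prime]         # breaks the shape-size pairing
+         else:                                        # reference images
+             sizes = [s, s, s_prime, s_prime]
+         shapes = [o, o, o_prime, o_prime]
+         objects = [
+             {"shape": shapes[j], "size": sizes[j],
+              "position": (random.random(), random.random()), "color": "gray"}
+             for j in range(4)
+         ]
+         problem.append(objects)                      # render(objects) would rasterize the scene
+     return problem
+ ```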
77
+
78
+ Rules and problem creation The generation process described above can be used to instantiate different tasks; binary classification, few-shot binary classification, or a raven's progressive matrix. In this paper, we choose to apply this process to create odd-one-out problems. First, the task designer selects target relations and incorporates them into a new scene structure. In Figure 2, the target relations are size and shape similarity; they are added to a scene with 4 objects. Then, a reference rule and an odd rule are chosen such that they combine target relations in different ways. The reference and odd rules in the example vary only in the size or shape attributes. A valid odd-one-out rule contradicts the reference rule such that any strategy used to solve the task must involve exclusively reasoning over the target relations. Given a scene structure, a reference and an odd-one-out rule, the generation process has a set of free parameters that control the generation process for new samples. The problem's difficulty level can be varied by randomizing or fixing these parameters. In the shape-size task, the range of color values and the variation of objects across the 4 images are examples of free parameters. More random parameters result in a higher difficulty. We create generalization test sets by changing the sets of fixed or random parameters. For more details on the generalization test sets we refer the reader to the SI.
79
+
80
+ Dataset details CVR incorporates 103 unique reference rules, including 9 rules instantiating the 9 elementary visual relations and 94 additional rules built on compositions of the relations.
81
+
82
+ These compositions span all pairs of elementary rules and include up to 4 relations. While some rules are composed of the same elementary relations, they remain unique in their scene structure or associations with other relations. 20 are compositions of single elementary relations, 65 are compositions of a pair of relations and 9 are compositions of more than 2 elementary relations. Figure 3 details the number of unique rules for each pair of elementary relations. The procedural generation of problem samples helps us create an arbitrary number of samples. We create 10,000
83
+
84
+ ![](images/6602463e901f11ad2fa6116dbfba4c4125eeeef1d9657d9c55e05cbcc59ce0eb.jpg)
85
+ Figure 3: Dataset rules: Each square represents the number of rules that are a composition of the associated elementary relations and the bar plot shows the number of rules that involve each elementary relation.
86
+
87
+ <table><tr><td colspan="2">N train samples</td><td>20</td><td>50</td><td>100</td><td>200</td><td>500</td><td>1000</td><td>SES</td><td>AUC</td><td>10000</td></tr><tr><td rowspan="6">rand-init</td><td>ResNet-50[15]</td><td>28.0</td><td>1</td><td>31.1</td><td>1</td><td>32.5</td><td>3</td><td>34.0</td><td>6</td><td>38.7 12</td></tr><tr><td>ViT-small[11]</td><td>28.6</td><td>1</td><td>30.1</td><td>4</td><td>30.9</td><td>4</td><td>31.9</td><td>4</td><td>33.8 4</td></tr><tr><td>SCL[40]</td><td>26.9</td><td>0</td><td>30.0</td><td>1</td><td>30.3</td><td>2</td><td>30.0</td><td>2</td><td>31.4 2</td></tr><tr><td>WReN[3]</td><td>30.0</td><td>0</td><td>32.0</td><td>2</td><td>32.9</td><td>2</td><td>34.1</td><td>3</td><td>36.3 6</td></tr><tr><td>SCL-ResNet 18</td><td>31.4</td><td>1</td><td>37.3</td><td>9</td><td>37.8</td><td>9</td><td>39.6</td><td>15</td><td>42.7 21</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>38.4 39.5</td></tr><tr><td rowspan="5">joint</td><td>ResNet-50</td><td>27.5</td><td>0</td><td>28.2</td><td>0</td><td>29.9</td><td>2</td><td>33.9</td><td>6</td><td>52.1 29</td></tr><tr><td>ViT-small</td><td>27.3</td><td>1</td><td>27.8</td><td>2</td><td>28.0</td><td>1</td><td>28.1</td><td>1</td><td>29.9 2</td></tr><tr><td>SCL</td><td>25.8</td><td>0</td><td>25.8</td><td>0</td><td>28.3</td><td>1</td><td>34.1</td><td>3</td><td>43.2 22</td></tr><tr><td>WReN</td><td>26.8</td><td>0</td><td>27.6</td><td>0</td><td>28.5</td><td>0</td><td>30.1</td><td>0</td><td>36.4 9</td></tr><tr><td>SCL-ResNet 18</td><td>26.4</td><td>0</td><td>28.4</td><td>0</td><td>31.6</td><td>4</td><td>40.7</td><td>13</td><td>51.4 32</td></tr><tr><td rowspan="4">SSL</td><td>ResNet-50</td><td>40.5</td><td>13</td><td>47.3</td><td>18</td><td>52.9</td><td>29</td><td>56.8</td><td>34</td><td>61.9 42</td></tr><tr><td>ViT-small</td><td>46.7</td><td>16</td><td>51.6</td><td>24</td><td>54.8</td><td>29</td><td>57.5</td><td>38</td><td>62.0 44</td></tr><tr><td>ResNet-50</td><td>44.3</td><td>16</td><td>50.3</td><td>24</td><td>55.3</td><td>30</td><td>59.5</td><td>42</td><td>68.9 49</td></tr><tr><td>ViT-small</td><td>39.3</td><td>15</td><td>39.5</td><td>13</td><td>40.8</td><td>14</td><td>44.1</td><td>16</td><td>53.3 30</td></tr><tr><td rowspan="3">IN</td><td>ResNet-50</td><td>32.0</td><td>2</td><td>35.1</td><td>5</td><td>39.0</td><td>9</td><td>43.8</td><td>13</td><td>57.7 48</td></tr><tr><td>ViT-small</td><td>27.9</td><td>2</td><td>28.2</td><td>1</td><td>28.6</td><td>2</td><td>30.0</td><td>2</td><td>35.6 5</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>47.2 24</td></tr><tr><td rowspan="4">CLIP</td><td>ResNet-50</td><td>28.7</td><td>0</td><td>32.0</td><td>2</td><td>40.8</td><td>11</td><td>46.9</td><td>18</td><td>59.7 40</td></tr><tr><td>ViT-base</td><td>31.1</td><td>1</td><td>37.4</td><td>7</td><td>43.9</td><td>14</td><td>56.0</td><td>30</td><td>68.9 48</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>78.8 62</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td>48.9 52.7</td></tr></table>
88
+
89
+ Table 1: Performance comparison: For each model, we report the accuracy and number of tasks with accuracy above $80\%$ . SES is the Sample Efficiency Score; it favors models with high performance in low data regimes and consistent accuracy across regimes. SES and AUC are computed over the 20-1000 data regimes. OOD generalization results are provided in the SI.
90
+
91
+ training problem samples, 500 validation samples and 1,000 test samples for each task. We also create a generalization test set of 1000 samples.
92
+
93
+ We define the compositionality prior as the task's design constraint which ensures that solving the task requires reasoning over its elementary components. In the size-shape task, shown in Figure 2, the outlier can be differentiated from the other images by reasoning purely about size and shape. In the context of CVR, compositionality extends beyond combinations of object attributes, such as novel color and shape combinations in an object, to higher levels of abstraction: groups of objects and scene configurations. For example, the position-rotation composition rule in Fig. 4 requires reasoning over the rotation properties of two sets of objects in each scene, and the position properties of objects within each set.
94
+
95
+ CVR constitutes a significant extension to the Synthetic Visual Reasoning Test (SVRT) [12] in that it provides a systematic reorganization based on an explicit compositionality prior. Among the 23 SVRT tasks, many share relations, such as tasks #1 and #21, which both involve shape similarity judgments. Most of these tasks can still be found amongst CVR's rules. At the same time, CVR is more general because it substitutes binary classification tasks with odd-one-out tasks which allows one to explore more general versions of these tasks, with a broader set of task parameters. For example, in SVRT's task #7, images of 3 groups of 2 same shapes are discriminated from images of 2 groups of 3 same shapes. This task is a special case in CVR of a more general shape-count rule with $n$ groups of $m$ objects where the values are randomly sampled across problem samples. Unlike procedurally generated RPM benchmarks [3, 43], CVR does not rely on a small set of fixed templates for the creation of task rules. The shapes are randomly created and positions are not fixed on a grid (for most rules), which renders the visual tasks difficult for models that rely on rote memorization [20]. Other attributes are sampled uniformly from a continuous interval.
96
+
97
+ # 3 Experimental setting
98
+
99
+ Baseline models In our experiments, we select two vision models commonly used in computer vision. We evaluate ResNet [15], a convolutional architecture used as a baseline in several bench-
100
+
101
+ marks [3, 43, 38] and also used as a backbone in standard VQA models. We also evaluate ViT, a transformer-based architecture [11]. ViT is used for various vision tasks, such as image classification, object recognition, captioning and recently in visual reasoning on SVRT [28]. To compare the architectures fairly, we choose ResNet-50 and ViT-small, which have an equal number of parameters. Additionally, we evaluate two baseline visual reasoning models designed for solving RPMs: SCL [40] which boasts state-of-the-art accuracy on RAVEN and PGM, and WReN [3] which is based on a relational reasoning model [33]. Finally, we present SCL-ResNet-18 which consists of an SCL with ResNet as a visual backbone thus combining ResNet's perception skills with SCL's reasoning skills.
102
+
103
+ Joint vs. individual rule learning Models are either trained in a single task (individual) or multi-task (joint) setting. In the context of the multi-task training on CVR, one image is considered an odd-one-out with respect to a reference rule. However, because of the randomness of scene generation, a different image might be considered an odd-one-out with respect to a different, irrelevant rule. To illustrate this problem, let's take the elementary size rule as an example. In this rule, each image contains one object. Due to the random sampling of object attributes, it is possible for one image to be considered an outlier with respect to the color rule (The attributes in the 4 images are i-small/green, ii-large/green, iii-small/green, iv-small/blue). Without specifying that the task to solve involves a size relation, the model could incorrectly choose the fourth image because it is an outlier with respect to the color rule. Thus, models trained on several tasks could easily confound rules. To avoid this problem, models are provided with a rule embedding vector. Given the rule token, models can learn several strategies and use the correct one for each problem sample. We also compare the multi-task and single task settings, as they allow for testing the model's capacity and efficiency at learning several strategies and routines to solve different rules. All hyperparameter choices and training details are provided in the SI.
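+
+ The text above only states that a rule embedding vector is provided; one plausible way to implement such conditioning (illustrative, not necessarily the authors' exact architecture) is to look up a learned embedding for the rule id and concatenate it with each candidate image's features before scoring:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Illustrative rule-conditioned scorer for joint (multi-task) training.
+ class RuleConditionedScorer(nn.Module):
+     def __init__(self, num_rules, feat_dim, rule_dim=64):
+         super().__init__()
+         self.rule_emb = nn.Embedding(num_rules, rule_dim)
+         self.head = nn.Linear(feat_dim + rule_dim, 1)
+
+     def forward(self, image_feats, rule_id):
+         # image_feats: (batch, 4, feat_dim), one feature vector per candidate image
+         rule = self.rule_emb(rule_id)                  # (batch, rule_dim)
+         rule = rule.unsqueeze(1).expand(-1, 4, -1)     # broadcast to the 4 candidates
+         scores = self.head(torch.cat([image_feats, rule], dim=-1))
+         return scores.squeeze(-1)                      # (batch, 4) odd-one-out logits
+ ```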
104
+
105
+ Self-Supervised pre-training Unlike humans who spend a lifetime analyzing visual information, randomly initialized neural networks have no visual experience. To provide a fairer comparison between humans and neural networks, we pre-train baseline models on a subset of the training data. Self-Supervised Learning (SSL) has seen a rise in popularity due to its usefulness in pre-training models on unlabelled data. By using SSL, we aim to dissociate feature learning from abstract visual reasoning in standard vision models. We pre-trained ViT-small and ResNet-50 on 1 million images from the dataset following MoCo-v3 [8]. In addition to SSL pre-trained models, we also finetune models pre-trained on object recognition and image annotation. Since image annotation requires visual reasoning capabilities, these pretrained models provide a fairer comparison with humans, who regularly perform the task. We select ResNet-50 and ViT-small pre-trained on ImageNet [10]. We also pick CLIP [31] visual encoders ResNet-50 and ViT-Base, which are trained jointly with a language model on image annotation.
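+
+ A minimal sketch of the finetuning setup, assuming a torchvision ResNet-50 backbone (MoCo-v3 pre-training and CLIP weight loading are not reproduced here): each of the four candidate images is encoded by the shared backbone, and a linear head produces one score per image, trained with cross-entropy on the outlier index.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchvision.models import resnet50
+
+ # Sketch: finetune a pre-trained backbone for 4-way odd-one-out classification.
+ class OddOneOutModel(nn.Module):
+     def __init__(self):
+         super().__init__()
+         self.backbone = resnet50(weights="IMAGENET1K_V1")  # swap in SSL/CLIP weights as needed
+         self.backbone.fc = nn.Identity()                   # keep the 2048-d pooled features
+         self.scorer = nn.Linear(2048, 1)
+
+     def forward(self, images):                             # images: (batch, 4, 3, H, W)
+         b = images.shape[0]
+         feats = self.backbone(images.flatten(0, 1))        # (batch * 4, 2048)
+         return self.scorer(feats).view(b, 4)               # one logit per candidate image
+
+ model = OddOneOutModel()
+ criterion = nn.CrossEntropyLoss()                          # target: index of the odd image
+ ```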
106
+
107
+ <table><tr><td>N training samples</td><td colspan="2">20</td><td colspan="2">1000</td></tr><tr><td>ResNet-50</td><td>28.0</td><td>0</td><td>57.9</td><td>14</td></tr><tr><td>ViT-small</td><td>29.3</td><td>1</td><td>32.7</td><td>3</td></tr><tr><td>SCL</td><td>26.4</td><td>0</td><td>44.9</td><td>11</td></tr><tr><td>WReN</td><td>27.5</td><td>0</td><td>42.4</td><td>10</td></tr><tr><td>SCL-ResNet 18</td><td>26.8</td><td>0</td><td>64.1</td><td>18</td></tr><tr><td>ResNet-50 SSL</td><td>45.7</td><td>7</td><td>78.3</td><td>25</td></tr><tr><td>ViT-small SSL</td><td>38.7</td><td>6</td><td>60.3</td><td>17</td></tr><tr><td>Humans</td><td>78.7</td><td>26</td><td>-</td><td>-</td></tr></table>
108
+
109
+ Table 2: Human Baseline: performance of models on joint training experiments is compared to the human baseline. The analysis is restricted to the 45 tasks used for evaluating humans. ResNet-50 approaches human-level performance only after SSL pre-training and finetuning on all task rules with 1000 samples per rule, which is 50 times more samples than humans needed.
110
+
111
+ Human Baseline As found in [12], having 21 participants solve the 9 tasks based on elementary relations and 36 randomly sampled complex tasks is sufficient to yield a reliable human baseline. We used 20 problem samples for each task which corresponds to the lowest number of samples used for training baseline models. Each participant completed 6 different tasks. More details about the behavioral experiment are provided in the SI.
112
+
113
+ # 4 Results
114
+
115
+ Sample Efficiency Baseline models are trained in six data regimes ranging from 20 to 1000 training samples. All sample efficiency results are summarized in Table 1. Randomly guessing yields $25\%$
116
+
117
+ ![](images/77224d7f2c2b0631e32dc428edf6e86e9191c6eacfc12bef4c8fb575072a2e4e.jpg)
118
+
119
+ ![](images/99bb5701e2d782f50cdd2dc986645220277833947ae2a28a51a7de07353cfa5f.jpg)
120
+ (a) Curriculum
121
+ (b) Reverse Curriculum
122
+ Figure 5: Compositionality: We evaluate models' capacity to reuse knowledge. (a) Models trained with a curriculum are compared to models trained from scratch. Models trained with a curriculum are overall more sample efficient. (b) Models trained on compositions are evaluated zero-shot on the respective elementary rules. Models fail overall to generalize from compositions to elementary rules.
123
+
124
+ accuracy. We observe that most randomly initialized models are slightly above chance accuracy after training in low data regimes. They achieve an increase in performance only when provided with more than 500 training samples. SCL-ResNet-18 performs the best in high data regimes, followed by ResNet-50. SCL and ViT have the lowest performance in high data regimes. This result is unsurprising since transformer architectures generally learn better in high data regimes (millions of data points). This is consistent with prior work [38] which finds that ViTs do not learn several SVRT tasks even when trained on 100k samples. Although SCL's performance is near chance, it achieves the best performance when it is augmented with a ResNet-18, which is a strong vision backbone. This jump in performance is indicative of the two architectures' complementary roles in visual reasoning. Results in Table 1 and Fig. 6 show a clear positive effect of pretraining on all models. SSL pre-trained models achieve the highest performance compared to object recognition and image annotation pretrained models. We observe that ViT benefits from a larger architecture coupled with pre-training on a large image annotation dataset. This highlights transformers' reliance on large model sizes and datasets.
125
+
126
+ In order to quantify sample efficiency systematically for all models, we compute the area under the curve (AUC), which corresponds to the unweighted average performance across data regimes. We also introduce the Sample Efficiency Score (SES) as an empirical evaluation metric for our experimental setting. It consists of a weighted average of accuracy where the weights decrease with the number of samples: $SES = \frac{\sum_{n}a_{n}w_{n}}{\sum_{n}w_{n}}$ where $w_{n} = \frac{1}{1 + \log(n)}$ and $n$ is the number of samples. This score favors models that learn with the fewest samples while considering consistency in the overall performance. We observe that SCL-ResNet-18 scores the highest in the individual and joint training settings. In the SSL finetuning condition, ViT and ResNet-50 have a similar SES when trained on individual tasks, but ResNet-50 performs better in the joint training setting. These results hint at the efficiency of convolutional architectures in visual reasoning tasks. Collapsing across all data regimes and training paradigms, the best performance on CVR is given by ResNet-50, in the joint training setting with 10k data points per rule. It achieves 93.7% accuracy. This high performance in the 10,000 data regime demonstrates the models' capacity to learn the majority of rules in the dataset and suggests that failure in lower data regimes is explained by their sample inefficiency.
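+
+ For concreteness, a short computation of the SES over the six training regimes used in the paper (the accuracy values below are illustrative placeholders, not real results; the log base is not specified in the text, so the natural log is assumed here):
+
+ ```python
+ import math
+
+ # Sample Efficiency Score: weighted mean accuracy with weights w_n = 1 / (1 + log(n)).
+ def ses(accuracy_by_regime):
+     weights = {n: 1.0 / (1.0 + math.log(n)) for n in accuracy_by_regime}
+     num = sum(a * weights[n] for n, a in accuracy_by_regime.items())
+     return num / sum(weights.values())
+
+ # Placeholder accuracies for the 20..1000-sample regimes.
+ print(ses({20: 0.30, 50: 0.33, 100: 0.36, 200: 0.40, 500: 0.48, 1000: 0.55}))
+ ```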
127
+
128
+ Finally, we compare model performance to the human baseline. We observe in Table 2 that humans far exceed the accuracy of all models with only 20 samples. This result aligns with previous work on the SVRT dataset [12] where participants solved similar tasks with less than 20 samples. These results highlight the gap between humans and machines in sample efficiency and emphasize the need to develop more sample-efficient architectures.
129
+
130
+ Compositionality Transferring knowledge and skills across tasks is a crucial feature of intelligent systems. With our experimental setup, this can be characterized in several ways. A compositional model should reuse acquired skills to learn efficiently. Thus, when it is trained on all rules jointly, it should be more sample efficient because the rules in the dataset share elementary components. In Table 1 and Figure 6, we observe that ResNet-50 achieves higher performance on joint training compared to individual rule training, while ViT has the opposite effect. The trend is consistent across data regimes and other settings. These results highlight convolutional architectures' learning efficiency compared to transformer architectures.
131
+
132
+ We investigate compositionality further by asking whether learning elementary rules provides a good initialization for learning their compositions. For example, a model that can judge object positions and sizes should not require many training samples to associate sizes with positions. We pick a set of complex rules with at least two different elementary relations, train models to reach the maximum accuracy possible on component relations, then finetune the models on the compositions. We call this experimental condition the curriculum condition since the condition is akin to incrementally teaching routines to a model. We compare model performance in the curriculum condition to performance when training from scratch. The results highlighted in Figure 5a show positive effects for most models but more significantly for convolution-based architectures. These results indicate that the baselines use skills acquired during pre-training to learn the composition rules, and that this pretraining helps to varying degrees. We refer the readers to the SI for additional analyses and quantitative results.
133
+
134
+ Finally, we evaluate transfer learning from composition rules to elementary rules. We name this condition the reverse curriculum condition. The working hypothesis is that models that rely on
135
+
136
+ compositionality will be able to solve elementary relations without finetuning if they learn the composition. We compare performance on a composition rule to zero-shot accuracy on the respective elementary rules in Figure 5b. We observe that all models perform worse on the elementary relations. These results might indicate that although the baselines could transfer skills from elementary rules to their compositions, they do not necessarily use an efficient strategy that decomposes tasks into their elementary components. Additional analyses are presented in the SI.
137
+
138
+ ![](images/471bc71bb95f9427b66ee8ae9b354dd2b248cdfc6b605fa27bc16370deea0c0b.jpg)
139
+
140
+ ![](images/3378c48eb832928f90cbf4d080296d21f9222656fb9e87906dff9867cbc0af57.jpg)
141
+ Figure 6: Sample efficiency: The percentage of tasks for which performance is above $80\%$ plotted against the number of training samples per task rule, with random initialization (top) and with SSL pre-training (bottom).
142
+
143
+ Task difficulty We analyze the performance of all models in the standard setting: joint training on all rules from random initialization. Figure 7 shows the average performance of each model on each elementary rule and composition rule. Since the dataset contains several compositions of each pair of elementary rules, the accuracy shown in each square is averaged over composition rules that share the same pair of elementary rules. Certain rules are solvable by all models, such as the position, size, color, and count elementary rules. Additionally, other rules pose a challenge for all models, these rules are compositions of count, flip, rotation or shape. Models that rely on a convolutional backbone were able to solve most spatial rules; position, size, inside and contact. However, they fail on rules that incorporate shapes and their transformations; shape, rotation, flip. Composition rules built with the Count relation proved to be a challenge for most models. We believe that models are capable of solving several tasks, such as the counting elementary rule, by relying on shortcuts; this could be a summation of all pixels in the image, for example. These shortcuts prevent models from learning
144
+
145
+ ![](images/b7888fb7f6645cdff93936a54a0bb93bba2f577b8c2d049b6b80699402a7a5cc.jpg)
146
+ (a) Individual
147
+ Figure 7: Task analysis: The performance at 1000 samples is shown for each model. Performance on elementary rules is shown on the top row of each matrix. The elementary relations of each composition are indicated by the annotations. Performance is averaged over different compositions of the same pair. We observe that most models fail on "color" based tasks.
148
+
149
+ abstract rules and hinder generalization. In line with the previous results, SCL-ResNet-18 seems to solve more elementary rules and compositions than the other 3 models.
150
+
151
+ # 5 Related Work
152
+
153
+ Visual reasoning benchmarks Visual reasoning has been a subject of AI research for decades, and several benchmarks address many relevant tasks. This includes language-guided reasoning benchmarks such as CLEVR [18], which has been extended in its visual composition by recent work [23], physics-based reasoning and reasoning over time dynamics [42, 2]. Abstract visual reasoning benchmarks are more relevant to our work. Raven's Progressive Matrices (RPMs) which were introduced in 1938 [6] are one example used to test human fluid intelligence. Procedural generation techniques for RPMs [39] enabled the creation of the PGM dataset and RAVEN [3, 43]. They also inspired Bongard-Logo [29], a concept learning and reasoning benchmark based on Bongard's 100 visual reasoning problems [4]. Another reasoning dataset, SVRT [12], focuses on evaluating similarity-based judgment and spatial reasoning. Besides these synthetic datasets, real-world datasets were developed with similar task structures to Bongard-Logo and RPM [35, 17]. In this work, we take inspiration from SVRT and develop a more extensive set of rules with careful considerations for the choice of rules and using a novel rule generation method. Finally, Abstract Reasoning Corpus [9] is a general intelligence test introduced with a new methodology for evaluating intelligence and generalization. The numerous problems presented in this benchmark are constructed with a variety of human priors. The unique nature of the task, requiring solvers to generate the answer, and the limited amount of training data render the benchmark difficult for neural network-based methods. We follow a similar approach in our dataset by creating several unique problem templates. However, we restrict the number of samples to a reasonable range to evaluate the sample efficiency of candidate models.
154
+
155
+ Compositionality Compositionality is a highly studied topic in AI research. Although there is agreement over the high-level definition of compositionality (the ability to represent new abstractions based on their constituents and their contexts), there is little consensus on methods for characterizing compositional generalization in neural networks. Several tests for compositionality have been proposed in language [26], mathematics [34], logical reasoning and navigation [5, 21, 32, 41] and visual reasoning [18, 36, 1]. Recent work [16] attempts to identify components of compositionality and proposes a test suite that unifies them. These tests evaluate the model's capacity to manipulate
156
+
157
+ concepts during inference. Systematicity tests the novel combination of features, akin to CLEVR's CoGenT [18] and C-VQA [1], where novel combinations of shapes and colors are introduced in the test set, and localism tests the model's ability to account for context similarly to samples from Winoground [36]. Our work explores compositional generalization from a new perspective: CVR evaluates the model's compositionality while learning novel concepts. A compositional model reuses previously learned concepts to accelerate learning and decomposes complex tasks into elementary components. These aspects of compositionality are tested under settings that employ curricula. Furthermore, we evaluate compositionality over the reasoning operations necessary to solve a given problem. Finally, generating a synthetic dataset allows for evaluating reasoning at high levels of abstraction: groups of objects and scene configurations, as exemplified by tasks in Figure 4.
158
+
159
+ Neuroscience/Psychology Several theories attempt to propose an understanding of the mechanisms behind visual reasoning. Gestalt psychology provides principles hypothesized to be used by the visual system as an initial set of abstractions. Another theory describes visual reasoning as a sequence of elemental operations called visual routines [37] orchestrated by higher-level cognitive processes. These elemental operations are hypothesized to form the basis for spatial reasoning, same-different judgment, perceptual grouping, contour tracing and many other visual skills [7]. Evaluating these skills in standard vision models is a recurring subject in machine learning and neuroscience research [19, 24, 30]. To provide a comprehensive evaluation of visual reasoning, it is important to include task sets that require various visual skills within humans' capabilities.
160
+
161
+ # 6 Discussion and Future Work
162
+
163
+ In this work, we have proposed a novel benchmark that focuses on two important aspects of human intelligence – compositionality and sample efficiency. Inspired by visual cognition theories [37], the proposed challenge addresses the limitations of existing benchmarks in the following ways: (1) it extends previous benchmarks by providing a variety of visual reasoning tasks that vary in relations and scene structures, (2) all tasks in the benchmark were designed with a compositionality prior, which allows for an in-depth analysis of each model's strengths and weaknesses, and (3) it provides a quantitative measure of sample efficiency.
164
+
165
+ Using this benchmark, we performed an analysis of the sample efficiency of existing machine learning models and their ability to harness compositionality. Our results suggest that even the best pre-trained neural architectures require orders of magnitude more training samples than humans to reach the same level of accuracy, which is consistent with prior work on sample efficiency [22]. Our evaluation further revealed that current neural architectures fail to learn several tasks even when provided an abundance of samples and extensive prior visual experience. These results highlight the importance of developing more data-efficient and vision-oriented neural architectures for achieving human-level artificial intelligence. In addition, we evaluated models' generalization ability across rules – from elementary rules to compositions and vice versa. We find that convolutional architectures benefit from learning all visual reasoning tasks jointly and transferring skills learned during training on elementary rules. However, they also failed to generalize systematically from compositions to their individual rules. These results indicate that convolutional architectures are capable of transferring skills across tasks but do not learn by decomposing a visual task into its elementary components.
166
+
167
+ While our work addresses important questions on sample efficiency and compositionality, we note a few possible limitations of our proposed benchmark. CVR is quite extensive in terms of the visual relations it contains, but it can always be further improved in its use of elementary visual relations. For example, the shapes could be parametrically generated based on specific geometric features. Hopefully, CVR can be expanded in future work to test more routines by including additional relations borrowed from other, more narrow challenges, including occlusion [19], line tracing [25], and physics-based relations. The rules in the current benchmark are limited to 2 or 3 levels of abstraction to evaluate relations systematically. Similarly, our evaluation methods for sample efficiency and compositionality could be further improved and adapted to different settings. For example, the sample efficiency score is an empirical metric used only for evaluating our benchmark. It requires training all models on all data regimes for the score to be consistent. Although our work is not unique in addressing sample efficiency, its aim is to promote more sample efficient and general models. We hope that the release of our benchmark will encourage researchers in the field to test their own model's sample efficiency and compositionality.
168
+
169
+ # 7 Acknowledgments
170
+
171
+ This work was supported by ONR (N00014-19-1-2029), NSF (IIS-1912280 and EAR-1925481), DARPA (D19AC00015), NIH/NINDS (R21 NS 112743), and the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A-0004). Additional support provided by the Carney Institute for Brain Science and the Center for Computation and Visualization (CCV) and CALMIP supercomputing center (Grant 2016-p20019, 2016-p22041) at Federal University of Toulouse Midi-Pyrénées. We acknowledge the Cloud TPU hardware resources that Google made available via the TensorFlow Research Cloud (TFRC) program as well as computing hardware supported by NIH Office of the Director grant S10OD025181.
172
+
173
+ # References
174
+
175
+ [1] Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, and Devi Parikh. C-vqa: A compositional split of the visual question answering (vqa) v1.0 dataset. arXiv preprint arXiv:1704.08243, 2017.
176
+ [2] Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, and Ross Girshick. Phyre: A new benchmark for physical reasoning. Advances in Neural Information Processing Systems, 32, 2019.
177
+ [3] David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In International conference on machine learning, pages 511-520. PMLR, 2018.
178
+ [4] Mikhail Moiseevich Bongard. The recognition problem. Technical report, FOREIGN TECHNOLOGY DIV WRIGHT-PATTERSON AFB OHIO, 1968.
179
+ [5] Samuel R Bowman, Christopher D Manning, and Christopher Potts. Tree-structured composition in neural networks without tree-structured architectures. arXiv preprint arXiv:1506.04834, 2015.
180
+ [6] Henry R Burke. Raven's progressive matrices (1938): More on norms, reliability, and validity. Journal of Clinical Psychology, 41(2):231-235, 1985.
181
+ [7] Patrick Cavanagh. Visual cognition. Vision research, 51(13):1538-1551, 2011.
182
+ [8] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9640-9649, 2021.
183
+ [9] François Chollet. On the measure of intelligence. November 2019.
184
+ [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
185
+ [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
186
+ [12] François Fleuret, Ting Li, Charles Dubout, Emma K Wampler, Steven Yantis, and Donald Geman. Comparing machines and humans on a visual categorization test. Proc. Natl. Acad. Sci. U. S. A., 108(43):17621-17625, October 2011.
187
+ [13] Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673, 2020.
188
+ [14] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
189
+ [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
190
+ [16] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research, 67:757-795, 2020.
191
+ [17] Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-hoi: Benchmarking few-shot visual reasoning for human-object interactions. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
192
+
193
+ [18] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017.
194
+ [19] Junkyung Kim, Drew Linsley, Kalpit Thakkar, and Thomas Serre. Disentangling neural mechanisms for perceptual grouping. arXiv preprint arXiv:1906.01558, 2019.
195
+ [20] Junkyung Kim, Matthew Ricci, and Thomas Serre. Not-so-clevr: learning same-different relations strains feedforward neural networks. Interface focus, 8(4):20180011, 2018.
196
+ [21] Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International conference on machine learning, pages 2873-2882. PMLR, 2018.
197
+ [22] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
198
+ [23] Zechen Li and Anders Søgaard. Qlevr: A diagnostic dataset for quantificational language and elementary visual reasoning. arXiv preprint arXiv:2205.03075, 2022.
199
+ [24] Drew Linsley, Junkyung Kim, Alekh Ashok, and Thomas Serre. Recurrent neural circuits for contour detection. arXiv preprint arXiv:2010.15314, 2020.
200
+ [25] Drew Linsley, Junkyung Kim, Vijay Veerabadran, Charles Windolf, and Thomas Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. Advances in neural information processing systems, 31, 2018.
201
+ [26] Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. Assessing the ability of lstms to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535, 2016.
202
+ [27] Jacek Mandziuk and Adam Žychowski. Deepiq: A human-inspired ai system for solving iq test problems. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2019.
203
+ [28] Nicola Messina, Giuseppe Amato, Fabio Carrara, Claudio Gennaro, and Fabrizio Falchi. Recurrent vision transformer for solving visual reasoning problems. arXiv preprint arXiv:2111.14576, 2021.
204
+ [29] Weili Nie, Zhiding Yu, Lei Mao, Ankit B Patel, Yuke Zhu, and Anima Anandkumar. Bongard-LOGO: A new benchmark for human-level concept learning and reasoning. Adv. Neural Inf. Process. Syst., 33:16468–16480, 2020.
205
+ [30] Guillermo Puebla and Jeffrey S Bowers. Can deep convolutional neural networks support relational reasoning in the same-different task? September 2021.
206
+ [31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR, 18-24 Jul 2021.
207
+ [32] Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, and Brenden M Lake. A benchmark for systematic generalization in grounded language understanding. Advances in neural information processing systems, 33:19861-19872, 2020.
208
+ [33] Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. Advances in neural information processing systems, 30, 2017.
209
+ [34] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
210
+ [35] Damien Teney, Peng Wang, Jiewei Cao, Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. V-prom: A benchmark for visual reasoning using visual progressive matrices. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12071-12078, 2020.
211
+ [36] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238–5248, 2022.
212
+
213
+ [37] Shimon Ullman. Visual routines. In Readings in computer vision, pages 298-328. Elsevier, 1987.
214
+ [38] Mohit Vaishnav, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, and Thomas Serre. Understanding the computational demands underlying visual reasoning. Neural Computation, 34(5):1075-1099, 2022.
215
+ [39] Ke Wang and Zhendong Su. Automatic generation of raven's progressive matrices. In Twenty-fourth international joint conference on artificial intelligence, 2015.
216
+ [40] Yuhuai Wu, Honghua Dong, Roger Grosse, and Jimmy Ba. The scattering compositional learner: Discovering objects, attributes, relationships in analogical reasoning. arXiv preprint arXiv:2007.04212, 2020.
217
+ [41] Zhengxuan Wu, Elisa Kreiss, Desmond C Ong, and Christopher Potts. Reascan: Compositional reasoning in language grounding. arXiv preprint arXiv:2109.08994, 2021.
218
+ [42] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. Clevrer: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442, 2019.
219
+ [43] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5317-5327, 2019.
abenchmarkforcompositionalvisualreasoning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:343d9def1613c7b1c18cd6d43e0b673e4777f1b752abc3a9b948ed23916856eb
3
+ size 648959
abenchmarkforcompositionalvisualreasoning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4cafb0515039f8204e7abe06748884d1dbe2c42dd5dc6c9ee908adcdd9131c03
3
+ size 265532
abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df659ea904d40934dee078e0fc7c8587733e3df35efa77ab70a2fc2f27ada433
3
+ size 75989
abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8dd9134464a680e39a48969d97c330ca14fc00fe16bb86520efbb923a94ad6ab
3
+ size 92491
abestofbothworldsalgorithmforbanditswithdelayedfeedback/9a6c173a-992b-4704-b0db-40d588be1e9a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24997897c4133bb45b58a76655d2b13789736c040aef07f55bc59e1be5511fee
3
+ size 562721
abestofbothworldsalgorithmforbanditswithdelayedfeedback/full.md ADDED
@@ -0,0 +1,405 @@
 
 
 
 
1
+ # A Best-of-Both-Worlds Algorithm for Bandits with Delayed Feedback
2
+
3
+ Saeed Masoudian
4
+
5
+ University of Copenhagen
6
+
7
+ saeed.masoudian@di.ku.dk
8
+
9
+ Julian Zimmert
10
+
11
+ Google Research
12
+
13
+ zimmert@google.com
14
+
15
+ Yevgeny Seldin
16
+
17
+ University of Copenhagen
18
+
19
+ seldin@di.ku.dk
20
+
21
+ # Abstract
22
+
23
+ We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multiarmed bandits with delayed feedback, which in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is $\mathcal{O}(\sqrt{TK} + \sqrt{dT\log K})$ , where $T$ is the time horizon, $K$ is the number of arms, and $d$ is the fixed delay, whereas the stochastic regret guarantee is $\mathcal{O}\left(\sum_{i \neq i^*} \left( \frac{1}{\Delta_i} \log (T) + \frac{d}{\Delta_i \log K} \right) + dK^{1/3} \log K\right)$ , where $\Delta_i$ are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which is based on an oracle knowledge of the maximal delay $d_{max}$ and achieves $\mathcal{O}(\sqrt{TK} + \sqrt{D\log K} + d_{max}K^{1/3} \log K)$ regret in the adversarial regime, where $D$ is the total delay, and $\mathcal{O}\left( \sum_{i \neq i^*} \left( \frac{1}{\Delta_i} \log (T) + \frac{\sigma_{max}}{\Delta_i \log K} \right) + d_{max}K^{1/3} \log K \right)$ regret in the stochastic regime, where $\sigma_{max}$ is the maximal number of outstanding observations. Finally, we present a lower bound that matches the refined adversarial regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting.
24
+
25
+ # 1 Introduction
26
+
27
+ Delayed feedback is a common challenge in many online learning problems, including multi-armed bandits. The literature studying multi-armed bandit games with delayed feedback builds on prior work on bandit problems with no delays. The researchers have traditionally separated the study of bandit games in stochastic environments [Thompson, 1933; Robbins, 1952; Lai and Robbins, 1985; Auer et al., 2002] and in adversarial environments [Auer et al., 2002b]. However, in practice the environments are rarely purely stochastic, whereas they may not be fully adversarial either. Furthermore, the exact nature of an environment is not always known in practice. Therefore, in recent years there has been an increasing interest in algorithms that perform well in both regimes with no prior knowledge of the regime [Bubeck and Slivkins, 2012; Seldin and Slivkins, 2014; Auer and Chiang, 2016; Seldin and Lugosi, 2017; Wei and Luo, 2018]. The quest for best-of-both-worlds algorithms for no-delay setting culminated with the Tsallis-INF algorithm proposed by Zimmert and Seldin [2019], which achieves the optimal regret bounds in both stochastic and adversarial environments. The algorithm and analysis were further improved by Zimmert and Seldin [2021] and Masoudian and Seldin [2021], who, in particular, derived improved regret bounds for intermediate
28
+
29
+ regimes between stochastic and adversarial, while Ito [2021] removed an assumption on uniqueness of the best arm, which was used in the early works.
30
+
31
+ Our goal is to extend best-of-both-worlds results to multi-armed bandits with delayed feedback. So far the literature on multi-armed bandits with delayed feedback has followed the traditional separation into stochastic and adversarial. In the stochastic regime, Joulani et al. [2013] showed that if the delays are random (generated i.i.d.), then compared to the non-delayed stochastic multi-armed bandit setting, the regret only increases additively by a factor that is proportional to the expected delay. In the adversarial setting, Cesa-Bianchi et al. [2019] studied the case of uniform delays $d$. They derived a lower bound $\Omega(\max(\sqrt{KT}, \sqrt{dT \log K}))$ and an almost matching upper bound $\mathcal{O}(\sqrt{KT \log K} + \sqrt{dT \log K})$. Thune et al. [2019] and Bistritz et al. [2019] extended the results to arbitrary delays, achieving $\mathcal{O}(\sqrt{KT \log K} + \sqrt{D \log K})$ regret bounds based on oracle knowledge of the total delay $D$ and time horizon $T$. Thune et al. [2019] also proposed a skipping technique based on advance knowledge of the delays "at action time", which made it possible to exclude excessively large delays from $D$. Finally, Zimmert and Seldin [2020] introduced an FTRL algorithm with a hybrid regularizer that achieved an $\mathcal{O}(\sqrt{KT} + \sqrt{D \log K})$ regret bound, matching the lower bound in the case of uniform delays and requiring no prior knowledge of $D$ or $T$. The regularizer used by Zimmert and Seldin was a mix of the negative Tsallis entropy regularizer used in the Tsallis-INF algorithm for bandits and the negative entropy regularizer used in the Hedge algorithm for full information games, mixed with separate learning rates:
32
+
33
+ $$
34
+ F _ {t} (x) = - 2 \eta_ {t} ^ {- 1} \left(\sum_ {i = 1} ^ {K} \sqrt {x _ {i}}\right) + \gamma_ {t} ^ {- 1} \left(\sum_ {i = 1} ^ {K} x _ {i} (\log x _ {i} - 1)\right). \tag {1}
35
+ $$
36
+
37
+ Zimmert and Seldin [2020] also improved the skipping technique and achieved a refined regret bound $\mathcal{O}(\sqrt{KT} + \min_{S} (|S| + \sqrt{D_{\bar{S}} \log K}))$, where $S$ is a set of skipped rounds and $D_{\bar{S}}$ is the total delay in non-skipped rounds. The refined skipping technique requires no advance knowledge of the delays. Their key step toward eliminating the need for advance knowledge of the delays was to base the analysis on the number of outstanding observations rather than on the delays. The great advantage of skipping is that a few rounds with excessively large or potentially even infinite delays have a very limited impact on the regret bound. One of our contributions in this paper is a lower bound for the case of non-uniform delays, which matches the refined regret upper bound achieved by skipping.
38
+
39
+ Even though the hybrid regularizer used by Zimmert and Seldin [2020] shared the Tsallis entropy part with their best-of-both-worlds Tsallis-INF algorithm from Zimmert and Seldin [2021], and even though the adversarial analysis was partly similar to the analysis of the Tsallis-INF algorithm, Zimmert and Seldin [2020] did not manage to derive a regret bound for their algorithm in the stochastic setting with delayed feedback and left it as an open problem. The stochastic analysis of the Tsallis-INF algorithm is based on the self-bounding technique [Zimmert and Seldin 2021]. Application of this technique in the no-delay setting is relatively straightforward, but in the presence of delays it requires control of the drift of the playing distribution from the moment an action is played to the moment the feedback arrives. Cesa-Bianchi et al. [2019] have bounded the drift of the playing distribution of the EXP3 algorithm in the uniform delays setting with a fixed learning rate. But best-of-both-worlds algorithms require decreasing learning rates [Mourtada and Gaiffas 2019], which makes the drift control much more challenging. The problem gets even more challenging in the case of arbitrary delays, because it requires drift control over arbitrarily long periods of time.
40
+
41
+ We apply an FTRL algorithm with the same hybrid regularizer as the one used by Zimmert and Seldin [2020], but with a different tuning of the learning rates. The new tuning has a minor effect on the adversarial regret bound, but allows us to make progress with the stochastic analysis. For the stochastic analysis we use the self-bounding technique. One of our key contributions is a general lemma that bounds the drift of the playing distribution derived from the time-varying hybrid regularizer over arbitrary delays. Using this lemma we derive near-optimal best-of-both-worlds regret guarantees for the case of fixed delays. But even with the lemma at hand, application of the self-bounding technique in the presence of arbitrary delays is still much more challenging than in the no-delay or fixed-delay settings. Therefore, we resort to introducing an assumption of oracle knowledge of the maximal delay, which limits the maximal period of time over which we need to keep control over the drift. Our contributions are summarized below. To keep the presentation simple we assume uniqueness of the best arm throughout the paper. Tools for eliminating the uniqueness of the best arm assumption were proposed by Ito [2021].
42
+
43
+ 1. We show that in the arbitrary delays setting with an oracle knowledge of the maximal delay $d_{max}$ , our algorithm achieves $\mathcal{O}(\sqrt{KT} + \sqrt{D\log K} + d_{max}K^{1/3}\log K)$ regret bound in the adversarial regime simultaneously with $\mathcal{O}\left(\sum_{i \neq i^*} \left(\frac{\log T}{\Delta_i} + \frac{\sigma_{max}}{\Delta_i\log K}\right) + d_{max}K^{1/3}\log K\right)$ regret bound in the stochastic regime, where $\sigma_{max}$ is the maximal number of outstanding observations. We note that $\sigma_{max} \leq d_{max}$ , but it may potentially be much smaller. For example, if the first observation has a delay of $T$ and all the remaining observations have zero delay, then $d_{max} = T$ , but $\sigma_{max} = 1$ .
44
+ 2. In the case of uniform delays the above bounds simplify to $\mathcal{O}(\sqrt{KT} +\sqrt{dT\log K} + dK^{1 / 3}\log K)$ in the adversarial case and $\mathcal{O}\left(\sum_{i\neq i^*}(\frac{\log T}{\Delta_i} +\frac{d}{\Delta_i\log K}) + dK^{1 / 3}\log K\right)$ in the stochastic case. For $T\geq dK^{2 / 3}\log K$ the last term in the adversarial regret bound is dominated by the middle term, which leads to the minimax optimal $\mathcal{O}(\sqrt{KT} +\sqrt{dT\log K})$ adversarial regret. The stochastic regret lower bound is trivially $\Omega (\min \{d\frac{\sum_{i\neq i^*}\Delta_i}{K},\sum_{i\neq i^*}\frac{\log T}{\Delta_i}\}) = \Omega (d\frac{\sum_{i\neq i^*}\Delta_i}{K} +\sum_{i\neq i^*}\frac{\log T}{\Delta_i})$ and, therefore, our stochastic regret upper bound is near-optimal.
45
+ 3. We present an $\Omega\left(\sqrt{KT} + \min_{S}(|S| + \sqrt{D_{\bar{S}} \log K})\right)$ regret lower bound for adversarial multi-armed bandits with non-uniformly delayed feedback, which matches the refined regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020].
46
+
47
+ # 2 Problem setting
48
+
49
+ We study the multi-armed bandit with delays problem, in which at time $t = 1,2,\ldots$ the learner chooses an arm $I_{t}$ among a set of $K$ arms and instantaneously suffers a loss $\ell_{t,I_t}$ from a loss vector $\ell_t\in [0,1]^K$ generated by the environment, but $\ell_{t,I_t}$ is not observed by the learner immediately. After a delay of $d_{t}$ , at the end of round $t + d_{t}$ , the learner observes the pair $(t,\ell_{t,I_t})$ , namely, the loss and the index of the game round the loss is coming from. The sequence of delays $d_{1},d_{2},\dots$ is selected arbitrarily by the environment. Without loss of generality we can assume that all the outstanding observations are revealed at the end of the game, i.e., $t + d_{t}\leq T$ for all $t$ , where $T$ is the time horizon, unknown to the learner. We consider two regimes, oblivious adversarial and stochastic. The performance of the learner is evaluated using pseudo-regret, which is defined as
50
+
51
+ $$
52
+ \overline {{R e g}} _ {T} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \ell_ {t, I _ {t}} \right] - \min _ {i \in [ K ]} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \ell_ {t, i} \right] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\ell_ {t, I _ {t}} - \ell_ {t, i _ {T} ^ {*}}\right) \right],
53
+ $$
54
+
55
+ where $i_T^* \in \mathrm{argmin}_{i \in [K]} \mathbb{E}\left[\sum_{t=1}^{T} \ell_{t,i}\right]$ is a best arm in hindsight in expectation over the loss generation model and the randomness of the learner. In the oblivious adversarial setting the losses are independent of the actions taken by the algorithm and are considered to be deterministic, and the pseudo-regret is equal to the expected regret.
56
+
57
+ Additional Notation: We use $\Delta^n$ to denote the probability simplex over $n + 1$ points. The characteristic function of a closed convex set $\mathcal{A}$ is denoted by $\mathcal{I}_{\mathcal{A}}(x)$ and satisfies $\mathcal{I}_{\mathcal{A}}(x) = 0$ for $x\in \mathcal{A}$ and $\mathcal{I}_{\mathcal{A}}(x) = \infty$ otherwise. The convex conjugate of a function $f:\mathbb{R}^n\to \mathbb{R}$ is defined by $f^{*}(y) = \sup_{x\in \mathbb{R}^{n}}\{\langle x,y\rangle -f(x)\}$. We also use a bar to denote that the function domain is restricted to $\Delta^n$, e.g., $\bar{f} (x) = \left\{ \begin{array}{ll}f(x), & \text{if } x\in \Delta^n\\ \infty, & \text{otherwise} \end{array} \right.$. We denote the indicator function of an event $\mathcal{E}$ by $\mathbb{1}(\mathcal{E})$ and use $\mathbb{1}_t(i)$ as a shorthand for $\mathbb{1}(I_t = i)$. The probability distribution over arms that is played by the learner at round $t$ is denoted by $x_{t}\in \Delta^{K - 1}$.
58
+
59
+ # 3 Algorithm
60
+
61
+ The algorithm is based on the Follow The Regularized Leader (FTRL) algorithm with the hybrid regularizer used by Zimmert and Seldin [2020], stated in equation (1). At each time step $t$ let $\sigma_{t} = \sum_{s=1}^{t-1} \mathbb{1}(s + d_{s} \geq t)$ be the number of outstanding observations and $D_{t} = \sum_{s=1}^{t} \sigma_{s}$ be the
62
+
63
+ cumulative number of outstanding observations, then the learning rates are defined as
64
+
65
+ $$
66
+ \eta_ {t} ^ {- 1} = \sqrt {t + \eta_ {0}}, \quad \gamma_ {t} ^ {- 1} = \sqrt {\frac {\sum_ {s = 1} ^ {t} \sigma_ {s} + \gamma_ {0}}{\log K}}, \tag {2}
67
+ $$
68
+
69
+ where $\eta_0 = 10d_{max} + d_{max}^2 / \left(K^{1/3}\log(K)\right)^2$ and $\gamma_0 = 24^2d_{max}^2K^{2/3}\log(K)$ . The update rule for the distribution over actions played by the learner is
70
+
71
+ $$
72
+ x _ {t} = \nabla \bar {F} _ {t} ^ {*} (- \hat {L} _ {t} ^ {o b s}) = \arg \min _ {x \in \Delta^ {K - 1}} \langle \hat {L} _ {t} ^ {o b s}, x \rangle + F _ {t} (x), \tag {3}
73
+ $$
74
+
75
+ where $\hat{L}_t^{obs} = \sum_{s=1}^{t-1} \hat{\ell}_s \mathbb{1}(s + d_s < t)$ is the cumulative importance-weighted observed loss and $\hat{\ell}_s$ is an importance-weighted estimate of the loss vector $\ell_s$ defined by
76
+
77
+ $$
78
+ \hat {\ell} _ {t, i} = \frac {\ell_ {t , i} \mathbb {1} (I _ {t} = i)}{x _ {t , i}}.
79
+ $$
80
+
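Since $\hat{\ell}_{t,i}$ is nonzero only when $I_t = i$, and $I_t$ is drawn from $x_t$, the estimator is unbiased conditionally on $x_t$; this is the property invoked later when the drifted pseudo-regret is expressed through the suboptimality gaps:

$$
\mathbb{E}_{I_t \sim x_t}\left[\hat{\ell}_{t,i} \mid x_t\right] = x_{t,i} \cdot \frac{\ell_{t,i}}{x_{t,i}} + (1 - x_{t,i}) \cdot 0 = \ell_{t,i}.
$$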
81
+ At the beginning of round $t$ the algorithm calculates the cumulative number of outstanding observations $\mathcal{D}_t$ and uses it to define the learning rate $\gamma_t$ . Next, it uses the FTRL update rule defined in (3) to define a distribution over actions $x_t$ from which to draw action $I_t$ . Finally, at the end of round $t$ it receives the delayed observations and updates the cumulative loss estimation vector accordingly, so that $\hat{L}_{t+1}^{obs} = \hat{L}_t^{obs} + \sum_{s=1}^t \hat{\ell}_s \mathbb{1}(s + d_s = t)$ . The complete algorithm is provided in Algorithm 1.
82
+
83
+ Algorithm 1: FTRL with advance tuning for delayed bandit
84
+ 1 Initialize $\mathcal{D}_0 = 0$ and $\hat{L}_1^{obs} = 0_K$ (where $0_K$ is a zero vector in $\mathbb{R}^K$ )
85
+ 2 for $t = 1, \dots, n$ do
86
+ 3 Set $\sigma_t = \sum_{s=1}^{t-1} \mathbb{1}(s + d_s \geq t)$
87
+ 4 Update $\mathcal{D}_t = \mathcal{D}_{t-1} + \sigma_t$
88
+ 5 Set $x_t = \arg \min_{x \in \Delta^{K-1}} \langle \hat{L}_t^{obs}, x \rangle + F_t(x) \quad // F_t$ is defined in (1) and $\eta_t$ and $\gamma_t$ in (2)
89
+ 6 Sample $I_t \sim x_t$
90
+ 7 Observe $(s, \ell_{s,I_s})$ for all $s$ that satisfy $s + d_s = t$
91
+ 8 $\hat{L}_{t+1}^{obs} = \hat{L}_t^{obs} + \sum_{s=1}^{t} \hat{\ell}_s \mathbb{1}(s + d_s = t)$
92
+
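For concreteness, the following is a minimal Python sketch of Algorithm 1 using NumPy and SciPy. It is an illustration under stated assumptions rather than the paper's implementation: the FTRL step (3) is solved with a generic constrained optimizer instead of a dedicated solver, the environment (losses and delays) is supplied by the caller, and all function and variable names (`learning_rates`, `ftrl_step`, `run_delayed_ftrl`) are ours.

```python
import numpy as np
from scipy.optimize import minimize

def learning_rates(t, D_t, d_max, K):
    """Learning rates from equation (2), with eta_0 and gamma_0 as defined in the text."""
    eta0 = 10 * d_max + d_max**2 / (K**(1 / 3) * np.log(K))**2
    gamma0 = 24**2 * d_max**2 * K**(2 / 3) * np.log(K)
    return 1.0 / np.sqrt(t + eta0), 1.0 / np.sqrt((D_t + gamma0) / np.log(K))

def ftrl_step(L_obs, eta_t, gamma_t):
    """Numerically solve x_t = argmin over the simplex of <L_obs, x> + F_t(x),
    where F_t is the hybrid regularizer of equation (1)."""
    K = len(L_obs)
    def obj(x):
        x = np.clip(x, 1e-12, 1.0)
        return (L_obs @ x
                - (2.0 / eta_t) * np.sum(np.sqrt(x))
                + (1.0 / gamma_t) * np.sum(x * (np.log(x) - 1.0)))
    res = minimize(obj, np.full(K, 1.0 / K), method="SLSQP",
                   bounds=[(1e-12, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
    x = np.clip(res.x, 1e-12, None)
    return x / x.sum()

def run_delayed_ftrl(losses, delays, d_max, seed=0):
    """losses: (T, K) array of losses in [0, 1]; delays: length-T list with t + d_t <= T."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    L_obs = np.zeros(K)           # cumulative importance-weighted observed loss
    pending = []                  # (arrival round, importance-weighted loss estimate)
    D_t, total_loss = 0, 0.0
    for t in range(1, T + 1):
        D_t += len(pending)       # sigma_t = number of outstanding observations
        eta_t, gamma_t = learning_rates(t, D_t, d_max, K)
        x_t = ftrl_step(L_obs, eta_t, gamma_t)
        I_t = rng.choice(K, p=x_t)
        total_loss += losses[t - 1, I_t]
        ell_hat = np.zeros(K)
        ell_hat[I_t] = losses[t - 1, I_t] / x_t[I_t]   # importance-weighted estimator
        pending.append((t + delays[t - 1], ell_hat))
        # feedback whose delay expires at the end of round t
        L_obs += sum((v for a, v in pending if a == t), np.zeros(K))
        pending = [(a, v) for a, v in pending if a > t]
    return total_loss
```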
93
+ # 4 Best-of-both-worlds regret bounds for Algorithm 1
94
+
95
+ In this section we provide best-of-both-worlds regret bounds for Algorithm 1. First, in Theorem 1 we provide regret bounds for an arbitrary delay setting, where we assume oracle access to $d_{max}$. Then, in Corollary 2 we specialize the result to a fixed delay setting.
96
+
97
+ Theorem 1. Assume that Algorithm 1 is given oracle knowledge of $d_{\text{max}}$. Then its pseudo-regret for any sequence of delays and losses satisfies
98
+
99
+ $$
100
+ \overline {{R e g}} _ {T} = \mathcal {O} (\sqrt {T K} + \sqrt {D \log K} + d _ {\max } K ^ {1 / 3} \log K).
101
+ $$
102
+
103
+ Furthermore, in the stochastic regime the pseudo-regret additionally satisfies
104
+
105
+ $$
106
+ \overline {{R e g}} _ {T} = \mathcal {O} \left(\sum_ {i \neq i ^ {*}} (\frac {1}{\Delta_ {i}} \log (T) + \frac {\sigma_ {m a x}}{\Delta_ {i} \log K}) + d _ {m a x} K ^ {1 / 3} \log K\right).
107
+ $$
108
+
109
+ A sketch of the proof is provided in Section 5 and detailed constants are worked out in Appendix C. For fixed delays Theorem 1 gives the following corollary.
110
+
111
+ Corollary 2. If the delays are fixed and equal to $d$, and $T \geq dK^{2/3} \log K$, then the pseudo-regret of Algorithm 1 always satisfies
112
+
113
+ $$
114
+ \overline {{R e g}} _ {T} = \mathcal {O} (\sqrt {T K} + \sqrt {d T \log K})
115
+ $$
116
+
117
+ and in the stochastic setting it additionally satisfies
118
+
119
+ $$
120
+ \overline {{R e g}} _ {T} = \mathcal {O} \left(\sum_ {i \neq i ^ {*}} \left(\frac {1}{\Delta_ {i}} \log (T) + \frac {d}{\Delta_ {i} \log K}\right) + d K ^ {1 / 3} \log K\right).
121
+ $$
122
+
123
+ In the adversarial regime with fixed delays $d$, the regret lower bound is $\Omega\left(\sqrt{KT} + \sqrt{dT\log K}\right)$, whereas in the stochastic regime with fixed delays the regret lower bound is trivially $\Omega\left(d\frac{\sum_{i \neq i^*} \Delta_i}{K} + \sum_{i \neq i^*} \frac{\log T}{\Delta_i}\right)$. Thus, in the adversarial regime the corollary yields the minimax optimal regret bound and in the stochastic regime it is near-optimal. More explicitly, it is optimal within a multiplicative factor of $\sum_{i \neq i^*} \frac{1}{\Delta_i \log K} + \frac{K^{4/3} \log K}{\sum_{i \neq i^*} \Delta_i}$ in front of $d$.
124
+
125
+ If we fix a total delay budget $D$, then the uniform delay $d = D / T$ is a special case, and in this sense Theorem 1 is also optimal in the adversarial regime and near-optimal in the stochastic regime, although for non-uniform delays improved regret bounds can potentially be achieved by skipping. We also note that having the dependence on $\sigma_{max}$ in the middle term of the stochastic regret bound in Theorem 1 is better than having a dependence on $d_{max}$, since $\sigma_{max} \leq d_{max}$, and in some cases it can be significantly smaller, as shown in the example in the Introduction and quantified by the following lemma.
126
+
127
+ Lemma 3. Let $d_{\text{max}}(S) = \max_{s \in S} d_s$ , where $S \subseteq \{1, \ldots, T\}$ is a subset of rounds. Let $\bar{S} = \{1, \ldots, T\} \setminus S$ be the remaining rounds. Then
128
+
129
+ $$
130
+ \sigma_ {m a x} \leq \min _ {S \subseteq \{1, \dots , T \}} \left\{| S | + d _ {m a x} (\bar {S}) \right\}.
131
+ $$
132
+
133
+ A proof of Lemma 3 is provided in Appendix A.
134
+
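As a concrete illustration of the gap between $\sigma_{max}$ and $d_{max}$, the following small snippet (our own, not from the paper) computes $\sigma_t$ directly from its definition for the example mentioned in the Introduction, where only the first observation is delayed:

```python
def outstanding_counts(delays):
    """sigma_t = #{s < t : s + d_s >= t}, computed directly from the definition (rounds are 1-indexed)."""
    T = len(delays)
    return [sum(1 for s in range(1, t) if s + delays[s - 1] >= t) for t in range(1, T + 1)]

T = 10
delays = [T] + [0] * (T - 1)             # d_max = T
print(max(outstanding_counts(delays)))   # prints 1, i.e., sigma_max = 1 while d_max = T
```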
135
+ Finally, we note that the result in Theorem 1 is easily extendable to the corrupted regime, because the proof relies on the same self-bounding technique as the one used by Zimmert and Seldin [2021]. If we denote by $B_T^{stoch}$ the regret upper bound in the stochastic regime in Theorem 1 and by $C$ the total corruption budget, then in the corrupted regime the regret would be $\mathcal{O}(B_T^{stoch} + \sqrt{B_T^{stoch}C})$. The proof is straightforward, following the lines of Zimmert and Seldin [2021], and, therefore, left out.
136
+
137
+ # 5 A proof sketch of Theorem 1
138
+
139
+ In this section we provide a sketch of a proof of Theorem 1. We provide a proof sketch for the stochastic bound in Section 5.1. Afterwards, in Section 5.2, we show how the analysis of Zimmert and Seldin [2020] gives the adversarial bound stated in Theorem 1.
140
+
141
+ # 5.1 Stochastic Bound
142
+
143
+ We start by providing a key lemma (Lemma 4) that controls the drift of the playing distribution derived from the time-varying hybrid regularizer over arbitrary delays. We then introduce a drifted version of the pseudo-regret, defined in (4), for which we use the key lemma to show that the drifted version of the pseudo-regret is close to the actual one. As a result, it is sufficient to bound the drifted version. The analysis of the drifted pseudo-regret follows by the standard analysis of the FTRL algorithm [Lattimore and Szepesvári, 2020] that decomposes the pseudo-regret (drifted pseudo-regret in our case) into stability and penalty terms. Thereafter, we proceed by using Lemma 4 again, this time to bound the stability term in order to apply the self-bounding technique [Zimmert and Seldin, 2019], which yields logarithmic regret in the stochastic setting. Our key lemma is the following.
144
+
145
+ Lemma 4 (The Key Lemma). For any $i \in [K]$ and $s, t \in [T]$ , where $s \leq t$ and $t - s \leq d_{\max}$ , we have
146
+
147
+ $$
148
+ x _ {t, i} \leq 2 x _ {s, i}.
149
+ $$
150
+
151
+ A detailed proof of the lemma is provided in Appendix B. Below we explain the high level idea behind the proof.
152
+
153
+ Proof sketch. We know that $x_{t} = \nabla \bar{F}_{t}^{*}(-\hat{L}_{t}^{obs})$ and $x_{s} = \nabla \bar{F}_{s}^{*}(-\hat{L}_{s}^{obs})$ , so we introduce $\tilde{x} = \nabla \bar{F}_{s}^{*}(-\hat{L}_{t}^{obs})$ as an auxiliary variable to bridge between $x_{t}$ and $x_{s}$ . The analysis consists of two key steps and is based on induction on $(t, s)$ .
154
+
155
+ **Deviation Induced by the Loss Shift:** This step controls the drift when we fix the learning rates and shift the cumulative loss. We prove the following inequality:
156
+
157
+ $$
158
+ \tilde {x} _ {i} \leq \frac {3}{2} x _ {\mathrm {s}, i}.
159
+ $$
160
+
161
+ Note that this step uses the induction assumption for $(s, s - d_r)$ for all $r < s : r + d_r = s$ .
162
+
163
+ Deviation Induced by the Change of Regularizer: In this step we bound the drift when the cumulative loss vector is fixed and we change the regularizer. We show that
164
+
165
+ $$
166
+ x _ {t, i} \leq \frac {4}{3} \tilde {x} _ {i}.
167
+ $$
168
+
169
+ Combining these two steps gives us the desired bound. A proof of these steps is provided in Appendix B.
170
+
171
+ We use Lemma 4 to relate the drifted pseudo-regret to the actual pseudo-regret. Let $A_{t} = \{s:s\leq t$ and $s + d_{s} = t\}$ be the set of rounds for which feedback arrives at round $t$ . We define the observed loss vector at time $t$ as $\hat{\ell}_t^{obs} = \sum_{s\in A_t}\hat{\ell}_s$ and the drifted pseudo-regret as
172
+
173
+ $$
174
+ \overline {{R e g _ {T} ^ {d r i f t}}} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\langle x _ {t}, \hat {\ell} _ {t} ^ {o b s} \rangle - \hat {\ell} _ {t, i _ {T} ^ {*}} ^ {o b s}\right) \right]. \tag {4}
175
+ $$
176
+
177
+ We rewrite the drifted regret as
178
+
179
+ $$
180
+ \begin{array}{l} \overline {{R e g}} _ {T} ^ {d r i f t} = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \sum_ {s \in A _ {t}} \left(\langle x _ {t}, \hat {\ell} _ {s} \rangle - \hat {\ell} _ {s, i _ {T} ^ {*}}\right) \right] \\ = \sum_ {t = 1} ^ {T} \sum_ {s \in A _ {t}} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t, i} (\hat {\ell} _ {s, i} - \hat {\ell} _ {s, i _ {T} ^ {*}}) ] \\ = \sum_ {t = 1} ^ {T} \sum_ {s \in A _ {t}} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t, i} ] \Delta_ {i} = \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t + d _ {t}, i} ] \Delta_ {i}, \\ \end{array}
181
+ $$
182
+
183
+ where when taking the expectation we use the facts that $\hat{\ell}_s$ has no impact on the determination of $x_{t}$ and that the loss estimators are unbiased. Using Lemma 4 we make a connection between pseudo-regret and the drifted version:
184
+
185
+ $$
186
+ \begin{array}{l} \overline {{R e g}} _ {T} ^ {d r i f t} = \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t + d _ {t}, i} ] \Delta_ {i} \geq \sum_ {t = 1} ^ {T - d _ {m a x}} \sum_ {i = 1} ^ {K} \frac {1}{2} \mathbb {E} [ x _ {t + d _ {m a x}, i} ] \Delta_ {i} \\ = \frac {1}{2} \sum_ {t = d _ {m a x} + 1} ^ {T} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t, i} ] \Delta_ {i} \\ \geq \frac {1}{2} \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {K} \mathbb {E} [ x _ {t, i} ] \Delta_ {i} - \frac {d _ {m a x}}{2} = \frac {1}{2} \overline {{R e g}} _ {T} - \frac {d _ {m a x}}{2}, \\ \end{array}
187
+ $$
188
+
189
+ where the first inequality follows by Lemma 4, and the second inequality uses $\sum_{t=1}^{d_{max}} \sum_{i=1}^{K} \mathbb{E}[x_{t,i}] \Delta_i \leq d_{max}$. As a result, we have $\overline{\text{Reg}_T} \leq 2 \overline{\text{Reg}_T}^{\text{drift}} + d_{max}$ and it suffices to upper bound $\overline{\text{Reg}_T}^{\text{drift}}$. We follow the standard analysis of FTRL, which decomposes the drifted pseudo-regret into stability and penalty terms as
190
+
191
+ $$
192
+ \overline {{R e g _ {T} ^ {d r i f t}}} = \mathbb {E} \left[ \underbrace {\sum_ {t = 1} ^ {T} \langle x _ {t} , \hat {\ell} _ {t} ^ {o b s} \rangle + \bar {F} _ {t} ^ {*} (- \hat {L} _ {t + 1} ^ {o b s}) - \bar {F} _ {t} ^ {*} (- \hat {L} _ {t} ^ {o b s})} _ {s t a b i l i t y} \right] + \mathbb {E} \left[ \underbrace {\sum_ {t = 1} ^ {T} \bar {F} _ {t} ^ {*} (- \hat {L} _ {t} ^ {o b s}) - \bar {F} _ {t} ^ {*} (- \hat {L} _ {t + 1} ^ {o b s}) - \ell_ {t , i _ {T} ^ {*}}} _ {p e n a l t y} \right].
193
+ $$
194
+
195
+ For the penalty term we have the following bound by Abernethy et al. [2015]
196
+
197
+ $$
198
+ p e n a l t y \leq \sum_ {t = 2} ^ {T} \left(F _ {t - 1} \left(x _ {t}\right) - F _ {t} \left(x _ {t}\right)\right) + F _ {T} \left(\mathrm {e} _ {i _ {T} ^ {*}}\right) - F _ {1} \left(x _ {1}\right),
199
+ $$
200
+
201
+ where $\mathrm{e}_{i_T^*}$ denotes a unit vector in $\mathbb{R}^K$ with the $i_T^*$ -th element being one and zero elsewhere. By replacing the closed form of the regularizer in this bound and using the facts that $\eta_t^{-1} - \eta_{t-1}^{-1} = \mathcal{O}(\eta_t)$ , $\gamma_t^{-1} - \gamma_{t-1}^{-1} = \mathcal{O}(\sigma_t\gamma_t / \log K)$ , and $x_{t,i_T^*}^{\frac{1}{2}} - 1 \leq 0$ , we obtain
202
+
203
+ $$
204
+ \text {p e n a l t y} \leq \mathcal {O} \left(\sum_ {t = 2} ^ {T} \sum_ {i \neq i ^ {*}} \eta_ {t} x _ {t, i} ^ {\frac {1}{2}} + \sum_ {t = 2} ^ {T} \sum_ {i = 1} ^ {K} \frac {\sigma_ {t} \gamma_ {t} x _ {t , i} \log \left(1 / x _ {t , i}\right)}{\log K}\right) + 2 \sqrt {\eta_ {0} (K - 1)} + \sqrt {\gamma_ {0} \log K}. \tag {5}
205
+ $$
206
+
207
+ In order to control the stability term we derive Lemma 5.
208
+
209
+ Lemma 5 (Stability). Let $v_{t} = |A_{t}|$ . For any $\alpha_{t} \leq \gamma_{t}^{-1}$ we have
210
+
211
+ $$
212
+ s t a b i l i t y \leq \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {K} 2 f _ {t} ^ {\prime \prime} (x _ {t, i}) ^ {- 1} (\hat {\ell} _ {t, i} ^ {o b s} - \alpha_ {t}) ^ {2}.
213
+ $$
214
+
215
+ Furthermore, $\alpha_{t} = \frac{\sum_{j=1}^{K} f_{t}^{\prime\prime}(x_{t,j})^{-1} \hat{\ell}_{t,j}^{obs}}{\sum_{j=1}^{K} f_{t}^{\prime\prime}(x_{t,j})^{-1}}$ satisfies $\alpha_{t} \leq \gamma_{t}^{-1}$ and yields
216
+
217
+ $$
218
+ \mathbb {E} [ \text {s t a b i l i t y} ] \leq \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} 2 \gamma_ {t} \left(v _ {t} - 1\right) v _ {t} \mathbb {E} \left[ x _ {t, i} \right] \Delta_ {i} + \sum_ {t = 1} ^ {T} \sum_ {s \in A _ {t}} \sum_ {i = 1} ^ {K} 2 \eta_ {t} \mathbb {E} \left[ x _ {t, i} ^ {3 / 2} x _ {s, i} ^ {- 1} \left(1 - x _ {s, i}\right) \right]. \tag {6}
219
+ $$
220
+
221
+ A proof of the stability lemma is provided in Appendix A.3. We apply Lemma 4 to (6) to obtain the bounds $v_{t}x_{t,i} = \sum_{s\in A_{t}}x_{t,i}\leq 2\sum_{s\in A_{t}}x_{s,i}$ and $x_{t,i}^{3 / 2}x_{s,i}^{-1}(1 - x_{s,i})\leq 2^{3 / 2}x_{s,i}^{1 / 2}(1 - x_{s,i})$. Moreover, in order to remove the best arm $i^{*}$ from the summation in the latter bound we use $x_{s,i^*}^{1 / 2}(1 - x_{s,i^*})\leq \sum_{i\neq i^*}x_{s,i}\leq \sum_{i\neq i^*}x_{s,i}^{1 / 2}$. These bounds, together with the facts that we can change the order of the summations and that each $t$ belongs to exactly one $A_{s}$, give us the following stability bound
222
+
223
+ $$
224
+ \mathbb {E} [ \text {s t a b i l i t y} ] = \mathcal {O} \left(\sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \eta_ {t} \mathbb {E} \left[ x _ {t, i} ^ {1 / 2} \right] + \sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \gamma_ {t + d _ {t}} \left(v _ {t + d _ {t}} - 1\right) \mathbb {E} \left[ x _ {t, i} \right] \Delta_ {i}\right). \tag {7}
225
+ $$
226
+
227
+ By combining (7), (5), and the fact that $\overline{Reg}_T \leq 2\overline{Reg}_T^{drift} + d_{max}$ , we show that there exist constants $a, b, c \geq 0$ , such that
228
+
229
+ $$
230
+ \begin{array}{l} \overline {{R e g}} _ {T} \leq \mathbb {E} \left[ a \underbrace {\sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \eta_ {t} x _ {t , i} ^ {1 / 2}} _ {A} + b \underbrace {\sum_ {t = 1} ^ {T} \sum_ {i \neq i ^ {*}} \gamma_ {t + d _ {t}} (v _ {t + d _ {t}} - 1) x _ {t , i} \Delta_ {i}} _ {B} + c \underbrace {\sum_ {t = 2} ^ {T} \sum_ {i = 1} ^ {K} \frac {\sigma_ {t} \gamma_ {t} x _ {t , i} \log (1 / x _ {t , i})}{\log K}} _ {C} \right] \\ + \underbrace {4 \sqrt {\eta_ {0} (K - 1)} + 2 \sqrt {\gamma_ {0} \log K} + d _ {\max }} _ {D}. \tag {8} \\ \end{array}
231
+ $$
232
+
233
+ **Self-bounding analysis:** We use the self-bounding technique to write $\overline{Reg}_T = 4\overline{Reg}_T - 3\overline{Reg}_T$, and then based on (8) we have
234
+
235
+ $$
236
+ \overline {{R e g}} _ {T} \leq \mathbb {E} \left[ 4 a A - \overline {{R e g}} _ {T} \right] + \mathbb {E} \left[ 4 b B - \overline {{R e g}} _ {T} \right] + \mathbb {E} \left[ 4 c C - \overline {{R e g}} _ {T} \right] + 4 D. \tag {9}
237
+ $$
238
+
239
+ For $D$ we can substitute the values of $\gamma_0$ and $\eta_0$ and get
240
+
241
+ $$
242
+ D = \mathcal {O} \left(d _ {\max } (K - 1) ^ {1 / 3} \log K\right). \tag {10}
243
+ $$
244
+
245
+ Upper bounding $A, B,$ and $C$ requires separate and elaborate analysis, which we carry out in Lemmas 6, 7, and 8, respectively. Proofs of these lemmas are provided in Appendix A.2.
246
+
247
+ Lemma 6 (A bound for $4aA - \overline{Reg}_T$ ). We have the following bound for any $a \geq 0$ :
248
+
249
+ $$
250
+ 4 a A - \overline {{R e g}} _ {T} \leq \sum_ {i \neq i ^ {*}} \frac {4 a ^ {2}}{\Delta_ {i}} \log (T / \eta_ {0} + 1) + 1. \tag {11}
251
+ $$
252
+
253
+ Lemma 6 contributes the logarithmic (in $T$ ) term to the regret bound.
254
+
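To see where the logarithmic term comes from (an informal sketch of the standard self-bounding step; the actual proof is in Appendix A.2), recall that in the stochastic regime $\overline{Reg}_T = \sum_{t=1}^{T}\sum_{i \neq i^*}\mathbb{E}[x_{t,i}]\Delta_i$, so by Jensen's inequality the difference $4aA - \overline{Reg}_T$ can be bounded by maximizing over each $\mathbb{E}[x_{t,i}]$ separately:

$$
\max_{z \geq 0}\left(4a\eta_t z^{1/2} - \Delta_i z\right) = \frac{4a^2\eta_t^2}{\Delta_i},
\qquad
\sum_{t=1}^{T}\eta_t^2 = \sum_{t=1}^{T}\frac{1}{t + \eta_0} \leq \log\left(T/\eta_0 + 1\right),
$$

which together recover the bound in (11) up to the additive constant.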
255
+ Lemma 7 (A bound for $4bB - \overline{Reg}_T$ ). Let $v_{max} = \max_{t \in [T]} v_t$ , then for any $b \geq 0$ :
256
+
257
+ $$
258
+ 4 b B - \overline {{R e g}} _ {T} \leq 6 4 b ^ {2} v _ {\max } \log K. \tag {12}
259
+ $$
260
+
261
+ It is evident that $v_{max} \leq \sigma_{max} \leq d_{max}$ , so the bound in Lemma 7 contributes an $\mathcal{O}(d_{max}\log K)$ term to the regret bound.
262
+
263
+ Lemma 8 (A bound for $4cC - \overline{Reg}_T$). For any $c \geq 0$:
264
+
265
+ $$
266
+ 4 c C - \overline {{R e g}} _ {T} \leq \sum_ {i \neq i ^ {*}} \frac {1 2 8 c ^ {2} \sigma_ {\max }}{\Delta_ {i} \log K}. \tag {13}
267
+ $$
268
+
269
+ Part of the pseudo-regret bound that corresponds to Lemma 8 comes from the penalty term related to the negative entropy part of the regularizer. In this part, despite the fact that $\sigma_{max}$ can be much smaller than $d_{max}$ (Lemma 3), the $\sum_{i\neq i^{*}}\frac{\sigma_{max}}{\Delta_{i}\log K}$ term could be very large when the suboptimality gaps are small. In Appendix D we show how an asymmetric oracle learning rate $\gamma_{t,i}\simeq \gamma_t / \sqrt{\Delta_i}$ for the negative entropy regularizer can be used to remove the $\sum_{i\neq i^{*}}1 / \Delta_{i}$ factor in front of $\sigma_{max}$ . The possibility of removing this factor without the oracle knowledge is left as an open question.
270
+
271
+ Finally, by plugging (10), (11), (12), (13) into (9) we obtain the desired regret bound.
272
+
273
+ # 5.2 Adversarial bound
274
+
275
+ For the adversarial regime we use the final bound of Zimmert and Seldin [2020], which holds for any non-increasing learning rates:
276
+
277
+ $$
278
+ \overline {{R e g}} _ {T} \leq \sum_ {t = 1} ^ {T} \eta_ {t} \sqrt {K} + \sum_ {t = 1} ^ {T} \gamma_ {t} \sigma_ {t} + 2 \eta_ {T} ^ {- 1} \sqrt {K} + \gamma_ {T} ^ {- 1} \log K.
279
+ $$
280
+
281
+ It suffices to substitute the values of the learning rates and use Lemma 11 for the function $\frac{1}{\sqrt{x}}$:
282
+
283
+ $$
284
+ \begin{array}{l} \overline {{R e g}} _ {T} \leq \sum_ {t = 1} ^ {T} \frac {\sqrt {K}}{\sqrt {t + \eta_ {0}}} + \sum_ {t = 1} ^ {T} \frac {\sigma_ {t} \sqrt {\log K}}{\sqrt {D _ {t} + \gamma_ {0}}} + 2 \sqrt {K T + K \eta_ {0}} + \sqrt {\log (K) D _ {T} + \gamma_ {0} \log (K)} \\ = \mathcal {O} \left(\sqrt {K T} + \sqrt {\log (K) D _ {T}} + d _ {m a x} K ^ {1 / 3} \log K\right). \\ \end{array}
285
+ $$
286
+
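The two sums above are handled by the standard integral-comparison bound (presumably the content of Lemma 11, which is stated in the appendix): for any non-negative sequence $a_1, \dots, a_T$ with partial sums $A_t = A_0 + \sum_{s=1}^{t} a_s$ and $A_0 \geq 0$,

$$
\sum_{t=1}^{T}\frac{a_t}{\sqrt{A_t}} \leq \sum_{t=1}^{T} 2\left(\sqrt{A_t} - \sqrt{A_{t-1}}\right) = 2\left(\sqrt{A_T} - \sqrt{A_0}\right) \leq 2\sqrt{A_T},
$$

applied with $a_t = 1$, $A_0 = \eta_0$ for the first sum and with $a_t = \sigma_t$, $A_0 = \gamma_0$ for the second.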
287
+ # 6 Refined lower bound
288
+
289
+ In this section, we prove a tight lower bound for adversarial regret with arbitrary delays. Thune et al. [2019] proposed a skipping technique to achieve refined regret upper bounds in the adversarial regime with non-uniform delays. The technique was improved by Zimmert and Seldin [2020], but it remained unknown whether the refined regret bounds for regimes with non-uniform delays are tight. We answer this question positively by showing that the regret bound of Zimmert and Seldin [2020] is not improvable without additional assumptions. We first derive a refined lower bound for full-information games with variable loss ranges, which might be of independent interest. A proof is provided in Appendix E.
290
+
291
+ Theorem 9. Let $L_{1} \geq L_{2} \geq \dots \geq L_{T} \geq 0$ be a non-increasing sequence of positive reals and assume that there exists a permutation $\rho : [T] \to [T]$ , such that the losses at time $t$ are bounded in $[0, L_{\rho(t)}]^{K}$ . The minimax regret $Reg^{*}$ in the corresponding adversarial full-information game satisfies
292
+
293
+ $$
294
+ R e g ^ {*} \geq \max \left\{\frac {1}{2} \sum_ {t = 1} ^ {\lfloor \log_ {2} (K) \rfloor} L _ {t}, \frac {1}{3 2} \sqrt {\sum_ {t = \lfloor \log_ {2} (K) \rfloor} ^ {T} L _ {t} ^ {2} \log (K)} \right\}.
295
+ $$
296
+
297
+ From here we can directly obtain a lower bound for the full-information game with variable delays. This implies the same lower bound for bandits, since we have strictly less information available.
298
+
299
+ Corollary 10. Let $(d_t)_{t=1}^T$ be a sequence of non-increasing delays, such that $d_t \leq T + 1 - t$ and let an oblivious adversary select all loss vectors $(\ell_t)_{t=1}^T$ in $[0,1]^K$ before the start of the game. The minimax regret of the full-information game is bounded from below by
300
+
301
+ $$
302
+ Reg^{*} = \Omega \left( \min_{S \subset [T]} \left\{ |S| + \sqrt{D_{\bar{S}} \log (K)} \right\} \right), \quad \text{where } D_{\bar{S}} = \sum_{t \in [T] \setminus S} d_{t}.
303
+ $$
304
+
305
+ Proof. We divide the time horizon greedily into $M$ buckets, such that the actions for all timesteps inside a bucket have to be chosen before the first feedback from any timestep inside the bucket is received. In other words, let bucket $B_{m} = \{b_{m},\dots ,b_{m + 1} - 1\}$ , then $\forall t\in B_m:t + d_t > b_{m + 1} - 1$ while $\exists t\in B_m:t + d_t = b_{m + 1}$ . This division of buckets has the following properties:
306
+
307
+ (i) monotonically decreasing sizes: $|B_1| \geq |B_2| \geq \dots \geq |B_M|$ .
308
+ (ii) upper bound on the sum of delays: $\forall m\in [M - 1]:|B_m|^2\geq \sum_{t\in B_{m + 1}}d_t$
309
+
310
+ Both properties follow directly from the non-increasing nature of the delays.
311
+
312
+ $$
313
+ \begin{array}{l} \left| B _ {m} \right| = b _ {m + 1} - b _ {m} \leq b _ {m} + d _ {b _ {m}} - b _ {m} = d _ {b _ {m}} \\ \left| B _ {m} \right| = \min _ {t \in B _ {m}} \left\{d _ {t} + t - b _ {m} \right\} \geq d _ {b _ {m + 1} - 1} + \min _ {t \in B _ {m}} \left\{t - b _ {m} \right\} \geq d _ {b _ {m + 1} - 1}. \\ \end{array}
314
+ $$
315
+
316
+ Hence
317
+
318
+ $$
319
+ \begin{array}{l} \left| B _ {m} \right| \geq d _ {b _ {m + 1} - 1} \geq d _ {b _ {m + 1}} \geq \left| B _ {m + 1} \right|, \\ \sum_ {t \in B _ {m + 1}} d _ {t} \leq | B _ {m + 1} | \cdot d _ {b _ {m + 1}} \leq | B _ {m + 1} | \cdot | B _ {m} | \leq | B _ {m} | ^ {2}. \\ \end{array}
320
+ $$
321
+
322
+ Set $S' = \bigcup_{m=1}^{\lfloor \log_2(K) \rfloor} B_m$ and let the adversary set all losses within a bucket to the same value, then the game reduces to a full information game over $M$ rounds with loss ranges $|B_1|, |B_2|, \ldots, |B_M|$ . Applying Theorem 9 yields
323
+
324
+ $$
325
+ \begin{array}{l} R e g ^ {*} \geq \max \left\{\frac {1}{2} \sum_ {m = 1} ^ {\lfloor \log_ {2} (K) \rfloor} | B _ {m} |, \frac {1}{3 2} \sqrt {\sum_ {m = \lfloor \log_ {2} (K) \rfloor} ^ {M} | B _ {m} | ^ {2} \log (K)} \right\} \\ \geq \max \left\{\frac {1}{2} | S ^ {\prime} |, \frac {1}{3 2} \sqrt {\sum_ {t \in \bar {S} ^ {\prime}} d _ {t} \log (K)} \right\} = \Omega \left(\min _ {S \subset [ T ]} | S | + \sqrt {\sum_ {t \in \bar {S}} d _ {t} \log (K)}\right). \\ \end{array}
326
+ $$
327
+
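The greedy bucket construction in the proof can be made concrete with the following short sketch (our own illustration; the names and the example delays are not from the paper). Starting from $b_1 = 1$, each bucket is extended for as long as no feedback generated inside it would arrive before the bucket ends:

```python
def greedy_buckets(delays):
    """Partition rounds 1..T into buckets so that all actions in a bucket are chosen
    before the first feedback from inside the bucket arrives (rounds are 1-indexed)."""
    T = len(delays)
    buckets, b = [], 1
    while b <= T:
        e = b                                   # current bucket end
        first_arrival = b + delays[b - 1]       # earliest arrival among rounds in the bucket
        while e + 1 <= T and e + 1 < first_arrival:
            e += 1
            first_arrival = min(first_arrival, e + delays[e - 1])
        buckets.append(list(range(b, e + 1)))
        b = e + 1
    return buckets

# Non-increasing delays with d_t <= T + 1 - t, as assumed in Corollary 10.
print(greedy_buckets([5, 4, 4, 3, 2, 2, 1, 1, 1, 1]))
# -> [[1, 2, 3, 4, 5], [6, 7], [8], [9], [10]]  (bucket sizes are non-increasing)
```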
328
+ # 7 Discussion
329
+
330
+ We have presented a best-of-both-worlds analysis of a slightly modified version of the algorithm of Zimmert and Seldin [2020] for bandits with delayed feedback. The key novelty of our analysis is the control of the drift of the playing distribution over arbitrary, but bounded, time intervals when the learning rate is changing over time. This control is necessary for best-of-both-worlds guarantees, but it is much more challenging than the drift control over fixed time intervals with fixed learning rate that appeared in prior work.
331
+
332
+ We also presented an adversarial regret lower bound matching the skipping-based refined regret upper bound of Zimmert and Seldin [2020] within constants.
333
+
334
+ Our work leads to several exciting open questions. The main one is whether skipping can be used to eliminate the need for oracle knowledge of $d_{max}$. If possible, this would remedy the deterioration of the adversarial bound by the additive factor of $d_{max}$, because the skipping threshold would be dominated by $\sqrt{D_{\bar{S}} \log K}$. Another open question is whether the $\frac{\sigma_{max}}{\Delta_i}$ term can be eliminated from the stochastic bound. Yet another open question is whether the $d_{max}$ factor in the stochastic bound can be reduced to $\sigma_{max}$ and whether the multiplicative terms dependent on $K$ can be eliminated. Extensions of the results to first-order bounds, which depend on the cumulative loss of the best action rather than $T$, and to arm-dependent delays are also open questions. So far this has been done only in the adversarial setting [Gyorgy and Joulani 2021, Van Der Hoeven and Cesa-Bianchi 2022].
335
+
336
+ # Acknowledgments and Disclosure of Funding
337
+
338
+ This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. YS acknowledges partial support by the Independent Research Fund Denmark, grant number 9040-00361B.
339
+
340
+ # References
341
+
342
+ Jacob D Abernethy, Chansoo Lee, and Ambuj Tewari. Fighting bandits with a new kind of smoothness. In Advances in Neural Information Processing Systems (NeurIPS). 2015.
343
+ Peter Auer and Chao-Kai Chiang. An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits. In Proceedings of the Conference on Learning Theory (COLT), 2016.
344
+ Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47, 2002.
345
+ Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32, 2002b.
346
+ Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, and Jose Blanchet. Online exp3 learning in adversarial bandits with delayed feedback. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
347
+ Sébastien Bubeck and Aleksandrs Slivkins. The best of both worlds: Stochastic and adversarial bandits. In Proceedings of the Conference on Learning Theory (COLT), 2012.
348
+ Nicolò Cesa-Bianchi, Claudio Gentile, Yishay Mansour, and Alberto Minora. Delay and cooperation in nonstochastic bandits. Journal of Machine Learning Research, 2019.
349
+ Andras Gyorgy and Pooria Joulani. Adapting to delays and data in adversarial multi-armed bandits. In Proceedings of the International Conference on Machine Learning (ICML), 2021.
350
+ Shinji Ito. Parameter-free multi-armed bandit algorithms with hybrid data-dependent regret bounds. In Proceedings of the Conference on Learning Theory (COLT), 2021.
351
+ Pooria Joulani, Andras Gyorgy, and Csaba Szepesvari. Online learning under delayed feedback. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
352
+ Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6, 1985.
353
+ Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
354
+ Saeed Masoudian and Yevgeny Seldin. Improved analysis of the tsallis-inf algorithm in stochastically constrained adversarial bandits and stochastic bandits with adversarial corruptions. In Proceedings of the Conference on Learning Theory (COLT), 2021.
355
+ Jaouad Mourtada and Stephane Gaiffas. On the optimality of the hedge algorithm in the stochastic regime. Journal of Machine Learning Research, 20, 2019.
356
+ Francesco Orabona. A modern introduction to online learning. https://arxiv.org/abs/1912.13213, 2019.
357
+ Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58, 1952.
358
+ Yevgeny Seldin and Gábor Lugosi. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits. In Proceedings of the Conference on Learning Theory (COLT), 2017.
359
+ Yevgeny Seldin and Aleksandrs Slivkins. One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the International Conference on Machine Learning (ICML), 2014.
360
+ William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25, 1933.
361
+ Tobias Sommer Thune, Nicolò Cesa-Bianchi, and Yevgeny Seldin. Nonstochastic multiarmed bandits with unrestricted delays. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
362
+ Dirk Van Der Hoeven and Nicolò Cesa-Bianchi. Nonstochastic bandits and experts with arm-dependent delays. In Proceedings on the International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
363
+
364
+ Chen-Yu Wei and Haipeng Luo. More adaptive algorithms for adversarial bandits. In Proceedings of the Conference on Learning Theory (COLT), 2018.
365
+
366
+ Julian Zimmert and Yevgeny Seldin. An optimal algorithm for stochastic and adversarial bandits. In Proceedings on the International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
367
+
368
+ Julian Zimmert and Yevgeny Seldin. An optimal algorithm for adversarial bandits with arbitrary delays. In Proceedings on the International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
369
+
370
+ Julian Zimmert and Yevgeny Seldin. Tsallis-INF: An optimal algorithm for stochastic and adversarial bandits. Journal of Machine Learning Research, 2021.
371
+
372
+ # Checklist
373
+
374
+ 1. For all authors...
375
+
376
+ (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
377
+ (b) Did you describe the limitations of your work? [Yes] All assumptions are stated in the statements of the theorems.
378
+ (c) Did you discuss any potential negative societal impacts of your work? [N/A] The main contributions of our work are theoretical guarantees for the multi-armed bandit setting with delays. The multi-armed bandit problem is a fundamental problem in sequential decision making that underlies many online learning problems, so this is not a relevant issue for our work.
379
+ (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
380
+
381
+ 2. If you are including theoretical results...
382
+
383
+ (a) Did you state the full set of assumptions of all theoretical results? [Yes]
384
+ (b) Did you include complete proofs of all theoretical results? [Yes]
385
+
386
+ 3. If you ran experiments...
387
+
388
+ (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
389
+ (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
390
+ (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
391
+ (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
392
+
393
+ 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
394
+
395
+ (a) If your work uses assets, did you cite the creators? [N/A]
396
+ (b) Did you mention the license of the assets? [N/A]
397
+ (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
398
+ (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
399
+ (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
400
+
401
+ 5. If you used crowdsourcing or conducted research with human subjects...
402
+
403
+ (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
404
+ (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
405
+ (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
abestofbothworldsalgorithmforbanditswithdelayedfeedback/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d05581275e58e490eebb2e2759ed156c75b92eb3461d0d53181994760504b00b
3
+ size 345045
abestofbothworldsalgorithmforbanditswithdelayedfeedback/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e0f4658621a4c48903b9a573a8b610ab8c1438935dae3192b4e46775e894a039
3
+ size 479464
aboostingapproachtoreinforcementlearning/e2e16445-7251-410e-8e14-fb6768630d8b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8adcbc9ff811d6b6ee6908fa5730d0bd81ecc39e318e947dae90df38ce0b026
3
+ size 76360
aboostingapproachtoreinforcementlearning/e2e16445-7251-410e-8e14-fb6768630d8b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ad644a377538ad33a300bafc2081d8db45621ed2716902e81eb72805d40bc68
3
+ size 101857