# 3D Spatial Recognition without Spatially Labeled 3D
Zhongzheng Ren $^{1,2*}$ Ishan Misra $^{1}$ Alexander G. Schwing $^{2}$ Rohit Girdhar $^{1}$
$^{1}$Facebook AI Research $^{2}$University of Illinois at Urbana-Champaign
https://facebookresearch.github.io/WyPR

Figure 1: (Left) Our framework, WyPR, jointly learns semantic segmentation and object detection for point cloud data from only scene-level class tags. We find that encouraging consistency between the two tasks is key. (Right) Sample segmentation results from ScanNet val set, without seeing any point-level labels during training. Please refer to § 4.4 and Appendix F for more analysis and visualizations.

# Abstract
We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition, requiring only scene-level class tags as supervision. WyPR jointly addresses three core 3D recognition tasks: point-level semantic segmentation, 3D proposal generation, and 3D object detection, coupling their predictions through self and cross-task consistency losses. We show that in conjunction with standard multiple-instance learning objectives, WyPR can detect and segment objects in point cloud data without access to any spatial labels at training time. We demonstrate its efficacy using the ScanNet and S3DIS datasets, outperforming prior state of the art on weakly-supervised segmentation by more than $6\%$ mIoU. In addition, we set up the first benchmark for weakly-supervised 3D object detection on both datasets, where WyPR outperforms standard approaches and establishes strong baselines for future work.
# 1. Introduction
Recognition (i.e., segmentation and detection) of 3D objects is a key step towards scene understanding. With the recent development of consumer-level depth sensors (e.g., LiDAR [13, 43]) and the advances of computer vision algorithms, 3D data collection has become more convenient and inexpensive. However, existing 3D recognition systems often fail to scale as they rely on strong supervision, such as point-level semantic labels or 3D bounding boxes [9, 29, 32], which are time-consuming to obtain. For example, while the popular large-scale indoor 3D dataset ScanNet [10] was collected by only 20 people, the annotation effort involved more than 500 annotators spending nearly 22.3 minutes per scan. Furthermore, due to the high annotation cost, existing 3D object detection datasets have limited themselves to a small number of object classes. This time-consuming labeling process is a major bottleneck preventing the community from scaling 3D recognition.
Motivated by this observation, we study 3D weakly-supervised learning with only scene-level class tags available as supervision to train semantic segmentation and object detection models. Scene-level tags are very efficient to annotate, taking only a second or less for each object in the scene [36]. Hence, methods that rely on such supervision can be scaled more easily than those that rely on box-level supervision.
To this end, we develop a novel weakly-supervised framework called WyPR, shown in Fig. 1. Using just scene-level tags, it jointly learns both segmentation of the point cloud and detection of 3D boxes. Why should joint learning of segmentation and detection perform better than learning the two tasks independently? First, since the two tasks are related, joint training is mutually beneficial for representation learning. Second, the tasks naturally constrain each other, leading to effective self-supervised objectives that further improve performance. For example, the semantic labels of points within a bounding box should be consistent, and vice versa. Lastly, directly learning to regress the dimensions of 3D bounding boxes, as common in supervised approaches [28, 29, 39], is extremely challenging using weak labels. Learning weakly-supervised segmentation first permits a two-stage detection framework, where object proposals are generated bottom-up conditioned on the segmentation prediction and further classified using a weakly-supervised detection algorithm.

| Methods | [46] | [56] | [59] | [51] | [53] | [33] | WyPR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Weak labels | 2D boxes | 2D inst seg | sparse label | 2D sem seg | region & scene tags | scene tags | scene tags |
| Tasks | det | det | seg | seg | seg | det | det + seg |
| Dataset | indoor | outdoor | indoor & objects | indoor | indoor | outdoor | indoor |

Table 1: Summary of closely related work in weakly-supervised 3D recognition. Compared to prior work, our proposed method (WyPR) uses the readily available scene tags, and jointly learns detection and segmentation in the more challenging indoor room setting.
To achieve this, WyPR operates on point cloud data of complex indoor scenes and combines a weakly-supervised semantic segmentation stage (§ 3.1) with a weakly-supervised object detection stage (§ 3.2). The latter takes as input the geometric representation of the input scene and a set of computed 3D proposals from GSS, our novel Geometric Selective Search algorithm (§ 3.3). GSS uses local geometric structures (e.g., planes) and the previously computed segmentation for bottom-up proposal generation. Due to the uninformative nature of weak labels, weakly-supervised frameworks often suffer from noisy predictions and high variance. We address this by encouraging both cross-task and cross-transformation consistency through self-supervised objectives. We evaluate WyPR on standard 3D datasets, i.e., ScanNet and S3DIS (§ 4), improving over prior work on weakly-supervised 3D segmentation by more than $6\%$ mIoU, and establishing new benchmarks and strong baselines for weakly-supervised 3D detection.
Our contributions are as follows: 1) a novel point cloud framework to jointly learn weakly-supervised semantic segmentation and object detection, which significantly outperforms single task baselines; 2) an unsupervised 3D proposal generation algorithm, geometric selective search (GSS), for point cloud data; and 3) state-of-the-art results on weakly-supervised semantic segmentation, and benchmarks on weakly-supervised proposal generation and object detection.
# 2. Related work
3D datasets. Semantically labeled 3D data can be broadly classified into indoor [2, 5, 10, 41] and outdoor [7, 8, 14, 44] settings. ScanNet [10], a popular 3D detection and segmentation dataset, contains 20 classes labeled in 1500 scenes. While this dataset is large, it is small in comparison to 2D datasets, which reach tens of millions of images [21] and thousands of instance labels [17]. While the popularity of advanced 3D sensors [13, 43] could lead to a similar growth in 3D data, annotating that data would still be
extremely time consuming. This underscores the need to develop weakly-supervised techniques for 3D recognition.
3D representations. 3D data is often represented via a point cloud, and processed using one of two main backbone architectures. The first [9, 15, 16, 37] projects points to intermediate volumetric grids, and then processes them using convolutional nets. These methods are efficient but suffer from information loss due to the discretization into voxels. The second operates directly on points [31, 32, 47, 52], processing them in parallel either using a pointwise MLP [31, 32], graph convolution [52], or point convolution [47]. Our method is compatible with either backbone architecture. We adopt PointNet++ [32] for experimentation.
3D tasks. Semantic segmentation [2, 6, 10], object detection [29, 30, 39, 42], and classification [57] are the standard recognition tasks defined on 3D data. For segmentation, the two most common tasks are point-level object-part segmentation [6] and scene object segmentation [2, 10], the latter of which we address in this work using weak supervision. For 3D object detection, standard techniques leverage either only a point cloud [29, 39, 61], or a point cloud together with the corresponding multi-view RGB images [18, 28, 30]. Unlike in 2D, where offline proposal generation methods [48, 64] are widely studied and generalize well to unseen datasets, 3D proposal generators for point clouds are often trained in a supervised manner [20, 29, 39] and overfit to the training set. We propose an unsupervised 3D proposal generation algorithm, GSS, which we further improve using weak supervision.
Weakly-supervised learning. Weak labels in the form of image-level class tags are widely studied in 2D tasks such as image localization [58, 63], semantic segmentation [27, 55], and object detection [3, 35, 45]. Prior work mostly formulates weakly-supervised learning as a multiple instance learning problem, where the target tasks are learned implicitly in a multi-label classification framework. Pipelined [40, 54] or end-to-end self-training modules [35, 45] have also been demonstrated to be beneficial.
Weakly-supervised learning in 3D. Compared to its 2D counterpart, weakly-supervised learning for 3D tasks is relatively unexplored. We summarize all relevant prior work in Tab. 1. For semantic segmentation, Wang et al. [51] leverage 2D segmentation as weak labels, Xu et al. [59] use a sparsely labeled point cloud, and Wei et al. [53] utilize both area- and scene-level class tags during training. For object detection, recent work uses small sets of labeled 3D data [24, 46, 62], 2D instance segmentation [56], and click annotations [24] as supervision. However, obtaining these labels is still time-consuming. A closely related concurrent work [33] focuses on autonomous driving, building upon a small number of relatively easy object classes (e.g., car, pedestrian) while still using image data. In contrast, we focus on complex indoor scenes, exclusively relying on the 3D point cloud, i.e., no images are required.

Figure 2: Approach Overview. A backbone network extracts geometric features which are used by the segmentation head to compute a point-level segmentation map. The segmentation map is passed into the 3D proposal generation module GSS, and the resulting proposals along with the original features are used to detect 3D object instances. Through a series of self- and cross-task consistency losses along with multiple-instance learning objectives, WyPR is trained end-to-end using only scene-level tags as supervision.
Multi-task learning. Multi-task learning [4] has been widely studied for various vision tasks [12, 19, 25, 34]. It is of particular importance for weakly-supervised [23, 36] or self-supervised 2D object detection [11, 34] as multitasking provides mutual regularization and hence better representation learning. For detection and segmentation, prior work has studied joint training with 2D data [23] or supervised 3D data [49]. In this paper, we develop a novel framework for learning both tasks under weak supervision.
# 3. WyPR
Our goal is to use weak supervision in the form of scene-level tags to learn a joint 3D segmentation and detection model, which we refer to as WyPR. Specifically, we assume availability of data $\mathcal{D} = \{(\mathcal{P},\mathbf{y})\}$ of point clouds $\mathcal{P}$ and corresponding scene-level tags $\mathbf{y}\in \{0,1\}^C$, which indicate the absence or presence of the $C$ object classes. $\mathcal{P}$ is a set of six-dimensional points $\mathbf{p}\in \mathcal{P}$, each represented by its 3D location and RGB color. Note, $\mathbf{y}$ only indicates the existence of objects in the scene and does not contain any information about per-point semantic labels or object locations.
Approach overview. Fig. 2 provides an overview of our model which consists of three parametric modules: a backbone network, followed by a segmentation and a detection head. We first extract geometric features from the input point cloud using the backbone network. Specifically, we use the variant of PointNet++ [32] following VoteNet [29], which is an encoder-decoder network with skip connections. The features are then fed into the segmentation and detection modules. The segmentation module assigns each point from the input point cloud $\mathcal{P}$ to one of $C$ classes. We use this segmentation output to generate 3D region proposals $\mathcal{R}$ that are likely to contain objects in the scene. Finally,
the detection module classifies each proposal into either one of $C$ classes or background (not an object) class, using the backbone features corresponding to that proposal.
Notation. We denote the output of the segmentation module as $\mathbf{S}_{\mathrm{seg}} \in \mathbb{R}^{|\mathcal{P}| \times C}$ , where the rows represent the score logits over the $C$ classes for all points $\mathcal{P}$ . The detection module produces a score matrix $\mathbf{S}_{\mathrm{det}} \in \mathbb{R}^{|\mathcal{R}| \times (C + 1)}$ over the $C$ classes and background for all 3D proposals $\mathcal{R}$ . For readability, we also use $\mathbf{p}, \mathbf{r}$ as indices into $\mathbf{S}_{\mathrm{seg}}, \mathbf{S}_{\mathrm{det}}$ in the following sections.
# 3.1. Weakly-supervised 3D semantic segmentation
The segmentation module consists of two identical heads that independently process the backbone features using a series of unit PointNet [31] and nearest-neighbor upsampling layers (Fig. 2, green region). The outputs of these heads are two score matrices $\mathbf{U}_{\mathrm{seg}}$, $\mathbf{S}_{\mathrm{seg}} \in \mathbb{R}^{|\mathcal{P}| \times C}$ respectively, containing logits over the $C$ object classes for all points $\mathbf{p} \in \mathcal{P}$. The parameters of the backbone and the segmentation module are optimized to minimize the combined loss
$$
\mathcal{L}_{\mathrm{seg}} = \mathcal{L}_{\mathrm{seg}}^{\mathrm{MIL}} + \mathcal{L}_{\mathrm{seg}}^{\mathrm{SELF}} + \mathcal{L}_{\mathrm{seg}}^{\mathrm{CST}} + \mathcal{L}_{\mathrm{d}\rightarrow\mathrm{s}} + \mathcal{L}_{\mathrm{smooth}}, \tag{1}
$$
where $\mathcal{L}_{\mathrm{seg}}^{\mathrm{MIL}}$ denotes a multiple-instance learning (MIL) loss, $\mathcal{L}_{\mathrm{seg}}^{\mathrm{SELF}}$ denotes a self-training loss, $\mathcal{L}_{\mathrm{seg}}^{\mathrm{CST}}$ and $\mathcal{L}_{\mathrm{d}\rightarrow \mathrm{s}}$ represent consistency loss across geometric transformations and tasks respectively, and $\mathcal{L}_{\mathrm{smooth}}$ is a smoothness regularization loss. We describe the individual loss terms next.
MIL loss. The multiple-instance learning loss [54, 55] encourages learning the per-point semantic segmentation logits without access to point-level supervision. We first convert the per-point logits $\mathbf{U}_{\mathrm{seg}}$ into a scene-level prediction $\phi$ via average pooling and a sigmoid normalization
$$
\phi[c] = \operatorname{sigmoid}\left(\frac{1}{|\mathcal{P}|} \sum_{\mathbf{p} \in \mathcal{P}} \mathbf{U}_{\mathrm{seg}}[\mathbf{p}, c]\right). \tag{2}
$$
The scene-level prediction $\phi$ is then supervised with the scene-level tags $\mathbf{y}$ via the binary cross-entropy loss
$$
\mathcal{L}_{\mathrm{seg}}^{\mathrm{MIL}} = -\sum_{c=1}^{C} \Big( \mathbf{y}[c] \log \phi[c] + (1 - \mathbf{y}[c]) \log(1 - \phi[c]) \Big). \tag{3}
$$
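Concretely, Eqs. (2)–(3) amount to average pooling over points followed by binary cross-entropy against the scene tags. A minimal NumPy sketch for exposition only (function and variable names are ours, not from a released implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mil_seg_loss(U_seg, y, eps=1e-7):
    """Segmentation MIL loss of Eqs. (2)-(3).

    U_seg: (num_points, C) per-point segmentation logits.
    y:     (C,) binary scene-level tags.
    """
    # Eq. (2): average-pool point logits into a scene-level prediction.
    phi = sigmoid(U_seg.mean(axis=0))        # (C,)
    phi = np.clip(phi, eps, 1.0 - eps)       # numerical guard for the logs
    # Eq. (3): binary cross-entropy against the scene tags.
    return -np.sum(y * np.log(phi) + (1 - y) * np.log(1 - phi))
```

In a real framework the pooling and sigmoid would be differentiable ops so gradients reach every point logit; the clip only guards the logarithms numerically.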
Algorithm 1 Segmentation pseudo label generation
Input: class label $\mathbf{y}$, segmentation logits $\mathbf{U}_{\mathrm{seg}}$, threshold $p_1$
Output: pseudo label $\hat{\mathbf{Y}}_{\mathrm{seg}}$
1: $\hat{\mathbf{Y}}_{\mathrm{seg}} = \mathbf{0}$ ▷ initialize to zero matrix
2: for each point $\mathbf{p}\in \mathcal{P}$ do
3: $c = \operatorname{argmax}(\mathbf{y}\odot \mathbf{U}_{\mathrm{seg}}[\mathbf{p},:])$ ▷ element-wise product
4: $\hat{\mathbf{Y}}_{\mathrm{seg}}[\mathbf{p},c] = 1$
5: for ground-truth class $c$ where $\mathbf{y}[c] = 1$ do
6: $\mathcal{P}'[c]\gets$ points in the lowest $p_1$-th percentile of $\mathbf{U}_{\mathrm{seg}}[:,c]$
7: $\hat{\mathbf{Y}}_{\mathrm{seg}}[\mathbf{p},c] = 0\ \forall\ \mathbf{p}\in \mathcal{P}'[c]$ ▷ ignore points with low score
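Alg. 1 can be sketched as follows, assuming dense NumPy arrays. One deviation we make explicit: we mask absent classes with $-\infty$ rather than taking the literal element-wise product, which coincides with Alg. 1 whenever present-class logits are non-negative and avoids absent classes winning the argmax over negative logits:

```python
import numpy as np

def seg_pseudo_labels(U_seg, y, p1=20):
    """Segmentation pseudo-label generation (a sketch of Alg. 1).

    U_seg: (num_points, C) per-point logits.
    y:     (C,) binary scene tags.
    p1:    percentile below which a point's label is discarded.
    Returns a (num_points, C) 0/1 pseudo-label matrix.
    """
    num_points, C = U_seg.shape
    Y = np.zeros((num_points, C))
    # assign each point to its highest-scoring *present* class
    masked = np.where(y[None, :] > 0, U_seg, -np.inf)
    best = masked.argmax(axis=1)
    Y[np.arange(num_points), best] = 1
    # drop the lowest-scoring p1% of points per ground-truth class
    for c in np.flatnonzero(y):
        pts = np.flatnonzero(Y[:, c])
        if pts.size == 0:
            continue
        cutoff = np.percentile(U_seg[pts, c], p1)
        Y[pts[U_seg[pts, c] < cutoff], c] = 0
    return Y
```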
Self-training loss. Inspired by the success of self-training in weakly-supervised detection [35, 45, 50], we further incorporate a self-training loss. The previously computed segmentation logits $\mathbf{U}_{\mathrm{seg}}$ are used to supervise the final segmentation logits $\mathbf{S}_{\mathrm{seg}}$ via a cross-entropy loss
$$
\mathcal{L}_{\mathrm{seg}}^{\mathrm{SELF}} = -\frac{1}{|\mathcal{P}|} \sum_{\mathbf{p} \in \mathcal{P}} \sum_{c=1}^{C} \hat{\mathbf{Y}}_{\mathrm{seg}}[\mathbf{p}, c] \log \psi[\mathbf{p}, c], \tag{4}
$$
where $\psi[\mathbf{p}, c] = \mathrm{softmax}(\mathbf{S}_{\mathrm{seg}}[\mathbf{p}, c])$ denotes the probability of point $\mathbf{p}$ belonging to class $c$, and $\hat{\mathbf{Y}}_{\mathrm{seg}}[\mathbf{p}, c] \in \{0, 1\}$ is the point-level pseudo class label inferred from score matrix $\mathbf{U}_{\mathrm{seg}}$. We detail the process of computing the pseudo label in Alg. 1. Intuitively, the algorithm ignores noisy predictions in $\mathbf{U}_{\mathrm{seg}}$, leading to robust self-supervision for $\mathbf{S}_{\mathrm{seg}}$.

Cross-transformation consistency loss. In addition, we use $\mathcal{L}_{\mathrm{seg}}^{\mathrm{CST}}$ to encourage that the segmentation predictions are consistent across data augmentations $\mathcal{T}$. We obtain an augmented point cloud $\tilde{\mathcal{P}} = \mathcal{T}(\mathcal{P})$ by changing the original scene $\mathcal{P}$ via standard augmentations (see § 4 and Appendix C.1 for details). We predict the semantic segmentation $\tilde{\mathbf{S}}_{\mathrm{seg}}$ on this transformed point cloud. The consistency loss is then formulated as
$$
\mathcal{L}_{\mathrm{seg}}^{\mathrm{CST}} = \frac{1}{|\mathcal{P} \cap \tilde{\mathcal{P}}|} \sum_{\mathbf{p} \in \mathcal{P} \cap \tilde{\mathcal{P}}} D_{\mathrm{KL}}\big(\psi[\mathbf{p}, \cdot] \,||\, \tilde{\psi}[\mathbf{p}, \cdot]\big), \tag{5}
$$
where $\psi[\mathbf{p}, c] = \text{softmax}(\mathbf{S}_{\text{seg}}[\mathbf{p}, c])$ and $\tilde{\psi}[\mathbf{p}, c] = \text{softmax}(\tilde{\mathbf{S}}_{\text{seg}}[\mathbf{p}, c])$ are the probabilities of the point $\mathbf{p}$ belonging to class $c$ , and $D_{\mathrm{KL}}$ is the KL divergence over $C$ classes for points that are common across the transformation. This loss encourages the probability distributions for semantic segmentation of corresponding points within the point cloud $\mathcal{P}$ and $\tilde{\mathcal{P}}$ to match.
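Eq. (5) reduces to a per-point KL divergence between two softmax distributions. A sketch, assuming the correspondence between $\mathcal{P}$ and $\tilde{\mathcal{P}}$ has already been resolved so the two logit matrices are row-aligned over the common points:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def seg_consistency_loss(S_seg, S_seg_aug, eps=1e-8):
    """Cross-transformation consistency of Eq. (5).

    S_seg, S_seg_aug: (num_common_points, C) logits for the points
    shared between the original and augmented clouds, row-aligned.
    """
    p = softmax(S_seg)       # psi
    q = softmax(S_seg_aug)   # psi tilde
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
    return kl.mean()
```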
Algorithm 2 Detection pseudo label generation
Input: class label $\mathbf{y}$, detection logits $\mathbf{U}_{\mathrm{det}}$, proposals $\mathcal{R}$, thresholds $\tau, p_{2}$
Output: pseudo label $\hat{\mathbf{Y}}_{\mathrm{det}}$
1: $\hat{\mathbf{Y}}_{\mathrm{det}} = \mathbf{0}$ ▷ initialize to zero matrix
2: for ground-truth class $c$ where $\mathbf{y}[c] = 1$ do
3: $\mathcal{R}^{\prime}[c]\gets$ top $p_2$-th percentile of $\mathbf{U}_{\mathrm{det}}[:,c]$ ▷ $\mathcal{R}^{\prime}[c]$ is sorted in descending order
4: $\mathcal{R}^{*}[c]\leftarrow \mathbf{r}_{1}^{\prime}$ ▷ save the top-scoring RoI $\mathbf{r}_1^\prime \in \mathcal{R}^\prime[c]$
5: for $i\in \{2,\dots,|\mathcal{R}'[c]|\}$ do ▷ start from the 2nd highest
6: $\mathcal{R}^{*}[c]\gets \mathbf{r}_{i}^{\prime}$ if IoU$(\mathbf{r}_i',\hat{\mathbf{r}}) < \tau\ \forall\ \hat{\mathbf{r}}\in \mathcal{R}^{*}[c]$
7: $\hat{\mathbf{Y}}_{\mathrm{det}}[\mathbf{r},c] = 1\ \forall\ \mathbf{r}\in \mathcal{R}^{*}[c]$

Cross-task consistency loss. We further employ a cross-task regularization term $\mathcal{L}_{\mathrm{d}\to \mathrm{s}}$, which uses the detection results to refine the segmentation prediction. Intuitively, all points within a confident bounding box prediction should have the same semantic label. Assume we have access to a set of confident bounding boxes $\mathbf{r} \in \mathcal{R}^*$ and their corresponding predicted score matrix $\mathbf{S}_{\mathrm{det}} \in \mathbb{R}^{|\mathcal{R}^*| \times (C + 1)}$. Using this information, we encourage consistency via a cross-entropy loss on the point-level predictions, with the box-level prediction as a soft target
$$
\mathcal{L}_{\mathrm{d}\rightarrow\mathrm{s}} = -\frac{1}{|\mathcal{R}^{*}|} \sum_{\mathbf{r} \in \mathcal{R}^{*}} \frac{1}{|\mathcal{P}^{\mathbf{r}}|} \sum_{\mathbf{p} \in \mathcal{P}^{\mathbf{r}}} \sum_{c=1}^{C} \boldsymbol{\xi}[\mathbf{r}, c] \log \psi[\mathbf{p}, c], \tag{6}
$$
where $\psi[\mathbf{p}, c]$ is the point probability from Eq. (4), $\xi[\mathbf{r}, c] = \text{softmax}(\mathbf{S}_{\text{det}}[\mathbf{r}, c])$ denotes the probability of proposal $\mathbf{r}$ belonging to object class $c$ , and $\mathcal{P}^{\mathbf{r}}$ denotes the set of points within proposal $\mathbf{r}$ . In practice, the confident bounding boxes $\mathcal{R}^*$ are obtained from Alg. 2, discussed later in §3.2.
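A sketch of Eq. (6); dropping the background column of $\boldsymbol{\xi}$ is our reading, consistent with the sum running over $c = 1, \dots, C$ while $\mathbf{S}_{\mathrm{det}}$ has $C+1$ columns:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def det_to_seg_loss(S_det_conf, point_probs, points_in_box, eps=1e-8):
    """Cross-task consistency of Eq. (6).

    S_det_conf:    (num_conf_boxes, C+1) logits of the confident boxes R*.
    point_probs:   (num_points, C) per-point class probabilities psi.
    points_in_box: list of index arrays, one per confident box.
    """
    xi = softmax(S_det_conf)[:, :-1]  # drop the background column; (R*, C)
    total = 0.0
    for r, pts in enumerate(points_in_box):
        # soft cross-entropy: the box distribution supervises its points
        ce = -(xi[r][None, :] * np.log(point_probs[pts] + eps)).sum(axis=1)
        total += ce.mean()
    return total / len(points_in_box)
```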
Smoothness regularization. Finally, we compute $\mathcal{L}_{\mathrm{smooth}}$ to encourage local smoothness. We first detect a set of planes $\mathcal{G}$ from the input point cloud $\mathcal{P}$ using an unsupervised off-the-shelf shape detection algorithm [22], detailed in Appendix B. We then compute
$$
\mathcal{L}_{\mathrm{smooth}} = -\sum_{i=1}^{|\mathcal{G}|} \frac{1}{|\mathcal{G}[i]|} \sum_{\mathbf{p} \in \mathcal{G}[i]} \sum_{c=1}^{C} \bar{\psi}[c] \log \psi[\mathbf{p}, c], \tag{7}
$$
where $\bar{\psi}[c] = \frac{\sum_{\mathbf{p} \in \mathcal{G}[i]} \psi[\mathbf{p}, c]}{|\mathcal{G}[i]|}$ is the mean probability of all the points which lie inside plane $\mathcal{G}[i]$ for class $c$ .
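Eq. (7) can be sketched as a soft cross-entropy of each point against its plane's mean distribution; `planes` below stands in for the detected set $\mathcal{G}$ as lists of point indices:

```python
import numpy as np

def smoothness_loss(point_probs, planes, eps=1e-8):
    """Smoothness regularizer of Eq. (7).

    point_probs: (num_points, C) per-point probabilities psi.
    planes:      list of index arrays; each array holds the points of
                 one detected plane G[i].
    """
    loss = 0.0
    for pts in planes:
        probs = point_probs[pts]        # (|G[i]|, C)
        mean = probs.mean(axis=0)       # psi-bar for this plane
        # cross-entropy of each point against the plane's mean distribution
        loss += -(mean[None, :] * np.log(probs + eps)).sum(axis=1).mean()
    return loss
```

Points agreeing with their plane's mean incur a small loss; points deviating from it are pulled toward the plane-level consensus.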
# 3.2. Weakly-supervised 3D object detection
Figure 3: Geometric Selective Search (GSS). Our algorithm takes as input the point cloud and detected planes (left column). It then hierarchically groups the neighboring planes into sub-regions and generates 3D proposals for the combined regions (middle columns). We run the algorithm multiple times with different grouping criteria to encourage high recall of final output proposals (right column). (Panels: point cloud; detected shapes; HAC iterations 20, 40, 60; GSS proposals.)

Our object detection module assumes access to a set of 3D region proposals $\mathcal{R}$ (discussed in § 3.3) and uses the backbone features to classify the proposals into one of the $C$ object classes or background (Fig. 2, blue region). Each region of interest (RoI) $\mathbf{r} \in \mathbb{R}^6$ is represented by a six-dimensional vector denoting its center location and its width, height and length. We extract RoI features by averaging the backbone features of all the points within each proposal. Inspired by prior 2D literature [3], we use three separate linear layers to extract classification logits $\mathbf{S}_{\mathrm{cls}} \in \mathbb{R}^{|\mathcal{R}| \times (C + 1)}$, objectness logits $\mathbf{S}_{\mathrm{obj}} \in \mathbb{R}^{|\mathcal{R}| \times (C + 1)}$, and final detection logits $\mathbf{S}_{\mathrm{det}} \in \mathbb{R}^{|\mathcal{R}| \times (C + 1)}$ from the RoI features. As in [3], we normalize $\mathbf{S}_{\mathrm{cls}}$ using a softmax function over rows to obtain the probability over object classes for each proposal. Similarly, we normalize $\mathbf{S}_{\mathrm{obj}}$ over columns to obtain a probability over proposals for each class. Intuitively, $\mathbf{S}_{\mathrm{cls}}[\mathbf{r}, c]$ represents the probability of region $\mathbf{r}$ being classified as class $c$, and $\mathbf{S}_{\mathrm{obj}}[\mathbf{r}, c]$ is the probability of detecting region $\mathbf{r}$ for class $c$. We aggregate the evidence from both matrices via element-wise multiplication to obtain the score matrix $\mathbf{U}_{\mathrm{det}} = \mathbf{S}_{\mathrm{cls}}\odot \mathbf{S}_{\mathrm{obj}}$. Similar to the self-training discussed earlier for segmentation, we infer pseudo-labels from $\mathbf{U}_{\mathrm{det}}$ to supervise the final detection logits $\mathbf{S}_{\mathrm{det}}$. We learn the backbone and the detection module using the loss
$$
\mathcal{L}_{\mathrm{det}} = \mathcal{L}_{\mathrm{det}}^{\mathrm{MIL}} + \mathcal{L}_{\mathrm{det}}^{\mathrm{SELF}} + \mathcal{L}_{\mathrm{det}}^{\mathrm{CST}}, \tag{8}
$$
where $\mathcal{L}_{\mathrm{det}}^{\mathrm{MIL}}$ is a MIL objective for detection, $\mathcal{L}_{\mathrm{det}}^{\mathrm{SELF}}$ is a self-training loss, and $\mathcal{L}_{\mathrm{det}}^{\mathrm{CST}}$ is the cross-transformation consistency loss. All the terms are described next.
MIL loss. Similar to the segmentation head, the multiple instance learning (MIL) loss for detection is
$$
\mathcal{L}_{\mathrm{det}}^{\mathrm{MIL}} = -\sum_{c=1}^{C} \Big( \mathbf{y}[c] \log \boldsymbol{\mu}[c] + (1 - \mathbf{y}[c]) \log(1 - \boldsymbol{\mu}[c]) \Big), \tag{9}
$$
where $\boldsymbol{\mu}[c] = \sum_{\mathbf{r} \in \mathcal{R}} \mathbf{U}_{\mathrm{det}}[\mathbf{r}, c]$ sums the scores of all proposals (the rows of $\mathbf{U}_{\mathrm{det}}$) for class $c$. This sum-pooling operation aggregates RoI scores into a scene-level score vector $\boldsymbol{\mu}$, which is used for multi-label scene classification.
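The score fusion and Eq. (9) together look as follows in NumPy. Because $\mathbf{S}_{\mathrm{obj}}$ is softmax-normalized over proposals, each entry of $\boldsymbol{\mu}$ lies in $[0,1]$, so binary cross-entropy is well defined; excluding the background column from the MIL sum is our reading, since $\mathbf{y}$ has only $C$ entries:

```python
import numpy as np

def softmax(x, axis):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def det_mil_loss(S_cls, S_obj, y, eps=1e-7):
    """Detection MIL loss (Eq. (9)) with the score fusion of Sec. 3.2.

    S_cls, S_obj: (num_proposals, C+1) logits.
    y:            (C,) binary scene tags.
    """
    # rows: class distribution per proposal; columns: proposal distribution per class
    U_det = softmax(S_cls, axis=1) * softmax(S_obj, axis=0)
    mu = U_det.sum(axis=0)[: len(y)]   # sum over proposals; keep object classes
    mu = np.clip(mu, eps, 1.0 - eps)
    loss = -np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))
    return loss, U_det
```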
Self-training loss. As done before for segmentation, we incorporate a self-training loss for detection as well. The final detection logits $\mathbf{S}_{\mathrm{det}}$ are supervised by $\mathbf{U}_{\mathrm{det}}$ via
$$
\mathcal{L}_{\mathrm{det}}^{\mathrm{SELF}} = -\frac{1}{|\mathcal{R}|} \sum_{\mathbf{r} \in \mathcal{R}} \sum_{c=1}^{C+1} \hat{\mathbf{Y}}_{\mathrm{det}}[\mathbf{r}, c] \log \boldsymbol{\xi}[\mathbf{r}, c], \tag{10}
$$
where $\xi[\mathbf{r}, c] = \mathrm{softmax}(\mathbf{S}_{\mathrm{det}}[\mathbf{r}, c])$ denotes the probability of proposal $\mathbf{r}$ belonging to object class $c$ , and $\hat{\mathbf{Y}}_{\mathrm{det}}[\mathbf{r}, c] \in \{0, 1\}$ is the RoI pseudo class label inferred from score matrix $\mathbf{U}_{\mathrm{det}}$ . The pseudo label $\hat{\mathbf{Y}}_{\mathrm{det}}$ is computed using Alg. 2. Conceptually, this algorithm selects a set of confident yet diverse predictions as the pseudo labels for self-training.
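Alg. 2 is essentially a per-class greedy, NMS-style selection over the top-scoring proposals. A sketch with axis-aligned 3D IoU; the `(cx, cy, cz, w, h, l)` box parametrization follows § 3.2, and interpreting the "top $p_2$-th percentile" as the top $p_2\%$ of proposals is our assumption:

```python
import numpy as np

def box_iou_3d(a, b):
    """Axis-aligned IoU of two boxes given as (cx, cy, cz, w, h, l)."""
    lo = np.maximum(a[:3] - a[3:] / 2, b[:3] - b[3:] / 2)
    hi = np.minimum(a[:3] + a[3:] / 2, b[:3] + b[3:] / 2)
    inter = np.prod(np.clip(hi - lo, 0, None))
    return inter / (np.prod(a[3:]) + np.prod(b[3:]) - inter)

def det_pseudo_labels(U_det, boxes, y, tau=0.25, p2=10):
    """Detection pseudo-label generation (a sketch of Alg. 2).

    U_det: (num_proposals, C+1) fused scores.
    boxes: (num_proposals, 6) proposals.
    y:     (C,) binary scene tags.
    """
    R = U_det.shape[0]
    Y = np.zeros_like(U_det)
    for c in np.flatnonzero(y):
        # top p2% of proposals for class c, highest score first
        order = np.argsort(-U_det[:, c])[: max(1, int(np.ceil(R * p2 / 100)))]
        keep = [order[0]]
        for r in order[1:]:  # greedy, NMS-style diverse selection
            if all(box_iou_3d(boxes[r], boxes[k]) < tau for k in keep):
                keep.append(r)
        Y[keep, c] = 1
    return Y
```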
Cross-transformation consistency loss. Following the consistency loss for semantic segmentation (Eq. (5)), we encourage detection predictions to be consistent under transformation $\mathcal{T}$ via
$$
\mathcal{L}_{\mathrm{det}}^{\mathrm{CST}} = \frac{1}{|\mathcal{R}|} \sum_{\mathbf{r} \in \mathcal{R}} D_{\mathrm{KL}}\big(\boldsymbol{\xi}[\mathbf{r}, \cdot] \,||\, \tilde{\boldsymbol{\xi}}[\mathcal{T}(\mathbf{r}), \cdot]\big), \tag{11}
$$
where $\xi[\mathbf{r}, c]$ refers to the RoI probability introduced in Eq. (10), $\tilde{\xi}[\mathcal{T}(\mathbf{r}), c]$ denotes the RoI probability obtained from the transformed input $\tilde{\mathcal{P}} = \mathcal{T}(\mathcal{P})$ and proposal $\mathcal{T}(\mathbf{r})$ via the same backbone and detection module.
# 3.3. Geometric Selective Search (GSS)
The detection module uses a proposal set $\mathcal{R}$ as input. In weakly-supervised learning, proposals are necessary because it is not possible to mimic supervised methods that directly predict 3D bounding box parameters (e.g., size and location). The key observation which inspires our novel 3D proposal generation algorithm is that most indoor objects are rigid and mainly consist of basic geometric structures (e.g., planes, cylinders, spheres). We thus devise a bottom-up solution termed Geometric Selective Search (GSS), first detecting basic geometric shapes which are then grouped hierarchically to form 3D proposals (Fig. 2 brown region).
Given an input point cloud with unoriented normals, we adopt a region-growing-based method [22, 26] for detecting primitive shapes (e.g., planes), as shown in Fig. 3 (left). We choose region-growing over the popular RANSAC-based methods [38] because 1) it is deterministic, and 2) it performs better in the presence of large scenes with fine-grained details. We then apply hierarchical agglomerative clustering (HAC) to iteratively group the detected shapes into sub-regions. In each HAC iteration, we compute the similarity score $s$ between all spatially overlapping sub-regions and group the two most similar regions. We iterate until no neighbors can be found or only one region is left. Every time we generate a new region, we also compute its axis-aligned bounding box and add it to the proposal pool. We illustrate the process of growing the proposal pool during HAC in Fig. 3 (middle columns).
In order to pick which two regions $\mathbf{n}_i, \mathbf{n}_j$ to group, HAC uses a similarity score
$$
s(\mathbf{n}_i, \mathbf{n}_j) = w_1 s_{\mathrm{size}} + w_2 s_{\mathrm{volume}} + w_3 s_{\mathrm{fill}} + w_4 s_{\mathrm{seg}}, \tag{12}
$$
where $w_{i}\in \{0,1\}\ \forall i\in \{1,\dots,4\}$ are binary indicators. $s_{\mathrm{size}}$ and $s_{\mathrm{volume}}\in [0,1]$ measure size and volume compatibility and encourage small regions to merge early; $s_{\mathrm{fill}}\in [0,1]$ measures how well two regions are aligned. Besides similarities of low-level cues, we also measure high-level semantic similarity by incorporating the segmentation similarity $s_{\mathrm{seg}}\in [0,1]$. This score is the histogram intersection of the normalized $C$-dimensional class histograms of the two regions' points. The class labels of these points are computed from $\mathbf{S}_{\mathrm{seg}}$ using the inference procedure described in § 4. Please see Appendix A for the exact formulation of the above metrics. During training, as the segmentation module improves, $s_{\mathrm{seg}}$ increasingly prefers grouping regions which correspond to the same object. A similar idea of computing proposals from segmentations has also been widely adopted in the 2D case [1, 48]. In practice, we find that multiple runs of HAC with different $w_i$ values result in a more diverse set of proposals, as each run uses a different weighted similarity measure. We provide the values of $w_i$ for different runs in Appendix A.
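The HAC loop of GSS can be sketched as follows; `similarity` and `neighbors` are stand-ins for Eq. (12) and the spatial-overlap test, whose exact forms are in Appendix A:

```python
import numpy as np

def gss_hac(regions, similarity, neighbors):
    """One HAC run of Geometric Selective Search (Sec. 3.3), sketched.

    regions:    list of (N_i, 3) point arrays (initial detected shapes).
    similarity: f(points_a, points_b) -> score (stand-in for Eq. (12)).
    neighbors:  f(points_a, points_b) -> True if spatially overlapping.
    Returns axis-aligned proposals (x_min..z_max) for every region formed.
    """
    regions = list(regions)
    proposals = [np.concatenate([r.min(0), r.max(0)]) for r in regions]
    while len(regions) > 1:
        best, pair = -np.inf, None
        for i in range(len(regions)):           # most similar adjacent pair
            for j in range(i + 1, len(regions)):
                if neighbors(regions[i], regions[j]):
                    s = similarity(regions[i], regions[j])
                    if s > best:
                        best, pair = s, (i, j)
        if pair is None:                        # no adjacent regions left
            break
        merged = np.concatenate([regions[pair[0]], regions[pair[1]]])
        regions = [r for k, r in enumerate(regions) if k not in pair]
        regions.append(merged)
        # every newly formed region contributes one proposal
        proposals.append(np.concatenate([merged.min(0), merged.max(0)]))
    return np.array(proposals)
```

Each merge appends one box, so the proposal pool records the whole hierarchy rather than only the final partition, mirroring Fig. 3.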
GSS can be made completely unsupervised by removing the segmentation term $s_{\mathrm{seg}}$ from Eq. (12). This variant is also valuable as the proposals can be pre-computed offline and are of decent quality (verified in § 4). These proposals are independent of any specific supervision and can benefit various downstream unsupervised or weakly-supervised 3D recognition tasks, akin to Selective Search [48] or Edge Boxes [64] in 2D. This is distinct from existing 3D proposal techniques that either use 2D image cues [33] or full bounding box supervision [28, 29].
# 4. Experiments
We empirically evaluate WyPR on two standard 3D benchmarks. We first provide the key implementation details (more details in Appendix C) and describe the baseline methods we compare to (§ 4.1). We then present the quantitative results (§ 4.2 and § 4.3), ablate our design choices, and present qualitative results (§ 4.4).
Input. Our network takes as input a fixed-size point cloud, where 40K points are randomly sub-sampled from the original scan. In addition to using color (RGB) and coordinates (XYZ) as input features, following [29], we include the surface normal and a height feature for each point.
Augmentation. We augment the input point cloud at two places in our framework: (1) data augmentation at the input, and (2) to compute the consistency loss in Eq. (5) and Eq. (11). In practice, we find it beneficial to apply different geometric transformations for the above two purposes. To augment the input, we follow [29] and use random sub-sampling of 40K points, random flipping in both horizontal and vertical directions, and random rotation of $[-5,5]$ degrees around the upright-axis. To compute the consistency loss, we use random flipping, point jittering, random rotation with an angle uniform in $[0,30]$ degrees around the upright-axis, random scaling by a factor from $[0.8,1.2]$ , and point dropout ( $p = 0.1$ ). Finally, we also find that jittering the point cloud is crucial to obtain good proposals for noisy point clouds (analyzed in § 4.4).
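The consistency-branch augmentations can be sketched as below; only the transformation types and ranges are stated above, so the jitter magnitude (and returning a keep-mask for point correspondence) are our assumptions:

```python
import numpy as np

def augment_for_consistency(points, rng):
    """Consistency-branch augmentation (Sec. 4), sketched: flip, jitter,
    rotate [0, 30] deg about the up-axis, scale [0.8, 1.2], and point
    dropout (p = 0.1).

    points: (N, 3) XYZ coordinates, z up.
    """
    xyz = points.copy()
    if rng.random() < 0.5:                            # random flip
        xyz[:, 0] = -xyz[:, 0]
    xyz += rng.normal(scale=0.01, size=xyz.shape)     # jitter (magnitude assumed)
    theta = np.deg2rad(rng.uniform(0, 30))            # rotation about up-axis
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    xyz = xyz @ rot.T
    xyz *= rng.uniform(0.8, 1.2)                      # random scaling
    keep = rng.random(len(xyz)) > 0.1                 # point dropout
    return xyz[keep], keep                            # keep-mask preserves correspondence
```

The returned mask identifies $\mathcal{P} \cap \tilde{\mathcal{P}}$, the points over which Eq. (5) is computed.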
Network architecture. (1) Backbone. We use Point-Net++ [32] as the backbone model to compute the point cloud features. The model has 4 set abstraction (SA) layers and 2 feature propagation (FP) layers. The four SA layers sub-sample the point cloud to 2048, 1024, 512 and
| Method | Split | mIoU |
| --- | --- | --- |
| *Weakly-supervised methods* | | |
| PCAM [53] | train | 22.1 |
| MPRM [53] | train | 24.4 |
| WyPR | train | 30.7 |
| MIL-seg | val | 20.7 |
| WyPR | val | 29.6 |
| WyPR+prior | val | 31.1 |
| WyPR | test | 24.0 |
| *Supervised methods* | | |
| VoteNet [29] | test | 55.7 |
| SparseConvNet [9] | test | 73.6 |
Table 2: 3D semantic segmentation on ScanNet. WyPR outperforms standard baselines and existing state-of-the-art [53]. We also report fully supervised methods for reference. Note that models on train and val sets leverage axis alignment information from [29], which is not present and hence not used for experiments on the test set. See Appendix E for per-class performance.
256 points using receptive radii of 0.2, 0.4, 0.8, and 1.2 meters, respectively. The two FP layers up-sample the last SA layer's output from 256 back to 1024 points. The final output has $(256 + 3)$ dimensions (feature $+$ 3D coordinates). (2) Segmentation module. This module is implemented as two FP layers, which upsample the backbone features (1024 points) to the input size (40K points), followed by a two-layer MLP (implemented as two $1\times 1$ convolutional layers) which converts the features into per-point classification logits. (3) Detection module. This module has 3 fully-connected layers, computing the classification $\mathbf{S}_{\mathrm{cls}}$, objectness $\mathbf{S}_{\mathrm{obj}}$, and final classification logits $\mathbf{S}_{\mathrm{det}}$, respectively, as described in § 3.2.
Training. We train the entire network end-to-end from scratch with an Adam optimizer for 200 epochs. We use 8 GPUs with a batch size of 32. The initial learning rate is 0.003 and is decayed by $10\times$ at epochs $\{120, 160, 180\}$.
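The step schedule above can be sketched as a small helper (a sketch of the stated schedule, not the training code itself):

```python
def learning_rate(epoch, base_lr=0.003, decay_epochs=(120, 160, 180)):
    """Step learning-rate schedule described above: the initial rate
    0.003 is divided by 10 at each of epochs 120, 160, and 180."""
    lr = base_lr
    for e in decay_epochs:
        if epoch >= e:
            lr /= 10.0
    return lr
```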
Inference. (1) Segmentation. We generate the segmentation mask from the predicted logits $(\mathbf{S}_{\mathrm{seg}})$ by taking the class with the highest score for each point. We then post-process the output for smoothness using the detected planes (as in Eq. (7)), assigning every point in a plane to its most frequently occurring class. (2) Detection. Following [29], we post-process the final output probability, $\mathrm{softmax}(\mathbf{S}_{\mathrm{det}})$, by thresholding to drop predictions with score $< 0.01$, followed by class-wise non-maximum suppression (NMS) with an IoU threshold of 0.25.
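The detection post-processing can be sketched as greedy class-wise NMS; the axis-aligned box parameterization (x1, y1, z1, x2, y2, z2) used here is an assumption, and the paper's exact box format may differ:

```python
import numpy as np

def classwise_nms_3d(boxes, scores, classes, iou_thr=0.25, score_thr=0.01):
    """Sketch of the post-processing described above: drop predictions
    with score < 0.01, then greedy per-class NMS at IoU 0.25.
    Boxes are assumed axis-aligned: (x1, y1, z1, x2, y2, z2)."""
    def iou(a, b):
        lo = np.maximum(a[:3], b[:3])
        hi = np.minimum(a[3:], b[3:])
        inter = np.prod(np.clip(hi - lo, 0.0, None))
        union = np.prod(a[3:] - a[:3]) + np.prod(b[3:] - b[:3]) - inter
        return inter / (union + 1e-9)

    keep = []
    valid = scores >= score_thr
    for c in np.unique(classes[valid]):
        idx = np.where(valid & (classes == c))[0]
        idx = idx[np.argsort(-scores[idx])]  # highest score first
        while len(idx) > 0:
            best, idx = idx[0], idx[1:]
            keep.append(int(best))
            # suppress remaining boxes overlapping the kept one
            idx = np.array([i for i in idx
                            if iou(boxes[best], boxes[i]) <= iou_thr],
                           dtype=int)
    return sorted(keep)
```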
Dataset. We use the ScanNet [10] and S3DIS [2] datasets to evaluate our method. ScanNet contains 1.2K training and 300 validation examples of hundreds of different rooms, annotated with 20 semantic categories. We extract ground truth bounding boxes from instance segmentation masks following [29]. To demonstrate the generalizability of our method, we further evaluate on S3DIS, which contains 6 floors of 3 different buildings and 13 object classes. We use the fold #1 split following prior work [2, 9], where area 5 is used for testing and the rest for training.

| Methods | #boxes | MABO | AR | mAP |
| --- | --- | --- | --- | --- |
| *Unsupervised methods* | | | | |
| Qin et al. [33] | 1k | 0.092 | 23.6 | - |
| GSS | ≤256 | 0.321 | 73.4 | - |
| GSS | ≤1k | 0.378 | 86.2 | - |
| *Weakly-supervised methods* | | | | |
| MIL-det (unsup. GSS) | ≤1k | 0.378 | 86.2 | 9.6 |
| WyPR | ≤1k | 0.409 | 89.3 | 18.3 |
| WyPR+prior | ≤1k | 0.427 | 90.5 | 19.7 |
| *Supervised methods* | | | | |
| F-PointNet [30] | - | - | - | 10.8 |
| GSPN [60] | - | - | - | 17.7 |
| 3DSIS [18] | - | - | - | 40.2 |
| VoteNet [29] | 256 | 0.436 | 84.7 | 58.6 |
| VoteNet [29] | 1k | 0.450 | 88.1 | 55.3 |
Evaluation. We report mean intersection over union (mIoU) across all classes for semantic segmentation, mean average precision (mAP) across all classes at IoU 0.25 for object detection, and average recall (AR) and mean average best overlap (MABO) across all classes for proposal generation. Please see [10, 29, 48] for more on these metrics.
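For concreteness, the segmentation metric can be sketched as follows (standard per-class IoU averaged over classes; classes absent from both prediction and ground truth are skipped):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection over union (mIoU) across classes, as used for
    semantic segmentation above. `pred` and `gt` are per-point integer
    label arrays of the same length."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:  # skip classes absent from both pred and gt
            ious.append(inter / union)
    return float(np.mean(ious))
```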
# 4.1. Baselines
Besides comparing to the few existing 3D weakly-supervised learning methods, we build the following baselines, using standard weakly-supervised learning techniques:
MIL-seg: Single task segmentation trained with Eq. (3).
MIL-det: Single task object detection, which uses the unsupervised GSS proposals and is trained with Eq. (9).
WyPR: Our full model trained with Eq. (1) and Eq. (8).
WyPR+prior: We compute per-class mean shapes using external synthetic datasets [6, 57], and use those to reject proposals and pseudo labels in the WyPR detection module that do not satisfy the prior. We also use a floor height prior for segmentation. Please see Appendix D for details.
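A hypothetical sketch of the shape-prior rejection in WyPR+prior is given below; the specific rule (keep a proposal only if its dimensions are within a tolerance factor of the per-class mean shape) is an assumption for illustration, and the actual criterion is described in Appendix D:

```python
import numpy as np

def shape_prior_filter(boxes, classes, mean_dims, tol=2.0):
    """Hypothetical rejection rule: keep proposal i only if its box
    dimensions lie within a factor `tol` of the per-class mean shape
    (computed offline from synthetic data). Boxes are assumed
    axis-aligned: (x1, y1, z1, x2, y2, z2)."""
    keep = []
    for i, (box, c) in enumerate(zip(boxes, classes)):
        dims = box[3:] - box[:3]      # (dx, dy, dz) of the proposal
        ref = mean_dims[c]            # mean shape for this class
        if np.all(dims <= tol * ref) and np.all(dims >= ref / tol):
            keep.append(i)
    return keep
```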
# 4.2. Quantitative results on ScanNet
Semantic Segmentation. Apart from the above baselines, we compare WyPR to the recent approaches PCAM [53] and MPRM [53]. PCAM can be interpreted as MIL-seg with a KPConv [47] backbone, and MPRM adds multiple additional self-attention modules to PCAM. Since prior work
Table 3: 3D object detection on ScanNet. Unsupervised GSS outperforms concurrent work [33] by a large margin. In the weakly-supervised setting, WyPR outperforms standard baselines and even some fully supervised approaches [30, 60].
| Methods | mIoU | MABO | AR | mAP |
| --- | --- | --- | --- | --- |
| *Weakly-supervised methods* | | | | |
| MIL-seg | 17.6 | - | - | - |
| MIL-det (unsup. GSS) | - | 0.412 | 84.9 | 15.1 |
| WyPR | 22.3 | 0.441 | 88.3 | 19.3 |
| *Supervised methods* | | | | |
| PointNet++ [31] | 41.1 | - | - | - |
| SparseConvNet [9] | 62.4 | - | - | - |
| Armeni et al. [2] | - | - | - | 49.9 |
Table 4: Generalizing to S3DIS. WyPR seamlessly generalizes to S3DIS, and outperforms standard baselines for both weakly-supervised segmentation and detection.
| Removed | $\mathcal{L}_{\text{seg}}^{\text{SELF}}$ | $\mathcal{L}_{\text{seg}}^{\text{CST}}$ | $\mathcal{L}_{\text{d-s}}$ | $\mathcal{L}_{\text{smooth}}$ | $\mathcal{L}_{\text{det}}^{\text{SELF}}$ | $\mathcal{L}_{\text{det}}^{\text{CST}}$ | mIoU | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Self-training | | ✓ | ✓ | ✓ | | ✓ | 22.1 | 13.2 |
| Cross-transformation cst. | ✓ | | ✓ | ✓ | ✓ | | 28.2 | 16.9 |
| Cross-task consistency | ✓ | ✓ | | ✓ | ✓ | ✓ | 26.7 | 17.4 |
| Local smoothness | ✓ | ✓ | ✓ | | ✓ | ✓ | 27.3 | 17.8 |
| WyPR (full) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 29.6 | 18.3 |
Table 5: Ablation study of losses. We remove one set of losses at a time. All models are trained with $\mathcal{L}_{\mathrm{seg}}^{\mathrm{MIL}}$ and $\mathcal{L}_{\mathrm{det}}^{\mathrm{MIL}}$ .
reports results on the training set only, we compare against their results on the training set in Tab. 2 (top 3 rows). WyPR outperforms both methods (PCAM and MPRM) by a significant margin $(+8.6\% / +6.3\%)$ . Since the main difference between prior work and our method is our joint detection-segmentation framework, these results show the effectiveness of joint-training. When comparing against our baselines on the validation set (Tab. 2 middle) our joint model outperforms the single-task baseline (MIL-seg) by $8.9\%$ . We observe a large performance gap when comparing against state-of-the-art fully supervised models (bottom two rows). One possible solution to minimize the gap is to utilize an external object prior (e.g., shape) from readily-available synthetic data, which improves results by $+1.5\%$ .
Object Detection. To the best of our knowledge, no prior work has explored weakly-supervised 3D object detection using scene-level tags. We compare against our baseline methods in Tab. 3 (middle rows). Our model significantly outperforms the single-task baseline (MIL-det) by $8.7\%$ mAP, and achieves competitive results compared to even some fully supervised methods (F-PointNet [30] and GSPN [60], numbers borrowed from [29]). However, the performance gap is large when compared to the state-of-the-art fully supervised methods. Similar to segmentation, the performance of our model can be further improved by incorporating an external object prior $(+1.4\%)$ .
Proposal Generation. GSS can be made unsupervised by relying only on low-level shape and color cues, i.e., removing $s_{\mathrm{seg}}$ from Eq. (13) (§ 3.3). We compare the unsupervised GSS to a concurrent unsupervised 3D proposal approach by Qin et al. [33]. We adapt their method, originally designed for outdoor environments, to indoor scenes by replacing their front-view projection with a Y-Z plane projection. For a fair comparison we use 1000 proposals and report results in Tab. 3 (top rows). Unsupervised GSS outperforms [33] by a large margin, and obtains recall values comparable to even supervised approaches. The complete GSS, including the weakly-supervised similarity $s_{\mathrm{seg}}$, further improves over the unsupervised baseline (+3.1% AR / +0.031 MABO), and outperforms supervised methods on recall (+1.2%), indicating the importance of joint training.

Figure 4: Effect of jittering and #proposals. Jittering the point cloud before proposal generation results in a $>2\%$ gain in AP. The performance varies gracefully with #proposals, and we find 1000 proposals to strike the right balance between precision and recall.
# 4.3. Generalizing to S3DIS
We train WyPR on S3DIS following the settings of § 4.2. Since there is no prior weakly-supervised work on this dataset, we compare against our baselines from § 4.1. The results are summarized in Tab. 4, where WyPR outperforms both single-task baselines with gains of $4.7\%$ mIoU for segmentation, $3.4\%$ AR for proposal generation, and $4.2\%$ mAP for detection. These results demonstrate that our design choices are not specific to ScanNet and generalize to other 3D datasets.
# 4.4. Analysis
Which loss terms matter? In Tab. 5 we analyze the relative contribution of the loss terms in Eq. (1) and (8). We find self-training to be the most critical: removing $\mathcal{L}_{\mathrm{seg}}^{\mathrm{SELF}}$ and $\mathcal{L}_{\mathrm{det}}^{\mathrm{SELF}}$ leads to a significant drop in both metrics, $-7.5\%$ mIoU and $-5.1\%$ mAP. This is consistent with observations in prior work on weak supervision [34, 54]. Next, we find that enforcing consistency between the detection and segmentation tasks adds large gains, especially for segmentation: $+2.9\%$ mIoU. Enforcing consistency across transformations is particularly important for detection, leading to a $1.4\%$ mAP gain. Finally, encouraging smoothness over primitive structures improves both metrics, by $1.7\%$ mIoU and $0.5\%$ mAP.
Figure 5: Qualitative results on ScanNet. WyPR+prior is able to segment, generate proposals and detect objects without ever having seen any spatial annotations.

Jittering for proposal generation. We observe that scanned point clouds are often imperfect, with large holes in objects due to occlusions, clutter, or sensor artifacts. This makes it challenging for GSS to correctly group parts. To overcome this, we jitter the points in 3D space using a random multiplier within the range $[1 - \delta /2, 1 + \delta /2]$ and decide the neighboring regions based on the jittered points. This simple technique counts spatially close but non-overlapping regions as neighbors, and greatly improves GSS results. We show the impact of $\delta$ in Fig. 4 (left).
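The jittering trick can be sketched as below; applying an independent multiplier per coordinate is an assumption (the paper only specifies a random multiplier in $[1 - \delta/2, 1 + \delta/2]$):

```python
import numpy as np

def jittered_scale(points, delta, rng):
    """Sketch of the proposal-generation jitter: each coordinate is
    multiplied by a random factor in [1 - delta/2, 1 + delta/2] before
    region adjacency is computed, so spatially close but non-overlapping
    regions can count as neighbors. `points` is an (N, 3) XYZ array."""
    factors = rng.uniform(1 - delta / 2, 1 + delta / 2, size=points.shape)
    return points * factors
```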
Number of proposals. We randomly sample at most 250, 500, 1000, 1500, 2000 regions from the same set of computed proposals and report the recall and detection mAP in Fig. 4 (right). Using fewer proposals hurts both the recall and precision since the model misses many relevant objects. In contrast, a large number of proposals increases recall but hurts precision, presumably because too many proposals increase the false positive rate of the detection module. We find 1000 proposals to be a good balance between precision and recall, and use this number for all our experiments.
Qualitative results. Fig. 5 shows a few representative examples of our model's predictions on ScanNet. As can be seen, input point clouds are quite challenging, with large amounts of clutter and sensor imperfections. Nevertheless, our model is able to recognize objects such as chairs, tables, and sofa with good accuracy. Please see Appendix F for more results, analysis and failure modes.
# 5. Conclusion
We propose WyPR, a novel framework for joint 3D semantic segmentation and object detection, trained using only scene-level class tags as supervision. It leverages a novel unsupervised 3D proposal generation approach (GSS) along with natural constraints between the segmentation and detection tasks. Through extensive experiments on standard datasets, we show that WyPR outperforms single-task baselines and prior state-of-the-art methods on both tasks.
Acknowledgements. This work was supported in part by NSF under Grant #1718221, 2008387 and MRI #1725729, NIFA award 2020-67021-32799. The authors thank Zaiwei Zhang and the Facebook AI team for helpful discussions and feedback.
# References
[1] P. Arbeláez, J. Pont-Tuset, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. 6
[2] Iro Armeni, Sasha Sax, Amir R Zamir, and Silvio Savarese. Joint 2d-3d-semantic data for indoor scene understanding. arXiv:1702.01105, 2017. 2, 6, 7
[3] Hakan Bilen and Andrea Vedaldi. Weakly supervised deep detection networks. In CVPR, 2016. 2, 4
[4] Rich Caruana. Multitask learning. Machine Learning, 1997. 3
[5] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. In 3DV, 2017. 2
[6] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository. arXiv:1512.03012, 2015. 2, 7, 14
[7] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, et al. Argoverse: 3d tracking and forecasting with rich maps. In CVPR, 2019. 2
[8] Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, and Vladlen Koltun. A large dataset of object scans. arXiv:1602.02481, 2016. 2
[9] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In CVPR, 2019. 1, 2, 6, 7
[10] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. 1, 2, 6, 7
[11] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In ICCV, 2017. 3
[12] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015. 3
[13] Cat Franklin. Apple unveils new ipad pro with breakthrough lidar scanner and brings trackpad support to ipados. https://www.apple.com/, 2020. 1, 2
[14] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. IJRR, 2013. 2
[15] Rohit Girdhar, David Ford Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016. 2
[16] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, 2018. 2
[17] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In CVPR, 2019. 2
[18] Ji Hou, Angela Dai, and Matthias Nießner. 3d-sis: 3d semantic instance segmentation of rgb-d scans. In CVPR, 2019. 2, 7
[19] Iasonas Kokkinos. Ubernet: Training a 'universal' convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In CVPR, 2017. 3
[20] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven L Waslander. Joint 3d proposal generation and object detection from view aggregation. In IROS, 2018. 2
[21] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982, 2018. 2
[22] Florent Lafarge and Clément Mallet. Creating large-scale city models from 3d-point clouds: a robust approach with hybrid representation. IJCV, 2012. 4, 5, 11, 13
[23] Xiaoyan Li, Meina Kan, Shiguang Shan, and Xilin Chen. Weakly supervised object detection with segmentation collaboration. In CVPR, 2019. 3
[24] Qinghao Meng, Wenguan Wang, Tianfei Zhou, Jianbing Shen, Luc Van Gool, and Dengxin Dai. Weakly supervised 3d object detection from lidar point cloud. In ECCV, 2020. 3
[25] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016. 3
[26] Sven Oesau, Yannick Verdie, Clément Jamin, Pierre Alliez, Florent Lafarge, Simon Giraudot, Thien Hoang, and Dmitry Anisimov. Shape detection. In CGAL User and Reference Manual. CGAL Editorial Board, 5.1 edition, 2020. 5, 13
[27] Deepak Pathak, Philipp Krahenbuhl, and Trevor Darrell. Constrained convolutional neural networks for weakly supervised segmentation. In ICCV, 2015. 2
[28] Charles R. Qi, Xinlei Chen, Or Litany, and Leonidas J. Guibas. Imvotenet: Boosting 3d object detection in point clouds with image votes. In CVPR, 2020. 2, 6
[29] Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas. Deep hough voting for 3d object detection in point clouds. In ICCV, 2019. 1, 2, 3, 6, 7, 13, 14
[30] Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In CVPR, 2018. 2, 7
[31] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 2, 3, 7
[32] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017. 1, 2, 3, 6
[33] Zengyi Qin, Jinglu Wang, and Yan Lu. Weakly supervised 3d object detection from point clouds. In ACM MM, 2020. 2, 3, 6, 7, 8
[34] Zhongzheng Ren and Yong Jae Lee. Cross-domain self-supervised multi-task feature learning using synthetic imagery. In CVPR, 2018. 3, 8
[35] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Yong Jae Lee, Alexander G. Schwing, and Jan Kautz. Instance-aware, context-focused, and memory-efficient weakly supervised object detection. In CVPR, 2020. 2, 4
[36] Zhongzheng Ren, Zhiding Yu, Xiaodong Yang, Ming-Yu Liu, Alexander G. Schwing, and Jan Kautz. UFO$^2$: A unified framework towards omni-supervised object detection. In ECCV, 2020. 1, 3
[37] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In CVPR, 2017. 2
[38] Ruwen Schnabel, Roland Wahl, and Reinhard Klein. Efficient ransac for point-cloud shape detection. In Computer graphics forum, 2007. 5, 13
[39] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. PointRCNN: 3d object proposal generation and detection from point cloud. In CVPR, 2019. 2
[40] Krishna Kumar Singh, Fanyi Xiao, and Yong Jae Lee. Track and transfer: Watching videos to simulate strong human supervision for weakly-supervised object detection. In CVPR, 2016. 2
[41] Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao. SUN RGB-D: A rgb-d scene understanding benchmark suite. In CVPR, 2015. 2
[42] Shuran Song and Jianxiong Xiao. Deep sliding shapes for amodal 3d object detection in rgb-d images. In CVPR, 2016. 2
[43] Scott Stein. Lidar on the iphone 12 pro. https://www.cnet.com/, 2020. 1, 2
[44] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In CVPR, 2020. 2
[45] Peng Tang, Xinggang Wang, Xiang Bai, and Wenyu Liu. Multiple instance detection network with online instance classifier refinement. In CVPR, 2017. 2, 4
[46] Yew Siang Tang and Gim Hee Lee. Transferable semi-supervised 3d object detection from rgb-d data. In CVPR, 2019. 2, 3
[47] Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatrix Marcotegui, François Goulette, and Leonidas J. Guibas. Kpconv: Flexible and deformable convolution for point clouds. In ICCV, 2019. 2, 7
[48] J.R.R. Uijlings, K.E.A. van de Sande, T. Gevers, and A.W.M. Smeulders. Selective search for object recognition. IJCV, 2013. 2, 6, 7, 11
[49] Ozan Unal, Luc Van Gool, and Dengxin Dai. Improving point cloud semantic segmentation by learning 3d object proposal generation. arXiv:2009.10569, 2020. 3
[50] Fang Wan, Chang Liu, Wei Ke, Xiangyang Ji, Jianbin Jiao, and Qixiang Ye. C-mil: Continuation multiple instance learning for weakly supervised object detection. In CVPR, 2019. 4
[51] Haiyan Wang, Xuejian Rong, Liang Yang, Shuihua Wang, and Yingli Tian. Towards weakly supervised semantic segmentation in 3d graph-structured point clouds of wild scenes. In BMVC, 2019. 2
[52] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics, 2019. 2
[53] Jiacheng Wei, Guosheng Lin, Kim-Hui Yap, Tzu-Yi Hung, and Lihua Xie. Multi-path region mining for weakly supervised 3d semantic segmentation on point clouds. In CVPR, 2020. 2, 6, 7, 14, 15
[54] Yunchao Wei, Jiashi Feng, Xiaodan Liang, Ming-Ming Cheng, Yao Zhao, and Shuicheng Yan. Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CVPR, 2017. 2, 3, 8
[55] Yunchao Wei, Huaxin Xiao, Honghui Shi, Zequn Jie, Jiashi Feng, and Thomas S Huang. Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In CVPR, 2018. 2, 3
[56] Benjamin Wilson, Zsolt Kira, and James Hays. 3d for free: Crossmodal transfer learning using hd maps. arXiv:2008.10592, 2020. 2, 3
[57] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015. 2, 7, 14
[58] Jia Xu, Alexander Schwing, and Raquel Urtasun. Tell Me What You See and I will Show You Where It Is. In CVPR, 2014. 2
[59] Xun Xu and Gim Hee Lee. Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In CVPR, 2020. 2
[60] Li Yi, Wang Zhao, He Wang, Minhyuk Sung, and Leonidas J Guibas. Gspn: Generative shape proposal network for 3d instance segmentation in point cloud. In CVPR, 2019. 7
[61] Zaiwei Zhang, Bo Sun, Haitao Yang, and Qixing Huang. H3dnet: 3d object detection using hybrid geometric primitives. In ECCV, 2020. 2
[62] Na Zhao, Tat-Seng Chua, and Gim Hee Lee. Sess: Self-ensembling semi-supervised 3d object detection. In CVPR, 2020. 3
[63] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. 2
[64] C. Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014. 2, 6