Add Batch c33e6f5c-0eae-4003-871c-34b2c891a7d9
This view is limited to 50 files because it contains too many changes. See the raw diff for the full list.
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_content_list.json +3 -0
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_model.json +3 -0
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_origin.pdf +3 -0
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/full.md +307 -0
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/images.zip +3 -0
- 2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/layout.json +3 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_content_list.json +3 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_model.json +3 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_origin.pdf +3 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/full.md +305 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/images.zip +3 -0
- 2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/layout.json +3 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_content_list.json +3 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_model.json +3 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_origin.pdf +3 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/full.md +275 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/images.zip +3 -0
- 360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/layout.json +3 -0
- 3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_content_list.json +3 -0
- 3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_model.json +3 -0
- 3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_origin.pdf +3 -0
- 3dawareblendingwithgenerativenerfs/full.md +401 -0
- 3dawareblendingwithgenerativenerfs/images.zip +3 -0
- 3dawareblendingwithgenerativenerfs/layout.json +3 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_content_list.json +3 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_model.json +3 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_origin.pdf +3 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/full.md +462 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/images.zip +3 -0
- 3dawaregenerativemodelforimprovedsideviewimagesynthesis/layout.json +3 -0
- 3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_content_list.json +3 -0
- 3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_model.json +3 -0
- 3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_origin.pdf +3 -0
- 3dawareimagegenerationusing2ddiffusionmodels/full.md +359 -0
- 3dawareimagegenerationusing2ddiffusionmodels/images.zip +3 -0
- 3dawareimagegenerationusing2ddiffusionmodels/layout.json +3 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_content_list.json +3 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_model.json +3 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_origin.pdf +3 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/full.md +361 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/images.zip +3 -0
- 3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/layout.json +3 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_content_list.json +3 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_model.json +3 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_origin.pdf +3 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/full.md +361 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/images.zip +3 -0
- 3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/layout.json +3 -0
- 3dhackerspectrumbaseddecisionboundarygenerationforhardlabel3dpointcloudattack/de922a90-2feb-49b4-b9ef-6ae72d2ba6c0_content_list.json +3 -0
- 3dhackerspectrumbaseddecisionboundarygenerationforhardlabel3dpointcloudattack/de922a90-2feb-49b4-b9ef-6ae72d2ba6c0_model.json +3 -0
2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b5f9edf96141d5e56d3569e0dfdf14a1848d427e3d407178822433ff55513c0
+size 82992

2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec7e81dce29e72c49c65ece49878bd37c42310d243006333c4b59f4bbdc05740
+size 103750

2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/795103ed-ec59-49fd-adff-69d819d6a94f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb3ba5f5135769dad16d1c0fe9a6ba9f6cbb8ebb72154c28aeaf4966e0cae851
+size 2430670
2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/full.md
ADDED
@@ -0,0 +1,307 @@
# 2D-3D Interlaced Transformer for Point Cloud Segmentation with Scene-Level Supervision

Cheng-Kun Yang $^{1,2}$, Min-Hung Chen $^{3}$, Yung-Yu Chuang $^{1}$, Yen-Yu Lin $^{4,5}$

$^{1}$ National Taiwan University $^{2}$ MediaTek $^{3}$ NVIDIA $^{4}$ National Yang Ming Chiao Tung University $^{5}$ Academia Sinica

# Abstract

We present a Multimodal Interlaced Transformer (MIT) that jointly considers 2D and 3D data for weakly supervised point cloud segmentation. Research studies have shown that 2D and 3D features are complementary for point cloud segmentation. However, existing methods require extra 2D annotations to achieve 2D-3D information fusion. Considering the high annotation cost of point clouds, effective 2D and 3D feature fusion based on weakly supervised learning is in great demand. To this end, we propose a transformer model with two encoders and one decoder for weakly supervised point cloud segmentation using only scene-level class tags. Specifically, the two encoders compute the self-attended features for 3D point clouds and 2D multi-view images, respectively. The decoder implements interlaced 2D-3D cross-attention and carries out implicit 2D and 3D feature fusion. We alternately switch the roles of queries and key-value pairs in the decoder layers. It turns out that the 2D and 3D features are iteratively enriched by each other. Experiments show that our method performs favorably against existing weakly supervised point cloud segmentation methods by a large margin on the S3DIS and ScanNet benchmarks. The project page will be available at https://jimmy15923.github.io/mit_web/.

# 1. Introduction
Point cloud segmentation offers rich geometric and semantic information of a 3D scene, thereby being essential to many 3D applications, such as scene understanding [5, 10, 16, 26, 36], augmented reality [2, 35], and autonomous driving [7, 8, 13]. However, developing reliable models is time-consuming and challenging due to the need for vast per-point annotations and the difficulty in capturing detailed semantic clues from textureless point clouds.

Figure 1: Overview of the Multimodal Interlaced Transformer (MIT). The input includes a 3D point cloud, multi-view 2D images, and class-level tags of a scene. Our method is a transformer model with two encoders and one decoder. The two encoders compute features for 3D voxel tokens and 2D view tokens, respectively. The decoder conducts interlaced 2D-3D attention and carries out 2D and 3D feature fusion. In its odd layers, 3D voxels serve as queries and are enriched by the semantic features of 2D views, which act as key-value pairs. In the even layers, the roles of 3D voxels and 2D views switch: 2D views are described by additional 3D geometric features.

Research efforts have been made to address the aforementioned issues. Several methods have been proposed to derive point cloud segmentation models using various weak supervisions, such as sparsely labeled points [24, 33, 58, 65], bounding box labels [9], subcloud-level annotations [54], and scene-level tags [40, 61]. These weak annotations are cost-efficient and can significantly reduce the annotation burden. On the other hand, recent studies [19, 20, 23, 32, 41, 52, 53, 62], witnessing the remarkable success of 2D vision, utilize 2D image features to enhance 3D recognition tasks. They show promising results because detailed 2D texture clues complement 3D geometry features well.
Although 2D-3D fusion is effective, current methods require extra annotation costs for 2D images. To the best of our knowledge, no prior work has explored fusing 2D-3D features under extremely weak supervision, where only scene-level class tags of the 3D scene are given. It is challenging to derive a segmentation model that leverages both 2D and 3D data under scene-level supervision, as no per-point/pixel annotations or per-image class tags are available to guide the learning process. Furthermore, existing 2D-3D fusion methods require camera poses or depth maps to establish pixel-to-point correspondences, adding extra burdens on data collection and processing. In this work, we address these difficulties by proposing a Multimodal Interlaced Transformer (MIT) that works with scene-level supervision and can implicitly fuse 2D and 3D features without camera poses and depth maps.

Our MIT is a transformer model with two encoders and one decoder, and can carry out weakly supervised point cloud segmentation. As shown in Figure 1, the input to our method includes the 3D point cloud, multi-view images, and scene-level tags of a scene. The two encoders utilize the self-attention mechanism to extract the features of the 3D point cloud and the 2D multi-view images, respectively. The decoder computes the proposed interlaced 2D-3D attention and can implicitly fuse the 2D and 3D data.

Specifically, one encoder is derived for 3D feature extraction, where the voxels of the input point cloud yield the data tokens. The other encoder is for 2D multi-view images, where images serve as data tokens. Also, multi-class tokens [57] are included to match the class-level annotations. The encoders capture long-range dependencies and aggregate class-specific features for their respective modalities.
The decoder comprises 2D-3D interlaced layers and is developed to fuse 2D and 3D features, where the correspondences between 3D voxels and 2D views are implicitly computed via cross-attention. In odd layers of the decoder, 3D voxels are enriched by 2D image features, while in even layers, 2D views are augmented by 3D geometric features. Specifically, in each odd layer, each 3D voxel serves as a query, while 2D views act as key-value pairs. Through cross-attention, a query is a weighted combination of the values. Together with residual learning, this query (3D voxel) is characterized by the fused 3D and 2D features. In each even layer, the roles of 3D voxels and 2D views switch: 3D voxels and 2D views become key-value pairs and queries, respectively. This way, 2D views are described by the augmented 2D and 3D features.

By leveraging multi-view information without extra annotation effort, our proposed MIT effectively fuses the 2D and 3D features and significantly improves 3D point cloud segmentation. The main contribution of this work is three-fold. First, to the best of our knowledge, we make the first attempt to fuse 2D-3D information for point cloud segmentation under scene-level supervision. Second, we enable this new task by presenting a new model named Multimodal Interlaced Transformer (MIT) that implicitly fuses 2D-3D information via interlaced attention and does not rely on camera pose information. Besides, a contrastive loss is developed to align the class tokens across modalities. Third, our method performs favorably against existing methods on the large-scale ScanNet [11] and S3DIS [3] benchmarks.

# 2. Related Work
Weakly supervised point cloud segmentation. This task aims at learning a point cloud segmentation model using weakly annotated data, such as sparsely labeled points [16, 18, 27, 28, 30, 33, 43, 45, 47, 49, 56, 58, 65, 66], box-level labels [9], subcloud-level labels [54, 61], and scene-level labels [24, 40]. Significant progress has been made in the setting of using sparsely labeled points: the state-of-the-art methods [18, 33, 63] show comparable performance with fully supervised ones. These methods usually utilize self-supervised pre-training [16, 33], graph propagation [33, 47], and contrastive learning [28, 66] to derive the models. Despite their effectiveness, they require at least one annotated point for each category in a scene. Hence, it is not straightforward to extend these methods to work with scene-level or subcloud-level annotations.

In this work, we aim to develop a segmentation method under the more challenging setting of using scene-level annotations. The literature on point cloud segmentation with scene-level annotations is relatively rare. Yang et al. [61] derive a transformer by applying multiple instance learning to paired point clouds. However, their performance is much inferior to fully supervised methods. Kweon and Yoon [24] leverage 2D and 3D data for point cloud segmentation by introducing additional image-level class tags, which require extra annotation efforts. Our method compensates for the lack of point-level or pixel-level annotations by integrating additional 2D features while using scene-level annotation only.

2D and 3D fusion for point cloud applications. Given the accessible or synthesizable [23] 2D images in most 3D datasets, research studies [14, 17, 19, 20, 23, 24, 29, 32, 41, 46, 52, 53, 55, 59, 60, 62, 64] explore 2D data to enhance 3D applications. Hu et al. [19] and Robert et al. [41] construct a pixel-point mapping matrix to fuse 2D and 3D features for point cloud segmentation. Despite their effectiveness, existing methods rely on camera poses and/or depth maps to build the correspondences between the 2D and 3D domains. In contrast, our method learns a transformer with interlaced 2D-3D attention, enabling the implicit integration of 2D and 3D features without the need for camera poses or depth maps.

Query and key-value pair swapping. Cross-attention is widely used in transformer decoders. It captures the dependency between queries and key-value pairs. Umam et al. [48] and Kim et al. [21] swap queries and key-value pairs for point cloud decomposition and generation, respectively. Different from their methods, which work with data in a single domain, our method generalizes query and key-value pair swapping to cross-domain feature fusion. In addition, we develop a contrastive loss for 2D and 3D feature alignment.

Figure 2: An overview of our Multimodal Interlaced Transformer (MIT) for weakly supervised point cloud segmentation. It is a transformer-based model with two encoders, $\tilde{f}_{3\mathrm{D}}$ and $\tilde{f}_{2\mathrm{D}}$, for modality-specific feature extraction and one decoder, $f_{d}$, for feature fusion. The 2D and 3D pooled features, $\hat{s}_{2\mathrm{D}}$ and $\hat{s}_{3\mathrm{D}}$, are added to the corresponding learnable positional embeddings ($\hat{z}_{2\mathrm{D}}$ and $\hat{z}_{3\mathrm{D}}$), prepended with the class tokens, and passed through the encoders to obtain the self-attended features $F_{2\mathrm{D}}$ and $F_{3\mathrm{D}}$. The predicted class scores for each modality are obtained through average pooling and class-aware layers.

# 3. Proposed Method
We present the proposed method in this section. We first give the problem statement in Section 3.1. Then, we specify the developed MIT with an encoder-decoder architecture in Section 3.2 and Section 3.3. Finally, the implementation details are provided in Section 3.4.

# 3.1. Problem Statement

We are given a set of $N$ point clouds as well as their corresponding RGB multi-view images and class tag annotations, i.e., $\{P_n, V_n, \mathbf{y}_n\}_{n=1}^N$, where $P_{n}$ denotes the $n$th point cloud, $V_{n}$ represents the multi-view images, and $\mathbf{y}_n$ is the class-level label. Note that $P_{n}$, $V_{n}$, and $\mathbf{y}_n$ are acquired from the same scene. Without loss of generality, we assume that each point cloud consists of $M$ points, i.e., $P_{n} = \{\mathbf{p}_{nm}\}_{m = 1}^{M}$, where each point $\mathbf{p}_{nm}\in \mathbb{R}^6$ is represented by its 3D coordinates and RGB color. The RGB multi-view images are captured in the same scene as $P_{n}$ and consist of a set of $T$ images, i.e., $V_{n} = \{\mathbf{v}_{nt}\}_{t = 1}^{T}$. Each image $\mathbf{v}_{nt}\in \mathbb{R}^{H\times W\times 3}$ is of resolution $H\times W$ with RGB channels. The class tag of $P_{n}$, i.e., $\mathbf{y}_n\in \{0,1\}^C$, is a $C$-dimensional binary vector indicating which categories are present, where $C$ is the number of categories of interest.

With the weakly annotated dataset $\{P_n, V_n, \mathbf{y}_n\}_{n=1}^N$, we aim to derive a model for point cloud segmentation that classifies each point of a testing cloud into one of the $C$ categories. Note that in this weakly supervised setting, neither points nor pixels are labeled, and camera poses are unavailable, making it challenging to enhance 3D point cloud segmentation with additional 2D features due to the absence of point/pixel supervision and of explicit correspondences between 2D pixels and 3D points. Furthermore, as the multi-view images share the same scene-level class tag, the lack of individual class tag annotations for each view image may lead to an inaccurate semantic understanding of each image.
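A minimal sketch of one weakly labeled training sample under the notation above is shown below; the tensor names and the example sizes are illustrative assumptions, not part of the released datasets or code.

```python
import torch

M, T, H, W, C = 40960, 16, 240, 320, 20          # example sizes only

sample = {
    "points": torch.rand(M, 6),                  # P_n: xyz coordinates + RGB color per point
    "views": torch.rand(T, H, W, 3),             # V_n: T RGB images of the same scene
    "tags": torch.zeros(C),                      # y_n: binary scene-level class tags
}
sample["tags"][[0, 4, 7]] = 1.0                  # e.g., only three categories appear in this scene
```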
Method overview. Figure 2 illustrates the network architecture of MIT, which comprises two transformer encoders, $\tilde{f}_{3\mathrm{D}}$ and $\tilde{f}_{2\mathrm{D}}$, and one decoder $f_{d}$. The two encoders extract features for 3D point clouds and 2D multi-view images, respectively. The decoder is developed for 2D-3D feature fusion and utilizes cross-attention to connect 2D and 3D data implicitly. They are elaborated in the following.

# 3.2. Transformer Encoders

3D point cloud feature extraction. A 3D backbone $f_{3\mathrm{D}}$, e.g., MinkowskiNet [10] or PointNet++ [38], is applied to extract the point embedding $s_{3\mathrm{D}} \in \mathbb{R}^{M \times D}$ for all $M$ points of a point cloud $P$. Like WYPR [31], we perform supervoxel partition using an unsupervised off-the-shelf algorithm [25]. The 3D coordinates of $P$ are fed into a coordinate embedding module $f_{emb}$, which is composed of two $1 \times 1$ convolution layers with ReLU activation, to get the positional embedding $z_{3\mathrm{D}} \in \mathbb{R}^{M \times D}$, where $D$ is the embedding dimension. We aggregate both the point features and the point positional embeddings through supervoxel average pooling [33], producing the supervoxel features $\hat{s}_{3\mathrm{D}} \in \mathbb{R}^{S \times D}$ and the pooled positional embedding $\hat{z}_{3\mathrm{D}} \in \mathbb{R}^{S \times D}$, where $S$ is the number of supervoxels in $P$. The supervoxel features are added to the positional embedding.

To learn a class-specific representation that fits the scene-level supervision, we prepend $C$ learnable class tokens [57] $c_{\mathrm{3D}} \in \mathbb{R}^{C \times D}$ to the $S$ supervoxel tokens. In total, $(C + S)$ tokens are fed into the transformer encoder $\tilde{f}_{\mathrm{3D}}$.

Figure 3: The architecture of an interlaced block. The multilayer perceptron with residual learning is omitted for simplicity but is used in the block.

Through the self-attention mechanism, the dependencies among the class and supervoxel tokens are captured, producing the self-attended 3D features $F_{\mathrm{3D}} \in \mathbb{R}^{(C + S) \times D}$.
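The following PyTorch sketch illustrates how the 3D voxel tokens described above could be assembled; the function and variable names (`supervoxel_average_pool`, `f_emb`, `sv_id`) and the scatter-style pooling are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def supervoxel_average_pool(x: torch.Tensor, sv_id: torch.Tensor, num_sv: int) -> torch.Tensor:
    """Average the per-point rows of x (M, D) over supervoxels indexed by sv_id (M,)."""
    D = x.size(1)
    pooled = x.new_zeros(num_sv, D).index_add_(0, sv_id, x)
    counts = x.new_zeros(num_sv).index_add_(0, sv_id, torch.ones_like(sv_id, dtype=x.dtype))
    return pooled / counts.clamp(min=1).unsqueeze(1)

M, S, C, D = 40960, 512, 20, 96                   # points, supervoxels, classes, embedding dim
point_feats = torch.randn(M, D)                   # s_3D from the 3D backbone (e.g., MinkowskiNet)
coords = torch.rand(M, 3)                         # 3D coordinates of P
sv_id = torch.randint(0, S, (M,))                 # supervoxel index of each point

# Coordinate embedding module f_emb: two 1x1 convolutions with ReLU in between.
f_emb = nn.Sequential(nn.Conv1d(3, D, 1), nn.ReLU(), nn.Conv1d(D, D, 1))
z_3d = f_emb(coords.t().unsqueeze(0)).squeeze(0).t()        # (M, D) positional embedding

s_hat = supervoxel_average_pool(point_feats, sv_id, S)       # (S, D) supervoxel features
z_hat = supervoxel_average_pool(z_3d, sv_id, S)              # (S, D) pooled positional embedding

class_tokens_3d = nn.Parameter(torch.zeros(C, D))            # C learnable class tokens
tokens_3d = torch.cat([class_tokens_3d, s_hat + z_hat], 0)   # (C + S, D) input to the 3D encoder
```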
2D multi-view image feature extraction. A 2D backbone network $f_{2\mathrm{D}}$, e.g., ResNet [15], is employed to extract image features $s_{2\mathrm{D}} \in \mathbb{R}^{T \times H' \times W' \times D}$, where $H' = H / 32$ and $W' = W / 32$. We apply global average pooling to the image features $s_{2\mathrm{D}}$ along the spatial dimensions. The pooled image features $\hat{s}_{2\mathrm{D}} \in \mathbb{R}^{T \times D}$ are added to the learnable positional embedding $\hat{z}_{2\mathrm{D}} \in \mathbb{R}^{T \times D}$, producing $T$ view tokens. Analogous to 3D feature extraction, another transformer encoder $\tilde{f}_{2\mathrm{D}}$ is applied to the $C$ class tokens $c_{2\mathrm{D}} \in \mathbb{R}^{C \times D}$ and the $T$ view tokens, obtaining the self-attended 2D features $F_{2\mathrm{D}} \in \mathbb{R}^{(C + T) \times D}$.

Encoder optimization. During training, we consider a point cloud $P$ and its associated $T$ multi-view images $\{\mathbf{v}_t\}$ and scene-level label $\mathbf{y}$. The 2D and 3D self-attended features $F_{2\mathrm{D}}$ and $F_{3\mathrm{D}}$ are compiled as specified above. We adopt multi-label classification losses [40, 57] for optimization.

For the 3D attended features $F_{\mathrm{3D}} \in \mathbb{R}^{(C + S) \times D}$, we divide them into $C$ class tokens $F_{\mathrm{3D}}^c \in \mathbb{R}^{C \times D}$ and $S$ supervoxel tokens $F_{\mathrm{3D}}^s \in \mathbb{R}^{S \times D}$. For the class tokens $F_{\mathrm{3D}}^c$, the $C$ class scores are estimated by applying average pooling along the feature dimension. The multi-label classification loss $\mathcal{L}_{\mathrm{3D}}^c$ is computed based on the estimated class scores and the scene-level ground-truth labels $\mathbf{y}$. For the supervoxel tokens $F_{\mathrm{3D}}^s$, we introduce a class-aware layer [44], i.e., a $1 \times 1$ convolution layer with $C$ filters, which maps the supervoxel tokens $F_{\mathrm{3D}}^s$ into the class activation maps (CAM) $\tilde{F}_{\mathrm{3D}}^s \in \mathbb{R}^{S \times C}$. The estimated class scores are obtained by applying global average pooling to $\tilde{F}_{\mathrm{3D}}^s$ along the dimension of supervoxels. The multi-label classification loss $\mathcal{L}_{\mathrm{3D}}^s$ is computed based on the class scores and the label $\mathbf{y}$. The loss for the 3D modality is defined by $\mathcal{L}_{\mathrm{3D}} = \mathcal{L}_{\mathrm{3D}}^c + \mathcal{L}_{\mathrm{3D}}^s$. For the self-attended 2D features $F_{\mathrm{2D}} \in \mathbb{R}^{(C + T) \times D}$ of the $C$ class tokens and $T$ view tokens, the 2D loss is similarly defined by $\mathcal{L}_{\mathrm{2D}} = \mathcal{L}_{\mathrm{2D}}^c + \mathcal{L}_{\mathrm{2D}}^t$.

In sum, both encoders are derived in a weakly supervised manner using the objective function

$$
\mathcal{L}_{enc} = \mathcal{L}_{\mathrm{2D}} + \mathcal{L}_{\mathrm{3D}}. \tag{1}
$$
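A minimal sketch of the 3D-side terms in Eq. (1) is given below, assuming binary cross-entropy as the multi-label classification loss and a linear layer as the class-aware $1 \times 1$ convolution; both choices and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, S, D = 20, 512, 96
F_3d = torch.randn(C + S, D)           # self-attended 3D features from the encoder
y = torch.randint(0, 2, (C,)).float()  # scene-level class tags

F_3d_c, F_3d_s = F_3d[:C], F_3d[C:]    # class tokens and supervoxel tokens

# Class-token branch: averaging over the feature dimension gives one score per class.
cls_scores = F_3d_c.mean(dim=1)                                    # (C,)
loss_3d_c = F.binary_cross_entropy_with_logits(cls_scores, y)

# Supervoxel branch: a class-aware layer maps tokens to a class activation map (CAM),
# which is pooled over supervoxels to obtain scene-level class scores.
class_aware = nn.Linear(D, C, bias=False)                          # acts as a 1x1 convolution
cam_3d = class_aware(F_3d_s)                                       # (S, C)
cam_scores = cam_3d.mean(dim=0)                                    # (C,)
loss_3d_s = F.binary_cross_entropy_with_logits(cam_scores, y)

loss_3d = loss_3d_c + loss_3d_s        # the 2D branch is analogous; L_enc = L_2D + L_3D
```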
# 3.3. Transformer Decoder

The two encoders produce the self-attended 3D features $F_{\mathrm{3D}}$ of $C + S$ tokens and 2D features $F_{\mathrm{2D}}$ of $C + T$ tokens, respectively. We propose a decoder that performs interlaced 2D-3D cross-attention for feature fusion. The decoder $f_{d}$ in Figure 2 is a stack of $R$ interlaced blocks. Each interlaced block is composed of two successive decoder layers, as shown in Figure 3. In the first layer of the block, 3D tokens are enriched by 2D features, while in the second layer, 2D tokens are enriched by 3D features.

In the odd/first layer (the blue-shaded region in Figure 3), the $C + S$ tokens in $F_{\mathrm{3D}}$ serve as the queries, while the $C + T$ tokens in $F_{\mathrm{2D}}$ act as the key-value pairs. Through scaled dot-product attention [50], the cross-modal attention matrix $A \in \mathbb{R}^{(C + S) \times (C + T)}$ (the yellow-shaded region) is computed to store the consensus between the 3D tokens and the 2D tokens. As we focus on the relationships between the 3D tokens and only the 2D view tokens in this layer, we ignore the attention values related to the 2D class tokens in $A$. Specifically, only the query-to-view attention values $A_{q2v} \in \mathbb{R}^{(C + S) \times T}$ (green dots in Figure 3) are considered. This is implemented by applying submatrix extraction to the attention matrix $A$ and the value matrix $V \in \mathbb{R}^{(C + T) \times D}$, i.e., $A_{q2v} = A[1:C + S, C + 1:C + T]$ and $V_d = V[C + 1:C + T, :]$.

After applying the softmax operation to $A_{q2v}$, we perform matrix multiplication between the query-to-view attention matrix $A_{q2v}$ and the masked value matrix $V_d$. This way, each query (3D token) is a weighted sum of the values (2D view tokens). Together with a residual connection, the resultant 3D tokens $\hat{F}_{\mathrm{3D}}$ are enriched by 2D features. It turns out that implicit 2D-to-3D feature fusion is carried out without using annotated data.

In the even/second layer (the green-shaded region in Figure 3), the roles of $F_{\mathrm{3D}}$ and $F_{2\mathrm{D}}$ switch: the former serves as the key-value pairs while the latter yields the queries. After a similar procedure, the resultant 2D tokens $\hat{F}_{2\mathrm{D}} \in \mathbb{R}^{(C + T)\times D}$ are augmented with 3D information. $\hat{F}_{2\mathrm{D}}$ and $\hat{F}_{3\mathrm{D}}$ are the outputs of the interlaced block. By stacking $R$ interlaced blocks, the proposed decoder is built to fuse 2D and 3D features iteratively.
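The sketch below illustrates one interlaced block along the lines described above: single-head scaled dot-product cross-attention, dropping the columns of the other modality's class tokens before the softmax, with residual connections; the per-layer MLPs and layer normalization are omitted, and the class/module names are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class InterlacedBlock(nn.Module):
    """Odd layer: 3D tokens attend to 2D view tokens. Even layer: 2D tokens attend to 3D voxel tokens."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.C = num_classes
        self.q3, self.k2, self.v2 = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.q2, self.k3, self.v3 = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def cross(self, q, k, v):
        attn = (q @ k.t()) * self.scale           # full attention over all key tokens
        # Keep only the columns of the data tokens (drop the other modality's class-token columns).
        attn_data = attn[:, self.C:].softmax(dim=-1)
        return attn_data @ v[self.C:], attn       # fused update and the raw attention matrix

    def forward(self, f3d, f2d):
        # Odd layer: 3D tokens (C + S) are queries, 2D tokens (C + T) are key-value pairs.
        upd3, attn_a = self.cross(self.q3(f3d), self.k2(f2d), self.v2(f2d))
        f3d = f3d + upd3                           # residual: 3D tokens enriched by 2D view features
        # Even layer: roles swap, 2D tokens are queries, 3D tokens are key-value pairs.
        upd2, attn_b = self.cross(self.q2(f2d), self.k3(f3d), self.v3(f3d))
        f2d = f2d + upd2                           # residual: 2D tokens enriched by 3D voxel features
        return f3d, f2d, attn_a, attn_b            # attention matrices can be reused for L_con

C, S, T, D = 20, 512, 16, 96
block = InterlacedBlock(D, C)
f3d_out, f2d_out, a_odd, a_even = block(torch.randn(C + S, D), torch.randn(C + T, D))
```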
Decoder optimization. In the last interlaced block, the 2D class scores and 3D class scores can be estimated by applying average pooling to the corresponding class tokens. The multi-label classification losses for the 2D and 3D data, $\mathcal{L}_{\tilde{\mathrm{2D}}}$ and $\mathcal{L}_{\tilde{\mathrm{3D}}}$, can then be computed between the ground truth and the estimated class scores.

To mine additional supervisory signals, we employ contrastive learning on the class-to-class attention matrix $A_{c2c} = A[1:C, 1:C] \in \mathbb{R}^{C \times C}$. Although the 2D class tokens and the 3D class tokens attend to their respective modalities, they share the same class tags. Hence, the attention value between a pair of class tokens belonging to the same class should be larger than those between tokens of different classes, which can be enforced by the N-pair loss [39]. We apply this regularization to all attention matrices in the decoder layers:

$$
\mathcal{L}_{con} = \frac{1}{2R} \sum_{r=1}^{2R} \sum_{i=1}^{C} -\log \frac{A_{ii}^{r}}{\sum_{j=1}^{C} A_{ij}^{r}} + \frac{1}{2R} \sum_{r=1}^{2R} \sum_{j=1}^{C} -\log \frac{A_{jj}^{r}}{\sum_{i=1}^{C} A_{ij}^{r}}, \tag{2}
$$

where $A^r$ is the attention matrix in the $r$th decoder layer.
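A minimal sketch of the per-layer term of Eq. (2) follows; it assumes non-negative attention values (e.g., exponentiated logits) so that the log-ratios are defined, and all names are illustrative.

```python
import torch

def class_token_contrast(attn: torch.Tensor, num_classes: int, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (2) for a single decoder layer: pull same-class 2D/3D class tokens together.

    attn: cross-attention matrix of one decoder layer whose top-left (C x C) block is A_c2c.
    Values are assumed non-negative. The 1/(2R) average over decoder layers is applied outside.
    """
    a = attn[:num_classes, :num_classes].clamp_min(eps)     # A_c2c, (C, C)
    row_term = (-torch.log(a.diag() / a.sum(dim=1))).sum()  # diagonal vs. each row
    col_term = (-torch.log(a.diag() / a.sum(dim=0))).sum()  # diagonal vs. each column
    return row_term + col_term

C = 20
attn_example = torch.rand(C + 512, C + 16)                  # non-negative toy attention matrix
loss_con_layer = class_token_contrast(attn_example, C)
```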
The objective function of learning the decoder is

$$
\mathcal{L}_{dec} = \mathcal{L}_{\tilde{\mathrm{2D}}} + \mathcal{L}_{\tilde{\mathrm{3D}}} + \alpha \mathcal{L}_{con}, \tag{3}
$$

where $\alpha$ is a positive constant.

# 3.4. Implementation Details
The proposed method is implemented in PyTorch. ResNet-50 [15] pre-trained on ImageNet [42] serves as the 2D feature extractor, while MinkowskiNet [10] works as the 3D feature extractor in the experiments. The number of heads, encoder layers, and interlaced blocks, the embedding dimension, and the width of the FFN in the transformer are set to 4, 3, 2, 96, and 96, respectively. 16 multi-view images are randomly sampled for each scene. The model is trained on eight NVIDIA 3090 GPUs for 500 epochs. The batch size, learning rate, and weight decay are set to $32$, $10^{-2}$, and $10^{-4}$, respectively. We use AdamW [22] as the optimizer. The weight $\alpha$ for $\mathcal{L}_{con}$ is set to 0.5.

Inference. Given a point cloud $P$ for inference, we feed it into our 3D encoder for feature extraction. The 3D CAM $\tilde{F}_{\mathrm{3D}}^{s} \in \mathbb{R}^{S \times C}$, i.e., the segmentation result, is then obtained by passing the extracted features into the class-aware layer, as specified in Section 3.2. Following MCTformer [57], the 3D CAM can be further refined by the class-to-voxel attention maps $A_{c2s} \in \mathbb{R}^{C \times S}$ from the last $K$ transformer encoder layers, where $K = 3$. The refined 3D CAM is obtained through element-wise multiplication between the CAM and the attention maps: $F = \tilde{F}_{\mathrm{3D}}^{s} \odot A_{c2s}$, where $\odot$ denotes the Hadamard product. In addition, if multi-view images are provided, we consider the class-to-voxel attention maps in the interlaced decoder, which can be extracted from all the even layers, producing another refined 3D CAM $\hat{F}$. Finally, the segmentation results are obtained by applying the element-wise max operation to $F$ and $\hat{F}$.
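The inference-time refinement could look like the following sketch; how the $K$ encoder (and even decoder) attention maps are aggregated before the Hadamard product is an assumption here (a simple mean), as are all names.

```python
import torch

S, C, K = 512, 20, 3
cam = torch.rand(S, C)                     # \tilde{F}^s_3D from the class-aware layer
enc_attn = torch.rand(K, C, S)             # class-to-voxel attention from the last K encoder layers
dec_attn = torch.rand(2, C, S)             # class-to-voxel attention from the even decoder layers

# Refine the CAM with encoder attention via element-wise (Hadamard) multiplication.
F_ref = cam * enc_attn.mean(dim=0).t()     # (S, C)

# If multi-view images are available, build a second refinement from the decoder attention.
F_hat = cam * dec_attn.mean(dim=0).t()     # (S, C)

# Element-wise max combines the two refined CAMs; argmax yields per-supervoxel predictions.
seg = torch.maximum(F_ref, F_hat).argmax(dim=1)   # (S,)
```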
We follow the common practice in [6, 33, 40, 54, 58, 61] and generate pseudo-segmentation labels by running inference on the training set. A segmentation model, e.g., Res U-Net [16], is then trained for 150 epochs on the pseudo labels with high confidence, i.e., above 0.5, to derive the final segmentation model. No further post-processing is applied.
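A small sketch of this self-training step is given below; treating the refined CAM as softmax probabilities and using an ignore index for low-confidence points are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

S, C, M = 512, 20, 40960
refined_cam = torch.rand(S, C)                 # refined CAM of one training scene
sv_id = torch.randint(0, S, (M,))              # supervoxel index of each point

prob = refined_cam.softmax(dim=1)              # per-supervoxel class probabilities (assumed)
conf, label = prob.max(dim=1)
label[conf <= 0.5] = -100                      # low-confidence supervoxels are ignored

point_pseudo = label[sv_id]                    # broadcast supervoxel labels to points
logits = torch.randn(M, C)                     # stand-in for the student segmentation network's output
loss = F.cross_entropy(logits, point_pseudo, ignore_index=-100)  # skip the ignored points
```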
# 4. Experimental Results

This section evaluates the proposed method. We begin by introducing the datasets and evaluation metrics. The competing methods are then presented and compared. Finally, we present ablation studies on each proposed component and an analysis of our method.

# 4.1. Datasets and Evaluation Metrics

The experiments were conducted on two large-scale point cloud datasets with multi-view images, S3DIS [3] and ScanNet [11]. S3DIS [3] contains 272 scenes from six indoor areas, with a total of 70,496 RGB images. Each scene is represented by a point cloud with 3D coordinates and RGB values. Each point and pixel is labeled with one of 13 categories. Following previous works [37, 38, 40, 51, 58], Area 5 is taken as the test scene. ScanNet [11] includes 1,201 training scenes, 312 validation scenes, and 100 test scenes with 20 classes. Over 2.5 million RGB images are collected. Following [19], we sample one image out of every twenty to avoid redundancy in image selection. Mean Intersection over Union (mIoU) is employed as the evaluation metric for both datasets.

# 4.2. Competing Methods and Comparisons

We compare our MIT with competing weakly supervised point cloud segmentation and 2D-3D feature fusion methods.
# 4.2.1 Point Cloud Segmentation Method Comparison

We compare our proposed method to state-of-the-art methods for segmenting point clouds with scene-level supervision. We also consider methods utilizing different supervision signals and extra data as input. To begin with, we present the fully supervised methods [10, 38, 41, 53] for point cloud segmentation, as they suggest potential performance upper bounds. Next, we show the methods [24, 33, 63] that employ various types of weak labels. Finally, we compare the segmentation methods [40, 54, 61] utilizing scene-level labels that indicate whether each class appears in the scene.

<table><tr><td rowspan="2">Method</td><td rowspan="2">Sup.</td><td colspan="3">Extra inputs</td><td colspan="2">ScanNet</td><td>S3DIS</td></tr><tr><td>RGB</td><td>Pose</td><td>Depth</td><td>Val</td><td>Test</td><td>Test</td></tr><tr><td>MinkUNet [10]</td><td>F.</td><td>-</td><td>-</td><td>-</td><td>72.2</td><td>73.6</td><td>65.8</td></tr><tr><td>DeepViewAgg [41]</td><td>F.</td><td>✓</td><td>✓</td><td>-</td><td>71.0</td><td>-</td><td>67.2</td></tr><tr><td>SemAffiNet [53]</td><td>F.</td><td>✓</td><td>✓</td><td>✓</td><td>-</td><td>74.9</td><td>71.6</td></tr><tr><td>OTOC [33]</td><td>P.</td><td>-</td><td>-</td><td>-</td><td>-</td><td>59.4</td><td>50.1</td></tr><tr><td>Yu et al. [63]</td><td>P.</td><td>✓</td><td>✓</td><td>✓</td><td>-</td><td>63.9</td><td>-</td></tr><tr><td>Kweon et al. [24]</td><td>S. + I.</td><td>✓</td><td>✓</td><td>-</td><td>49.6</td><td>47.4</td><td>-</td></tr><tr><td>MPRM [54]</td><td>S.</td><td>-</td><td>-</td><td>-</td><td>24.4</td><td>-</td><td>10.3</td></tr><tr><td>MIL-Trans [61]</td><td>S.</td><td>-</td><td>-</td><td>-</td><td>26.2</td><td>-</td><td>12.9</td></tr><tr><td>WYPR [40]</td><td>S.</td><td>-</td><td>-</td><td>-</td><td>29.6</td><td>24.0</td><td>22.3</td></tr><tr><td>MIT (3D-only)</td><td>S.</td><td>-</td><td>-</td><td>-</td><td>31.6</td><td>26.4</td><td>23.1</td></tr><tr><td>MIT (Ours)</td><td>S.</td><td>✓</td><td>-</td><td>-</td><td>35.8</td><td>31.7</td><td>27.7</td></tr></table>

Table 1: Quantitative results (mIoU) of several point cloud segmentation methods with diverse supervisions and input data settings on the ScanNet and S3DIS datasets. "Sup." indicates the type of supervision. "F." represents full annotation. "P." denotes sparsely labeled points. "S." denotes scene-level annotation. "I." implies image-level annotation.

Table 1 reports the mIoU results of the competing methods using different types of supervision or extra input data, such as RGB images, camera poses, or depth maps. Existing methods that fuse 2D images with 3D data have demonstrated superior performance compared to 3D-only methods. However, the reliance on camera poses or depth maps limits their applicability. In contrast, our MIT can benefit from 2D images without such requirements, enhancing its generalizability.

By using efficient scene-level annotation, our MIT with 3D data only (the blue-shaded region in Figure 2) shows comparable results to the state-of-the-art weakly supervised method [40], demonstrating the effectiveness of the transformer encoder with multi-class tokens [57]. The proposed interlaced decoder further enhances the performance of the MIT with 3D-only data by incorporating the 2D image information. Without introducing extra annotation costs, our method with 2D-3D fusion outperforms the existing methods by a large margin on both the ScanNet and S3DIS datasets. This result demonstrates once again that 2D and 3D data are complementary. More importantly, the proposed method is capable of utilizing their complementarity in a weakly supervised manner.

Kweon et al. [24] also confirm the effectiveness of combining 2D-3D data. However, their method requires non-negligible extra annotation costs on the images. According to [4, 61], their method incurs more than five times the annotation cost required for scene-level annotation and even more than the sparsely-labeled-points setting.

We summarize the advantages of the scene-level setting in three aspects: 1) Efficiency: Scene-level supervision is much more efficient to collect than other weak supervision types. According to [40, 61], the labeling cost of sparse points (1% of points in ScanNet) is more than ten times higher than our scene-level setting. 2) Generalization: Our method based on scene-level supervision can be extended to other forms of weak supervision. Section 4.3.2 evaluates our method trained with diverse weak supervision types. 3) Potential: Existing weakly supervised point cloud segmentation methods focus on the sparse-point supervision setting and achieve performance almost as good as fully supervised ones. Therefore, working with lower annotation costs, such as scene-level tags, shows potential and is worth exploring. Our method effectively carries it out by utilizing information from unlabeled images.
# 4.2.2 2D-3D Fusion Method Comparison

As far as we are aware, our MIT is the first attempt at exploring 2D-3D fusion without poses, where the model is derived through scene-level supervision. Hence, there is no existing fusion method for direct performance comparison. To evaluate our method, we explore two approaches to 2D-3D feature fusion. First, we design a baseline method using a simple multi-layer perceptron for 2D-3D fusion. For each 3D voxel, we locate the nearest 2D pixel and concatenate the 3D feature with the 2D feature, followed by a $1 \times 1$ convolution to perform 2D-3D feature fusion. Second, we employ the bidirectional projection module [19] for 2D-3D fusion, which utilizes the pixel-to-point link matrix to fuse the 2D-3D features.
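The MLP-fusion baseline could be sketched as follows; the nearest-neighbor search over back-projected pixel positions (which requires poses/depths, as noted in Table 2) and all variable names are assumptions for illustration.

```python
import torch
import torch.nn as nn

S, P, D = 512, 4096, 96
feat_3d = torch.randn(S, D)              # per-voxel features
feat_2d = torch.randn(P, D)              # per-pixel features from the 2D backbone
vox_xyz = torch.rand(S, 3)               # voxel positions
pix_xyz = torch.rand(P, 3)               # back-projected pixel positions (needs poses/depths)

# Nearest 2D pixel for every 3D voxel.
nearest = torch.cdist(vox_xyz, pix_xyz).argmin(dim=1)          # (S,)

# Concatenate and fuse with a 1x1 convolution (equivalently, a linear layer on tokens).
fuse = nn.Linear(2 * D, D)
fused = fuse(torch.cat([feat_3d, feat_2d[nearest]], dim=1))     # (S, D)
```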
We apply the 2D-3D fusion methods to a weakly supervised point cloud segmentation method, MIL-Transformer [61], as well as to our proposed method. Table 2 provides the mIoU results of the competing 2D-3D fusion methods. Our proposed interlaced decoder achieves superior results compared to the two competing 2D-3D fusion methods. More importantly, the interlaced decoder implements 2D-3D fusion without using poses or depths and performs even better when camera information is available (more details in Section 4.3.2 and the supplementary materials).

<table><tr><td>Method</td><td>Fusion</td><td>Pose</td><td>Depth</td><td>mIoU</td></tr><tr><td>MIL-Trans [61]</td><td>MLP</td><td>✓</td><td>-</td><td>25.6</td></tr><tr><td>MIL-Trans [61]</td><td>BPM [19]</td><td>✓</td><td>✓</td><td>25.9</td></tr><tr><td>MIT</td><td>MLP</td><td>✓</td><td>-</td><td>32.6</td></tr><tr><td>MIT</td><td>BPM [19]</td><td>✓</td><td>✓</td><td>32.4</td></tr><tr><td>MIT</td><td>Interlaced</td><td>-</td><td>-</td><td>35.8</td></tr><tr><td>MIT</td><td>Interlaced</td><td>✓</td><td>✓</td><td>37.1</td></tr></table>

Table 2: Quantitative results (mIoU) of our method (interlaced decoder) and competing methods with different 2D-3D fusion strategies on the ScanNet validation set using scene-level annotations.

Our interlaced decoder offers the following advantages. Multi-view aggregation: The view quality differs across views of the same 3D point, e.g., due to occlusion or missing 2D-3D correspondence. Through the attention mechanism, the decoder learns how to effectively aggregate the multi-view information based on semantic information. Global attention: The decoder can capture long-range dependencies, i.e., the receptive field is the whole scene. Low overhead: The computational bottleneck of the decoder lies in cross-attention, whose complexity is linear in $N_{2\mathrm{D}} \times N_{3\mathrm{D}}$, where $N_{2\mathrm{D}}$ and $N_{3\mathrm{D}}$ are the numbers of 2D and 3D tokens, respectively. Since we cast each 2D view into a token via global average pooling, $N_{2\mathrm{D}} = C + T$, where $C$ is the number of classes and $T$ is the number of 2D views. As shown in Table 4, we can achieve good results with $T = 16$ views; hence $N_{2\mathrm{D}}$ can be small. To summarize, the proposed interlaced decoder introduces an acceptable cost but provides multi-view aggregation with global attention. Moreover, our interlaced decoder can further enrich features in the 2D and 3D domains for better 3D segmentation.

# 4.2.3 Qualitative Results

Figure 4: Qualitative results on the ScanNet dataset with scene-level supervision. The colored boxes highlight the differences between our MIT and MIT with 3D data only, and their corresponding views are shown on the right with outlines of the same color. For each view, the tags at the top indicate the results of the multi-label classification.

Figure 4 shows the qualitative results of our MIT with and without using the complementary 2D data. By utilizing both 3D and 2D data, our method achieves promising segmentation results without using any point-level supervision. With the help of detailed texture features in the 2D images, our method is able to classify objects with very similar geometric shapes, for example, door and wall. Take the second row of Figure 4 as an example; MIT successfully segments the points belonging to the door by cooperating with the correct prediction from our 2D view (marked in blue), while the 3D-only MIT fails to locate the points of the door by considering only geometric and color features.

In addition, the category co-occurrence issue could hinder the optimization of the model with 3D data only. Since optimization is based on scene-level labels, it is difficult to learn discriminative features for co-occurring categories. As demonstrated in the second and third rows of Figure 4, MIT (3D-only) often fails to classify the chairs and tables since these categories often co-occur in a scene. In contrast, our method leverages multi-view information during training. As each view captures only a small part of the scene, the issue of category co-occurrence could be alleviated, resulting in better segmentation performance. With the proposed interlaced decoder, the network can learn more corresponding features between views and voxels under weak supervision. Additionally, the data tokens with position embeddings and class tokens with the contrastive loss facilitate the linking of views and voxels.

# 4.3. Ablation Studies and Performance Analysis
We present ablation studies to evaluate the impact of each proposed component and provide a performance analysis.

# 4.3.1 Contributions of Components

To evaluate the effectiveness of each proposed component, we first construct the baseline by considering only 3D data and utilizing class activation maps [54, 58]. Then, we assess the contribution of each component, including the multi-class token transformer encoder (MCT), the interlaced decoder (Interlaced), and the N-pair loss $(\mathcal{L}_{con})$, by successively adding each one to the baseline. In addition, we evaluate the roles of 2D and 3D data, as query and key-value pairs, by switching them. The result of the standard transformer decoder is also reported (the third row of Table 3) by taking 3D as the query and 2D as the key-value. Table 3 illustrates the performance when using different combinations of the proposed modules and loss. The results validate that each component contributes to the performance of our method.

<table><tr><td>Query</td><td>Key-Value</td><td>MCT</td><td>Interlaced</td><td>Lcon</td><td>mIoU</td></tr><tr><td>-</td><td>-</td><td></td><td></td><td></td><td>26.1</td></tr><tr><td>-</td><td>-</td><td>✓</td><td></td><td></td><td>31.6</td></tr><tr><td>3D</td><td>2D</td><td>✓</td><td></td><td></td><td>33.7</td></tr><tr><td>3D</td><td>2D</td><td>✓</td><td>✓</td><td></td><td>35.4</td></tr><tr><td>2D</td><td>3D</td><td>✓</td><td>✓</td><td></td><td>35.2</td></tr><tr><td>3D</td><td>2D</td><td>✓</td><td>✓</td><td>✓</td><td>35.8</td></tr></table>

Table 3: The mIoU performance of different combinations of the proposed components on the validation set of the ScanNet dataset. "Query" and "Key-Value" denote the inputs to the decoder. "MCT" and "Interlaced" denote the multi-class token encoder and the interlaced decoder architectures, respectively. $\mathcal{L}_{con}$ denotes the contrastive loss on the class tokens.

# 4.3.2 Performance Analysis
We discuss extensions of the proposed method and evaluate our method with different parameters and synthesized images in the following.

<table><tr><td>Number of views</td><td>4</td><td>16</td><td>32</td><td>64</td></tr><tr><td>mIoU</td><td>29.7</td><td>32.7</td><td>30.9</td><td>31.2</td></tr></table>

Table 4: Performance with different numbers of views, in terms of the mIoU of pseudo labels on ScanNet.

<table><tr><td>R interlaced blocks</td><td>1</td><td>2</td><td>3</td><td>4</td></tr><tr><td>mIoU</td><td>31.4</td><td>32.7</td><td>32.1</td><td>32.4</td></tr></table>

Table 5: Performance with different numbers of interlaced blocks, in terms of the mIoU of pseudo labels on ScanNet.

Extension with known poses and depths. When camera poses and depth maps are available, the correspondences between 3D world coordinates and 2D pixels can be established. Therefore, we can explicitly construct the positional correlation between 2D views and 3D voxels. To this end, we first generate a 3D world coordinate map for each view by following Yu et al. [63]. All the 3D coordinate maps are fed into the coordinate embedding module $f_{emb}$ to obtain positional embeddings, which are then added to the 2D image features. Through this explicit positional information between 2D views and 3D voxels, we can further boost performance, as shown in the last row of Table 2.
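A sketch of how such a per-view world-coordinate map could be formed from a depth map, intrinsics, and a camera-to-world pose is given below; the pinhole back-projection and all names are assumptions about one plausible realization, not the paper's exact recipe.

```python
import torch

def pixel_world_coords(depth: torch.Tensor, K: torch.Tensor, cam2world: torch.Tensor) -> torch.Tensor:
    """Back-project a depth map (H, W) into world coordinates (H, W, 3) with a pinhole camera."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=depth.dtype),
                          torch.arange(W, dtype=depth.dtype), indexing="ij")
    x = (u - K[0, 2]) / K[0, 0] * depth            # camera-frame X
    y = (v - K[1, 2]) / K[1, 1] * depth            # camera-frame Y
    cam = torch.stack([x, y, depth, torch.ones_like(depth)], dim=-1)   # homogeneous (H, W, 4)
    world = cam @ cam2world.t()                    # apply the 4x4 camera-to-world pose
    return world[..., :3]

depth = torch.rand(60, 80) * 5.0
K = torch.tensor([[500.0, 0.0, 40.0], [0.0, 500.0, 30.0], [0.0, 0.0, 1.0]])
pose = torch.eye(4)
coords = pixel_world_coords(depth, K, pose)        # (60, 80, 3); then fed through f_emb and added to s_2D
```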
Analysis of parameters. We explore the influence of the number of 2D views and of interlaced blocks by evaluating the quality of pseudo labels on the training set. Table 4 shows the performance of our MIT with different numbers of views used in the transformer. We found that the performance is stable when a sufficient number of views is given, as also reported in [19]. Table 5 presents the performance when altering the number of the proposed interlaced blocks. The results indicate that stacking two interlaced blocks performs best, while the performance saturates when adding more blocks.

Experiments with virtual image rendering. Among the limitations of our method is the need for multi-view 2D images within the 3D dataset. A potential solution is virtual view rendering of the 3D data. Several studies [23, 34] suggest that synthesized images can further improve 3D segmentation performance. With the help of virtual view rendering [23], our method still achieves competitive results (34.3% mIoU on the ScanNet validation set) using the synthesized RGB images.

Extensions to other weak supervisions. Thanks to the flexibility of the transformer, our method can be easily adapted to other weakly supervised settings, such as additional image-level labels, subcloud-level annotation, or sparsely labeled points.

<table><tr><td></td><td>Scene</td><td>Scene+Image</td><td>Subcloud</td><td>20pts</td></tr><tr><td>mIoU</td><td>35.8</td><td>45.4</td><td>46.8</td><td>61.9</td></tr><tr><td>label effort</td><td>< 1 min</td><td>5 min</td><td>3 min</td><td>2 min</td></tr></table>

Table 6: The mIoU performance of our MIT and the average annotation time per scene for different weak supervisions on ScanNet.

For the extra image-level annotation, it provides a class tag indicating the object categories present within each view image. Several methods [1, 44] have been proposed to derive a 2D segmentation model based on this supervision and achieve promising results. Our method can easily be trained with image-level supervision by computing the multi-label classification loss on each image token.

Regarding the subcloud-level annotation, it sequentially crops a spherical point cloud from the scene and labels the objects present within the sphere. This type of supervision alleviates the severe class imbalance issue in scene-level supervision. Our approach can be directly trained with subcloud-level supervision by considering the corresponding multi-view images of the subcloud.

For the setting with sparsely labeled points [33, 61], we can calculate the cross-entropy loss on the self-attended voxel features $\hat{F}_{\mathrm{3D}}$ and the labeled points. Furthermore, we note that the sparsely labeled 3D points can be projected onto 2D image pixels, generating 2D pixel annotations. In spite of this, we do not explore this operation in our experiments and leave it for future research.

Table 6 shows the performance of our method under different types of weak supervision and the corresponding annotation cost. While scene-level annotation is the most efficient [61], its performance has room for improvement. The extra image-level labels can improve the performance of scene-level supervision but introduce additional burdens due to the large number of view images in each scene. According to [4], image-level labels cost 20 seconds per image. In line with [24], which utilized 17 multi-view images in their setting, we used 16 images per scene, resulting in an additional five minutes of annotation time. Even though both image-level and subcloud-level supervision types do not require point-level annotation, they could require more annotation effort due to the large number of views and subclouds that need to be annotated. Sparsely labeled points, on the other hand, may perform better with less annotation effort.

Our approach can work effectively with diverse weak supervisions, allowing for flexible savings in annotation costs. More importantly, our MIT shows promising results by using the most efficient scene-level supervision, while other weakly supervised methods cannot be straightforwardly applied in this scenario.
<table><tr><td>3D Backbone</td><td>2D Backbone</td><td>mIoU</td></tr><tr><td>ResUNet-18</td><td>ResNet-50</td><td>32.7</td></tr><tr><td>ResUNet-18</td><td>ResNet-101</td><td>33.1</td></tr><tr><td>ResUNet-34</td><td>ResNet-101</td><td>32.9</td></tr></table>

Table 7: Performance with different backbones, in terms of the mIoU of pseudo labels on ScanNet.

Experiments with different backbones. Table 7 provides the performance (pseudo-label quality in mIoU) of our method on ScanNet with different 2D and 3D backbones, including different versions of 2D ResNet and 3D ResUNet. Our method's performance is consistent across different backbones.

# 5. Conclusion
|
| 230 |
+
|
| 231 |
+
This paper presents a multimodal interlaced transformer, MIT, for weakly supervised point cloud segmentation. Our method represents the first attempt at 2D and 3D information fusion with scene-level annotation. Through the use of the proposed interlaced decoder, which performs implicit 2D-3D feature fusion via cross-attention, we are able to effectively fuse 2D-3D features without using camera poses or depth maps. Our MIT achieves promising performance without using any point-level or pixel-level annotations. Furthermore, we develop class token consistency to align the multimodal features. MIT is end-to-end trainable. It has been extensively evaluated on two challenging real-world large-scale datasets. Experiments show that our method performs favorably against existing weakly supervised methods. We believe MIT has the potential to enhance other recognition tasks that involve both 2D and 3D observations, in an efficient manner.
|
| 232 |
+
|
| 233 |
+
Discussion and future work. Our current method has not utilized the spatial information conveyed in the images since global average pooling is applied to the image features. We attempted to flatten image features instead of using global average pooling to obtain patch tokens, similar to [12, 32], but achieved inferior results. One possible reason is that a large number of patch tokens introduces noise under scene-level supervision. A solution to this issue can achieve joint 2D-3D segmentation with weak supervision, which could be an interesting area for future research.
|
| 234 |
+
|
| 235 |
+
Acknowledgements. This work was supported in part by the National Science and Technology Council (NSTC) under grants 111-2628-E-A49-025-MY3, 112-2221-E-A49-090-MY3, 111-2634-F-002-023, 111-2634-F-006-012, 110-2221-E-002-124-MY3, and 111-2634-F-002-022. This work was funded in part by MediaTek, Qualcomm, NVIDIA, and NTU-112L900902.
|
| 236 |
+
|
| 237 |
+
# References
|
| 238 |
+
|
| 239 |
+
[1] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In CVPR, 2018. 9
|
| 240 |
+
[2] Evangelos Alexiou, Evgeniy Upenik, and Touradj Ebrahimi. Towards subjective quality assessment of point cloud imaging in augmented reality. In MMSP, 2017. 1
|
| 241 |
+
[3] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3D semantic parsing of large-scale indoor spaces. In CVPR, 2016. 2, 5
|
| 242 |
+
[4] Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei. What's the point: Semantic segmentation with point supervision. In ECCV, 2016. 6, 9
|
| 243 |
+
[5] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Jurgen Gall. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In ICCV, 2019. 1
|
| 244 |
+
[6] Yu-Ting Chang, Qiaosong Wang, Wei-Chih Hung, Robinson Piramuthu, Yi-Hsuan Tsai, and Ming-Hsuan Yang. Weakly-supervised semantic segmentation via sub-category exploration. In CVPR, 2020. 5
|
| 245 |
+
[7] Siheng Chen, Baoan Liu, Chen Feng, Carlos Vallespi-Gonzalez, and Carl Wellington. 3D point cloud processing and learning for autonomous driving: Impacting map creation, localization, and perception. IEEE Signal Processing Magazine, 2020. 1
|
| 246 |
+
[8] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3D object detection network for autonomous driving. In CVPR, 2017. 1
|
| 247 |
+
[9] Julian Chibane, Francis Engelmann, Tuan Anh Tran, and Gerard Pons-Moll. Box2mask: Weakly supervised 3D semantic instance segmentation using bounding boxes. In ECCV, 2022. 1, 2
|
| 248 |
+
[10] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4D spatio-temporal convnets: Minkowski convolutional neural networks. In CVPR, 2019. 1, 3, 5, 6
|
| 249 |
+
[11] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In ICCV, 2017. 2, 5
|
| 250 |
+
[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv:2010.11929, 2020. 9
|
| 251 |
+
[13] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. IJRR, 2013. 1
|
| 252 |
+
[14] Kyle Genova, Xiaoqi Yin, Abhijit Kundu, Caroline Pantofaru, Forrester Cole, Avneesh Sud, Brian Brewington, Brian Shucker, and Thomas Funkhouser. Learning 3D semantic segmentation with only 2D image supervision. In International Conference on 3D Vision (3DV), 2021. 2
|
| 253 |
+
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 4, 5
|
| 254 |
+
[16] Ji Hou, Benjamin Graham, Matthias Nießner, and Saining Xie. Exploring data-efficient 3D scene understanding with contrastive scene contexts. In CVPR, 2021. 1, 2, 5
|
| 255 |
+
|
| 256 |
+
[17] Ji Hou, Saining Xie, Benjamin Graham, Angela Dai, and Matthias Nießner. Pri3D: Can 3D priors help 2D representation learning? In ICCV, 2021. 2
|
| 257 |
+
[18] Qingyong Hu, Bo Yang, Guangchi Fang, Yulan Guo, Ales Leonardis, Niki Trigoni, and Andrew Markham. SQN: Weakly-supervised semantic segmentation of large-scale 3D point clouds. In ECCV, 2022. 2
|
| 258 |
+
[19] Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, and Tien-Tsin Wong. Bidirectional projection network for cross dimension scene understanding. In CVPR, 2021. 1, 2, 5, 6, 8
|
| 259 |
+
[20] Maximilian Jaritz, Jiayuan Gu, and Hao Su. Multi-view pointnet for 3D scene understanding. In CVPR Workshop, 2019. 1, 2
|
| 260 |
+
[21] Jinwoo Kim, Jaehoon Yoo, Juho Lee, and Seunghoon Hong. Setvae: Learning hierarchical composition for generative modeling of set-structured data. In CVPR, 2021. 2
|
| 261 |
+
[22] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014. 5
|
| 262 |
+
[23] Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, and Caroline Pantofaru. Virtual multi-view fusion for 3D semantic segmentation. In ECCV, 2020. 1, 2, 8
|
| 263 |
+
[24] Hyeokjun Kweon and Kuk-Jin Yoon. Joint learning of 2D-3D weakly supervised semantic segmentation. In NeurIPS, 2022. 1, 2, 5, 6, 9
|
| 264 |
+
[25] Florent Lafarge and Clément Mallet. Creating large-scale city models from 3D-point clouds: A robust approach with hybrid representation. IJCV, 2012. 3
|
| 265 |
+
[26] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In CVPR, 2018. 1
|
| 266 |
+
[27] Min Seok Lee, Seok Woo Yang, and Sung Won Han. Gaia: Graphical information gain based attention network for weakly supervised point cloud semantic segmentation. In WACV, 2023. 2
|
| 267 |
+
[28] Mengtian Li, Yuan Xie, Yunhang Shen, Bo Ke, Ruizhi Qiao, Bo Ren, Shaohui Lin, and Lizhuang Ma. Hybridcr: Weakly-supervised 3D point cloud semantic segmentation via hybrid contrastive regularization. In CVPR, 2022. 2
|
| 268 |
+
[29] Siqi Li, Changqing Zou, Yipeng Li, Xibin Zhao, and Yue Gao. Attention-based multi-modal fusion network for semantic scene completion. In AAAI, 2020. 2
|
| 269 |
+
[30] Kangcheng Liu, Yuzhi Zhao, Zhi Gao, and Ben M Chen. Weaklabel3d-net: A complete framework for real-scene lidar point clouds weakly supervised multi-tasks understanding. In ICRA, 2022. 2
|
| 270 |
+
[31] Qing Liu, Vignesh Ramanathan, Dhruv Mahajan, Alan Yuille, and Zhenheng Yang. Weakly supervised instance segmentation for videos with temporal mask consistency. In CVPR, 2021. 3
|
| 271 |
+
[32] Yingfei Liu, Tiancai Wang, Xiangyu Zhang, and Jian Sun. PETR: Position embedding transformation for multi-view 3D object detection. In ECCV, 2022. 1, 2, 9
|
| 272 |
+
[33] Zhengzhe Liu, Xiaojuan Qi, and Chi-Wing Fu. One thing one click: A self-training approach for weakly supervised 3D semantic segmentation. In CVPR, 2021. 1, 2, 3, 5, 6
|
| 273 |
+
|
| 274 |
+
[34] John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J Davison. SceneNet RGB-D: Can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation? In ICCV, 2017.
|
| 275 |
+
[35] Youngmin Park, Vincent Lepetit, and Woontack Woo. Multiple 3D object tracking for augmented reality. In ISMAR, 2008. 1
|
| 276 |
+
[36] Charles R Qi, Xinlei Chen, Or Litany, and Leonidas J Guibas. ImVoteNet: Boosting 3D object detection in point clouds with image votes. In CVPR, 2020. 1
|
| 277 |
+
[37] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017. 5
|
| 278 |
+
[38] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017. 3, 5
|
| 279 |
+
[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 5
|
| 280 |
+
[40] Zhongzheng Ren, Ishan Misra, Alexander G Schwing, and Rohit Girdhar. 3D spatial recognition without spatially labeled 3D. In CVPR, 2021. 1, 2, 4, 5, 6
|
| 281 |
+
[41] Damien Robert, Bruno Vallet, and Loic Landrieu. Learning multi-view aggregation in the wild for large-scale 3D semantic segmentation. In CVPR, 2022. 1, 2, 5, 6
|
| 282 |
+
[42] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 2015. 5
|
| 283 |
+
[43] Hanyu Shi, Jiacheng Wei, Ruibo Li, Fayao Liu, and Guosheng Lin. Weakly supervised segmentation on outdoor 4D point clouds with temporal matching and spatial graph propagation. In CVPR, 2022. 2
|
| 284 |
+
[44] Guolei Sun, Wenguan Wang, Jifeng Dai, and Luc Van Gool. Mining cross-image semantics for weakly supervised semantic segmentation. In ECCV, 2020. 4, 9
|
| 285 |
+
[45] Tianfang Sun, Zhizhong Zhang, Xin Tan, Yanyun Qu, Yuan Xie, and Lizhuang Ma. Image understands point cloud: Weakly supervised 3D semantic segmentation via association learning. arXiv:2209.07774, 2022. 2
|
| 286 |
+
[46] Weixuan Sun, Jing Zhang, and Nick Barnes. 3D guided weakly supervised semantic segmentation. In ACCV, 2020. 2
|
| 287 |
+
[47] An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, and Jie Zhou. SegGroup: Seg-level supervision for 3D instance and semantic segmentation. TIP, 2022. 2
|
| 288 |
+
[48] Ardian Umam, Cheng-Kun Yang, Yung-Yu Chuang, Jen-Hui Chuang, and Yen-Yu Lin. Point mixSwap: Attentional point cloud mixing via swapping matched structural divisions. In ECCV, 2022. 2
|
| 289 |
+
[49] Ozan Unal, Dengxin Dai, and Luc Van Gool. Scribblesupervised lidar semantic segmentation. In CVPR, 2022. 2
|
| 290 |
+
[50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 4
|
| 291 |
+
|
| 292 |
+
[51] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. TOG, 2019. 5
|
| 293 |
+
[52] Yikai Wang, TengQi Ye, Lele Cao, Wenbing Huang, Fuchun Sun, Fengxiang He, and Dacheng Tao. Bridged transformer for vision and point cloud 3D object detection. In CVPR, 2022. 1, 2
|
| 294 |
+
[53] Ziyi Wang, Yongming Rao, Xumin Yu, Jie Zhou, and Jiwen Lu. SemAffiNet: Semantic-affine transformation for point cloud segmentation. In CVPR, 2022. 1, 2, 5, 6
|
| 295 |
+
[54] Jiacheng Wei, Guosheng Lin, Kim-Hui Yap, Tzu-Yi Hung, and Lihua Xie. Multi-path region mining for weakly supervised 3D semantic segmentation on point clouds. In CVPR, 2020. 1, 2, 5, 6, 8
|
| 296 |
+
[55] Zhennan Wu, Yang Li, Yifei Huang, Lin Gu, Tatsuya Harada, and Hiroyuki Sato. 3D Segmenter: 3D transformer based semantic segmentation via 2D panoramic distillation. In ICLR, 2023. 2
|
| 297 |
+
[56] Zhonghua Wu, Yicheng Wu, Guosheng Lin, Jianfei Cai, and Chen Qian. Dual adaptive transformations for weakly supervised point cloud segmentation. In ECCV, 2022. 2
|
| 298 |
+
[57] Lian Xu, Wanli Ouyang, Mohammed Bennamoun, Farid Boussaid, and Dan Xu. Multi-class token transformer for weakly supervised semantic segmentation. In CVPR, 2022. 2, 3, 4, 5, 6
|
| 299 |
+
[58] Xun Xu and Gim Hee Lee. Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In CVPR, 2020. 1, 2, 5, 8
|
| 300 |
+
[59] Xu Yan, Jiantao Gao, Chaoda Zheng, Chao Zheng, Ruimao Zhang, Shuguang Cui, and Zhen Li. 2Dpass: 2D priors assisted semantic segmentation on lidar point clouds. In ECCV, 2022. 2
|
| 301 |
+
[60] Chaolong Yang, Yuyao Yan, Weiguang Zhao, Jianan Ye, Xi Yang, Amir Hussain, and Kaizhu Huang. Towards deeper and better multi-view feature fusion for 3D semantic segmentation. arXiv:2212.06682, 2022. 2
|
| 302 |
+
[61] Cheng-Kun Yang, Ji-Jia Wu, Kai-Syun Chen, Yung-Yu Chuang, and Yen-Yu Lin. An MIL-derived transformer for weakly supervised point cloud segmentation. In CVPR, 2022. 1, 2, 5, 6, 9
|
| 303 |
+
[62] Ze Yang and Liwei Wang. Learning relationships for multi-view 3D object recognition. In ICCV, 2019. 1, 2
|
| 304 |
+
[63] Ping-Chung Yu, Cheng Sun, and Min Sun. Data efficient 3D learner via knowledge transferred from 2D model. In ECCV, 2022. 2, 5, 6, 8
|
| 305 |
+
[64] Zhihao Yuan, Xu Yan, Yinghong Liao, Yao Guo, Guanbin Li, Shuguang Cui, and Zhen Li. X-trans2cap: Cross-modal knowledge transfer using transformer for 3D dense captioning. In CVPR, 2022. 2
|
| 306 |
+
[65] Yachao Zhang, Zonghao Li, Yuan Xie, Yanyun Qu, Cuihua Li, and Tao Mei. Weakly supervised semantic segmentation for large-scale point cloud. In AAAI, 2021. 1, 2
|
| 307 |
+
[66] Yachao Zhang, Yanyun Qu, Yuan Xie, Zonghao Li, Shanshan Zheng, and Cuihua Li. Perturbed self-distillation: Weakly supervised large-scale point cloud semantic segmentation. In ICCV, 2021. 2
|
2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4b18ea3ef0ad8bd6a0a4f1feb5557bcd2e01ba32685b6bb8638630f8cd8b8b8c
|
| 3 |
+
size 457509
|
2d3dinterlacedtransformerforpointcloudsegmentationwithscenelevelsupervision/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:41d10a7362452840c5c587789d68acef8ae0decc5bb6273228cda4f7b85ef802
|
| 3 |
+
size 456674
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ca67d030295df95924675aca8fba8cafa6669ec1985f581744df30220e4b0253
|
| 3 |
+
size 80969
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d943dfb628edf3e408de837e7762aaff3de6132751c40a97c5f0227775ee9be1
|
| 3 |
+
size 100291
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/68525c70-fbeb-4652-9a57-0517f70184c0_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:920a062f10ca6e69c6201763f331fd23a0603c92ff4cd3e40a19657a94bfd8ec
|
| 3 |
+
size 9274270
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/full.md
ADDED
|
@@ -0,0 +1,305 @@
|
| 1 |
+
# 2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration between Images and Point Clouds
|
| 2 |
+
|
| 3 |
+
Minhao Li $^{1*}$ Zheng Qin $^{2,1*}$ Zhirui Gao $^{1}$ Renjiao Yi $^{1}$ Chenyang Zhu $^{1}$ Yulan Guo $^{1,3}$ Kai Xu $^{1\dagger}$ $^{1}$ National University of Defense Technology
|
| 4 |
+
$^{2}$ Defense Innovation Institute, Academy of Military Sciences $^{3}$ Sun Yat-sen University
|
| 5 |
+
|
| 6 |
+

|
| 7 |
+
Input Image and Point Cloud
|
| 8 |
+
|
| 9 |
+

|
| 10 |
+
Multi-scale Patch Matching
|
| 11 |
+
|
| 12 |
+

|
| 13 |
+
Dense Pixel-Point Matching
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
High-Quality 2D-3D Correspondences
|
| 19 |
+
Figure 1: We propose 2D3D-MATR, a novel detection-free method for accurate inter-modality matching between images and point clouds. Our method adopts a coarse-to-fine pipeline where it first computes coarse correspondences between downsampled image patches and point patches and then extends them to form dense pixel-point correspondences within the patch region. A multi-scale sampling and matching scheme is devised to resolve the scale ambiguity in patch matching. Compared to detection-based P2-Net (bottom-right) and single-scale patch matching (middle-right), 2D3D-MATR (top-right) extracts significantly more accurate and dense 2D-3D correspondences. The inliers are in green and the outliers are in red.
|
| 20 |
+
|
| 21 |
+
# Abstract
|
| 22 |
+
|
| 23 |
+
The commonly adopted detect-then-match approach to registration encounters difficulties in cross-modality cases due to incompatible keypoint detection and inconsistent feature description. We propose 2D3D-MATR, a detection-free method for accurate and robust registration between images and point clouds. Our method adopts a coarse-to-fine pipeline where it first computes coarse correspondences between downsampled patches of the input image and the point cloud and then extends them to form dense correspondences between pixels and points within the patch region. The coarse-level patch matching is based on a transformer which jointly learns global contextual constraints with self-attention and cross-modality correlations with cross-attention. To resolve the scale ambiguity in patch matching, we construct a multi-scale pyramid for each image
|
| 24 |
+
|
| 25 |
+
patch and learn to find for each point patch the best matching image patch at a proper resolution level. Extensive experiments on two public benchmarks demonstrate that 2D3D-MATR outperforms the previous state-of-the-art P2-Net by around 20 percentage points on inlier ratio and over 10 points on registration recall. Our code and models are available at https://github.com/minhaolee/2D3DMATR.
|
| 26 |
+
|
| 27 |
+
# 1. Introduction
|
| 28 |
+
|
| 29 |
+
The inter-modality registration between images and point clouds finds applications in many computer vision tasks, e.g., 3D reconstruction, camera relocalization, SLAM and AR. It aims at estimating a rigid transformation that aligns a scene point cloud into the camera coordinates of an image capturing the same scene. The typical pipeline of 2D-3D registration is to first extract correspondences between
|
| 30 |
+
|
| 31 |
+
pixels and points and then adopt robust pose estimators such as PnP-RANSAC [25, 16] to recover the alignment transformation. Therefore, the accuracy of the putative correspondences is the crux of a successful registration.
|
| 32 |
+
|
| 33 |
+
Following the intra-modality correspondence methods for stereo images [12, 32, 44, 14] or point clouds [18, 9, 2, 21], 2D-3D matching methods [15, 37, 51] usually adopt a detect-then-match approach where 2D and 3D keypoints are first detected independently in the image and the point cloud, respectively, and then matched based on their associated descriptors. Such a method, however, suffers from two difficulties. First, 2D and 3D keypoints are detected in different visual domains: while 2D keypoint detection is based on texture and color information, 3D detection hinges on local geometry. This makes the detection of repeatable keypoints difficult. Second, 2D and 3D descriptors encode different visual information, which hampers extracting consistent descriptors for matching pixels and points. As a consequence, existing 2D-3D matching methods often yield an inlier ratio too low to be practically usable.
|
| 34 |
+
|
| 35 |
+
Recently, the detection-free approach has received increasing attention in both stereo matching [41, 27, 54, 46] and point cloud registration [53, 38]. By skipping keypoint detection, it achieves high-quality correspondences with a coarse-to-fine pipeline: it first establishes coarse correspondences at the level of image or point patches and then refines them into fine-grained matches of pixels or points. This approach has shown strong superiority over detection-based ones due to the exploitation of global contextual information at the patch level. Such success, however, has not been attained for 2D-3D matching. This is because designing coarse-level 2D-3D matching is non-trivial due to the scale ambiguity between image and point patches caused by perspective projection (see Fig. 1). On the one hand, the receptive fields for extracting 2D and 3D features could be misaligned, resulting in inconsistency between 2D and 3D features. On the other hand, many pixels or points may find no counterpart on the other side due to occlusion, leading to considerable ambiguity for fine-level matching.
|
| 36 |
+
|
| 37 |
+
We propose 2D3D-MATR, the first, to our knowledge, detection-free method for accurate and robust 2D-3D registration, which addresses the challenges above. Adopting the coarse-to-fine pipeline, our method first computes coarse correspondences between downsampled patches of the input image and the point cloud and then extends them to form dense correspondences between pixels and points within the patch regions. To achieve accurate feature alignment between image and point patches, we design a coarse-level matching module based on a transformer [50] which jointly learns global contextual constraints with self-attention and cross-modality correlations with cross-attention.
|
| 38 |
+
|
| 39 |
+
Our key insight is that the feature misalignment between 2D and 3D due to projection can be resolved by image-space
|
| 40 |
+
|
| 41 |
+
multi-scale sampling and matching, assuming that the local patches are small and the projection distortion within them is negligible. We construct a multi-scale pyramid for each image patch. During training, we find for each point patch the best matching image patch at a proper resolution level by computing the bilateral overlap between them in the image space. During testing, our model can automatically infer 2D-3D patch correspondences at a proper scale and produces dense correspondences with a high inlier ratio. Extensive experiments on the RGB-D Scenes V2 [22] and 7-Scenes [17] benchmarks demonstrate the efficacy of our method. In particular, 2D3D-MATR outperforms the previous state-of-the-art P2-Net [51] by at least 20 percentage points on inlier ratio and over 10 points on registration recall on the two benchmarks. Our contributions include:
|
| 42 |
+
|
| 43 |
+
- The first detection-free coarse-to-fine matching network for 2D-3D registration which first establishes coarse correspondences of patch level and then refines them into dense correspondences of pixel/point level.
|
| 44 |
+
- A transformer-based coarse matching module learning well-aligned 2D and 3D features with both global contextual constraints and cross-modality correlations.
|
| 45 |
+
- A multi-scale 2D-3D matching scheme that resolves 2D-3D feature misalignment through learning image-space multi-scale features and feature-scale selection.
|
| 46 |
+
|
| 47 |
+
# 2. Related Work
|
| 48 |
+
|
| 49 |
+
Stereo image registration. Traditional stereo image registration methods usually adopt a detect-then-match pipeline to extract correspondences. A set of sparse keypoints is first detected and described with hand-crafted [31, 42] or learning-based descriptors [39, 14, 12, 44, 32] on both sides, and the keypoints are then matched based on feature similarity. Keypoint detection is ill-posed, and detection-free methods [41, 40, 24] bypass the keypoint detection step by computing a correlation matrix between all pairs of features. However, the all-pair correlation matrix requires heavy computation, making the putative correspondences relatively coarse-grained. For this reason, [27, 54, 46] further adopt a coarse-to-fine matching framework, which achieves accurate and efficient image matching.
|
| 50 |
+
|
| 51 |
+
Point cloud registration. Similar progress as in image registration has also been witnessed in point cloud registration. Early works leverage hand-crafted descriptors such as PPF [13] and FPFH [43] for keypoint detection. And recent learning-based descriptors [11, 10, 9, 18, 2, 21] achieve more robust and accurate matching results. To bypass the keypoint detection, CoFiNet [53] introduces the coarse-to-fine strategy to the matching of point clouds. And GeoTransformer [38] further designs a transformation-invariant geometric structure embedding and achieves RANSAC-free point cloud registration. Moreover, there are also methods
|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
Figure 2: Overall pipeline of 2D3D-MATR. We first progressively downsample the input image $\mathbf{I}$ and the point cloud $\mathbf{P}$ and learn multi-scale 2D and 3D features. The 2D and 3D features $\hat{\mathbf{F}}^{\mathcal{I}}$ and $\hat{\mathbf{F}}^{\mathcal{P}}$ at the coarsest stage are used to extract coarse correspondences between the local patches of the image and the point cloud. A multi-scale patch matching module is devised to learn global contextual constraints and cross-modality correlations. Next, the patch correspondences are extended to dense pixel-point correspondences based on the high-resolution features $\mathbf{F}^{\mathcal{I}}$ and $\mathbf{F}^{\mathcal{P}}$. Finally, PnP-RANSAC is adopted to estimate the alignment transformation.
|
| 55 |
+
|
| 56 |
+
focusing on removing outlier correspondences [1, 8, 23], which act as an effective alternative to traditional robust estimators such as RANSAC [16].
|
| 57 |
+
|
| 58 |
+
Inter-modality registration. Compared to intra-modality matching problems, inter-modality matching between images and point clouds is more difficult. Based on how the correspondences are established, previous works can be classified into two categories. The first class focuses on visual localization in a known scene. Its main idea is to predict the 3D coordinates of each image pixel with decision trees [45, 34, 35, 4, 49] or neural networks [3, 5, 6, 28, 29, 33, 52]. However, this class of methods lacks generality to novel scenes. The second class follows the traditional detect-then-match pipeline [15, 37, 51], where keypoints are first detected in each modality and then matched with the associated descriptors. Compared to the first class, these methods have better generality in theory. However, detecting repeatable inter-modality keypoints is much more difficult and unstable, as keypoints are defined and described in different visual domains. For this reason, existing methods still suffer from low inlier ratios. In this work, we propose 2D3D-MATR to address these issues with two specific designs, i.e., coarse-to-fine matching and transformer-based multi-scale patch matching.
|
| 59 |
+
|
| 60 |
+
# 3. Method
|
| 61 |
+
|
| 62 |
+
# 3.1. Overview
|
| 63 |
+
|
| 64 |
+
Given an image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ and a point cloud $\mathbf{P} \in \mathbb{R}^{N \times 3}$ of a scene, the goal of 2D-3D registration is to recover the alignment transformation $\mathbf{T}$ between them, which is composed of a 3D rotation $\mathbf{R} \in \mathcal{SO}(3)$ and a 3D translation
|
| 65 |
+
|
| 66 |
+
$\mathbf{t} \in \mathbb{R}^3$. A traditional 2D-3D registration pipeline first extracts correspondences $\mathcal{C} = \{(\mathbf{x}_i, \mathbf{y}_i) \mid \mathbf{x}_i \in \mathbb{R}^3, \mathbf{y}_i \in \mathbb{R}^2\}$ between 3D points and 2D pixels, and then estimates the transformation by minimizing the 2D projection error:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\min _ {\mathbf {R}, \mathbf {t}} \sum_ {\left(\mathbf {x} _ {i}, \mathbf {y} _ {i}\right) \in \mathcal {C}} \| \mathcal {K} \left(\mathbf {R} \mathbf {x} _ {i} + \mathbf {t}, \mathbf {K}\right) - \mathbf {y} _ {i} \| ^ {2}, \tag {1}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
where $\mathbf{K}$ is the camera intrinsic matrix and $\mathcal{K}(\cdot, \mathbf{K})$ is the projection function from 3D space to the image plane. This problem can be effectively solved by the PnP-RANSAC algorithm. However, the solution can be erroneous if the correspondences are inaccurate.
|
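For illustration, the following sketch shows how Eq. (1) is typically solved in practice with OpenCV's PnP-RANSAC solver; the correspondences and intrinsics below are random placeholders rather than outputs of our pipeline.

```python
import cv2
import numpy as np

points_3d = np.random.rand(100, 3).astype(np.float64)               # x_i in scene coordinates
pixels_2d = np.random.rand(100, 2).astype(np.float64) * [640, 480]  # y_i in pixel coordinates
K = np.array([[585.0, 0.0, 320.0],
              [0.0, 585.0, 240.0],
              [0.0, 0.0, 1.0]])                                      # placeholder intrinsic matrix

# Robustly estimate (R, t) by minimizing the 2D reprojection error with RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, pixels_2d, K, distCoeffs=None,
    reprojectionError=8.0, iterationsCount=1000)
R, _ = cv2.Rodrigues(rvec)                                           # rotation matrix from Rodrigues vector
print(ok, R.shape, tvec.ravel())
```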
| 73 |
+
|
| 74 |
+
In this work, we present a method to hierarchically extract inter-modality correspondences. We first adopt two modality-specific backbones to learn features for the image and the point cloud (Sec. 3.2). Next, we extract a set of coarse correspondences between the downsampled patches of the image and the point cloud (Sec. 3.3). At last, the patch correspondences are refined into dense pixel-point correspondences at the fine level (Sec. 3.4). Fig. 2 illustrates the overall pipeline of our method.
|
| 75 |
+
|
| 76 |
+
# 3.2. Feature Extraction
|
| 77 |
+
|
| 78 |
+
Backbones. Given a pair of image and point cloud, two modality-specific encoder-decoder backbone networks are adopted for hierarchical feature extraction. For the image, we use a ResNet [19] with FPN [30] to generate multi-scale image features. The downsampled 2D features $\hat{\mathbf{F}}^{\mathcal{I}} \in \mathbb{R}^{\hat{H} \times \hat{W} \times \hat{C}}$ at the smallest resolution and $\mathbf{F}^{\mathcal{I}} \in \mathbb{R}^{H \times W \times C}$ at the original resolution are used for matching at the coarse and fine levels, respectively. For simplicity, we denote the pixel coordinate matrices for $\hat{\mathbf{F}}^{\mathcal{I}}$ and $\mathbf{F}^{\mathcal{I}}$ as $\hat{\mathbf{Q}} \in \mathbb{R}^{\hat{H} \times \hat{W} \times 2}$ and
|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
Figure 3: Multi-scale patch matching. Given the coarse 2D and 3D features, we first learn global contextual constraints with self-attention and cross-modality correlations with cross-attention. Then we adopt an image-space multi-scale sampling and matching strategy to extract patch correspondences which are better aligned in the image plane.
|
| 82 |
+
|
| 83 |
+
$\mathbf{Q} \in \mathbb{R}^{H \times W \times 2}$, respectively. For the point cloud, we adopt KPFCNN [48] to learn 3D features following [2, 21, 53, 38]. Unlike images, which have fixed resolutions, point clouds usually have inconsistent sizes, and KPFCNN dynamically downsamples them via grid downsampling. We use the points $\hat{\mathbf{P}} \in \mathbb{R}^{\hat{N} \times 3}$ at the coarsest level and their associated features $\hat{\mathbf{F}}^{\mathcal{P}} \in \mathbb{R}^{\hat{N} \times \hat{C}}$ for coarse-level matching, while fine-level matching is conducted on the input point cloud $\mathbf{P}$ and the associated features $\mathbf{F}^{\mathcal{P}} \in \mathbb{R}^{N \times C}$.

Patch construction. To extract patch correspondences on the coarse level, we first need to associate each downsampled pixel (point) with an image (point) patch. For the image, we evenly divide $\mathbf{I}$ into $\hat{H} \times \hat{W}$ patches so that each pixel in $\hat{\mathbf{F}}^{\mathcal{I}}$ corresponds to an image patch of $\frac{H}{\hat{H}} \times \frac{W}{\hat{W}}$ pixels. For the point cloud, we use the point-to-node partition [26] following [53, 38], which assigns each point in $\mathbf{P}$ to its nearest point in $\hat{\mathbf{P}}$ to compose the point patches.
|
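The following sketch (an assumption-laden illustration, not the released implementation; all sizes are placeholders) shows the two patch constructions: a regular grid assignment for image pixels and a nearest-node (point-to-node) assignment for points.

```python
import torch

H, W, Hc, Wc = 480, 640, 60, 80                           # full and coarse image resolutions
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
# Each full-resolution pixel falls into one of the Hc x Wc regular grid patches.
patch_id_2d = (ys // (H // Hc)) * Wc + (xs // (W // Wc))  # (H, W) patch index per pixel

points = torch.randn(20000, 3)                            # dense points P
nodes = points[torch.randperm(20000)[:1024]]              # coarse points P_hat (placeholder sampling)
# Point-to-node partition: every point is assigned to its nearest coarse node.
dist = torch.cdist(points, nodes)                         # (N, N_hat) pairwise distances
patch_id_3d = dist.argmin(dim=1)                          # (N,) node index per point
print(patch_id_2d.shape, patch_id_3d.shape)
```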
| 84 |
+
|
| 85 |
+
# 3.3. Multi-scale Patch Matching
|
| 86 |
+
|
| 87 |
+
Attention-based feature refinement. Given the downsampled image $(\hat{\mathbf{Q}},\hat{\mathbf{F}}^{\mathcal{I}})$ and point cloud $(\hat{\mathbf{P}},\hat{\mathbf{F}}^{\mathcal{P}})$, our goal at the coarse level is to extract patch correspondences that overlap with each other. However, inter-modality matching between 2D and 3D is non-trivial. First, 2D and 3D features are learned from different domains, leading to severe inconsistency between them. This problem is more serious in patch matching than in point matching, as patch features are learned from a large context, which aggravates the feature misalignment. Second, as noted in [46, 53, 38], coarse-level matching relaxes the matching criterion from the strict 3D distance to a much looser local texture-geometry similarity. This effectively eases the matching difficulty but requires more global context. For this reason, we devise a transformer-based [50] feature refinement module to learn global contextual constraints and cross-modality correlations.
|
| 88 |
+
|
| 89 |
+
Before feeding them into the transformer, we first augment the 2D
|
| 90 |
+
|
| 91 |
+
and 3D features with their positional information:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
\hat {\mathbf {F}} _ {\mathrm {p o s}} ^ {\mathcal {I}} = \hat {\mathbf {F}} ^ {\mathcal {I}} + \phi (\hat {\mathbf {Q}}), \quad \hat {\mathbf {F}} _ {\mathrm {p o s}} ^ {\mathcal {P}} = \hat {\mathbf {F}} ^ {\mathcal {P}} + \phi (\hat {\mathbf {P}}), \tag {2}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
and $\phi (\cdot)$ is the Fourier embedding function [36]:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\phi (x) = \left[ x, \sin \left(2 ^ {0} x\right), \cos \left(2 ^ {0} x\right), \dots , \sin \left(2 ^ {L - 1} x\right), \cos \left(2 ^ {L - 1} x\right) \right], \tag {3}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
where $L$ is the length of the embedding. For simplicity, we then flatten the first two spatial dimensions of the 2D features and use $\hat{\mathbf{F}}_{\mathrm{pos}}^{\mathcal{I}}, \hat{\mathbf{F}}_{\mathrm{pos}}^{\mathcal{P}}$ in the subsequent computation.
|
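A minimal sketch of the Fourier embedding in Eq. (3) is given below; the embedding length $L$ and input shapes are placeholders, and in practice the embedding would be mapped to the feature dimension (e.g., by a linear layer) before the addition in Eq. (2), which is omitted here.

```python
import torch

def fourier_embed(x: torch.Tensor, L: int = 4) -> torch.Tensor:
    """Map coordinates (..., D) to (..., D * (2L + 1)) with sin/cos at octave frequencies."""
    out = [x]
    for k in range(L):
        out.append(torch.sin((2.0 ** k) * x))
        out.append(torch.cos((2.0 ** k) * x))
    return torch.cat(out, dim=-1)

coords = torch.rand(4800, 2)        # e.g. flattened downsampled pixel coordinates
print(fourier_embed(coords).shape)  # torch.Size([4800, 18]) for L = 4
```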
| 104 |
+
|
| 105 |
+
Afterwards, we leverage a transformer to further refine the features in the two modalities. Given anchor features $\mathbf{F}^{\mathcal{A}}\in \mathbb{R}^{|\mathcal{A}|\times d}$ and memory features $\mathbf{F}^{\mathcal{M}}\in \mathbb{R}^{|\mathcal{M}|\times d}$, the transformer models the pairwise correlations between them with the attention mechanism to generate more discriminative features. Specifically, the two sets of features are first projected as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\mathbf {Q} = \mathbf {F} ^ {\mathcal {A}} \mathbf {W} ^ {Q}, \mathbf {K} = \mathbf {F} ^ {\mathcal {M}} \mathbf {W} ^ {K}, \mathbf {V} = \mathbf {F} ^ {\mathcal {M}} \mathbf {W} ^ {V}, \tag {4}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $\mathbf{W}^Q, \mathbf{W}^K, \mathbf{W}^V \in \mathbb{R}^{d \times d}$ are the projection weights for query, key and value. The attention features for the anchor set are then computed as:
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\operatorname {A t t e n t i o n} \left(\mathbf {F} ^ {\mathcal {A}}, \mathbf {F} ^ {\mathcal {M}}\right) = \operatorname {S o f t m a x} \left(\frac {\mathbf {Q K} ^ {\top}}{d ^ {0 . 5}}\right) \mathbf {V}. \tag {5}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
And the attention features are further projected with a shallow MLP as the final output features.
|
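The following single-head sketch illustrates Eqs. (4)-(5); the feature dimension, the set sizes, and the one-layer output projection are simplifying assumptions rather than the exact released architecture.

```python
import torch
import torch.nn as nn

d = 256
W_q, W_k, W_v = (nn.Linear(d, d, bias=False) for _ in range(3))  # query/key/value projections, Eq. (4)
out_mlp = nn.Linear(d, d)                                        # shallow output projection

def attention(F_anchor: torch.Tensor, F_memory: torch.Tensor) -> torch.Tensor:
    Q, K, V = W_q(F_anchor), W_k(F_memory), W_v(F_memory)
    weights = torch.softmax(Q @ K.transpose(-1, -2) / d ** 0.5, dim=-1)  # Eq. (5)
    return out_mlp(weights @ V)

feats_2d, feats_3d = torch.randn(768, d), torch.randn(1024, d)
self_attn_2d = attention(feats_2d, feats_2d)    # self-attention: anchor and memory from the same modality
cross_attn_2d = attention(feats_2d, feats_3d)   # cross-attention: 3D features as memory
print(self_attn_2d.shape, cross_attn_2d.shape)
```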
| 118 |
+
|
| 119 |
+
We iteratively apply self-attention and cross-attention to refine the 2D and 3D features as shown in Fig. 3. In self-attention, we use the features from the same modality as both the anchor features and memory features for attention computation to encode intra-modality global contextual constraints. In cross-attention, we use the features from one modality as the anchor features and the other modality as the memory features to learn cross-modality correlations. By this means, we can obtain refined 2D and 3D features which are more discriminative and better aligned.
|
| 120 |
+
|
| 121 |
+

|
| 122 |
+
Figure 4: Scale misalignment between image patches and point patches due to perspective projection. Left: when the camera is far from the scene, the 3D patches on the keyboard are properly aligned with the 2D patches, and the 3D patch around the mouse is even slightly smaller than the matched 2D patch. Right: when the camera moves towards the scene, the 3D patches cover several 2D patches, leading to severe matching ambiguity.
|
| 123 |
+
|
| 124 |
+

|
| 125 |
+
|
| 126 |
+
The resultant features are denoted as $\hat{\mathbf{H}}^{\mathcal{I}}\in \mathbb{R}^{\hat{H}\times \hat{W}\times \hat{C}}$ and $\hat{\mathbf{H}}^{\mathcal{P}}\in \mathbb{R}^{\hat{N}\times \hat{C}}$ for the 2D and 3D modalities, respectively.
|
| 127 |
+
|
| 128 |
+
Multi-scale matching. Due to the effect of perspective projection, objects in images suffer from scale ambiguity, i.e., an object looks larger if it lies close to the camera and smaller if it is far from the camera. However, the scale of an object in the point cloud remains unchanged and is agnostic to camera motion. As a result, the 2D and 3D patches could be seriously misaligned: a 3D patch could cover many 2D patches when the camera moves close, and vice versa. Fig. 4 illustrates the misalignment between 2D and 3D patches. This causes a significantly ambiguous objective during training: two nearby point patches with different physical properties could be supervised to have similar features if they are covered by the same image patch. This is undesirable as it aggravates the feature misalignment and harms the distinctiveness of the features.
|
| 129 |
+
|
| 130 |
+
For this reason, we devise an image-space multi-scale sampling and matching strategy to alleviate the scale ambiguity between 2D and 3D patches. Technically, we first divide $\mathbf{I}$ into $\hat{H}_0 \times \hat{W}_0$ patches and then build a $K$-level patch pyramid for each image patch. At each pyramid level, the patch size is halved to generate a more fine-grained patch partition. The features of the patch pyramid are obtained by a lightweight $K$-stage CNN. We first downsample $\hat{\mathbf{H}}^{\mathcal{I}}$ to fit the finest patch pyramid level. The 2D features are then downsampled by a factor of 2 at each stage to match the resolution of each patch pyramid level. For simplicity, the 2D patch features at the $k^{\mathrm{th}}$ level are denoted as $\hat{\mathbf{H}}_k^{\mathcal{I}}$. At last, the multi-scale 2D patch features $\{\hat{\mathbf{H}}_k^{\mathcal{I}}\}$ and the 3D patch features $\hat{\mathbf{H}}^{\mathcal{P}}$ are normalized onto a unit hypersphere as the final features.
|
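A minimal sketch of this pyramid construction is given below; strided average pooling stands in for the lightweight $K$-stage CNN, and the feature map size is a placeholder.

```python
import torch
import torch.nn.functional as F

K = 3
feat_2d = torch.randn(1, 256, 24, 32)           # refined 2D features at the finest pyramid level
levels = []
cur = feat_2d
for k in range(K):
    flat = cur.flatten(2).transpose(1, 2)       # (1, H_k * W_k, C) patch features at level k
    levels.append(F.normalize(flat, dim=-1))    # unit-length descriptors on the hypersphere
    cur = F.avg_pool2d(cur, kernel_size=2)      # halve the resolution for the next (coarser) level
print([lv.shape for lv in levels])              # 24x32, 12x16, and 6x8 patch partitions
```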
| 131 |
+
|
| 132 |
+
By leveraging the multi-scale matching strategy, we find for each 3D patch the 2D patch that coincides best with it on the image plane during training: the 3D patches far
|
| 133 |
+
|
| 134 |
+

|
| 135 |
+
Figure 5: Multi-scale patch matching based on an image-space patch pyramid with 3 levels. One matched patch pair is shown for each pyramid level. The 3D patches far from the camera are matched to small 2D patches in a later level, while the close ones are matched to large 2D patches in an early level.
|
| 136 |
+
|
| 137 |
+
from the camera prefer small 2D patches in a later level, while the close ones are more likely to match with large 2D patches in an early level. Fig. 5 illustrates our multi-scale matching strategy, where our method provides 3D patches with better-aligned 2D patches. This effectively alleviates the matching ambiguity and reduces the difficulty of learning consistent 2D and 3D features. During inference, the putative patch correspondences $\hat{\mathcal{C}}$ are extracted with mutual top-$k$ selection [38]:
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
(x_i, y_i) \in \hat{\mathcal{C}} \Leftrightarrow \left(\hat{\mathbf{h}}_*^{\mathcal{I}}(x_i) \text{ is a } k\text{-NN of } \hat{\mathbf{h}}^{\mathcal{P}}(y_i)\right) \wedge \left(\hat{\mathbf{h}}^{\mathcal{P}}(y_i) \text{ is a } k\text{-NN of } \hat{\mathbf{h}}_*^{\mathcal{I}}(x_i)\right). \tag{6}
|
| 141 |
+
$$
|
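A minimal sketch of this mutual top-$k$ selection on unit-normalized descriptors is shown below; on unit features, ranking by cosine similarity is equivalent to ranking by feature distance, and $k$ and the descriptor counts are placeholders.

```python
import torch

def mutual_topk(desc_2d: torch.Tensor, desc_3d: torch.Tensor, k: int = 3) -> torch.Tensor:
    sim = desc_2d @ desc_3d.t()                                   # (M, N) cosine similarities
    topk_2d = sim.topk(k, dim=1).indices                          # for each 2D patch: its k best 3D patches
    topk_3d = sim.topk(k, dim=0).indices                          # for each 3D patch: its k best 2D patches
    mask_2d = torch.zeros_like(sim, dtype=torch.bool).scatter_(1, topk_2d, True)
    mask_3d = torch.zeros_like(sim, dtype=torch.bool).scatter_(0, topk_3d, True)
    return (mask_2d & mask_3d).nonzero(as_tuple=False)            # mutually selected (x_i, y_i) pairs

desc_2d = torch.nn.functional.normalize(torch.randn(500, 256), dim=-1)
desc_3d = torch.nn.functional.normalize(torch.randn(400, 256), dim=-1)
print(mutual_topk(desc_2d, desc_3d).shape)
```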
| 142 |
+
|
| 143 |
+
# 3.4. Dense Pixel-Point Matching
|
| 144 |
+
|
| 145 |
+
After obtaining the patch correspondences, we further refine them into dense pixel-point correspondences. For each $(x_{i},y_{i})\in \hat{\mathcal{C}}$, we collect the fine-level 2D and 3D features of its local pixels and points from $\mathbf{F}^{\mathcal{I}}$ and $\mathbf{F}^{\mathcal{P}}$, denoted as $\mathbf{F}_i^\mathcal{I}$ and $\mathbf{F}_i^\mathcal{P}$. For computational efficiency, we uniformly sample $1/4$ of the pixels in each 2D patch. Following Sec. 3.3, we normalize $\mathbf{F}_i^\mathcal{I}$ and $\mathbf{F}_i^\mathcal{P}$ to unit-length vectors and match the pixels and points with mutual top-$k$ selection to obtain the local dense correspondences of $(x_{i},y_{i})$. We do not adopt a specific matching layer such as Sinkhorn [44, 53, 38] here, as the 2D patches at large scales could contain a huge number of pixels (e.g., 1600 pixels in our experiments), which incurs an unacceptable computational cost. In contrast, mutual top-$k$ selection is very efficient and still achieves promising performance. At last, we gather the local correspondences of all $(x_{i},y_{i})$ from $\hat{\mathcal{C}}$ as the final dense pixel-point correspondences. Note that as the 2D patches from different scales
|
| 146 |
+
|
| 147 |
+
can overlap with each other, we explicitly remove the repeated correspondences from the final correspondences.
|
| 148 |
+
|
| 149 |
+
# 3.5. Loss Functions
|
| 150 |
+
|
| 151 |
+
Our model is trained in a metric learning fashion. On the coarse level, we adopt a scaled circle loss [47, 38] to supervise the patch features. On the fine level, another standard circle loss [47] is used to supervise the dense pixel and point features. The overall loss is then computed as $\mathcal{L}_{\mathrm{all}} = \mathcal{L}_{\mathrm{coarse}} + \lambda \mathcal{L}_{\mathrm{fine}}$ , where $\lambda = 1$ is a balance factor.
|
| 152 |
+
|
| 153 |
+
Compared to contrastive loss [7] and triplet loss [20], circle loss [47] has a circular decision boundary which facilitates convergence. Given an anchor descriptor $\mathbf{d}_i$ , the descriptors of its positive and negative pairs are denoted as $\mathcal{D}_i^{\mathcal{P}}$ and $\mathcal{D}_i^{\mathcal{N}}$ . The general circle loss on $\mathbf{d}_i$ is computed as:
|
| 154 |
+
|
| 155 |
+
$$
|
| 156 |
+
\mathcal {L} _ {i} = \frac {1}{\gamma} \log \left[ 1 + \sum_ {\mathbf {d} _ {j} \in \mathcal {D} _ {i} ^ {\mathcal {P}}} e ^ {\beta_ {p} ^ {i, j} \left(d _ {i} ^ {j} - \Delta_ {p}\right)} \cdot \sum_ {\mathbf {d} _ {k} \in \mathcal {D} _ {i} ^ {\mathcal {N}}} e ^ {\beta_ {n} ^ {i, k} \left(\Delta_ {n} - d _ {i} ^ {k}\right)} \right], \tag {7}
|
| 157 |
+
$$
|
| 158 |
+
|
| 159 |
+
where $d_i^j$ is the $\ell_2$ feature distance, and $\beta_p^{i,j} = \gamma \lambda_p^{i,j}(d_i^j -\Delta_p)$ and $\beta_n^{i,k} = \gamma \lambda_n^{i,k}(\Delta_n - d_i^k)$ are the individual weights for the positive and negative pairs, with $\lambda_p^{i,j}$ and $\lambda_n^{i,k}$ being the corresponding scaling factors.
|
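The sketch below is a direct transcription of Eq. (7) into PyTorch; the value of $\gamma$ and the pair distances are placeholders, and the released implementation may additionally clamp the weights or use the patch overlaps as $\lambda_p$.

```python
import torch

def circle_loss(d_pos, d_neg, lam_pos, lam_neg, gamma=24.0, delta_p=0.1, delta_n=1.4):
    """d_pos/d_neg: feature distances of the positive/negative pairs of one anchor descriptor."""
    beta_p = gamma * lam_pos * (d_pos - delta_p)           # individual positive-pair weights
    beta_n = gamma * lam_neg * (delta_n - d_neg)           # individual negative-pair weights
    pos_term = torch.exp(beta_p * (d_pos - delta_p)).sum()
    neg_term = torch.exp(beta_n * (delta_n - d_neg)).sum()
    return torch.log1p(pos_term * neg_term) / gamma        # Eq. (7)

d_pos = torch.rand(8) * 0.5          # placeholder distances of positive pairs
d_neg = torch.rand(32) * 0.5 + 1.0   # placeholder distances of negative pairs
print(circle_loss(d_pos, d_neg, lam_pos=torch.rand(8), lam_neg=1.0))
```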
| 160 |
+
|
| 161 |
+
On the coarse level, we generate the ground truth based on the bilateral overlap. A patch pair is regarded as positive if the 2D and 3D overlap ratios between them are both at least $30\%$, and as negative if both overlap ratios are below $20\%$. Please refer to Sec. 4.1 for more details. The overlap ratio between the 2D and 3D patches is used as $\lambda_{p}$, and $\lambda_{n}$ is set to 1. On the fine level, a pixel-point pair is positive if its 3D distance is below $3.75\mathrm{cm}$ and its 2D distance is below 8 pixels, and negative if its 3D distance is above $10\mathrm{cm}$ or its 2D distance is above 12 pixels. The scaling factors are all 1. We ignore all other pairs on both levels during training as the safe region. The margins are set to $\Delta_{p} = 0.1$ and $\Delta_{n} = 1.4$ following [21, 38].
|
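A minimal sketch of this fine-level labeling rule is given below; the distance tensors are random placeholders.

```python
import torch

dist_3d = torch.rand(2048) * 0.2          # 3D distances of candidate pixel-point pairs, in meters
dist_2d = torch.rand(2048) * 20.0         # 2D distances of the same pairs, in pixels

positive = (dist_3d < 0.0375) & (dist_2d < 8.0)
negative = (dist_3d > 0.10) | (dist_2d > 12.0)
ignored = ~(positive | negative)          # the safe region: neither positive nor negative, not supervised
print(positive.sum().item(), negative.sum().item(), ignored.sum().item())
```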
| 162 |
+
|
| 163 |
+
# 4. Experiments
|
| 164 |
+
|
| 165 |
+
As there is no public 2D-3D registration benchmark, we build two challenging benchmarks based on the RGB-D Scenes V2 [22] (Sec. 4.2) and 7-Scenes [17] (Sec. 4.3) datasets, and evaluate the efficacy of 2D3D-MATR on them. Extensive ablation studies are provided to study the influence of different design choices (Sec. 4.4).
|
| 166 |
+
|
| 167 |
+
# 4.1. Implementation Details
|
| 168 |
+
|
| 169 |
+
Network architecture. We adopt a 4-stage ResNet [19] with FPN as the image backbone network, where the output channels of each stage are $\{128, 128, 256, 512\}$. The resolution of the input images is $480 \times 640$ and is downsampled to $60 \times 80$ in the coarsest level. For the 3D backbone, we use a 4-stage KPFCNN [48] where the output channels of each
|
| 170 |
+
|
| 171 |
+
stage are $\{128, 256, 512, 1024\}$. The point clouds are voxelized with an initial voxel size of $2.5\mathrm{cm}$, which is doubled at each stage. In the coarse level, we resize the 2D features to $24\times 32$ before feeding them to the transformer for computational efficiency. All the transformer layers have 256 feature channels with 4 attention heads and ReLU activation. In the patch pyramid, we use $\hat{H}_0 = 6$ and $\hat{W}_0 = 8$ in the coarsest level and build $K = 3$ pyramid levels, i.e., $\{6\times 8, 12\times 16, 24\times 32\}$. In the fine level, we project both the 2D and 3D features to 128-d for feature matching.
|
| 172 |
+
|
| 173 |
+
Metrics. We mainly evaluate the models with 3 metrics: (1) Inlier Ratio (IR), the ratio of pixel-point matches whose 3D distance is below a certain threshold (i.e., $5\mathrm{cm}$ ) over all putative matches. (2) Feature Matching Recall (FMR), the ratio of image-point-cloud pairs whose inlier ratio is above a certain threshold (i.e., $10\%$ ). (3) Registration Recall (RR), the ratio of image-point-cloud pairs whose RMSE is below a certain threshold (i.e., $10\mathrm{cm}$ ).
|
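For clarity, the three metrics can be summarized by the following sketch, where the per-pair statistics are random placeholders.

```python
import numpy as np

def inlier_ratio(dist_3d, tau=0.05):
    """IR: fraction of putative matches whose 3D distance is below tau (5 cm)."""
    return float(np.mean(dist_3d < tau))

def feature_matching_recall(inlier_ratios, tau=0.10):
    """FMR: fraction of image-point-cloud pairs whose IR is above tau (10%)."""
    return float(np.mean(np.asarray(inlier_ratios) > tau))

def registration_recall(rmses, tau=0.10):
    """RR: fraction of image-point-cloud pairs whose RMSE is below tau (10 cm)."""
    return float(np.mean(np.asarray(rmses) < tau))

per_pair_ir = [inlier_ratio(np.random.rand(500) * 0.2) for _ in range(100)]
print(feature_matching_recall(per_pair_ir), registration_recall(np.random.rand(100) * 0.3))
```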
| 174 |
+
|
| 175 |
+
Baselines. We mainly compare to three keypoint-detection-based baseline methods: (1) FCGF-2D3D, a 2D-3D implementation of FCGF [9] which samples random keypoints from the image and the point cloud. (2) Predator-2D3D, a 2D-3D implementation of Predator [21] which leverages a graph network to learn the saliency of each pixel (point) for sampling keypoints. (3) P2-Net [51], a 2D-3D correspondence method which detects locally salient pixels (points) in the feature space. Note that, albeit successful in point cloud registration, Predator-2D3D fails to predict reliable saliency scores in 2D-3D scenarios. Therefore, we ignore the saliency scores in Predator-2D3D and randomly sample keypoints according to the predicted overlap scores. For fair comparison, we use the same backbones for all the methods. Please refer to the supplementary material for more details.
|
| 176 |
+
|
| 177 |
+
# 4.2. Evaluations on RGB-D Scenes V2
|
| 178 |
+
|
| 179 |
+
Dataset. RGB-D Scenes V2 [22] contains 11427 RGB-D frames from 14 indoor scenes. For each scene, we fuse a point cloud fragment from every 25 consecutive depth frames and sample an RGB image every 25 frames. We select the image-point-cloud pairs with an overlap ratio of at least $30\%$. The pairs from scenes 1-8 are used for training, scenes 9 and 10 for validation, and scenes 11-14 for testing. In the end, we obtain a benchmark of 1748 training pairs, 236 validation pairs and 497 testing pairs.
|
| 180 |
+
|
| 181 |
+
Quantitative results. We first compare our method to the baselines on RGB-D Scenes V2 in Tab. 1. For Inlier Ratio, P2-Net outperforms FCGF-2D3D, benefiting from its feature-saliency-based keypoint detection, but it still suffers from a low inlier ratio. Albeit achieving a better inlier ratio on the first two scenes, Predator-2D3D performs worse on the last two scenes, where the camera is closer to the scene. In contrast, thanks to the coarse-to-fine
|
| 182 |
+
|
| 183 |
+
<table><tr><td>Model</td><td>Scene-11</td><td>Scene-12</td><td>Scene-13</td><td>Scene-14</td><td>Mean</td></tr><tr><td>Mean depth (m)</td><td>1.74</td><td>1.66</td><td>1.18</td><td>1.39</td><td>1.49</td></tr><tr><td colspan="6">Inlier Ratio ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>6.8</td><td>8.5</td><td>11.8</td><td>5.4</td><td>8.1</td></tr><tr><td>P2-Net [51]</td><td>9.7</td><td>12.8</td><td>17.0</td><td>9.3</td><td>12.2</td></tr><tr><td>Predator-2D3D [21]</td><td>17.7</td><td>19.4</td><td>17.2</td><td>8.4</td><td>15.7</td></tr><tr><td>2D3D-MATR (ours)</td><td>32.8</td><td>34.4</td><td>39.2</td><td>23.3</td><td>32.4</td></tr><tr><td colspan="6">Feature Matching Recall ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>11.1</td><td>30.4</td><td>51.5</td><td>15.5</td><td>27.1</td></tr><tr><td>P2-Net [51]</td><td>48.6</td><td>65.7</td><td>82.5</td><td>41.6</td><td>59.6</td></tr><tr><td>Predator-2D3D [21]</td><td>86.1</td><td>89.2</td><td>63.9</td><td>24.3</td><td>65.9</td></tr><tr><td>2D3D-MATR (ours)</td><td>98.6</td><td>98.0</td><td>88.7</td><td>77.9</td><td>90.8</td></tr><tr><td colspan="6">Registration Recall ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>26.4</td><td>41.2</td><td>37.1</td><td>16.8</td><td>30.4</td></tr><tr><td>P2-Net [51]</td><td>40.3</td><td>40.2</td><td>41.2</td><td>31.9</td><td>38.4</td></tr><tr><td>Predator-2D3D [21]</td><td>44.4</td><td>41.2</td><td>21.6</td><td>13.7</td><td>30.2</td></tr><tr><td>2D3D-MATR (ours)</td><td>63.9</td><td>53.9</td><td>58.8</td><td>49.1</td><td>56.4</td></tr></table>
|
| 184 |
+
|
| 185 |
+
matching pipeline and the multi-scale patch pyramid, our 2D3D-MATR significantly improves the inlier ratio by 20 percentage points (pp). And this advantage further contributes to much higher Feature Matching Recall, where our method surpasses the second best P2-Net by over 24 pp.
|
| 186 |
+
|
| 187 |
+
For the most important Registration Recall, P2-Net achieves the best results among the three detection-based baselines. And our method outperforms P2-Net by 18 pp on registration recall thanks to the more accurate correspondences. These results have demonstrated the strong generality of our method to unseen scenes.
|
| 188 |
+
|
| 189 |
+
# 4.3. Evaluations on 7Scenes
|
| 190 |
+
|
| 191 |
+
Dataset. 7-Scenes [17] consists of 46 RGB-D sequences from 7 indoor scenes. We use the same method as above to prepare the image-point-cloud pairs from each scene and preserve the pairs that share at least $50\%$ overlap. We follow the official sequence split to generate the training, validation and testing data, which makes 4048 training pairs, 1011 validation pairs and 2304 testing pairs. Note that compared to the benchmark used in [51], we provide a more challenging one with richer viewpoint changes and smaller overlap ratios. For the evaluation results under the setting of [51], please refer to the supplementary material.
|
| 192 |
+
|
| 193 |
+
Quantitative results. In contrast to Sec. 4.2, on 7-Scenes we evaluate the generality to unseen viewpoints in known scenes. The results are reported in Tab. 2. For Inlier Ratio, our method outperforms the second-best P2-Net by over 18 pp. For Feature Matching Recall, 2D3D-MATR achieves an average improvement of 13.1 pp. And our method surpasses the baseline methods by at least 10 pp on Registration Recall. Surprisingly, Predator-2D3D performs the worst on 7-Scenes. As the image-point-cloud
|
| 194 |
+
|
| 195 |
+
Table 1: Evaluation results on RGB-D Scenes V2. Bold-faced numbers highlight the best and the second best are underlined.
|
| 196 |
+
|
| 197 |
+
<table><tr><td>Model</td><td>Chess</td><td>Fire</td><td>Heads</td><td>Office</td><td>Pumpkin</td><td>Kitchen</td><td>Stairs</td><td>Mean</td></tr><tr><td>Mean depth (m)</td><td>1.78</td><td>1.55</td><td>0.80</td><td>2.03</td><td>2.25</td><td>2.13</td><td>1.84</td><td>1.77</td></tr><tr><td colspan="9">Inlier Ratio ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>34.2</td><td>32.8</td><td>14.8</td><td>26.0</td><td>23.3</td><td>22.5</td><td>6.0</td><td>22.8</td></tr><tr><td>P2-Net [51]</td><td>55.2</td><td>46.7</td><td>13.0</td><td>36.2</td><td>32.0</td><td>32.8</td><td>5.8</td><td>31.7</td></tr><tr><td>Predator-2D3D [21]</td><td>34.7</td><td>33.8</td><td>16.6</td><td>25.9</td><td>23.1</td><td>22.2</td><td>7.5</td><td>23.4</td></tr><tr><td>2D3D-MATR (ours)</td><td>72.1</td><td>66.0</td><td>31.3</td><td>60.7</td><td>50.2</td><td>52.5</td><td>18.1</td><td>50.1</td></tr><tr><td colspan="9">Feature Matching Recall ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>99.7</td><td>98.2</td><td>69.9</td><td>97.1</td><td>83.0</td><td>87.7</td><td>16.2</td><td>78.8</td></tr><tr><td>P2-Net [51]</td><td>100.0</td><td>99.3</td><td>58.9</td><td>99.1</td><td>87.2</td><td>92.2</td><td>16.2</td><td>79.0</td></tr><tr><td>Predator-2D3D [21]</td><td>91.3</td><td>95.1</td><td>76.7</td><td>88.6</td><td>79.2</td><td>80.6</td><td>31.1</td><td>77.5</td></tr><tr><td>2D3D-MATR (ours)</td><td>100.0</td><td>99.6</td><td>98.6</td><td>100.0</td><td>92.4</td><td>95.9</td><td>58.1</td><td>92.1</td></tr><tr><td colspan="9">Registration Recall ↑</td></tr><tr><td>FCGF-2D3D [9]</td><td>89.5</td><td>79.7</td><td>19.2</td><td>85.9</td><td>69.4</td><td>79.0</td><td>6.8</td><td>61.4</td></tr><tr><td>P2-Net [51]</td><td>96.9</td><td>86.5</td><td>20.5</td><td>91.7</td><td>75.3</td><td>85.2</td><td>4.1</td><td>65.7</td></tr><tr><td>Predator-2D3D [21]</td><td>69.6</td><td>60.7</td><td>17.8</td><td>62.9</td><td>56.2</td><td>62.6</td><td>9.5</td><td>48.5</td></tr><tr><td>2D3D-MATR (ours)</td><td>96.9</td><td>90.7</td><td>52.1</td><td>95.5</td><td>80.9</td><td>86.1</td><td>28.4</td><td>75.8</td></tr></table>
|
| 198 |
+
|
| 199 |
+
Table 2: Evaluation results on 7Scenes. Boldfaced numbers highlight the best and the second best are underlined.
|
| 200 |
+
|
| 201 |
+
pairs in 7-Scenes commonly share more overlap, we conjecture that explicitly predicting the overlap scores contributes little benefit but harms the distinctiveness of the learned feature representations.
|
| 202 |
+
|
| 203 |
+
Compared to RGB-D Scenes V2, 7-Scenes has more significant scale variations across different scenes. Nevertheless, our method still outperforms the baseline methods by a large margin, demonstrating the strong robustness of 2D3D-MATR to scale variations. It is noteworthy that 2D3D-MATR achieves more significant improvements on the two hard scenes, i.e., heads and stairs. On the one hand, the camera is much closer to the scene surfaces in heads than in the other scenes. This makes it difficult to extract accurate correspondences, as a small error in 3D space can be amplified on the image plane. On the other hand, stairs contains numerous repeated patterns which are hard to distinguish. Thanks to our multi-scale patch pyramid and coarse-to-fine matching strategy, our method can better handle these hard cases.
|
| 204 |
+
|
| 205 |
+
Qualitative results. Fig. 6 visualizes the correspondences extracted by P2-Net and 2D3D-MATR. We also show a single-scale version of 2D3D-MATR where $24 \times 32$ image patches are used. Our method extracts more accurate and more thoroughly distributed correspondences over the whole scene, which is crucial for successful registration. The last two rows show two difficult cases from heads and stairs. In the $3^{\mathrm{rd}}$ row, P2-Net fails to detect reliable keypoints and thus suffers from a low inlier ratio. Because the camera is placed close to the scene, the single-scale version of 2D3D-MATR can only extract correspondences in the distant background areas. On the contrary, benefiting from the multi-scale patch pyramid, the full 2D3D-MATR extracts much more accurate correspondences distributed over the whole scene. The $4^{\mathrm{th}}$ row contains repeated patterns
|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
(a) P2-Net
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
(b) 2D3D-MATR (single-scale)
|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
(c) 2D3D-MATR (multi-scale)
|
| 215 |
+
Figure 6: Comparisons of correspondences on 7-Scenes. Our method extracts more accurate and more thoroughly distributed correspondences over the whole scene. And it extracts accurate correspondences from repeated patterns (see the $4^{\text{th}}$ row).
|
| 216 |
+
|
| 217 |
+
distributed from near to far. P2-Net detects keypoints near the boundaries but fails to match them correctly. Benefiting from the global contextual constraints and cross-modality correlations learned by the transformer module, 2D3D-MATR extracts more accurate correspondences on the stairs. Please refer to the supplementary material for more visualizations.
|
| 218 |
+
|
| 219 |
+
# 4.4. Ablation Studies
|
| 220 |
+
|
| 221 |
+
We further conduct extensive ablation studies to investigate the efficacy of our designs on RGB-D Scenes V2. Following [38], we report another metric Patch Inlier Ratio (PIR), the ratio of patch correspondences whose overlap ratios are above a certain threshold (i.e., 0.3), to evaluate the performance on the coarse level.
|
| 222 |
+
|
| 223 |
+
Coarse-to-fine matching. First, we ablate the coarse-level matching step in our pipeline and match randomly sampled keypoints from both sides as correspondences. In this model, we apply the attention-based feature refinement module between the encoders and the decoders. As shown in Tab. 3(a), the performance drops significantly without the coarse-to-fine matching pipeline. Compared to strict pixel-point matching, patch matching is more robust and reliable as more context can be leveraged. This effectively reduces the search space during matching and facilitates extracting accurate correspondences.
|
| 224 |
+
|
| 225 |
+
Feature refinement module. Next, we study the influence
|
| 226 |
+
|
| 227 |
+
<table><tr><td>Model</td><td>PIR</td><td>IR</td><td>FMR</td><td>RR</td></tr><tr><td>(a.1) 2D3D-MATR (full)</td><td>48.5</td><td>32.4</td><td>90.8</td><td>56.4</td></tr><tr><td>(a.2) 2D3D-MATR w/o coarse-to-fine</td><td>-</td><td>11.2</td><td>52.2</td><td>34.6</td></tr><tr><td>(a.1) 2D3D-MATR (full)</td><td>48.5</td><td>32.4</td><td>90.8</td><td>56.4</td></tr><tr><td>(b.2) 2D3D-MATR w/o self-attention</td><td>45.9</td><td>29.0</td><td>91.8</td><td>44.0</td></tr><tr><td>(b.3) 2D3D-MATR w/o cross-attention</td><td>50.4</td><td>29.3</td><td>89.1</td><td>47.7</td></tr><tr><td>(b.4) 2D3D-MATR w/o attention</td><td>37.0</td><td>23.1</td><td>87.0</td><td>42.3</td></tr><tr><td>(c.1) 2D3D-MATR (full)</td><td>48.5</td><td>32.4</td><td>90.8</td><td>56.4</td></tr><tr><td>(c.2) 2D3D-MATR w/ (24 × 32)</td><td>37.7</td><td>29.2</td><td>88.3</td><td>36.9</td></tr><tr><td>(c.3) 2D3D-MATR w/ (12 × 16)</td><td>44.2</td><td>29.9</td><td>89.2</td><td>51.7</td></tr><tr><td>(c.4) 2D3D-MATR w/ (6 × 8)</td><td>41.7</td><td>23.6</td><td>87.7</td><td>50.2</td></tr><tr><td>(c.5) 2D3D-MATR w/ (24 × 32, 12 × 16)</td><td>46.1</td><td>32.2</td><td>90.5</td><td>54.5</td></tr><tr><td>(c.6) 2D3D-MATR w/ (24 × 32, 6 × 8)</td><td>42.3</td><td>31.6</td><td>90.0</td><td>51.3</td></tr><tr><td>(c.7) 2D3D-MATR w/ (12 × 16, 6 × 8)</td><td>49.8</td><td>30.9</td><td>90.1</td><td>54.2</td></tr><tr><td>(d.1) 2D3D-MATR (full)</td><td>48.5</td><td>32.4</td><td>90.8</td><td>56.4</td></tr><tr><td>(d.2) 2D3D-MATR w/o mutual top-k</td><td>48.5</td><td>31.7</td><td>91.6</td><td>50.8</td></tr></table>
|
| 228 |
+
|
| 229 |
+
Table 3: Ablation studies on RGB-D Scenes V2. Boldfaced numbers highlight the best results and underlined numbers the second best.
|
| 230 |
+
|
| 231 |
+
of the attention-based feature refinement in Tab. 3(b). We first remove the self-attention modules and the cross-attention modules separately. The model without self-attention suffers a more serious performance degradation, which suggests that global context plays a more important role than cross-modal aggregation in 2D-3D registration. We then remove all attention modules, which further degrades the performance.
|
| 232 |
+
|
| 233 |
+
Multi-scale patch pyramid. We further evaluate the effectiveness of the multi-scale patch pyramid in Tab. 3(c). We progressively ablate each resolution level from our full model and evaluate the performance. The models with a single resolution clearly perform worse than the multi-scale models, demonstrating the effectiveness of our design. Note that the inlier ratios of the models with coarser resolutions are lower: the image patches in these models are larger, which leads to more matching ambiguity.
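As a rough illustration of the multi-scale idea, the sketch below pools a dense image feature map into patch descriptors at the three grid sizes listed in Tab. 3(c); average pooling over uniform cells stands in for the network's actual downsampling and is only an assumption.

```python
import numpy as np

def build_patch_pyramid(feature_map, grids=((24, 32), (12, 16), (6, 8))):
    """feature_map: (H, W, C) array; H and W are assumed divisible by each grid."""
    H, W, C = feature_map.shape
    pyramid = {}
    for gh, gw in grids:
        cells = feature_map.reshape(gh, H // gh, gw, W // gw, C)
        pyramid[(gh, gw)] = cells.mean(axis=(1, 3))   # (gh, gw, C) patch descriptors
    return pyramid

feat = np.random.rand(48, 64, 8)                      # toy feature map
print({k: v.shape for k, v in build_patch_pyramid(feat).items()})
```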
|
| 234 |
+
|
| 235 |
+
Mutual top-$k$ selection. Finally, we investigate the influence of the mutual top-$k$ selection in the point matching module by replacing it with its non-mutual counterpart. This model achieves worse IR by 0.7 pp, better FMR by 0.8 pp, and significantly worse RR by 5.6 pp, showing the importance of the mutual top-$k$ selection. However, we also note that the model with non-mutual top-$k$ selection still beats all the baselines, which demonstrates the effectiveness of our method.
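A small sketch of mutual top-$k$ selection on a pixel-point similarity matrix; the value of $k$ and the matrix size are arbitrary here and do not reflect the paper's exact hyper-parameters.

```python
import numpy as np

def mutual_topk_select(similarity, k=3):
    """Keep a pixel-point pair only if each side ranks the other within its top-k."""
    topk_pts = np.argsort(-similarity, axis=1)[:, :k]   # top-k points per pixel
    topk_pix = np.argsort(-similarity, axis=0)[:k, :]   # top-k pixels per point
    pairs = []
    for i in range(similarity.shape[0]):
        for j in topk_pts[i]:
            if i in topk_pix[:, j]:
                pairs.append((i, int(j), float(similarity[i, j])))
    return pairs

sim = np.random.rand(5, 6)
print(mutual_topk_select(sim, k=2))
```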
|
| 236 |
+
|
| 237 |
+
# 5. Conclusion
|
| 238 |
+
|
| 239 |
+
We have presented 2D3D-MATR, which hierarchically extracts pixel-point correspondences for inter-modality registration between images and point clouds. Benefiting from a coarse-to-fine matching pipeline, our method bypasses the need for keypoint detection across the two modalities. We further construct a multi-scale patch pyramid to alleviate scale ambiguity during patch matching. These designs significantly improve the quality of the extracted correspondences and contribute to accurate 2D-3D registration.
|
| 240 |
+
|
| 241 |
+
There are several potential directions in which our method can be improved. First, 2D3D-MATR still relies on PnP-RANSAC for successful registration, which limits its computational efficiency. Second, we notice that the generalization of 2D-3D matching to novel scenes is still inferior to that of uni-modality matching between images or between point clouds; this is an important topic for real-world applications. Finally, the uniform patch partition strategy in 2D3D-MATR is relatively simple and coarse, so the 2D and 3D patches are not perfectly aligned in most cases. Extracting better-aligned patches by leveraging semantic information is a promising research direction.
|
| 242 |
+
|
| 243 |
+
Acknowledgement. This work is supported in part by the National Key R&D Program of China (2018AAA0102200), the NSFC (62325221, 62132021, 62002375, 62002376), the Natural Science Foundation of Hunan Province of China (2021RC3071, 2022RC1104, 2021JJ40696) and the NUDT Research Grants (ZK22-52).
|
| 244 |
+
|
| 245 |
+
# References
|
| 246 |
+
|
| 247 |
+
[1] Xuyang Bai, Zixin Luo, Lei Zhou, Hongkai Chen, Lei Li, Zeyu Hu, Hongbo Fu, and Chiew-Lan Tai. Pointdsc: Robust point cloud registration using deep spatial consistency. In CVPR, pages 15859-15869, 2021. 3
|
| 248 |
+
|
| 249 |
+
[2] Xuyang Bai, Zixin Luo, Lei Zhou, Hongbo Fu, Long Quan, and Chiew-Lan Tai. D3feat: Joint learning of dense detection and description of 3d local features. In CVPR, pages 6359-6367, 2020. 2, 4
|
| 250 |
+
[3] Eric Brachmann, Alexander Krull, Sebastian Nowozin, Jamie Shotton, Frank Michel, Stefan Gumhold, and Carsten Rother. Dsac-differentiable ransac for camera localization. In CVPR, pages 6684-6692, 2017. 3
|
| 251 |
+
[4] Eric Brachmann, Frank Michel, Alexander Krull, Michael Ying Yang, Stefan Gumhold, et al. Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image. In CVPR, pages 3364-3372, 2016. 3
|
| 252 |
+
[5] Eric Brachmann and Carsten Rother. Learning less is more-6d camera localization via 3d surface regression. In CVPR, pages 4654–4662, 2018. 3
|
| 253 |
+
[6] Eric Brachmann and Carsten Rother. Neural-guided ransac: Learning where to sample model hypotheses. In CVPR, pages 4322-4331, 2019. 3
|
| 254 |
+
[7] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, volume 1, pages 539-546. IEEE, 2005. 6
|
| 255 |
+
[8] Christopher Choy, Wei Dong, and Vladlen Koltun. Deep global registration. In CVPR, pages 2514-2523, 2020. 3
|
| 256 |
+
[9] Christopher Choy, Jaesik Park, and Vladlen Koltun. Fully convolutional geometric features. In ICCV, pages 8958-8966, 2019. 2, 6, 7
|
| 257 |
+
[10] Haowen Deng, Tolga Birdal, and Slobodan Ilic. Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors. In ECCV, pages 602-618, 2018. 2
|
| 258 |
+
[11] Haowen Deng, Tolga Birdal, and Slobodan Ilic. Ppfnet: Global context aware local features for robust 3d point matching. In CVPR, pages 195-205, 2018. 2
|
| 259 |
+
[12] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-supervised interest point detection and description. In CVPRW, pages 224-236, 2018. 2
|
| 260 |
+
[13] Bertram Drost, Markus Ulrich, Nassir Navab, and Slobodan Ilic. Model globally, match locally: Efficient and robust 3d object recognition. In CVPR, pages 998-1005. IEEE, 2010. 2
|
| 261 |
+
[14] Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-net: A trainable cnn for joint description and detection of local features. In CVPR, pages 8092-8101, 2019. 2
|
| 262 |
+
[15] Mengdan Feng, Sixing Hu, Marcelo H Ang, and Gim Hee Lee. 2d3d-matchnet: Learning to match keypoints across 2d image and 3d point cloud. In ICRA, pages 4790-4796. IEEE, 2019. 2, 3
|
| 263 |
+
[16] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981. 2, 3
|
| 264 |
+
[17] Ben Glocker, Shahram Izadi, Jamie Shotton, and Antonio Criminisi. Real-time rgb-d camera relocalization. In ISMAR, pages 173-179. IEEE, 2013. 2, 6, 7
|
| 265 |
+
[18] Zan Gojcic, Caifa Zhou, Jan D Wegner, and Andreas Wieser. The perfect match: 3d point cloud matching with smoothed densities. In CVPR, pages 5545-5554, 2019. 2
|
| 266 |
+
|
| 267 |
+
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 3, 6
|
| 268 |
+
[20] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. arXiv preprint arXiv:1412.6622, 2014. 6
|
| 269 |
+
[21] Shengyu Huang, Zan Gojcic, Mikhail Usvyatsov, Andreas Wieser, and Konrad Schindler. Predator: Registration of 3d point clouds with low overlap. In CVPR, pages 4267-4276, 2021. 2, 4, 6, 7
|
| 270 |
+
[22] Kevin Lai, Liefeng Bo, and Dieter Fox. Unsupervised feature learning for 3d scene labeling. In ICRA, pages 3050-3057. IEEE, 2014. 2, 6
|
| 271 |
+
[23] Junha Lee, Seungwook Kim, Minsu Cho, and Jaesik Park. Deep Hough voting for robust global registration. In ICCV, pages 15994-16003, 2021. 3
|
| 272 |
+
[24] Jae Yong Lee, Joseph DeGol, Victor Fragoso, and Sudipta N Sinha. Patchmatch-based neighborhood consensus for semantic correspondence. In CVPR, pages 13153-13163, 2021. 2
|
| 273 |
+
[25] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua. Epnp: An accurate o (n) solution to the pnp problem. IJCV, 81(2):155-166, 2009. 2
|
| 274 |
+
[26] Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. In CVPR, pages 9397–9406, 2018. 4
|
| 275 |
+
[27] Xinghui Li, Kai Han, Shuda Li, and Victor Prisacariu. Dualresolution correspondence networks. NeurIPS, 33:17346-17357, 2020. 2
|
| 276 |
+
[28] Xiaotian Li, Shuzhe Wang, Yi Zhao, Jakob Verbeek, and Juho Kannala. Hierarchical scene coordinate classification and regression for visual localization. In CVPR, pages 11983-11992, 2020. 3
|
| 277 |
+
[29] Xiaotian Li, Juha Ylioinas, and Juho Kannala. Full-frame scene coordinate regression for image-based localization. arXiv preprint arXiv:1802.03237, 2018. 3
|
| 278 |
+
[30] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, pages 2117-2125, 2017. 3
|
| 279 |
+
[31] David G Lowe. Object recognition from local scale-invariant features. In ICCV, volume 2, pages 1150-1157. IEEE, 1999. 2
|
| 280 |
+
[32] Zixin Luo, Lei Zhou, Xuyang Bai, Hongkai Chen, Jiahui Zhang, Yao Yao, Shiwei Li, Tian Fang, and Long Quan. Aslfeat: Learning local features of accurate shape and localization. In CVPR, pages 6589-6598, 2020. 2
|
| 281 |
+
[33] Daniela Massiceti, Alexander Krull, Eric Brachmann, Carsten Rother, and Philip HS Torr. Random forests versus neural networks—what's best for camera localization? In ICRA, pages 5118-5125. IEEE, 2017. 3
|
| 282 |
+
[34] Lili Meng, Jianhui Chen, Frederick Tung, James J Little, Julien Valentin, and Clarence W de Silva. Backtracking regression forests for accurate camera relocalization. In IROS, pages 6886-6893. IEEE, 2017. 3
|
| 283 |
+
[35] Lili Meng, Frederick Tung, James J Little, Julien Valentin, and Clarence W de Silva. Exploiting points and lines in regression forests for rgb-d camera relocalization. In IROS, pages 6827-6834. IEEE, 2018. 3
|
| 284 |
+
|
| 285 |
+
[36] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, pages 405-421, 2020. 4
|
| 286 |
+
[37] Quang-Hieu Pham, Mikaela Angelina Uy, Binh-Son Hua, Duc Thanh Nguyen, Gemma Roig, and Sai-Kit Yeung. Lcd: Learned cross-domain descriptors for 2d-3d matching. In AAAI, volume 34, pages 11856–11864, 2020. 2, 3
|
| 287 |
+
[38] Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, and Kai Xu. Geometric transformer for fast and robust point cloud registration. In CVPR, pages 11143-11152, 2022. 2, 4, 5, 6, 8
|
| 288 |
+
[39] Jerome Revaud, Philippe Weinzaepfel, Cesar De Souza, and Martin Humenberger. R2d2: repeatable and reliable detector and descriptor. In NeurIPS, pages 12414-12424, 2019. 2
|
| 289 |
+
[40] Ignacio Rocco, Relja Arandjelovic, and Josef Sivic. Efficient neighbourhood consensus networks via submanifold sparse convolutions. In ECCV, pages 605-621. Springer, 2020. 2
|
| 290 |
+
[41] Ignacio Rocco, Mircea Cimpoi, Relja Arandjelovic, Akihiko Torii, Tomas Pajdla, and Josef Sivic. Neighbourhood consensus networks. *NeurIPS*, 31, 2018. 2
|
| 291 |
+
[42] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. Orb: An efficient alternative to sift or surf. In ICCV, pages 2564-2571. Ieee, 2011. 2
|
| 292 |
+
[43] Radu Bogdan Rusu, Nico Blodow, and Michael Beetz. Fast point feature histograms (fpfh) for 3d registration. In ICRA, pages 3212-3217. IEEE, 2009. 2
|
| 293 |
+
[44] Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superglue: Learning feature matching with graph neural networks. In CVPR, pages 4938–4947, 2020. 2, 5
|
| 294 |
+
[45] Jamie Shotton, Ben Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, and Andrew Fitzgibbon. Scene coordinate regression forests for camera relocalization in rgb-d images. In CVPR, pages 2930-2937, 2013. 3
|
| 295 |
+
[46] Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. Loftr: Detector-free local feature matching with transformers. In CVPR, pages 8922-8931, 2021. 2, 4
|
| 296 |
+
[47] Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. Circle loss: A unified perspective of pair similarity optimization. In CVPR, pages 6398-6407, 2020. 6
|
| 297 |
+
[48] Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In ICCV, pages 6411–6420, 2019. 4, 6
|
| 298 |
+
[49] Julien Valentin, Matthias Nießner, Jamie Shotton, Andrew Fitzgibbon, Shahram Izadi, and Philip HS Torr. Exploiting uncertainty in regression forests for accurate camera relocalization. In CVPR, pages 4400-4408, 2015. 3
|
| 299 |
+
[50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 30, 2017. 2, 4
|
| 300 |
+
[51] Bing Wang, Changhao Chen, Zhaopeng Cui, Jie Qin, Chris Xiaoxuan Lu, Zhengdi Yu, Peijun Zhao, Zhen Dong, Fan Zhu, Niki Trigoni, et al. P2-net: Joint description and
|
| 301 |
+
|
| 302 |
+
detection of local features for pixel and point matching. In ICCV, pages 16004-16013, 2021. 2, 3, 6, 7
|
| 303 |
+
[52] Luwei Yang, Ziqian Bai, Chengzhou Tang, Honghua Li, Yasutaka Furukawa, and Ping Tan. Sanet: Scene agnostic network for camera localization. In ICCV, pages 42-51, 2019. 3
|
| 304 |
+
[53] Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, and Slobodan Ilic. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. NeurIPS, 34:23872-23884, 2021. 2, 4, 5
|
| 305 |
+
[54] Qunjie Zhou, Torsten Sattler, and Laura Leal-Taixe. Patch2pix: Epipolar-guided pixel-level correspondences. In CVPR, pages 4669-4678, 2021. 2
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6ff006dee74631152c62c6dc0570b21b75aad762ff420e701c8eb0c15d20d44f
|
| 3 |
+
size 863552
|
2d3dmatr2d3dmatchingtransformerfordetectionfreeregistrationbetweenimagesandpointclouds/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b8dee2d878ee71811c0f9f6d70eabce28dcfe35000f937197c65680bc3f4d01a
|
| 3 |
+
size 409741
|
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:38b9f2760dc20f6661e455a37664c5b6601ac1dc2685c324a50631849ee72a7b
|
| 3 |
+
size 76389
|
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1b2b73283e2629bc364cfd517c5b723f72970d3824f52ae2fad4035f4327f9c0
|
| 3 |
+
size 94721
|
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/89caa7e2-1f8d-4045-b07b-69d10a586ed8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2e516750594f04a3646940c30d71b7209ac631a61030df02ac6827296aa7291d
|
| 3 |
+
size 8758576
|
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/full.md
ADDED
|
@@ -0,0 +1,275 @@
| 1 |
+
# 360VOT: A New Benchmark Dataset for Omnidirectional Visual Object Tracking
|
| 2 |
+
|
| 3 |
+
Huajian Huang Yinzhe Xu Yingshu Chen Sai-Kit Yeung The Hong Kong University of Science and Technology
|
| 4 |
+
|
| 5 |
+
{hhuangbg, yxuck, ychengw}@connect.ust.hk, saikit@ust.hk
|
| 6 |
+
|
| 7 |
+

|
| 8 |
+
Figure 1: Example sequences and annotations of 360VOT benchmark dataset. The target objects in each $360^{\circ}$ frame are annotated with four different representations as ground truth, including bounding box, rotated bounding box, bounding field-of-view, and rotated bounding field-of-view. 360VOT brings distinct challenges for tracking, e.g., crossing border (CB), large distortion (LD) and stitching artifact (SA).
|
| 9 |
+
|
| 10 |
+
# Abstract
|
| 11 |
+
|
| 12 |
+
$360^{\circ}$ images can provide an omnidirectional field of view which is important for stable and long-term scene perception. In this paper, we explore $360^{\circ}$ images for visual object tracking and perceive new challenges caused by large distortion, stitching artifacts, and other unique attributes of $360^{\circ}$ images. To alleviate these problems, we take advantage of novel representations of target localization, i.e., bounding field-of-view, and then introduce a general 360 tracking framework that can adopt typical trackers for omnidirectional tracking. More importantly, we propose a new large-scale omnidirectional tracking benchmark dataset, 360VOT, in order to facilitate future research. 360VOT contains 120 sequences with up to 113K high-resolution frames in equirectangular projection. The tracking targets cover 32 categories in diverse scenarios. Moreover, we provide 4 types of unbiased ground truth, including (rotated) bounding boxes and (rotated) bounding field-of-views, as well as new metrics tailored for $360^{\circ}$ images which allow for the accurate evaluation of omnidirectional tracking performance.
|
| 13 |
+
|
| 14 |
+
Finally, we extensively evaluated 20 state-of-the-art visual trackers and provided a new baseline for future comparisons. Homepage: https://360vot.hkustvgd.com
|
| 15 |
+
|
| 16 |
+
# 1. Introduction
|
| 17 |
+
|
| 18 |
+
Visual object tracking is an essential problem in computer vision since it is demanded in various applications such as video analysis, human-machine interaction, and intelligent robots. In the last decade, a large number of visual tracking algorithms [19, 32, 11, 26, 1] and various benchmarks [42, 30, 24, 16, 22] have been proposed to promote the development of the visual tracking community. Whereas most existing research focuses on perspective visual object tracking, there is little attention paid to omnidirectional visual object tracking.
|
| 19 |
+
|
| 20 |
+
Omnidirectional visual object tracking employs a $360^{\circ}$ camera to track the target object. With its omnidirectional field-of-view (FoV), a $360^{\circ}$ camera offers continuous observation of the target over a longer period, minimizing the out-of-view issue. This advantage is crucial for intelligent
|
| 21 |
+
|
| 22 |
+
agents to achieve stable, long-term tracking and perception. In general, an ideal spherical camera model is used to describe the projection relationship of a $360^{\circ}$ camera. The $360^{\circ}$ image is widely represented by equirectangular projection (ERP) [36], which has two main features: 1) content crossing the image border and 2) extreme distortion as the latitude increases. Moreover, due to inherent limitations or manufacturing defects of the camera, the $360^{\circ}$ image may suffer from stitching artifacts that blur, break, or duplicate the shape of objects. Meanwhile, an omnidirectional FoV inevitably captures the photographers, who may distract from and occlude the targets. These phenomena are illustrated in Figure 1. Together, they bring new challenges for object tracking on $360^{\circ}$ images.
|
| 23 |
+
|
| 24 |
+
To explore this problem and understand how current tracking algorithms designed for perspective visual tracking perform, we propose a challenging omnidirectional tracking benchmark dataset, referred to as 360VOT. The benchmark dataset is composed of 120 sequences, and each sequence has an average of 940 frames at $3840 \times 1920$ resolution. Our benchmark encompasses a wide range of categories and diverse scenarios, such as indoor, underwater, skydiving, and racing. Apart from 13 conventional attributes, 360VOT has 7 additional attributes, including the aforementioned challenges, fast motion on the sphere, and latitude variation. Additionally, we introduce new representations to object tracking. Compared to the commonly used bounding box (BBox), the bounding field-of-view (BFoV) [9, 43] represents object localization on the unit sphere in an angular fashion. BFoV can better constrain the target on $360^{\circ}$ images and is not subject to image resolution. Based on BFoV, we can properly crop the search regions, which enhances the performance of conventional trackers devised for perspective visual tracking when applied to omnidirectional tracking. To encourage future research, we provide dense, unbiased annotations as ground truth, including BBox and three advanced representations, i.e., rotated BBox (rBBox), BFoV, and rotated BFoV (rBFoV). Accordingly, we develop new metrics tailored for $360^{\circ}$ images to accurately evaluate omnidirectional tracking performance.
|
| 25 |
+
|
| 26 |
+
In summary, the contribution of this work includes:
|
| 27 |
+
|
| 28 |
+
- The proposed 360VOT, to the best of our knowledge, is the first benchmark dataset for omnidirectional visual object tracking.
|
| 29 |
+
- We explore the new representations for visual object tracking and provide four types of unbiased ground truth.
|
| 30 |
+
- We propose new metrics for omnidirectional tracking evaluation, which measure the dual success rate and angle precision on the sphere.
|
| 31 |
+
- We benchmark 20 state-of-the-art trackers on 360VOT with extensive evaluations and develop a new baseline for future comparisons.
|
| 32 |
+
|
| 33 |
+
<table><tr><td>Benchmark</td><td>Videos</td><td>Total frames</td><td>Object classes</td><td>Attr.</td><td>Annotation</td><td>Feature</td></tr><tr><td>ALOV300[35]</td><td>314</td><td>152K</td><td>64</td><td>14</td><td>sparse BBox</td><td>diverse scenes</td></tr><tr><td>OTB100[42]</td><td>100</td><td>81K</td><td>16</td><td>11</td><td>dense BBox</td><td>short-term</td></tr><tr><td>NUS-PRO[25]</td><td>365</td><td>135K</td><td>8</td><td>12</td><td>dense BBox</td><td>occlusion-level</td></tr><tr><td>TC128[28]</td><td>129</td><td>55K</td><td>27</td><td>11</td><td>dense BBox</td><td>color enhanced</td></tr><tr><td>UAV123[30]</td><td>123</td><td>113K</td><td>9</td><td>12</td><td>dense BBox</td><td>UAV</td></tr><tr><td>DTB70[27]</td><td>70</td><td>16K</td><td>29</td><td>11</td><td>dense BBox</td><td>UAV</td></tr><tr><td>NFS[23]</td><td>100</td><td>383K</td><td>17</td><td>9</td><td>dense BBox</td><td>high FPS</td></tr><tr><td>UAVDT[14]</td><td>100</td><td>78K</td><td>27</td><td>14</td><td>sparse BBox</td><td>UAV</td></tr><tr><td>TrackingNet* [31]</td><td>511</td><td>226K</td><td>27</td><td>15</td><td>sparse BBox</td><td>large scale</td></tr><tr><td>OxUvA[38]</td><td>337</td><td>1.55M</td><td>22</td><td>6</td><td>sparse BBox</td><td>long-term</td></tr><tr><td>LaSOT* [16]</td><td>280</td><td>685K</td><td>85</td><td>14</td><td>dense BBox</td><td>category balance</td></tr><tr><td>GOT-10k* [22]</td><td>420</td><td>56K</td><td>84</td><td>6</td><td>dense BBox</td><td>generic</td></tr><tr><td>TOTB[17]</td><td>225</td><td>86K</td><td>15</td><td>12</td><td>dense BBox</td><td>transparent</td></tr><tr><td>TREK-150[15]</td><td>150</td><td>97K</td><td>34</td><td>17</td><td>dense BBox</td><td>FPV</td></tr><tr><td>VOT[24]</td><td>62</td><td>20K</td><td>37</td><td>9</td><td>dense BBox</td><td>annual</td></tr><tr><td>360VOT</td><td>120</td><td>113K</td><td>32</td><td>20</td><td>dense (r)BBox & (r)BFoV</td><td>360° images</td></tr></table>
|
| 34 |
+
|
| 35 |
+
Table 1: Comparison of current popular benchmarks for visual single object tracking in the literature. * indicates that only the test set of each dataset is reported.
|
| 36 |
+
|
| 37 |
+
# 2. Related Work
|
| 38 |
+
|
| 39 |
+
# 2.1. Benchmarks for visual object tracking
|
| 40 |
+
|
| 41 |
+
With the remarkable development of the visual object tracking community, previous works have proposed numerous benchmarks in various scenarios. ALOV300 [35] is a sparse benchmark introducing 152K frames and 16K annotations, while UAVDT [14] focuses on UAV scenarios and has 100 videos. TrackingNet [31] is a large-scale dataset collecting more than 14M frames based on the YT-BB dataset [34]. As YT-BB only provides fine-grained annotations at 1 fps, they explored a tracker to densify the annotations without further manual refinement. OxUvA [38] targets long-term tracking by constructing 337 video sequences, but each video only has 30 frames annotated.
|
| 42 |
+
|
| 43 |
+
One of the first dense BBox benchmarks is OTB100 [42] which is extended from OTB50 [41] and has 100 sequences. NUS-PRO [25] takes the feature of occlusion-level annotation and provides 365 sequences, while TC128 [28] researches the chromatic information in visual tracking. UAV123 [30] and DTB70 [27] offer 123 and 70 aerial videos of rigid objects and humans in various scenes. NfS [23] consists of more than 380K frames captured at 240 FPS studying higher frame rate tracking, while LaSOT [16] is a large-scale and category balance benchmark of premium quality. GOT-10k [22] provides about 1.5M annotations and 84 classes of objects, aiming at generic object tracking. The annual tracking challenge VOT [24] offered 62 sequences and 20K frames in 2022. A more recent benchmark TOTB [17] mainly focuses on transparent object tracking. TREK-150 [15] introduces 150 sequences of tracking in First Person Vision (FPV) with the interaction between the person and the target object.
|
| 44 |
+
|
| 45 |
+
By contrast, our proposed 360VOT is the first benchmark dataset to focus on object tracking and explore new representations on omnidirectional videos.
|
| 46 |
+
|
| 47 |
+
A summarized comparison with existing benchmarks is reported in Table 1.
|
| 48 |
+
|
| 49 |
+
# 2.2. Benchmarks for $360^{\circ}$ object detection
|
| 50 |
+
|
| 51 |
+
Most visual trackers rely on the approaches of tracking by detection. Benefiting from the rapid development of object detection, it is effective to improve the performance of tracking by utilizing those sophisticated network architectures to obtain more robust correlation features. Recently, aiming at omnidirectional understanding and perception, researchers started resorting to object detection algorithms for $360^{\circ}$ images or videos. Several $360^{\circ}$ datasets and benchmarks for object detection have been proposed. FlyingCars [6] is a synthetic dataset composed of 6K images in $512 \times 256$ of synthetic cars and panoramic backgrounds. OSV [45] created a dataset that covers object annotations on 600 street-view panoramic images. 360-Indoor [5] focuses on indoor object detection among 37 categories, while PANDORA [43] provides 3K images of $1920 \times 960$ resolution with rBFOV annotation. These 360 detection benchmarks contain independent images with a sole type of annotation. Differently, as a benchmark for visual object tracking, 360VOT contains large-scale $360^{\circ}$ videos with long footage, higher resolution, diverse environments, and 4 types of annotations.
|
| 52 |
+
|
| 53 |
+
# 2.3. Visual object tracking scheme
|
| 54 |
+
|
| 55 |
+
To guarantee high tracking speed, the trackers for single object tracking generally crop the image and search for the target in small local regions. The tracking scheme is vital in selecting searching regions and interpreting network predictions over sequences in the inference phase. A compatible tracking inference scheme can enhance tracking performance. For example, DaSiamRPN [50] explored a local-to-global searching strategy for long-term tracking. SiamX [21] proposed an adaptive inference scheme to prevent tracking loss and realize fast target re-localization. Here, we introduce a 360-tracking framework to make use of local visual trackers, which are trained on normal perspective images to achieve enhanced performance on $360^{\circ}$ video tracking.
|
| 56 |
+
|
| 57 |
+
# 3. Tracking on $360^{\circ}$ Video
|
| 58 |
+
|
| 59 |
+
The $360^{\circ}$ video is composed of frames using the most common ERP. Each frame can capture $360^{\circ}$ horizontal and $180^{\circ}$ vertical field of view. Although omnidirectional FoV avoids out-of-view issues, the target may cross the left and right borders of a 2D image. Additionally, nonlinear projection distortion makes the target largely distorted when they are near the top or bottom of the image, as illustrated in Figure 1. Therefore, a new representation and framework that fit ERP for $360^{\circ}$ visual tracking are necessary.
|
| 60 |
+
|
| 61 |
+

|
| 62 |
+
|
| 63 |
+
|
| 64 |
+

|
| 65 |
+
|
| 66 |
+
Figure 2: Train. The comparison of the bounding regions of different representations on a $360^{\circ}$ image. The unwarped images based on BFoV and rBFoV are less distorted.
|
| 67 |
+
|
| 68 |
+

|
| 69 |
+
|
| 70 |
+
|
| 71 |
+
# 3.1. Representation for the target location
|
| 72 |
+
|
| 73 |
+
The (r)BBox is the most common and simple way to represent the target object's position in perspective images. It is a rectangular area around the target object on the image, denoted as $[cx, cy, w, h, \gamma]$, where $cx, cy$ are the object center and $w, h$ are the width and height. The rotation angle $\gamma$ of a BBox is always zero. However, these representations become less able to properly constrain the target on the $360^{\circ}$ image. The works [49, 43] for $360^{\circ}$ object detection show that BFoV and rBFoV are more appropriate representations on $360^{\circ}$ images. Basically, we can use the spherical camera model $\mathcal{F}$ to formulate the mathematical relationship between the 2D image in ERP and a continuous 3D unit sphere [20]. (r)BFoV is then defined as $[clon, clat, \theta, \phi, \gamma]$, where $[clon, clat]$ are the longitude and latitude coordinates of the object center in the spherical coordinate system, and $\theta$ and $\phi$ denote the maximum horizontal and vertical field-of-view angles of the object's occupation. Additionally, the represented region of (r)BFoV on the $360^{\circ}$ image is commonly calculated via a tangent plane [49, 43], $T(\theta, \phi) \in \mathbb{R}^3$, and formulated as:
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
I\left((\mathrm{r})\mathrm{BFoV} \mid \Omega\right) = \mathcal{F}\left(\mathcal{R}_{y}(clon) \cdot \mathcal{R}_{x}(clat) \cdot \mathcal{R}_{z}(\gamma) \cdot \Omega\right), \tag{1}
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
where $\mathcal{R}$ denotes the 3D rotation along the $y, x, z$ axes, $\Omega$ equals $T(\theta, \phi)$ here. The unwarped images based on tangent BFoV are distortion-free under the small FoV, as shown in Figure 2.
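The sketch below follows Eq. 1 for the tangent-plane case: it samples a grid on $T(\theta, \phi)$, rotates it to the object centre with $\mathcal{R}_y(clon)\mathcal{R}_x(clat)\mathcal{R}_z(\gamma)$, and projects it onto the ERP image. The axis conventions, the ERP pixel mapping, and the grid resolution are assumptions for illustration rather than the paper's exact implementation.

```python
import numpy as np

def rot(axis, a):
    c, s = np.cos(a), np.sin(a)
    if axis == "x": return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y": return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def bfov_region_on_erp(clon, clat, theta, phi, gamma, erp_w, erp_h, grid=32):
    # Tangent plane T(theta, phi): a unit-depth plane spanning the object's FoV.
    xs = np.tan(np.linspace(-theta / 2, theta / 2, grid))
    ys = np.tan(np.linspace(-phi / 2, phi / 2, grid))
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X, Y, np.ones_like(X)], axis=-1).reshape(-1, 3)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)

    # Ry(clon) * Rx(clat) * Rz(gamma), as in Eq. 1.
    R = rot("y", clon) @ rot("x", clat) @ rot("z", gamma)
    pts = pts @ R.T

    # Spherical camera model F: direction -> (lon, lat) -> ERP pixel.
    lon = np.arctan2(pts[:, 0], pts[:, 2])
    lat = np.arcsin(np.clip(pts[:, 1], -1.0, 1.0))
    u = (lon / (2 * np.pi) + 0.5) * erp_w
    v = (lat / np.pi + 0.5) * erp_h
    return np.stack([u, v], axis=-1).reshape(grid, grid, 2)

region = bfov_region_on_erp(0.5, 0.2, np.radians(60), np.radians(40), 0.0, 3840, 1920)
print(region.shape)   # (32, 32, 2) pixel coordinates covering the BFoV region
```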
|
| 80 |
+
|
| 81 |
+
However, this definition has a disadvantage for large FoVs and fundamentally cannot represent a region exceeding a $180^{\circ}$ FoV. With increasing FoV, the unwarped images from the tangent planes exhibit drastic distortions, as shown in the upper row of Figure 3. This defect limits the application of BFoV to visual object tracking since trackers rely on unwarped images for target searching. To address this problem, we extend the definition of BFoV. When the bounding region involves a large FoV, i.e., larger than $90^{\circ}$, the extended BFoV leverages a spherical surface $S(\theta, \phi) \in \mathbb{R}^{3}$ instead of a tangent plane to represent the bounding region on the $360^{\circ}$ image. Therefore, the corresponding region of
|
| 82 |
+
|
| 83 |
+

|
| 84 |
+
Figure 3: The boundaries on the $360^{\circ}$ images and the corresponding unwarped images of different BFoV definitions. The tangent BFoV is displayed in blue and the extended BFoV is in orange. $M$ on the sphere surface denotes the object center and tangent point. Blue plane with dotted borders represents a larger plane out of space. Best viewed in color.
|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
Figure 4: The diagram of the 360 tracking framework. The framework is responsible for extracting local search regions for tracking and for interpreting the tracking results. It supports various local visual trackers and can generate all 4 types of representation.
|
| 88 |
+
|
| 89 |
+
extended (r)BFoV on the $360^{\circ}$ image is formulated as:
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
I \left((\mathrm{r})\mathrm{BFoV} \mid \Omega\right), \quad \Omega = \left\{ \begin{array}{ll} T(\theta, \phi), & \theta < 90^{\circ}, \phi < 90^{\circ} \\ S(\theta, \phi), & \text{otherwise} \end{array} \right. \tag{2}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
The comparisons of the boundary on $360^{\circ}$ images and corresponding unwarped images based on tangent BFoV and the extended BFoV are shown in Figure 3. Please refer to the supplementary for the detailed formulation.
|
| 96 |
+
|
| 97 |
+
# 3.2. 360 tracking framework
|
| 98 |
+
|
| 99 |
+
To conduct omnidirectional tracking using an existing visual tracker, we propose a 360 tracking framework, shown in Figure 4. The framework leverages extended BFoV to address challenges caused by object crossing-border and large distortion on the $360^{\circ}$ image. As a continuous representation in the spherical coordinate system, BFoV is not subject to the resolution of the image. Given an initial BFoV, the framework first calculates the corresponding region $I$ on the $360^{\circ}$ image via Eq. 2. By remapping the $360^{\circ}$ image using pixel coordinates recorded in $I$ , it extracts a less distorted local search region for target identification. From this extracted image, a local visual tracker then infers a BBox or rBBox prediction. Finally, we can still utilize $I$ to convert the local prediction back to the global bounding region on the $360^{\circ}$ image. The (r)BBox prediction is calculated as the minimum area (rotated) rectangle on the $360^{\circ}$ image. In terms of (r)BFoV, we can re-project the bounding region's
|
| 100 |
+
|
| 101 |
+
coordinates onto the spherical coordinate system and calculate the maximum bounding FoV. Since the framework neither relies on nor affects the network architecture of the tracker, we can adapt an arbitrary local visual tracker trained on conventional perspective images for omnidirectional tracking.
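A minimal sketch of this loop follows, with `unwarp`, `local_tracker`, and `rewarp_to_bfov` as hypothetical stand-ins for the region extraction of Eq. 2, an off-the-shelf perspective tracker, and the back-projection onto the sphere.

```python
def track_360(frames, init_bfov, unwarp, local_tracker, rewarp_to_bfov):
    """Run a conventional local tracker on unwarped search regions of a 360 video."""
    bfov, results = init_bfov, []
    for frame in frames:
        region, mapping = unwarp(frame, bfov)        # low-distortion search region + pixel map
        local_bbox = local_tracker(region)           # BBox / rBBox prediction inside the region
        bfov = rewarp_to_bfov(local_bbox, mapping)   # lift the prediction back onto the sphere
        results.append(bfov)
    return results
```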
|
| 102 |
+
|
| 103 |
+
# 4. A New Benchmark Dataset: 360VOT
|
| 104 |
+
|
| 105 |
+
In this section, we elaborate on how to collect appropriate $360^{\circ}$ videos and efficiently obtain unbiased ground truth, making a new benchmark dataset for omnidirectional $(360^{\circ})$ Visual Object Tracking (360VOT).
|
| 106 |
+
|
| 107 |
+
# 4.1. Collection
|
| 108 |
+
|
| 109 |
+
The resources of $360^{\circ}$ videos are much less abundant than normal format videos. We spent lots of effort and time collecting hundreds of candidate videos from YouTube and captured some using a $360^{\circ}$ camera. After that, we ranked and filtered them considering four criteria of tracking difficulty scale and some additional challenging cases. Videos can gain higher ranking with 1) adequate relative motion of the target, 2) higher variability of the environment, 3) the target crossing frame borders, and 4) sufficient footage. In addition to the criteria listed above, videos with additional challenges are assigned a higher priority. For example, distinguishing targets from other highly comparable objects is a challenge in object detection and tracking.
|
| 110 |
+
|
| 111 |
+
After filtering, videos are further selected and sampled into sequences with a frame number threshold ( $\leq 2400$ ). The relatively stationary frames are further discarded manually. Considering the distribution balance, 120 sequences are finally selected as the 360VOT benchmark. The object classes mainly cover humans (skydiver, rider, pedestrian and diver), animals (dog, cat, horse, shark, bird, monkey, dolphin, panda, rabbit, squirrel, turtle, elephant and rhino), rigid objects (car, F1 car, bike, motorbike, boat, aircraft, Lego, basket, building, kart, cup, drone, helmet, shoes, tire and train) and human & carrier cases (human & bike, human & motorbike and human & horse). Our benchmark encompasses a wide range of categories with high diversity, as illustrated in the examples in Figure 1.
|
| 112 |
+
|
| 113 |
+
# 4.2. Annotation
|
| 114 |
+
|
| 115 |
+
Manual annotation of large-scale images in high quality usually requires sufficient manpower with basic professional knowledge in the domain. Accordingly, a tracking benchmark with 4 different types of ground truth greatly increases the manual annotation workload and makes it hard to keep the annotation standard consistent across a large group of annotators. The large distortion and crossing-border issues of $360^{\circ}$ images also make it difficult to obtain satisfactory annotations. Besides, there is no toolkit able to produce BFoV annotations directly. To overcome these problems, we segment the per-pixel target instance in each frame and then obtain the corresponding optimal (r)BBox and (r)BFoV from the resultant masks.
|
| 116 |
+
|
| 117 |
+
To achieve this efficiently, our annotation pipeline includes three steps: initial object localization, interactive segmentation refinement, and mask-to-bounding-box conversion. First, we integrated our 360 tracking framework with a visual tracker [21] and used it to generate initial BBoxes for all sequences before segmentation. The annotators inspected the tracking results online and would correct and restart the tracking when it failed. The centroid of each BBox is later used to initiate segmentation. Second, we developed an efficient segmentation annotation toolkit based on a click-based interactive segmentation model [37], which allows annotators to refine the initial segmentation with a few clicks. Finally, we converted the fine-grained segmentation masks, after two rounds of revision, into the four unbiased ground truths by minimizing the respective bounding areas. Please refer to the supplementary for details of the annotation toolkit and conversion algorithms.
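For the last step, a simplified sketch of converting one binary instance mask into an axis-aligned BBox and a rotated BBox with OpenCV; it ignores targets that cross the ERP border, which the full conversion has to handle separately.

```python
import numpy as np
import cv2

def mask_to_boxes(mask):
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    x, y, w, h = cv2.boundingRect(pts)                  # BBox: [x, y, w, h]
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(pts)    # rBBox: centre, size, rotation
    return (x, y, w, h), (cx, cy, rw, rh, angle)

mask = np.zeros((200, 300), dtype=np.uint8)
mask[60:120, 80:180] = 1                                # toy rectangular instance
print(mask_to_boxes(mask))
```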
|
| 118 |
+
|
| 119 |
+
# 4.3. Attributes
|
| 120 |
+
|
| 121 |
+
Each sequence is annotated with a total of 20 different attributes: illumination variation (IV), background clutter (BC), deformable target (DEF), motion blur (MB), camera motion (CM), rotation (ROT), partial occlusion (POC), full
|
| 122 |
+
|
| 123 |
+
<table><tr><td>Attr.</td><td>Meaning</td></tr><tr><td>IV</td><td>The target is subject to light variation.</td></tr><tr><td>BC</td><td>The background has a similar appearance as the target.</td></tr><tr><td>DEF</td><td>The target deforms during tracking.</td></tr><tr><td>MB</td><td>The target is blurred due to motion.</td></tr><tr><td>CM</td><td>The camera has abrupt motion.</td></tr><tr><td>ROT</td><td>The target rotates relative to the frames.</td></tr><tr><td>POC</td><td>The target is partially occluded.</td></tr><tr><td>FOC</td><td>The target is fully occluded.</td></tr><tr><td>ARC</td><td>The ratio of the annotation aspect ratio of the first and the current frame is outside the range [0.5, 2].</td></tr><tr><td>SV</td><td>The ratio of the annotation area of the first and the current frame is outside the range [0.5, 2].</td></tr><tr><td>FM</td><td>The motion of the target center between contiguous frames exceeds its own size.</td></tr><tr><td>LR</td><td>The area of the target annotation is less than 1000 pixels.</td></tr><tr><td>HR</td><td>The area of the target annotation is larger than 500² pixels.</td></tr><tr><td>SA</td><td>The 360° images have stitching artifacts and they affect the target object.</td></tr><tr><td>CB</td><td>The target is crossing the border of the frame and partially appears on the other side.</td></tr><tr><td>FMS</td><td>The motion angle on the spherical surface of the target center is larger than the last BFoV.</td></tr><tr><td>LFoV</td><td>The vertical or horizontal FoV of the BFoV is larger than 90°.</td></tr><tr><td>LV</td><td>The range of the latitude of the target center across the video is larger than 50°.</td></tr><tr><td>HL</td><td>The latitude of the target center is outside the range [−60°, 60°], lying in the “frigid zone”.</td></tr><tr><td>LD</td><td>The target suffers large distortion due to the equirectangular projection.</td></tr></table>
|
| 124 |
+
|
| 125 |
+
Table 2: Attribute description. The 360VOT not only contains 13 attributes widely used by the existing benchmarks but also has 7 additional attributes, described in the last block of the row, leading to distinct challenges.
|
| 126 |
+
|
| 127 |
+
occlusion (FOC), aspect ratio change (ARC), scale variation (SV), fast motion (FM), low resolution (LR), high resolution (HR), stitching artifact (SA), crossing border (CB), fast motion on the sphere (FMS), large FoV (LFoV), latitude variation (LV), high latitude (HL) and large distortion (LD). The detailed meaning of each attribute is described in Table 2. Among them, IV, BC, DEF, MB, CM, ROT, POC and LD attributes are manually labeled, while the others are computed from the annotation results of targets. The distinct features of the $360^{\circ}$ image are well represented in 360VOT: location variations (FMS, LFoV and LV), external disturbances (SA and LD) and special imaging (CB and HL). See visual examples in the supplementary.
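To make the automatically computed attributes concrete, here is a sketch that derives the ARC, SV, FM, and LR flags from BBox annotations $[x, y, w, h]$ following the thresholds in Table 2; interpreting "its own size" as $\max(w, h)$ for FM is an assumption on our part.

```python
def auto_attributes(first_ann, prev_ann, cur_ann):
    area = lambda a: a[2] * a[3]
    ratio = lambda a: a[2] / a[3]
    centre = lambda a: (a[0] + a[2] / 2, a[1] + a[3] / 2)

    arc = not (0.5 <= ratio(cur_ann) / ratio(first_ann) <= 2.0)    # aspect ratio change
    sv = not (0.5 <= area(cur_ann) / area(first_ann) <= 2.0)       # scale variation
    dx = centre(cur_ann)[0] - centre(prev_ann)[0]
    dy = centre(cur_ann)[1] - centre(prev_ann)[1]
    fm = (dx * dx + dy * dy) ** 0.5 > max(cur_ann[2], cur_ann[3])  # fast motion
    lr = area(cur_ann) < 1000                                      # low resolution
    return {"ARC": arc, "SV": sv, "FM": fm, "LR": lr}

print(auto_attributes([90, 95, 38, 29], [100, 100, 40, 30], [160, 110, 42, 31]))
```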
|
| 128 |
+
|
| 129 |
+
Overall, the exact number of each attribute is plotted in a histogram, as shown in Figure 5a, while the correspondence of each attribute is provided with a heatmap, as shown in Figure 5b. A warmer tone indicates that the pair of attributes are more frequently present together and vice versa. The co-occurrence counts of each row are then normalized by the diagonal counts. We observe that scale changes (ARC and SV) and motion (MB and FM) are common challenges that are also included in other benchmarks.
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
(a) Attribute histogram
|
| 133 |
+
|
| 134 |
+

|
| 135 |
+
(b) Correspondence heatmap
|
| 136 |
+
Figure 5: Attribute distribution of 360VOT benchmark.
|
| 137 |
+
|
| 138 |
+
Extra challenges of omnidirectional visual object tracking, including CB, FMS, LV, and LD, co-occur with traditional challenges. Specifically, CB occurs when the two patches of the target without intersection are at opposite edges or corners of the frame, and LD happens when the target is of large FoV and appears in a high latitude area of the frame.
|
| 139 |
+
|
| 140 |
+
# 5. Experiments
|
| 141 |
+
|
| 142 |
+
# 5.1. Metrics
|
| 143 |
+
|
| 144 |
+
To conduct the experiments, we use the standard one-pass evaluation (OPE) protocol [42] and measure the success S, precision P, and normalized precision $\overline{\mathrm{P}}$ [31] of the trackers over the 120 sequences. Success S is computed as the intersection over union (IoU) between the tracking results $B^{tr}$ and the ground truth annotations $B^{gt}$. The trackers are ranked by the area under curve (AUC), which is the average of the success rates over thresholds sampled in [0, 1]. The precision P is computed as the distance between the result centers $\mathbf{C}^{tr}$ and the ground truth centers $\mathbf{C}^{gt}$. The trackers are ranked by the precision rate at a specific threshold (i.e., 20 pixels). The normalized precision $\overline{\mathrm{P}}$ is scale-invariant: it normalizes the precision P by the size of the ground truth and then ranks the trackers using the AUC of $\overline{\mathrm{P}}$ between 0 and 0.5.
|
| 145 |
+
|
| 146 |
+
For a perspective image using (r)BBox, these metrics can be formulated as:
|
| 147 |
+
|
| 148 |
+
$$
|
| 149 |
+
\mathbf{S} = IoU\left(B^{gt}, B^{tr}\right), \quad \mathbf{P} = \left\| \mathbf{C}_{xy}^{gt} - \mathbf{C}_{xy}^{tr} \right\|_{2}, \quad \overline{\mathbf{P}} = \left\| \mathrm{diag}\left(B^{gt}, B^{tr}\right) \left(\mathbf{C}_{xy}^{gt} - \mathbf{C}_{xy}^{tr}\right) \right\|_{2}. \tag{3}
|
| 150 |
+
$$
|
| 151 |
+
|
| 152 |
+
However, for $360^{\circ}$ images, the target predictions may cross the image. To handle this situation and increase the accuracy of BBox evaluation, we introduce dual success $\mathbf{S}_{\text{dual}}$ and precision $\mathbf{P}_{\text{dual}}$ . Specifically, we shift the $B^{gt}$ to the left and right by $W$ , the width of $360^{\circ}$ images, to obtain two temporary ground truth $B_{l}^{gt}$ and $B_{r}^{gt}$ . Based on the new ground truth, we then calculate extra success $\mathbf{S}_l$ and $\mathbf{S}_r$ and precision $\mathbf{P}_l$ and $\mathbf{P}_r$ using Eq. 3. Finally, $\mathbf{S}_{\text{dual}}$ and $\mathbf{P}_{\text{dual}}$ are measured by:
|
| 153 |
+
|
| 154 |
+
$$
|
| 155 |
+
\mathbf{S}_{dual} = \max\left\{ \mathbf{S}_{l}, \mathbf{S}, \mathbf{S}_{r} \right\}, \quad \mathbf{P}_{dual} = \min\left\{ \mathbf{P}_{l}, \mathbf{P}, \mathbf{P}_{r} \right\}. \tag{4}
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
$\mathbf{S}_{\text{dual}}$ and $\mathbf{S}$ , as $\mathbf{P}_{\text{dual}}$ and $\mathbf{P}$ , are the same when the annotation does not cross the image border. Similarly, we can compute the normalized dual $\overline{\mathbf{P}}_{\text{dual}}$ .
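A small sketch of the dual metrics in Eq. 4 for axis-aligned BBoxes: the ground truth is also evaluated shifted by $\pm W$ and the best value is kept. The box format $[x, y, w, h]$ and the toy numbers are only for illustration.

```python
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes given as [x, y, w, h]."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def dual_metrics(box_gt, box_tr, erp_width):
    """Dual success and dual precision: best IoU / smallest centre distance against
    the ground truth and its copies shifted by +/- the ERP width."""
    gts = [[box_gt[0] + s, box_gt[1], box_gt[2], box_gt[3]]
           for s in (-erp_width, 0.0, erp_width)]
    s_dual = max(iou(g, box_tr) for g in gts)
    c_tr = np.array([box_tr[0] + box_tr[2] / 2, box_tr[1] + box_tr[3] / 2])
    p_dual = min(np.linalg.norm(np.array([g[0] + g[2] / 2, g[1] + g[3] / 2]) - c_tr)
                 for g in gts)
    return s_dual, p_dual

# Target crosses the right border: the shifted ground-truth copy matches the
# prediction that re-appears near x = 0.
print(dual_metrics([3800, 900, 100, 80], [10, 905, 90, 75], erp_width=3840))
```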
|
| 159 |
+
|
| 160 |
+
Since objects suffer significant non-linear distortion in the polar regions due to the equirectangular projection, the distance between the predicted and ground truth centers may be large on the 2D image even though they are adjacent on the spherical surface. This makes the precision metric $\mathrm{P}_{\text{dual}}$ overly sensitive for $360^{\circ}$ images. Therefore, we propose a new metric $\mathrm{P}_{\text{angle}}$, measured as the angle $\langle \mathbf{C}_{\text{lonlat}}^{gt}, \mathbf{C}_{\text{lonlat}}^{tr} \rangle$ between the vectors of the ground truth and the tracker result in the spherical coordinate system. Trackers are ranked by the angle precision rate at a specific threshold, i.e., $3^{\circ}$. Moreover, when target positions are represented by BFoV or rBFoV, we utilize the spherical IoU [9] to compute the success metric, denoted as $\mathbf{S}_{\text{sphere}}$, and only $\mathbf{S}_{\text{sphere}}$ and $\mathbf{P}_{\text{angle}}$ are measured.
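And a sketch of the angular distance underlying $\mathrm{P}_{\text{angle}}$, computed between the two centre directions on the unit sphere; longitude/latitude are taken in radians and the spherical-to-Cartesian convention is an assumption.

```python
import numpy as np

def angle_precision(gt_lonlat, tr_lonlat):
    """Angular distance (degrees) between ground-truth and predicted centres."""
    def to_unit(lon, lat):
        return np.array([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)])
    v1, v2 = to_unit(*gt_lonlat), to_unit(*tr_lonlat)
    return np.degrees(np.arccos(np.clip(v1 @ v2, -1.0, 1.0)))

# Centres far apart in ERP pixels can still be close on the sphere near the poles.
print(angle_precision((0.0, 1.5), (np.pi, 1.55)))
```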
|
| 161 |
+
|
| 162 |
+
# 5.2. Baseline trackers
|
| 163 |
+
|
| 164 |
+
We evaluated 20 state-of-the-art visual object trackers on 360VOT. Following the latest developments in visual tracking, the compared methods can be roughly classified into three groups: transformer trackers, Siamese trackers, and other deep learning based trackers. Specifically, the transformer trackers contain Stark [44], ToMP [29], MixFormer [8], SimTrack [3] and AiATrack [18]. The Siamese trackers include SiamDW [47], SiamMask [40], SiamRPNpp [26], SiamBAN [4], AutoMatch [46], Ocean [48] and SiamX [21]. For other deep trackers, UDT [39], Meta-SDNet [33], MDNet [32], ECO [11], ATOM [10], KYS [2], DiMP [1] and PrDiMP [12] are evaluated. We used the official implementations, trained models, and default configurations to ensure a fair comparison among trackers. In addition, we developed a new baseline, AiATrack-360, which combines the transformer tracker AiATrack [18] with our 360 tracking framework. We also adapt a different kind of tracker
|
| 165 |
+
|
| 166 |
+
<table><tr><td rowspan="2">Tracker</td><td colspan="4">360VOT BBox</td></tr><tr><td>S_dual(AUC)</td><td>P_dual</td><td>P_dual(AUC)</td><td>P_angle</td></tr><tr><td>UDT [39]</td><td>0.104</td><td>0.075</td><td>0.117</td><td>0.098</td></tr><tr><td>Meta-SDNet [33]</td><td>0.131</td><td>0.097</td><td>0.164</td><td>0.136</td></tr><tr><td>MDNet [32]</td><td>0.150</td><td>0.106</td><td>0.188</td><td>0.143</td></tr><tr><td>ECO [11]</td><td>0.175</td><td>0.130</td><td>0.212</td><td>0.179</td></tr><tr><td>ATOM [10]</td><td>0.252</td><td>0.216</td><td>0.286</td><td>0.266</td></tr><tr><td>KYS [2]</td><td>0.286</td><td>0.245</td><td>0.312</td><td>0.296</td></tr><tr><td>DiMP [1]</td><td>0.290</td><td>0.247</td><td>0.315</td><td>0.299</td></tr><tr><td>PrDiMP [12]</td><td>0.341</td><td>0.292</td><td>0.371</td><td>0.347</td></tr><tr><td>SiamDW [47]</td><td>0.156</td><td>0.116</td><td>0.190</td><td>0.156</td></tr><tr><td>SiamMask [40]</td><td>0.189</td><td>0.161</td><td>0.220</td><td>0.203</td></tr><tr><td>SiamRPNpp [26]</td><td>0.201</td><td>0.175</td><td>0.233</td><td>0.213</td></tr><tr><td>SiamBAN [4]</td><td>0.205</td><td>0.187</td><td>0.242</td><td>0.227</td></tr><tr><td>AutoMatch [46]</td><td>0.208</td><td>0.202</td><td>0.261</td><td>0.248</td></tr><tr><td>Ocean [48]</td><td>0.240</td><td>0.223</td><td>0.287</td><td>0.264</td></tr><tr><td>SiamX [21]</td><td>0.302</td><td>0.265</td><td>0.331</td><td>0.315</td></tr><tr><td>Stark [44]</td><td>0.381</td><td>0.356</td><td>0.403</td><td>0.408</td></tr><tr><td>ToMP [29]</td><td>0.393</td><td>0.352</td><td>0.421</td><td>0.413</td></tr><tr><td>MixFormer [8]</td><td>0.395</td><td>0.378</td><td>0.417</td><td>0.424</td></tr><tr><td>SimTrack [3]</td><td>0.400</td><td>0.373</td><td>0.421</td><td>0.424</td></tr><tr><td>AiATrack [18]</td><td>0.405</td><td>0.369</td><td>0.427</td><td>0.423</td></tr><tr><td>SiamX-360</td><td>0.391</td><td>0.365</td><td>0.430</td><td>0.425</td></tr><tr><td>AiATrack-360</td><td>0.534</td><td>0.506</td><td>0.563</td><td>0.574</td></tr></table>
|
| 167 |
+
|
| 168 |
+
SiamX [21] with our framework, named SiamX-360, to verify the generality of the proposed framework.
|
| 169 |
+
|
| 170 |
+
# 5.3. Performance based on BBox
|
| 171 |
+
|
| 172 |
+
Overall performance. Existing trackers take the BBox of the first frame to initialize tracking, and the inference results are also in the form of BBox. Table 3 shows comparison results among four groups of trackers, i.e., other deep trackers, Siamese trackers, transformer baselines, and the adapted trackers, one group per block in the table. According to the quantitative results, PrDiMP [12], SiamX [21] and AiATrack-360 perform best within their respective groups. Owing to their powerful network architectures, the transformer trackers generally outperform the other groups of compared trackers. After integrating our proposed framework into AiATrack, AiATrack-360 achieves a significant performance increase of $12.9\%$, $13.7\%$, $13.6\%$ and $15.1\%$ in terms of $\mathrm{S}_{\text{dual}}$, $\mathrm{P}_{\text{dual}}$, $\overline{\mathrm{P}}_{\text{dual}}$ and $\mathrm{P}_{\text{angle}}$ respectively, and outperforms all other trackers by a large margin. Compared to SiamX, SiamX-360 is improved by $8.9\%$ $\mathrm{S}_{\text{dual}}$, $10\%$ $\mathrm{P}_{\text{dual}}$, $9.6\%$ $\overline{\mathrm{P}}_{\text{dual}}$ and $11\%$ $\mathrm{P}_{\text{angle}}$, which is comparable with the other transformer trackers. Although the performance gains of AiATrack-360 and SiamX-360 differ, they validate the effectiveness and generalization of our 360 tracking framework
|
| 173 |
+
|
| 174 |
+
Table 3: Overall performance on 360VOT BBox in terms of dual success, dual precision, normalized dual precision, and angle precision. Bold blue indicates the best results in the tracker group. Bold red indicates the best results overall.
|
| 175 |
+
|
| 176 |
+
<table><tr><td rowspan="2">Tracker</td><td colspan="4">360VOT rBBox</td></tr><tr><td>S_dual(AUC)</td><td>P_dual</td><td>\(\overline{P}_{dual}(AUC)\)</td><td>P_angle</td></tr><tr><td>SiamX-360</td><td>0.205</td><td>0.278</td><td>0.278</td><td>0.327</td></tr><tr><td>AiATrack-360</td><td>0.362</td><td>0.449</td><td>0.516</td><td>0.535</td></tr><tr><td></td><td colspan="2">360VOT BFoV</td><td colspan="2">360VOT rBFoV</td></tr><tr><td></td><td>\(S_{sphere}(AUC)\)</td><td>\(P_{angle}\)</td><td>\(S_{sphere}(AUC)\)</td><td>\(P_{angle}\)</td></tr><tr><td>SiamX-360</td><td>0.262</td><td>0.327</td><td>0.243</td><td>0.323</td></tr><tr><td>AiATrack-360</td><td>0.548</td><td>0.564</td><td>0.426</td><td>0.530</td></tr></table>
|
| 177 |
+
|
| 178 |
+
Table 4: Tracking performance based on other annotations of 360VOT using 360 tracking framework.
|
| 179 |
+
|
| 180 |
+
on $360^{\circ}$ visual object tracking. They can serve as a new baseline for future comparison.
|
| 181 |
+
|
| 182 |
+
Attribute-based performance. Furthermore, we evaluate all trackers under the 20 attributes in order to analyze the different challenges faced by existing trackers. In Figure 6, we plot the results on the videos with the crossing border (CB), fast motion on the sphere (FMS), and latitude variation (LV) attributes. These are the three exclusive and most challenging attributes of 360VOT. For complete results on the other attributes, please refer to the supplementary. Compared to the overall performance, all trackers suffer performance degradation, especially on the CB and FMS attributes. For example, $\mathrm{P}_{\text{dual}}$ of SimTrack decreases by $4.2\%$ and $5.3\%$ on CB and FMS respectively. However, the performance of AiATrack-360 still dominates on all three adverse attributes, while SiamX-360 also obtains stable performance gains.
|
| 183 |
+
|
| 184 |
+
# 5.4. Performance based on other annotations
|
| 185 |
+
|
| 186 |
+
Apart from BBox ground truth, we provide additional ground truth, including rBBox, BFoV, and rBFoV. As our 360 tracking framework can estimate approximate rBBox, BFoV, and rBFoV from local BBox predictions, we additionally evaluate the performance of SiamX-360 and AiATrack-360 on these three representations (Table 4). Compared with the results on BBox (Table 3), the performance on rBBox declines vastly: SiamX-360 and AiATrack-360 only achieve 0.205 and 0.362 $\mathrm{S}_{\text{dual}}$, respectively. By contrast, the evaluation of BFoV and rBFoV yields more reasonable and consistent numbers. In addition, we display visual results of AiATrack-360 and AiATrack in Figure 7. AiATrack-360 can consistently follow and localize the target in challenging cases. Compared with (r)BBox, (r)BFoV bounds the target accurately with fewer irrelevant areas. From the extensive evaluation, we observe that using BFoV and rBFoV would be beneficial for object localization in omnidirectional scenes. While SiamX-360 and AiATrack-360 serve as new baselines to demonstrate this potential, developing new trackers that can directly predict rBBox, BFoV, and rBFoV will be an important future direction.
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
|
| 190 |
+

|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
(a) Crossing Border
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
(b) Fast Motion on the Sphere
|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
(c) Latitude Variation
|
| 202 |
+
|
| 203 |
+

|
| 204 |
+
Figure 6: Comparing BBox tracking performances of different trackers in terms of dual success rate and angle precision rate under the three distinct attributes of 360VOT.
|
| 205 |
+
Figure 7: Qualitative results of the baseline on different representations. Red denotes the ground truth, and blue denotes the results of AiATrack-360. The green in the first row denotes the results of AiATrack.
|
| 206 |
+
|
| 207 |
+
# 6. Discussion and Conclusion
|
| 208 |
+
|
| 209 |
+
The 360 tracking framework with existing tracker integration can, to some extent, succeed in omnidirectional visual object tracking, but much room for improvement remains. We discuss some promising directions here. 1) Data augmentation. Existing trackers are trained on datasets of perspective images, while large-scale training data of $360^{\circ}$ images are lacking. During training, we can introduce projection distortion to augment the training data; a sketch of such an augmentation is given below. 2) Long-term omnidirectional tracking algorithms. The trackers enhanced by our tracking framework are technically still classified as short-term trackers. As target occlusion is a noticeable attribute of 360VOT, long-term trackers capable of target relocalization can perform better. Nonetheless, how to search for targets effectively and efficiently over a whole $360^{\circ}$ image is a challenge. 3) New network architectures. SphereNet [7] learns spherical representations for omnidirectional detection and classification, while DeepSphere [13] proposes a graph-based spherical CNN. Trackers exploiting these network architectures tailored for omnidirectional images may be able to extract better features and correlations for robust tracking.
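As a rough illustration of the first direction, the sketch below warps a perspective training image into an equirectangular-style patch at a chosen latitude to simulate projection distortion; the helper name, conventions, and parameters are assumptions for illustration, not part of the 360VOT toolkit.

```python
import cv2
import numpy as np

def distort_to_equirect_patch(img, fov_deg=90.0, center_lat_deg=45.0, out_hw=(256, 256)):
    """Re-sample a perspective image as an equirectangular patch centred at a
    given latitude, introducing the stretching a 360-degree camera would produce."""
    h_out, w_out = out_hw
    h_in, w_in = img.shape[:2]
    f = 0.5 * w_in / np.tan(np.radians(fov_deg) / 2)        # focal length of the source view

    half = np.radians(fov_deg) / 2                          # angular span of the patch
    lon = np.linspace(-half, half, w_out)
    lat = np.linspace(-half, half, h_out) + np.radians(center_lat_deg)
    lon, lat = np.meshgrid(lon, lat)

    # unit rays on the sphere, then rotate so the patch centre looks down the z-axis
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    c = np.radians(center_lat_deg)
    y, z = y * np.cos(c) - z * np.sin(c), y * np.sin(c) + z * np.cos(c)

    u = (f * x / z + w_in / 2).astype(np.float32)           # project back into the
    v = (f * y / z + h_in / 2).astype(np.float32)           # perspective source image
    return cv2.remap(img, u, v, cv2.INTER_LINEAR)
```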
|
| 212 |
+
|
| 213 |
+
By releasing 360VOT, we believe the new dataset, representations, metrics, and benchmark will encourage further research on, and applications of, omnidirectional visual object tracking in both computer vision and robotics.
|
| 214 |
+
|
| 215 |
+
Acknowledgement. This research is partially supported by an internal grant from HKUST (R9429) and the Innovation and Technology Support Programme of the Innovation and Technology Fund (Ref: ITS/200/20FP). We thank Xiaopeng Guo, Yipeng Zhu, Xiaoyu Mo, and Tsz-Chun Law for data collection and processing.
|
| 216 |
+
|
| 217 |
+
# References
|
| 218 |
+
|
| 219 |
+
[1] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6182-6191, 2019. 1, 6, 7
|
| 220 |
+
[2] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Know your surroundings: Exploiting scene information for object tracking. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII 16, pages 205-221. Springer, 2020. 6, 7
|
| 221 |
+
[3] Boyu Chen, Peixia Li, Lei Bai, Lei Qiao, Qiuhong Shen, Bo Li, Weihao Gan, Wei Wu, and Wanli Ouyang. Backbone is all your need: A simplified architecture for visual object tracking. arXiv preprint arXiv:2203.05328, 2022. 6, 7
|
| 222 |
+
[4] Zedu Chen, Bineng Zhong, Guorong Li, Shengping Zhang, and Rongrong Ji. Siamese box adaptive network for visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6668-6677, 2020. 6, 7
|
| 223 |
+
[5] Shih-Han Chou, Cheng Sun, Wen-Yen Chang, Wan-Ting Hsu, Min Sun, and Jianlong Fu. 360-indoor: Towards learning real-world objects in $360^{\circ}$ indoor equirectangular images. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020. 3
|
| 224 |
+
[6] Benjamin Coors, Alexandru Paul Condurache, and Andreas Geiger. Spherenet: Learning spherical representations for detection and classification in omnidirectional images. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018. 3
|
| 225 |
+
[7] Benjamin Coors, Alexandru Paul Condurache, and Andreas Geiger. Spherenet: Learning spherical representations for detection and classification in omnidirectional images. In Proceedings of the European conference on computer vision (ECCV), pages 518-533, 2018. 8
|
| 226 |
+
[8] Yutao Cui, Cheng Jiang, Limin Wang, and Gangshan Wu. Mixformer: End-to-end tracking with iterative mixed attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13608-13618, 2022. 6, 7
|
| 227 |
+
[9] Feng Dai, Bin Chen, Hang Xu, Yike Ma, Xiaodong Li, Bailan Feng, Peng Yuan, Chenggang Yan, and Qiang Zhao. Unbiased iou for spherical image object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 508-515, 2022. 2, 6
|
| 228 |
+
|
| 229 |
+
[10] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Atom: Accurate tracking by overlap maximization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4660-4669, 2019. 6, 7
|
| 230 |
+
[11] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Eco: Efficient convolution operators for tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6638-6646, 2017. 1, 6, 7
|
| 231 |
+
[12] Martin Danelljan, Luc Van Gool, and Radu Timofte. Probabilistic regression for visual tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7183-7192, 2020. 6, 7
|
| 232 |
+
[13] Michael Defferrard, Martino Milani, Frédérique Gusset, and Nathanaël Perraudin. Deepsphere: a graph-based spherical cnn. arXiv preprint arXiv:2012.15000, 2020. 8
|
| 233 |
+
[14] Dawei Du, Yuankai Qi, Hongyang Yu, Yifan Yang, Kaiwen Duan, Guorong Li, Weigang Zhang, Qingming Huang, and Qi Tian. The unmanned aerial vehicle benchmark: Object detection and tracking. In Proceedings of the European conference on computer vision (ECCV), pages 370-386, 2018. 2
|
| 234 |
+
[15] Matteo Dunnhofer, Antonino Furnari, Giovanni Maria Farinella, and Christian Micheloni. Is first person vision challenging for object tracking? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2698-2710, 2021. 2
|
| 235 |
+
[16] Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. Lasot: A high-quality benchmark for large-scale single object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5374-5383, 2019. 1, 2
|
| 236 |
+
[17] Heng Fan, Halady Akhilesha Miththanthaya, Siranjiv Ramana Rajan, Xiaoqiong Liu, Zhilin Zou, Yuewei Lin, Haibin Ling, et al. Transparent object tracking benchmark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10734-10743, 2021. 2
|
| 237 |
+
[18] Shenyuan Gao, Chunluan Zhou, Chao Ma, Xinggang Wang, and Junsong Yuan. Aiatrack: Attention in attention for transformer visual tracking. In European Conference on Computer Vision, pages 146-164. Springer, 2022. 6, 7
|
| 238 |
+
[19] João F Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. IEEE transactions on pattern analysis and machine intelligence, 37(3):583-596, 2014. 1
|
| 239 |
+
[20] Huajian Huang and Sai-Kit Yeung. 360vo: Visual odometry using a single 360 camera. In 2022 International Conference on Robotics and Automation (ICRA), pages 5594–5600. IEEE, 2022. 3
|
| 240 |
+
[21] Huajian Huang and Sai-Kit Yeung. Siamx: An efficient long-term tracker using cross-level feature correlation and adaptive tracking scheme. In International Conference on Robotics and Automation (ICRA). IEEE, 2022. 3, 5, 6, 7
|
| 241 |
+
[22] Lianghua Huang, Xin Zhao, and Kaiqi Huang. Got-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE transactions on pattern analysis and machine intelligence, 43(5):1562-1577, 2019. 1, 2
|
| 244 |
+
[23] Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan, and Simon Lucey. Need for speed: A benchmark for higher frame rate object tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 1125-1134, 2017. 2
|
| 245 |
+
[24] Matej Kristan, Jiri Matas, Aleks Leonardis, Tomas Vojir, Roman Pflugfelder, Gustavo Fernandez, Georg Nebehay, Fatih Porikli, and Luka Čehovin. A novel performance evaluation methodology for single-target trackers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11):2137-2155, Nov 2016. 1, 2
|
| 246 |
+
[25] Annan Li, Min Lin, Yi Wu, Ming-Hsuan Yang, and Shuicheng Yan. Nus-pro: A new visual tracking challenge. IEEE transactions on pattern analysis and machine intelligence, 38(2):335-349, 2015. 2
|
| 247 |
+
[26] Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4282-4291, 2019. 1, 6, 7
|
| 248 |
+
[27] Siyi Li and Dit-Yan Yeung. Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017. 2
|
| 249 |
+
[28] Pengpeng Liang, Erik Blasch, and Haibin Ling. Encoding color information for visual tracking: Algorithms and benchmark. IEEE transactions on image processing, 24(12):5630-5644, 2015. 2
|
| 250 |
+
[29] Christoph Mayer, Martin Danelljan, Goutam Bhat, Matthieu Paul, Danda Pani Paudel, Fisher Yu, and Luc Van Gool. Transforming model prediction for tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8731-8740, 2022. 6, 7
|
| 251 |
+
[30] Matthias Mueller, Neil Smith, and Bernard Ghanem. A benchmark and simulator for uav tracking. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 445-461. Springer, 2016. 1, 2
|
| 252 |
+
[31] Matthias Muller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, and Bernard Ghanem. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild. In Proceedings of the European conference on computer vision (ECCV), pages 300-317, 2018. 2, 6
|
| 253 |
+
[32] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 1, 6, 7
|
| 254 |
+
[33] Eunbyung Park and Alexander C Berg. Meta-tracker: Fast and robust online adaptation for visual object trackers. In Proceedings of the European Conference on Computer Vision (ECCV), pages 569-585, 2018. 6, 7
|
| 255 |
+
[34] Esteban Real, Jonathon Shlens, Stefano Mazzocchi, Xin Pan, and Vincent Vanhoucke. Youtube-boundingboxes: A large high-precision human-annotated data set for object detection in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5296-5305, 2017. 2
|
| 258 |
+
[35] Arnold WM Smeulders, Dung M Chu, Rita Cucchiara, Simone Calderara, Afshin Dehghan, and Mubarak Shah. Visual tracking: An experimental survey. IEEE transactions on pattern analysis and machine intelligence, 36(7):1442-1468, 2013. 2
|
| 259 |
+
[36] John P Snyder. Flattening the earth: two thousand years of map projections. University of Chicago Press, 1997. 2
|
| 260 |
+
[37] Konstantin Sofiuk, Ilya A Petrov, and Anton Konushin. Reviving iterative training with mask guidance for interactive segmentation. In 2022 IEEE International Conference on Image Processing (ICIP), pages 3141-3145. IEEE, 2022. 5
|
| 261 |
+
[38] Jack Valmadre, Luca Bertinetto, Joao F Henriques, Ran Tao, Andrea Vedaldi, Arnold WM Smeulders, Philip HS Torr, and Efstratios Gavves. Long-term tracking in the wild: A benchmark. In Proceedings of the European conference on computer vision (ECCV), pages 670-685, 2018. 2
|
| 262 |
+
[39] Ning Wang, Yibing Song, Chao Ma, Wengang Zhou, Wei Liu, and Houqiang Li. Unsupervised deep tracking. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 6, 7
|
| 263 |
+
[40] Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, and Philip HS Torr. Fast online object tracking and segmentation: A unifying approach. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 1328-1338, 2019. 6, 7
|
| 264 |
+
[41] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Online object tracking: A benchmark. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2411-2418, 2013. 2
|
| 265 |
+
[42] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1834-1848, 2015. 1, 2, 6
|
| 266 |
+
[43] Hang Xu, Qiang Zhao, Yike Ma, Xiaodong Li, Peng Yuan, Bailan Feng, Chenggang Yan, and Feng Dai. Pandora: A panoramic detection dataset for object with orientation. In ECCV, 2022. 2, 3
|
| 267 |
+
[44] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10448-10457, 2021. 6, 7
|
| 268 |
+
[45] Dawen Yu and Shunping Ji. Grid based spherical cnn for object detection from panoramic images. Sensors, 19(11):2622, 2019. 3
|
| 269 |
+
[46] Zhipeng Zhang, Yihao Liu, Xiao Wang, Bing Li, and Weiming Hu. Learn to match: Automatic matching network design for visual tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13339-13348, 2021. 6, 7
|
| 270 |
+
[47] Zhipeng Zhang and Houwen Peng. Deeper and wider siamese networks for real-time visual tracking. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 6, 7
|
| 271 |
+
[48] Zhipeng Zhang, Houwen Peng, Jianlong Fu, Bing Li, and Weiming Hu. Ocean: Object-aware anchor-free tracking. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXI 16, pages 771-787. Springer, 2020. 6, 7
|
| 274 |
+
[49] Pengyu Zhao, Ansheng You, Yuanxing Zhang, Jiaying Liu, Kaigui Bian, and Yunhai Tong. Spherical criteria for fast and accurate $360^{\circ}$ object detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07):12959-12966, 2020. 3
|
| 275 |
+
[50] Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, and Weiming Hu. Distractor-aware siamese networks for visual object tracking. In Proceedings of the European conference on computer vision (ECCV), pages 101-117, 2018. 3
|
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/images.zip
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb206c03f89f965afc84eaa08093ac8667baf4029376f6334076c41673952e96
size 1019194
360votanewbenchmarkdatasetforomnidirectionalvisualobjecttracking/layout.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0fb4306ede7eaad8d59a877e10cc713a5728eab1000d43454cf2dc9e7eebfd86
size 402307
3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d42712cdebc8a2bbe72e7fa24f038d0b8f7b5ae7ce4c10c95f08f1c75f6700f8
size 93037
3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_model.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5b8e89cc9b4eaf37eae0864b5b18a155b1533054582168a2d5df4da3c116fee
size 120985
3dawareblendingwithgenerativenerfs/c6b7200b-73ab-4d3a-939b-a50441252220_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6de02757b7309a0225955be3e754e3f2d2e671228c8708d8f257e7a1594ad7a
size 3170814
3dawareblendingwithgenerativenerfs/full.md
ADDED
|
@@ -0,0 +1,401 @@
# 3D-aware Blending with Generative NeRFs
|
| 2 |
+
|
| 3 |
+
Hyunsu Kim$^{1}$
|
| 4 |
+
|
| 5 |
+
Gayoung Lee$^{1}$
|
| 6 |
+
|
| 7 |
+
Yunjey Choi$^{1}$
|
| 8 |
+
|
| 9 |
+
Jin-Hwa Kim$^{1,2}$
|
| 10 |
+
|
| 11 |
+
Jun-Yan Zhu$^{3}$
|
| 12 |
+
|
| 13 |
+
$^{1}$ NAVER AI Lab
|
| 14 |
+
|
| 15 |
+
$^{2}$ SNU AIIS
|
| 16 |
+
|
| 17 |
+
$^{3}$ CMU
|
| 18 |
+
|
| 19 |
+
# Abstract
|
| 20 |
+
|
| 21 |
+
Image blending aims to combine multiple images seamlessly. It remains challenging for existing 2D-based methods, especially when input images are misaligned due to differences in 3D camera poses and object shapes. To tackle these issues, we propose a 3D-aware blending method using generative Neural Radiance Fields (NeRF), including two key components: 3D-aware alignment and 3D-aware blending. For 3D-aware alignment, we first estimate the camera pose of the reference image with respect to generative NeRFs and then perform pose alignment for objects. To further leverage 3D information of the generative NeRF, we propose 3D-aware blending that utilizes volume density and blends on the NeRF's latent space, rather than raw pixel space. Collectively, our method outperforms existing 2D baselines, as validated by extensive quantitative and qualitative evaluations with FFHQ and AFHQ-Cat.
|
| 22 |
+
|
| 23 |
+
# 1. Introduction
|
| 24 |
+
|
| 25 |
+
Image blending aims at combining elements from multiple images naturally, enabling a wide range of applications in content creation, and virtual and augmented realities [95, 96]. However, blending images seamlessly requires delicate adjustment of color, texture, and shape, often requiring users' expertise and tedious manual processes. To reduce human efforts, researchers have proposed various automatic image blending algorithms, including classic methods [62, 49, 7, 76] and deep neural networks [93, 79, 54].
|
| 26 |
+
|
| 27 |
+
Despite significant progress, blending two unaligned images remains a challenge. Current 2D-based methods often assume that object shapes and camera poses have been accurately aligned. As shown in Figure 1c, even slight misalignment can produce unnatural results, as it is obvious to human eyes that foreground and background objects were captured using different cameras. Several methods [34, 52, 12, 66, 86] warp an image via 2D affine transformation. However, these approaches do not account for 3D geometric differences, such as out-of-plane rotation and 3D shape differences. 3D alignment is much more difficult for users and algorithms, as it requires inferring the 3D structure from a single view.
|
| 28 |
+
|
| 29 |
+
(a) Original
|
| 30 |
+
|
| 31 |
+
(b) Reference
|
| 32 |
+
|
| 33 |
+
(c) 2D method
|
| 34 |
+
|
| 35 |
+
(d) Ours
|
| 36 |
+
|
| 37 |
+
# Roughly aligned
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+

|
| 42 |
+
|
| 43 |
+

|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
|
| 51 |
+

|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+

|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
Figure 1: Image blending is challenging for unaligned original and reference images. Existing 2D-based methods [42] struggle to synthesize realistic results due to the 3D object pose differences between foreground and background. In contrast, we propose a 3D-aware blending method that aligns and composes unaligned images without manual effort.
|
| 65 |
+
|
| 66 |
+

|
| 67 |
+
|
| 68 |
+

|
| 69 |
+
|
| 70 |
+

|
| 71 |
+
|
| 72 |
+
Additionally, even when previous methods obtain aligned images, they still blend them in 2D space. Blending with only 2D signals, such as pixel values (RGB) or 2D feature maps, does not account for the 3D structure of objects.
|
| 73 |
+
|
| 74 |
+
To address the above issues, we propose a 3D-aware image blending method based on generative Neural Radiance Fields (NeRFs) [9, 33, 10, 59, 67, 91]. Generative NeRFs learn to synthesize images in 3D using only collections of single-view images. Our method projects the input images to the latent space of generative NeRFs and performs 3D-aware alignment by novel view synthesis. We then perform blending on NeRFs' latent space. Concretely, we formulate an optimization problem in which a latent code is optimized to synthesize an image and volume density of the foreground
|
| 75 |
+
|
| 76 |
+

|
| 77 |
+
(a) 2D blending
|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
(b) 2D blending with our 3D-aware alignment
|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
(c) Proposed method (Ours)
|
| 84 |
+
Figure 2: Comparison with the existing blending methods. Red lines denote target blending parts. (a) 2D blending. 2D blending methods compose two images without any 3D-aware alignment. (b) 2D blending with 3D-aware alignment. To address misalignment, we apply our 3D-aware alignment method to existing 2D blending methods. (c) Proposed method. We propose 3D-aware blending after applying our 3D-aware alignment. Note that all methods do not use 3D labels or 3D morphable models.
|
| 85 |
+
|
| 86 |
+
close to the reference while preserving the background of the original.
|
| 87 |
+
|
| 88 |
+
Figure 2 shows critical differences between our approach and previous methods. Figure 2a shows a classic 2D blending method composing two 2D images without alignment. We then show the performance of the 2D blending method can be improved using our 3D-aware alignment with generative NeRFs as shown in Figure 2b. To further exploit 3D information, we propose to compose two images in the NeRFs' latent space instead of 2D pixel space. Figure 2c shows our final method.
|
| 89 |
+
|
| 90 |
+
We demonstrate the effectiveness of our 3D-aware alignment and 3D-aware blending (volume density) on unaligned images. Extensive experiments show that our method outperforms both classic and learning-based methods regarding both photorealism and faithfulness to the input images. Additionally, our method can disentangle color and geometric changes during blending, and create multi-view consistent results. To our knowledge, our method is the first general-purpose 3D-aware image blending method capable of blending a diverse set of unaligned images.
|
| 91 |
+
|
| 92 |
+
# 2. Related Work
|
| 93 |
+
|
| 94 |
+
Image blending aims to compose different visual elements into a single image. Seminal works tackle this problem using various low-level visual cues, such as image gradients [62, 36, 75, 28, 74], frequency bands [7, 6], color and noise transfer [82, 72], and segmentation [49, 65, 1, 51]. Later, researchers developed data-driven systems to compose objects with similar lighting conditions, camera poses, and scene contexts [50, 15, 34].
|
| 95 |
+
|
| 96 |
+
Recently, various learning-based methods have been proposed, including blending deep features instead of pixels [73, 20, 31] or designing loss functions based on deep
|
| 97 |
+
|
| 98 |
+
features [88, 87]. Generative Adversarial Networks (GAN) have also been used for image blending [79, 92, 21, 42, 8, 95, 68]. For example, In-DomainGAN [92] exploits GAN inversion to achieve seamless blending, and StyleMapGAN [42] blends images in the spatial latent space. Recently, SDEdit [54] proposes a blending method via diffusion models. The above learning-based methods tend to be more robust than pixel-based methods, but given two images with large pose differences, they may struggle to preserve identity or may produce unnatural artifacts.
|
| 99 |
+
|
| 100 |
+
In specific domains like faces [83, 22, 56, 81] or hair [95, 96, 19, 44], multiple methods can swap and blend unaligned images. However, these methods are limited to faces or hair, and they often need 3D face morphable models [5, 30], or multi-view images [55, 45] to provide 3D information. Our method offers a general-purpose solution that can handle a diverse set of objects without 3D data.
|
| 101 |
+
|
| 102 |
+
3D-aware generative models. Generative image models learn to synthesize realistic 2D images [32, 70, 26, 77, 14]. However, the original formulations do not account for the 3D nature of our visual world, making 3D manipulation difficult. Recently, several methods have integrated implicit scene representation, volumetric rendering, and GANs into generative NeRFs [67, 10, 57, 25]. Given a sampled viewpoint, an image is rendered via volumetric rendering and fed to a discriminator. For example, EG3D [9] uses an efficient 3D representation called tri-planes, and StyleSDF [58] merges the style-based architecture and the SDF-based volume renderer. Multiple works [9, 58, 91, 33] have developed a two-stage model to generate high-resolution images. With GAN inversion methods [94, 64, 29, 60, 23, 97, 27, 2], we can utilize these 3D-aware generative models to align and blend images and produce multi-view consistent 3D effects.
|
| 103 |
+
|
| 104 |
+

|
| 105 |
+
Step 1. Estimate camera poses and latent codes
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
Step 2. Pose alignment
|
| 109 |
+
Figure 3: 3D-aware alignment: we first use a CNN encoder to infer the camera pose of each input image. Step 1. Given the camera pose $\mathbf{c}$ , we estimate the latent code $\mathbf{w}$ for each input using a reconstruction loss $\mathcal{L}_{\mathrm{rec}}$ . Step 2. Given the estimated camera pose $\mathbf{c}_{\mathrm{ori}}$ and latent code $\mathbf{w}_{\mathrm{ref}}$ , we align the reference image to match the pose of the original image.
|
| 110 |
+
|
| 111 |
+
3D-aware image editing. Classic 3D-aware image editing methods can create 3D effects given 2D photographs [40, 16, 41]. However, they often require manual efforts to reconstruct the input's geometry and texture. Recently, to reduce manual efforts, researchers have employed generative NeRFs for 3D-aware editing. For example, EditNeRF [53] uses separate latent codes to edit the shape and color of a NeRF object. NeRF-Editing [85] proposes to reflect geometric edits in implicit neural representations. CLIP-NeRF [78] uses a CLIP loss [63] to ensure that the edited result corresponds to the input condition. In SURF-GAN [48], they discover controllable attributes using NeRFs for training a 3D-controllable GAN. Kobayashi et al. [47] enable editing via semantic scene decomposition. While the above works tackle various image editing tasks, we focus on a different task - image blending, which requires both alignment and harmonization. Compared to previous image blending methods, our method addresses blending in a 3D-aware manner.
|
| 112 |
+
|
| 113 |
+
# 3. Method
|
| 114 |
+
|
| 115 |
+
We aim to perform 3D-aware image blending using only 2D images, with target masks from users for both original and reference images. Our method consists of two stages: 3D-aware alignment and 3D-aware blending. Before we blend, we first align the pair of images regarding the pose.
|
| 116 |
+
|
| 117 |
+
In Section 3.1, we describe pose alignment for entire objects and local alignment for target regions. Then, we apply the 3D-aware blending method in the generative NeRF's latent space in Section 3.2. A variation of our blending method is illustrated in Section 3.3. We combine Poisson blending with our method to achieve near-perfect background preservation. We use EG3D [9] as our backbone, although other 3D-aware generative models, such as StyleSDF [58], can also be applied; see Section E in the supplement.
|
| 118 |
+
|
| 119 |
+
# 3.1. 3D-aware alignment
|
| 120 |
+
|
| 121 |
+
Pose alignment is a requisite process of our blending method, as slight pose misalignment of two images can severely degrade blending quality as shown in Figure 1. To match the reference image $\mathrm{I}_{\mathrm{ref}}$ to the pose of the original image $\mathrm{I}_{\mathrm{ori}}$ , we use a generative NeRF $G$ to estimate the camera pose $\mathbf{c}$ and the latent code $\mathbf{w}$ of each image. In Step 1 in Figure 3, we first train and freeze a CNN encoder (i.e., pose estimation network) to predict the camera poses of input images. During training, we can generate a large number of pairs of camera poses and images using generative NeRF and train the encoder $E$ using a pose reconstruction loss $\mathcal{L}_{\mathrm{pose}}$ as follows:
|
| 122 |
+
|
| 123 |
+
$$
\mathcal{L}_{\mathrm{pose}} = \mathbb{E}_{\mathbf{w}, \mathbf{c}}\, \big\| \mathbf{c} - E\big(G_{\mathrm{RGB}}(\mathbf{w}, \mathbf{c})\big) \big\|_{1}, \tag{1}
$$
|
| 126 |
+
|
| 127 |
+
where $G_{\mathrm{RGB}}$ is an image rendering function with the generative NeRF $G$ , and $\| \cdot \|_1$ is the L1 distance. The latent code $\mathbf{w}$ and camera pose $\mathbf{c}$ are randomly drawn.
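A minimal sketch of this pose-encoder training loop is shown below; the generator interface (`G.sample_w`, `G.render_rgb`), the `sample_camera_pose` helper, and the ResNet backbone are assumptions for illustration, not the released EG3D code.

```python
import torch
import torch.nn.functional as F
import torchvision

encoder = torchvision.models.resnet18(num_classes=3)    # predicts three Euler angles
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

for step in range(num_steps):                            # num_steps: training budget (assumed)
    with torch.no_grad():
        w = G.sample_w(batch_size)                       # random latent codes
        c = sample_camera_pose(batch_size)               # random camera poses (assumed helper)
        imgs = G.render_rgb(w, c)                        # frozen generator renders training pairs
    loss = F.l1_loss(encoder(imgs), c)                   # L_pose = E[ ||c - E(G_RGB(w, c))||_1 ]
    opt.zero_grad(); loss.backward(); opt.step()
```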
|
| 128 |
+
|
| 129 |
+
With our trained encoder, we estimate the camera poses $\mathbf{c}_{\mathrm{ori}}$ and $\mathbf{c}_{\mathrm{ref}}$ (defined as Euler angles, with $\mathbf{c}$ a rotation in $SO(3)$) of the original and reference images, respectively. Given the estimated camera poses, we project the input images $\mathbf{I}_{\mathrm{ori}}$ and $\mathbf{I}_{\mathrm{ref}}$ to the latent codes $\mathbf{w}_{\mathrm{ori}}$ and $\mathbf{w}_{\mathrm{ref}}$ using Pivotal Tuning Inversion (PTI) [64]. We optimize the latent code $\mathbf{w}$ using the reconstruction loss $\mathcal{L}_{\mathrm{rec}}$ as follows:
|
| 130 |
+
|
| 131 |
+
$$
\mathcal{L}_{\mathrm{rec}} = \left\| \mathbf{I} - G_{\mathrm{RGB}}(\mathbf{w}, \mathbf{c}) \right\|_{1} + \mathcal{L}_{\mathrm{LPIPS}}\big(\mathbf{I}, G_{\mathrm{RGB}}(\mathbf{w}, \mathbf{c})\big), \tag{2}
$$
|
| 134 |
+
|
| 135 |
+
where $\mathcal{L}_{\mathrm{LPIPS}}$ is a learned perceptual image patch similarity (LPIPS) [89] loss. For more accurate inversion, we fine-tune the generator $G$ . Inversion details are described in Section B in the supplement. Finally, as shown in Step 2 of Figure 3, we can align the reference image as follows:
|
| 136 |
+
|
| 137 |
+
$$
\mathbf{I}_{\mathrm{ref}}^{\mathrm{R}} = G_{\mathrm{RGB}}\left(\mathbf{w}_{\mathrm{ref}}, \mathbf{c}_{\mathrm{ori}}\right). \tag{3}
$$
|
| 140 |
+
|
| 141 |
+
While pose alignment can align two entire objects, further alignment in editing regions may still be necessary due to variations in scale and translation between object instances. To align target editing regions such as the face, eyes, and ears, we can further employ local alignment in the loosely aligned dataset (AFHQv2). The Iterative Closest Point (ICP) algorithm [3, 17] is applied to meshes, which can adjust their scale and translation. For further details, please refer to Section C in the supplement.
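Putting the inversion (Eq. 2) and the pose alignment (Eq. 3) together, a minimal sketch could look as follows; the `G`, `encoder`, and `lpips_loss` interfaces and the mean-latent initialization are assumptions, and PTI generator fine-tuning as well as the ICP-based local alignment are omitted.

```python
import torch

def invert(G, encoder, lpips_loss, image, num_iters=500):
    """Estimate (w, c) for one image by optimizing Eq. 2 with a frozen generator."""
    c = encoder(image[None]).detach()                 # camera pose from the CNN encoder
    w = G.w_avg.clone().requires_grad_(True)          # start from the mean latent code
    opt = torch.optim.Adam([w], lr=1e-2)
    for _ in range(num_iters):
        rendered = G.render_rgb(w, c)
        loss = (rendered - image).abs().mean() + lpips_loss(rendered, image)
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach(), c

w_ori, c_ori = invert(G, encoder, lpips_loss, I_ori)
w_ref, c_ref = invert(G, encoder, lpips_loss, I_ref)
I_ref_aligned = G.render_rgb(w_ref, c_ori)            # Eq. 3: re-render the reference at the original pose
```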
|
| 142 |
+
|
| 143 |
+

|
| 144 |
+
Figure 4: Our 3D-aware blending pipeline. We employ density-blending loss $(\mathcal{L}_{\mathrm{density}})$ in the volume density of 3D NeRF space, as well as the image-blending loss $(\mathcal{L}_{\mathrm{image}})$ in 2D image space. Green rays pass through the interior of the mask $(\mathbf{m})$ and red rays pass through the exterior of the mask $(\mathbf{1} - \mathbf{m})$ . $\mathcal{L}_{\mathrm{image}}$ and $\mathcal{L}_{\mathrm{density}}$ are used to optimize the latent code $\mathbf{w}_{\mathrm{edit}}$ to generate the well-blended image $\mathbf{I}_{\mathrm{edit}}$ .
|
| 145 |
+
|
| 146 |
+
# 3.2. 3D-aware blending
|
| 147 |
+
|
| 148 |
+
We aim to find the best latent code $\mathbf{w}_{\mathrm{edit}}$ to synthesize a seamless and natural output. To achieve this goal, we exploit both 2D pixel constraints (RGB value) and 3D geometric constraints (volume density). With the proposed image-blending and density-blending losses, we optimize the latent code $\mathbf{w}_{\mathrm{edit}}$ , by matching the foreground with the reference and the background with the original.
|
| 149 |
+
|
| 150 |
+
Image-blending algorithms are often designed to match the color and details of the original image (i.e., background) while preserving the structure of the reference image (i.e., foreground) [62]. As shown in Figure 4, our image-blending loss matches the color and perceptual appearance of the original image's background using a combination of L1 and LPIPS [89], while matching the reference image's foreground using the LPIPS loss alone; applying an L1 loss to the reference region tends to overfit to raw pixel values. Let $\mathbf{I}_{\mathrm{edit}}$ be the rendered image from the latent code $\mathbf{w}_{\mathrm{edit}}$. We define the image-blending loss as follows:
|
| 151 |
+
|
| 152 |
+
$$
\begin{aligned} \mathcal{L}_{\mathrm{image}} = & \left\| (\mathbf{1}-\mathbf{m}) \circ \mathbf{I}_{\mathrm{edit}} - (\mathbf{1}-\mathbf{m}) \circ \mathbf{I}_{\mathrm{ori}} \right\|_{1} \\ & + \lambda_{1}\, \mathcal{L}_{\mathrm{LPIPS}}\big((\mathbf{1}-\mathbf{m}) \circ \mathbf{I}_{\mathrm{edit}},\, (\mathbf{1}-\mathbf{m}) \circ \mathbf{I}_{\mathrm{ori}}\big) \\ & + \lambda_{2}\, \mathcal{L}_{\mathrm{LPIPS}}\big(\mathbf{m} \circ \mathbf{I}_{\mathrm{edit}},\, \mathbf{m} \circ \mathbf{I}_{\mathrm{ref}}\big), \end{aligned} \tag{4}
$$
|
| 155 |
+
|
| 156 |
+
where $\circ$ denotes element-wise multiplication. Here, $\lambda_{1}$ and $\lambda_{2}$ balance each loss term.
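A minimal sketch of this loss in PyTorch-style code is given below; `lpips_loss` and the mask layout (a $[0,1]$ map broadcast over channels) are assumptions.

```python
def image_blending_loss(I_edit, I_ori, I_ref, m, lpips_loss, lambda1=1.0, lambda2=1.0):
    """Eq. 4: L1 + LPIPS on the background, LPIPS only on the masked foreground."""
    bg = 1.0 - m
    l1_bg = (bg * I_edit - bg * I_ori).abs().mean()
    lpips_bg = lpips_loss(bg * I_edit, bg * I_ori)
    lpips_fg = lpips_loss(m * I_edit, m * I_ref)
    return l1_bg + lambda1 * lpips_bg + lambda2 * lpips_fg
```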
|
| 157 |
+
|
| 158 |
+
Density-blending is our key component in 3D-aware image blending. If we use only the image-blending loss, the blending result easily becomes blurry and may not reflect the reference object correctly. In particular, highly structured objects such as hair are hard to blend in 3D NeRF space without the volume density, as shown in Figure 8. By representing each image as a NeRF instance, we can calculate the density $\sigma$ of a given 3D location $\pmb{x} \in \mathbb{R}^3$. Let $\mathcal{R}_{\mathrm{ref}}$ and $\mathcal{R}_{\mathrm{ori}}$ be the sets of rays $r$ passing through the interior and exterior of the target mask $\mathbf{m}$, respectively. For the 3D sample points along the rays in $\mathcal{R}_{\mathrm{ref}}$, we aim to match the density field between the reference and our output, shown as the sample points on a green ray in Figure 4. For the 3D sample points in $\mathcal{R}_{\mathrm{ori}}$, we likewise match the density field between the original and the output, shown as the sample points on a red ray in Figure 4. Let $G_{\sigma}(\mathbf{w}; \pmb{x})$ be the density of a given 3D point $\pmb{x}$ under a given latent code $\mathbf{w}$. Our density-blending loss can be formulated as follows:
|
| 161 |
+
|
| 162 |
+
$$
\begin{aligned} \mathcal{L}_{\mathrm{density}} = & \sum_{\boldsymbol{r} \in \mathcal{R}_{\mathrm{ref}}} \sum_{\boldsymbol{x} \in \boldsymbol{r}} \left\| G_{\sigma}(\mathbf{w}_{\mathrm{edit}}; \boldsymbol{x}) - G_{\sigma}(\mathbf{w}_{\mathrm{ref}}; \boldsymbol{x}) \right\|_{1} \\ & + \sum_{\boldsymbol{r} \in \mathcal{R}_{\mathrm{ori}}} \sum_{\boldsymbol{x} \in \boldsymbol{r}} \left\| G_{\sigma}(\mathbf{w}_{\mathrm{edit}}; \boldsymbol{x}) - G_{\sigma}(\mathbf{w}_{\mathrm{ori}}; \boldsymbol{x}) \right\|_{1}. \end{aligned} \tag{5}
$$
|
| 165 |
+
|
| 166 |
+
Our final objective function includes both image-blending loss and density-blending loss:
|
| 167 |
+
|
| 168 |
+
$$
\mathcal{L} = \lambda\, \mathcal{L}_{\mathrm{image}} + \mathcal{L}_{\mathrm{density}}, \tag{6}
$$
|
| 171 |
+
|
| 172 |
+
where $\lambda$ is a hyperparameter that controls the contribution of the image-blending loss. If the user wants to blend only the shape without reflecting the reference's color, $\lambda_{2}$ in Eqn. 4 is set to 0; otherwise, setting $\lambda_{2}$ to a positive value reflects both the color and geometry of the reference image, as shown in Figure 9.
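A minimal sketch of the resulting optimization over $\mathbf{w}_{\mathrm{edit}}$ is given below; the density query `G.density`, the `sample_points_on_rays` helper, and the iteration count are assumptions, and the per-point L1 terms are averaged rather than summed for readability.

```python
import torch

w_edit = w_ori.clone().requires_grad_(True)            # initialize from the original latent code
opt = torch.optim.Adam([w_edit], lr=1e-2)

for _ in range(200):
    I_edit = G.render_rgb(w_edit, c_ori)
    L_img = image_blending_loss(I_edit, I_ori, I_ref_aligned, m, lpips_loss)     # Eq. 4

    pts_fg = sample_points_on_rays(c_ori, mask=m)       # samples on rays inside the mask
    pts_bg = sample_points_on_rays(c_ori, mask=1 - m)   # samples on rays outside the mask
    L_den = (G.density(w_edit, pts_fg) - G.density(w_ref, pts_fg)).abs().mean() \
          + (G.density(w_edit, pts_bg) - G.density(w_ori, pts_bg)).abs().mean()  # Eq. 5

    loss = lam * L_img + L_den                          # Eq. 6; lam balances the two terms
    opt.zero_grad(); loss.backward(); opt.step()
```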
|
| 173 |
+
|
| 174 |
+
# 3.3. Combining with Poisson blending
|
| 175 |
+
|
| 176 |
+
While our method produces high-quality blending results, incorporating Poisson blending [62] further improves the preservation of the original image details. Figure 5 shows the effect of Poisson blending with our method. We perform
|
| 177 |
+
|
| 178 |
+

|
| 179 |
+
Figure 5: Ours with Poisson blending [62]. Our method alone produces satisfactory blending results but can lose fine details: in the first row, the earring is missing, and high-frequency details such as hair and fur are less pronounced than when our method is combined with Poisson blending.
|
| 180 |
+
|
| 181 |
+
Poisson blending between the original image and the blended image generated by our 3D-aware blending method. Our blending method is modified in two ways. 1) In the initial blending stage, we only preserve the area around the border of the mask instead of all parts of the original image, as we can directly use the original image in the Poisson blending stage. We can reduce the number of iterations from 200 to 100, as improved faithfulness scores are easily achieved; see $\mathrm{m}L_{2}$ and $\mathrm{LPIPS}_{m}$ in Tables 1 and 2. 2) Instead of using the latent code of the original image $\mathbf{w}_{\mathrm{ori}}$ as the initial value of $\mathbf{w}_{\mathrm{edit}}$ , we use the latent code of the reference image $\mathbf{w}_{\mathrm{ref}}$ . This allows us to instantly reflect the identity of the reference image and only optimize $\mathbf{w}_{\mathrm{edit}}$ to reconstruct a small region near the mask boundary of the original image. Note that this is an optional choice, as our method without Poisson blending has already outperformed all the baselines, as shown in Tables 3 and 4.
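As a rough sketch of this optional compositing step, OpenCV's Poisson-based `seamlessClone` can serve as a stand-in for the gradient-domain solver (the paper does not prescribe a particular implementation):

```python
import cv2
import numpy as np

def poisson_composite(blended, original, mask):
    """blended/original: HxWx3 uint8 images; mask: HxW uint8, 255 inside the edited region."""
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))           # centre of the edited region
    return cv2.seamlessClone(blended, original, mask, center, cv2.NORMAL_CLONE)
```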
|
| 182 |
+
|
| 183 |
+
# 4. Experiments
|
| 184 |
+
|
| 185 |
+
In this section, we show the advantage of our full method over several existing methods and ablated baselines. In Section 4.1, we describe our experimental settings, including baselines, datasets, and evaluation metrics. In Section 4.2, we show both quantitative and qualitative comparisons. In addition to the automatic quantitative metrics, our user study shows that our method is preferred over baselines regarding photorealism. In Section 4.3, we analyze the effectiveness of each module via ablation studies. Lastly, Section 4.4 shows useful by-products of our method, such as generating multi-view images and controlling the color and geometry disentanglement. Please see the supplement for experimental details, video results on the webpage, additional results, etc.
|
| 186 |
+
|
| 187 |
+
# 4.1. Experimental setup
|
| 188 |
+
|
| 189 |
+
Baselines. We compare our method with various image blending methods using only 2D input images. For classic methods, we run Poisson blending [62], a widely-used gradient-domain editing method. We also compare with several recent learning-based methods [8, 42, 38, 54]. Latent Composition [8] utilizes the compositionality in GANs by finding the latent code of the roughly collaged inputs on the manifold of the generator. StyleMapGAN [42] proposes the spatial latent space for GANs to enable local parts blending by mixing the spatial latent codes. Recently, Karras et al. [38] proposed StyleGAN3, which provides rotation equivariance. Therefore, we additionally show their blending results by finding the latent code of the composited inputs on the StyleGAN3-R manifold. Both $\mathcal{W}$ and $\mathcal{W}+$ of StyleGAN3-R latent spaces are tested. SDEdit [54] is a diffusion-based blending method that produces a natural-looking result by denoising the corrupted image of a composite image.
|
| 190 |
+
|
| 191 |
+
Datasets. We use FFHQ [39] and AFHQv2-Cat datasets [18] for model training. We use pose alignment for both datasets and apply further local alignment to the loosely aligned dataset (AFHQ).
|
| 192 |
+
|
| 193 |
+
To test blending performance, we use CelebA-HQ [37] for the FFHQ-trained models and AFHQv2-Cat test sets for the AFHQ-trained models. We randomly select 250 pairs of images from each dataset for an original and reference image. We also create a target mask for each pair to automatically simulate a user input using pretrained semantic segmentation networks [84, 90, 13]. We blend 5 and 3 semantic parts in each pair of images for CelebA-HQ and AFHQ, respectively. The total number of blended images in each method is 1,250 (CelebA-HQ) and 750 (AFHQv2-Cat). We also include results on ShapeNet-Car dataset [11] to show that our method works well for non-facial data.
|
| 194 |
+
|
| 195 |
+
Evaluation metrics. For evaluation metrics, we use masked $L_{2}$, masked LPIPS [89], and Kernel Inception Distance (KID) [4]. Masked $L_{2}$ ($\mathrm{mL}_{2}$) is the $L_{2}$ distance between the original image and the blended image on the exterior of the mask, measuring the preservation of non-target areas of the original image. Unlike for background regions, a pixel-wise loss is too strict for the target area changed during blending, so we instead measure perceptual similarity (LPIPS) [89] for the blended regions, referred to as masked LPIPS ($\mathrm{LPIPS}_{m}$) as in previous methods [35, 54]. KID [4] is widely used to quantify the realism of generated images with respect to the real data distribution; we compute KID between the blended images and the training dataset using the clean-fid library [61].
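A minimal sketch of the two masked metrics is given below; the tensor layout and the choice of comparing the masked region against the aligned reference are our assumptions.

```python
def masked_l2(I_blend, I_ori, m):
    """Preservation of the original image outside the mask."""
    bg = 1.0 - m
    return ((bg * I_blend - bg * I_ori) ** 2).mean()

def masked_lpips(I_blend, I_ref, m, lpips_loss):
    """Perceptual faithfulness of the edited region (against the aligned reference)."""
    return lpips_loss(m * I_blend, m * I_ref)
```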
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
|
| 199 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">w/o align (baseline only)</td><td colspan="3">w/ 3D-aware align</td></tr><tr><td>KID ↓</td><td>LPIPSm ↓</td><td>mL2 ↓</td><td>KID ↓</td><td>LPIPSm ↓</td><td>mL2 ↓</td></tr><tr><td>Poisson Blending [62]</td><td>0.006</td><td>0.4203</td><td>0.0069</td><td>0.005</td><td>0.2355</td><td>0.0051</td></tr><tr><td>Latent Composition [8]</td><td>0.012</td><td>0.4735</td><td>0.0388</td><td>0.012</td><td>0.4487</td><td>0.0321</td></tr><tr><td>StyleGAN3 W [38]</td><td>0.016</td><td>0.4379</td><td>0.0353</td><td>0.017</td><td>0.3921</td><td>0.0307</td></tr><tr><td>StyleGAN3 W+ [38]</td><td>0.025</td><td>0.4634</td><td>0.0462</td><td>0.023</td><td>0.4086</td><td>0.0391</td></tr><tr><td>StyleMapGAN (32 × 32) [42]</td><td>0.007</td><td>0.3792</td><td>0.0118</td><td>0.006</td><td>0.1989</td><td>0.0045</td></tr><tr><td>SDEdit [54]</td><td>0.011</td><td>0.3857</td><td>0.0076</td><td>0.008</td><td>0.3427</td><td>0.0003</td></tr><tr><td>Ours</td><td>0.013</td><td>0.2046</td><td>0.0050</td><td>0.013</td><td>0.2046</td><td>0.0050</td></tr><tr><td>Ours + Poisson Blending</td><td>0.002</td><td>0.1883</td><td>0.0007</td><td>0.002</td><td>0.1883</td><td>0.0007</td></tr></table>
|
| 200 |
+
|
| 201 |
+
Table 1: Comparison with baselines in the CelebA-HQ test set. The first and second rows of the figure show the blending results without and with our 3D-aware alignment, respectively. Metric scores on the left side of the table show the results without alignment. We apply our 3D-aware alignment to the baselines on the right side of the table. Lower scores denote better performance in all metrics. The best and second-best scores are bold and underlined. Our method outperforms baselines in all metrics. LC and PB stand for Latent Composition [8] and Poisson Blending [62], respectively. Note that our method always operates 3D-aware alignment, as it is an integral part of our algorithm.
|
| 202 |
+
|
| 203 |
+
User study. To further examine the effectiveness of our 3D-aware blending method, we conduct a user study on photorealism. Since our goal is to edit the original image, we exclude baselines that show highly flawed preservation of the original image. Each participant performs pairwise comparisons between a blended image from our method and one from a baseline, selecting the more realistic-looking image. We collect 5,000 comparison results via Amazon Mechanical Turk (MTurk).
|
| 204 |
+
|
| 205 |
+
# 4.2. Comparison with baselines
|
| 206 |
+
|
| 207 |
+
Here we compare our method with baselines in two variations. In the $w/o$ align setting, we do not apply our 3D-aware alignment to baselines. In the $w/$ align setting, we align the reference image with our 3D-aware alignment. This experiment demonstrates the effectiveness of our proposed method. 1) Our alignment method consistently improves all baselines in all evaluation metrics: KID, LPIPS $_m$ , and masked $L_2$ . 2) Our 3D-aware blending method outperforms all baselines, including those that use our alignment method. We also report the combination of our method and Poisson blending to achieve better background preservation, as the perfect inversion is still hard to be achieved in GAN-based methods.
|
| 208 |
+
|
| 209 |
+
Table 1 shows comparison results in CelebA-HQ. The left
|
| 210 |
+
|
| 211 |
+
side of the table includes all the baselines without our 3D-aware alignment. All metrics are worse than the right side of the table (w/ alignment). This result reveals that alignment between the original and reference image affects overall editing performance. Table 2 shows comparison results in AFHQv2-Cat. It shows the same tendency as Table 1.
|
| 212 |
+
|
| 213 |
+
Our method performs well regarding all metrics. Combined with Poisson blending, our method outperforms all baselines. Poisson blending and StyleMapGAN $(16 \times 16, 32 \times 32)$ show great faithfulness to the input images but suffer from artifacts. Latent Composition, StyleMapGAN $(8 \times 8)$ , and StyleGAN3 $\mathcal{W}$ produce realistic results but far from the input images. The identities of the original and reference images have changed, which is reflected by a worse LPIPS $_m$ and $mL_2$ . SDEdit fails to reflect the reference image and shows worse LPIPS $_m$ . StyleGAN3 $\mathcal{W}+$ often shows entirely collapsed images. Our method preserves the identity of the original image and reflects the reference image well while producing realistic outputs.
|
| 214 |
+
|
| 215 |
+
User study. We note that KID has a high correlation with background preservation. Unfortunately, it fails to capture the boundary artifacts and foreground image quality, espe
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
Table 2: Comparison with baselines in the AFHQv2-Cat test set. Formats of the figure and table are the same as Table 1.
|
| 219 |
+
|
| 220 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Ours</td><td colspan="2">Ours + Poisson Blending</td></tr><tr><td>w/o</td><td>w/ align</td><td>w/o</td><td>w/ align</td></tr><tr><td>Poisson [62]</td><td>79.9%</td><td>59.9%</td><td>80.9% (+1.0)</td><td>67.5% (+7.6)</td></tr><tr><td>StyleMap [42]</td><td>72.3%</td><td>62.0%</td><td>75.4% (+3.1)</td><td>66.3% (+4.3)</td></tr><tr><td>SDEdit [54]</td><td>61.0%</td><td>55.7%</td><td>61.1% (+0.1)</td><td>50.2% (-5.5)</td></tr></table>
|
| 221 |
+
|
| 222 |
+
Table 3: User study in CelebA-HQ regarding the photorealism of the blended image. The percentage denotes how often MTurk workers prefer our method to each baseline in pairwise comparison. Values larger than $50\%$ mean ours outperforms the baseline. Our method, both with and without Poisson blending, outperforms all baselines even if we improve the baselines using our 3D-aware alignment. Incorporating Poisson blending further enhances the realism score of our method as shown in green numbers.
|
| 223 |
+
|
| 224 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">Ours</td><td colspan="2">Ours + Poisson Blending</td></tr><tr><td>w/o</td><td>w/ align</td><td>w/o</td><td>w/ align</td></tr><tr><td>Poisson [62]</td><td>91.2%</td><td>76.2%</td><td>87.6% (-3.6)</td><td>82.8% (+6.6)</td></tr><tr><td>StyleMap [42]</td><td>91.0%</td><td>82.3%</td><td>91.6% (+0.6)</td><td>83.6% (+1.3)</td></tr></table>
|
| 225 |
+
|
| 226 |
+
Table 4: User study in AFHQv2-Cat regarding the photorealism of the blended image. All details are the same as in Table 3. Our approach surpasses all baselines, and the incorporation of Poisson blending further improves the realism score.
|
| 227 |
+
|
| 228 |
+

|
| 229 |
+
(a) Original
|
| 230 |
+
Figure 6: The effect of our 3D-aware alignment. Aligned reference images (d) have the same pose as the original images (a). With our alignment, the blending results (e) look more realistic and reflect the reference better than those without alignment (c).
|
| 231 |
+
|
| 232 |
+

|
| 233 |
+
(b) Reference
|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
(c) Blend
|
| 237 |
+
|
| 238 |
+

|
| 239 |
+
(d) Reference
|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
w/o align
|
| 245 |
+
|
| 246 |
+

|
| 247 |
+
w/ align
|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
|
| 253 |
+
w/ align
|
| 254 |
+
|
| 255 |
+
cially for small foregrounds. To further evaluate realism, we conduct a human perception study comparing our method against the baselines that achieve strong preservation ($\mathrm{m}L_{2}$) scores. As shown in Tables 3 and 4, MTurk participants prefer our method to the other baselines regarding the photorealism of the results. Our method, as well as its combination with Poisson blending, outperforms the baselines. SDEdit with our 3D-aware alignment shows a realism score comparable to ours, but it cannot reflect the reference well, as indicated by its worse $\mathrm{LPIPS}_{m}$ score in Table 1. Similar to Tables 1 and 2, MTurk participants prefer the baselines with alignment to their unaligned counterparts.
|
| 256 |
+
|
| 257 |
+

|
| 258 |
+
Figure 7: Ablation study of Poisson blending (PB) in baselines. Despite combining Poisson blending with the baselines, StyleMapGAN still generates artifacts, and other baselines fail to preserve the identity of the reference. Our method with Poisson blending keeps the original image intact while accurately reflecting the reference image.
|
| 259 |
+
|
| 260 |
+

|
| 261 |
+
Figure 8: The effect of our density-blending loss. Without the loss, 3D information is not considered, resulting in inaccurate blending in 3D space. In the bottom left figure, the hair mesh is not properly reflected without the density-blending loss, resulting in inaccurate blending and missing fine details.
|
| 262 |
+
|
| 263 |
+
# 4.3. Ablation study
|
| 264 |
+
|
| 265 |
+
3D-aware alignment is an essential part of image blending. As discussed in Section 4.2, our alignment provides consistent improvements to all baseline methods. Moreover, it plays a crucial role in our own blending approach. Figure 6 shows the importance of 3D-aware alignment: the lack of alignment in the reference images results in degraded blending outputs (Figure 6c). Specifically, the woman's hair appears blurry, and the size of the cat's eyes looks different. Aligned reference images produce realistic blending results (Figure 6e) in our 3D-aware blending method.
|
| 266 |
+
|
| 267 |
+
Density-blending loss gives rich 3D signals in the blending procedure. Section 3.2 explains how we exploit volume density fields in blending. Delicate geometric structures, such as hair, cannot be easily blended without awareness of 3D information. Figure 8 shows an ablation study of our density-blending loss. In the bottom left, the hair looks
|
| 268 |
+
|
| 269 |
+

|
| 270 |
+
Figure 9: Color-geometry disentanglement with our model. We can adjust the reflection of the reference image's color by adjusting the weight $\lambda_{2}$ in the image-blending loss. Without image blending loss on reference, we can focus on object shapes, as shown in the rightmost column.
|
| 271 |
+
|
| 272 |
+
blurry in the blended image, and the mesh of the result shows shorter hair than that in the reference image. In the bottom right, the well-blended image and corresponding mesh show that our density-blending loss contributes to capturing the highly structured object in blending.
|
| 273 |
+
|
| 274 |
+
Combination with Poisson blending. In Tables 1 and 2, we report the combination of our method and Poisson blending. Poisson blending further enhances the performance of our method on all automatic metrics: KID, $\mathrm{m}L_{2}$, and $\mathrm{LPIPS}_{m}$. In the human perception study, ours with Poisson blending also improves the realism score, as shown by the green numbers in Tables 3 and 4. However, combining Poisson blending with each baseline does not bring meaningful benefits, as shown in Figure 7: the baselines still show artifacts or fail to reflect the identity of the reference.
|
| 275 |
+
|
| 276 |
+

|
| 277 |
+
Figure 10: Multi-view blending results on various datasets: CelebA-HQ, AFHQv2-Cat, and ShapeNet-Car. Since we optimize the latent code of the generative NeRF, we can synthesize images of the blended object in different poses through the generative NeRF.
|
| 278 |
+
Figure 11: Failure cases of inversion. If an input image has a large variance in scale relative to the mean face or the estimated pose from the encoder is not valid, inversion sometimes fails. The first row shows a failure to reconstruct eyeglasses, and the second row shows a crushed face of a cat in the reconstructed image and mesh.
|
| 279 |
+
|
| 280 |
+
# 4.4. Additional advantages of NeRF-based blending
|
| 281 |
+
|
| 282 |
+
In addition to improving blending quality, our 3D-aware method enables additional capabilities: color-geometry disentanglement and multi-view consistent blending. As shown in Figure 9, we can control the influence of color in blending: the results with $\mathcal{L}_{\mathrm{image}}$ have a redder color than the results without the loss. If we remove or assign a lower weight to the image-blending loss on the reference ($\lambda_{2}$ in Eqn. 4), we reflect the geometry of the reference object more than its color; conversely, a larger $\lambda_{2}$ reflects colors better. Note that we always use the image-blending loss on the original image to preserve it better.
|
| 283 |
+
|
| 284 |
+
A key component of generative NeRFs is multi-view consistent generation. After applying the blending procedure described in Section 3.2, we have an optimized latent code $\mathbf{w}_{\mathrm{edit}}$ . Generative NeRF can synthesize a novel view blended image using $\mathbf{w}_{\mathrm{edit}}$ and a target camera pose. Figure 10 shows the multi-view consistent blending results in CelebA-HQ, AFHQv2-Cat, and ShapeNet-Car [11]. In Section I in the supplement and the attached website, we provide more multi-view blending results and videos for EG3D and StyleSDF [58].
|
| 285 |
+
|
| 286 |
+

|
| 287 |
+
|
| 288 |
+
# 5. Discussion and Limitations
|
| 289 |
+
|
| 290 |
+
Our method exploits the capability of NeRFs to align and blend images in a 3D-aware manner only with a collection of 2D images. Our 3D-aware alignment boosts the quality of existing 2D baselines. 3D-aware blending exceeds improved 2D baselines with our alignment method and shows additional advantages such as color-geometry disentanglement and multi-view consistent blending. We hope our approach paves the road to 3D-aware blending. Recently, 3DGP [69] presents a 3D-aware GAN, handling non-alignable scenes captured from arbitrary camera poses in real-world environments. Since our approach relies solely on a pre-trained generator, it can be readily extended to blend unaligned multi-category datasets such as ImageNet [24].
|
| 291 |
+
|
| 292 |
+
Despite improvements over existing blending baselines, our method depends on GAN inversion, which is a bottleneck for both quality and speed. Figure 11 shows that the inversion process can sometimes fail to accurately reconstruct the input image. We cannot obtain an acceptable inversion result if an input image is far from the average face generated from the mean latent code $\mathbf{w}_{\mathrm{avg}}$ . We also note that the camera pose inferred by our encoder must not be overly inaccurate. Currently, we mitigate this problem by combining our method with Poisson blending, but more effective solutions may become available with recent advances in 3D GAN inversion techniques [80, 46]. In the future, to enable real-time editing, we could explore training an encoder [27, 42] to blend images using our proposed loss functions.
|
| 293 |
+
|
| 294 |
+
Acknowledgments. We would like to thank Seohui Jeong, Che-Sang Park, Eric R. Chan, Junho Kim, Jung-Woo Ha, Youngjung Uh, and other Naver AI Lab researchers for their helpful comments and sharing of materials. All experiments were conducted on Naver Smart Machine Learning (NSML) platform [43, 71].
|
| 295 |
+
|
| 296 |
+
# References
|
| 297 |
+
|
| 298 |
+
[1] Aseem Agarwala, Mira Dontcheva, Maneesh Agrawala, Steven Drucker, Alex Colburn, Brian Curless, David Salesin, and Michael Cohen. Interactive digital photomontage. In ACM SIGGRAPH. 2004. 2
|
| 299 |
+
[2] David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. Semantic photo manipulation with a generative image prior. arXiv preprint arXiv:2005.07727, 2020. 2
|
| 300 |
+
[3] Paul J Besl and Neil D McKay. Method for registration of 3-d shapes. In Sensor fusion IV: control paradigms and data structures, 1992. 3
|
| 301 |
+
[4] Mikołaj Binkowski, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. In International Conference on Learning Representations (ICLR), 2018. 5
|
| 302 |
+
[5] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 1999. 2
|
| 303 |
+
[6] Matthew Brown, David G Lowe, et al. Recognising panoramas. In IEEE International Conference on Computer Vision (ICCV), 2003. 2
|
| 304 |
+
[7] Peter J Burt and Edward H Adelson. The laplacian pyramid as a compact image code. In Readings in computer vision. 1987. 1, 2
|
| 305 |
+
[8] Lucy Chai, Jonas Wulff, and Phillip Isola. Using latent space regression to analyze and leverage compositionality in gans. In International Conference on Learning Representations (ICLR), 2021. 2, 5, 6
|
| 306 |
+
[9] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1, 2, 3
|
| 307 |
+
[10] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 1, 2
|
| 308 |
+
[11] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 5, 9
|
| 309 |
+
[12] Bor-Chun Chen and Andrew Kae. Toward realistic image compositing with adversarial learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1
|
| 310 |
+
[13] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. 5
|
| 311 |
+
[14] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In International Conference on Machine Learning (ICML), 2020. 2
|
| 312 |
+
|
| 313 |
+
[15] Tao Chen, Ming-Ming Cheng, Ping Tan, Ariel Shamir, and Shi-Min Hu. Sketch2photo: Internet image montage. ACM Transactions on graphics (TOG), 2009. 2
|
| 314 |
+
[16] Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or. 3-sweep: Extracting editable objects from a single photo. ACM Transactions on graphics (TOG), 2013. 3
|
| 315 |
+
[17] Yang Chen and Gérard Medioni. Object modelling by registration of multiple range images. Image and vision computing, 1992. 3
|
| 316 |
+
[18] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 5
|
| 317 |
+
[19] Chaeyeon Chung, Taewoo Kim, Hyelin Nam, Seunghwan Choi, Gyojung Gu, Sunghyun Park, and Jaegul Choo. Hairfit: pose-invariant hairstyle transfer via flow-based hair alignment and semantic-region-aware inpainting. In The British Machine Vision Conference (BMVC), 2021. 2
|
| 318 |
+
[20] Edo Collins, Raja Bala, Bob Price, and Sabine Susstrunk. Editing in style: Uncovering the local semantics of gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 319 |
+
[21] Edo Collins, Raja Bala, Bob Price, and Sabine Susstrunk. Editing in style: Uncovering the local semantics of gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 320 |
+
[22] Kevin Dale, Kalyan Sunkavalli, Micah K Johnson, Daniel Vlasic, Wojciech Matusik, and Hanspeter Pfister. Video face replacement. In ACM SIGGRAPH Asia, 2011. 2
|
| 321 |
+
[23] Giannis Daras, Wen-Sheng Chu, Abhishek Kumar, Dmitry Lagun, and Alexandros G Dimakis. Solving inverse problems with nerfgans. arXiv preprint arXiv:2112.09061, 2021. 2
|
| 322 |
+
[24] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 9
|
| 323 |
+
[25] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
|
| 324 |
+
[26] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In Conference on Neural Information Processing Systems (NeurIPS), 2021. 2
|
| 325 |
+
[27] Tan M Dinh, Anh Tuan Tran, Rang Nguyen, and Binh-Son Hua. Hyperinverter: Improving stylegan inversion via hypernetwork. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 9
|
| 326 |
+
[28] Zeev Farbman, Gil Hoffer, Yaron Lipman, Daniel Cohen-Or, and Dani Lischinski. Coordinates for instant image cloning. ACM Transactions on graphics (TOG), 2009. 2
|
| 327 |
+
[29] Qianli Feng, Viraj Shah, Raghudeep Gadde, Pietro Perona, and Aleix Martinez. Near perfect gan inversion. arXiv preprint arXiv:2202.11833, 2022. 2
|
| 328 |
+
[30] Claudio Ferrari, Stefano Berretti, Pietro Pala, and Alberto Del Bimbo. A sparse and locally coherent morphable face model for dense semantic correspondence across heterogeneous 3d faces. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021. 2
|
| 329 |
+
|
| 330 |
+
[31] Anna Frühstück, Ibraeem Alhashim, and Peter Wonka. Tilegan: synthesis of large-scale non-homogeneous textures. ACM Transactions on graphics (TOG), 2019. 2
|
| 331 |
+
[32] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In Conference on Neural Information Processing Systems (NeurIPS), 2014. 2
|
| 332 |
+
[33] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. In International Conference on Learning Representations (ICLR), 2022. 1, 2
|
| 333 |
+
[34] James Hays and Alexei A Efros. Scene completion using millions of photographs. ACM Transactions on graphics (TOG), 2007. 1, 2
|
| 334 |
+
[35] Minyoung Huh, Richard Zhang, Jun-Yan Zhu, Sylvain Paris, and Aaron Hertzmann. Transforming and projecting images into class-conditional generative networks. In European Conference on Computer Vision (ECCV), 2020. 5
|
| 335 |
+
[36] Jiaya Jia, Jian Sun, Chi-Keung Tang, and Heung-Yeung Shum. Drag-and-drop pasting. In ACM Transactions on graphics (TOG), 2006. 2
|
| 336 |
+
[37] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), 2018. 5
|
| 337 |
+
[38] Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In Conference on Neural Information Processing Systems (NeurIPS), 2021. 5, 6, 7
|
| 338 |
+
[39] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 5
|
| 339 |
+
[40] Kevin Karsch, Varsha Hedau, David Forsyth, and Derek Hoiem. Rendering synthetic objects into legacy photographs. ACM Transactions on graphics (TOG), 2011. 3
|
| 340 |
+
[41] Natasha Kholgade, Tomas Simon, Alexei Efros, and Yaser Sheikh. 3d object manipulation in a single photograph using stock 3d models. ACM Transactions on graphics (TOG), 2014. 3
|
| 341 |
+
[42] Hyunsu Kim, Yunjey Choi, Junho Kim, Sungjoo Yoo, and Youngjung Uh. Exploiting spatial dimensions of latent in gan for real-time image editing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 1, 2, 5, 6, 7, 9
|
| 342 |
+
[43] Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. Nsml: Meet the mlaas platform with a real-world case study. arXiv preprint arXiv:1810.09957, 2018. 9
|
| 343 |
+
[44] Taewoo Kim, Chaeyeon Chung, Yoonseo Kim, Sunghyun Park, Kangyeol Kim, and Jaegul Choo. Style your hair: Latent optimization for pose-invariant hairstyle transfer via local-style-aware hair alignment. In European Conference on Computer Vision (ECCV), 2022. 2
|
| 344 |
+
|
| 345 |
+
[45] Taewoo Kim, Chaeyeon Chung, Sunghyun Park, Gyojung Gu, Keonmin Nam, Wonzo Choe, Jaesung Lee, and Jaegul Choo. K-hairstyle: A large-scale korean hairstyle dataset for virtual hair editing and hairstyle classification. In IEEE International Conference on Image Processing (ICIP), 2021. 2
|
| 346 |
+
[46] Jaehoon Ko, Kyusun Cho, Daewon Choi, Kwangrok Ryoo, and Seungryong Kim. 3d gan inversion with pose optimization. WACV, 2023. 9
|
| 347 |
+
[47] Sosuke Kobayashi, Eiichi Matsumoto, and Vincent Sitzmann. Decomposing nerf for editing via feature field distillation. arXiv preprint arXiv:2205.15585, 2022. 3
|
| 348 |
+
[48] Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han, and Hanseok Ko. Injecting 3d perception of controllable nerf-gan into stylegan for editable portrait image synthesis. In European Conference on Computer Vision (ECCV), 2022. 3
|
| 349 |
+
[49] Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: Image and video synthesis using graph cuts. ACM Transactions on graphics (TOG), 2003. 1, 2
|
| 350 |
+
[50] Jean-François Lalonde, Derek Hoiem, Alexei A Efros, Carsten Rother, John Winn, and Antonio Criminisi. Photo clip art. ACM Transactions on graphics (TOG), 2007. 2
|
| 351 |
+
[51] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2007. 2
|
| 352 |
+
[52] Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, and Simon Lucey. St-gan: Spatial transformer generative adversarial networks for image compositing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1
|
| 353 |
+
[53] Steven Liu, Xiuming Zhang, Zhoutong Zhang, Richard Zhang, Jun-Yan Zhu, and Bryan Russell. Editing conditional radiance fields. In IEEE International Conference on Computer Vision (ICCV), 2021. 3
|
| 354 |
+
[54] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations (ICLR), 2021. 1, 2, 5, 6, 7
|
| 355 |
+
[55] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. Voxceleb: a large-scale speaker identification dataset. In INTERSPEECH, 2017. 2
|
| 356 |
+
[56] Thanh Thi Nguyen, Quoc Viet Hung Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, Thien Huynh-The, Saeid Nahavandi, Thanh Tam Nguyen, Quoc-Viet Pham, and Cuong M Nguyen. Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding, 2022. 2
|
| 357 |
+
[57] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
|
| 358 |
+
[58] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3, 9
|
| 359 |
+
|
| 360 |
+
[59] Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, and Bo Dai. A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis. Conference on Neural Information Processing Systems (NeurIPS), 2021. 1
|
| 361 |
+
[60] Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, and Krishna Kumar Singh. Spatially-adaptive multi-layer selection for gan inversion and editing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
|
| 362 |
+
[61] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 5
|
| 363 |
+
[62] Patrick Pérez, Michel Gangnet, and Andrew Blake. Poisson image editing. In ACM SIGGRAPH, 2003. 1, 2, 4, 5, 6, 7
|
| 364 |
+
[63] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021. 3
|
| 365 |
+
[64] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Transactions on graphics (TOG), 2022. 2, 3
|
| 366 |
+
[65] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "Grabcut": interactive foreground extraction using iterated graph cuts. ACM Transactions on graphics (TOG), 2004. 2
|
| 367 |
+
[66] Othman Sbai, Camille Couprie, and Mathieu Aubry. Surprising image compositions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, 2021. 1
|
| 368 |
+
[67] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Conference on Neural Information Processing Systems (NeurIPS), 2020. 1, 2
|
| 369 |
+
[68] Yichun Shi, Xiao Yang, Yangyue Wan, and Xiaohui Shen. Semanticstylegan: Learning compositional generative priors for controllable image synthesis and editing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
|
| 370 |
+
[69] Ivan Skorokhodov, Aliaksandr Siarohin, Yinghao Xu, Jian Ren, Hsin-Ying Lee, Peter Wonka, and Sergey Tulyakov. 3d generation on imagenet. In ICLR, 2023. 9
|
| 371 |
+
[70] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations (ICLR), 2021. 2
|
| 372 |
+
[71] Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Donghyun Kwak, Jung-Woo Ha, et al. Nsml: A machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902, 2017. 9
|
| 373 |
+
[72] Kalyan Sunkavalli, Micah K Johnson, Wojciech Matusik, and Hanspeter Pfister. Multi-scale image harmonization. ACM Transactions on Graphics (TOG), 29(4):1-10, 2010. 2
|
| 374 |
+
[73] Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, and Huachun Zhu. Spatially controllable image synthesis with internal representation collaging. arXiv preprint arXiv:1811.10153, 2018. 2
|
| 377 |
+
[74] Richard Szeliski, Matthew Uyttendaele, and Drew Steedly. Fast Poisson blending using multi-splines. In IEEE International Conference on Computational Photography (ICCP), 2011. 2
|
| 378 |
+
[75] Michael W Tao, Micah K Johnson, and Sylvain Paris. Error-tolerant image compositing. In European Conference on Computer Vision (ECCV), 2010. 2
|
| 379 |
+
[76] Matthew Uyttendaele, Ashley Eden, and Richard Szeliski. Eliminating ghosting and exposure artifacts in image mosaics. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2001. 1
|
| 380 |
+
[77] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. Conference on Neural Information Processing Systems (NeurIPS), 2016. 2
|
| 381 |
+
[78] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 3
|
| 382 |
+
[79] Huikai Wu, Shuai Zheng, Junge Zhang, and Kaiqi Huang. Gp-gan: Towards realistic high-resolution image blending. In ACM international conference on multimedia (ACM-MM), 2019. 1, 2
|
| 383 |
+
[80] Jiaxin Xie, Hao Ouyang, Jingtan Piao, Chenyang Lei, and Qifeng Chen. High-fidelity 3d gan inversion by pseudo-multiview optimization. In CVPR, 2023. 9
|
| 384 |
+
[81] Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, and Shengfeng He. High-resolution face swapping via latent semantics disentanglement. In CVPR, 2022. 2
|
| 385 |
+
[82] Su Xue, Aseem Agarwala, Julie Dorsey, and Holly Rushmeier. Understanding and improving the realism of image composites. ACM Transactions on graphics (TOG), 31(4):1-10, 2012. 2
|
| 386 |
+
[83] Fei Yang, Jue Wang, Eli Shechtman, Lubomir Bourdev, and Dimitri Metaxas. Expression flow for 3d-aware face component transfer. In ACM SIGGRAPH, 2011. 2
|
| 387 |
+
[84] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In European Conference on Computer Vision (ECCV), 2018. 5
|
| 388 |
+
[85] Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. Nerf-editing: geometry editing of neural radiance fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 3
|
| 389 |
+
[86] Fangneng Zhan, Hongyuan Zhu, and Shijian Lu. Spatial fusion gan for image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1
|
| 390 |
+
[87] He Zhang, Jianming Zhang, Federico Perazzi, Zhe Lin, and Vishal M Patel. Deep image compositing. In Winter Conference on Applications of Computer Vision (WACV), 2021. 2
|
| 391 |
+
[88] Lingzhi Zhang, Tarmily Wen, and Jianbo Shi. Deep image blending. In Winter Conference on Applications of Computer Vision (WACV), 2020. 2
|
| 392 |
+
|
| 393 |
+
[89] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 3, 4, 5
|
| 394 |
+
[90] Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, and Sanja Fidler. Datasetgan: Efficient labeled data factory with minimal human effort. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 5
|
| 395 |
+
[91] Peng Zhou, Lingxi Xie, Bingbing Ni, and Qi Tian. Cips-3d: A 3d-aware generator of gans based on conditionally-independent pixel synthesis. arXiv preprint arXiv:2110.09788, 2021. 1, 2
|
| 396 |
+
[92] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. Indomain gan inversion for real image editing. In European Conference on Computer Vision (ECCV), 2020. 2
|
| 397 |
+
[93] Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A Efros. Learning a discriminative model for the perception of realism in composite images. In IEEE International Conference on Computer Vision (ICCV), 2015. 1
|
| 398 |
+
[94] Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), 2016. 2
|
| 399 |
+
[95] Peihao Zhu, Rameen Abdal, John Femiani, and Peter Wonka. Barbershop: Gan-based image compositing using segmentation masks. In ACM Transactions on graphics (TOG), 2021. 1, 2
|
| 400 |
+
[96] Peihao Zhu, Rameen Abdal, John Femiani, and Peter Wonka. Hairnet: Hairstyle transfer with pose changes. In European Conference on Computer Vision (ECCV), 2022. 1, 2
|
| 401 |
+
[97] Peihao Zhu, Rameen Abdal, Yipeng Qin, John Femiani, and Peter Wonka. Improved stylegan embedding: Where are the good latents? arXiv preprint arXiv:2012.09036, 2020. 2
|
3dawareblendingwithgenerativenerfs/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3bc66eb0764fef3ae4ab5b1243fe87054227fd305847e5bcf58e733e6f5516cc
|
| 3 |
+
size 971885
|
3dawareblendingwithgenerativenerfs/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4619ca59be3dfb770a9e4ee5121467d516d070141f73f9274450be923c26c651
|
| 3 |
+
size 515381
|
3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a9ebb70b4070bb4fe6ba3f07c535ea73458cb528c177b7a6c2e2fe7434979132
|
| 3 |
+
size 96648
|
3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bbc939616ef0cbc506586093020075b5a5f9ca365dced83d349ea48600db5980
|
| 3 |
+
size 112287
|
3dawaregenerativemodelforimprovedsideviewimagesynthesis/9c2d56de-d217-40ca-abcf-42949762d793_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e0da50bef27ca3b1361c71c57faa6fd27c4b266d31104f9b279681c3b9e4575e
|
| 3 |
+
size 4450777
|
3dawaregenerativemodelforimprovedsideviewimagesynthesis/full.md
ADDED
|
@@ -0,0 +1,462 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# 3D-Aware Generative Model for Improved Side-View Image Synthesis
|
| 2 |
+
|
| 3 |
+
Kyungmin Jo $^{1,*}$
|
| 4 |
+
|
| 5 |
+
Wonjoon Jin $^{2,*}$
|
| 6 |
+
|
| 7 |
+
Jaegul Choo<sup>1</sup>
|
| 8 |
+
|
| 9 |
+
Hyunjoon Lee<sup>3</sup>
|
| 10 |
+
|
| 11 |
+
Sunghyun Cho²
|
| 12 |
+
|
| 13 |
+
1KAIST
|
| 14 |
+
|
| 15 |
+
Daejeon, Korea
|
| 16 |
+
|
| 17 |
+
{bttkm,jchoo}@kaist.ac.kr
|
| 18 |
+
|
| 19 |
+
$^{2}$ POSTECH
|
| 20 |
+
|
| 21 |
+
Pohang, Gyeongbuk, Korea
|
| 22 |
+
|
| 23 |
+
{jinwj1996,s.cho}@postech.ac.kr
|
| 24 |
+
|
| 25 |
+
3Kakao Brain
|
| 26 |
+
|
| 27 |
+
Seongnam-si, Gyeonggi-do, Korea
|
| 28 |
+
|
| 29 |
+
malfo.lee@kakaobrain.com
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
|
| 35 |
+

|
| 36 |
+
Frontal view
|
| 37 |
+
|
| 38 |
+

|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
Figure 1: Our method robustly produces high-quality images of human faces regardless of the camera pose, while the baselines ( $\pi$ -GAN [2] and EG3D [1]) generate blurry images at steep poses. The images are rendered with horizontal rotation from the frontal view to the side view.
|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
Side view
|
| 51 |
+
|
| 52 |
+

|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
Frontal view
|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+

|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
|
| 65 |
+

|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
|
| 69 |
+

|
| 70 |
+
Side view
|
| 71 |
+
|
| 72 |
+
# Abstract
|
| 73 |
+
|
| 74 |
+
While recent 3D-aware generative models have shown photo-realistic image synthesis with multi-view consistency, the synthesized image quality degrades depending on the camera pose (e.g., a face with a blurry and noisy boundary at a side viewpoint). Such degradation is mainly caused by the difficulty of learning both pose consistency and photorealism simultaneously from a dataset with heavily imbalanced poses. In this paper, we propose SideGAN, a novel 3D GAN training method to generate photo-realistic im
|
| 75 |
+
|
| 76 |
+
ages irrespective of the camera pose, especially for faces of side-view angles. To ease the challenging problem of learning photo-realistic and pose-consistent image synthesis, we split the problem into two subproblems, each of which can be solved more easily. Specifically, we formulate the problem as a combination of two simple discrimination problems, one of which learns to discriminate whether a synthesized image looks real or not, and the other learns to discriminate whether a synthesized image agrees with the camera pose. Based on this, we propose a dual-branched discriminator with two discrimination branches. We also propose a pose-matching loss to learn the pose consistency of 3D GANs. In addition, we present a pose sampling strat
|
| 77 |
+
|
| 78 |
+
egy to increase learning opportunities for steep angles in a pose-imbalanced dataset. With extensive validation, we demonstrate that our approach enables 3D GANs to generate high-quality geometries and photo-realistic images irrespective of the camera pose.
|
| 79 |
+
|
| 80 |
+
# 1. Introduction
|
| 81 |
+
|
| 82 |
+
Generative Adversarial Networks (GANs) [9] have shown remarkable success in photo-realistic image generation [13, 14] by learning the distributions of high-resolution image datasets. Recent studies have taken this success one step further by extending GANs to pose-controllable image generation based on the guidance of a 3DMM prior [25, 5] or a differentiable renderer [28]. However, they produce inconsistent results across different poses and also suffer from limited pose controllability as they learn to generate 2D images for different poses independently without considering the 3D face structure.
|
| 83 |
+
|
| 84 |
+
Therefore, 3D-aware GANs have emerged to achieve multi-view consistent image generation. Recent studies [19, 2, 10, 27, 1, 23, 17] have tackled this problem by modeling the 3D structure of a face using neural radiance fields [16], enabling explicit view control. Combining volumetric feature projection with convolutional neural networks (CNNs) enables 3D GANs to generate photo-realistic face images in high resolution [10, 18, 1]. Despite their ability to synthesize photo-realistic images with explicit view control, their output quality is not stable across camera poses (Fig. 1). To be specific, side-view facial images generated by such methods show degraded quality compared to the photo-realistic images at frontal viewpoints (e.g., a blurry and noisy facial boundary).
|
| 85 |
+
|
| 86 |
+
This unstable image quality is caused by the challenge for 3D-aware GANs to simultaneously learn to generate pose-consistent and photo-realistic images from a pose-imbalanced dataset (Fig. 2) such as the FFHQ dataset [13] where most images are frontal-view images. Specifically, EG3D [1], the state-of-the-art 3D GAN approach, formulates the problem as a learning problem of a pose-conditional distribution of real images. Unfortunately, learning the distribution of real images for each pose can be extremely challenging, especially for poses with only a small number of real images. GRAM [6] casts the problem as a combination of the learning of real/fake image discrimination and pose estimation. Nevertheless, pose estimation from degraded side-view images is not trivial to learn either. As a result, images generated by the existing 3D GANs are blurry or have noisy boundaries in the face region at steep pose angles (Fig. 1).
|
| 87 |
+
|
| 88 |
+
To tackle this problem, we propose SideGAN, a novel 3D GAN training method to generate photo-realistic images irrespective of the viewing angle. Our key idea is as follows. To ease the challenging problem of learning photo
|
| 89 |
+
|
| 90 |
+

|
| 91 |
+
Figure 2: Real-world face datasets generally have an imbalanced pose distribution, which is mainly concentrated on the frontal viewpoint.
|
| 92 |
+
|
| 93 |
+
realistic and multi-view consistent image synthesis, we split the problem into two subproblems, each of which can be solved more easily. Specifically, we formulate the problem as a combination of two simple discrimination problems, one of which learns to discriminate whether a synthesized image looks real or not, and the other learns to discriminate whether a synthesized image agrees with the camera pose. Unlike the formulations of the previous methods, which try to learn the real image distribution for each pose, or to learn pose estimation, our subproblems are much easier as each of them is analogous to a basic binary classification problem.
|
| 94 |
+
|
| 95 |
+
Based on this key idea, we propose a dual-branched discriminator, which has two branches for learning photorealism and pose consistency, respectively. As these branches are supervised explicitly for their respective purposes, high-quality images with pose consistency can be produced at each viewing angle, and consequently, the generator creates high-quality images and shapes. In addition, we propose a pose-matching loss to give supervision to the discriminator for the pose consistency, by considering a positive pose (i.e., rendering pose or ground truth pose) and a negative pose (i.e., irrelevant pose) for a given image. For example, the frontal viewpoint is one of the irrelevant poses for a side-view image. As reported in the experiments, this loss helps improve image and shape quality. Compared to the previous pose estimation strategy [6], our pose-matching loss provides a more effective way to learn pose-consistent image generation, as the pose-matching loss casts the learning of pose-consistent image generation as the learning of simple binary classification that is much easier than the learning of accurate pose regression.
|
| 96 |
+
|
| 97 |
+
Additionally, we suggest a simple but effective training strategy to alleviate the degradation caused by insufficient semantic knowledge at steep poses in a pose-imbalanced dataset. As shown in Fig. 2, most in-the-wild face datasets [13, 12, 3] usually have pose distributions concentrated on the frontal angle, causing the degradation of generated images at steep poses. While we may con
|
| 98 |
+
|
| 99 |
+
struct a pose-balanced dataset in a controlled environment, doing so requires significant effort and can hardly match the diversity of in-the-wild datasets. Instead, we present an additional uniform pose sampling (AUPS) strategy that draws camera poses from both a uniform distribution and the actual camera pose distribution to enhance learning opportunities for steep angles during training. Our experiments show that this simple pose sampling strategy substantially improves the generation quality for side-view images.
|
| 100 |
+
|
| 101 |
+
Our contributions are summarized as follows:
|
| 102 |
+
|
| 103 |
+
- We split the problem of learning of 3D GANs into two easier subproblems: real/fake image discrimination and pose-consistency discrimination.
|
| 104 |
+
- We propose a dual-branched discriminator and a pose-matching loss to effectively learn the pose consistency by considering both positive and negative poses of a given image.
|
| 105 |
+
- We also present a simple but effective pose sampling strategy to compensate for the insufficient amount of side-view images in pose-imbalanced in-the-wild datasets.
|
| 106 |
+
- With extensive evaluations, SideGAN shows the state-of-the-art image and shape quality irrespective of the camera pose, especially at steep view angles.
|
| 107 |
+
|
| 108 |
+
# 2. Related work
|
| 109 |
+
|
| 110 |
+
Extending 2D GANs to have pose controllability. GANs [9] have achieved significant success in photorealistic 2D image generation [13, 14]. Extending 2D GANs to provide pose controllability has been addressed by disentangling 3D information from GAN's latent space. Finding meaningful directions for editing pose in the latent space can be done with supervision from pre-trained classifiers [20] or in an unsupervised manner [21]. Editing the camera pose can be implemented by disentangling the pose factor from the latent space with guidance from a 3DMM prior [25, 5]. Zhang et al. [28] utilize inverse graphics with a differentiable renderer for pose-controllable image generation by fine-tuning StyleGAN to have disentangled pose attributes. Shi et al. [7] exploit a depth prior to disentangle the latent codes of geometry and appearance for RGBD generation with pose controllability. Unfortunately, these studies based on 2D GANs fundamentally lack multi-view consistency or accurate pose controllability since they do not consider the 3D structure of faces.
|
| 111 |
+
|
| 112 |
+
3D-aware GANs. Recent work incorporating neural 3D representations into GANs enables multi-view consistent image generation with explicit camera control. GRAF [19] and $\pi$ -GAN [2] adopt fully implicit volumetric fields with differentiable volumetric rendering for 3D scene generation. However, these methods suffer from a large memory
|
| 113 |
+
|
| 114 |
+
burden due to their fully implicit networks, restricting image resolution and expressiveness. To enable high-resolution image synthesis, GRAM [6] restricts point sampling to regions near the learned implicit surface. StyleNeRF [10], StyleSDF [18] and GIRAFFE [17] combine CNN-based upsamplers with volumetric feature projection for multi-view consistent image generation. EG3D [1], which is the most recent and the most related to our work, achieves photo-realistic image synthesis based on its tri-plane representation and StyleGAN feature generator. While previous 3D GAN studies have made significant progress in 3D-aware image synthesis, they share a limitation: image quality degrades as the viewpoint shifts from frontal to steeper angles. To the best of our knowledge, our work is the first to tackle the ineffectiveness of training 3D GANs on pose-imbalanced datasets for photo-realistic, multi-view consistent image generation irrespective of the camera pose.
|
| 115 |
+
|
| 116 |
+
# 3. SideGAN Framework
|
| 117 |
+
|
| 118 |
+
Our framework generates photo-realistic images irrespective of the camera pose even though most images in the training dataset are frontal-view images. As shown in Fig. 3, the main architecture is composed of two components. The first component is a generator $G_{\theta}$ for generating images from latent vectors $\mathbf{z}_{fg}$ and $\mathbf{z}_{bg}$ for the foreground and background regions, respectively, and a rendering camera parameter $\xi^{+}$ . The second component is a dual-branched discriminator $D_{\phi}$ for discriminating a generated image $\hat{\mathbf{I}}$ from a real image $\mathbf{I}$ and for discriminating whether a generated image agrees with a camera pose $\boldsymbol{\xi}$ . In the following, we describe each component. More details on our framework are provided in Sec. B.
|
| 119 |
+
|
| 120 |
+
# 3.1. Generator
|
| 121 |
+
|
| 122 |
+
Existing 3D GAN models [1, 2] mainly render the background and foreground together by a single network. This causes the 3D structures in the background region to mingle with the 3D structures in the foreground region and makes it difficult to create photo-realistic side-view images (Fig. 4). To address this issue, we design our generator $G_{\theta}$ to separately produce the foreground and background regions to avoid mingled foreground and background structures. Specifically, our generator $G_{\theta}$ is composed of two components: an image generator and a background network, inspired by EpiGRAF [23]. The image generator has two roles: it produces features for the foreground region (i.e., the facial region), and produces a final high-resolution image using both foreground and background features. Meanwhile, the background network produces features for the background region, which are used by the image generator.
|
| 123 |
+
|
| 124 |
+
For the image generator, we adopt the generator of a state-of-the-art 3D GAN model [1]. The image generator forms tri-plane features from the latent code $\mathbf{z}_{fg}$ and the
|
| 125 |
+
|
| 126 |
+

|
| 127 |
+
Figure 3: Illustration of main architecture. The generator takes latent codes $\mathbf{z}_{fg}$ and $\mathbf{z}_{bg}$ , camera parameters $\xi^{+}$ , and 3D position $\mathbf{x}$ as inputs and synthesizes an image $\hat{\mathbf{I}}$ . The dual-branched discriminator takes either a real image $\mathbf{I}$ or a generated image $\hat{\mathbf{I}}$ and camera parameters $\xi$ and outputs separately logits for image distribution and image-pose consistency.
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
|
| 131 |
+
camera parameter $\xi^{+} \in \mathbb{R}^{25}$ . The generator then samples 3D positions according to $\xi^{+}$ and obtains features for the sampled positions from the tri-plane. The foreground feature maps are then obtained through a decoder and volume rendering, and integrated with the background feature maps according to the transmittance of the foreground to produce a low-resolution feature map. Finally, a high-resolution image is obtained from the low-resolution features through a super-resolution module in the image generator.
|
| 132 |
+
|
| 133 |
+
For the background network, we adopt the background network of EpiGRAF [23]. The background network is a multi-layer perceptron (MLP) that takes a latent code $\mathbf{z}_{bg}$ and a 3D position $\mathbf{x}$ as inputs and outputs a feature vector. To generate background features, we first sample 3D positions according to the camera pose $\boldsymbol{\xi}^{+}$ , and feed them to the background network to obtain feature vectors for the sampled 3D positions. After aggregating all the background features, we feed them to the image generator.
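The foreground-background composition described above can be summarized by a single transmittance-weighted sum; the sketch below is a schematic of that step, with tensor shapes assumed for illustration rather than taken from the paper's code.

```python
import torch

def composite_features(fg_feat, fg_alpha, bg_feat):
    """Schematic foreground/background feature composition (Sec. 3.1).

    fg_feat:  volume-rendered foreground features, shape (B, C, H, W)
    fg_alpha: accumulated foreground opacity in [0, 1], shape (B, 1, H, W)
    bg_feat:  background features from the background MLP, shape (B, C, H, W)

    The background contributes only where foreground transmittance
    (1 - accumulated opacity) remains.
    """
    transmittance = 1.0 - fg_alpha
    return fg_feat + transmittance * bg_feat
```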
|
| 134 |
+
|
| 135 |
+
# 3.2. Dual-Branched Discriminator
|
| 136 |
+
|
| 137 |
+
As shown in Fig. 3, the dual-branched discriminator $D_{\phi}$ takes an image and a camera pose as inputs. The input pose can be either positive $(\pmb{\xi}^{+})$ or negative $(\pmb{\xi}^{-})$ , where a positive pose means that the pose agrees with the input image, while a negative pose means it does not. From the inputs, the discriminator predicts whether the input image is real or fake, and whether the input image agrees with the input camera pose using two output branches.
|
| 138 |
+
|
| 139 |
+
The discriminator $D_{\phi}$ comprises four components: a shared block $D_{\phi}^{\mathrm{s}}$ , a pose encoder $E_{\phi}$ , an image branch $D_{\phi}^{\mathrm{i}}$ and a pose branch $D_{\phi}^{\mathrm{p}}$ . The shared block extracts features from an input image, which will be used by the image and pose branches, while the pose encoder $E_{\phi}$ projects an input camera parameter $\xi$ to an embedding space. The image branch $D_{\phi}^{\mathrm{i}}$ predicts whether the input image is real or fake using the output of the shared block $D_{\phi}^{\mathrm{s}}$ . The pose branch $D_{\phi}^{\mathrm{p}}$ extracts pose features of the input image from the out-
|
| 140 |
+
|
| 141 |
+
put of shared block $D_{\phi}^{s}$ , which are then combined with the features from the pose encoder to discriminate whether the input image agrees with the input camera pose.
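The sketch below illustrates one plausible layout of the dual-branched discriminator in PyTorch. The layer sizes, the reduction of the element-wise product to a scalar logit, and the 25-dimensional camera input are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DualBranchedDiscriminator(nn.Module):
    """Schematic dual-branched discriminator (Sec. 3.2); layer sizes are
    illustrative, not the paper's exact architecture."""

    def __init__(self, feat_dim=512, pose_dim=25):
        super().__init__()
        self.shared = nn.Sequential(                       # D_s: shared image backbone
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        self.image_branch = nn.Linear(feat_dim, 1)          # D_i: real/fake logit
        self.pose_branch = nn.Linear(feat_dim, feat_dim)    # D_p: pose features
        self.pose_encoder = nn.Linear(pose_dim, feat_dim)   # E_phi: pose embedding

    def forward(self, img, cam):
        feat = self.shared(img)
        real_fake_logit = self.image_branch(feat)
        # Image-pose consistency logit: element-wise product of pose features
        # and the pose embedding, reduced to a scalar per sample (one plausible
        # reading of the element-wise product in Eq. (1)).
        pose_logit = (self.pose_branch(feat) * self.pose_encoder(cam)).sum(dim=1, keepdim=True)
        return real_fake_logit, pose_logit
```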
|
| 142 |
+
|
| 143 |
+
# 4. Training for a Wider Range of Angles
|
| 144 |
+
|
| 145 |
+
In this section, we describe our training strategy including the pose-matching loss and AUPS.
|
| 146 |
+
|
| 147 |
+
# 4.1. Pose-Matching Loss
|
| 148 |
+
|
| 149 |
+
To promote pose consistency between the input pose to the generator and its corresponding synthesized image, the pose-matching loss is computed between a pair of an image and a camera pose. The pose-matching loss considers both positive and negative pairs of an image and a camera pose to more strongly guide the generator to produce pose-consistent images. In the case of a positive pair whose image and camera pose are supposed to agree with each other, the pose-matching loss penalizes the generator if the image does not agree with the pose. On the other hand, in the case of a negative pair whose image and camera pose are supposed to not agree, the pose-matching loss penalizes the generator if the image agrees with the pose.
|
| 150 |
+
|
| 151 |
+
Formally, we define the pose-matching loss $\mathcal{L}_{\mathrm{pose}}^{\mathrm{gen}}$ for the generator as:
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
\begin{aligned} \mathcal{L}_{\mathrm{pose}}^{\mathrm{gen}}(\theta) &= \mathcal{L}_{\mathrm{pose}}^{\mathrm{gen},+}(\theta) + \mathcal{L}_{\mathrm{pose}}^{\mathrm{gen},-}(\theta) \\ &= \mathbb{E}_{\boldsymbol{\xi}^{+} \sim p_{\xi}}\left[ h\left(-\left(D_{\phi}^{\mathrm{sp}}(\hat{\mathbf{I}}) \otimes E_{\phi}(\boldsymbol{\xi}^{+})\right)\right) \right] \\ &\quad + \mathbb{E}_{\boldsymbol{\xi}^{-} \sim p_{\xi}}\left[ h\left(D_{\phi}^{\mathrm{sp}}(\hat{\mathbf{I}}) \otimes E_{\phi}(\boldsymbol{\xi}^{-})\right) \right], \end{aligned} \tag{1}
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
where $\otimes$ is an element-wise multiplication, $D_{\phi}^{\mathrm{sp}}(\cdot) = D_{\phi}^{\mathrm{p}}(D_{\phi}^{\mathrm{s}}(\cdot))$ , $\hat{\mathbf{I}} = G_{\theta}(\mathbf{z},\boldsymbol{\xi}^{+})$ , and $\mathbf{z} = (\mathbf{z}_{\mathrm{fg}},\mathbf{z}_{\mathrm{bg}})$ . $h$ is the softplus activation function and $p_{\xi}$ is the pose distribution, whose details will be given in Sec. 4.3. A negative pose $\boldsymbol{\xi}^{-}$ is randomly sampled so as not to be the same as the positive pose $\boldsymbol{\xi}^{+}$ . For a generated image $\hat{\mathbf{I}}$ , its positive pose $\boldsymbol{\xi}^{+}$ is the rendering pose used in the generator.
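Assuming a discriminator that returns a scalar image-pose consistency logit (as in the sketch above), Eq. (1) can be written in a few lines; the function below is an illustrative sketch, not the official implementation.

```python
import torch
import torch.nn.functional as F

def pose_matching_loss_gen(disc, fake_img, pos_cam, neg_cam):
    """Sketch of Eq. (1): the generator is rewarded when the fake image matches
    its rendering pose (positive) and penalized when it matches a randomly
    drawn negative pose. `disc` is assumed to return (real/fake logit,
    pose-consistency logit) per sample."""
    _, pos_logit = disc(fake_img, pos_cam)
    _, neg_logit = disc(fake_img, neg_cam)
    # softplus(-x) pushes the positive-pair logit up; softplus(x) pushes the
    # negative-pair logit down.
    return F.softplus(-pos_logit).mean() + F.softplus(neg_logit).mean()
```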
|
| 158 |
+
|
| 159 |
+
We also define a pose-matching loss to train the discriminator as:
|
| 160 |
+
|
| 161 |
+
$$
|
| 162 |
+
\mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}}(\phi) = \mathcal{L}_{\mathrm{pose}}^{\mathrm{dis},+}(\phi) + \mathcal{L}_{\mathrm{pose}}^{\mathrm{dis},-}(\phi), \tag{2}
|
| 163 |
+
$$
|
| 164 |
+
|
| 165 |
+
where the terms on the right-hand-side are computed using positive and negative pairs, respectively. Both $\mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}, + }(\phi)$ and $\mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}, - }(\phi)$ are defined using both real and synthesized images for positive and negative pairs. Specifically, $\mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}, + }$ is defined as:
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
\begin{aligned} \mathcal{L}_{\mathrm{pose}}^{\mathrm{dis},+}(\phi) &= \mathbb{E}_{(\mathbf{I},\boldsymbol{\xi}^{+}) \sim (p_{r},\, p_{\xi})}\left[ h\left(-\left(D_{\phi}^{\mathrm{sp}}(\mathbf{I}) \otimes E_{\phi}(\boldsymbol{\xi}^{+})\right)\right) \right] \\ &\quad + \mathbb{E}_{\boldsymbol{\xi}^{+} \sim p_{\xi}}\left[ h\left(-\left(D_{\phi}^{\mathrm{sp}}(\hat{\mathbf{I}}) \otimes E_{\phi}(\boldsymbol{\xi}^{+})\right)\right) \right], \end{aligned} \tag{3}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
where $p_r$ is the distribution of real images and $\mathbf{I}$ is a real image. The first and second terms on the right-hand side use real and synthesized pairs as positive pairs, respectively. For the first term, we sample a real image $\mathbf{I}$ and its corresponding ground-truth pose $\xi^{+}$ as a positive sample. The pose-matching loss $\mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}, - }(\phi)$ for a negative pose is defined as:
|
| 172 |
+
|
| 173 |
+
$$
|
| 174 |
+
\begin{aligned} \mathcal{L}_{\mathrm{pose}}^{\mathrm{dis},-}(\phi) &= \mathbb{E}_{\mathbf{I} \sim p_{r},\, \boldsymbol{\xi}^{-} \sim p_{\xi}}\left[ h\left(D_{\phi}^{\mathrm{sp}}(\mathbf{I}) \otimes E_{\phi}(\boldsymbol{\xi}^{-})\right) \right] \\ &\quad + \mathbb{E}_{\boldsymbol{\xi}^{-} \sim p_{\xi}}\left[ h\left(D_{\phi}^{\mathrm{sp}}(\hat{\mathbf{I}}) \otimes E_{\phi}(\boldsymbol{\xi}^{-})\right) \right]. \end{aligned} \tag{4}
|
| 175 |
+
$$
|
| 176 |
+
|
| 177 |
+
Note that both positive and negative pairs of the pose-matching loss for the discriminator are defined using both real and synthesized images. Thanks to this, the pose branch of the discriminator is trained to focus only on pose consistency regardless of whether an image looks real or fake, which in turn trains the generator to produce pose-consistent images.
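A corresponding sketch of the discriminator-side pose-matching loss (Eqs. 2-4) is shown below, again assuming a discriminator that returns a scalar consistency logit; the argument names are placeholders for illustration.

```python
import torch
import torch.nn.functional as F

def pose_matching_loss_dis(disc, real_img, real_cam, fake_img, render_cam, neg_cam):
    """Sketch of Eqs. (2)-(4): both real and fake images are paired with a
    positive pose (the ground-truth pose for real images, the rendering pose
    for fake images) and with a random negative pose."""
    loss = 0.0
    for img, pos_cam in ((real_img, real_cam), (fake_img.detach(), render_cam)):
        _, pos_logit = disc(img, pos_cam)   # should be judged pose-consistent
        _, neg_logit = disc(img, neg_cam)   # should be judged pose-inconsistent
        loss = loss + F.softplus(-pos_logit).mean() + F.softplus(neg_logit).mean()
    return loss
```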
|
| 178 |
+
|
| 179 |
+
# 4.2. Final Loss
|
| 180 |
+
|
| 181 |
+
In addition to the pose-matching loss, we adopt other loss terms in our final loss as described in the following.
|
| 182 |
+
|
| 183 |
+
Non-Saturating GAN Loss. In the dual-branched discriminator, the image branch $D_{\phi}^{\mathrm{i}}$ is optimized by a non-saturating GAN loss to learn the entire target image distribution. The non-saturating GAN loss for the generator is defined as
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
\mathcal{L}_{\mathrm{adv}}^{\mathrm{gen}}(\theta) = \mathbb{E}_{\mathbf{z} \sim p_{z},\, \boldsymbol{\xi}^{+} \sim p_{\xi}}\left[ h\left(-D_{\phi}^{\mathrm{si}}\left(G_{\theta}(\mathbf{z}, \boldsymbol{\xi}^{+})\right)\right) \right], \tag{5}
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
where $D_{\phi}^{\mathrm{si}}(\cdot) = D_{\phi}^{\mathrm{i}}(D_{\phi}^{\mathrm{s}}(\cdot))$ . The non-saturating GAN loss for the discriminator with $R1$ regularization [1] is defined as
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
\begin{aligned} \mathcal{L}_{\mathrm{adv}}^{\mathrm{dis}}(\phi) &= \mathbb{E}_{\mathbf{z} \sim p_{z},\, \boldsymbol{\xi}^{+} \sim p_{\xi}}\left[ h\left(D_{\phi}^{\mathrm{si}}\left(G_{\theta}(\mathbf{z}, \boldsymbol{\xi}^{+})\right)\right) \right] \\ &\quad + \mathbb{E}_{\mathbf{I} \sim p_{r}}\left[ h\left(-D_{\phi}^{\mathrm{si}}(\mathbf{I})\right) + \lambda_{R1}\left\| \nabla D_{\phi}^{\mathrm{si}}(\mathbf{I}) \right\|^{2} \right], \end{aligned} \tag{6}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
where $\lambda_{R1}$ is a balancing weight.
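For completeness, a generic sketch of the non-saturating discriminator loss with R1 regularization (Eq. 6) is given below; it follows the standard formulation and is not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def r1_penalty(disc, real_img, cam):
    """Standard R1 gradient penalty on real images for the image branch."""
    real_img = real_img.detach().requires_grad_(True)
    logit, _ = disc(real_img, cam)
    grad, = torch.autograd.grad(logit.sum(), real_img, create_graph=True)
    return grad.flatten(1).pow(2).sum(dim=1).mean()

def adv_loss_dis(disc, real_img, fake_img, cam, lam_r1=1.0):
    """Non-saturating discriminator loss (Eq. 6) with R1 regularization.
    `disc` returns (real/fake logit, pose logit) as in the earlier sketch."""
    fake_logit, _ = disc(fake_img.detach(), cam)
    real_logit, _ = disc(real_img, cam)
    loss = F.softplus(fake_logit).mean() + F.softplus(-real_logit).mean()
    return loss + lam_r1 * r1_penalty(disc, real_img, cam)
```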
|
| 196 |
+
|
| 197 |
+
Identity Regularization. To encourage the generator to create semantically various images, we train the generator with an additional identity regularization term $\mathcal{L}_{\mathrm{id}} =$
|
| 198 |
+
|
| 199 |
+
$\lambda_{\mathrm{z}}\mathcal{L}_{\mathrm{z}} + \lambda_{\mathrm{c}}\mathcal{L}_{\mathrm{c}},$ where $\mathcal{L}_{\mathrm{z}}$ is a loss term to promote images with diverse identities, and $\mathcal{L}_{\mathrm{c}}$ is a term to prevent the identity of a generated image from being affected by the camera parameter $\xi$ . $\lambda_{\mathrm{z}}$ and $\lambda_{\mathrm{c}}$ are balancing weights. $\mathcal{L}_{\mathrm{z}}$ is defined as
|
| 200 |
+
|
| 201 |
+
$$
|
| 202 |
+
\mathcal{L}_{\mathrm{z}}(\theta) = \mathbb{E}_{\mathbf{z}_{1}, \mathbf{z}_{2} \sim p_{z},\, \boldsymbol{\xi}^{+} \sim p_{\xi}}\left[ \left\langle E_{\mathrm{id}}(\hat{\mathbf{I}}_{1}), E_{\mathrm{id}}(\hat{\mathbf{I}}_{2}) \right\rangle \right], \tag{7}
|
| 203 |
+
$$
|
| 204 |
+
|
| 205 |
+
where $\hat{\mathbf{I}}_1 = G_\theta (\mathbf{z}_1,\pmb {\xi}^+)$ , $\hat{\mathbf{I}}_2 = G_\theta (\mathbf{z}_2,\pmb {\xi}^+)$ , and $E_{\mathrm{id}}$ is a face identity network [4]. $\langle \cdot ,\cdot \rangle$ calculates the cosine similarity. $\mathcal{L}_{\mathrm{c}}$ is defined as:
|
| 206 |
+
|
| 207 |
+
$$
|
| 208 |
+
\mathcal{L}_{\mathrm{c}}(\theta) = \mathbb{E}_{\mathbf{z} \sim p_{z},\, \boldsymbol{\xi}_{1}^{+}, \boldsymbol{\xi}_{2}^{+} \sim p_{\xi}}\left[ \frac{1 - \left\langle E_{\mathrm{id}}(\hat{\mathbf{I}}_{1}), E_{\mathrm{id}}(\hat{\mathbf{I}}_{2}) \right\rangle}{\left\| \hat{\mathbf{I}}_{1} - \hat{\mathbf{I}}_{2} \right\|_{1}} \right], \tag{8}
|
| 209 |
+
$$
|
| 210 |
+
|
| 211 |
+
where $\hat{\mathbf{I}}_1 = G_\theta (\mathbf{z},\pmb {\xi}_1^+)$ , and $\hat{\mathbf{I}}_2 = G_\theta (\mathbf{z},\pmb {\xi}_2^+)$ . The identity regularization $\mathcal{L}_{\mathrm{id}}$ helps the generator faithfully learn semantic information from the dataset, enabling image synthesis with high fidelity (Sec. 5.2).
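A sketch of the identity regularization is shown below. Here `id_net` stands for the pre-trained face identity network [4]; the per-pixel mean L1 distance (plus a small epsilon) is used in place of the exact L1 norm for numerical stability, which is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def identity_regularization(gen, id_net, z1, z2, cam1, cam2, lam_z=0.5, lam_c=0.25):
    """Sketch of L_id = lam_z * L_z + lam_c * L_c (Eqs. 7-8)."""
    # L_z: different latents, same pose -> identities should differ,
    # so their cosine similarity is minimized.
    img_a, img_b = gen(z1, cam1), gen(z2, cam1)
    l_z = F.cosine_similarity(id_net(img_a), id_net(img_b), dim=1).mean()

    # L_c: same latent, different poses -> identity should stay the same,
    # normalized by how much the two renderings differ in pixel space.
    img_c, img_d = gen(z1, cam1), gen(z1, cam2)
    sim = F.cosine_similarity(id_net(img_c), id_net(img_d), dim=1)
    pix_dist = (img_c - img_d).abs().flatten(1).mean(dim=1) + 1e-8  # mean L1, per sample
    l_c = ((1.0 - sim) / pix_dist).mean()

    return lam_z * l_z + lam_c * l_c
```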
|
| 212 |
+
|
| 213 |
+
Final Loss. The final losses for training the generator and the discriminator are then defined as:
|
| 214 |
+
|
| 215 |
+
$$
|
| 216 |
+
\begin{aligned} \mathcal{L}_{\mathrm{total}}^{\mathrm{gen}} &= \mathcal{L}_{\mathrm{adv}}^{\mathrm{gen}} + \lambda_{\mathrm{pose}} \mathcal{L}_{\mathrm{pose}}^{\mathrm{gen}} + \mathcal{L}_{\mathrm{id}} + \lambda_{\mathrm{d}} \mathcal{L}_{\mathrm{d}}, \quad \text{and} \\ \mathcal{L}_{\mathrm{total}}^{\mathrm{dis}} &= \mathcal{L}_{\mathrm{adv}}^{\mathrm{dis}} + \lambda_{\mathrm{pose}} \mathcal{L}_{\mathrm{pose}}^{\mathrm{dis}}, \end{aligned} \tag{9}
|
| 217 |
+
$$
|
| 218 |
+
|
| 219 |
+
where $\mathcal{L}_{\mathrm{d}}$ is an additional $L^1$ -based density regularization term [24]. $\lambda_{\mathrm{pose}}$ and $\lambda_{\mathrm{d}}$ are weights to balance the terms. More details on the losses can be found in Sec. 5.
|
| 220 |
+
|
| 221 |
+
# 4.3. Additional Uniform Pose Sampling
|
| 222 |
+
|
| 223 |
+
As previous methods mostly focus on learning frontal-view images during training because of the pose-imbalanced dataset, they lack opportunities to learn side-view images, resulting in degraded side-view image quality. To increase the opportunities to learn side-view images from pose-imbalanced datasets, our AUPS strategy samples camera poses for rendering fake images from the training dataset, as in EG3D [1], and additionally samples poses from a uniform distribution during training. Specifically, for computing the non-saturating GAN loss with the image branch of the discriminator, we use camera poses sampled from the training dataset and the uniform distribution together. For computing the pose-matching loss and the identity regularization with the pose branch of the dual-branched discriminator, on the other hand, we use camera poses sampled only from the training dataset, as in EG3D, since we found that the pose branch can already be trained effectively without AUPS.
|
| 224 |
+
|
| 225 |
+
While sampling camera poses solely from the uniform distribution may seem straightforward to increase the learning opportunities at steep angles, it can lead to a significant discrepancy between the real and fake image distributions,
|
| 226 |
+
|
| 227 |
+

|
| 228 |
+
FFHQ
|
| 229 |
+
|
| 230 |
+

|
| 231 |
+
Figure 4: Qualitative comparison among $\pi$ -GAN [2], EG3D [1] and ours. All the models are trained without transfer learning. Unlike the blurry images and noisy geometry produced by the baselines at steep poses, our method generates high-quality images and shapes on the target datasets. (Columns 1-4, 6-9: the results of a 30-degree rotation from the frontal to the side view. Columns 5, 10: the side views of the shape obtained using marching cubes.)
|
| 232 |
+
|
| 233 |
+
which may harm the training process. To mitigate this, we use both the pose distribution of the training dataset and the uniform distribution together, which decreases the distribution discrepancy while increasing learning opportunities for steep angles. More details on AUPS can be found in Sec. B.3.
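AUPS itself is only a change to how rendering poses are drawn; a sketch is given below. The mixing ratio and angle ranges are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def sample_pose_aups(dataset_poses, p_uniform=0.5,
                     yaw_range=(-np.pi / 2, np.pi / 2),
                     pitch_range=(-np.pi / 6, np.pi / 6)):
    """Sketch of additional uniform pose sampling (AUPS): draw the rendering
    pose either from the empirical dataset distribution or from a uniform
    distribution over (yaw, pitch)."""
    if np.random.rand() < p_uniform:
        yaw = np.random.uniform(*yaw_range)       # steep side views are sampled
        pitch = np.random.uniform(*pitch_range)   # as often as frontal ones
        return yaw, pitch
    idx = np.random.randint(len(dataset_poses))   # follow the (imbalanced) dataset poses
    return dataset_poses[idx]
```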
|
| 234 |
+
|
| 235 |
+
# 5. Experiments
|
| 236 |
+
|
| 237 |
+
Implementation Details. Most of the experimental setups and preprocessing methods are the same as those of EG3D [1] except for the following. We set the dimension of the background latent vector $\mathbf{z}_{bg}$ to 512. The final image resolution of our model is $256 \times 256$ and the neural rendering resolution is fixed at $64 \times 64$ . The neural rendering result is bilinearly upsampled to $128 \times 128$ and fed to the super-resolution module in the image generator. The batch size is set to 64 in all the experiments. The balancing weights for the loss terms are set as follows: $\lambda_{\mathrm{pose}} = 1$ , $\lambda_{\mathrm{z}} = 0.5$ , $\lambda_{\mathrm{c}} = 0.25$ , $\lambda_{\mathrm{d}} = 0.25$ and $\lambda_{R1} = 1$ .
|
| 238 |
+
|
| 239 |
+
Datasets. We validate our method on real-world human face datasets (CelebAHQ [12] and FFHQ [13]) and a real-world cat face dataset (AFHQ Cats [3]). To show results both with and without background regions, we remove the background regions of the CelebAHQ dataset using the ground-truth segmentation masks, but keep the background regions of the FFHQ dataset in our experiments. We obtain the ground-truth poses of real images using pre-trained camera pose estimation models [8, 15].
|
| 240 |
+
|
| 241 |
+
Transfer learning. As in previous 3D GANs, which use transfer learning to compensate for small dataset sizes [1, 10], we optionally adopt transfer learning to improve the quality of side-view image synthesis. To be specific, we pre-train a generator with a pose-balanced synthetic dataset and fine-tune it on a pose-imbalanced in-the-wild dataset to compensate for the insufficient knowledge of side-view images in in-the-wild datasets. Specifically, we use the FaceSynthetics dataset [26] for pre-training. We also remove the
|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
EEG3D
|
| 245 |
+
Frontal view
|
| 246 |
+
|
| 247 |
+

|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
Generated image1
|
| 251 |
+
|
| 252 |
+

|
| 253 |
+
|
| 254 |
+

|
| 255 |
+
|
| 256 |
+

|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
Side view
|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
|
| 263 |
+

|
| 264 |
+
3D Shape
|
| 265 |
+
|
| 266 |
+

|
| 267 |
+
|
| 268 |
+

|
| 269 |
+
Frontal view
|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
|
| 273 |
+

|
| 274 |
+
Generated image2
|
| 275 |
+
|
| 276 |
+

|
| 277 |
+
|
| 278 |
+

|
| 279 |
+
|
| 280 |
+

|
| 281 |
+
|
| 282 |
+

|
| 283 |
+
Side view
|
| 284 |
+
|
| 285 |
+

|
| 286 |
+
|
| 287 |
+

|
| 288 |
+
3D Shape
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
FFHQ
|
| 292 |
+
EEG3D
|
| 293 |
+
|
| 294 |
+

|
| 295 |
+
s
|
| 296 |
+
Frontal view
|
| 297 |
+
Generated image1
|
| 298 |
+
|
| 299 |
+

|
| 300 |
+
|
| 301 |
+

|
| 302 |
+
Side view
|
| 303 |
+
|
| 304 |
+

|
| 305 |
+
|
| 306 |
+

|
| 307 |
+
3D Shape
|
| 308 |
+
|
| 309 |
+

|
| 310 |
+
|
| 311 |
+

|
| 312 |
+
Frontal view
|
| 313 |
+
Generated image2
|
| 314 |
+
|
| 315 |
+

|
| 316 |
+
|
| 317 |
+

|
| 318 |
+
|
| 319 |
+

|
| 320 |
+
|
| 321 |
+

|
| 322 |
+
Side view
|
| 323 |
+
|
| 324 |
+

|
| 325 |
+
|
| 326 |
+

|
| 327 |
+
3D Shape
|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
AFHQ Cats
|
| 331 |
+
|
| 332 |
+

|
| 333 |
+
Frontal view
|
| 334 |
+
Generated image1
|
| 335 |
+
Figure 5: Qualitative comparison between EG3D [1] and ours. Both models are trained with transfer learning. Unlike unnatural images and geometry of the baseline at the steep pose, our method generates high-quality images and shapes on the target datasets. (Columns 1-4, 6-9 : the results of a 30-degree rotation from the frontal to the side view. Columns 5, 10 : the side views of the shape obtained using the marching cube.)
|
| 336 |
+
|
| 337 |
+

|
| 338 |
+
|
| 339 |
+

|
| 340 |
+
Side view
|
| 341 |
+
|
| 342 |
+

|
| 343 |
+
|
| 344 |
+

|
| 345 |
+
|
| 346 |
+

|
| 347 |
+
|
| 348 |
+

|
| 349 |
+
Frontal view
|
| 350 |
+
Generated image2
|
| 351 |
+
|
| 352 |
+

|
| 353 |
+
|
| 354 |
+

|
| 355 |
+
Side view
|
| 356 |
+
3D Shape
|
| 357 |
+
|
| 358 |
+

|
| 359 |
+
|
| 360 |
+

|
| 361 |
+
|
| 362 |
+
background regions of the FaceSynthetics dataset with the ground-truth segmentation masks to accurately learn 3D geometries. We use the training strategy of EG3D [1] to pretrain models.

# 5.1. Comparison

We first conduct qualitative and quantitative comparisons of SideGAN with previous 3D GANs ($\pi$-GAN [2] and EG3D [1]) on different datasets (CelebAHQ [12], FFHQ [13] and AFHQ Cats [3]), both with and without transfer learning.

Qualitative Comparison. Fig. 4 shows a qualitative comparison between our method and the previous 3D GANs. In this comparison, all the models are trained from scratch without transfer learning. The AFHQ Cats dataset [3] is not included in this comparison, as the dataset is too small to train a generator without transfer learning. For all the real-world human face datasets, $\pi$-GAN and EG3D generate blurry images at steep angles compared to their realistic frontal images. In contrast, SideGAN robustly generates high-quality images irrespective of camera pose.

<table><tr><td></td><td colspan="3">FID↓</td><td colspan="2">Depth error↓</td></tr><tr><td>Method \ Dataset</td><td>CelebAHQ</td><td>FFHQ</td><td>AFHQ(Cats)</td><td>CelebAHQ</td><td>FFHQ</td></tr><tr><td>π-GAN</td><td>80.372</td><td>120.991</td><td>-</td><td>2.438</td><td>1.365</td></tr><tr><td>EG3D</td><td>40.760</td><td>35.348</td><td>-</td><td>0.760</td><td>0.921</td></tr><tr><td>Ours</td><td>37.417</td><td>22.174</td><td>-</td><td>0.580</td><td>0.649</td></tr><tr><td>EG3D+transfer learning</td><td>28.912</td><td>26.627</td><td>15.639</td><td>0.606</td><td>0.864</td></tr><tr><td>Ours+transfer learning</td><td>22.219</td><td>24.571</td><td>10.134</td><td>0.549</td><td>0.657</td></tr></table>

Table 1: Quantitative comparison of the image and shape quality with baselines.



Figure 6: Visual comparison of side-view images on CelebAHQ [12] with and without transfer learning. Without transfer learning, our method outperforms the baseline (EG3D [1]), which shows noisy facial boundaries. With transfer learning, our method also outperforms the baseline, which generates holes.

Fig. 5 shows another qualitative comparison where we adopt transfer learning. For all the datasets, EG3D generates unnatural images at steep angles compared to realistic frontal images. On the other hand, SideGAN robustly generates high-quality images irrespective of camera pose. These results indicate that our method is effective in learning to synthesize high-quality images at all camera poses, both with and without transfer learning. Additional results are in Sec. D.

Fig. 6 shows zoomed-in patches of side-view images from SideGAN and EG3D [1] to compare the quality of the synthesized details. As the figure shows, SideGAN produces more realistic details in side-view images with far fewer artifacts than EG3D, regardless of transfer learning. The figure also shows that transfer learning helps both models generate clearer images, as it provides additional information on side views of human faces. Nevertheless, the result of EG3D with transfer learning still suffers from severe artifacts such as holes due to its pose-sensitive training process.

Quantitative Comparison. We conduct a quantitative evaluation of the image and shape quality. To evaluate the pose-irrespective performance of the models, we generate images and shapes at camera poses randomly sampled from a uniform distribution; refer to Sec. C.5 for more details on the pose sampling strategy used in this experiment. Tab. 1 shows the quantitative comparison. As the table shows, both with and without transfer learning, SideGAN outperforms all the other baselines in terms of image quality measured by FID [11], thanks to our effective training method.

Due to the absence of ground-truth 3D geometries for synthesized images, we evaluate the shape quality with pseudo-ground-truth shapes, which are estimated from the synthesized images using an off-the-shelf 3D reconstruction model [8], as done in EG3D [1]. We measure the depth error as the MSE between the depth generated by our model and the depth rendered from the estimated geometry. Tab. 1 shows that SideGAN achieves the best depth accuracy among the baselines, both with and without transfer learning. This improvement can also be seen in Fig. 4 and Fig. 5, where the shapes generated by SideGAN show high-fidelity 3D geometries compared to those of the other methods.

<table><tr><td></td><td>AUPS</td><td>Dual-branched discriminator & Pose-matching loss</td><td>Identity regularization ($\mathcal{L}_{\mathrm{id}}$)</td><td>FID↓</td></tr><tr><td rowspan="2">EG3D</td><td></td><td></td><td></td><td>28.912</td></tr><tr><td>✓</td><td></td><td></td><td>30.553</td></tr><tr><td rowspan="2">SideGAN (Ours)</td><td>✓</td><td>✓</td><td></td><td>23.106</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>22.219</td></tr></table>

Table 2: Ablation study for the key components of the proposed method on CelebAHQ [12].

<table><tr><td></td><td>FID↓</td><td>Depth error↓</td></tr><tr><td>Ours w/ pose-regression loss</td><td>30.069</td><td>0.624</td></tr><tr><td>Ours w/ pose-matching loss</td><td>22.219</td><td>0.549</td></tr></table>

Table 3: Comparison between the pose-matching loss and the pose-regression loss [6] on CelebAHQ [12].



Figure 7: Additional visual results of the ablation study. (b) While AUPS helps improve side-view image quality, artifacts still remain. (d) $\mathcal{L}_{\mathrm{id}}$ results in a slightly clearer side-view image than (c). (e) With the pose-regression loss instead of our pose-matching loss, our model produces a flattened shape.
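
A minimal sketch of this depth-error metric; the masking to valid pixels and the helper names are our assumptions, as the paper does not spell out these details:

```python
import torch

def depth_error(pred_depth: torch.Tensor, pseudo_gt_depth: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
    """MSE between the generator's depth and the depth rendered from the
    pseudo-ground-truth shape, restricted to valid (e.g., face) pixels."""
    diff = (pred_depth - pseudo_gt_depth) ** 2
    return (diff * mask).sum() / mask.sum().clamp(min=1)
```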

# 5.2. Ablation Studies

We conduct ablation studies to evaluate the benefits of four components in our framework: 1) the dual-branched discriminator (Sec. 3.2), 2) the pose-matching loss (Sec. 4.1), 3) AUPS (Sec. 4.3), and 4) the identity regularization $\mathcal{L}_{\mathrm{id}}$ (Sec. 4.2). The ablation studies are conducted on the CelebAHQ dataset [12].

Tab. 2 and Fig. 7 report the ablation study results. In Tab. 2, the pose-matching loss and the dual-branched discriminator are applied together, since supervision is needed for the pose branch of the discriminator. With AUPS, the side-view image quality of EG3D [1] improves (Fig. 7 (b)), since AUPS increases the learning opportunities at steep angles. However, the side-view images still contain artifacts, and the FID of EG3D deteriorates. This is because EG3D learns the real/fake distribution in a pose-aware manner through a pose-conditional GAN loss, which is unstable under the misalignment between the two pose distributions caused by AUPS, as mentioned in Sec. 4.3. Unlike EG3D, our framework with AUPS improves the FID as each component is added, proving the benefit of each component. This is because SideGAN's GAN loss is more robust to the pose-distribution mismatch than EG3D's, and because our model learns photo-realism and pose-consistency separately through the dual-branched discriminator and the pose-matching loss.

To evaluate the effectiveness of the pose-matching loss in learning side-view images and 3D geometries, we compare it with the pose-regression loss of GRAM [6] both quantitatively and qualitatively. As shown in Tab. 3, our pose-matching loss results in a significantly lower FID score and depth error, owing to the fact that our binary-classification-based pose-matching loss allows for easier training. We also provide a visual comparison in Fig. 7. Compared to our model trained with the pose-matching loss (d), the model trained with the pose-regression loss (e) produces a flattened shape, demonstrating the advantages of the pose-matching loss.

# 5.3. Effects on the Steep and Extrapolated Angles

Finally, we conduct a more detailed quantitative analysis of SideGAN for different camera poses by measuring the FID scores of synthesized images at frontal, steep, and extrapolated angles. Measuring FID scores requires a sufficient number of ground-truth images for each camera pose, which is not the case for in-the-wild datasets. Thus, we conduct our analysis using the FaceSynthetics dataset [26], which is pose-balanced and provides a larger number of images over a wider range of camera poses than the in-the-wild datasets. Specifically, we first construct a pose-imbalanced training dataset from FaceSynthetics by randomly sampling images within the pose range from $-50^{\circ}$ to $50^{\circ}$ so that the poses follow a Gaussian distribution, like in-the-wild datasets (a sketch of this subsampling is given below). We then train our model on this dataset and evaluate its FID scores at different camera poses using the original FaceSynthetics dataset. For comparison, we also evaluate the FID scores of EG3D [1]. In this experiment, we did not apply transfer learning.
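
One way to realize such subsampling is rejection sampling against a target Gaussian over yaw; the standard deviation below is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def subsample_gaussian_poses(yaws, sigma_deg=17.0, max_abs_deg=50.0, seed=0):
    """Keep each image with probability proportional to a Gaussian pdf of its
    yaw, producing a pose-imbalanced subset from a pose-balanced dataset."""
    rng = np.random.default_rng(seed)
    yaws = np.asarray(yaws)
    valid = np.abs(yaws) <= max_abs_deg           # restrict to [-50, 50] degrees
    weights = np.exp(-0.5 * (yaws / sigma_deg) ** 2)
    keep = valid & (rng.random(len(yaws)) < weights)
    return np.flatnonzero(keep)                   # indices of retained images
```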

Fig. 8 shows the evaluation results at different camera poses. In the figure, the near-frontal angles are $(-30^{\circ}, 30^{\circ})$, and the steep angles are $(-50^{\circ}, -30^{\circ}) \cup (30^{\circ}, 50^{\circ})$. The extrapolated angles are those smaller than $-50^{\circ}$ or larger than $50^{\circ}$, which are outside the training distribution. As shown in the figure, our model performs comparably to EG3D at near-frontal angles, and as the angle gets larger, our model performs significantly better than EG3D, proving the effectiveness of our approach.

# 6. Conclusion

In this paper, we proposed SideGAN, a novel 3D GAN training method that generates high-quality images irrespective of the camera pose. Our method is based on the key idea of decomposing the originally challenging problem into two easier subproblems, which promote pose-consistency and photo-realism, respectively. Based on this idea, we proposed a novel dual-branched discriminator and a pose-matching loss. We also presented AUPS to increase the learning opportunities for improving the synthesis quality at side viewpoints.



Figure 8: Comparison of image quality (FID↓) with respect to the range of camera angles. We limit the FaceSynthetics dataset [26] so that it has no images within the range of extrapolated angles. SideGAN outperforms EG3D [1] in image quality except in the frontal-view range, where it is still competitive. All angles are from -90 to 90 degrees with respect to the frontal view.

Our experimental results show that our method can synthesize photo-realistic images irrespective of the camera pose on human and animal face datasets. In particular, even with only pose-imbalanced in-the-wild datasets, our model can generate details of side-view images such as ears, unlike the blurry images from the baselines.

Our method is not free from limitations. For animal faces, we found that black spot-like artifacts appear behind the ears, which might be due to the lack of knowledge about the back of the ear, since we conduct transfer learning from synthetic human faces to animal faces. Also, despite the background network, the background region is sometimes not clearly separated. However, we expect that a more advanced background separation scheme such as [22] would be able to resolve this.

Acknowledgement This work was supported by the National Research Foundation of Korea (NRF) grant (NRF-2018R1A5A1060031), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No.2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)) funded by the Korea government (MSIT), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A2B5B02001913).

# References

[1] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 16123-16133, 2022.

[2] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 5799-5809, 2021.

[3] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 8188-8197, 2020.

[4] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 4690-4699, 2019.

[5] Yu Deng, Jiaolong Yang, Dong Chen, Fang Wen, and Xin Tong. Disentangled and controllable face image generation via 3d imitative-contrastive learning. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 5154-5163, 2020.

[6] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 10673-10683, 2022.

[7] Shi et al. 3d-aware indoor scene synthesis with depth priors. In ECCV, pages 406-422, 2022.

[8] Yao Feng, Haiwen Feng, Michael J Black, and Timo Bolkart. Learning an animatable detailed 3d face model from in-the-wild images. ACM Transactions on Graphics (ToG), 40(4):1-13, 2021.

[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020.

[10] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985, 2021.

[11] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proc. of the Advances in Neural Information Processing Systems (NeurIPS), 30, 2017.

[12] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

[13] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 4401-4410, 2019.

[14] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), 2020.

[15] Taehee Brad Lee. Cat hipsterizer. https://github.com/kairess/cat_hipsterizer, 2018. Accessed: 2022-11-08.

[16] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.

[17] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 11453-11464, 2021.

[18] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 13503-13513, 2022.

[19] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Proc. of the Advances in Neural Information Processing Systems (NeurIPS), 33:20154-20166, 2020.

[20] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of gans for semantic face editing. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 9243-9252, 2020.

[21] Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in gans. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 1532-1540, 2021.

[22] Minjung Shin, Yunji Seo, Jeongmin Bae, Young Sun Choi, Hyunsu Kim, Hyeran Byun, and Youngjung Uh. Ballgan: 3d-aware image synthesis with a spherical background. arXiv preprint arXiv:2301.09091, 2023.

[23] Ivan Skorokhodov, Sergey Tulyakov, Yiqun Wang, and Peter Wonka. Epigraf: Rethinking training of 3d gans. arXiv preprint arXiv:2206.10535, 2022.

[24] Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis. arXiv preprint arXiv:2205.15517, 2022.

[25] Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhofer, and Christian Theobalt. Stylerig: Rigging stylegan for 3d control over portrait images. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 6142-6151, 2020.

[26] Erroll Wood, Tadas Baltrusaitis, Charlie Hewitt, Sebastian Dziadzio, Matthew Johnson, Virginia Estellers, Thomas J. Cashman, and Jamie Shotton. Fake it till you make it: Face analysis in the wild using synthetic data alone. In Proc. of IEEE International Conference on Computer Vision (ICCV), 2021.

[27] Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, and Bolei Zhou. 3d-aware image synthesis via learning structural and textural representations. In Proc. of IEEE conference on computer vision and pattern recognition (CVPR), pages 18430-18439, 2022.

[28] Yuxuan Zhang, Wenzheng Chen, Huan Ling, Jun Gao, Yinan Zhang, Antonio Torralba, and Sanja Fidler. Image gans meet differentiable rendering for inverse graphics and interpretable 3d neural rendering. arXiv preprint arXiv:2010.09125, 2020.
3dawaregenerativemodelforimprovedsideviewimagesynthesis/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01882020a6334f97702984454b98101a1bac9c7fa596cbfb1d839d26d37caa34
size 806086
3dawaregenerativemodelforimprovedsideviewimagesynthesis/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3d8fe93750ef145f76c8132f2502472e5a041494a1a2c11da0aa573e532ece6
size 526177
3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96acd97bbc3e7ba80492bedcd918061090430441b26f6cc42f269c19d12accfe
size 82383
3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d5776164a6ffc4507022511d62d5c691de7d5571863d2f709fd204efe9aa693
size 102640
3dawareimagegenerationusing2ddiffusionmodels/a1f92a77-46e3-4d3f-a4f1-2561244d5fed_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b0a18cdedf820ac79fb0d50474884892a5634e41244fc4aba4930d15729694c
size 5197586
3dawareimagegenerationusing2ddiffusionmodels/full.md
ADDED
@@ -0,0 +1,359 @@
# 3D-aware Image Generation using 2D Diffusion Models

Jianfeng Xiang*1,2, Jiaolong Yang*, Binbin Huang*3, Xin Tong*
1Tsinghua University  2Microsoft Research Asia  3ShanghaiTech University

{t-jxiang, jiaoyan, xtong}@microsoft.com  huangbb@shanghaitech.edu.cn




Figure 1: Our diffusion-based 3D-aware image generation trained on ImageNet. The first three rows show the diverse objects and scenes generated by our method. The bottom row shows two cases synthesized under a $360^{\circ}$ camera trajectory. (More results at the project page.)






# Abstract

In this paper, we introduce a novel 3D-aware image generation method that leverages 2D diffusion models. We formulate the 3D-aware image generation task as multiview 2D image set generation, and further as a sequential unconditional-conditional multiview image generation process. This allows us to utilize 2D diffusion models to boost the generative modeling power of the method. Additionally, we incorporate depth information from monocular depth estimators to construct the training data for the conditional diffusion model using only still images.

We train our method on a large-scale unstructured 2D image dataset, i.e., ImageNet, which is not addressed by previous methods. It produces high-quality images that significantly outperform prior methods. Furthermore, our approach showcases its capability to generate instances with large view angles, even though the training images are diverse and unaligned, gathered from "in-the-wild" real-world environments.<sup>1</sup>

# 1. Introduction

Learning to generate 3D contents has become an increasingly prominent task due to its numerous applications such as VR/AR, movie production, and art design. Recently, significant progress has been made in the field of 3D-aware image generation, with a variety of approaches being proposed [4, 5, 7, 10, 30, 32, 43, 44, 54]. The goal of 3D-aware image generation is to train image generation models that are capable of explicitly controlling the 3D camera pose, typically using only unstructured 2D image collections.

Most existing methods for 3D-aware image generation rely on Generative Adversarial Networks (GANs) [9] and utilize a Neural Radiance Field (NeRF) [25] or its variants as the 3D scene representation. While promising results have been demonstrated for object-level generation, extending these methods to large-scale, in-the-wild data that features significantly more complex variations in geometry and appearance remains a challenge.

Diffusion Models (DMs) [13, 48, 50], on the other hand, are increasingly gaining recognition for their exceptional generative modeling performance on billion-scale image datasets [33, 35, 37]. It has been shown that DMs have surpassed GANs as the state-of-the-art models for complex image generation tasks [8, 14, 15, 29]. However, applying DMs to 3D-aware image generation tasks is not straightforward. Unlike 3D-aware GANs, training DMs for 3D generation necessitates raw 3D assets due to the regression-based nature of their learning [24, 27, 28, 45, 56].

To take advantage of the potent capability of DMs and the ample availability of 2D data, our core idea in this paper is to formulate 3D-aware generation as a multiview 2D image set generation task. Two critical issues must be addressed for this newly formulated task. The first is how to apply DMs to image set generation. Our solution is to cast set generation as a sequential unconditional-conditional generation process by factorizing the joint distribution of multiple views of an instance using the chain rule of probability. More specifically, we sample the initial view of an instance using an unconditional DM, followed by iteratively sampling other views with previous views as conditions via a conditional DM. This not only limits the model's output to a single image per generation step, but also grants it the ability to handle variable numbers of output views.

The second issue is the lack of multiview image data. Inspired by a few recent studies [3, 11], we append depth information to the image data through monocular depth estimation techniques and use depth to construct multiview data using only still images. However, we found that naively applying the data construction strategy of [11] can result in domain gaps between training and inference. To alleviate this, we introduce additional training data augmentation strategies that improve the generation quality, particularly for results under large view angles.

We tested our method on both a large-scale, multi-class dataset, i.e., ImageNet [6], and several smaller, single-category datasets that feature significant variations in geometry. The results show that our method outperformed state-of-the-art 3D-aware GANs on ImageNet by a wide margin, demonstrating the significantly enhanced generative modeling capability of our novel 3D-aware generation approach. It also performed favorably against prior art on the other datasets, showing comparable texture quality but improved geometry. Moreover, we find that our model has the capability to generate scenes under large view angles (up to 360 degrees) from unaligned training data, a challenging task that further demonstrates the efficacy of our new method.

The contributions of this work are summarized below:

- We present a novel 3D-aware image generation method that uses 2D diffusion models. The method is designed based on a new formulation for 3D-aware generation, i.e., sequential unconditional-conditional multiview image sampling.
- We undertake 3D-aware generation on a large-scale in-the-wild dataset (ImageNet), which is not addressed by previous 3D-aware generation models.
- We demonstrate the capability of our method for large-angle generation (up to 360 degrees) from unaligned data.
# 2. Related Work

3D-aware image generation Previous 3D-aware image generation studies [4, 5, 7, 10, 30, 32, 43] have achieved this objective on well-aligned image datasets of specific objects. Most of these works are based on GANs [9]. Some of them [5, 7, 43, 47, 54] generate 3D scene representations that are used to directly render the final output images. They typically leverage NeRF [25] or its variants as the 3D scene representation and train a scene generator with supervision on the rendered images from a jointly-trained discriminator. Others combine a 3D representation with 2D refinements [4, 10, 30, 32] in two steps: generating a low-resolution volume to render 2D images or feature maps, and then refining the 2D images with a super-resolution module. Another work [44] achieves this task without introducing intermediate 3D representations, using depth-based matching. Very recently, two works concurrent to ours [41, 46] expand the 3D-aware generation task to large and diverse 2D image collections such as ImageNet [6], utilizing geometric priors from pretrained monocular depth prediction models. This work presents a novel 2D-diffusion-based 3D-aware generative model, which can be applied to diverse in-the-wild 2D images.

Diffusion models Diffusion models [48] come with a well-conceived theoretical formulation and U-net architecture, making them suitable for image modeling tasks [13, 50]. Improved diffusion-based methods [8, 14, 15, 29] demonstrated that DMs have surpassed GANs as the new state-of-the-art models for some image generation tasks. Additionally, diffusion models can be applied to conditional generation, leading to the flourishing of downstream image-domain tasks such as image super-resolution [18, 38], inpainting [23, 35, 36], novel view synthesis [53], scene synthesis [3, 17], and 3D generation [1, 16, 24, 27, 28, 45, 56]. Our method utilizes 2D unconditional and conditional diffusion models with an iterative view sampling process to tackle 3D-aware generation.

Optimization-based 3D generation According to the theory of diffusion models, the U-nets are trained to approximate the score function (log-density derivative) of the image distribution under different noise levels [50]. This has led to the development of the Score Distillation Sampling (SDS) technique, which has been used to perform text-to-3D generation with a text-conditioned diffusion model, with SDS serving as the multiview objective to optimize a NeRF-based 3D representation. Although recent works [20, 51] have explored this technique with different diffusion models and 3D representations, they are not generative models and are not suitable for random generation without a text prompt.

Depth-assisted view synthesis Some previous works utilized depth information for view synthesis tasks, including single-view view synthesis [11, 31] and perpetual view generation [3, 19, 21]. In contrast, this work deals with a different task, i.e., 3D-aware generative modeling of 2D image distributions. For our task, we propose a new formulation of sequential unconditional-conditional multiview image sampling, where the latter conditional generation subroutine shares a similar goal with novel view synthesis.

# 3. Problem Formulation

# 3.1. Preliminaries

In this section, we provide a brief overview of the theory behind Diffusion Models and Conditional Diffusion Models [13, 50]. DMs are probabilistic generative models that are designed to recover images from a specified degradation process. To achieve this, two Markov chains are defined. The forward chain is a destruction process that progressively adds Gaussian noise to target images:

$$
q\left(\mathbf{x}_t \mid \mathbf{x}_{t-1}\right) = \mathcal{N}\left(\mathbf{x}_t;\, \sqrt{1-\beta_t}\,\mathbf{x}_{t-1},\, \beta_t \mathbf{I}\right). \tag{1}
$$

This process results in the complete degradation of the target images, leaving behind only tractable Gaussian noise. The reverse chain is then employed to iteratively recover images from noise:

$$
p_\theta\left(\mathbf{x}_{t-1} \mid \mathbf{x}_t\right) = \mathcal{N}\left(\mathbf{x}_{t-1};\, \mu_\theta\left(\mathbf{x}_t, t\right),\, \Sigma_\theta\left(\mathbf{x}_t, t\right)\right), \tag{2}
$$

where the mean and variance functions are modeled as neural networks trained by minimizing the KL divergence between the joint distributions $q(\mathbf{x}_{0:T})$ and $p_{\theta}(\mathbf{x}_{0:T})$ of the two chains. A simplified and reweighted version of this objective can be written as:

$$
\mathbb{E}_{t \sim \mathcal{U}[1,T],\, \mathbf{x}_0 \sim q(\mathbf{x}_0),\, \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0},\mathbf{I})} \left[ \left\| \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t) \right\|^2 \right]. \tag{3}
$$

After training the denoising network $\epsilon_{\theta}$, samples can be generated from Gaussian noise through the reverse chain.
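
A minimal PyTorch sketch of this simplified objective (Eq. 3); `alphas_cumprod` follows the usual DDPM parameterization, and the closed-form forward chain $\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\boldsymbol{\epsilon}$ is a standard fact, not a detail specific to this paper:

```python
import torch

def ddpm_loss(eps_model, x0, alphas_cumprod):
    """Simplified DDPM objective: predict the noise added at a random step t."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps  # closed-form forward chain
    return ((eps - eps_model(x_t, t)) ** 2).mean()
```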

Similarly, Conditional Diffusion Models are formulated by adding a condition $c$ to all the distributions in the derivation, with an objective involving $c$:

$$
\mathbb{E}_{t \sim \mathcal{U}[1,T],\, (\mathbf{x}_0, c) \sim q(\mathbf{x}_0, c),\, \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0},\mathbf{I})} \left[ \left\| \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t, c) \right\|^2 \right]. \tag{4}
$$

# 3.2. 3D Generation as Iterative View Sampling

Our assumption is that the distribution of 3D assets, denoted as $q_{a}(\mathbf{x})$, is equivalent to the joint distribution of their corresponding multiview images. Specifically, given a camera sequence $\{\pi_0, \pi_1, \dots, \pi_N\}$, we have

$$
q_a(\mathbf{x}) = q_i\left(\Gamma\left(\mathbf{x}, \boldsymbol{\pi}_0\right), \Gamma\left(\mathbf{x}, \boldsymbol{\pi}_1\right), \dots, \Gamma\left(\mathbf{x}, \boldsymbol{\pi}_N\right)\right), \tag{5}
$$

where $q_{i}$ is the distribution of images observed from 3D assets, and $\Gamma(\cdot, \cdot)$ is the 3D-to-2D rendering operator. This assumption is derived from the bijective correspondence between 3D assets and their multiview projections, given an infinite number of views (in practice, dozens to hundreds of views are usually adequate). The joint distribution can be factorized into a series of conditional distributions:

$$
\begin{aligned}
q_a(\mathbf{x}) = \; & q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_0)) \cdot \\
& q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_1) \mid \Gamma(\mathbf{x}, \boldsymbol{\pi}_0)) \cdots \\
& q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_N) \mid \Gamma(\mathbf{x}, \boldsymbol{\pi}_0), \dots, \Gamma(\mathbf{x}, \boldsymbol{\pi}_{N-1})). \tag{6}
\end{aligned}
$$

Notice that the conditional distributions exhibit an iterative arrangement. By sampling $\Gamma(\mathbf{x}, \pi_n)$ step by step with previous samples as conditions, the joint multiview images are generated, directly determining the 3D asset.

In practice, however, multiview images are also difficult to obtain. To use unstructured 2D image collections, we construct training data using depth-based image warping. First, we substitute the original condition images in Eq. 6, i.e., $\{\Gamma(\mathbf{x}, \boldsymbol{\pi}_k), k = 1, \dots, n-1\}$ for $\Gamma(\mathbf{x}, \boldsymbol{\pi}_n)$, with $\Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_k), \boldsymbol{\pi}_n)$, where $\Pi(\cdot, \cdot)$ denotes the depth-based image warping operation that warps an image to a given target view using depth. As a result, Eq. 6 can be rewritten as

$$
\begin{aligned}
q_a(\mathbf{x}) \approx \; & q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_0)) \cdot \\
& q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_1) \mid \Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_0), \boldsymbol{\pi}_1)) \cdots \\
& q_i(\Gamma(\mathbf{x}, \boldsymbol{\pi}_N) \mid \Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_0), \boldsymbol{\pi}_N), \dots). \tag{7}
\end{aligned}
$$

Under this formulation, we further eliminate the requirement for actual multiview images $\Gamma(\mathbf{x}, \pi_k)$ by only warping $\Gamma(\mathbf{x}, \pi_n)$ itself back and forth. The details can be found in Sec. 4.1.

Note that unlike some previous 3D-aware GANs [4, 5, 7, 10, 32], we model generic objects and scenes without pose labels or any canonical pose definition. We directly regard the image distribution $q_{d}$ in the datasets as $q_{i}(\Gamma(\mathbf{x}, \pi_{0}))$, i.e., the distribution of 3D assets' first partial views. All other views $\pi_1, \dots, \pi_N$ are considered to be relative to the first view. This way, we formulate 3D-aware generation as an unconditional-conditional image generation task, where an unconditional model is trained for $q_{i}(\Gamma(\mathbf{x}, \pi_{0}))$ and a conditional model is trained for the other terms $q_{i}(\Gamma(\mathbf{x}, \pi_{n}) \mid \Pi(\Gamma(\mathbf{x}, \pi_{0}), \pi_{n}), \dots)$.

# 4. Approach

As per our problem formulation in Sec. 3.2, our first step is to prepare the data, which includes the construction of RGBD images and the implementation of the warping algorithm (Sec. 4.1). We then train an unconditional RGBD diffusion model and a conditional model, parameterizing the unconditional term (the first one) and the conditional terms (the others) in Eq. 7, respectively (Sec. 4.2). After training, our method can generate diverse 3D-aware image samples within a broad camera pose range (Sec. 4.3). The inference framework of our method is depicted in Fig. 2.



Figure 2: The overall framework. Our method contains two diffusion models $\mathcal{G}_u$ and $\mathcal{G}_c$. $\mathcal{G}_u$ is an unconditional model for randomly generating the first view, and $\mathcal{G}_c$ is a conditional generator for novel views. With aggregated conditioning, multiview images are obtained iteratively by refining and completing previously synthesized views. For fast free-view synthesis, one can run 3D fusion or image-based rendering to synthesize new target views.

# 4.1. Data Preparation

RGBD image construction To achieve RGBD warping, additional depth information is required for each image. We employ an off-the-shelf monocular depth estimator [34] to predict the depth maps, as it generalizes well to the targeted datasets with diverse objects and scenes.

RGBD-warping operator The RGBD-warping operation $\Pi$ is a geometry-aware process that determines the relevant information of partial RGBD observations under novel viewpoints. It takes a source RGBD image $\mathbf{I}_s = (\mathbf{C}_s, \mathbf{D}_s)$ and a target camera $\pi_t$ as input, and outputs the visible image contents under the target view, $\mathbf{I}_t = (\mathbf{C}_t, \mathbf{D}_t)$, and a visibility mask $\mathbf{M}_t$, i.e., $\Pi: (\mathbf{I}_s, \pi_t) \to (\mathbf{I}_t, \mathbf{M}_t)$. Our warping algorithm is implemented using a mesh-based representation and rasterizer. For an RGBD image, we construct a mesh by back-projecting the pixels to 3D vertices and defining edges between adjacent pixels on the image grid.
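
The paper implements $\Pi$ with a mesh rasterizer; the sketch below illustrates the same geometry with a much simpler point-splatting z-buffer (the intrinsics `K` and the relative pose `R, t` are assumed given, and the per-pixel loop is for clarity, not speed):

```python
import numpy as np

def warp_rgbd(color, depth, K, R, t):
    """Forward-warp an RGBD image to a target view (point-splat sketch).

    color: (H, W, 3), depth: (H, W), K: (3, 3) intrinsics,
    R, t: rotation/translation from the source to the target camera.
    Returns warped color, warped depth, and a visibility mask.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3)
    pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)  # back-project
    pts = R @ pts + t[:, None]                               # move to target frame
    proj = K @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]
    out_c = np.zeros_like(color)
    out_d = np.full((H, W), np.inf)
    mask = np.zeros((H, W), dtype=bool)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(ok):                             # z-buffer splat
        if z[i] < out_d[v[i], u[i]]:
            out_d[v[i], u[i]] = z[i]
            out_c[v[i], u[i]] = src_colors[i]
            mask[v[i], u[i]] = True
    return out_c, np.where(mask, out_d, 0.0), mask
```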

Training pair construction To model the conditional distributions in Eq. 7, data-condition pairs comprising $\Gamma(\mathbf{x}, \boldsymbol{\pi}_n)$ and $\Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_k), \boldsymbol{\pi}_n)$ are required. Inspired by AdaMPI [11], we adopt a forward-backward warping strategy to construct the training pairs from only $\Gamma(\mathbf{x}, \boldsymbol{\pi}_n)$, without the need for actual images of $\Gamma(\mathbf{x}, \boldsymbol{\pi}_k)$. Specifically, the target RGBD images are first warped to novel views and then warped back to the original target views.



Figure 3: Illustration of forward-backward warping.

This strategy creates holes in the images, caused by geometric occlusion. Despite its simplicity, conditions constructed with this strategy are equivalent to warping real images to the target views for Lambertian surfaces, and are approximations for non-Lambertian regions:

$$
\Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_k), \boldsymbol{\pi}_n) \approx \Pi(\Pi(\Gamma(\mathbf{x}, \boldsymbol{\pi}_n), \boldsymbol{\pi}_k), \boldsymbol{\pi}_n). \tag{8}
$$

This is because the difference between $\Gamma(\mathbf{x}, \pi_k)$ and $\Pi(\Gamma(\mathbf{x}, \pi_n), \pi_k)$, i.e., the holes for scene contents not visible at view $\pi_n$, will become invisible again when warped back to $\pi_n$, and is therefore irrelevant. See Fig. 3 for an illustration of our training pair construction based on this forward-backward warping strategy.
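
Given a warp operator like the sketch above, constructing a training pair reduces to two applications of $\Pi$. This is a sketch only; `sample_relative_pose` is a hypothetical helper drawing from the Gaussian pose distribution mentioned in Sec. 5:

```python
def make_training_pair(color, depth, K, sample_relative_pose, warp_rgbd):
    """Forward-backward warping: target image -> random view -> back again.
    Returns the target RGBD (data) and its hole-ridden twin (condition)."""
    R, t = sample_relative_pose()                    # view n -> view k
    c_k, d_k, m_k = warp_rgbd(color, depth, K, R, t)
    # Inverse transform: view k -> view n.
    # (A full implementation would drop pixels with m_k == False
    #  before re-warping; we omit that bookkeeping for brevity.)
    R_inv, t_inv = R.T, -R.T @ t
    c_n, d_n, m_n = warp_rgbd(c_k, d_k, K, R_inv, t_inv)
    condition = (c_n, d_n, m_n)                      # holes where occluded at view k
    return (color, depth), condition
```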

# 4.2. Training

# 4.2.1 Unconditional RGBD generation

We first train an unconditional diffusion model $\mathcal{G}_u$ to model the distribution of all 2D RGBD images (the first term $q_{i}(\Gamma(\mathbf{x}, \pmb{\pi}_{0}))$ in Eq. 7). As mentioned, we directly regard the image distribution $q_{d}$ in the datasets as $q_{i}$, i.e., the distribution of 3D assets' partial observations, and train the diffusion model on the constructed RGBD images $\mathbf{I} \sim q_d(\mathbf{I})$ to parameterize it.

We adopt the ADM network architecture from [8] with minor modifications to incorporate the depth channel. For datasets with class labels (e.g., ImageNet [6]), classifier-free guidance [15] is employed with a label dropping rate of $10\%$.

# 4.2.2 Conditional RGBD completion and refining

We then train a conditional RGBD diffusion model $\mathcal{G}_c$ for sequential view generation (the remaining terms $q_i(\Gamma(\mathbf{x}, \pi_n) \mid \Pi(\Gamma(\mathbf{x}, \pi_0), \pi_n), \dots)$ in Eq. 7). The data pairs $(\mathbf{I}, \Pi(\Pi(\mathbf{I}, \pi_k), \pi_n))$ constructed with the forward-backward warping strategy are used to train $\mathcal{G}_c$. Instead of predefining the camera sequences $\{\pi_n\}$ for training, we randomly sample relative camera poses from a Gaussian distribution, which makes the process more flexible while preserving the generalization ability.

Our conditional models are fine-tuned from their unconditional counterparts. Specifically, we concatenate the additional condition, i.e., a warped RGBD image with its mask, with the original noisy image to form the new network input. The holes in the condition RGBD image are filled with Gaussian noise. Necessary modifications to the network structure are made to the first layer to increase the number of input channels, and zero initialization is used for the added network parameters. Classifier-free guidance is not applied to these conditions.
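
A sketch of how such a condition could be assembled into the denoiser input; the channel layout and names are our assumptions for illustration:

```python
import torch

def build_conditional_input(noisy_rgbd, cond_rgbd, cond_mask):
    """Concatenate the noisy target RGBD with the warped condition.

    noisy_rgbd: (B, 4, H, W) diffusion state; cond_rgbd: (B, 4, H, W);
    cond_mask: (B, 1, H, W), 1 where the warped condition is visible.
    Holes in the condition are filled with Gaussian noise, as described above.
    """
    noise = torch.randn_like(cond_rgbd)
    cond = cond_mask * cond_rgbd + (1 - cond_mask) * noise
    return torch.cat([noisy_rgbd, cond, cond_mask], dim=1)  # (B, 9, H, W)
```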

We apply several data augmentation strategies to the constructed conditions for training. We found that such augmentations improve the performance and stability of the inference process.

Blur augmentation The RGBD warping operation introduces image blur due to the low-pass filtering that occurs during interpolation and resampling in mesh rasterization. The forward-backward warping strategy involves two image warping steps, while only one is performed during inference. To mitigate this gap, for the constructed conditions, we randomly replace the unmasked pixels in twice-warped images with pixels from the original images with a predefined probability, and then apply Gaussian blur with random standard deviations (Fig. 4). This augmentation expands the training condition distribution to better reflect the conditions encountered at inference time, as sketched below.
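
A sketch of the blur augmentation; the replacement probability and the sigma range are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_augment(cond, original, mask, p_replace=0.5, sigma_max=1.5, rng=None):
    """Randomly swap twice-warped pixels back to original ones, then blur.

    cond, original: (H, W, C) float arrays; mask: (H, W) visibility mask.
    """
    rng = rng or np.random.default_rng()
    swap = (rng.random(mask.shape) < p_replace) & mask
    out = np.where(swap[..., None], original, cond)
    sigma = rng.uniform(0.0, sigma_max)
    # Blur spatially, leaving the channel axis untouched; a full implementation
    # would avoid blurring across hole boundaries.
    return gaussian_filter(out, sigma=(sigma, sigma, 0))
```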



Figure 4: Illustration of our condition construction process.

Texture erosion augmentation Textures located close to depth discontinuities in the condition images have a negative impact on the image generation quality. This phenomenon can be attributed to two causes. First, in-the-wild images contain complex view-dependent lighting effects, particularly near object boundaries (consider the Fresnel effect, rim light, subsurface scattering, etc.). These distinctive features serve as strong indicators of the edges of foreground objects, hindering the ability of the conditional model to generate appropriate geometry in novel views. Second, the estimated depth map is not perfect and may incur segmentation errors around object edges. To address this issue, we perform random erosion on the texture component of the constructed conditions while leaving the depth unchanged (Fig. 4). This augmentation eliminates the problematic textural information near edges and leads to superior generation quality.
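
A simplified sketch of the texture erosion, approximating the paper's random erosion with a fixed morphological erosion of the color channels' validity mask; the iteration count is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_texture(cond_rgb, cond_depth, mask, iters=2):
    """Erode the valid region of the color channels near hole boundaries,
    leaving the depth channel (and its mask) unchanged."""
    rgb_mask = binary_erosion(mask, iterations=iters)
    rgb = np.where(rgb_mask[..., None], cond_rgb, 0.0)  # drop boundary texture
    return rgb, cond_depth, rgb_mask, mask
```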

# 4.3. Inference

With the trained conditional and unconditional generative models, our 3D-aware iterative view sampling can be applied to obtain the multiview images of a 3D asset:

$$
\begin{aligned}
p_\theta\left(\mathbf{I}_0, \mathbf{I}_1, \dots, \mathbf{I}_N\right) \approx \; & p_\theta\left(\mathbf{I}_0\right) \cdot \\
& p_\theta\left(\mathbf{I}_1 \mid \Pi\left(\mathbf{I}_0, \boldsymbol{\pi}_1\right)\right) \cdots \\
& p_\theta\left(\mathbf{I}_N \mid \Pi\left(\mathbf{I}_0, \boldsymbol{\pi}_N\right), \dots\right). \tag{9}
\end{aligned}
$$

One can define a camera sequence that covers the desired views for multiview image synthesis. This camera sequence can be set arbitrarily to a large extent; such flexibility is provided by the random warping used during the training stage. Following the given camera sequence, novel views are sampled one after another iteratively, with all previously sampled images as conditions.
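
Putting the pieces together, the inference loop of Eq. 9 could look like this sketch; `sample_unconditional` and `sample_conditional` stand in for DDPM/DDIM sampling with $\mathcal{G}_u$ and $\mathcal{G}_c$, and `aggregate` for the conditioning of Eq. 10 below:

```python
def generate_multiview(cameras, sample_unconditional, sample_conditional,
                       warp_rgbd_to, aggregate):
    """Iterative view sampling: one unconditional view, then conditional ones."""
    views = [sample_unconditional()]            # I_0 ~ p_theta(I_0)
    for cam in cameras[1:]:
        warped = [warp_rgbd_to(v, cam) for v in views]
        cond, mask = aggregate(warped)          # weighted sum of previous views
        views.append(sample_conditional(cond, mask))
    return views
```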

Condition aggregation There remains the question of how our trained conditional diffusion models can be conditioned on all previously sampled images. We have tested both stochastic conditioning [3, 53] and a new aggregated conditioning strategy, and found the latter to be more effective for our task. As illustrated in Fig. 2 (right), aggregated conditioning collects information from previous images by performing a weighted sum across all warped versions of them:

$$
\mathbf{C}_n = \sum_{i=0}^{n-1} \mathbf{W}_{(i,n)} \,\Pi\left(\mathbf{I}_i, \boldsymbol{\pi}_n\right) \Big/ \sum_{i=0}^{n-1} \mathbf{W}_{(i,n)}, \tag{10}
$$

where $\mathbf{W}_{(i,n)}$ is the weight map. The weight is calculated per pixel following the lumigraph rendering principles [2]. More details of the weight map computation can be found in the Appendix.
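
A sketch of the aggregation in Eq. 10; the weight maps are assumed precomputed from the lumigraph heuristics described in the Appendix and zeroed outside each view's visibility mask:

```python
import numpy as np

def aggregate_conditions(warped_images, weights, eps=1e-8):
    """Pixel-wise weighted average of all previous views warped to view n.

    warped_images: list of (H, W, C) arrays; weights: list of (H, W) maps.
    Returns the aggregated condition and a mask of pixels any view covers.
    """
    num = sum(w[..., None] * img for img, w in zip(warped_images, weights))
    den = sum(weights)[..., None]
    mask = den[..., 0] > eps
    return np.where(mask[..., None], num / np.maximum(den, eps), 0.0), mask
```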

Fusion-based free-view synthesis In our original formulation, generating any novel view of an instance necessitates running the diffusion model $\mathcal{G}_c$, which is inefficient for video generation and interactive applications. Here, we present a simple and efficient free-view generation solution based on fusing a fixed set of pre-generated views. Specifically, we first define a set of views uniformly covering the desired viewing range and generate images using the trained diffusion models. For any novel view, we warp the pre-generated views to it and aggregate them using a strategy following our condition aggregation. This approach not only improves the speed of video generation, but also preserves the consistency of texture details across different views.
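
The free-view renderer can then reuse the same aggregation without invoking the diffusion model again; a sketch under the same assumptions as above, with `compute_weights` a hypothetical helper for the lumigraph-style weights:

```python
def render_novel_view(target_cam, pregen_views, warp_rgbd_to,
                      aggregate_conditions, compute_weights):
    """Fuse a fixed set of pre-generated views into an arbitrary target view."""
    warped = [warp_rgbd_to(v, target_cam) for v in pregen_views]
    weights = [compute_weights(v, target_cam) for v in pregen_views]
    image, mask = aggregate_conditions(warped, weights)
    return image, mask
```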

# 5. Experiments

Implementation details We train our method on four datasets: ImageNet [6], SDIP Dogs [26], SDIP Elephants [26] and LSUN Horses [55]. ImageNet is a large-scale dataset containing 1.3M images from 1000 classes. The other three are single-category datasets containing 125K, 38K, and 163K images, respectively. Images in these datasets are unaligned and contain complex geometry, which makes the 3D-aware image generation task challenging. We predict the depth maps using the MiDaS [34] dpt_beit_large_512 model. When constructing training pairs with forward-backward warping, camera poses are sampled from Gaussian distributions with $\sigma = (0.3, 0.15)$ for the yaw and pitch angles. FOVs are fixed to $45^{\circ}$.

Our experiments are primarily conducted at $128^{2}$ image resolution, and we also demonstrate $256^{2}$ generation results using a diffusion-based super-resolution model. We use the same network architecture and training setting as ADM [8] for training on ImageNet, and a smaller version with halved channels on the other three datasets for efficiency. All our models are trained on 8 NVIDIA Tesla V100 GPUs with 32GB memory. For ImageNet results, the classifier-free guidance weight of both the unconditional and conditional networks is set to 3 for the shown samples and 0 for numerical evaluation.$^{2}$

Inference speed Evaluated on an NVIDIA Tesla V100 GPU, generating the initial view using $\mathcal{G}_u$ takes 20s with a 1000-step DDPM sampler, while generating one new view using $\mathcal{G}_c$ takes 1s with a 50-step DDIM sampler.

# 5.1. Visual Results

Figures 1, 5 and 6 present sampled multiview images from our method. As shown, our method generates 3D-aware multiview images with diverse content over large view angles. High-quality 3D-aware images can be generated from in-the-wild image collections.

# 5.2. Comparison to Prior Arts

We compare our method with previous 3D-aware GANs, including pi-GAN [5], EpiGRAF [47] and EG3D [4]. Since there are no pose labels, the pose conditioning in EpiGRAF and EG3D is removed; class labels are fed to the generator and discriminator instead. Note that no depth map is used by these methods.

For quantitative evaluation, we measure the Fréchet Inception Distance (FID) [12] and Inception Score (IS) [39] using 10K randomly generated samples and the whole set of real images. Following past practice [5], camera poses are randomly sampled from Gaussian distributions with $\sigma = 0.3$ and $0.15$ for the yaw and pitch angles, respectively. The results are shown in Table 1, and some visual examples are presented in Fig. 5.

On ImageNet, Table 1 shows that our results are significantly better than EpiGRAF and EG3D, while pi-GAN clearly underperforms. This large performance gain demonstrates the superior capability of our method for modeling diverse, large-scale image data. The visual examples also show the better quality of our results.

On the other, smaller single-category datasets, the quantitative results of the three methods are comparable: our method is slightly worse than EG3D and slightly better than EpiGRAF. However, their results often exhibit unrealistic 3D geometry. As can be observed in Fig. 5, both EG3D and EpiGRAF generated 'planar' geometries and hence failed to produce realistic 3D shapes of the synthesized objects, leading to wrong visual parallax when viewed from different angles.

# 5.3. Large View Synthesis

In this section, we further test the modeling capability of our conditional diffusion model $\mathcal{G}_c$, particularly under long camera trajectories for large view synthesis.



Figure 5: Multiview generation on the ImageNet, SDIP Dogs, SDIP Elephants and LSUN Horses datasets at $128^{2}$ resolution.



Figure 6: Images generated by our method with and without the fusion strategy.

Performance w.r.t. view range We first test our image generation quality under different view ranges. We define a long camera sequence $\{\pi_n\}$ that forms a sampling grid with 9 columns for yaw and 3 rows for pitch. The resultant 27 views span $\pm 0.6$ in yaw (i.e., a $\sim 70^{\circ}$ range) and $\pm 0.15$ in pitch (i.e., a $\sim 17^{\circ}$ range). The numerical results in Table 2 show that the quality degrades moderately as the view range gets larger. The quality drop can be attributed to two causes: domain drifting and data bias (see Appendix for discussion). Figure 7 shows all 27 views of two samples; the visual quality at large angles remains reasonable.
|
| 236 |
+
|
| 237 |
+
$360^{\circ}$ generation We evaluated $360^{\circ}$ generation on ImageNet and found that our approach is effective in certain scenarios, as shown in Fig. 1 and 8. Note that $360^{\circ}$ generation of unbounded real-world scenes is a challenging task. One significant contributor to this challenge is the data bias problem: rear views of objects are frequently underrepresented.
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
Figure 7: Large view synthesis results. To highlight the contribution of the conditional generator $\mathcal{G}_c$ , we show a smaller figure with regions invisible in the first view marked pink.
|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
Figure 8: Curated $360^{\circ}$ generation results on ImageNet.
|
| 244 |
+
|
| 245 |
+
Table 1: Quantitative comparison of generation quality with FID and IS scores using 10K generated samples.
|
| 246 |
+
|
| 247 |
+
<table><tr><td rowspan="2">Method</td><td colspan="2">ImageNet</td><td>Dog</td><td>Elephant</td><td>Horse</td></tr><tr><td>FID↓</td><td>IS↑</td><td>FID↓</td><td>FID↓</td><td>FID↓</td></tr><tr><td>pi-GAN [5]</td><td>138</td><td>6.82</td><td>115</td><td>71.0</td><td>92.6</td></tr><tr><td>EpiGRAF [47]</td><td>67.3</td><td>12.7</td><td>17.3</td><td>7.25</td><td>5.82</td></tr><tr><td>EG3D [4]</td><td>40.4</td><td>16.9</td><td>9.83</td><td>3.15</td><td>2.61</td></tr><tr><td>Ours</td><td>9.45</td><td>68.7</td><td>12.0</td><td>6.00</td><td>4.01</td></tr><tr><td>Ours.fusion</td><td>14.1</td><td>61.4</td><td>14.7</td><td>11.0</td><td>10.2</td></tr></table>
|
| 248 |
+
|
| 249 |
+
Table 2: Generation quality with various view ranges, measured with FID and IS scores of $10\mathrm{K}$ generated samples.
|
| 250 |
+
|
| 251 |
+
<table><tr><td rowspan="2">(#views, yaw range)</td><td colspan="2">ImageNet</td><td>Dog</td><td>Elephant</td><td>Horse</td></tr><tr><td>FID↓</td><td>IS↑</td><td>FID↓</td><td>FID↓</td><td>FID↓</td></tr><tr><td>(1, 0°) - Gu only</td><td>7.85</td><td>85.2</td><td>8.48</td><td>4.06</td><td>2.50</td></tr><tr><td>(9, 17°)</td><td>8.90</td><td>74.9</td><td>11.5</td><td>6.22</td><td>3.52</td></tr><tr><td>(15, 35°)</td><td>9.82</td><td>71.0</td><td>13.0</td><td>7.95</td><td>4.85</td></tr><tr><td>(21, 50°)</td><td>11.2</td><td>66.1</td><td>14.9</td><td>10.1</td><td>6.75</td></tr><tr><td>(27, 70°)</td><td>13.0</td><td>60.3</td><td>17.0</td><td>12.8</td><td>9.41</td></tr></table>
|
| 252 |
+
|
| 253 |
+
|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
Figure 9: Ablation study on our proposed data augmentation strategies. Noticeable artifacts are marked with boxes.
|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
Figure 10: Ablation study on our proposed aggregated conditioning and stochastic conditioning.
|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
Figure 11: NeuS and COLMAP reconstruction results.
|
| 263 |
+
|
| 264 |
+
Table 3: Ablation study on the proposed condition augmentation strategies. The FID-2K metric on the SDIP Dog dataset is reported.
|
| 265 |
+
|
| 266 |
+
<table><tr><td>(#views, yaw range)</td><td>Ours</td><td>w/o erosion</td><td>w/o blur</td></tr><tr><td>(9,17°)</td><td>18.1</td><td>18.4</td><td>19.1</td></tr><tr><td>(15,35°)</td><td>19.2</td><td>20.9</td><td>22.2</td></tr><tr><td>(27,70°)</td><td>23.1</td><td>26.8</td><td>31.6</td></tr></table>
|
| 267 |
+
|
| 268 |
+
# 5.4. Ablation Study
|
| 269 |
+
|
| 270 |
+
Data augmentation strategies We train two conditional models on the SDIP Dog dataset, each with one of the proposed augmentations disabled, and compare the results both visually and quantitatively to verify their effectiveness. 27 views are synthesized for each generated instance, following the evaluation in Sec. 5.3. Figure 9 shows that without blur augmentation, the generated images become excessively sharp after a short view sampling chain, which is also detrimental in terms of the FID metric (Table 3). Additionally, without texture erosion augmentation, unreliable information at the edges of the depth map can negatively impact the conditional view sampling process, resulting in poor large-view results. This decrease in quality is also evident in the FID metrics. With all of our proposed augmentations enabled, we achieve the best results both visually and quantitatively.
|
| 271 |
+
|
| 272 |
+

|
| 273 |
+
Figure 12: $256^{2}$ generation result upsampled from $128^{2}$ using diffusion-based super-resolution model.
|
| 274 |
+
|
| 275 |
+

|
| 276 |
+
|
| 277 |
+

|
| 278 |
+
|
| 279 |
+
|
| 280 |
+
|
| 281 |
+
Multiview conditioning strategy We further compare the effectiveness of our aggregated conditioning strategy against stochastic conditioning [3, 53] in Fig. 10. For our task, stochastic conditioning is not suitable, as it does not properly consider all previously generated content and leads to inconsistency among different views.
|
| 282 |
+
|
| 283 |
+
Fusion-based free-view synthesis Table 1 shows the quantitative results of our efficient, fusion-based free-view synthesis solution. For this solution, we first generate 27 fixed views with $70^{\circ}$ yaw range and $17^{\circ}$ pitch range (Sec. 5.3) and use them to generate novel views. After generating these 27 views, it can run at 16 fps to generate arbitrary novel views with our unoptimized mesh rendering implementation. Its FID score is still significantly lower than previous methods on ImageNet, but slightly higher compared to our original method. This is expected as the image-based fusion inevitably introduces blur and other distortions. Figure 6 compares the image samples generated by our method with and without the fusion strategy. Moreover, we conduct 3D reconstruction on the multi-view image samples with the fusion strategy using NeuS [52] and COLMAP [42]. As is shown in Figure 11, the accurate reconstruction results from both methods demonstrate the good multiview consistency of our method. The results with smoothly-changing views in our supplementary video are also generated with this fusion strategy. For the $360^{\circ}$ renderings in the video, the results are obtained by fusing 15 views covering the upper hemisphere of camera viewpoints.
|
| 284 |
+
|
| 285 |
+
# 5.5. Higher-Resolution Generation
|
| 286 |
+
|
| 287 |
+
In theory, our method can be directly applied to train on higher-resolution images given sufficient computational resources. An efficient alternative is to apply image-space upsampling, which has seen frequent use in previous 3D-aware GANs [4, 10, 32]. We have implemented a $256^{2}$ DM conditioned on low-resolution images for image upsampling, following Cascaded Diffusion [14]. This model is trained efficiently by fine-tuning a pretrained $128^{2}$ unconditional model. Figure 12 shows one $256^{2}$ sample from this model; more can be found in the Appendix.
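As a rough sketch of the conditioning typically used in such a cascaded upsampler, the low-resolution output can be bilinearly upsampled and concatenated with the noisy $256^{2}$ input along the channel axis before being fed to the super-resolution U-Net; the function name and tensor shapes below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def sr_unet_input(noisy_hr, lowres, hr_size=256):
    """Build the input of a cascaded super-resolution diffusion model.

    noisy_hr: (B, 3, 256, 256) current noisy high-resolution sample x_t
    lowres:   (B, 3, 128, 128) image produced by the 128^2 base model
    """
    lowres_up = F.interpolate(lowres, size=(hr_size, hr_size), mode="bilinear", align_corners=False)
    return torch.cat([noisy_hr, lowres_up], dim=1)  # (B, 6, 256, 256), fed to the SR U-Net
```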
|
| 288 |
+
|
| 289 |
+
# 6. Conclusion
|
| 290 |
+
|
| 291 |
+
We have presented a novel method for 3D-aware image generative modelling. Our method is derived from a new formulation of this task: sequential unconditional-conditional generation of multiview images. We incorporate depth information to construct our training data using only still images, and train diffusion models for multiview image modeling. The training results on both a large-scale multi-class dataset (i.e., ImageNet) and complex single-category datasets collectively demonstrate the strong generative modelling power of our proposed method.
|
| 292 |
+
|
| 293 |
+
Limitations and future work Though our method has shown high-quality results and strong generative power, it still has several limitations. Firstly, the depth maps used for training are obtained with an existing monocular depth estimator [34]; depth errors and biases in the data inevitably affect the quality of our generated results. Alleviating this negative impact, or eliminating the need for depth altogether (e.g., by using multiview images), is left as future work. Secondly, not all objects can be generated under $360^{\circ}$. We empirically found that this works better for object categories with more back-view images in the training dataset and with the main object well center-aligned. Making $360^{\circ}$ generation more robust is also a future direction. Finally, like most diffusion models, the image generation speed of our method is limited. However, we posit that these limitations can be gradually alleviated with the development of DM sampling acceleration [22, 40, 49].
|
| 294 |
+
|
| 295 |
+
# References
|
| 296 |
+
|
| 297 |
+
[1] Titas Anciukevicius, Zexiang Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J Mitra, and Paul Guerrero. Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12608-12618, 2023. 2
|
| 298 |
+
[2] Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, and Michael Cohen. Unstructured lumigraph rendering. In Annual Conference on Computer Graphics and Interactive Techniques, pages 425-432, 2001. 6
|
| 299 |
+
[3] Shengqu Cai, Eric Ryan Chan, Songyou Peng, Mohamad Shahbazi, Anton Obukhov, Luc Van Gool, and Gordon Wetzstein. Diffdreamer: Consistent single-view perpetual view generation with conditional diffusion models. arXiv preprint arXiv:2211.12131, 2022. 2, 3, 5, 9
|
| 300 |
+
[4] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In IEEE/CVF International Conference on Computer Vision, 2022. 1, 2, 3, 6, 8, 9
|
| 301 |
+
[5] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu,
|
| 302 |
+
|
| 303 |
+
and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5799-5809, 2021. 1, 2, 3, 6, 8
|
| 304 |
+
[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009. 2, 5, 6
|
| 305 |
+
[7] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In IEEE/CVF International Conference on Computer Vision, 2022. 1, 2, 3
|
| 306 |
+
[8] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. 1, 2, 5, 6
|
| 307 |
+
[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014. 1, 2
|
| 308 |
+
[10] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. In International Conference on Learning Representations, 2021. 1, 2, 3, 9
|
| 309 |
+
[11] Yuxuan Han, Ruicheng Wang, and Jiaolong Yang. Single-view synthesis in the wild with learned adaptive multiplane images. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-8, 2022. 2, 3, 4
|
| 310 |
+
[12] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626-6637, 2017. 6
|
| 311 |
+
[13] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 1, 2, 3
|
| 312 |
+
[14] Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. J. Mach. Learn. Res., 23(47):1-33, 2022. 1, 2, 9
|
| 313 |
+
[15] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 1, 2, 5
|
| 314 |
+
[16] Animesh Karnewar, Andrea Vedaldi, David Novotny, and Niloy J Mitra. Holodiffusion: Training a 3d diffusion model using 2d images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18423-18433, 2023. 2
|
| 315 |
+
[17] Jiabao Lei, Jiapeng Tang, and Kui Jia. Generative scene synthesis via incremental view inpainting using rgbd diffusion models. arXiv preprint arXiv:2212.05993, 2022. 2
|
| 316 |
+
[18] Haoying Li, Yifan Yang, Meng Chang, Shiqi Chen, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Srdiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479:47-59, 2022. 2
|
| 317 |
+
[19] Zhengqi Li, Qianqian Wang, Noah Snavely, and Angjoo Kanazawa. Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In ECCV, 2022. 3
|
| 318 |
+
[20] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fi
|
| 319 |
+
|
| 320 |
+
dler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. arXiv preprint arXiv:2211.10440, 2022. 2
|
| 321 |
+
[21] Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, and Angjoo Kanazawa. Infinite nature: Perpetual view generation of natural scenes from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14458-14467, 2021. 3
|
| 322 |
+
[22] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In Advances in Neural Information Processing Systems, 2022. 9
|
| 323 |
+
[23] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11461-11471, 2022. 2
|
| 324 |
+
[24] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2837-2845, 2021. 1, 2
|
| 325 |
+
[25] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 1, 2
|
| 326 |
+
[26] Ron Mokady, Omer Tov, Michal Yarom, Oran Lang, Inbar Mosseri, Tali Dekel, Daniel Cohen-Or, and Michal Irani. Self-distilled stylegan: Towards generation from internet photos. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1–9, 2022. 6
|
| 327 |
+
[27] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. arXiv preprint arXiv:2212.01206, 2022. 1, 2
|
| 328 |
+
[28] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 1, 2
|
| 329 |
+
[29] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021. 1, 2
|
| 330 |
+
[30] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11453-11464, 2021. 1, 2
|
| 331 |
+
[31] Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 3d ken burns effect from a single image. ACM Transactions on Graphics (ToG), 38(6):1-15, 2019. 3
|
| 332 |
+
[32] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In IEEE/CVF International Conference on Computer Vision, 2022. 1, 2, 3, 9
|
| 333 |
+
[33] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125,
|
| 334 |
+
|
| 335 |
+
2022. 1
|
| 336 |
+
[34] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE transactions on pattern analysis and machine intelligence, 44(3):1623-1637, 2020. 4, 6, 9
|
| 337 |
+
[35] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 1, 2
|
| 338 |
+
[36] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-10, 2022. 2
|
| 339 |
+
[37] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Raphael Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022. 1
|
| 340 |
+
[38] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 2
|
| 341 |
+
[39] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in neural information processing systems, 29, 2016. 6
|
| 342 |
+
[40] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations, 2022. 9
|
| 343 |
+
[41] Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, and Deqing Sun. Vq3d: Learning a 3d-aware generative model on image-net. arXiv preprint arXiv:2302.06833, 2023. 2, 6
|
| 344 |
+
[42] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 9
|
| 345 |
+
[43] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Advances in Neural Information Processing Systems, 2020. 1, 2
|
| 346 |
+
[44] Zifan Shi, Yujun Shen, Jiapeng Zhu, Dit-Yan Yeung, and Qifeng Chen. 3d-aware indoor scene synthesis with depth priors. In ECCV, 2022. 1, 2
|
| 347 |
+
[45] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. arXiv preprint arXiv:2211.16677, 2022. 1, 2
|
| 348 |
+
[46] Ivan Skorokhodov, Aliaksandr Siarohin, Yinghao Xu, Jian Ren, Hsin-Ying Lee, Peter Wonka, and Sergey Tulyakov. 3d generation on imagenet. In International Conference on Learning Representations, 2023. 2, 6
|
| 349 |
+
[47] Ivan Skorokhodov, Sergey Tulyakov, Yiqun Wang, and Peter Wonka. EpiGRAF: Rethinking training of 3d GANs. In Advances in Neural Information Processing Systems, 2022. 2, 6, 8
|
| 350 |
+
|
| 351 |
+
[48] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015. 1, 2
|
| 352 |
+
[49] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 9
|
| 353 |
+
[50] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 1, 2, 3
|
| 354 |
+
[51] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. arXiv preprint arXiv:2212.00774, 2022. 2
|
| 355 |
+
[52] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. Advances in Neural Information Processing Systems, 2021. 9
|
| 356 |
+
[53] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022. 2, 5, 9
|
| 357 |
+
[54] Jianfeng Xiang, Jiaolong Yang, Yu Deng, and Xin Tong. Gram-hd: 3d-consistent image generation at high resolution with generative radiance manifolds. arXiv preprint arXiv:2206.07255, 2022. 1, 2
|
| 358 |
+
[55] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. 6
|
| 359 |
+
[56] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5826–5835, 2021. 1, 2
|
3dawareimagegenerationusing2ddiffusionmodels/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00b9bdbb092a9c4d8472abaed7e3ff87819a8191046cbc24308c143f33998a62
size 1139912
3dawareimagegenerationusing2ddiffusionmodels/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b92e4cd1a73ba31fc2293ccca094e28331dde2832f1036a013d0a48d3bd91453
size 416030
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52412e0d2190385a7b7e62948b645bfd8d42a069deb5e1513606174eb7d91ee7
size 93808
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8666378b0e4f1783261bc98a917b2b2d3dd199614dc185bca6e0d6daa357a17b
size 118404
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/2fa63626-9b1d-415b-b571-b36c444bdefd_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b00579b71c1ad571fdbdb33b35c8eb00a27f51f9a3d7bb2225961676f5dc6ae
size 3336218
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/full.md
ADDED
@@ -0,0 +1,361 @@
| 1 |
+
# 3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation
|
| 2 |
+
|
| 3 |
+
Yi Zhang $^{1*}$ Pengliang Ji $^{2*}$ Angtian Wang $^{1}$ Jieru Mei $^{1}$ Adam Kortylewski $^{3,4}$ Alan Yuille $^{1}$ $^{1}$ Johns Hopkins University $^{2}$ Beihang University $^{3}$ Max Planck Institute for Informatics $^{4}$ University of Freiburg
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
Regression-based methods for 3D human pose estimation directly predict the 3D pose parameters from a 2D image using deep networks. While achieving state-of-the-art performance on standard benchmarks, their performance degrades under occlusion. In contrast, optimization-based methods fit a parametric body model to 2D features in an iterative manner. The localized reconstruction loss can potentially make them robust to occlusion, but they suffer from the 2D-3D ambiguity. Motivated by the recent success of generative models in rigid object pose estimation, we propose 3D-aware Neural Body Fitting (3DNBF) - an approximate analysis-by-synthesis approach to 3D human pose estimation with SOTA performance and occlusion robustness. In particular, we propose a generative model of deep features based on a volumetric human representation with Gaussian ellipsoidal kernels emitting 3D pose-dependent feature vectors. The neural features are trained with contrastive learning to become 3D-aware and hence to overcome the 2D-3D ambiguity. Experiments show that 3DNBF outperforms other approaches on both occluded and standard benchmarks. Code is available at https://github.com/edz-o/3DNBF
|
| 8 |
+
|
| 9 |
+
# 1. Introduction
|
| 10 |
+
|
| 11 |
+
Monocular 3D human pose estimation (HPE) is a long-standing problem in computer vision. Regression-based methods [12,24,49,53,69,71] directly regress the 3D pose parameters of a human body model, such as SMPL [40], and learn to overcome the inherent 2D-3D ambiguity of the prediction task from the training data. However, the performance of regression-based methods degrades when humans are partially occluded, as demonstrated by related work [26] and in our experiments (Figure 1 (c)). Optimization-based methods [4,27,51,77,79] fit a parametric body model to 2D representations, such as keypoint detections [24,28] or
|
| 12 |
+
|
| 13 |
+

|
| 14 |
+
Figure 1: 3D human pose estimation under occlusion. Performance of regression-based methods [22] degrades under occlusion (c). Traditional optimization-based methods can be robust to occlusion, but they suffer from the 2D-3D ambiguity in monocular 3D HPE (d). Our generative approach resolves the 2D-3D ambiguity through analysis-by-synthesis in a 3D-aware feature space (e).
|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
|
| 24 |
+
segmentation maps [49, 53, 81], in an iterative manner. They are relatively robust to occlusion but perform worse than regression-based methods in 3D HPE, particularly because they suffer from the 2D-3D ambiguity (Figure 1(d)) even when regularized with strong 3D priors [51], since the manually designed 2D features lack 3D information.
|
| 25 |
+
|
| 26 |
+
Recently, generative models have been shown to be successful, with improved robustness to occlusion, in object recognition [31] and rigid object pose estimation [73] for certain object categories. The idea is to formulate vision tasks as inverse graphics or analysis-by-synthesis [25, 80] - searching for the parameters in a generative model (e.g. computer graphics models) that best explain the observed image, while an outlier process can be introduced to explain occluded regions. However, performing analysis-by-synthesis in RGB pixel space is challenging due to the lack of both good generative models and efficient algorithms to invert them. Instead, these works perform approximate analysis-by-synthesis in deep feature space. However, the generative models used are 2D-based or simple cuboid-like 3D structures with features invariant to the 3D viewpoint, making
|
| 27 |
+
|
| 28 |
+
them less suitable for 3D HPE.
|
| 29 |
+
|
| 30 |
+
In this work, we propose a 3D-aware Neural Body Fitting (3DNBF) framework that enables feature-level analysis-by-synthesis for 3D HPE, which is highly robust to occlusion (Figure 1(e)). Specifically, we propose a novel generative model of deep network features for human body, named Neural Body Volumes (NBV). NBV is an explicit volume-based parametric body representation consisting of a set of Gaussian ellipsoidal kernels that emit feature vectors. Compared with the popular mesh representation, our volume representation is analytically differentiable, provides smooth gradients, i.e. is efficient to optimize, and rigorously handles self-occlusion [56]. We employ a factorized likelihood model for feature maps which is further made robust to partial occlusion by incorporating robust loss functions [18]. To overcome the 2D-3D ambiguity, we impose a distribution on the kernel features conditioned on pose parameters making them pose-dependent.
|
| 31 |
+
|
| 32 |
+
Unlike optimization-based methods that manually design the feature representation which may lose information, we learn the features from data. In particular, we introduce a contrastive learning framework [2,13,76] to learn features that are invariant to instance-specific details (such as color of the clothes), meanwhile encouraging them to capture local 3D pose information of the human body parts, i.e. being 3D-aware. The generative model is learned with the feature extractor network iteratively. For more efficient inference, we attach a regression head to the feature extractor to predict the pose and shape parameters from the feature directly. During inference, we initialize NBV with the prediction from our regressor head and optimize the human pose by maximizing the likelihood of the target feature map under the generative model using gradient-based optimization. We find this combined approach can resolve common errors of regression-based methods, such as when the pose of partially occluded parts is not estimated correctly (Figure 1).
|
| 33 |
+
|
| 34 |
+
We evaluate 3DNBF on three existing 3D HPE datasets: 3DPW [72], 3DPW-Occ [82] and 3DOH50K [82], and propose a more challenging adversarial evaluation protocol 3DPW-AdvOcc for occlusion robustness. Our experimental results show that 3DNBF outperforms state-of-the-art (SOTA) regression-based methods as well as optimization-based approaches by a large margin under occlusion while maintaining SOTA performance on non-occluded data. In summary, our main contributions are:
|
| 35 |
+
|
| 36 |
+
1. We propose 3DNBF - an approximate analysis-by-synthesis approach for 3D HPE at feature level with a volume-based neural generative model NBV for human body with pose-dependent kernel features.
|
| 37 |
+
2. We introduce a contrastive learning framework to train NBV with a feature extractor such that the feature activations capture the local 3D pose information of the
|
| 38 |
+
|
| 39 |
+
body parts, to resolve the 2D-3D ambiguity.
|
| 40 |
+
|
| 41 |
+
3. We demonstrate on four datasets that 3DNBF outperforms SOTA regression-based and optimization-based methods, particularly when under occlusion.
|
| 42 |
+
|
| 43 |
+
# 2. Related Work
|
| 44 |
+
|
| 45 |
+
Monocular 3D Human Pose Estimation. Existing approaches can be categorized into regression-based and optimization-based methods. Regression-based methods [12,24,49,53,69,71] directly estimate 3D human pose from an RGB image using a deep network. Different 3D human pose representations are adopted, such as 3D joint locations [41,59], 3D heatmaps [52,68,83] and parameters of a parametric human body [24,28,53]. Optimization-based methods [4,27,51,77,79] involve parametric human models like SMPL [1,40,51], and produce both the 3D human pose and human shape. A representative method is SMPLify [4], which fits the SMPL model to 2D keypoint detections with strong priors. Incorporating more information into the fitting procedure has been investigated, including silhouettes [33], multi-view input [17], and more expressive shape models [23]. [77] propose to fit 3D part affinity maps to overcome the 2D-3D ambiguity; this requires the network to learn accurate part orientations, which is difficult and has been shown to be less robust to occlusion. Hybrid methods [9, 66] perform iterative optimization using regressed descent directions.
|
| 46 |
+
|
| 47 |
+
Robustness to Occlusion. Regression-based methods are sensitive to occlusions, as studied by Kocabas et al. [26], who propose a part-segmentation-guided attention mechanism to handle occlusion. Data augmentation is another common way to enhance occlusion robustness, for example by cropping [3, 22, 58] or by putting patches into the image [11, 63]. Even with data augmentation, we show that they are still sensitive to occlusion by applying a more sophisticated sliding-window attack. Explicit occlusion handling in regression-based methods either infers occluded joints using representation redundancy [42, 43], which is only partially successful, or exploits visibility information during training [7, 38, 74, 82]. However, occlusion information other than self-occlusion is often unavailable in the wild and expensive to annotate, which limits the applicability of such methods. To model pose ambiguities for truncated human images, [3, 29, 64] predict multiple possible poses that have correct 2D projections. Another direction to handle occlusion leverages motion in sequences [6, 16]. Generative models have been shown to be robust to occlusion when parsing rigid objects [44, 73], and we further demonstrate this for articulated objects.
|
| 48 |
+
|
| 49 |
+
Human body representations. Parametric mesh models [4, 27, 51, 77, 79] are the most popular models for human pose/shape estimation; they generate intermediate representations like 2D keypoints, silhouettes and part
|
| 50 |
+
|
| 51 |
+
segmentations. However, these representations lose local information, e.g. shading, that is useful for inferring 3D from 2D. Recently, implicit volume representations have become increasingly popular [21,37,48,50,54,62,67,75,78] as they can achieve highly realistic human reconstruction. However, they are not suitable for our purpose, as training these models often requires multi-view images or videos and takes an extended time for a single person. We propose a body representation built on a volumetric 3D Gaussian representation [56,57], which gives more stable gradients than mesh-based differentiable rendering. Compared to popular implicit volume representations, our volume representation is explicit, with fewer parameters to learn, which leads to efficient inference.
|
| 52 |
+
|
| 53 |
+
Generative Models of Neural Textures. Prior works have shown the potential of combining 3D representations with neural texture maps, with applications to image synthesis of static scenes through neural rendering [46, 47, 70]. As inverting a generative model of RGB pixel values is challenging, a recent line of work introduced a neural analysis-by-synthesis approach to perform visual recognition tasks such as image classification [30-32] and 3D pose estimation [73] with largely enhanced robustness to partial occlusion compared to standard deep-network-based approaches. However, these prior works explicitly assume rigid objects and use simple 2D-based or cuboid-like generative models. [45] learns a continuous feature embedding function for each vertex on a 3D human body mesh. However, this representation is invariant to 3D pose and therefore loses information that is useful for estimating 3D from 2D. Our work generalizes the neural analysis-by-synthesis approach to 3D HPE, addressing the challenge of 2D-3D ambiguities and modeling articulated human bodies.
|
| 54 |
+
|
| 55 |
+
# 3. 3D-Aware Neural Body Fitting
|
| 56 |
+
|
| 57 |
+
In the following, we first explain a conceptual formulation of analysis-by-synthesis for 3D human pose estimation (Section 3.1) and propose our feature-level analysis-by-synthesis formulation in Section 3.2. Then we introduce the proposed generative Neural Body Volume model (Section 3.3), including the 3D-aware pose-dependent features (Section 3.4). Finally, we describe the training and inference process for the generative model in Sections 3.5 and 3.6.
|
| 58 |
+
|
| 59 |
+
# 3.1. HPE via Analysis-by-Synthesis
|
| 60 |
+
|
| 61 |
+
Given an input image $\pmb{I} \in \mathbb{R}^{H \times W \times 3}$ , we aim to estimate the 3D human pose parameters $\pmb{\theta}$ . Using Bayes rule we formulate the pose estimation task as a probabilistic inference problem given the observed image $\pmb{I}$ :
|
| 62 |
+
|
| 63 |
+
$$
\boldsymbol{\theta}^{*} = \underset{\boldsymbol{\theta}}{\operatorname{argmax}}\; p(\boldsymbol{\theta} \mid \boldsymbol{I}) = \underset{\boldsymbol{\theta}}{\operatorname{argmax}}\; p(\boldsymbol{I} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta}), \tag{1}
$$
|
| 66 |
+
|
| 67 |
+
where $p(\theta)$ is a prior distribution learned from data [4, 51], and $p(\mathbf{I}|\theta)$ is the likelihood. $p(\mathbf{I}|\theta)$ is typically defined using a generative forward model (involving 3D CAD models and a graphics engine), and the analysis-by-synthesis process is hence defined as finding the parameters $\theta^{*}$ that can best explain the input image. However, it is very challenging to reconstruct human images accurately which requires either multi-view images or video input [21, 37, 54, 67, 75].
|
| 68 |
+
|
| 69 |
+
Instead of performing analysis-by-synthesis in RGB space, we aim to reconstruct the human appearance at the feature-level of a neural network. Fig. 2 is an overview of our method. The feature representations will be learned to become invariant to image variations that is not relevant for the HPE task, such as clothing color or style, and hence will enable us to perform HPE accurately from a single image. In the following, we will first introduce the concept of feature-level analysis-by-synthesis and subsequently introduce a generative model of humans on the feature level.
|
| 70 |
+
|
| 71 |
+
# 3.2. Feature-Level Analysis-by-Synthesis
|
| 72 |
+
|
| 73 |
+
We denote a feature representation of an input image as $\zeta(I) = F \in \mathbb{R}^{H \times W \times D}$ which is the output of a deep convolutional neural network $\zeta$ . $f_{i} \in \mathbb{R}^{D}$ is a feature vector in $F$ at pixel $i$ on the feature map. We define a generative model of humans on the feature-level as $\mathcal{G}(\boldsymbol{\theta}) = \hat{\Phi} \in \mathbb{R}^{H \times W \times D}$ , which produces a feature map $\hat{\Phi}$ given the pose $\theta$ . We can now define the likelihood function of our Bayesian model (Eq. 1). To enable efficient learning and inference, we adopt a factorized likelihood model:
|
| 74 |
+
|
| 75 |
+
$$
p(\boldsymbol{F} \mid \mathcal{G}(\boldsymbol{\theta}), \mathcal{B}) = \prod_{i \in \mathcal{FG}} p(\boldsymbol{f}_{i} \mid \hat{\boldsymbol{\phi}}_{i}) \prod_{i' \in \mathcal{BG}} p(\boldsymbol{f}_{i'} \mid \mathcal{B}), \tag{2}
$$
|
| 78 |
+
|
| 79 |
+
where the foreground $\mathcal{F}\mathcal{G}$ is the set of all pixel locations on the feature map $\pmb{F}$ that are covered by the human. The background $\mathcal{B}\mathcal{G}$ contains those pixels respectively that are not covered. The foreground likelihood $p(\pmb {f}_i|\hat{\phi}_i)$ is defined as a Gaussian distribution $\mathcal{N}(\hat{\phi}_i,\sigma_i^2 I)$ with the mean vector $\hat{\phi}_i$ at location $i$ , and a standard deviation $\sigma_{i}$ . Background features are modeled using a simple background model $p(\pmb {f}_{i^{\prime}}|\mathcal{B})$ that is defined by a Gaussian distribution $\mathcal{N}(\pmb {b},\sigma^2\pmb {I})$ , where the parameters are $\mathcal{B} = \{\pmb {b},\sigma \}$ learned from the background features in the training images.
|
| 80 |
+
|
| 81 |
+
Occlusion robustness. Following related work on occlusion-robust analysis-by-synthesis [10], we define a robust likelihood as:
|
| 82 |
+
|
| 83 |
+
$$
p(\boldsymbol{F} \mid \mathcal{G}(\boldsymbol{\theta}), \mathcal{B}, \boldsymbol{Z}) = \prod_{i \in \mathcal{FG}} p\left(\boldsymbol{f}_{i} \mid \hat{\phi}_{i}, z_{i}\right) \prod_{i' \in \mathcal{BG}} p\left(\boldsymbol{f}_{i'} \mid \mathcal{B}\right) \tag{3}
$$

$$
p(\boldsymbol{f}_{i} \mid \hat{\phi}_{i}, z_{i}) = \left[ p(\boldsymbol{f}_{i} \mid \hat{\phi}_{i}) \right]^{z_{i}} \left[ p(\boldsymbol{f}_{i} \mid \mathcal{B}) \right]^{(1 - z_{i})},
$$
|
| 90 |
+
|
| 91 |
+
where $z_{i} \in \{0,1\}$ is a binary variable and we set its prior probabilities to be $p(z_{i} = 1) = p(z_{i} = 0) = 0.5$ . The variable $z_{i}$ allows the background model to explain those pixels in $\mathcal{F}\mathcal{G}$ that cannot be explained well by the foreground model,
|
| 92 |
+
|
| 93 |
+
presumably due to partial occlusion. To reduce clutter in the remaining paper we will omit the occlusion variable in the coming equations, but note that we are using a robust likelihood during inference.
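To make the factorized, occlusion-robust likelihood concrete, the sketch below evaluates the log-likelihood of a feature map under isotropic Gaussian foreground and background models, letting each foreground pixel be explained by whichever model fits better, a simplified stand-in for the binary variable $z_i$ with equal priors; all tensor shapes and the shared variances are illustrative assumptions.

```python
import torch

def robust_log_likelihood(F_obs, phi_hat, fg_mask, bg_mean, sigma_fg=1.0, sigma_bg=1.0):
    """F_obs, phi_hat: (H, W, D) observed and rendered features; fg_mask: (H, W) bool; bg_mean: (D,)."""
    d = F_obs.shape[-1]
    log_norm_fg = -0.5 * d * torch.log(torch.tensor(2 * torch.pi * sigma_fg ** 2))
    log_norm_bg = -0.5 * d * torch.log(torch.tensor(2 * torch.pi * sigma_bg ** 2))

    log_p_fg = log_norm_fg - ((F_obs - phi_hat) ** 2).sum(-1) / (2 * sigma_fg ** 2)
    log_p_bg = log_norm_bg - ((F_obs - bg_mean) ** 2).sum(-1) / (2 * sigma_bg ** 2)

    # Foreground pixels may instead be explained by the background (occluder) model.
    fg_term = torch.maximum(log_p_fg, log_p_bg)[fg_mask].sum()
    bg_term = log_p_bg[~fg_mask].sum()
    return fg_term + bg_term
```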
|
| 94 |
+
|
| 95 |
+
In the following section, we describe our feature-level generative model for human pose estimation.
|
| 96 |
+
|
| 97 |
+
# 3.3. Neural Body Volumes
|
| 98 |
+
|
| 99 |
+
At the core of our framework is the Neural Body Volumes (NBV) representation, a model that enables the rendering of human bodies on the feature-level (illustrated in Figure 2(b)). Traditional human body models are mostly mesh-based, e.g. SMPL [40]. However, while meshes are useful representations for forward-rendering applications in computer graphics, they are sub-optimal for differentiable inverse rendering, since the mesh rendering process is inherently difficult to differentiate w.r.t. the model parameters [39]. Prior art [56] showed that volume rendering has a smoother and analytical gradient, and leads to a more efficient optimization, and better handling self-occlusion compared to meshes. Inspired by these results, we propose Neural Body Volumes (NBV), a volume-based representation of human bodies for rendering human bodies on the feature-level. In NBV, a human body is represented by a three-dimensional volume that consists of $K$ Gaussian kernels placed on the body surface. The density at spatial location $\mathbf{X} \in \mathbb{R}^3$ is $\rho_k(\mathbf{X}) = \mathcal{N}(M_k, \Sigma_k)$ . $M_k(\theta, \beta) \in \mathbb{R}^3$ and $\Sigma_k(\theta, \beta) \in \mathbb{R}^{3 \times 3}$ are the mean vector and covariance matrix conditioned on human pose and shape, parameterized by $\theta$ and $\beta$ respectively, controlling the center and shape of the Gaussian kernel which we describe in detail in the following paragraph. The volume density is defined as $\rho(\mathbf{X}) = \sum_{k=1}^{K} \rho_k(\mathbf{X})$ . Each Gaussian kernel is associated with a feature vector $\phi_k \in \mathbb{R}^D$ which can be rendered to image space using volume rendering:
|
| 100 |
+
|
| 101 |
+
$$
\hat{\phi}(\boldsymbol{r}_{\Pi}) = \int_{t_{n}}^{t_{f}} T(t) \sum_{k=1}^{K} \rho_{k}(\boldsymbol{r}_{\Pi}(t))\, \phi_{k}\, \mathrm{d}t, \tag{4}
$$

$$
\text{where}\quad T(t) = \exp\left(- \int_{t_{n}}^{t} \rho(\boldsymbol{r}_{\Pi}(s))\, \mathrm{d}s\right),
$$
|
| 108 |
+
|
| 109 |
+
that computes the aggregated feature along the ray $\boldsymbol{r}_{\Pi}(t)$ $t \in [t_n, t_f]$ from the camera center through a pixel on the image plane where $\Pi$ denotes the camera parameters. The Gaussian kernel representation enables calculation of the analytic form of the integral $\hat{\phi}(\boldsymbol{r}) = \sum_{k=1}^{K} \alpha_k (\boldsymbol{r}_{\Pi}, \rho) \phi_k$ which we provide in the supplementary material. Here, the number of Gaussian kernels $K$ and the associated features $\Phi = \{\phi_k\}$ are global parameters shared across all human instances. While for each input image, we optimize the pose $\theta$ and shape $\beta$ to transform the location $M_k(\theta, \beta)$ and shape $\Sigma_k(\theta, \beta)$ of the Gaussian ellipsoids. Our model can fit arbitrary shapes with a sufficient number of kernels.
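The paper uses the analytic form of this integral (given in its supplementary material); as an illustrative numerical counterpart, Eq. 4 can be approximated by quadrature along the ray over unnormalized Gaussian kernel densities. Everything below is a sketch with assumed shapes, not the authors' implementation.

```python
import torch

def render_ray_feature(means, covs_inv, feats, origin, direction,
                       t_near=0.1, t_far=5.0, n_samples=128):
    """Approximate Eq. 4 by quadrature along one ray.

    means: (K, 3) kernel centers; covs_inv: (K, 3, 3) inverse covariances;
    feats: (K, D) kernel features; origin, direction: (3,) ray parameters.
    """
    ts = torch.linspace(t_near, t_far, n_samples)
    pts = origin + ts[:, None] * direction                      # (S, 3) sample points r(t)
    diff = pts[:, None, :] - means[None, :, :]                  # (S, K, 3)
    mahal = torch.einsum("ski,kij,skj->sk", diff, covs_inv, diff)
    dens_k = torch.exp(-0.5 * mahal)                            # per-kernel densities rho_k (unnormalized)
    dens = dens_k.sum(-1)                                       # total density rho

    dt = ts[1] - ts[0]
    transmittance = torch.exp(-torch.cumsum(dens * dt, dim=0))  # T(t)
    alpha_k = (transmittance[:, None] * dens_k * dt).sum(0)     # per-kernel contribution alpha_k
    return alpha_k @ feats                                      # aggregated feature, shape (D,)
```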
|
| 110 |
+
|
| 111 |
+
Conditioning on pose and shape. Given a set of body joints $J \in \mathbb{R}^{N \times 3}$ , the pose is defined as their corresponding rotation matrices $\Omega \in \mathbb{R}^{N \times 3 \times 3}$ relative to the tem-
|
| 112 |
+
|
| 113 |
+
plate joints $\bar{J}$ in a skeleton tree. We model body articulation using linear blend skinning (LBS) [34] which transforms the center of the Gaussian kernels with transformation linearly blending the accumulated rigid transformations $G(J,\Omega)\in \mathbb{R}^{N\times 4\times 4}$ of the $N$ body joints (including the root transformation). And we model body shape variations by displacing the kernels with linear combinations of a set of $L$ basis shape displacements $S\in \mathbb{R}^{L\times K\times 3}$ :
|
| 114 |
+
|
| 115 |
+
$$
\boldsymbol{M}_{k} = \sum_{i=1}^{N} w_{k,i}\, \boldsymbol{G}_{i} \left[ \bar{\boldsymbol{M}}_{k} + \sum_{l=1}^{L} \beta_{l} \boldsymbol{S}_{lk} \,\middle|\, \mathbf{1} \right], \tag{5}
$$
|
| 118 |
+
|
| 119 |
+
where $\bar{M}_k$ denotes the kernel position in rest pose, and $\sum_{i=1}^{N} w_{k,i} = 1$ and $\sum_{l=1}^{L} \beta_l = 1$ are pose and shape blend weights. $[\cdot|\mathbf{1}]$ denotes the homogeneous coordinates. For the spatial covariance, we also perform transformation and blending according to the rotation of the joints:
|
| 120 |
+
|
| 121 |
+
$$
\boldsymbol{\Sigma}_{k}^{-1} = \sum_{i=1}^{N} w_{k,i}\, \boldsymbol{R}_{i}^{T} \bar{\boldsymbol{\Sigma}}_{k}^{-1} \boldsymbol{R}_{i}, \qquad \boldsymbol{R}_{i} = \prod_{j \in A(i)} \boldsymbol{\Omega}_{j}, \tag{6}
$$
|
| 124 |
+
|
| 125 |
+
where $\bar{\Sigma}_k$ is the covariance matrix in the rest pose, and $A(i)$ is the ordered set of ancestors of joint $i$. This takes into account that the orientation of the Gaussian ellipsoid should rotate with the pose.
|
| 126 |
+
|
| 127 |
+
The template joints location $\bar{J}$ can also deform according to shape. Specifically, we regress the template joint locations from the locations of the deformed Gaussian kernels $\bar{J} = g(\bar{M} +\sum_{l = 1}^{L}\beta_{l}S_{l})$ . The common choice for such regressor $g:\mathbb{R}^{K\times 3}\to \mathbb{R}^{N\times 3}$ is a linear function [40, 51].
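A compact sketch of the linear blend skinning in Eq. 5 applied to the kernel centers; `G` is assumed to come from forward kinematics of the $N$ joints, and all shapes are placeholders rather than the authors' data layout.

```python
import torch

def lbs_kernel_centers(M_bar, S, beta, W, G):
    """Eq. 5: blend rigid joint transforms to pose the kernel centers.

    M_bar: (K, 3) rest-pose centers; S: (L, K, 3) shape basis; beta: (L,) shape coefficients;
    W: (K, N) blend weights (rows sum to 1); G: (N, 4, 4) accumulated joint transforms.
    """
    shaped = M_bar + torch.einsum("l,lkc->kc", beta, S)                 # add shape displacement
    homo = torch.cat([shaped, torch.ones(shaped.shape[0], 1)], dim=1)   # homogeneous coordinates (K, 4)
    blended_G = torch.einsum("kn,nij->kij", W, G)                       # per-kernel blended transform
    posed = torch.einsum("kij,kj->ki", blended_G, homo)                 # (K, 4)
    return posed[:, :3]
```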
|
| 128 |
+
|
| 129 |
+
In summary, the proposed Neural Body Volume representation enables us to render human bodies on the feature-level using volume rendering process such that for each pixel in the feature map there will be a feature vector $\hat{\phi}$ corresponding to the contribution from all Gaussian kernels.
|
| 130 |
+
|
| 131 |
+
# 3.4. A Generative Model of 3D-Aware Features
|
| 132 |
+
|
| 133 |
+
Related work on feature-level inverse rendering for rigid pose estimation [20, 73] trains the feature extractor $\zeta$ such that the features become invariant to changes in the 3D pose. However, for human pose estimation, it is fundamentally important for the feature representation to be 3D-aware, in order to resolve the inherent 2D-3D ambiguity (as shown in Fig. 1). To resolve this problem, we aim to learn pose-dependent feature representations that is able to better resolve the 2D-3D ambiguity of human poses.
|
| 134 |
+
|
| 135 |
+
3D pose-dependent features for NBV. To overcome the 2D-3D ambiguity, we make the generative model 3D-aware. In particular, we impose a distribution on the kernel features $\Phi$ conditioned on the human pose and shape as shown in Fig. 2(c). Therefore, the rendered kernel features explicitly carry 3D pose information. Specifically, we define a set of body limbs $\{(J_i,J_j)|(i,j)\in \mathcal{L}\}$ each defined as an ordered
|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
Figure 2: Overview of our system. (a) We perform feature-level analysis-by-synthesis for 3D human pose estimation by fitting a 3D-aware generative model of deep feature (NBV) to the feature map $\pmb{F}$ extracted by a U-Net. (b) NBV is defined as a volume representation of human body $\rho$ , driven by pose and shape parameters $\{\theta, \beta\}$ , which consists of a set of Gaussian kernels each emitting a pose-dependent feature $\phi$ . Volume rendering is used to render NBV to a feature map $\hat{\Phi}$ . The foreground feature likelihood is defined as a Gaussian distribution centered at the rendered feature vector while the background feature likelihood is modeled by a background model. Pose estimation is done by optimizing the negative log-likelihood (NLL) loss of $\pmb{F}$ w.r.t. $\{\theta, \beta\}$ and camera II. (c) the distribution of the kernel feature is conditioned on the orientation of the limb that the kernel belongs to.
|
| 139 |
+
|
| 140 |
+

|
| 141 |
+
|
| 142 |
+
tuple connecting two body joints. The orientation of a limb is defined as $l = (J_{j} - J_{i}) / \| J_{j} - J_{i}\|$ . We first learn to assign each Gaussian kernel in NBV to one limb according to the pose blend weights. Then we associate each kernel with multiple features $\{\phi_o\}$ that correspond to a set of predefined limb orientations $\{\hat{l}_o\in \mathbb{R}^3\}_{o = 1}^O,\| \hat{l}_o\| = 1$ . The distribution for the feature vector of the Gaussian kernel $k$ is then defined as:
|
| 143 |
+
|
| 144 |
+
$$
p\left(\phi_{k} = \phi_{ko} \mid \boldsymbol{l}_{k}(\boldsymbol{\theta}, \boldsymbol{\beta})\right) = \frac{p\left(\boldsymbol{l}_{k} \mid \hat{\boldsymbol{l}}_{o}\right)}{\sum_{o=1}^{O} p\left(\boldsymbol{l}_{k} \mid \hat{\boldsymbol{l}}_{o}\right)}, \tag{7}
$$
|
| 147 |
+
|
| 148 |
+
where $l_{k}(\theta, \beta) \in \mathbb{R}^{3}$ is the orientation of the limb that Gaussian kernel $k$ belongs to. $p(l_{k}|\hat{l}_{o})$ is the von Mises-Fisher distribution $\mathrm{vMF}(l_k|\hat{l}_o,\kappa_o)$ . In the simple case of only one kernel $k$ , the likelihood of feature at foreground pixel $i$ becomes a Gaussian Mixture Model (GMM):
|
| 149 |
+
|
| 150 |
+
$$
p\left(\boldsymbol{f}_{i} \mid \hat{\phi}_{i}\right) = \sum_{o=1}^{O} p\left(\phi_{k} = \phi_{ko} \mid \boldsymbol{l}_{k}\right) \mathcal{N}\left(\hat{\phi}_{ko}, \sigma_{io}^{2} \boldsymbol{I}\right), \tag{8}
$$
|
| 153 |
+
|
| 154 |
+
where $\hat{\phi}_{ko}$ is the feature rendered from $\phi_{ko}$ . Intuitively, the rendered feature has different distributions under different 3D limb orientations. Therefore, we can unambiguously infer the 3D pose from the observed features. During inference, we use the expectation of the kernel feature $\mathbb{E}(\phi_k|\pmb {\theta},\pmb {\beta})$ for volume rendering for differentiability.
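The pose-dependent kernel feature of Eqs. 7-8, and the expectation used during volume rendering, reduce to a softmax-like weighting because the vMF normalizing constant cancels; the sketch below assumes a single concentration $\kappa$ shared by all orientation bins, which is an illustrative simplification.

```python
import torch

def expected_kernel_feature(limb_dir, anchor_dirs, kernel_feats, kappa=20.0):
    """limb_dir: (3,) current limb orientation (unit norm);
    anchor_dirs: (O, 3) predefined unit orientations; kernel_feats: (O, D) features phi_ko."""
    logits = kappa * (anchor_dirs @ limb_dir)      # vMF log-likelihoods up to a constant
    weights = torch.softmax(logits, dim=0)         # Eq. 7: p(phi_k = phi_ko | l_k)
    return weights @ kernel_feats                  # E[phi_k | theta, beta], used for rendering
```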
|
| 155 |
+
|
| 156 |
+
# 3.5. Training
|
| 157 |
+
|
| 158 |
+
Given a set of images $\{I_n\}_{n=1}^N$ , with ground truth 3D keypoints $\{\hat{J}_n\}_{n=1}^N$ and shape $\{\mathcal{V}_n\}_{n=1}^N$ , we need to learn a set of parameters in NBV: the template Gaussian kernels and the associated features $\{\bar{M}, \bar{\Sigma}, \Phi\}$ , the template joints $\bar{J}$ , the blend weights $W$ , the basis shape displacements $S$ and the joint regressor $g$ . We also need to train the UNet feature extractor $\zeta$ . We train our model in separate steps by first learning the pose/shape-related parameters $\{\bar{M}, \bar{\Sigma}, \bar{J}, W, S, g\}$ then the kernel features $\Phi$ and $\zeta$ .
|
| 159 |
+
|
| 160 |
+
Learning pose and shape parameters in NBV. Starting from a downsampled version of a template body mesh model created by artists, we initialize the kernel centers $\bar{M}$ with the locations of the vertices and compute the spatial covariance matrices $\bar{\Sigma}$ based on the distance of the vertices to their neighbors with the desired amount of overlap. Following [40], a manual segmentation of the template mesh is leveraged to obtain the initial template joints $\bar{J}$ , the linear joint regressor $g$ , and the blend weights $\mathbf{W}$ . Then we train all pose-related parameters $\{\bar{M},\bar{J},\bar{W},g\}$ together with instance-specific pose $\theta$ by minimizing the reconstruction error between the Gaussian kernels and the ground truth shape $\mathcal{V}$ . After that, the ground truth shapes are transformed back to the rest pose and the shape basis $\mathbf{S}$ is obtained by running PCA on these pose-normalized shapes. We refer the readers to [40] for details as we share
|
| 161 |
+
|
| 162 |
+
a similar training process for this part. Another regressor $\hat{g}$ is trained to regress the ground truth keypoints from kernel centers $M$ . In practice, we can directly convert a trained SMPL model [40] to NBV by placing the Gaussian kernels at the vertices on the SMPL mesh.
|
| 163 |
+
|
| 164 |
+
After training the pose/shape-related parameters, we register our NBV to the train set to obtain the ground truth shape and pose for each training sample $\{\pmb{\theta}_n\}, \{\beta_n\}$ . Then, we learn the NBV kernel features and a UNet feature extractor $\zeta$ jointly in an iterative manner.
|
| 165 |
+
|
| 166 |
+
MLE learning of NBV kernel features. If $\zeta$ is trained, we can learn the kernel features $\Phi$ through maximum likelihood estimation (MLE) by minimizing the following negative log-likelihood of the feature representations over the whole training set,
|
| 167 |
+
|
| 168 |
+
$$
\mathcal{L}_{\mathrm{NLL}}(\hat{\boldsymbol{F}}, \boldsymbol{\Phi}) = - \sum_{i \in \mathcal{FG}} \log p\left(\hat{\boldsymbol{f}}_{i} \mid \hat{\phi}_{i}\right), \tag{9}
$$
|
| 171 |
+
|
| 172 |
+
where for training efficiency, we use an approximate solution to avoid matrix inversion $\phi_{ko} = \frac{\sum_{i\in\mathcal{K}}\gamma_{iko}\hat{\pmb{f}}_i}{\sum_{i\in\mathcal{K}}\gamma_{iko}}$ where $\mathcal{K}$ is the set of pixels in the training data that the kernel feature $\phi_{ko}$ contributes to, and $\gamma_{iko}$ is the contribution of $\phi_{ko}$ to pixel $i$ which is obtained from the volume rendering process. Similarly, the parameters of the background distribution are learned using MLE on the features that are not covered by the projected NBV model in the training data. To reduce the computational cost, we follow [73] and employ a momentum strategy [14] to update $\Phi$ in a moving average manner.
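A sketch of this moving-average update: each $\phi_{ko}$ is pulled toward the contribution-weighted mean of the features it explains in the current batch. The momentum value and tensor shapes here are assumptions for illustration.

```python
import torch

@torch.no_grad()
def momentum_update_kernel_features(phi, feats, gamma, momentum=0.9):
    """phi: (K, O, D) kernel features; feats: (P, D) observed foreground features;
    gamma: (P, K, O) contribution of each phi_ko to each pixel (from volume rendering)."""
    weighted = torch.einsum("pko,pd->kod", gamma, feats)      # sum_i gamma_iko * f_i
    norm = gamma.sum(0).clamp_min(1e-8).unsqueeze(-1)         # sum_i gamma_iko
    target = weighted / norm                                  # approximate MLE solution for phi_ko
    phi.mul_(momentum).add_(target, alpha=1.0 - momentum)     # moving-average update
    return phi
```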
|
| 173 |
+
|
| 174 |
+
3D-aware contrastive learning of the UNet feature extractor. Given the generative model, we can train the UNet feature extractor with the NLL loss as defined in Equation 9 w.r.t. the network parameters. In addition, we want the extracted feature map to have the property that the rendered feature from NBV in the ground truth pose has the largest probability. To this end, we incorporate a set of contrastive losses:
|
| 175 |
+
|
| 176 |
+
$$
|
| 177 |
+
\mathcal {L} _ {\mathrm {F G}} (\boldsymbol {F}, \mathcal {F} \mathcal {G}) = - \sum_ {i \in \mathcal {F} \mathcal {G}} \sum_ {i ^ {\prime} \in \mathcal {F} \mathcal {G} \backslash \{i \}} \| \boldsymbol {f} _ {i} - \boldsymbol {f} _ {i ^ {\prime}} \| ^ {2} \tag {10}
|
| 178 |
+
$$
|
| 179 |
+
|
| 180 |
+
$$
|
| 181 |
+
\mathcal {L} _ {\mathrm {3 D}} (\boldsymbol {F}) = - \sum_ {k} \sum_ {o} \sum_ {o ^ {\prime} \in \mathcal {O} \backslash \{o \}} \sum_ {i \in \mathcal {K} _ {k o}} \sum_ {j \in \mathcal {K} _ {k o ^ {\prime}}} \| \boldsymbol {f} _ {i k o} - \boldsymbol {f} _ {j k o ^ {\prime}} \| ^ {2} \tag {11}
|
| 182 |
+
$$
|
| 183 |
+
|
| 184 |
+
$$
|
| 185 |
+
\mathcal{L}_{\mathrm{BG}}(\boldsymbol{F}, \mathcal{FG}, \mathcal{BG}) = - \sum_{i \in \mathcal{FG}} \sum_{j \in \mathcal{BG}} \| \boldsymbol{f}_{i} - \boldsymbol{f}_{j} \|^{2} \tag{12}
|
| 186 |
+
$$
|
| 187 |
+
|
| 188 |
+
where $\mathcal{L}_{\mathrm{FG}}$ encourages features of different pixels to be distinct from each other, $\mathcal{L}_{3\mathrm{D}}$ encourages features of the same kernel in different 3D poses to be distinct from each other, i.e., to become 3D-aware, and $\mathcal{L}_{\mathrm{BG}}$ encourages features on the human to be distinct from those in the background. We optimize these losses jointly, $\mathcal{L}_{\mathrm{contrast}} = \mathcal{L}_{\mathrm{FG}} + \mathcal{L}_{3\mathrm{D}} + \mathcal{L}_{\mathrm{BG}}$, in a contrastive learning framework. Therefore, the total loss for training $\zeta$ is $\mathcal{L}_{\mathrm{train}} = \mathcal{L}_{\mathrm{NLL}} + \mathcal{L}_{\mathrm{contrast}}$.
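A minimal sketch of how the three contrastive terms could be computed is shown below; the per-pixel kernel and orientation indices are assumed to be available from the volume rendering pass, and all names are illustrative:

```python
import torch

def contrastive_losses(f_fg, f_bg, kernel_id, orient_id):
    """Minimal sketch of the contrastive losses in Eqs. (10)-(12).

    f_fg:      (Nf, C) features of foreground pixels.
    f_bg:      (Nb, C) features of background pixels.
    kernel_id: (Nf,)   index k of the kernel rendered at each foreground pixel.
    orient_id: (Nf,)   orientation bin o of that kernel (hypothetical
               bookkeeping taken from the volume-rendering pass).
    """
    d_fg = torch.cdist(f_fg, f_fg).pow(2)            # pairwise squared distances
    l_fg = -d_fg.sum()                               # Eq. (10)
    # Eq. (11): same kernel k, different orientation bins o vs. o'.
    mask = (kernel_id[:, None] == kernel_id[None, :]) & (
        orient_id[:, None] != orient_id[None, :]
    )
    l_3d = -(d_fg * mask.float()).sum()
    l_bg = -torch.cdist(f_fg, f_bg).pow(2).sum()     # Eq. (12)
    return l_fg + l_3d + l_bg                        # L_contrast
```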
|
| 189 |
+
|
| 190 |
+
Bottom-up initialization with regression heads. For efficient inference, it is a common practice in generative modeling to initialize with regression-based methods. In our model, we add a regression head to the UNet feature extractor to predict the pose and shape parameters $\{\theta, \beta\}$ from the observed feature map. The regression head and the UNet are learned jointly.
|
| 191 |
+
|
| 192 |
+
# 3.6. Inference
|
| 193 |
+
|
| 194 |
+
We estimate the 3D human pose $\theta$, shape parameters $\beta$, and the camera parameters $\Pi$ using the analysis-by-synthesis formulation in Equation 1. This boils down to minimizing an NLL loss plus a regularization term from the pose prior $p(\theta)$ w.r.t. $\{\theta, \beta, \Pi\}$. The initialization comes from the regression head. Our proposed generative model is fully differentiable and can therefore be optimized using gradient-based methods.
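For illustration, the fitting loop could be sketched as follows, assuming `nll_fn` renders the NBV feature map and evaluates the likelihood of Equation 9 and `pose_prior` returns the negative log-probability of the pose (e.g. under VPoser); both are placeholders:

```python
import torch

def fit(nll_fn, pose_prior, theta0, beta0, cam0, steps=80, lr=0.02, w_prior=1.0):
    """Minimal sketch of the analysis-by-synthesis inference (Sec. 3.6)."""
    theta = theta0.clone().requires_grad_(True)   # initialized by the regression head
    beta = beta0.clone().requires_grad_(True)
    cam = cam0.clone().requires_grad_(True)
    opt = torch.optim.Adam([theta, beta, cam], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nll_fn(theta, beta, cam) + w_prior * pose_prior(theta)
        loss.backward()
        opt.step()
    return theta.detach(), beta.detach(), cam.detach()
```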
|
| 195 |
+
|
| 196 |
+
# 3.7. Implementation Details
|
| 197 |
+
|
| 198 |
+
We convert the neutral SMPL model to NBV using the method described in Sec. 3.5, keeping 858 kernels. We use a U-Net [60] style network as the feature extractor, which consists of a ResNet-50 [15] backbone and 3 upsampling blocks. The regression head follows the design of [26]. The input image is a $320 \times 320$ crop centered around the human. The feature map has a $4 \times$ downsampled resolution and the feature dimension is 64, which balances performance and computation cost as shown in the ablation in Sec. 4.3. The Adam optimizer with a learning rate of $5 \times 10^{-5}$ and a batch size of 64 is used for training the feature extractor and the regression head. Standard data augmentation techniques are used, including random flipping, scaling, and rotation. For the 3D pose-dependent features, we consider the limb orientation projected to the yz-plane and split the unit circle evenly. We set $O = 4$ for all kernels, which already gives good results as shown in the ablation study in Sec. 4.3. We consider 9 limbs: the left/right upper/lower arms, the left/right upper/lower legs, and the torso. The torso includes the head and its orientation is defined as the direction from the mid-hip joint to the neck joint. For inference, we also use Adam as the optimizer with a learning rate of 0.02 and run a maximum of 80 steps. We use VPoser [51] as our 3D pose prior. We check the negative log-likelihood $\mathcal{L}_{\mathrm{NLL}}$ of the initial pose and of its $180^{\circ}$-rotated version around the y-axis and use the better one to initialize our model. Inference speed is $\sim 1.7$ fps with a batch size of 32 on 4 NVIDIA Titan Xp GPUs.
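As an illustration of the pose-dependent feature selection, the orientation bin of a limb could be computed as in the following sketch (the helper name and conventions are assumptions, not the exact implementation):

```python
import numpy as np

def orientation_bin(limb_dir, num_bins=4):
    """Assign a pose-dependent feature index to a limb (cf. Sec. 3.7).

    limb_dir: (3,) unit vector from the parent joint to the child joint of
    the limb a kernel belongs to (e.g. mid-hip to neck for the torso). The
    vector is projected onto the yz-plane and the unit circle is split into
    `num_bins` equal sectors; the returned index selects phi_{ko}.
    """
    angle = np.arctan2(limb_dir[2], limb_dir[1])       # angle in the yz-plane
    angle = np.mod(angle, 2.0 * np.pi)                  # map to [0, 2*pi)
    return int(angle // (2.0 * np.pi / num_bins))       # bin index o
```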
|
| 199 |
+
|
| 200 |
+
# 4. Experiments
|
| 201 |
+
|
| 202 |
+
In this section, we demonstrate the effectiveness and robustness of 3DNBF by comparing it with SOTA HPE methods. In addition to existing benchmarks, we propose a more challenging adversarial evaluation for occlusion robustness. Finally,
|
| 203 |
+
|
| 204 |
+
we conduct ablation studies to verify the design choices and effectiveness of different components.
|
| 205 |
+
|
| 206 |
+
# 4.1. Training Setup and Datasets
|
| 207 |
+
|
| 208 |
+
Training. Following the common setting, we train NBV on the Human3.6M [19], MPI-INF-3DHP [41], and COCO [36] datasets. We use ground truth SMPL fittings for Human3.6M and MPI-INF-3DHP [24, 28] and the pseudo-ground truth fittings from EFT [22] for COCO, following [26]. The selection of subjects for training strictly follows previous work [22, 26, 28]. We first train the feature extractor on COCO for 175K iterations, then fine-tune on all data for another 175K iterations. During fine-tuning, the sampling ratio in each batch is $50\%$ Human3.6M, $20\%$ MPI-INF-3DHP, and $30\%$ COCO. Note that for all baseline methods, we use the official models trained with the same data as ours for fairness.
|
| 209 |
+
|
| 210 |
+
Occlusion Robustness Evaluation. We conduct evaluations on two datasets to measure the robustness and generalization of our method: an in-the-wild dataset, 3DPW-Occ [82], which is a subset of the original 3DPW [72] dataset, and an artificial indoor occlusion dataset, 3DOH50K [82]. In particular, we directly test all models on these datasets without any training on them. For 3DPW and 3DPW-Occ, we sample the videos every 30 frames. We report the mean per joint position error (MPJPE) and the Procrustes-aligned mean per joint position error (PA-MPJPE) in mm as the main evaluation metrics. We also report the 2D Percentage of Correct Keypoints with head-length threshold (PCKh) to measure how well the prediction aligns with the 2D image.
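For reference, the two 3D metrics can be computed as in the following sketch (a standard similarity-Procrustes alignment; array shapes are assumptions):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per joint position error; pred and gt are (J, 3) joint arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: similarity-align pred to gt before measuring."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    u, s, vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(u @ vt) < 0:   # avoid an improper rotation (reflection)
        vt[-1] *= -1
        s[-1] *= -1
    rot = u @ vt
    scale = s.sum() / (p ** 2).sum()
    return mpjpe(scale * p @ rot + mu_g, gt)
```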
|
| 211 |
+
|
| 212 |
+
Adversarial occlusion robustness evaluation. Inspired by the occlusion analysis in [26], we design an adversarial protocol, 3DPW-AdvOcc, to further evaluate the occlusion robustness of SOTA methods. Specifically, we slide an occlusion patch over the input image to find the worst prediction, measured by the relative performance degradation on the visible joints. We argue that evaluating the performance on occluded joints is sometimes ambiguous, since the location of occluded joints is not always predictable even for humans. Therefore, for a more stable and meaningful evaluation, joints outside the bounding box or occluded by the patch are excluded from the evaluation. Instead of a gray occlusion patch, we use textured patches generated by randomly cropping texture maps from the Describable Textures Dataset (DTD) [8], which is more challenging. Two square patch sizes are used, 40 and 80 relative to a $224 \times 224$ image, denoted as Occ@40 and Occ@80 respectively, and the stride is set to 10.
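A rough sketch of the protocol is given below; `model` and the joint-visibility helper are placeholders for the evaluated network and the bookkeeping that excludes out-of-box or patch-occluded joints:

```python
import numpy as np

def adv_occlusion_eval(model, image, gt_joints, patch, stride=10):
    """Slide a textured patch over the image and keep the worst prediction.

    `model` returns (J, 3) joints for an image; `joints_outside_patch` is a
    hypothetical helper returning the indices of joints that remain visible
    (inside the bounding box and not covered by the patch).
    """
    h, w = image.shape[:2]
    ph, pw = patch.shape[:2]
    worst = -np.inf
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            occluded = image.copy()
            occluded[y:y + ph, x:x + pw] = patch
            pred = model(occluded)
            visible = joints_outside_patch(gt_joints, (y, x, ph, pw))  # hypothetical
            err = np.linalg.norm(pred[visible] - gt_joints[visible], axis=-1).mean()
            worst = max(worst, err)
    return worst
```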
|
| 213 |
+
|
| 214 |
+
# 4.2. Performance Evaluation
|
| 215 |
+
|
| 216 |
+
Baselines. To demonstrate the superior performance and occlusion robustness of 3DNBF, we compare our model with four SOTA regression-based methods: SPIN [28], HMR-EFT [22], Mesh Graphormer [35], and PARE [26], where PARE is designed to be robust to occlusions with part attention and is trained with synthetic occlusion augmentation. For fair comparisons, we adopt the models with the same ResNet-50 backbone for all methods. We also compare 3DNBF with SOTA optimization-based methods that also improve occlusion robustness: SMPLify [4], 3DPOF [77], and EFT [22].
|
| 217 |
+
|
| 218 |
+
Comparison to SOTA. As shown in Table 1, we first evaluate on the standard 3DPW test set, where 3DNBF achieves SOTA performance. On the occlusion datasets 3DPW-Occ and 3DOH50K, our improvement becomes more significant. We then evaluate the occlusion robustness on 3DPW-AdvOcc, where we find that all regression-based methods suffer from occlusion, with the MPJPE increasing by up to $225\%$ and the PA-MPJPE by up to $127\%$ even for the best-performing method. The transformer-based model [35] suffers the most from occlusion, which we speculate to be due to overfitting. In contrast, 3DNBF is much more robust to occlusion, improving over the SOTA methods by a wide margin. Note that our predictions also align better with the image, as shown by the PCKh.
|
| 219 |
+
|
| 220 |
+
Comparison to other optimization-based methods. We compare 3DNBF with three optimization-based methods on 3DPW and 3DPW-AdvOcc. We choose HMR-EFT as the initial regressor so that we can use the official EFT implementation. All methods use the same 2D keypoints detected by OpenPose [5]. As shown in Table 2, we achieve the best performance in both the non-occluded and occluded settings. Although SMPLify improves 2D PCKh, it hardly improves the 3D metrics. This is because SMPLify only fits SMPL parameters to 2D keypoints without capturing any 3D information from the image, and thus suffers from the 2D-3D ambiguity. EFT fine-tunes the regression network using a 2D keypoint reprojection loss. It achieves better performance than SMPLify because the regression network itself can implicitly encode 3D information in the input image and serve as a conditional 3D pose prior. However, EFT does not improve on non-occluded cases.
|
| 221 |
+
|
| 222 |
+
Qualitative results. We qualitatively demonstrate the improved occlusion robustness of 3DNBF compared to the SOTA regression-based method PARE in Fig. 3 and show more comparisons in the supplementary material. Notice that PARE makes unaligned predictions for visible joints.
|
| 223 |
+
|
| 224 |
+
# 4.3. Ablation Studies
|
| 225 |
+
|
| 226 |
+
In this section, we provide ablations of different components in 3DNBF, including the NBV, the design of the pose-dependent features, and the 3D-aware contrastive loss. All experiments are on 3DPW-AdvOcc@80.
|
| 227 |
+
|
| 228 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">3DPW [72]</td><td colspan="3">3DPW-Occ [82]</td><td colspan="3">3DPW-AdvOcc@40</td><td colspan="3">3DPW-AdvOcc@80</td><td colspan="3">3DOH50K [82]</td></tr><tr><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td></tr><tr><td>SPIN [28]</td><td>96.6</td><td>58.3</td><td>91.7</td><td>97.5</td><td>60.8</td><td>85.9</td><td>203.5</td><td>97.0</td><td>63.6</td><td>338.8</td><td>111.7</td><td>34.1</td><td>101.3</td><td>67.9</td><td>83.3</td></tr><tr><td>HMR-EFT [22]</td><td>89.5</td><td>53.4</td><td>93.1</td><td>95.8</td><td>57.1</td><td>87.2</td><td>146.7</td><td>73.2</td><td>77.8</td><td>202.8</td><td>83.7</td><td>63.7</td><td>97.4</td><td>65.8</td><td>84.4</td></tr><tr><td>MGraphr [35]</td><td>80.4</td><td>53.4</td><td>88.7</td><td>116.8</td><td>75.7</td><td>66.6</td><td>158.8</td><td>93.2</td><td>70.8</td><td>261.5</td><td>121.0</td><td>48.8</td><td>127.4</td><td>76.0</td><td>79.8</td></tr><tr><td>PARE [26]</td><td>81.4</td><td>50.9</td><td>92.5</td><td>86.8</td><td>58.8</td><td>86.2</td><td>126.5</td><td>72.5</td><td>82.3</td><td>210.9</td><td>97.4</td><td>61.9</td><td>100.7</td><td>65.1</td><td>84.2</td></tr><tr><td>3DNBF</td><td>79.8</td><td>49.3</td><td>95.7</td><td>77.2</td><td>51.2</td><td>93.1</td><td>105.1</td><td>60.5</td><td>92.0</td><td>140.7</td><td>71.8</td><td>85.0</td><td>86.7</td><td>57.5</td><td>88.6</td></tr></table>
|
| 229 |
+
|
| 230 |
+
Table 1: Evaluation on 3DPW, 3DPW-Occ, 3DPW-AdvOcc, and 3DOH50K. The numbers 40 and 80 after 3DPW-AdvOcc denote the occluder size. Note that the performance improvement of 3DNBF increases as occlusion becomes more severe. (P-MPJPE: PA-MPJPE; MGraphr: Mesh Graphormer.)
|
| 231 |
+
|
| 232 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">3DPW [72]</td><td colspan="3">3DPW-AdvOcc@40</td><td colspan="3">3DPW-AdvOcc@80</td></tr><tr><td>MPJPE↓</td><td>PA-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>PA-MPJPE↓</td><td>PCKh↑</td><td>MPJPE↓</td><td>PA-MPJPE↓</td><td>PCKh↑</td></tr><tr><td>HMR-EFT [22]</td><td>89.5</td><td>53.4</td><td>93.1</td><td>146.7</td><td>73.2</td><td>77.8</td><td>202.8</td><td>83.7</td><td>63.7</td></tr><tr><td>+ SMPLify [4]</td><td>106.2</td><td>64.8</td><td>91.2</td><td>133.4</td><td>75.8</td><td>85.6</td><td>192.2</td><td>89.3</td><td>73.4</td></tr><tr><td>+ 3DPOF [77]</td><td>97.6</td><td>60.8</td><td>90.6</td><td>125.1</td><td>69.6</td><td>84.0</td><td>175.0</td><td>78.8</td><td>73.0</td></tr><tr><td>+ EFT [22]</td><td>92.8</td><td>55.9</td><td>93.0</td><td>114.1</td><td>64.6</td><td>88.7</td><td>158.2</td><td>75.2</td><td>78.5</td></tr><tr><td>+ 3DNBF</td><td>88.8</td><td>53.3</td><td>93.6</td><td>109.4</td><td>62.2</td><td>90.9</td><td>150.2</td><td>72.0</td><td>85.3</td></tr></table>
|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
Figure 3: Qualitative results on evaluated datasets.
|
| 236 |
+
|
| 237 |
+
Table 2: Comparison to optimization-based methods. HMR-EFT is used for initialization.
|
| 238 |
+
|
| 239 |
+
<table><tr><td></td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td></tr><tr><td>3DNBF</td><td>140.7</td><td>71.8</td><td>85.0</td></tr><tr><td>Init. only</td><td>171.4</td><td>80.5</td><td>75.8</td></tr><tr><td>w/o NBV</td><td>146.6</td><td>72.6</td><td>83.1</td></tr><tr><td>w/o contrast</td><td>167.4</td><td>80.4</td><td>79.4</td></tr></table>
|
| 240 |
+
|
| 241 |
+
(a) Ablations for 3DNBF.
|
| 242 |
+
|
| 243 |
+
<table><tr><td>O</td><td>MPJPE↓</td><td>P-MPJPE↓</td><td>PCKh↑</td></tr><tr><td>1</td><td>189.6</td><td>84.8</td><td>69.2</td></tr><tr><td>4</td><td>140.7</td><td>71.8</td><td>85.0</td></tr><tr><td>8</td><td>161.2</td><td>77.8</td><td>81.0</td></tr></table>
|
| 244 |
+
|
| 245 |
+
(b) Number of pose-dependent features. $O = 1$ means using pose-independent features.
|
| 246 |
+
|
| 247 |
+
Table 3: Ablation studies. All experiments are performed on 3DPW-AdvOcc@80. (P-MPJPE: PA-MPJPE.)
|
| 248 |
+
|
| 249 |
+
NBV vs. Mesh. To demonstrate the advantage of NBV over mesh representation, we replace it with a mesh-based neural representation using SMPL while keeping everything else
|
| 250 |
+
|
| 251 |
+
the same. We use the differentiable rendering implementation from PyTorch3D [55]. As shown in Table 3a, this model achieves worse results than using NBV.
|
| 252 |
+
|
| 253 |
+
The pose-dependent kernel features. The 3D-aware pose-dependent kernel feature is key to the success of 3DNBF. Here we validate its effectiveness by comparing it with pose-independent features $(O = 1)$. As shown in Table 3b, much better performance is achieved with $O > 1$. Using 4 features for each kernel achieves the best performance, while further increasing the number of features may make the learning harder as it introduces more parameters.
|
| 254 |
+
|
| 255 |
+
Importance of contrastive training. We ablate this by training our model without contrastive learning, i.e. training the feature extractor with regression loss only. The performance degrades a lot as shown in Table 3a. The intuition is that our model requires the features to be Gaussian distributed and contrastive learning encourages this.
|
| 256 |
+
|
| 257 |
+
Regression head performance. Although we do not expect the regression head to be robust to occlusion, it achieves higher occlusion robustness compared to other regression-based methods as shown in Table 3a and Table 1.
|
| 258 |
+
|
| 259 |
+
# 5. Conclusion
|
| 260 |
+
|
| 261 |
+
In this work, we introduce 3D Neural Body Fitting (3DNBF), an approximate analysis-by-synthesis approach to 3D HPE that is accurate and highly robust to occlusion. To this end, we propose NBV, an explicit volume-based generative model of pose-dependent features of the human body.
|
| 262 |
+
|
| 263 |
+
We propose a contrastive learning framework for training a feature extractor that captures the 3D pose information of the body parts, thus overcoming the 2D-3D ambiguity in monocular 3D HPE. Experiments on challenging benchmark datasets demonstrate that 3DNBF outperforms SOTA regression-based methods as well as optimization-based methods. While focusing on occlusion robustness in this paper, we expect our model to be robust to other challenging adversarial examinations [61, 65].
|
| 264 |
+
|
| 265 |
+
# 6. Acknowledgements
|
| 266 |
+
|
| 267 |
+
AK acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No. 468670075. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via [2022-21102100005]. This work was also supported by NIH R01 EY029700, Army Research Laboratory award W911NF2320008, and Office of Naval Research N00014-21-1-2812.
|
| 268 |
+
|
| 269 |
+
# References
|
| 270 |
+
|
| 271 |
+
[1] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. Scape: Shape completion and animation of people. SIGGRAPH, 2005. 2
|
| 272 |
+
[2] Yutong Bai, Angtian Wang, Adam Kortylewski, and Alan Yuille. Coke: Contrastive learning for robust keypoint detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 65-74, 2023. 2
|
| 273 |
+
[3] Benjamin Biggs, David Novotny, Sebastien Ehrhardt, Hanbyul Joo, Ben Graham, and Andrea Vedaldi. 3d multi-bodies: Fitting sets of plausible 3d human models to ambiguous image data. In Advances in Neural Information Processing, 2020. 2
|
| 274 |
+
[4] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3d human pose and shape from a single image. In European Conference on Computer Vision, pages 561-578. Springer, 2016. 1, 2, 3, 7, 8
|
| 275 |
+
[5] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7291–7299, 2017. 7
|
| 276 |
+
[6] Yu Cheng, Bo Yang, Bo Wang, and Robby T Tan. 3d human pose estimation using spatio-temporal networks with explicit occlusion training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 10631-10638, 2020. 2
|
| 277 |
+
[7] Yu Cheng, Bo Yang, Bo Wang, Wending Yan, and Robby T Tan. Occlusion-aware networks for 3d human pose estimation in video. In International Conference on Computer Vision, pages 723-732, 2019. 2
|
| 278 |
+
[8] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the
|
| 279 |
+
|
| 280 |
+
wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3606-3613, 2014. 7
|
| 281 |
+
[9] Enric Corona, Gerard Pons-Moll, Guillem Alenya, and Francesc Moreno-Noguer. Learned vertex descent: A new direction for 3d human model fitting. In European Conference on Computer Vision, pages 146-165. Springer, 2022. 2
|
| 282 |
+
[10] Bernhard Egger, Sandro Schonborn, Andreas Schneider, Adam Kortylewski, Andreas Morel-Forster, Clemens Blumer, and Thomas Vetter. Occlusion-aware 3d morphable models and an illumination prior for face image analysis. International Journal of Computer Vision, 126(12):1269-1287, 2018. 3
|
| 283 |
+
[11] Georgios Georgakis, Ren Li, Srikrishna Karanam, Terrence Chen, Jana Košecka, and Ziyan Wu. Hierarchical kinematic human mesh recovery. In European Conference on Computer Vision, pages 768-784. Springer, 2020. 2
|
| 284 |
+
[12] Riza Alp Guler and Iasonas Kokkinos. HoloPose: Holistic 3D human reconstruction in-the-wild. In IEEE Conference on Computer Vision and Pattern Recognition, pages 10884-10894, 2019. 1, 2
|
| 285 |
+
[13] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020. 2
|
| 286 |
+
[14] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020. 6
|
| 287 |
+
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6
|
| 288 |
+
[16] Buzhen Huang, Yuan Shu, Jingyi Ju, and Yangang Wang. Occluded human body capture with self-supervised spatial-temporal motion prior. arXiv preprint arXiv:2207.05375, 2022. 2
|
| 289 |
+
[17] Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Javier Romero, Ijaz Akhter, and Michael J. Black. Towards accurate marker-less human shape and pose estimation over time. In International Conference on 3DVision, 2017. 2
|
| 290 |
+
[18] Peter J Huber. Robust statistics, volume 523. John Wiley & Sons, 2004. 2
|
| 291 |
+
[19] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013. 7
|
| 292 |
+
[20] Shun Iwase, Xingyu Liu, Rawal Khirodkar, Rio Yokota, and Kris M Kitani. Repose: Fast 6d object pose refinement via deep texture rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3303-3312, 2021. 4
|
| 293 |
+
[21] Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, and Anurag Ranjan. Neuman: Neural human radiance field
|
| 294 |
+
|
| 295 |
+
from a single video. In European Conference on Computer Vision, 2022. 3
|
| 296 |
+
[22] Hanbyul Joo, Natalia Neverova, and Andrea Vedaldi. Exemplar fine-tuning for 3d human model fitting towards inthe-wild 3d human pose estimation. In 2021 International Conference on 3D Vision (3DV), pages 42-52. IEEE, 2021. 1, 2, 7, 8
|
| 297 |
+
[23] Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3D deformation model for tracking faces, hands, and bodies. In IEEE Conference on Computer Vision and Pattern Recognition, pages 8320-8329, 2018. 2
|
| 298 |
+
[24] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7122-7131, 2018. 1, 2, 7
|
| 299 |
+
[25] Daniel Kersten, Pascal Mamassian, and Alan Yuille. Object perception as bayesian inference. Annu. Rev. Psychol., 55:271-304, 2004. 1
|
| 300 |
+
[26] Muhammed Kocabas, Chun-Hao P Huang, Otmar Hilliges, and Michael J Black. Pare: Part attention regressor for 3d human body estimation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 11127-11137, 2021. 1, 2, 6, 7, 8
|
| 301 |
+
[27] Nikos Kolotouros, Georgios Pavlakos, Michael J. Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In International Conference on Computer Vision, pages 2252-2261, 2019. 1, 2
|
| 302 |
+
[28] Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In International Conference on Computer Vision, pages 2252-2261, 2019. 1, 2, 7, 8
|
| 303 |
+
[29] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11605-11614, 2021. 2
|
| 304 |
+
[30] Adam Kortylewski, Ju He, Qing Liu, and Alan L Yuille. Compositional convolutional neural networks: A deep architecture with innate robustness to partial occlusion. In IEEE Conference on Computer Vision and Pattern Recognition, pages 8940-8949, 2020. 3
|
| 305 |
+
[31] Adam Kortylewski, Qing Liu, Angtian Wang, Yihong Sun, and Alan Yuille. Compositional convolutional neural networks: A robust and interpretable model for object recognition under occlusion. International Journal of Computer Vision, pages 1-25, 2020. 1, 3
|
| 306 |
+
[32] Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1333-1341, 2020. 3
|
| 307 |
+
[33] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J. Black, and Peter V. Gehler. Unite the People: Closing the loop between 3D and 2D human representations. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4704-4713, 2017. 2
|
| 308 |
+
|
| 309 |
+
[34] John P Lewis, Matt Cordner, and Nickson Fong. Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 165-172, 2000. 4
|
| 310 |
+
[35] Kevin Lin, Lijuan Wang, and Zicheng Liu. Mesh graphormer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 12939-12948, 2021. 7, 8
|
| 311 |
+
[36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014. 7
|
| 312 |
+
[37] Lingjie Liu, Marc Habermann, Viktor Rudnev, Kripasindhu Sarkar, Jiatao Gu, and Christian Theobalt. Neural actor: Neural free-view synthesis of human actors with pose control. ACM Transactions on Graphics (TOG), 40(6):1-16, 2021. 3
|
| 313 |
+
[38] Qihao Liu, Yi Zhang, Song Bai, and Alan Yuille. Explicit occlusion reasoning for multi-person 3d human pose estimation. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part V, pages 497-517. Springer, 2022. 2
|
| 314 |
+
[39] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3d reasoning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7708-7717, 2019. 4
|
| 315 |
+
[40] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multiperson linear model. In ACM Trans. Graphics (Proc. SIGGRAPH Asia), 2015. 1, 2, 4, 5, 6
|
| 316 |
+
[41] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation in the wild using improved cnn supervision. In 2017 international conference on 3D vision (3DV), pages 506-516. IEEE, 2017. 2, 7
|
| 317 |
+
[42] Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Mohamed Elgharib, Pascal Fua, Hans-Peter Seidel, Helge Rhodin, Gerard Pons-Moll, and Christian Theobalt. Xnect: Real-time multi-person 3d motion capture with a single rgb camera. Acm Transactions On Graphics (TOG), 39(4):82-1, 2020. 2
|
| 318 |
+
[43] Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Srinath Sridhar, Gerard Pons-Moll, and Christian Theobalt. Single-shot multi-person 3d pose estimation from monocular rgb. In 2018 International Conference on 3D Vision (3DV), pages 120-130. IEEE, 2018. 2
|
| 319 |
+
[44] Pol Moreno, Christopher KI Williams, Charlie Nash, and Pushmeet Kohli. Overcoming occlusion with inverse graphics. In European Conference on Computer Vision, pages 170-185. Springer, 2016. 2
|
| 320 |
+
[45] Natalia Neverova, David Novotny, Marc Szafraniec, Vasil Khalidov, Patrick Labatut, and Andrea Vedaldi. Continuous surface embeddings. Advances in Neural Information Processing Systems, 33:17258-17270, 2020. 3
|
| 321 |
+
|
| 322 |
+
[46] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7588–7597, 2019. 3
|
| 323 |
+
[47] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In CVPR, 2021. 3
|
| 324 |
+
[48] Atsuhiro Noguchi, Xiao Sun, Stephen Lin, and Tatsuya Harada. Neural articulated radiance field. In International Conference on Computer Vision, pages 5762-5772, 2021. 3
|
| 325 |
+
[49] Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter V. Gehler, and Bernt Schiele. Neural body fitting: Unifying deep learning and model-based human pose and shape estimation. In International Conference on 3D Vision, 2018. 1, 2
|
| 326 |
+
[50] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5865-5874, 2021. 3
|
| 327 |
+
[51] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3D hands, face, and body from a single image. In IEEE Conference on Computer Vision and Pattern Recognition, pages 10975-10985, 2019. 1, 2, 3, 4, 6
|
| 328 |
+
[52] Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas Daniilidis. Coarse-to-fine volumetric prediction for single-image 3D human pose. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 2
|
| 329 |
+
[53] Georgios Pavlakos, Luyang Zhu, Xiaowei Zhou, and Kostas Daniilidis. Learning to estimate 3D human pose and shape from a single color image. In IEEE Conference on Computer Vision and Pattern Recognition, pages 459-468, 2018. 1, 2
|
| 330 |
+
[54] Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, and Xiaowei Zhou. Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. In IEEE Conference on Computer Vision and Pattern Recognition, pages 9054–9063, 2021. 3
|
| 331 |
+
[55] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv preprint arXiv:2007.08501, 2020. 8
|
| 332 |
+
[56] Helge Rhodin, Nadia Robertini, Christian Richardt, Hans-Peter Seidel, and Christian Theobalt. A versatile scene model with differentiable visibility applied to generative pose estimation. In International Conference on Computer Vision, pages 765-773, 2015. 2, 3, 4
|
| 333 |
+
[57] Nadia Robertini, Edilson De Aguiar, Thomas Helten, and Christian Theobalt. Efficient multi-view performance capture of fine-scale surface detail. In 2014 2nd International Conference on 3D Vision, volume 1, pages 5-12. IEEE, 2014. 3
|
| 334 |
+
[58] Chris Rockwell and David F. Fouhey. Full-body awareness from partial observations. In European Conference on Computer Vision, pages 522-539, 2020. 2
|
| 335 |
+
|
| 336 |
+
[59] Gregory Rogez, Philippe Weinzaepfel, and Cordelia Schmid. Lcr-net: Localization-classification-regression for human pose. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3433-3441, 2017. 2
|
| 337 |
+
[60] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015. 6
|
| 338 |
+
[61] Nataniel Ruiz, Adam Kortylewski, Weichao Qiu, Cihang Xie, Sarah Adel Bargal, Alan Yuille, and Stan Sclaroff. Simulated adversarial testing of face recognition models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4145–4155, 2022. 9
|
| 339 |
+
[62] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In International Conference on Computer Vision, pages 2304-2314, 2019. 3
|
| 340 |
+
[63] István Sárándi, Timm Linder, Kai O Arras, and Bastian Leibe. How robust is 3d human pose estimation to occlusion? arXiv preprint arXiv:1808.09316, 2018. 2
|
| 341 |
+
[64] Akash Sengupta, Ignas Budvytis, and Roberto Cipolla. Hierarchical kinematic probability distributions for 3d human shape and pose estimation from images in the wild. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11219-11229, 2021. 2
|
| 342 |
+
[65] Michelle Shu, Chenxi Liu, Weichao Qiu, and Alan Yuille. Identifying model weakness with adversarial examiner. In AAAI, volume 34, pages 11998-12006, 2020. 9
|
| 343 |
+
[66] Jie Song, Xu Chen, and Otmar Hilliges. Human body model fitting by learned gradient descent. In European Conference on Computer Vision, 2020. 2
|
| 344 |
+
[67] Shih-Yang Su, Frank Yu, Michael Zollhöfer, and Helge Rhodin. A-nerf: Articulated neural radiance fields for learning human shape, appearance, and pose. Advances in Neural Information Processing Systems, 34:12278-12291, 2021. 3
|
| 345 |
+
[68] Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, and Yichen Wei. Integral human pose regression. In European Conference on Computer Vision, pages 529-545, 2018. 2
|
| 346 |
+
[69] Jun Kai Vince Tan, Ignas Budvytis, and Roberto Cipolla. Indirect deep structured learning for 3D human shape and pose prediction. In British Machine Vision Conference, 2017. 1, 2
|
| 347 |
+
[70] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019. 3
|
| 348 |
+
[71] Hsiao-Yu Tung, Hsiao-Wei Tung, Ersin Yumer, and Katerina Fragkiadaki. Self-supervised learning of motion capture. In Advances in Neural Information Processing, pages 5236–5246, 2017. 1, 2
|
| 349 |
+
[72] Timo von Marcard, Roberto Henschel, Michael J Black, Bodo Rosenhahn, and Gerard Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In European Conference on Computer Vision, pages 601-617, 2018. 2, 7, 8
|
| 350 |
+
|
| 351 |
+
[73] Angtian Wang, Adam Kortylewski, and Alan Yuille. Nemo: Neural mesh models of contrastive features for robust 3d pose estimation. arXiv preprint arXiv:2101.12378, 2021. 1, 2, 3, 4, 6
|
| 352 |
+
[74] Justin Wang, Edward Xu, Kangrui Xue, and Lukasz Kidzinski. 3D pose detection in videos: Focusing on occlusion. arXiv preprint arXiv:2006.13517, 2020. 2
|
| 353 |
+
[75] Chung-Yi Weng, Brian Curless, Pratul P Srinivasan, Jonathan T Barron, and Ira Kemelmacher-Shlizerman. HumanNeRF: Free-viewpoint rendering of moving people from monocular video. In IEEE Conference on Computer Vision and Pattern Recognition, pages 16210-16220, 2022. 3
|
| 354 |
+
[76] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742, 2018. 2
|
| 355 |
+
[77] Donglai Xiang, Hanbyul Joo, and Yaser Sheikh. Monocular total capture: Posing face, body, and hands in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, pages 10965-10974, 2019. 1, 2, 7, 8
|
| 356 |
+
[78] Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, and Michael J Black. Icon: Implicit clothed humans obtained from normals. In IEEE Conference on Computer Vision and Pattern Recognition, pages 13286-13296. IEEE, 2022. 3
|
| 357 |
+
[79] Yuanlu Xu, Song-Chun Zhu, and Tony Tung. Denserac: Joint 3d pose and shape estimation by dense render-and-compare. In International Conference on Computer Vision, October 2019. 1, 2
|
| 358 |
+
[80] Alan Yuille and Daniel Kersten. Vision as bayesian inference: analysis by synthesis? Trends in cognitive sciences, 10(7):301-308, 2006. 1
|
| 359 |
+
[81] Andrei Zanfir, Eduard Gabriel Bazavan, Hongyi Xu, Bill Freeman, Rahul Sukthankar, and Cristian Sminchisescu. Weakly supervised 3d human pose and shape reconstruction with normalizing flows. In European Conference on Computer Vision, pages 465-481, 2020. 1
|
| 360 |
+
[82] Tianshu Zhang, Buzhen Huang, and Yangang Wang. Object-occluded human shape and pose estimation from a single color image. In IEEE Conference on Computer Vision and Pattern Recognition, pages 7374–7383, 2020. 2, 7, 8
|
| 361 |
+
[83] Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, and Yichen Wei. Towards 3d human pose estimation in the wild: a weakly-supervised approach. In International Conference on Computer Vision, pages 398-407, 2017. 2
|
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:208f25454a3488ab04e83b6f375780efb15ded4a921308c83bb955e26dc064c7
|
| 3 |
+
size 407226
|
3dawareneuralbodyfittingforocclusionrobust3dhumanposeestimation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:008ebe3fe9cc7cd4fbdcde45c0d4a640f7d25ee7bc873eee825cc30b7a9c46a6
|
| 3 |
+
size 498457
|
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4e9c72eadffb537b66820b93abd7fbd8736ba515c4514dad972ee80656e1a013
|
| 3 |
+
size 87865
|
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:926a30cf9c819f23be54a11bdac12f00033cba7e678dffe232317381b5ed8262
|
| 3 |
+
size 105731
|
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/940e4977-9767-4aba-96fa-fc3c8ae3c067_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bfbbdea26d125f19f8389868ddc8b41819aba182d7b5dc0682de800ce9c0ac29
|
| 3 |
+
size 4799830
|
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/full.md
ADDED
|
@@ -0,0 +1,361 @@
|
| 1 |
+
# 3D Distillation: Improving Self-Supervised Monocular Depth Estimation on Reflective Surfaces
|
| 2 |
+
|
| 3 |
+
Xuepeng Shi $^{2}$ Georgi Dikov $^{1}$ Gerhard Reitmayr $^{1}$ Tae-Kyun Kim $^{2,3}$ Mohsen Ghafoorian $^{1}$ $^{1}$ Qualcomm $^{2}$ Imperial College London $^{3}$ KAIST
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
Self-supervised monocular depth estimation (SSMDE) aims at predicting the dense depth maps of monocular images, by learning to minimize a photometric loss using spatially neighboring image pairs during training. While SSMDE offers a significant scalability advantage over supervised approaches, it performs poorly on reflective surfaces as the photometric constancy assumption of the photometric loss is violated. We note that the appearance of reflective surfaces is view-dependent and often there are views of such surfaces in the training data that are not contaminated by strong specular reflections. Thus, reflective surfaces can be accurately reconstructed by aggregating the predicted depth of these views. Motivated by this observation, we propose 3D distillation: a novel training framework that utilizes the projected depth of reconstructed reflective surfaces to generate reasonably accurate depth pseudo-labels. To identify those surfaces automatically, we employ an uncertainty-guided depth fusion method, combining the smoother and more accurate projected depth on reflective surfaces and the detailed predicted depth elsewhere. In our experiments using the ScanNet and 7-Scenes datasets, we show that 3D distillation not only significantly improves the prediction accuracy, especially on the problematic surfaces, but also that it generalizes well over various underlying network architectures and to new datasets.
|
| 8 |
+
|
| 9 |
+
# 1. Introduction
|
| 10 |
+
|
| 11 |
+
Monocular depth estimation [37, 7] is the task of predicting the dense depth map of a monocular image. It is a fundamental and challenging problem in computer vision as it bridges the gap between 2D images and the 3D world. Supervised monocular depth estimation requires a large number of images from diverse scenes with ground truth depth. However, creating depth annotations involves
|
| 12 |
+
|
| 13 |
+

|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: (a) On a reflective surface, predicting the correct surface depth does not minimize the photometric loss [14], due to the disparity between the projection with correct depth $A$ and the observed location $B$ . (b) $\mathrm{L} \rightarrow \mathrm{R}$ : Image from the ScanNet test set (scene0781_00) [4], predicted depth of Monodepth2 [14] and MonoViT [53], which overestimate the depth of the highlight. (c) $\mathrm{L} \rightarrow \mathrm{R}$ : Ground truth, predicted depth of [14, 53] with our 3D distillation.
|
| 17 |
+
|
| 18 |
+
expensive hardware and is time-consuming [12, 4, 39]. In contrast, self-supervised monocular depth estimation (SSMDE) [11, 55, 13, 14] only requires posed images as training data, such as stereo pairs and video sequences, and is therefore important for domains such as autonomous driving and virtual/augmented reality where the scalability of the data acquisition for various environments and camera setups matters. As a consequence, SSMDE has drawn much attention in recent years [43, 42].
|
| 19 |
+
|
| 20 |
+
Fundamentally, training an SSMDE model is based on the photometric loss [14]: given (i) the relative pose between two frames (source and target), (ii) the camera intrinsic parameters and (iii) the predicted depth map of the target frame, one can transform the source image into
|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
(a)
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
Figure 2: (a) Images from the ScanNet train set [4] with annotated highlights. (b) Depth predictions of Monodepth2 [14]. (c) Mesh [32] of the table showing that the reflective surface can still be reconstructed correctly by aggregating the predicted depth from different view directions. The artifacts from the overestimated depths are occluded by the correct mesh surface.
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
|
| 30 |
+

|
| 31 |
+
|
| 44 |
+
the target view direction with interpolation-based warping. The photometric loss can then be used to guide the training of the underlying depth estimation model. The effectiveness of SSMDE in outdoor applications such as autonomous driving [12] has been demonstrated in many prior works [11, 55, 13, 34, 14, 15, 16, 22, 45].
|
| 45 |
+
|
| 46 |
+
On the other hand, applying SSMDE to indoor scenes [4, 39] is challenging due to commonly observed reflective surfaces such as shiny floors, tables, screens, etc. As illustrated in Fig. 1a, predicting the correct depth of a reflective surface does not minimize the photometric loss due to view-dependent effects violating the photometric constancy assumption. Specifically, perceiving depth at the mirror image of a light source on the reflective surface, appearing as a virtual faraway object, minimizes the said loss. Consequently, the network learns to predict overestimated depth for the specular reflection, see Fig. 1b. However, this issue has not been studied enough.
|
| 47 |
+
|
| 48 |
+
To address this issue, we propose 3D distillation: a general training framework to improve SSMDE on reflective surfaces. As shown in Fig. 2a and Fig. 2b, we observe that specular highlights are view-dependent, and that there are some view directions in which the surface appearance is not contaminated by them. Thus, reflective surfaces can be accurately reconstructed by aggregating the predicted depth of these views, as shown in Fig. 2c. Inspired by this observation, we utilize the projected depth of reconstructed scenes to generate accurate depth pseudo-labels for challenging reflective surfaces. However, while the projected depth is more accurate at reflective surfaces, it is lacking high-frequency details due to volumetric averaging over multiple views. To overcome this over-smoothing problem, we propose a fusion scheme in which the projected and predicted depth are combined under the guidance of an uncertainty map associated to the predicted depth. Our 3D distillation is agnostic to the underlying network architectures [14, 29, 53] and significantly improves the depth prediction accuracy on reflective surfaces, as shown in Fig. 1c.
|
| 49 |
+
|
| 50 |
+
We highlight the contributions of this paper as follows:
|
| 51 |
+
|
| 52 |
+
1. We propose 3D distillation: a novel training framework
|
| 53 |
+
|
| 54 |
+
that utilizes multi-view 3D information to improve depth prediction accuracy on reflective surfaces without adding computational cost or model parameters during inference.
|
| 55 |
+
|
| 56 |
+
2. We propose a novel fusion of the predicted and projected depth for pseudo-label generation, and an uncertainty-based approach that accurately identifies specular highlights.
|
| 57 |
+
3. To validate the effectiveness, we select a subset of the ScanNet dataset [4] that is rich in specular reflections and glossy surfaces, and thus provide a foundation for benchmarking future work tackling this issue.
|
| 58 |
+
4. Through extensive evaluations, we show that 3D distillation significantly improves the depth accuracy of reflective surfaces on ScanNet [4] and 7-Scenes [39], while being agnostic to the underlying networks.
|
| 59 |
+
|
| 60 |
+
# 2. Related Work
|
| 61 |
+
|
| 62 |
+
# 2.1. Self-Supervised Monocular Depth Estimation
|
| 63 |
+
|
| 64 |
+
Self-supervised monocular depth estimation (SSMDE) aims to learn the dense depth maps of monocular images, training with the photometric loss [14] using stereo pairs or monocular videos. Monodepth [13] learns depth from stereo pairs. Monodepth2 [14] further uses temporally neighboring frames to minimize the photometric loss, and introduces auto-masking and minimum reprojection loss to solve the problem of stationary pixels and occlusions. To deal with dynamic objects, semantic information is utilized in SGDepth [22] and motion maps are introduced in [26]. Feature space reconstruction losses are used in [52, 40] to improve the depth accuracy. DeFeatNet [41] introduces a cross-domain dense feature representation and a warped feature consistency to improve the depth accuracy. In [34], a complex architecture is deployed to supervise a more compact one. HR-Depth [29] introduces high-resolution feature representation and feature fusion squeeze-and-excitation block. MonoViT [53]
|
| 65 |
+
|
| 66 |
+

|
| 67 |
+
Figure 3: Pipeline of our 3D distillation training. First, a pretrained self-supervised depth model is used to obtain the predicted depth of the training images. Then, the mesh of the training scene is reconstructed using the predicted depth, and the projected depth of the training images can be obtained. Finally, the predicted and projected depth are fused to generate pseudo-labels, under the guidance of the uncertainty of the predicted depth (low to high). 3D distillation can generate accurate pseudo-labels by utilizing the multi-view 3D information aggregated from the predicted depth of multiple video frames.
|
| 68 |
+
|
| 69 |
+
uses MPViT [24] as the encoder of the depth model and achieves state-of-the-art SSMDE accuracy [42]. Planar assumptions are used in [51, 25] to improve the depth estimation accuracy for indoor scenes, which is achieved by utilizing external superpixel segmentation [9] or vanishing point detection [28] methods. MonoIndoor [19] is designed to handle indoor dynamic depth ranges and be more robust to rotational motions. DistDepth [46] uses an external depth model [36] as a teacher, which is trained in a supervised manner, to guide the training of SSMDE models.
|
| 70 |
+
|
| 71 |
+
In this paper, we improve the SSMDE accuracy on reflective surfaces in indoor scenes in a self-supervised manner, which has not been studied in these existing works. The proposed 3D distillation utilizes the multi-view 3D information aggregated from the predicted depth of multiple video frames, instead of utilizing external segmentation or depth models [51, 25, 46]. To demonstrate the generalizability, we experiment on three SSMDE architectures [14, 29, 53].
|
| 72 |
+
|
| 73 |
+
# 2.2. Self-Supervised Multi-View Stereo
|
| 74 |
+
|
| 75 |
+
Self-supervised multi-view stereo predicts depth from multi-view images, without using ground truth depth labels during training. Generating pseudo-labels is prevalent in this topic. U-MVS [47] uses uncertainty [10] to filter out unreliable pseudo-labels. In [48], the projected depth from reconstructed meshes is used as pseudo-labels and low-resolution training is introduced to improve the accuracy.
|
| 76 |
+
|
| 77 |
+
In contrast, our 3D distillation fuses the predicted and projected depth under the guidance of uncertainty [23] to generate reliable pseudo-labels. RC-MVSNet [3] uses NeRF [31] as a teacher to improve the accuracy and trains models on an object-level dataset [1]. However, designing a general NeRF model [31] for scene-level datasets [4] is challenging [17]. In contrast, our 3D distillation works on scene-level datasets [4] and does not rely on external models such as NeRF [31].
|
| 78 |
+
|
| 79 |
+
# 2.3. Uncertainty Estimation
|
| 80 |
+
|
| 81 |
+
Uncertainty estimation [10, 23, 20] aims to quantify the uncertainty of predictions. Regression uncertainty [20] and MC-dropout [10] are used to select reliable pseudo-labels in semi-supervised object detection [27] and self-supervised multi-view stereo [47], respectively. In SSMDE, different strategies are explored in [35] to model uncertainty. In this paper, we work on SSMDE and use an ensemble-based uncertainty [23] to guide the fusion of depth training labels from different sources.
|
| 82 |
+
|
| 83 |
+
# 3. Method
|
| 84 |
+
|
| 85 |
+
In this section, we first discuss the self-supervised pretraining, then detail our 3D distillation training which aggregates multi-view 3D information to improve the depth accuracy on reflective surfaces. An overview of our 3D distillation training pipeline is shown in Fig. 3.
|
| 86 |
+
|
| 87 |
+

|
| 88 |
+
Figure 4: First row $(\mathrm{L}\rightarrow \mathrm{R})$ : Image from the ScanNet train set (scene0066_00) [4]; predicted depth of Monodepth2 [14], HR-Depth [29], and MonoViT [53], respectively; uncertainty map [23] of the predicted depth. Second row $(\mathrm{L}\rightarrow \mathrm{R})$ : Ground truth depth; projected depth of [14, 29, 53], respectively; binary mask of the uncertainty map. The predicted depth can keep more high-frequency details and the projected depth is more accurate on reflective surfaces. In our 3D distillation, the predicted depth and projected depth are fused under the guidance of uncertainty, which combines the best of the two worlds.
|
| 113 |
+
|
| 114 |
+

|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
|
| 118 |
+
# 3.1. Self-Supervised Pretraining
|
| 119 |
+
|
| 120 |
+
In this stage, a self-supervised depth model is obtained by training with the photometric loss [13, 14]. Let $S_{I} = \{I_{t}\}_{t=1}^{N}$ be a sequence of video frames for training. Following common notation, we denote with $T_{t\rightarrow s}$ the relative pose for a source image $I_{s}$ , with respect to a target image $I_{t}$ , and with $K$ the world-to-pixel coordinate camera projection matrix. The goal is to predict a dense depth map $D_{t}$ of a target image $I_{t}$ which minimizes the photometric loss $\mathcal{L}_{\mathrm{recons}}$ as follows:
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\mathcal{L}_{\text{recons}} = \ell\left(I_{t}, \operatorname{warp}\left(I_{s}, T_{t \rightarrow s}, K, D_{t}\right)\right), \tag{1}
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
where $\operatorname{warp}(I, T, K, D) = I\langle KTDK^{-1}x\rangle_{x \in \operatorname{coords}(I)}$ denotes an image warping transformation with bilinear interpolation sampling. Following [13, 14], we use:
|
| 127 |
+
|
| 128 |
+
$$
|
| 129 |
+
\ell\left(I_{a}, I_{b}\right) = \frac{\alpha}{2}\left(1 - \operatorname{SSIM}\left(I_{a}, I_{b}\right)\right) + (1 - \alpha)\left\| I_{a} - I_{b} \right\|_{1}, \tag{2}
|
| 130 |
+
$$
|
| 131 |
+
|
| 132 |
+
a combination of pixel-wise $L_{1}$ and SSIM [44] losses, where $\alpha = 0.85$.
|
| 133 |
+
|
| 134 |
+
In practice, we follow [14] to extend the photometric loss from Eq. (1) to account for multiple source frames using the minimum reprojection loss and add a smoothness regularization. During training, the ground truth camera poses are used to calculate the relative pose $T_{t\rightarrow s}$ , and thus the predicted depth is metric. Any existing SSMDE network architecture [14, 29, 53] fits into this framework of self-supervised training.
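A minimal PyTorch-style sketch of the warping operator and the loss of Eq. (2) is given below; it omits the minimum reprojection and smoothness terms, and the SSIM implementation is assumed to be provided externally:

```python
import torch
import torch.nn.functional as F

def warp(img_src, T_ts, K, depth_t):
    """Sketch of the warp() operator in Eq. (1): back-project target pixels with
    the predicted depth, transform them into the source view and sample the
    source image bilinearly. img_src: (B,3,H,W), depth_t: (B,1,H,W),
    K: (B,3,3) intrinsics, T_ts: (B,4,4) relative pose target->source."""
    b, _, h, w = depth_t.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=depth_t.device),
        torch.arange(w, device=depth_t.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1)
    cam = torch.inverse(K) @ pix * depth_t.view(b, 1, -1)      # 3D points, target frame
    cam = torch.cat([cam, torch.ones_like(cam[:, :1])], 1)     # homogeneous (B,4,HW)
    proj = K @ (T_ts @ cam)[:, :3]                             # project into source view
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1, 2 * uv[:, 1] / (h - 1) - 1], -1)
    return F.grid_sample(img_src, grid.view(b, h, w, 2), align_corners=True)

def photometric_loss(img_t, img_warped, ssim_fn, alpha=0.85):
    """Eq. (2): alpha/2 * (1 - SSIM) + (1 - alpha) * L1. `ssim_fn` is assumed
    to return a per-pixel SSIM map (external implementation)."""
    l1 = (img_t - img_warped).abs().mean(1, keepdim=True)
    return (alpha / 2.0) * (1.0 - ssim_fn(img_t, img_warped)) + (1.0 - alpha) * l1
```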
|
| 135 |
+
|
| 136 |
+
# 3.2. 3D Distillation Training
|
| 137 |
+
|
| 138 |
+
In this stage, the self-supervised model is first used to generate the predicted depth of training images. Then, the
|
| 139 |
+
|
| 140 |
+
meshes of training scenes are reconstructed using the predicted depth, and the projected depth of training images can be obtained. Finally, the predicted and projected depth are fused to generate pseudo-labels, and a 3D distillation model is trained using the pseudo-labels. Note that the self-supervised model is frozen in this stage.
|
| 141 |
+
|
| 142 |
+
# 3.2.1 Predicted Depth Generation
|
| 143 |
+
|
| 144 |
+
For the training video sequence $S_{I} = \{I_{t}\}_{t = 1}^{N}$ , the self-supervised depth model is used to obtain the predicted depth, i.e., $S_{D} = \{D_{t}\}_{t = 1}^{N}$ . The predicted depth is accurate on high-frequency details, such as the boundary of an object. However, the predicted depth can perform poorly on reflective surfaces, as the photometric constancy assumption of the photometric loss in Eq. (1) is violated.
|
| 145 |
+
|
| 146 |
+
# 3.2.2 Projected Depth Generation
|
| 147 |
+
|
| 148 |
+
To get better training supervision for reflective surfaces, we aggregate the multi-view 3D information from the predicted depth of multiple video frames. Specifically, with the predicted depth $S_{D}$ and ground truth camera poses of the training images, we use TSDF-fusion [32] to reconstruct the 3D mesh of the scene; then we project the 3D mesh according to the camera poses of the images and obtain the corresponding projected depth, i.e., $S_{P} = \{P_{t}\}_{t=1}^{N}$ . The projected depth is more accurate than the predicted depth on reflective surfaces, because reflective surfaces are view-dependent and often there are views of such surfaces in the training data that are not contaminated by strong specular reflections. Fig. 4 illustrates that the predicted and projected
<table><tr><td></td><td>[48]</td><td>RC-MVSNet [3]</td><td>Ours</td></tr><tr><td>Pseudo-Label</td><td>proj. depth</td><td>Depth from NeRF [31]</td><td>pred. depth + proj. depth</td></tr><tr><td>Technique</td><td>LR Training</td><td>Depth-guided Sampling</td><td>Uncer-guided Fusion</td></tr><tr><td>Training Data</td><td>Object [1]</td><td>Object [1]</td><td>Scene [4]</td></tr></table>
Table 1: Different strategies to aggregate multi-view 3D information to generate pseudo-labels. 'Technique' means the proposed technique to improve the quality of pseudo-labels. 'LR Training' denotes low-resolution training. 'Uncer' denotes uncertainty. 'Object' and 'Scene' denote object-level datasets and scene-level datasets, respectively.
In this step, mesh reconstruction is necessary because (i) meshing improves the completeness of the projected depth $S_P$, and (ii) meshes can model occlusions.
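
Since Sec. 3.3 reports that Open3D's TSDF-fusion is used, this step could look roughly as follows; the frame sub-sampling, depth scaling and in-memory frame format below are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import open3d as o3d

def fuse_scene(frames, intrinsic, voxel=0.05, trunc=1.0):
    """frames: list of (rgb uint8 HxWx3, predicted depth float32 HxW in metres,
    4x4 camera-to-world pose). intrinsic: o3d.camera.PinholeCameraIntrinsic.
    Returns a triangle mesh of the scene."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel, sdf_trunc=trunc,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for rgb, depth, cam_to_world in frames[::10]:   # integrate every 10th frame, as in Sec. 3.3
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(rgb), o3d.geometry.Image(depth),
            depth_scale=1.0, depth_trunc=10.0, convert_rgb_to_intensity=False)
        # integrate() expects a world-to-camera extrinsic
        volume.integrate(rgbd, intrinsic, np.linalg.inv(cam_to_world))
    return volume.extract_triangle_mesh()
```
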
# 3.2.3 Uncertainty-guided Depth Fusion
We fuse the predicted depth $D_{t}$ and projected depth $P_{t}$ under the guidance of the uncertainty of the predicted depth. With three self-supervised models with different network architectures [14, 29, 53], we can use an ensemble-based uncertainty [23] to obtain the uncertainty maps $S_{U} = \{U_{t}\}_{t=1}^{N}$. Specifically, the standard deviation of the three depth predictions at a pixel is the uncertainty of that pixel. As shown in Fig. 4 (top row), these networks differ in their ability to capture high-frequency information, so their depth predictions at specular highlights vary as well, which increases the uncertainty there. We do not use MC-dropout [10] here, as it may not work well in SSMDE with known scale, as discussed in [35]. We set a threshold $\alpha_{\mathrm{uncer}} = 0.4$ and fuse the predicted and projected depth to get the pseudo-labels $S_{L} = \{L_{t}\}_{t=1}^{N}$, formulated as:

$$
L_{t}(x) = \begin{cases} P_{t}(x), & \text{if } U_{t}(x) \geq \alpha_{\text{uncer}} \\ D_{t}(x), & \text{otherwise} \end{cases} \tag{3}
$$

where $x$ is a pixel on an image frame $I_{t}$.
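
Eq. (3) and the ensemble uncertainty translate directly into a few lines. Which ensemble member supplies $D_t$ and how pixels without mesh coverage are handled are our assumptions and not specified above.

```python
import numpy as np

def fuse_pseudo_label(depth_preds, depth_proj, thr=0.4):
    """depth_preds: list of three predicted depth maps (H, W) from the ensemble.
    depth_proj: projected depth (H, W) rendered from the TSDF mesh.
    Returns the pseudo-label L_t of Eq. (3)."""
    preds = np.stack(depth_preds, axis=0)
    uncertainty = preds.std(axis=0)                    # ensemble standard deviation U_t
    pseudo = np.where(uncertainty >= thr, depth_proj, preds[0])
    # Assumed fallback: keep the prediction where the mesh gives no depth.
    pseudo = np.where(depth_proj > 0, pseudo, preds[0])
    return pseudo
```
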
We compare different strategies to aggregate multi-view 3D information to generate pseudo-labels in Tab. 1. In [48], the projected depth from reconstructed meshes is used, and low-resolution training/high-resolution testing is introduced to improve the accuracy. However, cross-resolution testing is challenging for monocular depth estimation [8, 18]. In [3], NeRF [31] is used as a teacher to improve the accuracy. However, designing a general NeRF model [31] for scene-level datasets [4] is not trivial [17]. Our 3D distillation instead fuses the predicted depth and projected depth to generate pseudo-labels, which works well for scene-level monocular depth estimation.
# 3.2.4 Model Training
We use the pseudo-labels $S_{L}$ to train the 3D distillation model. Following [50, 38], the training loss is:

$$
\mathcal{L}_{\text{depth}} = \left| \log F_{t} - \log L_{t} \right|, \tag{4}
$$

where $F_{t}$ is the prediction of image $I_{t}$ and $L_{t}$ is the pseudo-label of image $I_{t}$. To demonstrate the benefit of aggregating multi-view 3D information, the 3D distillation model uses the same network architecture as the self-supervised model and is trained from scratch on the pseudo-labels instead of fine-tuning the self-supervised model. Our 3D distillation framework only modifies the training stage, without introducing additional computational cost or model parameters during inference.
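
In PyTorch, Eq. (4) with a mask for invalid pseudo-label pixels (the masking is our assumption) can be written as:

```python
def depth_loss(pred, pseudo_label, eps=1e-6):
    # Eq. (4): L1 difference in log-depth space, averaged over valid pseudo-label pixels.
    valid = pseudo_label > eps
    diff = (pred[valid].clamp(min=eps).log() - pseudo_label[valid].log()).abs()
    return diff.mean()
```
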
# 3.3. Implementation Details
We experiment using three network architectures [14, 29, 53]. For numerical stability during training, the depth models predict disparity and the output is activated by a sigmoid function. The input/output resolution of the depth models is $384 \times 288$ . We implement our method with PyTorch [33]. The training batch size for Monodepth2 [14] and HR-Depth [29] is 12 and for MonoViT [53] is 8. All the models are trained for 41 epochs with the Adam optimizer [21]. The initial learning rate is $10^{-4}$ and reduced by a factor of 10 after 26 and 36 epochs. Flipping and color augmentations are used during training, following [14]. For the scene reconstruction, we use TSDF-fusion [32] and mesh extraction in Open3D [54]. The voxel size is $0.05\mathrm{m}$ and the truncation distance is $1.0\mathrm{m}$ . To speed up the reconstruction, we only integrate every $10^{\mathrm{th}}$ frame during TSDF-fusion. To obtain the projected depth of meshes, we use Pyrender [30].
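
Rendering the projected depth of a reconstructed mesh with Pyrender can be sketched as below. The conversion from an OpenCV-style camera-to-world pose to Pyrender's OpenGL convention is our assumption, and an offscreen OpenGL/EGL context is required.

```python
import numpy as np
import pyrender
import trimesh

def render_projected_depth(mesh_path, K, cam_to_world, width=384, height=288):
    scene = pyrender.Scene()
    scene.add(pyrender.Mesh.from_trimesh(trimesh.load(mesh_path)))
    camera = pyrender.IntrinsicsCamera(fx=K[0, 0], fy=K[1, 1], cx=K[0, 2], cy=K[1, 2])
    # Pyrender uses the OpenGL convention (camera looks down -z): flip y and z of a CV-style pose.
    gl_pose = cam_to_world @ np.diag([1.0, -1.0, -1.0, 1.0])
    scene.add(camera, pose=gl_pose)
    renderer = pyrender.OffscreenRenderer(width, height)
    _, depth = renderer.render(scene)   # depth in the same metric units as the mesh
    renderer.delete()
    return depth
```
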
# 4. Experiments
In this section, we first introduce the datasets we use, then present the main results, and finally discuss the ablation experiments. We also show the effectiveness of our 3D distillation qualitatively in Fig. 5.
# 4.1. Datasets
ScanNet (v2) dataset [4] is a large-scale indoor RGB-D dataset that includes both 2D and 3D data. It contains 1613 indoor scenes with ground truth camera poses and depth maps. We use the official train set (1201 scenes) for our model training. During training, we only use images and ground truth camera poses, without using ground truth depth data. We consider every $10^{\mathrm{th}}$ frame as a target frame to reduce redundancy and, for each, we find a source frame both backwards and forwards in time with a relative translation of $5 - 10\mathrm{cm}$ and a relative rotation of at most 3 degrees, forming 45539 training triples.
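
This source-frame selection reduces to a simple relative-pose test; the helper below is illustrative (our naming), with the thresholds taken from the text.

```python
import numpy as np

def is_valid_source(pose_t, pose_s, t_min=0.05, t_max=0.10, max_deg=3.0):
    """pose_*: 4x4 camera-to-world matrices. Accept the source frame when the
    relative translation is 5-10 cm and the relative rotation is at most 3 degrees."""
    rel = np.linalg.inv(pose_t) @ pose_s
    translation = np.linalg.norm(rel[:3, 3])
    cos_angle = np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    return t_min <= translation <= t_max and angle <= max_deg
```
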
<table><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">ScanNet Val Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.167</td><td>0.100</td><td>0.385</td><td>0.203</td><td>0.764</td><td>0.935</td><td>0.981</td></tr><tr><td>Self-Teaching [35]</td><td>0.160</td><td>0.090</td><td>0.365</td><td>0.193</td><td>0.780</td><td>0.941</td><td>0.983</td></tr><tr><td>3D Distillation (ours)</td><td>0.157</td><td>0.083</td><td>0.357</td><td>0.190</td><td>0.782</td><td>0.943</td><td>0.985</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.166</td><td>0.100</td><td>0.381</td><td>0.200</td><td>0.771</td><td>0.937</td><td>0.982</td></tr><tr><td>Self-Teaching [35]</td><td>0.159</td><td>0.090</td><td>0.360</td><td>0.190</td><td>0.785</td><td>0.943</td><td>0.984</td></tr><tr><td>3D Distillation (ours)</td><td>0.154</td><td>0.080</td><td>0.349</td><td>0.186</td><td>0.788</td><td>0.945</td><td>0.986</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.138</td><td>0.077</td><td>0.331</td><td>0.171</td><td>0.831</td><td>0.955</td><td>0.986</td></tr><tr><td>Self-Teaching [35]</td><td>0.133</td><td>0.071</td><td>0.314</td><td>0.163</td><td>0.844</td><td>0.959</td><td>0.988</td></tr><tr><td>3D Distillation (ours)</td><td>0.128</td><td>0.060</td><td>0.296</td><td>0.157</td><td>0.846</td><td>0.962</td><td>0.990</td></tr><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">ScanNet Test Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.189</td><td>0.116</td><td>0.407</td><td>0.217</td><td>0.731</td><td>0.921</td><td>0.974</td></tr><tr><td>Self-Teaching [35]</td><td>0.184</td><td>0.109</td><td>0.392</td><td>0.210</td><td>0.742</td><td>0.925</td><td>0.976</td></tr><tr><td>3D Distillation (ours)</td><td>0.181</td><td>0.105</td><td>0.388</td><td>0.208</td><td>0.746</td><td>0.927</td><td>0.976</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.184</td><td>0.111</td><td>0.399</td><td>0.212</td><td>0.739</td><td>0.925</td><td>0.976</td></tr><tr><td>Self-Teaching [35]</td><td>0.178</td><td>0.102</td><td>0.381</td><td>0.204</td><td>0.752</td><td>0.931</td><td>0.979</td></tr><tr><td>3D Distillation (ours)</td><td>0.176</td><td>0.098</td><td>0.378</td><td>0.202</td><td>0.754</td><td>0.932</td><td>0.979</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.154</td><td>0.082</td><td>0.343</td><td>0.182</td><td>0.801</td><td>0.948</td><td>0.984</td></tr><tr><td>Self-Teaching [35]</td><td>0.152</td><td>0.081</td><td>0.329</td><td>0.177</td><td>0.811</td><td>0.948</td><td>0.983</td></tr><tr><td>3D Distillation (ours)</td><td>0.149</td><td>0.075</td><td>0.324</td><td>0.174</td><td>0.812</td><td>0.949</td><td>0.985</td></tr></table>
Table 2: Main results on the ScanNet val and test sets [4]. 'Self-Supervised' indicates that the model is trained with the photometric loss [14]. 'Self-Teaching' indicates that the model is supervised by the predicted depth from self-supervised models and trained with the depth loss in Eq. (4). '3D Distillation' indicates that the model is supervised by the fusion of the predicted depth and projected depth and trained with the depth loss in Eq. (4). Bold indicates the best result of an architecture.
We evaluate using the complete official val set (312 scenes) and the complete official test set (100 scenes). To better evaluate the accuracy on reflective surfaces, we create ScanNet-Reflection, a subset in which reflective surfaces can be observed in every image. The ScanNet-Reflection val and test sets consist of 439 and 121 images from the official val and test sets, respectively. To evaluate the accuracy on non-reflective surfaces, we also create a ScanNet-NoReflection val set, which consists of 1012 images without reflective surfaces from the official val set. We evaluate the absolute depth and use the standard depth metrics [7]. In the supplementary material, we provide the list of the training triples, the lists of the ScanNet-Reflection and ScanNet-NoReflection subsets, and the definitions of the evaluation metrics.
7-Scenes dataset [39] is a challenging RGB-D dataset captured in indoor scenes. To show the cross-dataset generalizability, we use models trained on ScanNet [4] to test on 7-Scenes [39], following [5, 38]. We use the test set in [5, 38], which consists of 13 sequences, and evaluate using the ground truth depth from [2]. We evaluate the relative depth as the camera intrinsics of different datasets [4, 39] are different, and use the standard depth metrics [7].
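
For reference, the standard depth metrics [7] used in all tables can be computed as below; applying median scaling only for the relative-depth evaluation on 7-Scenes is our reading of the protocol.

```python
import numpy as np

def depth_metrics(pred, gt, median_scale=False):
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    if median_scale:                                   # relative-depth evaluation
        pred = pred * np.median(gt) / np.median(pred)
    err = pred - gt
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "abs_rel": np.mean(np.abs(err) / gt),
        "sq_rel": np.mean(err ** 2 / gt),
        "rmse": np.sqrt(np.mean(err ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "delta_1": np.mean(ratio < 1.25),
        "delta_2": np.mean(ratio < 1.25 ** 2),
        "delta_3": np.mean(ratio < 1.25 ** 3),
    }
```
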
# 4.2. Main Results
ScanNet [4] results with and without our 3D distillation are shown in Tab. 2. We can see: (i) 3D distillation models achieve the best accuracy under all seven metrics, for three different backbones [14, 29, 53] and on both val and test sets. For example, using Monodepth2 architecture [14], 3D distillation can decrease the Sq Rel of the self-teaching model by $7.78\%$ and $3.67\%$ on the val and test sets, respectively; using HR-Depth architecture [29], 3D distillation can decrease the Sq Rel of the self-teaching model by $11.11\%$ and $3.92\%$ on the val and test sets, respectively; the corresponding improvements for MonoViT [53] are $15.49\%$ and $7.41\%$ on the val and test sets, respectively. (ii) The observed improvements of 3D distillation for a stronger model are larger. Specifically, on the val set, 3D distillation can decrease the Sq Rel of the self-teaching models by $7.78\% / 11.11\% / 15.49\%$ for Monodepth2 [14], HR-Depth [29], and MonoViT [53] architectures, respectively; on the test set, 3D distillation can decrease the Sq Rel of the self-teaching models by $3.67\% / 3.92\% / 7.41\%$ for Monodepth2 [14], HR-Depth [29], and MonoViT [53] architectures, respectively.
<table><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">ScanNet-Reflection Val Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.206</td><td>0.227</td><td>0.584</td><td>0.246</td><td>0.750</td><td>0.912</td><td>0.961</td></tr><tr><td>Self-Teaching [35]</td><td>0.192</td><td>0.188</td><td>0.548</td><td>0.233</td><td>0.764</td><td>0.920</td><td>0.967</td></tr><tr><td>3D Distillation (ours)</td><td>0.156</td><td>0.093</td><td>0.442</td><td>0.191</td><td>0.786</td><td>0.943</td><td>0.987</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.213</td><td>0.244</td><td>0.605</td><td>0.255</td><td>0.741</td><td>0.906</td><td>0.961</td></tr><tr><td>Self-Teaching [35]</td><td>0.202</td><td>0.208</td><td>0.565</td><td>0.243</td><td>0.756</td><td>0.914</td><td>0.964</td></tr><tr><td>3D Distillation (ours)</td><td>0.153</td><td>0.090</td><td>0.430</td><td>0.188</td><td>0.789</td><td>0.948</td><td>0.989</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.179</td><td>0.206</td><td>0.557</td><td>0.227</td><td>0.819</td><td>0.930</td><td>0.963</td></tr><tr><td>Self-Teaching [35]</td><td>0.176</td><td>0.195</td><td>0.537</td><td>0.224</td><td>0.823</td><td>0.930</td><td>0.963</td></tr><tr><td>3D Distillation (ours)</td><td>0.126</td><td>0.068</td><td>0.367</td><td>0.159</td><td>0.851</td><td>0.965</td><td>0.991</td></tr><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">ScanNet-Reflection Test Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.181</td><td>0.160</td><td>0.521</td><td>0.221</td><td>0.758</td><td>0.932</td><td>0.976</td></tr><tr><td>Self-Teaching [35]</td><td>0.179</td><td>0.146</td><td>0.502</td><td>0.218</td><td>0.750</td><td>0.938</td><td>0.980</td></tr><tr><td>3D Distillation (ours)</td><td>0.156</td><td>0.096</td><td>0.459</td><td>0.195</td><td>0.766</td><td>0.945</td><td>0.988</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.182</td><td>0.168</td><td>0.530</td><td>0.225</td><td>0.749</td><td>0.937</td><td>0.979</td></tr><tr><td>Self-Teaching [35]</td><td>0.175</td><td>0.145</td><td>0.492</td><td>0.215</td><td>0.757</td><td>0.936</td><td>0.982</td></tr><tr><td>3D Distillation (ours)</td><td>0.152</td><td>0.089</td><td>0.451</td><td>0.190</td><td>0.771</td><td>0.956</td><td>0.990</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.154</td><td>0.129</td><td>0.458</td><td>0.197</td><td>0.822</td><td>0.955</td><td>0.979</td></tr><tr><td>Self-Teaching [35]</td><td>0.151</td><td>0.130</td><td>0.439</td><td>0.191</td><td>0.837</td><td>0.950</td><td>0.978</td></tr><tr><td>3D Distillation (ours)</td><td>0.127</td><td>0.069</td><td>0.379</td><td>0.162</td><td>0.846</td><td>0.961</td><td>0.992</td></tr></table>
Table 3: Main results on the ScanNet-Reflection val and test sets [4]. ScanNet-Reflection is a subset in which specular reflections or glossy surfaces can be observed in every image.
<table><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">ScanNet-NoReflection Val Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.25² ↑</td><td>δ < 1.25³ ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.169</td><td>0.100</td><td>0.395</td><td>0.206</td><td>0.759</td><td>0.932</td><td>0.979</td></tr><tr><td>Self-Teaching [35]</td><td>0.161</td><td>0.090</td><td>0.375</td><td>0.196</td><td>0.777</td><td>0.939</td><td>0.981</td></tr><tr><td>3D Distillation (ours)</td><td>0.159</td><td>0.087</td><td>0.373</td><td>0.195</td><td>0.779</td><td>0.941</td><td>0.983</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.169</td><td>0.102</td><td>0.388</td><td>0.202</td><td>0.766</td><td>0.933</td><td>0.980</td></tr><tr><td>Self-Teaching [35]</td><td>0.160</td><td>0.089</td><td>0.367</td><td>0.192</td><td>0.784</td><td>0.941</td><td>0.982</td></tr><tr><td>3D Distillation (ours)</td><td>0.158</td><td>0.086</td><td>0.365</td><td>0.190</td><td>0.786</td><td>0.942</td><td>0.983</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.140</td><td>0.074</td><td>0.333</td><td>0.171</td><td>0.829</td><td>0.952</td><td>0.984</td></tr><tr><td>Self-Teaching [35]</td><td>0.134</td><td>0.068</td><td>0.317</td><td>0.164</td><td>0.840</td><td>0.956</td><td>0.987</td></tr><tr><td>3D Distillation (ours)</td><td>0.133</td><td>0.065</td><td>0.311</td><td>0.162</td><td>0.838</td><td>0.956</td><td>0.987</td></tr></table>
Table 4: Main results on the ScanNet-NoReflection val set [4]. ScanNet-NoReflection is a subset without reflective surfaces.
We assume stronger models can better capture high-frequency information, thus their depth prediction is more influenced by reflective surfaces. (iii) Self-teaching models are better than self-supervised models. We assume the depth loss, i.e., Eq. (4), can decrease the contribution of challenging and faraway depth during training, thus improving overall accuracy. Nevertheless, our 3D distillation models are much better than self-teaching models.
ScanNet-Reflection [4] results with and without our 3D distillation are shown in Tab. 3. Our 3D distillation can significantly improve the depth accuracy, which supports its effectiveness on reflective surfaces. For example, on the Sq Rel metric of the val set, 3D distillation can improve the self-teaching models by $50.53\% / 56.73\% / 65.13\%$ for Monodepth2 [14], HR-Depth [29], and MonoViT [53] architectures, respectively; on the Sq Rel metric of the test set, 3D distillation can improve the self-teaching models by $34.25\% / 38.62\% / 46.92\%$ for Monodepth2 [14], HR-Depth [29], and MonoViT [53] architectures, respectively.
ScanNet-NoReflection [4] results are shown in Tab. 4. We can observe that 3D distillation improvements extend beyond reflective patches.
<table><tr><td rowspan="2">Architecture</td><td rowspan="2">Model</td><td colspan="7">7-Scenes</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td rowspan="3">Monodepth2 [14]</td><td>Self-Supervised [14]</td><td>0.153</td><td>0.071</td><td>0.323</td><td>0.190</td><td>0.793</td><td>0.959</td><td>0.989</td></tr><tr><td>Self-Teaching [35]</td><td>0.152</td><td>0.069</td><td>0.321</td><td>0.188</td><td>0.796</td><td>0.961</td><td>0.989</td></tr><tr><td>3D Distillation (ours)</td><td>0.149</td><td>0.065</td><td>0.308</td><td>0.185</td><td>0.800</td><td>0.963</td><td>0.990</td></tr><tr><td rowspan="3">HR-Depth [29]</td><td>Self-Supervised [14]</td><td>0.157</td><td>0.078</td><td>0.334</td><td>0.193</td><td>0.790</td><td>0.957</td><td>0.988</td></tr><tr><td>Self-Teaching [35]</td><td>0.149</td><td>0.067</td><td>0.315</td><td>0.185</td><td>0.802</td><td>0.963</td><td>0.990</td></tr><tr><td>3D Distillation (ours)</td><td>0.147</td><td>0.064</td><td>0.304</td><td>0.183</td><td>0.804</td><td>0.965</td><td>0.990</td></tr><tr><td rowspan="3">MonoViT [53]</td><td>Self-Supervised [14]</td><td>0.140</td><td>0.059</td><td>0.297</td><td>0.176</td><td>0.821</td><td>0.967</td><td>0.992</td></tr><tr><td>Self-Teaching [35]</td><td>0.137</td><td>0.057</td><td>0.293</td><td>0.174</td><td>0.827</td><td>0.968</td><td>0.992</td></tr><tr><td>3D Distillation (ours)</td><td>0.134</td><td>0.053</td><td>0.284</td><td>0.170</td><td>0.831</td><td>0.972</td><td>0.993</td></tr></table>
Table 5: Main results on 7-Scenes [39]. All the models are trained using the ScanNet train set [4].
<table><tr><td rowspan="2">Training Label</td><td colspan="7">ScanNet Val Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td>pred. depth</td><td>0.160</td><td>0.090</td><td>0.365</td><td>0.193</td><td>0.780</td><td>0.941</td><td>0.983</td></tr><tr><td>proj. depth</td><td>0.176</td><td>0.102</td><td>0.421</td><td>0.224</td><td>0.710</td><td>0.919</td><td>0.980</td></tr><tr><td>proj. depth with low-resolution training</td><td>0.344</td><td>0.341</td><td>0.833</td><td>0.484</td><td>0.326</td><td>0.602</td><td>0.804</td></tr><tr><td>pred. depth + proj. depth with diff</td><td>0.166</td><td>0.094</td><td>0.397</td><td>0.212</td><td>0.740</td><td>0.926</td><td>0.981</td></tr><tr><td>pred. depth + proj. depth with mv</td><td>0.158</td><td>0.085</td><td>0.365</td><td>0.195</td><td>0.773</td><td>0.940</td><td>0.984</td></tr><tr><td>pred. depth + proj. depth with uncer (ours)</td><td>0.157</td><td>0.083</td><td>0.357</td><td>0.190</td><td>0.782</td><td>0.943</td><td>0.985</td></tr><tr><td rowspan="2">Training Label</td><td colspan="7">ScanNet-Reflection Val Set</td></tr><tr><td>Abs Rel ↓</td><td>Sq Rel ↓</td><td>RMSE ↓</td><td>RMSE log ↓</td><td>δ < 1.25 ↑</td><td>δ < 1.252 ↑</td><td>δ < 1.253 ↑</td></tr><tr><td>pred. depth</td><td>0.192</td><td>0.188</td><td>0.548</td><td>0.233</td><td>0.764</td><td>0.920</td><td>0.967</td></tr><tr><td>proj. depth</td><td>0.189</td><td>0.130</td><td>0.565</td><td>0.249</td><td>0.664</td><td>0.901</td><td>0.979</td></tr><tr><td>proj. depth with low-resolution training</td><td>0.377</td><td>0.469</td><td>1.128</td><td>0.561</td><td>0.262</td><td>0.508</td><td>0.730</td></tr><tr><td>pred. depth + proj. depth with diff</td><td>0.172</td><td>0.115</td><td>0.521</td><td>0.229</td><td>0.709</td><td>0.914</td><td>0.983</td></tr><tr><td>pred. depth + proj. depth with mv</td><td>0.163</td><td>0.109</td><td>0.469</td><td>0.204</td><td>0.772</td><td>0.935</td><td>0.984</td></tr><tr><td>pred. depth + proj. depth with uncer (ours)</td><td>0.156</td><td>0.093</td><td>0.442</td><td>0.191</td><td>0.786</td><td>0.943</td><td>0.987</td></tr></table>
Table 6: Ablation experiments using different training labels. The network is Monodepth2 [14] architecture. 'pred. depth' and 'proj. depth' indicate only using the predicted depth or projected depth as training supervision, respectively. 'low-resolution training' denotes training the model with low-resolution [48]. 'diff' denotes fusing using the difference strategy. 'mv' denotes fusing using the multi-view consistency check [49]. 'uncer' denotes fusing using the uncertainty [23].
7-Scenes [39] results with and without our 3D distillation are shown in Tab. 5. 3D distillation models are still the best, which demonstrates the cross-dataset generalizability of 3D distillation. For example, on the Sq Rel metric, 3D distillation can improve the self-teaching models by $5.80\%$ / $4.48\%$ / $7.02\%$ for Monodepth2 [14], HR-Depth [29], and MonoViT [53] architectures, respectively.
# 4.3. Ablation Experiments
We train models using different training labels and evaluate these models in Tab. 6. We use the Monodepth2 architecture [14], as its training time is the shortest. We can make the following observations: (i) 'proj. depth' is much worse than 'pred. depth', as the projected depth is over-smoothed. This supports that it is important to fuse the predicted depth and projected depth. (ii) Among the strategies to fuse the predicted depth and projected depth, 'uncer' is better than 'diff' and 'mv'. The 'diff' strategy means that, for a pixel $x$, if $D_{t}(x) - P_{t}(x) > 0.25P_{t}(x)$, this pixel is regarded as being on a reflective surface. The 'mv' strategy means that pixels which fail the multi-view consistency check [49] are regarded as being on reflective surfaces. Specifically, in a training triple we check the pixels in the target frame with the aid of the two source frames, i.e., the three-view consistency in [49]. (iii) Low-resolution training/high-resolution testing [48] is the worst, because cross-resolution testing for monocular depth estimation is challenging [8, 18]. Specifically, we train the model with a resolution of $128 \times 96$ and test with a resolution of $384 \times 288$.
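
For completeness, the 'diff' baseline above amounts to a relative-difference test in place of the ensemble uncertainty (a sketch, not the authors' code):

```python
import numpy as np

def fuse_with_diff(depth_pred, depth_proj, rel_thr=0.25):
    # A pixel is treated as reflective when the prediction exceeds the
    # projected depth by more than 25% of the projected value.
    reflective = (depth_pred - depth_proj) > rel_thr * depth_proj
    return np.where(reflective, depth_proj, depth_pred)
```
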

Figure 5 (columns, left to right): Image, Ground Truth, Self-Supervised, Self-Teaching, 3D Distillation (ours).

Figure 5: Our 3D distillation can significantly improve the depth prediction accuracy on reflective surfaces. The network is MonoViT architecture [53]. The image of the first row is from scene0704_00 of the ScanNet val set [4]. The images of the other rows are from scene0721_00, scene0776_00, scene0781_00, and scene0796_00 of the ScanNet test set [4], respectively.
# 5. Conclusion
We have proposed 3D distillation: a novel training framework for improving SSMDE on reflective surfaces. Motivated by the view-dependent property of reflective surfaces, 3D distillation utilizes the multi-view 3D information aggregated from the predicted depth of multiple frames to generate accurate pseudo-labels for reflective surfaces. 3D distillation significantly improves the depth estimation accuracy for various architectures [14, 29, 53] and on multiple datasets [4, 39], without adding computational cost or model parameters during inference.
Limitations and Future Work In the 3D distillation training framework, (i) perfect reflections such as mirrors are not handled; (ii) ensemble-based uncertainty requires multiple models during training; (iii) camera poses are assumed known during training. In future work, all these limitations could be tackled, e.g., by using dedicated networks to predict reflection masks and camera poses. Besides, since depth and normal estimation are synergistic tasks [6], it could be a promising future direction to combine 3D distillation training with normal estimation and use predicted depth and surface normal to refine each other. Moreover, applying 3D distillation recursively could lead to more improvements. For the sake of simplicity, in this paper, we opt for a single iteration that already proves to be effective.
Acknowledgements T-K. Kim is supported by NST grant (CRC 21011, MSIT) and KOCCA grant (R2022020028, MCST).
# References
[1] Henrik Aanaes, Rasmus Ramsbøl Jensen, George Vogiatzis, Engin Tola, and Anders Bjorholm Dahl. Large-scale data for multiple-view stereopsis. IJCV, 2016. 3, 5
[2] Eric Brachmann and Carsten Rother. Learning less is more - 6d camera localization via 3d surface regression. In CVPR, 2018. 6
[3] Di Chang, Aljaz Bozic, Tong Zhang, Qingsong Yan, Yingcong Chen, Sabine Susstrunk, and Matthias Nießner. Rc-mvsnet: Unsupervised multi-view stereo with neural rendering. In ECCV, 2022. 3, 5
[4] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas A. Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. 1, 2, 3, 4, 5, 6, 7, 8, 9
[5] Arda Düzçeker, Silvano Galliani, Christoph Vogel, Pablo Speciale, Mihai Dusmanu, and Marc Pollefeys. Deepvideomvs: Multi-view stereo on video with recurrent spatio-temporal fusion. In CVPR, 2021. 6
[6] Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir. Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans. In ICCV, 2021. 9
[7] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In NeurIPS, 2014. 1, 6
[8] José M. Fácil, Benjamin Ummenhofer, Huizhong Zhou, Luis Montesano, Thomas Brox, and Javier Civera. Camconvs: Camera-aware multi-scale convolutions for single-view depth. In CVPR, 2019. 5, 8
[9] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 2004. 3
[10] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016. 3, 5
[11] Ravi Garg, B. G. Vijay Kumar, Gustavo Carneiro, and Ian D. Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In ECCV, 2016. 1, 2
[12] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In CVPR, 2012. 1, 2
[13] Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017. 1, 2, 4
[14] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth estimation. In ICCV, 2019. 1, 2, 3, 4, 5, 6, 7, 8, 9
[15] Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Allan Raventos, and Adrien Gaidon. 3d packing for self-supervised monocular depth estimation. In CVPR, 2020. 2
[16] Vitor Guizilini, Rui Hou, Jie Li, Rares Ambrus, and Adrien Gaidon. Semantically-guided representation learning for self-supervised monocular depth. In ICLR, 2020. 2
[17] Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, and Xiaowei Zhou. Neural 3d scene reconstruction with the Manhattan-world assumption. In CVPR, 2022. 3, 5
[18] Mu He, Le Hui, Yikai Bian, Jian Ren, Jin Xie, and Jian Yang. Ra-depth: Resolution adaptive self-supervised monocular depth estimation. In ECCV, 2022. 5, 8
[19] Pan Ji, Runze Li, Bir Bhanu, and Yi Xu. Monoindoor: Towards good practice of self-supervised monocular depth estimation for indoor environments. In ICCV, 2021. 3
[20] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS, 2017. 3
[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 5
[22] Marvin Klingner, Jan-Aike Termöhlen, Jonas Mikolajczyk, and Tim Fingscheidt. Self-supervised monocular depth estimation: Solving the dynamic object problem by semantic guidance. In ECCV, 2020. 2
[23] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017. 3, 4, 5, 8
[24] Youngwan Lee, Jonghee Kim, Jeffrey Willette, and Sung Ju Hwang. Mpvit: Multi-path vision transformer for dense prediction. In CVPR, 2022. 3
[25] Boying Li, Yuan Huang, Zeyu Liu, Danping Zou, and Wenxian Yu. Structdepth: Leveraging the structural regularities for self-supervised indoor depth estimation. In ICCV, 2021. 3
[26] Hanhan Li, Ariel Gordon, Hang Zhao, Vincent Casser, and Anelia Angelova. Unsupervised monocular depth learning in dynamic scenes. In CoRL, 2020. 2
[27] Yen-Cheng Liu, Chih-Yao Ma, and Zsolt Kira. Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors. In CVPR, 2022. 3
[28] Xiaohu Lu, Jian Yao, Haoang Li, and Yahui Liu. 2-line exhaustive searching for real-time vanishing point estimation in manhattan world. In WACV, 2017. 3
[29] Xiaoyang Lyu, Liang Liu, Mengmeng Wang, Xin Kong, Lina Liu, Yong Liu, Xinxin Chen, and Yi Yuan. Hr-depth: High resolution self-supervised monocular depth estimation. In AAAI, 2021. 2, 3, 4, 5, 6, 7, 8, 9
[30] Matthew Matl. Pyrender. https://github.com/mmatl/pyrender, 2019. 5
[31] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. 3, 5
[32] Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew W. Fitzgibbon. Kinectfusion: Real-time dense surface mapping and tracking. In ISMAR, 2011. 2, 4, 5
[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019. 5
[34] Andrea Pilzer, Stéphane Lathuilière, Nicu Sebe, and Elisa Ricci. Refine and distill: Exploiting cycle-inconsistency and knowledge distillation for unsupervised monocular depth estimation. In CVPR, 2019. 2
[35] Matteo Poggi, Filippo Aleotti, Fabio Tosi, and Stefano Mattoccia. On the uncertainty of self-supervised monocular depth estimation. In CVPR, 2020. 3, 5, 6, 7, 8
[36] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, 2021. 3
[37] Ashutosh Saxena, Sung H. Chung, and Andrew Y. Ng. Learning depth from single monocular images. In NeurIPS, 2005. 1
[38] Mohamed Sayed, John Gibson, Jamie Watson, Victor Prisacariu, Michael Firman, and Clément Godard. Simplerecon: 3d reconstruction without 3d convolutions. In ECCV, 2022. 5, 6
[39] Jamie Shotton, Ben Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, and Andrew W. Fitzgibbon. Scene coordinate regression forests for camera relocalization in RGB-D images. In CVPR, 2013. 1, 2, 6, 8, 9
[40] Chang Shu, Kun Yu, Zhixiang Duan, and Kuiyuan Yang. Feature-metric loss for self-supervised learning of depth and egomotion. In ECCV, 2020. 2
[41] Jaime Spencer, Richard Bowden, and Simon Hadfield. Defeat-net: General monocular depth via simultaneous unsupervised representation learning. In CVPR, 2020. 2
[42] Jaime Spencer, C. Stella Qian, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James H. Elder, Richard Bowden, Heng Cong, Stefano Mattoccia, Matteo Poggi, Zeeshan Khan Suri, Yang Tang, Fabio Tosi, Hao Wang, Youmin Zhang, Yusheng Zhang, and Chaoqiang Zhao. The monocular depth estimation challenge. In WACVW, 2023. 1, 3
[43] Jaime Spencer, Chris Russell, Simon Hadfield, and Richard Bowden. Deconstructing self-supervised monocular reconstruction: The design decisions that matter. TMLR, 2022. 1
[44] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004. 4
[45] Jamie Watson, Oisin Mac Aodha, Victor Prisacariu, Gabriel J. Brostow, and Michael Firman. The temporal opportunist: Self-supervised multi-frame monocular depth. In CVPR, 2021. 2
[46] Cho-Ying Wu, Jialiang Wang, Michael Hall, Ulrich Neumann, and Shuochen Su. Toward practical monocular indoor depth estimation. In CVPR, 2022. 3
[47] Hongbin Xu, Zhipeng Zhou, Yali Wang, Wenxiong Kang, Baigui Sun, Hao Li, and Yu Qiao. Digging into uncertainty in self-supervised multi-view stereo. In ICCV, 2021. 3
[48] Jiayu Yang, Jose M. Alvarez, and Miaomiao Liu. Self-supervised learning of depth inference for multi-view stereo. In CVPR, 2021. 3, 5, 8
[49] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In ECCV, 2018. 8
[50] Wang Yifan, Carl Doersch, Relja Arandjelovic, João Carreira, and Andrew Zisserman. Input-level inductive biases for 3d reconstruction. In CVPR, 2022. 5
[51] Zehao Yu, Lei Jin, and Shenghua Gao. $\mathbf{P}^2$ net: Patch-match and plane-regularization for unsupervised indoor depth estimation. In ECCV, 2020. 3
[52] Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, and Ian D. Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In CVPR, 2018. 2
[53] Chaoqiang Zhao, Youmin Zhang, Matteo Poggi, Fabio Tosi, Xianda Guo, Zheng Zhu, Guan Huang, Yang Tang, and Stefano Mattoccia. Monovit: Self-supervised monocular depth estimation with a vision transformer. In 3DV, 2022. 1, 2, 3, 4, 5, 6, 7, 8, 9
[54] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3d: A modern library for 3d data processing. CoRR, 2018. 5
[55] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017. 1, 2
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d57acc5741a19bbdd8d4bbdd2b975863d1f62283c3b63c749ea7b7dce194d051
size 979274
3ddistillationimprovingselfsupervisedmonoculardepthestimationonreflectivesurfaces/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4e777fdd20ef47acebfdb8e347b81147b559221019a0f3a8c4e184c7cc09a5c
size 433241
3dhackerspectrumbaseddecisionboundarygenerationforhardlabel3dpointcloudattack/de922a90-2feb-49b4-b9ef-6ae72d2ba6c0_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:266775391a3541671b271d0ef0fd1bc210205cac333613780780205ccf2f405c
size 83273
3dhackerspectrumbaseddecisionboundarygenerationforhardlabel3dpointcloudattack/de922a90-2feb-49b4-b9ef-6ae72d2ba6c0_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:190e9cfbd71fb0b01776d61090577fa8e7bf4a56585ca8bc31b02033bd3a4aff
size 102916