Add Batch 4b2b806f-092e-47d5-84e5-3bfcdf971655
This view is limited to 50 files because it contains too many changes. See raw diff
- 360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_content_list.json +3 -0
- 360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_model.json +3 -0
- 360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_origin.pdf +3 -0
- 360attackdistortionawareperturbationsfromperspectiveviews/full.md +449 -0
- 360attackdistortionawareperturbationsfromperspectiveviews/images.zip +3 -0
- 360attackdistortionawareperturbationsfromperspectiveviews/layout.json +3 -0
- 360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_content_list.json +3 -0
- 360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_model.json +3 -0
- 360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_origin.pdf +3 -0
- 360monodepthhighresolution360degmonoculardepthestimation/full.md +316 -0
- 360monodepthhighresolution360degmonoculardepthestimation/images.zip +3 -0
- 360monodepthhighresolution360degmonoculardepthestimation/layout.json +3 -0
- 3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_content_list.json +3 -0
- 3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_model.json +3 -0
- 3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_origin.pdf +3 -0
- 3daclearningattributecompressionforpointclouds/full.md +333 -0
- 3daclearningattributecompressionforpointclouds/images.zip +3 -0
- 3daclearningattributecompressionforpointclouds/layout.json +3 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_content_list.json +3 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_model.json +3 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_origin.pdf +3 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/full.md +305 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/images.zip +3 -0
- 3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/layout.json +3 -0
- 3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_content_list.json +3 -0
- 3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_model.json +3 -0
- 3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_origin.pdf +3 -0
- 3dcommoncorruptionsanddataaugmentation/full.md +366 -0
- 3dcommoncorruptionsanddataaugmentation/images.zip +3 -0
- 3dcommoncorruptionsanddataaugmentation/layout.json +3 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_content_list.json +3 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_model.json +3 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_origin.pdf +3 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/full.md +291 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/images.zip +3 -0
- 3deformrscertifyingspatialdeformationsonpointclouds/layout.json +3 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_content_list.json +3 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_model.json +3 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_origin.pdf +3 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/full.md +316 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/images.zip +3 -0
- 3dhumantonguereconstructionfromsingleinthewildimages/layout.json +3 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_content_list.json +3 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_model.json +3 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_origin.pdf +3 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/full.md +245 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/images.zip +3 -0
- 3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/layout.json +3 -0
- 3dmomentsfromnearduplicatephotos/48a663ff-cb9f-445e-941f-b1850bb9d2fa_content_list.json +3 -0
- 3dmomentsfromnearduplicatephotos/48a663ff-cb9f-445e-941f-b1850bb9d2fa_model.json +3 -0
360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7121749f848d302e1014af5461c649b71f0f0aa8c4c01dc94e804e3ae53885a9
size 87596
360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c26a44e74ad086837ec0edded7770e83ffffffee7dff56130cc4211c71f48c8
size 110856
360attackdistortionawareperturbationsfromperspectiveviews/76b7526c-857a-49f6-a30e-32cc83caefa6_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72300dc053f02b597f41cec8dc28ed36a3e1dc57ef7f327f34ac3f18def3ae18
size 3171767
360attackdistortionawareperturbationsfromperspectiveviews/full.md
ADDED
@@ -0,0 +1,449 @@
# 360-Attack: Distortion-Aware Perturbations from Perspective-Views

Yunjian Zhang $^{1,2}$ Yanwei Liu $^{1,*}$ Jinxia Liu $^{3}$ Jingbo Miao $^{1,2}$

Antonios Argyriou $^{4}$ Liming Wang $^{1}$ Zhen Xu $^{1}$

$^{1}$ Institute of Information Engineering, Chinese Academy of Sciences

$^{2}$ University of Chinese Academy of Sciences, $^{3}$ Zhejiang Wanli University, $^{4}$ University of Thessaly

{zhangyunjian, liuyanwei}@iie.ac.cn, liujinxia1969@126.com, miaojingbo@iie.ac.cn

{wangliming, xuzhen}@ie.ac.cn

# Abstract

The application of deep neural networks (DNNs) to 360-degree images has achieved remarkable progress in recent years. However, DNNs have been shown to be vulnerable to well-crafted adversarial examples, which may trigger severe safety problems in real-world applications based on 360-degree images. In this paper, we propose an adversarial attack targeting spherical images, called 360-attack, that transfers adversarial perturbations from perspective-view (PV) images to a final adversarial spherical image. Given a target spherical image, we first represent it with a set of planar PV images, and then perform 2D attacks on them to obtain adversarial PV images. Considering the projective distortion between spherical and PV images, we propose a distortion-aware attack to reduce the negative impact of distortion on the attack. Moreover, to reconstruct the final adversarial spherical image with high aggressiveness, we calculate the spherical saliency map with a novel spherical spectrum method and then propose a saliency-aware fusion strategy that merges multiple inverse perspective projections for the same position on the spherical image. Extensive experimental results show that 360-attack is effective for disturbing spherical images in the black-box setting. Our attack also proves the presence of adversarial transferability from $\mathbb{Z}^2$ to SO(3) groups.

# 1. Introduction

Previous studies have shown that deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples [10, 28, 36, 40]. Many attack algorithms have been proposed for various tasks, including image classification [16], video captioning [39], 3D mesh classification [33], and point cloud recognition [43]. However, during the investigation of this issue, the security of DNNs applied to spherical images has been largely ignored.

Recently, we have observed an increasing number of computer vision problems requiring spherical signals, for instance, omnidirectional RGB-D images generated from panorama cameras [4,17], 360-degree videos captured from sensors on self-driving cars [15,47], and spherical data projected from the 3D domain [12]. Inspired by the remarkable success of DNNs in various tasks, many approaches have been proposed to apply DNNs to spherical images to solve real-world problems, including advanced driver assistance systems (ADAS) [24], autonomous navigation [13, 22, 29], and VR/AR applications [35,45].

DNNs used for spherical images typically operate in two domains: the spherical domain and the panoramic domain. The first type of models, referred to as spherical models, directly process the spherical image in the spherical domain [7, 8, 12], while models on the planar domain, called panoramic models, operate on the panorama transformed from the spherical image [19, 34, 46].

Due to the extensive applications of spherical images, the vulnerability of DNNs used for applications around them needs further investigation. A straightforward way to generate adversarial spherical images is to attack the spherical or panoramic models directly in the white-box setting. However, these models are difficult to obtain in practice due to their greatly divergent principles, and backpropagation on them is of low efficiency compared to the same operation on standard planar CNNs [34]. Another intractable problem of attacking the panoramic model is that the panoramas usually suffer from great distortion compared to the raw spherical images, reducing the effect of the added perturbations when the adversarial panoramas are re-projected to the original spherical domain. Therefore, an efficient attack method with less distortion is required. Considering that the perspective views of a spherical image are less distorted and can be processed with a simple planar network, in this paper, we propose to generate adversarial spherical examples by disturbing their planar perspective-view (PV) representations. Specifically, we simultaneously disturb the PV images rendered from different positions on the spherical image, and reconstruct the adversarial spherical image from them with a re-projection and a fusion method. As our attack is implemented by transferring 2D adversarial perturbations to the 3D space without any knowledge about the target model, it can be considered a black-box attack.

Overall, our contributions are summarized as follows:

- To the best of our knowledge, we are the first to propose a black-box attack towards spherical models, called 360-attack, by generating adversarial spherical images from their corresponding PV images. 360-attack is performed directly on the planar domain, and eventually the perturbations are transferred to the spherical images.
- To obtain highly transferable adversarial PV images for attacking the spherical model, we propose a novel Distortion-Aware Iterative Fast Gradient Sign Method (DAI-FGSM) that accounts for the perturbation degradation caused by plane-to-sphere projection distortion. Accordingly, the negative effect of the projective distortion on the attack is alleviated.
- We propose a novel spherical-spectrum-based saliency detection method, and then propose a saliency-aware fusion strategy to merge multiple inverse perspective projections for the same position when generating the final adversarial spherical images.
- Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of our 360-attack on DNNs designed for spherical images, and also prove that adversarial perturbations can be transferred from $\mathbb{Z}^2$ to SO(3) groups.

# 2. Related Work

# 2.1. Adversarial Attacks

Since Szegedy et al. [36] first reported the existence of adversarial examples in DNNs, various attacks have been proposed. The first type is white-box attacks, which generate adversarial examples with full knowledge of the target model. Goodfellow et al. [16] proposed the Fast Gradient Sign Method (FGSM), which directly generates adversarial examples by calculating the gradients of the loss function with respect to the input image. After that, multiple iterative versions of FGSM were proposed, including the Basic Iterative Method (BIM) [23], Projected Gradient Descent (PGD) [26], Momentum Iterative FGSM (MI-FGSM) [11], and the Nesterov Accelerated Gradient Method (NI-FGSM) [25].

Different from white-box attacks, black-box methods aim to attack DNNs without any knowledge regarding their inner workings. Generally, black-box attacks can be divided into three categories: score-based, decision-based, and transfer-based attacks. Score-based attacks assume that the attackers can query the prediction probability of the target model. These methods usually rely on sampling methods to approximate the gradients used to generate adversarial examples [5, 21]. In decision-based attacks, the only allowed operation for the attackers is to query the output labels of given samples from the target network. The Boundary attack [2] and its variants [6, 9] are feasible in this setting. Instead of directly attacking the target model, transfer-based attacks make use of the fact that adversarial examples have high transferability across different models [20, 37, 38, 41]. Specifically, they generate adversarial perturbations on a white-box model and then transfer them to the unknown target network. Moreover, they can overcome the gradient masking defenses [1, 30] deployed on the target network. In this paper, we further investigate the transferability of adversarial examples, and the results demonstrate that this property also exists across different representation spaces, such as the planar space and the spherical space.

# 2.2. DNNs for Spherical Images

Planar DNNs are difficult to apply directly to spherical images because the underlying projection models and data formats of planar and spherical images are different. To address this discrepancy, there are two types of methods. In the first, the spherical images are projected to coronas in the planar $\mathbb{Z}^2$ space, and then planar DNN models are applied to them [13, 19, 34, 46]. However, this projection introduces significant distortion, making the convolution results inaccurate. More recently, spherical CNNs that directly handle spherical images have been presented [8, 12, 22]. In these schemes, the domain space is transformed to the three-dimensional rotation group SO(3), and the rotation-equivariant convolution is implemented by using a generalized Fast Fourier Transform algorithm.

# 3. The Proposed 360-Attack

# 3.1. Overview of the Framework

The pipeline of 360-attack is shown in Fig. 1. First, multiple PV images are rendered from different positions on the sphere. Next, we attack a planar DNN to obtain adversarial PV images. Note that the projection between spherical and PV images introduces distortion [48], which interferes with the effect of the perturbations in PV images. Therefore, traditional 2D attack methods are not very efficient in this scenario because they do not consider the characteristics of the projection distortion and treat all pixels on the PV image equally, as if they were non-distorted with respect to their spherical counterparts. In 360-attack, adversarial PV images are generated by a novel distortion-aware method that conquers the projection distortion. After that, the adversarial PV images are projected to the sphere. As the inversely projected spherical areas of the multiple PV images overlap with each other, we finally merge them into an adversarial spherical image with a saliency-aware fusion method.

Figure 1. The pipeline of the proposed 360-attack.

# 3.2. Perspective Projection

For a given position $P$ at spherical coordinates $(\theta_P, \phi_P)$ on the sphere, where $\theta_P$ and $\phi_P$ stand for latitude and longitude respectively, once the field of view $f_h \times f_w$ and the desired perspective resolution $h \times w$ are set, the PV image can be generated by a rectilinear projection that maps a position $(u, v)$ on the PV image to a 3D position $(X, Y, Z)$. The mapping relation between 2D and 3D coordinates is formulated as

$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \frac{R}{\sqrt{x^2 + y^2 + z^2}} \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \tag{1}
$$

$$
\begin{aligned}
x &= 2\tan(f_h/2) \cdot (u + 0.5)/w - \tan(f_h/2), \\
y &= \tan(f_w/2) - 2\tan(f_w/2) \cdot (v + 0.5)/h, \\
z &= 1.0,
\end{aligned} \tag{2}
$$

where $R$ is a rotation matrix.

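As an illustration of Eq.(1)-(2), the following minimal NumPy sketch maps a PV pixel to a unit-sphere direction; the function name, the radian convention for the fields of view, and the identity rotation in the example are assumptions for illustration rather than part of the original method.

```python
import numpy as np

def pv_to_sphere(u, v, h, w, fov_h, fov_w, R):
    """Map a PV pixel (u, v) to a unit-sphere direction via Eq.(1)-(2).

    (h, w) is the PV resolution, fov_h/fov_w are fields of view in radians,
    and R is a 3x3 rotation aligning the tangent plane with the view center.
    """
    # Eq.(2): coordinates on the tangent plane at unit distance (z = 1)
    x = 2.0 * np.tan(fov_h / 2) * (u + 0.5) / w - np.tan(fov_h / 2)
    y = np.tan(fov_w / 2) - 2.0 * np.tan(fov_w / 2) * (v + 0.5) / h
    z = 1.0
    # Eq.(1): normalize onto the unit sphere, then rotate toward the view center
    p = np.array([x, y, z]) / np.sqrt(x**2 + y**2 + z**2)
    return R @ p

# Example: the center pixel of a 128x128 PV image with a 120-degree field of view
print(pv_to_sphere(63.5, 63.5, 128, 128, np.radians(120), np.radians(120), np.eye(3)))
```

Iterating this mapping over all $(u, v)$ and sampling the spherical image at the resulting directions yields one PV rendering; the inverse mapping is used later for the re-projection step.
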
# 3.3. Distortion-Aware Iterative Fast Gradient Sign Attack

Given a spherical image $x_{s}$ labeled as $y_{o}$ and the spherical model $C_{s}$, the problem of generating an adversarial spherical image $x_{s}^{adv}$ can be formulated as:

$$
\min \left\| x_{s}^{adv} - x_{s} \right\|_{\infty} \quad \text{s.t.} \quad C_{s}\left(x_{s}^{adv}\right) \neq y_{o}. \tag{3}
$$

As 360-attack is implemented from the PV domain, the adversarial PV image $x_{p}^{adv}$ is also required to satisfy $C_{p}(x_{p}^{adv}) \neq y_{o}$, where $C_{p}(\cdot)$ is the planar classifier. It can be proved that the magnitude of the perturbations added on the PV image will decrease after the projection. For a position $\rho^{o}$ on the spherical image, the perturbation added on it (denoted as $\xi^{o}$) is calculated from the perturbations of several positions on the PV image, that is

$$
\left| \xi^{o} \right| = \left| \sum_{i} \omega_{i} \xi_{i}^{p} \right| \leq \sum_{i} \left| \omega_{i} \xi_{i}^{p} \right|, \tag{4}
$$

where $\xi_i^p$ is the perturbation of the $i$-th pixel related to $\rho^o$ on the PV image, $\omega_{i}$ is its weight during the interpolation, and $\sum_{i}\omega_{i} = 1$. As $|\omega_{i}\xi_{i}^{p}|\leq |\xi_{i}^{p}|$, it follows that $|\xi^o| \leq \max_i |\xi_i^p|$, which means that the magnitude of the perturbation on the spherical image is limited by the allowable size of the PV perturbations. Therefore, the attack performance will be degraded when the adversarial PV image is projected to the spherical image. To mitigate this issue, we propose a distortion-aware attack operating on the PV domain to minimize the magnitude loss of the perturbation during the inverse perspective projection. We rewrite Eq.(3) as

$$
\max L\left(x_{p}^{adv}, y_{o}\right), \quad \min L_{p}(e) \quad \text{s.t.} \quad \|e\|_{\infty} \leq \epsilon, \tag{5}
$$

where $e$ is the PV perturbation, $L(\cdot)$ is the loss function, $L_{p}(\cdot)$ is the pixel-level perturbation loss caused by the inverse perspective projection $P_{I}(\cdot)$, and $\epsilon$ limits the magnitude of the perturbations. We seek a transformation $F_{t}$ to compensate for the distortion introduced by $P_{I}(\cdot)$; the second objective can then be solved approximately by

$$
\min \left\| P_{I}\left(F_{t}(e)\right) - e \right\|_{2}. \tag{6}
$$

In order to find an effective $F_{t}$, we analyze the perspective projection in depth, model the position distortion and pixel intensity distortion between PV and spherical images according to spherical triangle formulas, and then derive a pixel-wise transformation with geometric knowledge.

Figure 2. Distortion for the rectilinear projection.

Theorem 1. Let $D_I$ be the pixel intensity distortion introduced by the perspective projection; then

$$
D_{I} \propto \sqrt{\frac{\sin \beta}{\beta \left[ \cos^{2} \frac{\beta}{2} - \sin^{2} \arccos\left(\cos \theta_{A} \cos \phi_{A}\right) \right]}}, \tag{7}
$$

where $\alpha$ is the spherical angle corresponding to the arc between the perspective center and the projection point, $\beta$ is the angle resolution of every sampling grid on the sphere, and $(\theta_A,\phi_A)$ is the spherical coordinate of the projection point.

Proof. As shown in Fig. 2, an arbitrary point $A$ on the sphere centered at $O$ can be projected to the tangent plane at the perspective center position $P$; its projected point is denoted as $A'$, and the angle between the rays $OP$ and $OA$ is denoted as $\alpha$. As the grids on the plane are obtained by sampling, each of them represents a small portion of the sphere. For a small neighbourhood centered at $A$, its left and right endpoints are denoted by $B$ and $C$ respectively, and $\angle BOC$ is the angle resolution of the sampling grid, denoted as $\beta$.

An arc on the sphere is stretched to a line during the projection, and the position deviation distortion $D_{p}$ can be formulated as the ratio between the length of the projected line $B^{\prime}C^{\prime}$ and its corresponding arc $\widehat{BAC}$:

$$
D_{p} = B^{\prime}C^{\prime} / \widehat{BAC}. \tag{8}
$$

If the spherical image is normalized to a unit sphere, then

$$
D_{p} = \left[ \tan\left(\alpha + \frac{\beta}{2}\right) - \tan\left(\alpha - \frac{\beta}{2}\right) \right] / \beta. \tag{9}
$$

According to the spherical law of cosines,

$$
\alpha = \arccos(\sin \theta_{P} \sin \theta_{A} + \cos \theta_{P} \cos \theta_{A} \cos \Delta \phi), \tag{10}
$$

where $\theta_{P}$ and $\theta_{A}$ are the latitudes of points $P$ and $A$, and $\Delta \phi$ is the difference between the longitudes of these two points. Combining Eq.(9) and Eq.(10), we obtain

$$
D_{p} = \frac{\sin \beta}{\beta \left(\cos^{2} \frac{\beta}{2} - \sin^{2} \arccos(\cos \theta_{A} \cos \Delta \phi)\right)}. \tag{11}
$$

Based on the image energy formulation [31] and Parseval's theorem, the energy of a disk on the spherical image can be formulated as

$$
E\left(I_{S}\right) = \sum_{\phi} \sum_{\theta} I_{\theta \phi}^{2}, \tag{12}
$$

where $I_{S}$ is the disk on the sphere, $I_{\theta \phi}$ is the pixel on the sphere, and $E(\cdot)$ is the energy function. Ideally, the projection process would follow the conservation of energy. In practice, the perspective projection introduces position distortion by stretching the ideal non-distorted plane, which is implemented by interpolation. The energy of the PV image $I_P'$ can then be represented by

$$
E\left(I_{P}^{\prime}\right) = \Phi\left[ D_{p} \cdot E\left(I_{S}\right) \right], \tag{13}
$$

where $\Phi(\cdot)$ denotes the projection function. If we define the pixel intensity distortion $D_{I}$ as the ratio between $E(I_P^{\prime})$ and $E(I_S)$, then in terms of Eq.(12) and Eq.(13), the relation between $D_{I}$ and $D_{p}$ can be approximately characterized as

$$
D_{I} \propto \sqrt{D_{p}}. \tag{14}
$$

Finally, with Eq.(11),

$$
D_{I} \propto \sqrt{\frac{\sin \beta}{\beta \left[ \cos^{2} \frac{\beta}{2} - \sin^{2} \arccos\left(\cos \theta_{A} \cos \phi_{A}\right) \right]}}. \tag{15}
$$

Theorem 1 indicates that the pixels in the PV image suffer from the distortion $D_{I}$ when projected onto the sphere. Therefore $D_{I}$ can be seen as a mask operating on the image, and we can obtain

$$
P_{I}\left(F_{t}(e)\right) = F_{t}(e) / D_{I}. \tag{16}
$$

To solve Eq.(6), $F_{t}$ is expected to satisfy $P_{I}(F_{t}(e))\approx e$; then

$$
F_{t}(e) = e \cdot D_{I}. \tag{17}
$$

We propose to solve Eq.(5) in two steps: first, the perturbations are calculated with a standard attack approach; then they are adjusted with the distortion mask $D_I$. Therefore, the derived distortion compensation function can serve as an additional module for any existing attack, such as FGSM, PGD, and MI-FGSM. In this paper, we choose PGD, and propose the Distortion-Aware Iterative Fast Gradient Sign Method (DAI-FGSM), in which the perturbation is manipulated with a distortion mask at every step. DAI-FGSM is summarized in Algorithm 1, where $\mathrm{sign}(\cdot)$ is the sign function and $\nabla$ is the differential operator.

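A minimal sketch of the distortion mask from Theorem 1, computed up to the proportionality constant in Eq.(15); the grid construction and the function name are illustrative assumptions.

```python
import numpy as np

def intensity_distortion_mask(theta_a, phi_a, beta):
    """Distortion mask D_I of Theorem 1 (Eq.(11) and Eq.(14)), up to a constant.

    theta_a, phi_a: per-pixel spherical offsets from the view center (radians);
    beta: angular resolution of one sampling grid cell (radians).
    """
    # Eq.(11): position deviation distortion of every pixel
    d_p = np.sin(beta) / (
        beta * (np.cos(beta / 2) ** 2
                - np.sin(np.arccos(np.cos(theta_a) * np.cos(phi_a))) ** 2))
    # Eq.(14): the intensity distortion grows with the square root of D_p
    return np.sqrt(d_p)

# Example: mask for a 128x128 PV grid spanning +/-60 degrees around the view center
ang = np.radians(np.linspace(-60, 60, 128))
theta_a, phi_a = np.meshgrid(ang, ang, indexing="ij")
D_I = intensity_distortion_mask(theta_a, phi_a, beta=np.radians(120) / 128)
print(D_I.min(), D_I.max())  # close to 1 at the center, larger toward the borders
```
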
# 3.4. Inverse Perspective Projection and Saliency-aware Fusion

Given a set of adversarial PV images, in order to obtain the final adversarial spherical image, we first re-project the PV images to the sphere by inverting Eq.(1) and Eq.(2). As each PV image is only projected to a portion of the sphere, we call this portion a spherical part. Due to the overlapping fields of view among different PV images, different pixels in different PV images may be projected to the same position on the spherical surface, leading to overlaps across spherical parts. Therefore, how to merge multiple projected pixels at the same position on the sphere into one pixel is a crucial problem. A common method is to average them [3, 14], which is inefficient for generating adversarial examples because this operation treats the multiple pixels as equally important. With that in mind, we merge projected pixels by considering the difference between the original spherical pixel and its neighbors. For pixels similar to their neighbors, we consider farther spherical parts to collect more information about them, while for pixels significantly different from their neighbors, we rely more on their close spherical parts. Considering that the saliency map implicitly reveals the variation among pixels, we propose a saliency-aware method to fuse the spherical parts.

# Algorithm 1 DAI-FGSM

Input: A PV image $x_{p}$ with ground-truth label $y_{o}$, the angle resolution $\beta$, and a classifier $C$ with loss function $L$
Input: Perturbation size $\epsilon$, step size per iteration $\gamma$, and maximum iterations $T$
Output: An adversarial PV image $x_{p}^{adv}$
1: for all positions $(\theta_A,\phi_A)$ in $x_{p}$ do
2: Calculate the pixel intensity distortion mask $D_{I}(\theta_{A},\phi_{A})$
3: end for
4: $x_0^{adv} = x_p$
5: for $t = 0$ to $T - 1$ do
6: $e_{t + 1} = \gamma \cdot \mathrm{sign}(\nabla_{x_t^{adv}}L(x_t^{adv},y_o))$
7: Update $e_{t + 1} = \mathrm{Clip}^{\epsilon}\{D_I\cdot e_{t + 1}\}$
8: Update $x_{t+1}^{adv} = \mathrm{Clip}_x^{(0,1)}\{x_p + e_{t+1}\}$
9: end for
10: return $x_{p}^{adv} = x_{T}^{adv}$

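A hedged PyTorch sketch of the loop in Algorithm 1 above; the cross-entropy loss, tensor shapes, default hyperparameters, and function name are assumptions, and $D_I$ is the mask sketched earlier.

```python
import torch
import torch.nn.functional as F

def dai_fgsm(model, x_p, y_o, d_i, eps=0.04, gamma=None, T=None):
    """Sketch of Algorithm 1: a PGD-style loop whose step is rescaled by D_I.

    x_p: PV image tensor (1, C, H, W) in [0, 1]; y_o: label tensor of shape (1,);
    d_i: (H, W) distortion mask, broadcast over the batch and channel dimensions.
    """
    gamma = eps / 10 if gamma is None else gamma     # step size used in the experiments
    T = int(eps / gamma) + 4 if T is None else T     # iteration budget, following [23]
    x_adv = x_p.clone()
    for _ in range(T):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_o)    # L(x_t^adv, y_o), assumed cross-entropy
        loss.backward()
        e = gamma * x_adv.grad.sign()                # line 6: signed gradient step
        e = torch.clamp(d_i * e, -eps, eps)          # line 7: distortion-aware rescale and clip
        x_adv = torch.clamp(x_p + e, 0.0, 1.0)       # line 8: keep the image in the valid range
    return x_adv.detach()                            # line 10: x_p^adv = x_T^adv
```
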
# 3.4.1 Saliency Detection Based on Spherical Spectral Residual

In this paper, we propose an efficient spherical spectral residual method for spherical saliency detection. Given a spherical image $I$, the pixel at position $(\theta, \phi)$ can be represented with the spherical harmonic functions [27] as

$$
I(\theta, \phi) = \sum_{l = 0}^{\infty} \sum_{m = -l}^{l} f_{l}^{m} Y_{l}^{m}(\theta, \phi), \tag{18}
$$

where $f_{l}^{m}$ is the spherical harmonic coefficient, $Y_{l}^{m}$ is the corresponding spherical harmonic function, $l$ is the spherical harmonic degree, and $m$ is the spherical harmonic order.

Generally, the spectral maps of spherical images are triangular matrices. To apply the residual approach on the spectrum maps, we first complete the matrices using the mean values of each column. After that, we compute the amplitude and phase of the spectrum, denoted as $I_{am}$ and $I_{ph}$ respectively. Next, we calculate the log spectrum residual $\mathcal{R}(I)$ of the image:

$$
\mathcal{R}(I) = \log\left(I_{am}\right) - HF_{n} * \log\left(I_{am}\right), \tag{19}
$$

where $HF_{n}$ is an $n\times n$ mean filter that is used to obtain the averaged log spectrum of the spherical image, and $*$ is the filtering operation.

As discussed in [18], considerable shape similarities can be observed across the spectra of different spherical images, and these statistical similarities imply redundancies in the image. Therefore, the information jumping out of the smooth curves deserves attention, and the residual spectrum contains the specific characteristics of the image. Finally, the saliency map $S$ of the original spherical image is obtained from the residual spectrum $\mathcal{R}(I)$ by

$$
S(\theta, \phi) = \sum_{l = 0}^{\infty} \sum_{m = -l}^{l} e^{\mathcal{R}_{l}^{m} + j I_{ph}} Y_{l}^{m}(\theta, \phi). \tag{20}
$$

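To convey the idea of Eq.(19)-(20), here is a small stand-in sketch that applies the spectral-residual recipe with a planar 2D FFT on an equirectangular image; the actual method operates on the spherical harmonic coefficients of Eq.(18) and completes the triangular coefficient matrix with column means, which this toy version omits.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(img, n=3):
    """Spectral-residual saliency in the spirit of Eq.(19)-(20) and [18],
    using a planar FFT as a stand-in for the spherical harmonic transform."""
    spec = np.fft.fft2(img.astype(np.float64))
    log_amp = np.log(np.abs(spec) + 1e-12)                   # log amplitude spectrum
    phase = np.angle(spec)                                    # phase spectrum I_ph
    residual = log_amp - uniform_filter(log_amp, size=n)      # Eq.(19): HF_n mean filter
    # Eq.(20): recombine the residual amplitude with the original phase
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

# Example on a random 64x128 "equirectangular" image
print(spectral_residual_saliency(np.random.rand(64, 128)).shape)
```
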
Figure 3. The performance of the attacks on (a) $M_{s}$ and (b) $M_{e}$.

# 3.4.2 Saliency-aware Fusion for Inversely Projected PV Images

Every pixel on the sphere corresponds to a saliency score, implicitly indicating the degree of difference between it and its neighbors. The saliency score is used to guide the fusion of the multiple inversely projected PV images.

Assuming a position $(\theta, \phi)$ is covered by $k$ spherical parts, in order to obtain its fused pixel value, we first compute the haversine distances between it and the centers of all spherical parts; the distance $d_i$ between the $i$-th center $(\theta_i, \phi_i)$ and $(\theta, \phi)$ is calculated by:

$$
d_{i} = 2 \arcsin \sqrt{\sin^{2}\left(\frac{\theta_{i} - \theta}{2}\right) + \cos \theta_{i} \cos \theta \sin^{2}\left(\frac{\phi_{i} - \phi}{2}\right)}. \tag{21}
$$

Next, a Gaussian function is used to calculate the weights of the spherical parts according to the saliency score $S(\theta, \phi)$:

$$
g_{i} = e^{-\frac{\left(d_{i} - d_{min}\right)^{2} \cdot S(\theta, \phi)}{2}}, \tag{22}
$$

where $d_{min} = \min \{d_1, d_2, \dots, d_k\}$. Finally, the value of the pixel at $(\theta, \phi)$ is obtained by weighting the $k$ spherical parts $I_i$ with normalized Gaussian weights $w_i$:

$$
I_{F}(\theta, \phi) = \sum_{i = 1}^{k} w_{i} I_{i}(\theta, \phi), \quad w_{i} = \frac{g_{i}}{\sum_{i = 1}^{k} g_{i}}, \tag{23}
$$

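A minimal sketch of the saliency-aware fusion of Eq.(21)-(23) for a single spherical position; the function names and the example values are illustrative.

```python
import numpy as np

def haversine(theta_i, phi_i, theta, phi):
    """Great-circle distance of Eq.(21) between a part center and a pixel (radians)."""
    return 2 * np.arcsin(np.sqrt(
        np.sin((theta_i - theta) / 2) ** 2
        + np.cos(theta_i) * np.cos(theta) * np.sin((phi_i - phi) / 2) ** 2))

def fuse_pixel(values, centers, theta, phi, saliency):
    """Saliency-aware fusion of Eq.(22)-(23) for one spherical position.

    values: the k candidate pixel values I_i projected to (theta, phi);
    centers: list of (theta_i, phi_i) perspective centers; saliency: S(theta, phi).
    """
    d = np.array([haversine(tc, pc, theta, phi) for tc, pc in centers])
    g = np.exp(-((d - d.min()) ** 2) * saliency / 2)   # Eq.(22)
    w = g / g.sum()                                    # Eq.(23): normalized Gaussian weights
    return float(np.dot(w, np.asarray(values, dtype=np.float64)))

# Example: three overlapping spherical parts observe the same pixel
centers = [(0.0, 0.0), (0.0, np.pi / 3), (np.pi / 6, np.pi / 6)]
print(fuse_pixel([0.52, 0.48, 0.50], centers, 0.05, 0.10, saliency=4.0))
```

A larger saliency value sharpens the Gaussian, so the fused pixel leans on the nearest spherical parts, while a small value spreads the weight over all overlapping parts.
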
Figure 4. Some examples of 360-attack on the 3D object classification task. The top line shows the original benign images, and the bottom line shows the adversarial images.

| Attack | Model | ε = 0.02 | ε = 0.04 | ε = 0.06 | ε = 0.08 |
|---|---|---|---|---|---|
| FGSM | $M_e$ | 0.649 | 0.290 | 0.132 | 0.108 |
| FGSM | $M_s$ | 0.736 | 0.682 | 0.610 | 0.503 |
| PGD | $M_e$ | 0.616 | 0.283 | 0.122 | 0.102 |
| PGD | $M_s$ | 0.679 | 0.627 | 0.579 | 0.497 |
| MI-FGSM | $M_e$ | 0.608 | 0.276 | 0.115 | 0.107 |
| MI-FGSM | $M_s$ | 0.662 | 0.615 | 0.565 | 0.491 |

Table 1. Classification accuracy with different 2D attacks.

| Attack domain | Model | ε = 0.02 | ε = 0.04 | ε = 0.06 | ε = 0.08 |
|---|---|---|---|---|---|
| Panorama-domain | $M_e$ | 0.848 | 0.845 | 0.649 | 0.351 |
| Panorama-domain | $M_s$ | 0.847 | 0.842 | 0.838 | 0.774 |
| PV-domain | $M_e$ | 0.793 | 0.758 | 0.375 | 0.163 |
| PV-domain | $M_s$ | 0.780 | 0.767 | 0.716 | 0.591 |

Table 2. Classification accuracy for fine-tuned models.

It can be seen from Eq.(23) that our fusion strategy gives larger weight to the spherical parts closer to the fused position, especially for pixels in salient areas. The reason is that the image content in those areas changes dramatically, and pixels far from them contribute less to the fusion procedure; therefore we pay more attention to close parts, which provide more accurate information for the fused pixel. On the contrary, for pixels in non-salient areas, the image content changes relatively smoothly, and the pixels within a large area may be similar. Weighting multiple spherical parts in non-salient areas helps consider more neighboring pixels. This saliency-aware fusion strategy avoids over-smoothing, preserving more of the adversarial perturbations.

# 4. Experiments

# 4.1. 3D Object Classification

We first evaluate the performance of our attack on the shape classification task with the spherical ModelNet-40 dataset, which is a benchmark dataset for spherical models.

# 4.1.1 Evaluation Setup

We follow the procedure in [8], generating a synthetic spherical dataset by projecting the ModelNet-40 [42] dataset onto a spherical surface using a ray-mesh intersection method. Because there is no previous work investigating the generation of adversarial spherical images, the approach that directly attacks the panorama is adopted as the baseline: the spherical images are first projected to panoramas, a typical attack such as FGSM or PGD is then carried out on them to generate adversarial panoramas, and the adversarial panoramas are finally remapped to the spherical space. The resolution of the spherical images is set to $128 \times 128$, and thus the resolution of the panorama images is $64 \times 128$. The field of view for rendering PV images is set to $120^{\circ}$, because it approximates that of human vision, and we enforce that adjacent PV images overlap each other by half, resulting in twelve PV images for one spherical image. We consider two target models, a Spherical CNN $(M_s)$ and a standard CNN $(M_e)$ taking panoramas as inputs. The planar model $(M_t)$ used for generating adversarial panoramas and PV images is trained on a synthetic dataset consisting of PV images and panoramas rescaled to $128 \times 128$. In our experiments, $\epsilon$ ranges from 0.01 to 0.10, and $\gamma$ is set to $\epsilon / 10$. The choice of $T$ follows [23], in which $T = \epsilon / \gamma + 4$.

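For concreteness, the attack hyperparameters described above can be enumerated as in this small sketch (the variable names are illustrative):

```python
import numpy as np

SPHERE_RES = (128, 128)   # spherical image resolution
PANO_RES = (64, 128)      # equirectangular panorama resolution
FOV_DEG = 120             # per-view field of view, close to human vision
N_VIEWS = 12              # adjacent PV images overlap each other by half

for eps in np.arange(0.01, 0.11, 0.01):
    gamma = eps / 10                  # per-iteration step size
    T = int(round(eps / gamma)) + 4   # iteration count, following [23]
    print(f"eps={eps:.2f}  gamma={gamma:.3f}  T={T}")
```
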
Figure 5. The ablation study performed on: (a) the spherical model $M_{s}$ and (b) the planar model $M_{e}$.

# 4.1.2 Attack Performance

We evaluate the performance of the 360-attack against $M_{s}$ and $M_{e}$, and the results are shown in Fig. 3. All the attacks successfully mislead the two models, although the models perform well, with over $80\%$ accuracy, on benign images. It can be observed that $M_{s}$ is more robust than $M_{e}$. When facing powerful attacks, $M_{s}$ still keeps an accuracy above $30\%$, while the accuracy of $M_{e}$ drops below $10\%$. Among the attacks, the attack capability of the 360-attack is clearly superior to that of the panorama-domain attacks. This is because panoramas are more distorted than PV images, so the adversarial panoramas carry distorted information. The superiority of the 360-attack is more evident when attacking $M_{s}$, with more than a $20\%$ accuracy decline compared to the baselines. However, the results in Fig. 3 (b) illustrate that the effects of the three attacks on $M_e$ tend to converge as $\epsilon$ increases. This is caused by the fragility of $M_e$: when $\epsilon$ is small, the model has weak resistance to the adversarial examples, and thus more powerful attacks have a more significant effect on the model. When the attack power increases, all attacks severely mislead the model and finally achieve similar attacking effects.

The key to the success of the 360-attack lies in its capability to generate aggressive perturbations in the planar attack and to preserve them in the fusion operation. In the planar attack step, the proposed DAI-FGSM method compensates the perturbations to alleviate the impact of the subsequent inverse perspective projection, which guarantees the high aggressiveness of the adversarial PV images. Moreover, in the fusion step, salient pixels are fused by relying more on their close spherical parts, while non-salient pixels are obtained by also weighting farther spherical parts, which collects abundant information about each pixel and suppresses the impact of the projective distortion on the attack.

Fig. 4 shows adversarial examples of 360-attack. All of the adversarial images successfully attack $M_{s}$ and $M_{e}$ with only a minor modification to the original images, which demonstrates the effectiveness of 360-attack.

Note that our attack directly generates perturbations in the $\mathbb{Z}^2$ space, and then transfers the planar perturbations to the spherical image. As the disturbed spherical image can successfully mislead the spherical model, which operates in the SO(3) group, this demonstrates the transferability of adversarial perturbations from $\mathbb{Z}^2$ to SO(3) groups.

# 4.1.3 Combined with Different 2D Attacks

As claimed before, the distortion compensation function can be combined with any existing planar attack. Therefore, we compare the attack results when integrating the distortion mask with FGSM, PGD, and MI-FGSM. The results in Table 1 indicate that the PGD-based and MI-FGSM-based attacks have similar aggressiveness, and both are superior to the FGSM-based attack. Intuitively, the FGSM-based attack compensates the distortion only once, resulting in less aggressive adversarial examples compared to the iterative methods. As for the similar performance of the two iterative attacks, the reason may be that the critical factors influencing the attack are the distortion in PV images and the strategy used to fuse multiple adversarial PVs; the iterative attacks have similar compensation degrees for distortion, which leads to similar performance.

# 4.1.4 Evaluation against Adversarial Training

Adversarial training, which fine-tunes the model with correctly labeled adversarial examples, is one of the most effective defensive methods against adversarial attacks. We evaluate the performance of 360-attack in the adversarial training setting. Specifically, the victim models are fine-tuned with adversarial spherical images generated by the panorama-domain attack and 360-attack with $\epsilon = 0.04$, while the model for generating planar adversarial images remains unchanged. The adversarial examples are then fed into the fine-tuned models, and the results are shown in Table 2.

The results indicate that adversarial training significantly improves the robustness of the models. For example, for $M_{s}$, the accuracy on the adversarial examples with $\epsilon = 0.04$ generated by the panorama-domain attack improves from 0.7627 to 0.842, while for the 360-attack it improves from 0.627 to 0.767. We observe that the panorama-domain attack has little effect on the fine-tuned $M_{s}$, while the 360-attack still severely misleads the models, further confirming that 360-attack is more aggressive than the panorama-domain attack. The behavior of $M_{e}$ is a little different from that of $M_{s}$. When facing weak attacks, the improvement in performance is evident, and the model works normally. However, when adversarial examples generated with a larger $\epsilon$ are fed into the model, its accuracy steeply degrades by 20-60%, and the defense is even useless against the 360-attack with $\epsilon = 0.08$. This is caused by the intrinsic instability of the planar model, consistent with the results in Fig. 3. In summary, the results in Table 2 demonstrate that the 360-attack is still highly effective in the adversarial training setting, remarkably outperforming the panorama-domain attack.

# 4.1.5 Ablation Study

To measure the effectiveness of the proposed distortion-aware attack and saliency-aware fusion, we perform an ablation study by replacing DAI-FGSM with PGD in the 2D attack step and replacing the saliency-aware fusion with average fusion. The results of the ablation study are shown in Fig. 5. It can be seen that the lines with purple triangles, which show the results of the PGD attack with average fusion, are always above the blue x-mark lines showing the DAI-FGSM attack with average fusion. This indicates that the DAI-FGSM method preserves more adversarial perturbations than PGD, which benefits greatly from the ability of DAI-FGSM to alleviate the negative impact of the distortion introduced by the inverse perspective projection. In addition, the blue lines are always above the red dotted lines that show DAI-FGSM with saliency fusion. This means the saliency-aware fusion strengthens the aggressiveness of the attack, which results from its effect of preserving more accurate adversarial information. We can also observe that the blue lines are above the orange star lines that indicate the PGD attack with saliency fusion, which demonstrates that the saliency-aware fusion contributes more to the attack performance than DAI-FGSM. This may be because the smoothing effect of the average fusion severely degrades the impact of the perturbations. Overall, the DAI-FGSM method with saliency-aware fusion always performs better than any other attack, further verifying the necessity of both components.

| Model | Metric | Panorama-domain attack | 360-attack (PGD) | 360-attack (Average) | 360-attack (DAI-FGSM) |
|---|---|---|---|---|---|
| UNet | IoU | 0.359/0.347/0.255 | 0.359/0.340/0.234 | 0.359/0.357/0.298 | 0.359/0.331/0.217 |
| UNet | Acc. | 0.558/0.543/0.496 | 0.558/0.536/0.473 | 0.558/0.549/0.509 | 0.558/0.531/0.447 |
| UG-SCNN | IoU | 0.413/0.398/0.320 | 0.413/0.385/0.298 | 0.413/0.406/0.350 | 0.413/0.387/0.273 |
| UG-SCNN | Acc. | 0.569/0.553/0.490 | 0.569/0.547/0.477 | 0.569/0.553/0.511 | 0.569/0.543/0.463 |

Table 3. Attack performance on the semantic segmentation task ($\epsilon = 0/0.03/0.08$).

| Attack | CFL IoU | CFL Accuracy | LayoutNet IoU | LayoutNet Accuracy |
|---|---|---|---|---|
| Panorama-attack | 0.595/0.535/0.329 | 0.932/0.917/0.855 | 0.564/0.450/0.250 | 0.911/0.906/0.792 |
| 360-attack (PGD) | 0.595/0.530/0.318 | 0.932/0.916/0.839 | 0.564/0.444/0.239 | 0.911/0.911/0.773 |
| 360-attack (Average) | 0.595/0.552/0.357 | 0.932/0.932/0.865 | 0.564/0.479/0.31 | 0.911/0.915/0.830 |
| 360-attack (DAI-FGSM) | 0.595/0.522/0.282 | 0.932/0.908/0.830 | 0.564/0.420/0.212 | 0.911/0.870/0.750 |

Table 4. Attack performance on the 3D layout reconstruction task ($\epsilon = 0/0.03/0.08$).

# 4.2. Tasks on Real-world $360^{\circ}$ Datasets

The aforementioned experiments show that 360-attack is effective on the synthetic dataset. One may wonder whether it is still effective on real-world datasets. Thus, we perform experiments on real-world datasets for tasks including semantic segmentation and layout prediction. In these experiments, we compare our attack (360-attack (DAI-FGSM)) with the panorama-domain attack, 360-attack with PGD, and 360-attack with average fusion, and the experimental setting is the same as that of the 3D object classification. Note that the last two attacks are modified from the proposed attack, so the comparisons between them and our attack can be considered ablation studies.

# 4.2.1 $360^{\circ}$ Semantic Segmentation

In this task, the UNet [32] and UG-SCNN [22] models are chosen as the target models, and the experimental dataset is the Stanford 2D/3D dataset. The experimental results are shown in Table 3. The results indicate that 360-attack performs best among the attacks in reducing the IoU and accuracy of the model predictions. It is worth noting that the impact of the attacks on semantic segmentation models is smaller than on classification models; the reason may be that the measurements are calculated from the predictions on all of the pixels, and the slight perturbations added by the attacks only change the predictions of part of the pixels.

# 4.2.2 3D Layout Reconstruction

We also test our attack against models trained for the 3D layout reconstruction task. In this experiment, we consider the CFL [13] and LayoutNet [49] models as our target models, and the test dataset is the SUN360 dataset [44]. Table 4 shows the results of this experiment. Similar to the experiments on the semantic segmentation task, the effect of our attack on the predictions of the target models is greater than that of the other compared attacks. Compared to the classification models, the models used in this experiment are more robust to adversarial attacks, which is due to the simple target of this task: only eight corners and their corresponding contour lines are expected to be predicted.

# 4.3. Broader Impact and Limitations

This work can potentially contribute to a deeper understanding of DNNs, especially those processing $360^{\circ}$ images. Although we reveal that spherical models are also vulnerable to adversarial examples, our work focuses on assisting researchers to perform more thorough evaluations of DNNs, rather than on attacking real-world systems. We firmly believe that our work can help researchers design new robust models and efficient defenses. In the future, we will focus on addressing the limitation of relying on assigned positions to render PV images, and work on adaptively selecting PV images to implement the attack.

# 5. Conclusion

We investigate the vulnerability of DNNs trained for spherical images against adversarial attacks by transferring adversarial perturbations from the PV domain to the spherical domain. Two key procedures are proposed to preserve more of the embedded perturbations across the conversion of the attack domain. In the planar attack step, a distortion-aware attack is proposed to suppress the impact of the distortion introduced by the projection between spherical and PV images. In the fusion step, we propose a saliency-aware fusion approach to merge multiple inversely projected spherical parts into the final adversarial spherical image. A systematic study on spherical and panorama-based models with various synthetic and real-world datasets demonstrates the effectiveness of the proposed attack. Finally, our work also demonstrates the transferability of adversarial examples between the 2D and 3D spaces.

# 6. Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant 61771469 and the cooperation project between Chongqing municipal undergraduate universities and institutes affiliated to CAS (HZ2021015).

| 395 |
+
# References
|
| 396 |
+
|
| 397 |
+
[1] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning (ICML, 2018. 2
|
| 398 |
+
[2] Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In International Conference on Machine Learning (ICML), 2018. 2
|
| 399 |
+
[3] Peter J Burt and Edward H Adelson. A multiresolution spline with application to image mosaics. ACM Transactions on Graphics (TOG), 2(4):217-236, 1983. 4
|
| 400 |
+
[4] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niebner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. In International Conference on 3D Vision (3DV), 2017. 1
|
| 401 |
+
[5] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017. 2
|
| 402 |
+
[6] Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representation (ICLR), 2019. 2
|
| 403 |
+
[7] Oliver J Cobb, Christopher GR Wallis, Augustine N Mavor-Parker, Augustin Marignier, Matthew A Price, Mayeul d'Avezac, and Jason D McEwen. Efficient generalized spherical cnns. International Conference on Learning Representations (ICLR), 2021. 1
|
| 404 |
+
[8] Taco S Cohen, Mario Geiger, Jonas KΓΆhler, and Max Welling. Spherical cnns. In International Conference on Learning Representations (ICLR), 2018. 1, 2, 6
|
| 405 |
+
[9] Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning (ICML), 2020. 2
|
| 406 |
+
[10] Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. Benchmarking adversarial robustness on image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 1
|
| 407 |
+
[11] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
|
| 408 |
+
[12] Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning so (3) equivariant representations with spherical cnns. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 1, 2
|
| 409 |
+
[13] Clara Fernandez-Labrador, JosΓ© M FΓ‘cil, Alejandro Perez-Yus, CΓ©dric Demonceaux, Javier Civera, and JosΓ© J Guer-
|
| 410 |
+
|
| 411 |
+
rero. Corners for layout: End-to-end layout recovery from 360 images. arXiv:1903.08094, 2019. 1, 2, 8
|
| 412 |
+
[14] Junhong Gao, Seon Joo Kim, and Michael S Brown. Constructing image panoramas using dual-homography warping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. 4
|
| 413 |
+
[15] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013. 1
|
| 414 |
+
[16] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representation (ICLR), 2015. 1, 2
|
| 415 |
+
[17] Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, and Dieter Fox. Rgb-d mapping: Using depth cameras for dense 3d modeling of indoor environments. In Experimental Robotics, pages 477-491. Springer, 2014. 1
|
| 416 |
+
[18] Xiaodi Hou and Liqing Zhang. Saliency detection: A spectral residual approach. In IEEE Conference on computer vision and pattern recognition (CVPR), 2007. 5
|
| 417 |
+
[19] Hou-Ning Hu, Yen-Chen Lin, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, and Min Sun. Deep 360 pilot: Learning a deep agent for piloting through 360 sports videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1, 2
|
| 418 |
+
[20] Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, and Ser-Nam Lim. Enhancing adversarial example transferability with an intermediate level attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. 2
|
| 419 |
+
[21] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning (ICML), 2018. 2
|
| 420 |
+
[22] Chiyu Jiang, Jingwei Huang, Karthik Kashinath, Philip Marcus, Matthias Niessner, et al. Spherical cnns on unstructured grids. In International Conference on Learning Representations (ICLR), 2019. 1, 2, 8
|
| 421 |
+
[23] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In International Conference on Learning Representation (ICLR), 2017. 2, 6
|
| 422 |
+
[24] Yeonkun Lee, Jaeseok Jeong, Jongseob Yun, Wonjune Cho, and Kuk-Jin Yoon. Spherephd: Applying cnns on a spherical polyhedron representation of 360deg images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1
|
| 423 |
+
[25] Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft. Nesterov accelerated gradient and scale invariance for adversarial attacks. In International Conference on Learning Representations (ICLR), 2021. 2
|
| 424 |
+
[26] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representation (ICLR), 2018. 2
|
| 425 |
+
[27] Claus Müller. Spherical harmonics, volume 17. Springer, 2006. 5
|
| 426 |
+
|
| 427 |
+
[28] Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli. A self-supervised approach for adversarial robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 1
|
| 428 |
+
[29] Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
|
| 429 |
+
[30] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the ACM Asia Conference on Computer and Communications Security (Asia CCS), 2017. 2
|
| 430 |
+
[31] William K Pratt. Introduction to digital image processing. CRC press, 2013. 4
|
| 431 |
+
[32] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015. 8
|
| 432 |
+
[33] Jong-Chyi Su, Matheus Gadelha, Rui Wang, and Subhransu Maji. A deeper look at 3d shape classifiers. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 1
|
| 433 |
+
[34] Yu-Chuan Su and Kristen Grauman. Learning spherical convolution for fast features from 360 imagery. In Advances in Neural Information Processing Systems (NIPS), pages 529-539, 2017. 1, 2
|
| 434 |
+
[35] Yu-Chuan Su and Kristen Grauman. Making 360 video watchable in 2d: Learning videography for click free viewing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1
|
| 435 |
+
[36] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representation (ICLR), 2014. 1, 2
|
| 436 |
+
[37] Xiaosen Wang and Kun He. Enhancing the transferability of adversarial attacks through variance tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
|
| 437 |
+
[38] Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, and Kui Ren. Feature importance-aware transferable adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. 2
|
| 438 |
+
[39] Xingxing Wei, Jun Zhu, Sha Yuan, and Hang Su. Sparse adversarial perturbations for videos. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2019. 1
|
| 439 |
+
|
| 440 |
+
[40] Alex Wong, Mukund Mundhra, and Stefano Soatto. Stereopagnosia: Fooling stereo networks with adversarial perturbations. In AAAI Conference on Artificial Intelligence (AAAI), 2021. 1
|
| 441 |
+
[41] Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, and Yu-Wing Tai. Boosting the transferability of adversarial samples via attention. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 442 |
+
[42] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 6
|
| 443 |
+
[43] Chong Xiang, Charles R Qi, and Bo Li. Generating 3d adversarial point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1
|
| 444 |
+
[44] Jianxiong Xiao, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Recognizing scene viewpoint using panoramic place representation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. 8
|
| 445 |
+
[45] Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, and Shenghua Gao. Gaze prediction in dynamic 360 immersive videos. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1
|
| 446 |
+
[46] Wenyan Yang, Yanlin Qian, Joni-Kristian Kämäräinen, Francesco Cricri, and Lixin Fan. Object detection in equirectangular panorama. In International Conference on Pattern Recognition (ICPR). IEEE, 2018. 1, 2
|
| 447 |
+
[47] Yihuan Zhang, Jun Wang, Xiaonian Wang, and John M Dolan. Road-segmentation-based curb detection method for self-driving via a 3d-lidar sensor. IEEE Transactions on Intelligent Transportation Systems (TITS), 19(12):3981-3991, 2018. 1
|
| 448 |
+
[48] Denis Zorin and Alan H. Barr. Correction of geometric perceptual distortions in pictures. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1995. 2
|
| 449 |
+
[49] Chuhang Zou, Alex Colburn, Qi Shan, and Derek Hoiem. Layoutnet: Reconstructing the 3d room layout from a single rgb image. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 8
|
360attackdistortionawareperturbationsfromperspectiveviews/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a47d16a0db346d0384f7ccfe6e3f8d0c957d0ea73662eaf987da4f76db7f1f6b
|
| 3 |
+
size 420158
|
360attackdistortionawareperturbationsfromperspectiveviews/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:75697cb436bd4c936c5a74c259728f73ca8707621ee64f276caab7732b6c6adf
|
| 3 |
+
size 529388
|
360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:297845a32d8e71746a60b014902e9ca1316a770903b2f8d1b3dce2232a00bb4a
|
| 3 |
+
size 77220
|
360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:037ed6affede11c0becd3055e36236a405f6e3658fc63958880ddc5c3929c988
|
| 3 |
+
size 104627
|
360monodepthhighresolution360degmonoculardepthestimation/95547699-3553-46d5-9a18-b8edbbca0aea_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:56359c517fd6bf4a7bd4c5256e7fcb8c32adb2d72b56bb8568f67ebf11bbea36
|
| 3 |
+
size 8148800
|
360monodepthhighresolution360degmonoculardepthestimation/full.md
ADDED
|
@@ -0,0 +1,316 @@
| 1 |
+
# 360MonoDepth: High-Resolution $360^{\circ}$ Monocular Depth Estimation
|
| 2 |
+
|
| 3 |
+
Manuel Rey-Area* Mingze Yuan* Christian Richardt
|
| 4 |
+
University of Bath
|
| 5 |
+
|
| 6 |
+

|
| 7 |
+
Figure 1. We present a flexible framework for estimating high-resolution disparity maps from a single $360^{\circ}$ input image by decomposing it into perspective tangent images, which are used for monocular depth estimation. We then globally align all disparity maps using multi-scale alignment fields, and blend them in the gradient domain to produce a detailed, consistent and high-resolution $360^{\circ}$ spherical disparity map.
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
$360^{\circ}$ cameras can capture complete environments in a single shot, which makes $360^{\circ}$ imagery alluring in many computer vision tasks. However, monocular depth estimation remains a challenge for $360^{\circ}$ data, particularly for high resolutions like $2K$ ( $2048 \times 1024$ ) and beyond that are important for novel-view synthesis and virtual reality applications. Current CNN-based methods do not support such high resolutions due to limited GPU memory. In this work, we propose a flexible framework for monocular depth estimation from high-resolution $360^{\circ}$ images using tangent images. We project the $360^{\circ}$ input image onto a set of tangent planes that produce perspective views, which are suitable for the latest, most accurate state-of-the-art perspective monocular depth estimators. To achieve globally consistent disparity estimates, we recombine the individual depth estimates using deformable multi-scale alignment followed by gradient-domain blending. The result is a dense, high-resolution $360^{\circ}$ depth map with a high level of detail, also for outdoor scenes which are not supported by existing methods. Our source code and data are available at https://manurare.github.io/360monodepth/.
|
| 12 |
+
|
| 13 |
+
# 1. Introduction
|
| 14 |
+
|
| 15 |
+
Monocular depth estimation has recently seen a significant boost thanks to convolutional neural networks. CNNs have demonstrated an unprecedented expressive power to learn intricate geometric relationships from data, resembling the capability of humans to exploit visual cues to perceive depth. Monocular depth estimates have enabled impressive new approaches for 3D photography [33, 61] and novel-view synthesis of dynamic scenes [20, 43]. However, most approaches for monocular depth estimation are limited to low-resolution<sup>1</sup> perspective images, with a limited field-of-view.
|
| 16 |
+
|
| 17 |
+
Nevertheless, $360^{\circ}$ cameras are becoming increasingly popular and widespread in the computer vision community. The omnidirectional $360^{\circ}$ field-of-view captured by these devices is appealing for tasks such as robust, omnidirectional SLAM [66, 77], scene understanding and layout estimation [31, 67, 75, 81], or VR photography and video [5, 59]. State-
|
| 18 |
+
|
| 19 |
+
of-the-art monocular depth estimation approaches for $360^{\circ}$ images [30, 40, 52, 67, 74] are currently limited to resolutions of $1024 \times 512 \approx 0.5$ megapixels. While this is sufficient for tasks like layout estimation, it is insufficient for VR applications as they require resolutions of at least 2 megapixels to match the resolution of VR headsets [34] and achieve full immersion [12, 45]. Our work aims to fill this gap.
|
| 20 |
+
|
| 21 |
+
Existing monocular $360^{\circ}$ depth estimation approaches build on CNNs whose spatial resolution is fundamentally limited by the GPU memory available during training. These methods are therefore restricted to small batch sizes of 4 to 8 for 0.5 megapixel images on an NVIDIA 2080 Ti with 11 GB memory [30, 52, 67]. For this reason, single-CNN approaches become impractical for predicting high-resolution depth maps with multiple megapixels.
|
| 22 |
+
|
| 23 |
+
In this work, we introduce a general and flexible framework for monocular depth estimation from high-resolution $360^{\circ}$ images inspired by Eder et al.'s tangent images [16]. Our approach projects the input $360^{\circ}$ image to a collection of perspective tangent images, e.g. using the faces of an icosahedron. We then use state-of-the-art perspective monocular depth estimators endowed with powerful generalisation capability for obtaining dense, detailed depth maps for each tangent image. Subsequently, we optimally align individual depth maps using multi-scale spatially-varying deformation fields to bring them into global agreement. Finally, we merge the aligned depth maps using gradient-based blending for a seamless high-resolution $360^{\circ}$ depth map. Our technical contributions are as follows:
|
| 24 |
+
|
| 25 |
+
1. A simple, yet powerful and practical framework for high-quality multi-megapixel $360^{\circ}$ monocular depth estimation based on aligning and blending depth maps predicted from perspective tangent images.
|
| 26 |
+
2. Support for increased resolutions using tangent images, and improved quality by forward compatibility for future monocular depth estimation approaches.
|
| 27 |
+
3. We provide $2048 \times 1024$ ground-truth depth maps for Matterport3D's stitched skyboxes to advance future high-resolution depth estimation approaches.
|
| 28 |
+
|
| 29 |
+
# 2. Related Work
|
| 30 |
+
|
| 31 |
+
Monocular depth estimation. Predicting a dense depth map from a single input image is a challenging, ill-posed task due to the high level of ambiguity between possible reconstructions. Early approaches relied on simple geometric assumptions [27], geometric reasoning using Markov random fields [58], or non-parametric depth transfer [32]. The rise of deep learning has made it possible to train convolutional neural networks that are supervised by ground-truth depth maps [17, 36, 44], e.g. from synthetic renderings or depth sensors, or by exploiting defocus blur [60, 62]. However, suitable training data is scarce, particularly for outdoor scenes.
|
| 32 |
+
|
| 33 |
+
Subsequent work therefore explored alternative training regimes, in particular from stereo views that provide self-supervision via view synthesis [21, 22, 23, 47, 54, 72, 78], from camera ego-motion in videos [24, 46, 48, 57, 71, 82, 85], and from multiview stereo reconstructions [41, 42]. Ranftl and Lasinger et al.'s MiDaS [56] demonstrated substantial improvements and generalisation performance by learning from five varied datasets using multi-objective learning. The fidelity of depth predictions can also be improved by merging estimates at multiple scales [49]. Recently, Ranftl et al. [55] introduced transformers [14, 70] into monocular depth estimation, to produce finer-grained and more globally consistent results than CNN-based methods. We base our new monocular $360^{\circ}$ depth estimation method on their state-of-the-art performance, but our method would transparently benefit from future advances in monocular depth estimation.
|
| 34 |
+
|
| 35 |
+
Spherical CNNs. Most CNNs are applied to flat 2D images with little image distortion. However, $360^{\circ}$ images need a different approach to correctly handle the inevitable distortions of projecting a spherical image onto a plane, e.g. in the commonly used equirectangular projection. Su and Grauman proposed a pragmatic solution using wider kernels near the poles [64]. However, these kernels do not share any information, which leads to suboptimal performance. Another pragmatic approach is to project the spherical image into a padded cubemap, process all sides as perspective images, and to recombine the results [8]. This approach struggles for the top and bottom faces, as kernel orientations become ambiguous due to 90-degree rotational symmetry. Eder et al. [16] generalise this approach to more than six tangent images, which achieves higher and more uniform angular pixel resolutions. However, predictions on tangent images are recombined per pixel without any alignment or blending, which works poorly for monocular depth estimation (see our experiments in Section 4.3).
|
| 36 |
+
|
| 37 |
+
Cubemaps have since been generalised to the 20 triangular faces of an icosahedron, which can be unwrapped into 5 rectangles with shared convolution kernels [10, 37, 83]. Distortion-aware convolutions [11, 19, 65, 69, 84] can directly model the distortions of equirectangular projection. Interestingly, this also enables the transfer of models trained on perspective images to equirectangular images without any additional training, but it requires matching angular pixel resolutions. Full rotation-equivariance can be achieved using spherical convolutions [9, 18], but this may not always be desirable as the down direction is usually consistent with gravity. These approaches have high memory requirements that make them unsuitable for multi-megapixel resolutions.
|
| 38 |
+
|
| 39 |
+
$360^{\circ}$ depth estimation. Deep learning has also boosted monocular depth estimation for $360^{\circ}$ images. Most methods are supervised using synthetic datasets due to the difficulty of acquiring ground-truth spherical depth maps [15, 87]. Similar to the perspective case, several methods perform
|
| 40 |
+
|
| 41 |
+
self-supervised training via view synthesis [39, 51, 73, 88]. Tateno et al. [69] adapt pre-trained monocular depth estimation for perspective images [36] to spherical images using distortion-aware convolutional filters. Depth accuracy can be improved by fusing predictions for equirectangular and cubemap projections [4, 74], while deformable [7] or dilated [86] convolutions can make methods more distortion-aware. Pintore et al. [52] and Sun et al. [67] exploit gravity-aligned features in man-made interior environments using vertical slicing. However, the performance of these learning-based approaches highly depends on their training data. Most datasets are synthetic, low-resolution $(1024\times 512)$ and only consider indoor scenes. These methods therefore tend to perform poorly on real high-resolution or outdoor scenes.
|
| 42 |
+
|
| 43 |
+
Learning-based spherical stereo methods again mostly rely on synthetic training data, making them unsuitable for real outdoor scenes. They assume a known, fixed camera baseline [35, 38, 76], or estimate the relative pose between cameras [73]. Under the assumption of a moving camera in a static environment, structure-from-motion and multi-view stereo can be used [13, 28, 29]. However, these assumptions are violated by most usage scenarios, in which the camera might be stationary or environments are dynamic. Crucially, these techniques do not work for a single monocular input image as information from multiple viewpoints or points in time must be combined.
|
| 44 |
+
|
| 45 |
+
# 3. The 360MonoDepth framework
|
| 46 |
+
|
| 47 |
+
Our approach builds on a general framework for estimating high-resolution depth maps from just a single monocular $360^{\circ}$ input image. Figure 1 illustrates the four main steps of our approach. We start by projecting the $360^{\circ}$ input image to a set of overlapping perspective tangent images (Section 3.1), for instance the 20 faces of an icosahedron for an equirectangular image of resolution $2048 \times 1024$ pixels. For each tangent image, we independently predict a depth map (Section 3.2) using state-of-the-art perspective monocular depth estimation [55, 56]. Such methods predict disparity maps that are determined only up to an unknown affine transformation, i.e. scale and shift [79]. We thus formulate a global optimisation to align all tangent disparity maps in the spherical domain (Section 3.3). Finally, we merge the aligned tangent disparity maps using Poisson blending [53] into a high-resolution spherical disparity map (Section 3.4).
|
| 48 |
+
|
| 49 |
+
In this paper, we use equirectangular projection (ERP) as the default format for spherical $360^{\circ}$ images due to its wide adoption in the computer vision community. However, our approach can easily be adapted to any other spherical projection by adapting the projection to/from tangent images.
|
| 50 |
+
|
| 51 |
+
# 3.1. Tangent image projection
|
| 52 |
+
|
| 53 |
+
Carl Friedrich Gauss proved that any projection of a spherical image to a plane introduces some degree of distortion.
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
Figure 2. Coverage of the sphere by the 20 tangent images of an icosahedron (with padding factor $p = 0.3$ ). The darkest regions have an overlap of 2, the brightest of 5 images.
|
| 57 |
+
|
| 58 |
+
For example, equirectangular projection stretches the regions near the poles across the longitudinal dimension. To minimise distortion, we project the spherical image to a set of perspective tangent images, each of which can be processed separately and then recombined. We found it convenient to work with the 20 tangent images produced by the faces of an icosahedron that circumscribes a sphere, as this arrangement fairly uniformly covers the sphere's surface (see Figure 2), but our framework easily adapts to different numbers. Each triangular face of the icosahedron is tangent to the sphere at its centroid, which we use to create the tangent images using gnomonic projection.
|
| 59 |
+
|
| 60 |
+
Padding. By default, the size of each tangent image is constrained by the size of its icosahedron face, producing a field of view of $72^{\circ}$ . Tightly cropped tangent images include some overlap with adjacent icosahedron faces that share an edge, by nature of packing a triangular shape into a rectangular image (see the blue region in Figure 3). However, more overlap between tangent images, especially for icosahedron faces that only share a single vertex, is desirable for providing consistency constraints in our disparity map alignment step in Section 3.3, as this helps find a globally consistent alignment. Therefore, we extend the boundaries of tangent images by a padding factor of $p \in [0,1]$ relative to the base shape, as illustrated in Figure 3. We use a padding of $p = 0.3$ , which extends the default tangent image by $30\%$ in all directions.
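To make the projection step concrete, the sketch below samples one padded tangent image from an equirectangular input via the inverse gnomonic projection. It is a minimal NumPy illustration rather than the authors' implementation; the tangent point and the nearest-neighbour lookup are assumptions for brevity, while the $72^{\circ}$ base field of view, the padding $p = 0.3$ and the $400 \times 346$ tangent resolution mirror the settings used in this paper.

```python
import numpy as np

def tangent_image_from_erp(erp, lon0, lat0, fov_deg=72.0, padding=0.3, size=(346, 400)):
    """Sample one perspective tangent image from an equirectangular (ERP) image.

    lon0, lat0: tangent point (icosahedron face centroid) in radians
    fov_deg:    base field of view of one icosahedron face
    padding:    boundary extension p, as a fraction of the base extent
    size:       (height, width) of the tangent image
    """
    H, W = erp.shape[:2]
    h, w = size
    half = np.tan(np.radians(fov_deg) / 2.0) * (1.0 + padding)  # half-extent on the image plane
    xx, yy = np.meshgrid(np.linspace(-half, half, w), np.linspace(-half, half, h))

    # Inverse gnomonic projection: image-plane coordinates -> spherical (lon, lat).
    rho = np.sqrt(xx**2 + yy**2)
    c = np.arctan(rho)
    rho = np.where(rho == 0, 1e-12, rho)
    lat = np.arcsin(np.clip(np.cos(c) * np.sin(lat0)
                            + yy * np.sin(c) * np.cos(lat0) / rho, -1.0, 1.0))
    lon = lon0 + np.arctan2(xx * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c) - yy * np.sin(lat0) * np.sin(c))

    # Spherical coordinates -> ERP pixel coordinates (nearest-neighbour lookup for brevity).
    u = ((lon / (2 * np.pi) + 0.5) % 1.0) * (W - 1)
    v = (0.5 - lat / np.pi) * (H - 1)
    return erp[np.clip(v.round().astype(int), 0, H - 1),
               np.clip(u.round().astype(int), 0, W - 1)]

# Example: one tangent image of a random 2048x1024 ERP input; repeating this for
# the 20 face centroids yields the full set of overlapping tangent images.
erp = np.random.rand(1024, 2048, 3)
tangent = tangent_image_from_erp(erp, lon0=0.0, lat0=np.radians(26.57))
print(tangent.shape)  # (346, 400, 3)
```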
|
| 61 |
+
|
| 62 |
+

|
| 63 |
+
Figure 3. Each icosahedron face (thick triangle outline) is fit within a rectangular tangent image (blue) without padding, i.e. $p = 0$ . The green region shows a padding of $p = 0.1$ , and red shows $p = 0.2$ . Right: Equirectangular projection for two padded tangent images.
|
| 64 |
+
|
| 65 |
+

|
| 66 |
+
|
| 67 |
+
# 3.2. Tangent disparity map estimation
|
| 68 |
+
|
| 69 |
+
We use monocular depth estimation on each individual tangent image to predict dense disparity maps that will be aligned and merged in the next steps. Specifically, we use MiDaS v2 [56] and v3 [55] for their state-of-the-art performance for both indoor and outdoor images. Nevertheless, our framework is agnostic to the specific perspective monocular depth estimator and will benefit from future improvements.
|
| 70 |
+
|
| 71 |
+
MiDaS predicts disparity maps that correspond to inverse depth, but with an unknown scale factor and shift offset due to its scale- and shift-invariant training procedure. Our method works consistently in disparity space, as this improves the numerical stability during the optimisation in Section 3.3, particularly for distant parts of the environment.
|
| 72 |
+
|
| 73 |
+
Perspective to spherical disparity. Perspective disparity maps, as predicted by MiDaS, describe disparity estimates with respect to the viewing direction of a tangent image, i.e. the $z$ -component of a camera ray to a 3D point (in camera coordinates). However, each tangent image has a different viewing direction, so the definitions of disparity are incompatible between tangent images. In contrast, spherical disparity is the inverse (radial) Euclidean distance from the camera's centre of projection to a 3D point. This definition is consistent for all tangent images as they all share the same centre of projection. We convert the tangent disparity maps from perspective to spherical disparity, and from tangent image space to the equirectangular projection of the input image in preparation for the disparity map alignment step.
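A small sketch of this conversion for one tangent image, assuming a pinhole camera with focal length `f` and the principal point at the image centre; these intrinsics, and treating the perspective disparity as a metric inverse depth (MiDaS outputs are only affinely related to one), are simplifications for illustration.

```python
import numpy as np

def perspective_to_spherical_disparity(disp_persp, f, eps=1e-8):
    """Convert perspective disparity (1/z along the viewing axis) to
    spherical disparity (1/r, inverse radial distance from the camera centre)."""
    h, w = disp_persp.shape
    u = np.arange(w) - (w - 1) / 2.0          # pixel offsets from the principal point
    v = np.arange(h) - (h - 1) / 2.0
    uu, vv = np.meshgrid(u, v)
    ray_norm = np.sqrt((uu / f) ** 2 + (vv / f) ** 2 + 1.0)  # r / z for each pixel ray
    z = 1.0 / np.maximum(disp_persp, eps)     # perspective depth
    return 1.0 / (z * ray_norm)               # spherical disparity 1/r

# A constant-depth plane: spherical disparity falls off towards the image
# corners, where the rays to the plane are longer.
disp = np.full((346, 400), 0.5)               # z = 2 everywhere
sph = perspective_to_spherical_disparity(disp, f=275.0)
print(sph[173, 200], sph[0, 0])               # centre vs. corner
```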
|
| 74 |
+
|
| 75 |
+
# 3.3. Global disparity map alignment
|
| 76 |
+
|
| 77 |
+
The individual disparity maps $D(\cdot)$ estimated in the previous step may have inconsistent scales and offsets, as they are predicted independently from each other. Nonetheless, each individual prediction should by design correspond to the ground-truth disparity (i.e. inverse depth) subject to a different unknown affine transform (i.e. scale and offset). To ensure that disparity estimates are consistent with each other, we need to align them globally by finding suitable scale and offset values for each disparity map.
|
| 78 |
+
|
| 79 |
+
Our global disparity map alignment method is inspired by Hedman and Kopf's deformable depth alignment [26]. Instead of finding a constant scale and offset per disparity map, they use spatially varying affine adjustment fields. These adjustment fields are modelled as 2D grids of size $m \times n$ in tangent image space. Each grid-point $i$ stores a pair of scale and offset variables $(s^i, o^i)$ that are interpolated bilinearly across the tangent image domain. The rescaled disparity $\tilde{D}$ of a pixel at position $\mathbf{x}$ is computed using
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\tilde {D} (\mathbf {x}) = s (\mathbf {x}) D (\mathbf {x}) + o (\mathbf {x}), \tag {1}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where $s(\mathbf{x}) = \sum_{i} w_{i}(\mathbf{x}) s^{i}$ and $o(\mathbf{x}) = \sum_{i} w_{i}(\mathbf{x}) o^{i}$ are the interpolated scale and offset values, and $w_{i}(\mathbf{x})$ the bilinear interpolation weights for pixel location $\mathbf{x}$ .
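A minimal sketch of Equation 1: a small $m \times n$ grid of scale and offset variables is interpolated bilinearly to every pixel and applied to the tangent disparity map. The grid resolution here is illustrative.

```python
import numpy as np

def apply_adjustment_field(disp, scale_grid, offset_grid):
    """Rescale a disparity map with a bilinearly interpolated affine field (Eq. 1)."""
    H, W = disp.shape
    m, n = scale_grid.shape
    gy = np.linspace(0, m - 1, H)[:, None] * np.ones((1, W))   # continuous grid coords
    gx = np.ones((H, 1)) * np.linspace(0, n - 1, W)[None, :]
    y0, x0 = np.floor(gy).astype(int), np.floor(gx).astype(int)
    y1, x1 = np.minimum(y0 + 1, m - 1), np.minimum(x0 + 1, n - 1)
    wy, wx = gy - y0, gx - x0

    def bilerp(grid):  # sum_i w_i(x) * grid^i
        return ((1 - wy) * (1 - wx) * grid[y0, x0] + (1 - wy) * wx * grid[y0, x1]
                + wy * (1 - wx) * grid[y1, x0] + wy * wx * grid[y1, x1])

    return bilerp(scale_grid) * disp + bilerp(offset_grid)      # \tilde{D}(x)

# An identity field (unit scales, zero offsets) leaves the disparity map unchanged.
disp = np.random.rand(346, 400)
assert np.allclose(apply_adjustment_field(disp, np.ones((4, 3)), np.zeros((4, 3))), disp)
```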
|
| 86 |
+
|
| 87 |
+
To globally align all tangent disparity maps, we optimise for the affine adjustment fields that minimise the energy
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\underset{\{s_a^i,\, o_a^i\}}{\operatorname{argmin}}\; E_{\text{alignment}} + \lambda_{\text{smoothness}} E_{\text{smoothness}} + \lambda_{\text{scale}} E_{\text{scale}}, \tag{2}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
which trades off alignment with the spatial smoothness of adjustment fields and a scale regularisation term. We use $\lambda_{\mathrm{smoothness}} = 40$ and $\lambda_{\mathrm{scale}} = 0.007$ for all results.
|
| 94 |
+
|
| 95 |
+
Disparity alignment term. Once aligned, disparity maps should agree where they overlap as they represent the same region of a scene. Given the set $\mathcal{T}$ of tangent image indices, we create the set $\mathcal{Z} = \{(a,b)\mid a,b\in \mathcal{T},a < b\}$ of ordered pairs of tangent images and use $\Omega (a,b)$ to denote the set of overlapping pixels in images $a$ and $b$ . We quantify the alignment between rescaled disparity maps $\tilde{D}_a$ and $\tilde{D}_b$ using:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
E_{\text{alignment}} = \frac{1}{z_{\mathrm{a}}} \sum_{(a, b) \in \mathcal{Z}} \sum_{\mathbf{x} \in \Omega(a, b)} \left(\tilde{D}_{a}(\mathbf{x}) - \tilde{D}_{b}(\mathbf{x})\right)^{2}, \tag{3}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
where $z_{\mathrm{a}} = \sum_{(a,b)\in \mathcal{Z}}|\Omega (a,b)|$ is used for normalising by the number of considered pixel pairs. For efficiency, we only sample $1\%$ of pixels from the overlap regions $\Omega (a,b)$ .
|
| 102 |
+
|
| 103 |
+
Smoothness term. We encourage the deformable adjustment fields to be spatially smooth between neighbouring grid-points $i$ and $j$ using
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
E_{\text{smoothness}} = \frac{1}{z_{\mathrm{s}}} \sum_{a \in \mathcal{T}} \sum_{(i, j)} \left\| s_{a}^{i} - s_{a}^{j} \right\|_{2}^{2} + \left\| o_{a}^{i} - o_{a}^{j} \right\|_{2}^{2}, \tag{4}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where $z_{\mathrm{s}} = |\mathcal{T}| \cdot m \cdot n$ normalises by the number of grid-points in all tangent images.
|
| 110 |
+
|
| 111 |
+
Scale term. The final term regularises the scale to avoid a collapse to the trivial solution of scale $s = 0$ :
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
E_{\text{scale}} = \sum_{a \in \mathcal{T}} \sum_{i} \left(s_{a}^{i}\right)^{-1}. \tag{5}
|
| 115 |
+
$$
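For concreteness, the sketch below evaluates the three energy terms of Equations 3-5 for a given set of adjustment-field values, with the sampled overlap pixels and the per-tangent grids passed in precomputed; this data layout is an assumption. In the actual pipeline the objective is handed to a non-linear least-squares solver (see Section 4) rather than evaluated in NumPy.

```python
import numpy as np

def alignment_energy(rescaled, overlaps):
    """E_alignment (Eq. 3) over sampled overlap pixel pairs.
    rescaled: dict tangent index -> rescaled disparity map
    overlaps: dict (a, b) -> (idx_a, idx_b), index arrays of sampled overlap pixels."""
    total, count = 0.0, 0
    for (a, b), (ia, ib) in overlaps.items():
        diff = rescaled[a][ia] - rescaled[b][ib]
        total += np.sum(diff ** 2)
        count += diff.size
    return total / max(count, 1)

def smoothness_energy(scales, offsets):
    """E_smoothness (Eq. 4): squared differences between neighbouring grid-points."""
    total, count = 0.0, 0
    for s, o in zip(scales, offsets):              # one (m, n) grid per tangent image
        for g in (s, o):
            total += np.sum((g[1:, :] - g[:-1, :]) ** 2)   # vertical neighbours
            total += np.sum((g[:, 1:] - g[:, :-1]) ** 2)   # horizontal neighbours
        count += s.size
    return total / max(count, 1)

def scale_energy(scales):
    """E_scale (Eq. 5): penalises scales that collapse towards zero."""
    return sum(np.sum(1.0 / s) for s in scales)

def total_energy(rescaled, overlaps, scales, offsets, lam_smooth=40.0, lam_scale=0.007):
    return (alignment_energy(rescaled, overlaps)
            + lam_smooth * smoothness_energy(scales, offsets)
            + lam_scale * scale_energy(scales))

# Tiny usage with two tangent images and one sampled overlap region.
d = {0: np.random.rand(346, 400), 1: np.random.rand(346, 400)}
ov = {(0, 1): ((np.arange(100), np.arange(100)), (np.arange(100), np.arange(100)))}
print(total_energy(d, ov, scales=[np.ones((4, 3))] * 2, offsets=[np.zeros((4, 3))] * 2))
```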
|
| 116 |
+
|
| 117 |
+
Initialisation. We standardise the input spherical disparity maps to unit scale and zero offset [56] using
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
D^{\prime}(\mathbf{x}) = \frac{D(\mathbf{x}) - \operatorname{median}(D)}{|\mathcal{P}|^{-1} \sum_{\mathbf{x} \in \mathcal{P}} |D(\mathbf{x}) - \operatorname{median}(D)|} \tag{6}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
to pre-align their ranges, where $\mathcal{P}$ is the set of pixel coordinates. Similarly, we initialise the deformation fields to unit scale $s_a^i = 1$ and zero offset $o_{a}^{i} = 0$ for all $a$ and $i$ .
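A one-function sketch of the standardisation in Equation 6: each disparity map is centred on its median and divided by the mean absolute deviation from the median.

```python
import numpy as np

def standardise_disparity(disp, eps=1e-8):
    """Standardise a disparity map to zero median and unit scale (Eq. 6)."""
    med = np.median(disp)
    scale = np.mean(np.abs(disp - med))    # mean absolute deviation from the median
    return (disp - med) / max(scale, eps)

disp = np.random.rand(346, 400) * 5.0 + 2.0
d = standardise_disparity(disp)
print(np.median(d), np.mean(np.abs(d)))    # ~0 and ~1
```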
|
| 124 |
+
|
| 125 |
+
# 3.3.1 Multi-scale deformable alignment
|
| 126 |
+
|
| 127 |
+
Different from Hedman and Kopf, we perform deformable alignment at multiple scales, which we found to be beneficial for fine-tuning the global alignment. We start by optimising for a coarse deformation grid of $4 \times 3$ grid-points per tangent disparity map. We then apply these deformation fields to the disparity maps, and perform a new optimisation for an $8 \times 7$ grid without re-standardising the input disparity maps. We again apply these deformation fields to the disparity maps, and perform a final refinement with a grid size of $16 \times 14$.
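The coarse-to-fine schedule can be summarised as in the sketch below. The `optimise_fields` helper is only a placeholder for one alignment solve at a fixed grid resolution (the real method minimises Equation 2, e.g. with Ceres), and the nearest-neighbour upsampling of the grids is a simplification of the bilinear interpolation described above.

```python
import numpy as np

def optimise_fields(disparities, grid_shape):
    """Placeholder: returns identity adjustment fields so the schedule below runs."""
    m, n = grid_shape
    return ([np.ones((m, n)) for _ in disparities],
            [np.zeros((m, n)) for _ in disparities])

def apply_fields(disparities, scales, offsets):
    """Apply Eq. 1 with the grids upsampled (nearest-neighbour) to pixel resolution."""
    out = []
    for d, s, o in zip(disparities, scales, offsets):
        ys = np.linspace(0, s.shape[0] - 1, d.shape[0]).round().astype(int)
        xs = np.linspace(0, s.shape[1] - 1, d.shape[1]).round().astype(int)
        out.append(s[np.ix_(ys, xs)] * d + o[np.ix_(ys, xs)])
    return out

# Coarse-to-fine: 4x3 -> 8x7 -> 16x14 grid-points per tangent image, applying the
# fields between scales and without re-standardising the disparity maps.
disparities = [np.random.rand(346, 400) for _ in range(20)]
for grid in [(4, 3), (8, 7), (16, 14)]:
    scales, offsets = optimise_fields(disparities, grid)
    disparities = apply_fields(disparities, scales, offsets)
```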
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
Figure 4. Comparison of blending weights for icosahedron tangent images, in equirectangular projection. Vanilla tangent images [16] select estimates only from the nearest tangent image ('NN'). Mean weights average all overlapping tangent images per pixel. Radial weights start decaying at $15^{\circ}$ from the centre of projection. Frustum blending weights start decaying $30\%$ of the way in from each corner, diagonally towards the principal point. Notice that disparity maps blended using 'NN', 'mean' and 'radial' weights contain visible seams, which 'frustum' minimises.
|
| 131 |
+
|
| 132 |
+
# 3.4. Disparity map blending
|
| 133 |
+
|
| 134 |
+
After the alignment, the individual disparity maps need to be merged into a single spherical disparity map, similar to how multiple photos are merged into a panorama during stitching. NaΓ―vely merging the tangent disparity maps using nearest-neighbour ('NN') or averaging per-pixel ('mean') leads to undesirable seams, as shown in Figure 4. Using smoothly feathered blending weights [80] in the shape of a frustum reduces seams, but may produce blurrier results.
|
| 135 |
+
|
| 136 |
+
For the highest fidelity blending, we take inspiration from panorama stitching [68] and blend disparity maps in the gradient domain using Poisson blending [53]. Specifically, we look for the blended disparity map $B(\cdot)$ that minimises:
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
\begin{aligned} \underset{B}{\operatorname{argmin}} \;\; & \sum_{a \in \mathcal{T}} \sum_{\mathbf{x}} \omega_{a}(\mathbf{x}) \left\| \nabla B(\mathbf{x}) - \nabla \tilde{D}_{a}(\mathbf{x}) \right\|_{2}^{2} \\ & + \lambda_{\text{fidelity}} \cdot \sum_{\mathbf{x}} \left(B(\mathbf{x}) - D_{\mathrm{NN}}(\mathbf{x})\right)^{2}, \end{aligned} \tag{7}
|
| 140 |
+
$$
|
| 141 |
+
|
| 142 |
+
where $\omega_{a}(\mathbf{x})$ are the spatially varying 'frustum' blending weights that modulate the influence of pixels (see Figure 4), and $\lambda_{\mathrm{fidelity}} = 0.1$ is a weight to encourage the solution to stay close to the nearest-neighbour disparity map stitch $D_{\mathrm{NN}}$ .
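The sketch below assembles Equation 7 as a sparse linear least-squares problem on a small planar grid. Treating the domain as planar, taking the smaller endpoint weight as $\omega_a$ for each finite-difference gradient, and solving with SciPy's LSQR instead of BiCGSTAB on the normal equations are simplifications for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def blend_gradient_domain(tangent_disps, weights, d_nn, lam_fidelity=0.1):
    """Gradient-domain blending in the spirit of Eq. 7 on an H x W grid.
    tangent_disps: aligned disparity maps (NaN where a tangent image has no coverage)
    weights:       blending weights (zero outside the coverage)
    d_nn:          nearest-neighbour stitch used by the fidelity term"""
    H, W = d_nn.shape
    N = H * W
    idx = np.arange(N).reshape(H, W)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0

    def add_gradient_rows(axis):
        nonlocal eq
        a_idx = idx[:, :-1] if axis == 1 else idx[:-1, :]
        b_idx = idx[:, 1:] if axis == 1 else idx[1:, :]
        for d, w in zip(tangent_disps, weights):
            g = d[:, 1:] - d[:, :-1] if axis == 1 else d[1:, :] - d[:-1, :]
            ww = np.minimum(w[:, 1:], w[:, :-1]) if axis == 1 else np.minimum(w[1:, :], w[:-1, :])
            valid = np.isfinite(g) & (ww > 0)
            ai, bi, gi, wi = a_idx[valid], b_idx[valid], g[valid], np.sqrt(ww[valid])
            e = np.arange(eq, eq + ai.size)
            rows += [e, e]; cols += [bi, ai]; vals += [wi, -wi]; rhs.append(wi * gi)
            eq += ai.size

    add_gradient_rows(axis=1)   # horizontal gradient constraints
    add_gradient_rows(axis=0)   # vertical gradient constraints

    e = np.arange(eq, eq + N)   # fidelity term: sqrt(lambda) * (B - D_NN)
    rows.append(e); cols.append(idx.ravel())
    vals.append(np.full(N, np.sqrt(lam_fidelity)))
    rhs.append(np.sqrt(lam_fidelity) * d_nn.ravel())
    eq += N

    A = sp.csr_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(eq, N))
    return spla.lsqr(A, np.concatenate(rhs))[0].reshape(H, W)

# Two constant maps (values 1 and 2) overlapping in the middle columns blend
# into a smooth ramp rather than a hard seam.
d0, d1 = np.full((8, 16), 1.0), np.full((8, 16), 2.0)
w0 = np.zeros((8, 16)); w0[:, :10] = 1.0
w1 = np.zeros((8, 16)); w1[:, 6:] = 1.0
blended = blend_gradient_domain([d0, d1], [w0, w1], d_nn=np.where(w0 >= w1, d0, d1))
print(np.round(blended[0], 2))
```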
|
| 143 |
+
|
| 144 |
+
# 4. Experiments and Results
|
| 145 |
+
|
| 146 |
+
Implementation. When processing equirectangular images at a resolution of $2048 \times 1024$ pixels, we use the 20 tangent images of an icosahedron. We project each tangent image using a padding of $p = 0.3$ to a resolution of $400 \times 346$ pixels. This closely matches the $384 \times 384$ training resolution used by MiDaS v2/v3 [55, 56], for which we use the authors'
|
| 147 |
+
|
| 148 |
+
implementation. We solve the global disparity map alignment problem in Equation 2 using the Ceres non-linear least-squares solver [1]. Specifically, we perform L-BFGS line search for 50 iterations at each scale. The gradient-based disparity map blending in Equation 7 is a large sparse least-squares problem that we solve using Eigen's biconjugate gradient stabilized solver (BiCGSTAB) [25]. As the Matterport3D dataset [6] does not include the top and bottom regions of the scene, we exclude a circular region of radius $25^{\circ}$ at the top and bottom from our alignment step.
|
| 149 |
+
|
| 150 |
+
Datasets. For benchmarking, we use equirectangular input images and ground-truth depth maps created from the Matterport3D [6] and Replica [63] datasets. These datasets contain indoor environments reconstructed as a textured mesh and thus provide ground-truth depth. We also show qualitative results on varied outdoor images from OmniPhotos [5], for which no ground-truth depth maps are available.
|
| 151 |
+
|
| 152 |
+
Matterport3D [6] is a real indoor dataset that comprises 10,800 panoramic images. Unfortunately, the poses of these 'skybox' images relative to the mesh reconstruction are not provided, which prevents rendering aligned ground-truth depth maps. Previous work overcame this by rendering both images and depth maps from the textured mesh [87]. However, the image quality of these synthetic images is worse than the real skybox images, particularly at the $2048 \times 1024$ resolution we are targeting. We therefore estimate the poses for the real skybox images relative to the mesh using $360^{\circ}$ structure-from-motion [50] applied to a mixture of real and rendered skybox images at known camera positions. The estimated camera poses allow us to render ground-truth depth maps with pixel accuracy from the provided scene mesh.
|
| 153 |
+
|
| 154 |
+
Table 1. Quantitative results for Matterport3D-2K and Replica360-2K, at ${2048} \times {1024}$ with Poisson blending. Highlighting: best, second-best.
|
| 155 |
+
Matterport3D-2K
|
| 156 |
+
Replica360-2K
|
| 157 |
+
|
| 158 |
+
<table><tr><td>Method</td><td>AbsRel↓</td><td>MAE↓</td><td>RMSE↓</td><td>RMSE-log↓</td><td>δ<1.25↑</td><td>δ<1.25²↑</td><td>δ<1.25³↑</td><td>AbsRel↓</td><td>MAE↓</td><td>RMSE↓</td><td>RMSE-log↓</td><td>δ<1.25↑</td><td>δ<1.25²↑</td><td>δ<1.25³↑</td></tr><tr><td>OmniDepth [87]</td><td>0.473</td><td>0.946</td><td>1.317</td><td>0.212</td><td>0.378</td><td>0.647</td><td>0.820</td><td>0.352</td><td>0.589</td><td>0.787</td><td>0.168</td><td>0.479</td><td>0.776</td><td>0.906</td></tr><tr><td>BiFuse [74]</td><td>0.321</td><td>0.649</td><td>0.994</td><td>0.158</td><td>0.564</td><td>0.802</td><td>0.910</td><td>0.318</td><td>0.468</td><td>0.663</td><td>0.152</td><td>0.591</td><td>0.840</td><td>0.927</td></tr><tr><td>HoHoNet<sup>M</sup> [67]</td><td>0.227</td><td>0.430</td><td>0.686</td><td>0.132</td><td>0.723</td><td>0.887</td><td>0.946</td><td>0.259</td><td>0.381</td><td>0.520</td><td>0.131</td><td>0.672</td><td>0.888</td><td>0.942</td></tr><tr><td>HoHoNet<sup>S</sup> [67]</td><td>0.234</td><td>0.487</td><td>0.736</td><td>0.120</td><td>0.654</td><td>0.886</td><td>0.959</td><td>0.221</td><td>0.355</td><td>0.480</td><td>0.112</td><td>0.701</td><td>0.905</td><td>0.960</td></tr><tr><td>UniFuse<sup>M</sup> [30]</td><td>0.200</td><td>0.396</td><td>0.652</td><td>0.113</td><td>0.769</td><td>0.908</td><td>0.958</td><td>0.233</td><td>0.330</td><td>0.474</td><td>0.120</td><td>0.728</td><td>0.905</td><td>0.954</td></tr><tr><td>Ours<sup>M2</sup> (single-scale)</td><td>0.223</td><td>0.491</td><td>0.828</td><td>0.129</td><td>0.619</td><td>0.867</td><td>0.953</td><td>0.182</td><td>0.412</td><td>0.732</td><td>0.095</td><td>0.750</td><td>0.935</td><td>0.971</td></tr><tr><td>Ours<sup>M3</sup> (single-scale)</td><td>0.210</td><td>0.476</td><td>0.840</td><td>0.121</td><td>0.656</td><td>0.889</td><td>0.958</td><td>0.192</td><td>0.447</td><td>0.805</td><td>0.100</td><td>0.737</td><td>0.925</td><td>0.969</td></tr><tr><td>Ours<sup>M2</sup> (multi-scale)</td><td>0.224</td><td>0.494</td><td>0.831</td><td>0.130</td><td>0.616</td><td>0.866</td><td>0.953</td><td>0.167</td><td>0.364</td><td>0.619</td><td>0.089</td><td>0.769</td><td>0.948</td><td>0.981</td></tr><tr><td>Ours<sup>M3</sup> (multi-scale)</td><td>0.208</td><td>0.446</td><td>0.791</td><td>0.119</td><td>0.656</td><td>0.890</td><td>0.961</td><td>0.198</td><td>0.465</td><td>0.841</td><td>0.103</td><td>0.730</td><td>0.920</td><td>0.965</td></tr></table>
|
| 159 |
+
|
| 160 |
+
$^{\mathrm{M}}$ Trained on Matterport3D [6]
|
| 161 |
+
$^{\mathrm{S}}$ Trained on Stanford 2D-3D-S [2]
|
| 162 |
+
$^{\mathrm{M2}}$ Using MiDaS v2 [56]
|
| 163 |
+
$^{\mathrm{M3}}$ Using MiDaS v3 [55]
|
| 164 |
+
|
| 165 |
+
From the original test split of Matterport3D with 2,014 samples, we managed to estimate accurate camera poses for 1,850 (92%) skybox images, and rendered the aligned ground-truth depth maps at $2048 \times 1024$ resolution. We will make skybox poses and ground-truth depth maps available.
|
| 166 |
+
|
| 167 |
+
To assess the generalisation capability and scalability of our framework against baselines, we also evaluate on $360^{\circ}$ RGBD data from the Replica dataset [63], which features high-quality indoor room scans that have not been used for training any method. For 13 rooms, we rendered 10 images and ground-truth depth maps at $2048 \times 1024$ and $4096 \times 2048$ resolution with random poses using the Replica360 renderer [3], for a total of 130 samples each.
|
| 168 |
+
|
| 169 |
+
Baselines. We compare our results to OmniDepth [87], BiFuse [74], HoHoNet [67] and UniFuse [30] using the authors' public implementations and pretrained weights. OmniDepth is trained for $512 \times 256$ input, while the other methods are for $1024 \times 512$ . For each method, we downscale the input images to match the expected resolution, and upsample the estimated depth map bilinearly to the input image resolution.
|
| 170 |
+
|
| 171 |
+
Metrics. We use the standard evaluation metrics adopted for monocular depth estimation evaluation [17]. Although our method operates in disparity space, we report metrics in depth space for fair comparisons with baselines. Please see our supplemental document for details.
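For reference, a generic implementation of these commonly used formulas [17] is sketched below; the masking and the scale alignment of predictions to the ground truth before evaluation are method-specific and are described in the supplemental document, not here.

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Standard monocular depth metrics: AbsRel, MAE, RMSE, RMSE-log and
    delta-threshold accuracies, computed over valid ground-truth pixels."""
    if mask is None:
        mask = (gt > 0) & np.isfinite(gt) & (pred > 0)
    p, g = pred[mask], gt[mask]
    ratio = np.maximum(p / g, g / p)
    return {
        "AbsRel":   np.mean(np.abs(p - g) / g),
        "MAE":      np.mean(np.abs(p - g)),
        "RMSE":     np.sqrt(np.mean((p - g) ** 2)),
        "RMSE-log": np.sqrt(np.mean((np.log(p) - np.log(g)) ** 2)),
        "delta<1.25":   np.mean(ratio < 1.25),
        "delta<1.25^2": np.mean(ratio < 1.25 ** 2),
        "delta<1.25^3": np.mean(ratio < 1.25 ** 3),
    }

gt = np.random.rand(256, 512) * 5 + 0.5      # synthetic ground-truth depth in metres
print(depth_metrics(gt * 1.05, gt))          # a prediction that is 5% too far everywhere
```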
|
| 172 |
+
|
| 173 |
+
# 4.1. Quantitative evaluation
|
| 174 |
+
|
| 175 |
+
Table 1 shows the quantitative comparison of our method to the baselines on the Matterport3D-2K and Replica360-2K test sets. Matterport3D is often used for training and evaluating $360^{\circ}$ monodepth methods. Indeed, methods trained on it (HoHoNet, UniFuse) tend to perform best. Our method produces competitive results (in several metrics) without any training on Matterport3D, while producing depth maps at a higher resolution and level of detail (see Figures 5 and 6). Replica360 has not been used for training any method, so we can use it to measure generalisation to unseen data. In
|
| 176 |
+
|
| 177 |
+
Table 2. Quantitative results for Replica360-4K at $4096 \times 2048$ with frustum blending (best trade-off between runtime and performance). For superscripts, see Table 1. Highlighting: best, second-best.
|
| 178 |
+
|
| 179 |
+
<table><tr><td>Method</td><td>AbsRel↓</td><td>MAE↓</td><td>RMSE↓</td><td>RMSE-log↓</td><td>δ<1.25↑</td><td>δ<1.25²↑</td><td>δ<1.25³↑</td></tr><tr><td>OmniDepth</td><td>0.337</td><td>0.582</td><td>0.778</td><td>0.161</td><td>0.484</td><td>0.785</td><td>0.920</td></tr><tr><td>BiFuse</td><td>0.292</td><td>0.445</td><td>0.637</td><td>0.143</td><td>0.606</td><td>0.857</td><td>0.941</td></tr><tr><td>HoHoNet<sup>M</sup></td><td>0.251</td><td>0.379</td><td>0.509</td><td>0.127</td><td>0.670</td><td>0.884</td><td>0.948</td></tr><tr><td>HoHoNet<sup>S</sup></td><td>0.208</td><td>0.335</td><td>0.455</td><td>0.106</td><td>0.728</td><td>0.909</td><td>0.961</td></tr><tr><td>UniFuse<sup>M</sup></td><td>0.223</td><td>0.324</td><td>0.464</td><td>0.116</td><td>0.744</td><td>0.910</td><td>0.959</td></tr><tr><td>Ours<sup>M2</sup> (multi-scale)</td><td>0.150</td><td>0.335</td><td>0.558</td><td>0.081</td><td>0.813</td><td>0.953</td><td>0.983</td></tr><tr><td>Ours<sup>M3</sup> (multi-scale)</td><td>0.161</td><td>0.363</td><td>0.607</td><td>0.085</td><td>0.781</td><td>0.951</td><td>0.984</td></tr></table>
|
| 180 |
+
|
| 181 |
+
most metrics, our approach clearly outperforms the baselines, which struggle to generalise to this new dataset. The other two metrics, MAE and RMSE, are closely related to the L1 and BerHu (mixed L1/L2) losses used for training HoHoNet [67] and UniFuse [30], respectively, which explains these methods' better performance in these specific metrics. We further show results at 4K resolution in Table 2. Our results improved across all metrics compared to 2K resolution, and our approach ranks as top-2 in 6 out of 7 metrics, up from 5 out of 7 at 2K resolution (8% improvement in MAE). This shows that our method robustly scales to higher resolutions.
|
| 182 |
+
|
| 183 |
+
# 4.2. Qualitative comparisons
|
| 184 |
+
|
| 185 |
+
We show qualitative comparisons in Figure 5, 6 and 7, and our supplemental results website. For datasets with available ground-truth depth maps, we show depth maps, otherwise disparity maps. On Matterport3D, our results are mostly on par with UniFuse (best in Table 1). On Replica360, our results show fewer errors and cleaner surfaces. Our approach clearly outperforms the baselines on the outdoor OmniPhotos, as no baseline is trained on outdoor data. Our results show the highest level of detail and the sharpest depth edges.
|
| 186 |
+
|
| 187 |
+
# 4.3. Ablation studies
|
| 188 |
+
|
| 189 |
+
We perform two ablation studies to test our design choices in the disparity map alignment and blending stages of our
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
Figure 5. Qualitative comparison to different methods on different datasets. Our results show the highest level of detail of all predictions.
|
| 193 |
+
|
| 194 |
+
method, summarised in Table 3. Our multi-scale alignment and Poisson blending approaches outperform other alternatives. In particular, our alignment step substantially outperforms the "No alignment" of Eder et al. [16] across all metrics. Both deformable multi-scale alignment and blending are necessary for the best results.
|
| 195 |
+
|
| 196 |
+
# 5. Discussion and Conclusion
|
| 197 |
+
|
| 198 |
+
Our method can fail if the tangent disparity estimates are incorrect, e.g. for large plain walls, saturated skies, or photorealistic wallpapers. As these estimates improve over time, our method can take advantage of them. In some cases, the least-squares rescaling to fit the ground-truth disparity results in negative disparities, which produces incorrect, negative depth values. We also saw inconsistencies in the ground-truth depth maps, such as mirrors or missing lamps or chandeliers that are visible in the image. We show examples of these failure cases in the supplemental document.
|
| 199 |
+
|
| 200 |
+
Table 3. Ablation studies for disparity map alignment (top) and blending (bottom), evaluated on the Matterport3D test set. Multiscale deformable alignment outperforms all single-scale alignments across all metrics when using MiDaS v3. Gradient-based Poisson blending outperforms simpler blending modes in all but one metric when using MiDaS v2. Highlighting: best, second-best.
|
| 201 |
+
|
| 202 |
+
<table><tr><td>Method</td><td>AbsRel↓</td><td>MAE↓</td><td>RMSE↓</td><td>RMSE-log↓</td><td>δ<1.25↑</td><td>δ<1.25²↑</td><td>δ<1.25³↑</td></tr><tr><td>No alignment<sup>M3</sup></td><td>0.259</td><td>0.600</td><td>0.969</td><td>0.150</td><td>0.532</td><td>0.821</td><td>0.933</td></tr><tr><td>2×2 single-scale<sup>M3</sup></td><td>0.210</td><td>0.476</td><td>0.838</td><td>0.122</td><td>0.654</td><td>0.888</td><td>0.959</td></tr><tr><td>4×3 single-scale<sup>M3</sup></td><td>0.210</td><td>0.475</td><td>0.838</td><td>0.121</td><td>0.655</td><td>0.889</td><td>0.959</td></tr><tr><td>8×7 single-scale<sup>M3</sup></td><td>0.210</td><td>0.476</td><td>0.840</td><td>0.121</td><td>0.656</td><td>0.889</td><td>0.958</td></tr><tr><td>16×14 single-scale<sup>M3</sup></td><td>0.231</td><td>0.528</td><td>0.905</td><td>0.134</td><td>0.609</td><td>0.859</td><td>0.944</td></tr><tr><td>multi-scale<sup>M3</sup></td><td>0.208</td><td>0.446</td><td>0.791</td><td>0.119</td><td>0.656</td><td>0.890</td><td>0.961</td></tr><tr><td>NN blending<sup>M2</sup></td><td>0.226</td><td>0.501</td><td>0.841</td><td>0.131</td><td>0.611</td><td>0.864</td><td>0.952</td></tr><tr><td>Mean blending<sup>M2</sup></td><td>0.230</td><td>0.501</td><td>0.828</td><td>0.132</td><td>0.601</td><td>0.859</td><td>0.952</td></tr><tr><td>Frustum blending<sup>M2</sup></td><td>0.229</td><td>0.499</td><td>0.826</td><td>0.131</td><td>0.604</td><td>0.861</td><td>0.953</td></tr><tr><td>Poisson blending<sup>M2</sup></td><td>0.224</td><td>0.494</td><td>0.831</td><td>0.130</td><td>0.616</td><td>0.866</td><td>0.953</td></tr></table>
|
| 203 |
+
|
| 204 |
+
$^{\mathrm{M2}}$ Using MiDaS v2 [56] $^{\mathrm{M3}}$ Using MiDaS v3 [55]
|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
Matterport3D-2K [6]
|
| 208 |
+
Replica360-2K [3, 63]
|
| 209 |
+
Figure 6. Estimated $360^{\circ}$ depth maps at 2K resolution for indoor environments. Our results are closer to the ground-truth depth maps.
|
| 210 |
+
|
| 211 |
+

|
| 212 |
+
Figure 7. Estimated $360^{\circ}$ disparity maps at $2048\times 1024$ for outdoor environments [5]. Our results are more consistent geometrically.
|
| 213 |
+
|
| 214 |
+
We found in our experiments that blending disparity maps with the 'frustum' weights (see Figure 4) usually produces
|
| 215 |
+
|
| 216 |
+
results that are nearly as good as (see Table 3) but considerably faster than the Poisson blending of our complete method. This is a good compromise if speed is of the essence. Concurrently with our work, Li et al. [40] use transformers for aligning and blending tangent depth maps based on predicted confidence.
|
| 217 |
+
|
| 218 |
+
Our proposed framework is the first to deal with high-resolution $360^{\circ}$ images, and is not limited to indoor scenes. Projecting the spherical input image onto a set of tangent images lets us overcome both the distortions of spherical projections and the resolution limits of deep monocular depth estimation methods. We proposed specially tailored optimisation techniques for global deformable multi-scale alignment and gradient-domain blending of the individual tangent disparity maps to overcome the discontinuous nature of tangent images. A major advantage of our approach is that we can leverage the high performance of MiDaS (or any future method) to generalise to new $360^{\circ}$ datasets with higher accuracy and resolution than previous approaches. The resulting disparity maps at 2K resolution show a high level of geometric detail for both indoor and outdoor scenes.
|
| 219 |
+
|
| 220 |
+
Acknowledgements. This work was supported by the EPSRC CDT in Digital Entertainment (EP/L016540/1), an EPSRC-UKRI Innovation Fellowship (EP/S001050/1) and EPSRC grant CAMERA (EP/M023281/1, EP/T022523/1).
|
| 221 |
+
|
| 222 |
+
# References
|
| 223 |
+
|
| 224 |
+
[1] Sameer Agarwal, Keir Mierle, and Others. Ceres solver. http://ceres-solver.org, 2012.
|
| 225 |
+
[2] Iro Armeni, Sasha Sax, Amir R. Zamir, and Silvio Savarese. Joint 2D-3D-semantic data for indoor scene understanding. arXiv:1702.01105, 2017.
|
| 226 |
+
[3] Benjamin Attal, Selena Ling, Aaron Gokaslan, Christian Richardt, and James Tompkin. MatryODShka: Real-time 6DoF video view synthesis using multi-sphere images. In ECCV, 2020.
|
| 227 |
+
[4] Jiayang Bai, Shuichang Lai, Haoyu Qin, Jie Guo, and Yanwen Guo. GLPanoDepth: Global-to-local panoramic depth estimation. arXiv:2202.02796, 2022.
|
| 228 |
+
[5] Tobias Bertel, Mingze Yuan, Reuben Lindroos, and Christian Richardt. OmniPhotos: Casual $360^{\circ}$ VR photography. ACM Trans. Graph., 39(6):267:1-12, 2020.
|
| 229 |
+
[6] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. In 3DV, pages 667-676, 2017.
|
| 230 |
+
[7] Hong-Xiang Chen, Kunhong Li, Zhiheng Fu, Mengyi Liu, Zonghao Chen, and Yulan Guo. Distortion-aware monocular depth estimation for omnidirectional images. IEEE Signal Processing Letters, 28:334-338, 2021.
|
| 231 |
+
[8] Hsien-Tzu Cheng, Chun-Hung Chao, Jin-Dong Dong, Hao-Kai Wen, Tyng-Luh Liu, and Min Sun. Cube padding for weakly-supervised saliency prediction in $360^{\circ}$ videos. In CVPR, pages 1420-1429, 2018.
|
| 232 |
+
[9] Taco S. Cohen, Mario Geiger, Jonas Koehler, and Max Welling. Spherical CNNs. In ICLR, 2018.
|
| 233 |
+
[10] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. In ICML, 2019.
|
| 234 |
+
[11] Benjamin Coors, Alexandru Paul Condurache, and Andreas Geiger. SphereNet: Learning spherical representations for detection and classification in omnidirectional images. In ECCV, pages 518-533, 2018.
|
| 235 |
+
[12] James J. Cummings and Jeremy N. Bailenson. How immersive is enough? a meta-analysis of the effect of immersive technology on user presence. *Media Psychology*, 19(2):272–309, 2016.
|
| 236 |
+
[13] Thiago Lopes Trugillo da Silveira and Claudio R. Jung. Dense 3D scene reconstruction from multiple spherical images for 3-DoF+ VR applications. In IEEE VR, pages 9–18, 2019.
|
| 237 |
+
[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth $16 \times 16$ words: Transformers for image recognition at scale. In ICLR, 2021.
|
| 238 |
+
[15] Marc Eder, Pierre Moulon, and Li Guan. Pano popups: Indoor 3D reconstruction with a plane-aware network. In 3DV, pages 76-84, 2019.
|
| 239 |
+
[16] Marc Eder, Mykhailo Shvets, John Lim, and Jan-Michael Frahm. Tangent images for mitigating spherical distortion. In CVPR, 2020.
|
| 240 |
+
|
| 241 |
+
[17] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS, pages 2366-2374, 2014.
|
| 242 |
+
[18] Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning SO(3) equivariant representations with spherical CNNs. In ECCV, pages 52β68, 2018.
|
| 243 |
+
[19] Clara Fernandez-Labrador, Jose M. Facil, Alejandro Perez-Yus, Cédric Demonceaux, Javier Civera, and Jose J. Guerrero. Corners for layout: End-to-end layout recovery from 360 images. IEEE Robotics and Automation Letters, 5(2):1255-1262, 2020.
|
| 244 |
+
[20] Chen Gao, Ayush Saraf, Johannes Kopf, and Jia-Bin Huang. Dynamic view synthesis from dynamic monocular video. In ICCV, 2021.
|
| 245 |
+
[21] Ravi Garg, Vijay Kumar B G, Gustavo Carneiro, and Ian Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In ECCV, 2016.
|
| 246 |
+
[22] Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, pages 6602-6611, 2017.
|
| 247 |
+
[23] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel Brostow. Digging into self-supervised monocular depth estimation. In ICCV, 2019.
|
| 248 |
+
[24] Ariel Gordon, Hanhan Li, Rico Jonschkowski, and Anelia Angelova. Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras. In ICCV, pages 8977-8986, 2019.
|
| 249 |
+
[25] Gael Guennebaud, Benoit Jacob, and Others. Eigen v3. https://eigen.tuxfamily.org, 2010.
|
| 250 |
+
[26] Peter Hedman and Johannes Kopf. Instant 3D photography. ACM Trans. Graph., 37(4):101:1-12, 2018.
|
| 251 |
+
[27] Derek Hoiem, Alexei A. Efros, and Martial Hebert. Automatic photo pop-up. ACM Trans. Graph., 24(3):577-584, 2005.
|
| 252 |
+
[28] Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin. 6-DOF VR videos with a single 360-camera. In IEEE VR, pages 37-44, 2017.
|
| 253 |
+
[29] Sunghoon Im, Hyowon Ha, François Rameau, Hae-Gon Jeon, Gyeongmin Choe, and In So Kweon. All-around depth from small motion with a spherical panoramic camera. In ECCV, 2016.
|
| 254 |
+
[30] Hualie Jiang, Zhe Sheng, Siyu Zhu, Zilong Dong, and Rui Huang. UniFuse: Unidirectional fusion for $360^{\circ}$ panorama depth estimation. IEEE Robotics and Automation Letters, 6 (2):1519-1526, 2021.
|
| 255 |
+
[31] Lei Jin, Yanyu Xu, Jia Zheng, Junfei Zhang, Rui Tang, Shugong Xu, Jingyi Yu, and Shenghua Gao. Geometric structure based and regularized depth estimation from 360 indoor imagery. In CVPR, pages 886-895, 2020.
|
| 256 |
+
[32] Kevin Karsch, Ce Liu, and Sing Bing Kang. Depth transfer: Depth extraction from video using non-parametric sampling. TPAMI, 36(11):2144-2158, 2014.
|
| 257 |
+
[33] Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, and Michael Cohen. One shot 3D photography. ACM Trans. Graph., 39(4):76:1-13, 2020.
|
| 258 |
+
|
| 259 |
+
[34] George Alex Koulieris, Kaan Akşit, Michael Stengel, Rafał K. Mantiuk, Katerina Mania, and Christian Richardt. Near-eye display and tracking technologies for virtual and augmented reality. Comput. Graph. Forum, 38(2):493-519, 2019.
|
| 260 |
+
[35] Po Kong Lai, Shuang Xie, Jochen Lang, and Robert Laganière. Real-time panoramic depth maps from omni-directional stereo images for 6 DoF videos in virtual reality. In IEEE VR, pages 405-412, 2019.
|
| 261 |
+
[36] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 3DV, 2016.
|
| 262 |
+
[37] Yeonkun Lee, Jaeseok Jeong, Jongseob Yun, Wonjune Cho, and Kuk-Jin Yoon. SpherePHD: Applying CNNs on $360^{\circ}$ images with non-Euclidean spherical PolyHeDron representation. TPAMI, 44(2):834-847, 2022.
|
| 263 |
+
[38] Junxuan Li, Hongdong Li, and Yasuyuki Matsushita. Lighting, reflectance and geometry estimation from $360^{\circ}$ panoramic stereo. In CVPR, 2021.
|
| 264 |
+
[39] Yuyan Li, Zhixin Yan, Ye Duan, and Liu Ren. PanoDepth: A two-stage approach for monocular omnidirectional depth estimation. In 3DV, 2021.
|
| 265 |
+
[40] Yuyan Li, Yuliang Guo, Zhixin Yan, Xinyu Huang, Ye Duan, and Liu Ren. OmniFusion: 360 monocular depth estimation via geometry-aware fusion. In CVPR, 2022.
|
| 266 |
+
[41] Zhengqi Li and Noah Snavely. MegaDepth: Learning single-view depth prediction from internet photos. In CVPR, pages 2041-2050, 2018.
|
| 267 |
+
[42] Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, and William T. Freeman. MannequinChallenge: Learning the depths of moving people by watching frozen people. TPAMI, 43(12):4229-4241, 2021.
|
| 268 |
+
[43] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In CVPR, 2021.
|
| 269 |
+
[44] Miaomiao Liu, Mathieu Salzmann, and Xuming He. Discrete-continuous depth estimation from a single image. In CVPR, 2014.
|
| 270 |
+
[45] Thibault Louis, Jocelyne Troccaz, Amélie Rochet-Capellan, and François Bérard. Is it real? Measuring the effect of resolution, latency, frame rate and jitter on the presence of virtual entities. In International Conference on Interactive Surfaces and Spaces (ISS), pages 5-16, 2019.
|
| 271 |
+
[46] Chenxu Luo, Zhenheng Yang, Peng Wang, Yang Wang, Wei Xu, Ram Nevatia, and Alan Yuille. Every pixel counts ++: Joint learning of geometry and motion with 3D holistic understanding. TPAMI, 42(10):2624-2641, 2020.
|
| 272 |
+
[47] Yue Luo, Jimmy Ren, Mude Lin, Jiahao Pang, Wenxiu Sun, Hongsheng Li, and Liang Lin. Single view stereo matching. In CVPR, 2018.
|
| 273 |
+
[48] Reza Mahjourian, Martin Wicke, and Anelia Angelova. Unsupervised learning of depth and ego-motion from monocular video using 3D geometric constraints. In CVPR, pages 5667-5675, 2018.
|
| 274 |
+
[49] S. Mahdi H. Miangoleh, Sebastian Dille, Long Mai, Sylvain Paris, and Yağız Aksoy. Boosting monocular depth estimation models to high-resolution via content-adaptive multi-resolution merging. In CVPR, 2021.
|
| 275 |
+
|
| 276 |
+
[50] Pierre Moulon, Pascal Monasse, Romuald Perrot, and Renaud Marlet. OpenMVG: Open multiple view geometry. In International Workshop on Reproducible Research in Pattern Recognition, pages 60-74, 2016.
|
| 277 |
+
[51] Grégoire Payen de La Garanderie, Amir Atapour Abarghouei, and Toby P. Breckon. Eliminating the blind spot: Adapting 3D object detection and monocular depth estimation to $360^{\circ}$ panoramic imagery. In ECCV, pages 789-807, 2018.
|
| 278 |
+
[52] Giovanni Pintore, Marco Agus, Eva Almansa, Jens Schneider, and Enrico Gobbetti. SliceNet: Deep dense depth estimation from a single indoor panorama using a slice-based representation. In CVPR, pages 11531-11540, 2021.
|
| 279 |
+
[53] Patrick Pérez, Michel Gangnet, and Andrew Blake. Poisson image editing. ACM Trans. Graph., 22(3):313-318, 2003.
|
| 280 |
+
[54] Michael Ramamonjisoa, Michael Firman, Jamie Watson, Vincent Lepetit, and Daniyar Turmukhambetov. Single image depth estimation using wavelet decomposition. In CVPR, 2021.
|
| 281 |
+
[55] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, pages 12179-12188, 2021.
|
| 282 |
+
[56] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. TPAMI, 2021.
|
| 283 |
+
[57] Anurag Ranjan, Varun Jampani, Lukas Balles, Kihwan Kim, Deqing Sun, Jonas Wulff, and Michael J. Black. Competitive collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In CVPR, 2018.
|
| 284 |
+
[58] Ashutosh Saxena, Min Sun, and Andrew Y. Ng. Make3D: Learning 3D scene structure from a single still image. TPAMI, 31(5):824-840, 2009.
|
| 285 |
+
[59] Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, and Belen Masia. Motion parallax for $360^{\circ}$ RGBD video. TVCG, 25(5):1817-1827, 2019.
|
| 286 |
+
[60] Jianping Shi, Xin Tao, Li Xu, and Jiaya Jia. Break Ames room illusion: Depth from general single images. ACM Trans. Graph., 34(6):225:1-11, 2015.
|
| 287 |
+
[61] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3D photography using context-aware layered depth inpainting. In CVPR, 2020.
|
| 288 |
+
[62] Pratul P. Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, and Jonathan T. Barron. Aperture supervision for monocular depth estimation. In CVPR, 2018.
|
| 289 |
+
[63] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard Newcombe. The Replica dataset: A digital replica of indoor spaces. arXiv:1906.05797, 2019.
|
| 290 |
+
[64] Yu-Chuan Su and Kristen Grauman. Learning spherical convolution for fast features from $360^{\circ}$ imagery. In NIPS, 2017.
|
| 291 |
+
|
| 292 |
+
[65] Yu-Chuan Su and Kristen Grauman. Kernel transformer networks for compact spherical convolution. In CVPR, pages 9442-9451, 2019.
|
| 293 |
+
[66] Shinya Sumikura, Mikiya Shibuya, and Ken Sakurada. OpenVSLAM: A versatile visual SLAM framework. In Proceedings of the ACM International Conference on Multimedia, 2019.
|
| 294 |
+
[67] Cheng Sun, Min Sun, and Hwann-Tzong Chen. HoHoNet: 360 indoor holistic understanding with latent horizontal features. In CVPR, pages 2573-2582, 2021.
|
| 295 |
+
[68] Richard Szeliski. Image alignment and stitching: a tutorial. Foundations and Trends in Computer Graphics and Vision, 2 (1):1-104, 2006.
|
| 296 |
+
[69] Keisuke Tateno, Nassir Navab, and Federico Tombari. Distortion-aware convolutional filters for dense prediction in panoramic images. In ECCV, pages 732-750, 2018.
|
| 297 |
+
[70] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
|
| 298 |
+
[71] Chaoyang Wang, José Miguel Buenaposada, Rui Zhu, and Simon Lucey. Learning depth from monocular videos using direct methods. In CVPR, pages 2022-2030, 2018.
|
| 299 |
+
[72] Chaoyang Wang, Simon Lucey, Federico Perazzi, and Oliver Wang. Web stereo video supervision for depth prediction from dynamic scenes. In 3DV, 2019.
|
| 300 |
+
[73] Fu-En Wang, Hou-Ning Hu, Hsien-Tzu Cheng, Juan-Ting Lin, Shang-Ta Yang, Meng-Li Shih, Hung-Kuo Chu, and Min Sun. Self-supervised learning of depth and camera motion from $360^{\circ}$ videos. In ACCV, pages 53-68, 2018.
|
| 301 |
+
[74] Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, and Yi-Hsuan Tsai. BiFuse: Monocular 360 depth estimation via bi-projection fusion. In CVPR, pages 462-471, 2020.
|
| 302 |
+
[75] Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, and Yi-Hsuan Tsai. LED $^2$ -net: Monocular 360 layout estimation via differentiable depth rendering. In CVPR, 2021.
|
| 303 |
+
[76] Ning-Hsu Wang, Bolivar Solarte, Yi-Hsuan Tsai, Wei-Chen Chiu, and Min Sun. 360SD-net: $360^{\circ}$ stereo depth estimation with learnable cost volume. In ICRA, pages 582-588, 2020.
|
| 304 |
+
[77] Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, and Jongwoo Lim. OmniSLAM: Omnidirectional localization and dense mapping for wide-baseline multi-camera systems. In ICRA, pages 559-566, 2020.
|
| 305 |
+
[78] Ke Xian, Chunhua Shen, Zhiguo Cao, Hao Lu, Yang Xiao, Ruibo Li, and Zhenbo Luo. Monocular relative depth perception with web stereo data supervision. In CVPR, pages 311-320, 2018.
|
| 306 |
+
[79] Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, and Chunhua Shen. Learning to recover 3D scene shape from a single image. In CVPR, 2021.
|
| 307 |
+
[80] Mingze Yuan and Christian Richardt. $360^{\circ}$ optical flow using tangent images. In BMVC, 2021.
|
| 308 |
+
[81] Wei Zeng, Sezer Karaoglu, and Theo Gevers. Joint 3D layout and depth prediction from a single indoor panorama image. In ECCV, 2020.
|
| 309 |
+
[82] Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, and Ian Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In CVPR, 2018.
|
| 310 |
+
|
| 311 |
+
[83] Chao Zhang, Stephan Liwicki, William Smith, and Roberto Cipolla. Orientation-aware semantic segmentation on icosahedron spheres. In ICCV, pages 3533-3541, 2019.
|
| 312 |
+
[84] Qiang Zhao, Chen Zhu, Feng Dai, Yike Ma, Guoqing Jin, and Yongdong Zhang. Distortion-aware CNNs for spherical images. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1198-1204, 2018.
|
| 313 |
+
[85] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.
|
| 314 |
+
[86] Chuanqing Zhuang, Zhengda Lu, Yiqun Wang, Jun Xiao, and Ying Wang. ACDNet: Adaptively combined dilated convolution for monocular panorama depth estimation. In AAAI, 2022.
|
| 315 |
+
[87] Nikolaos Zioulis, Antonis Karakottas, Dimitrios Zarpalas, and Petros Daras. OmniDepth: Dense depth estimation for indoors spherical panoramas. In ECCV, pages 448-465, 2018.
|
| 316 |
+
[88] Nikolaos Zioulis, Antonis Karakottas, Dimitrios Zarpalas, Federico Alvarez, and Petros Daras. Spherical view synthesis for self-supervised $360^{\circ}$ depth estimation. In 3DV, pages 690-699, 2019.
|
360monodepthhighresolution360degmonoculardepthestimation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2dbc65a6d482eb84a77b8d9d25579975b2bf005bc1f6cf1ae230d00931f6bb9e
|
| 3 |
+
size 868597
|
360monodepthhighresolution360degmonoculardepthestimation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:df906c3c0299725a671ed94c74d3edfd39eff67f5fcc46e3236bf5483a5b6d46
|
| 3 |
+
size 445191
|
3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cd994da19189acecbd29a7ccd3ee7672dc4af58b2f4d8db13fbc0bdbbcc648cc
|
| 3 |
+
size 75397
|
3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7ed00b035d597821c904ae8306d17786f90e1f64075ee2ec08ea6b214597d779
|
| 3 |
+
size 97354
|
3daclearningattributecompressionforpointclouds/a469c9ab-1f24-40a4-b64f-3f15ba1a545d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0fd9ad27ae00fec565d6747e965fcb87ad1b68422f870a0044c694b43477d188
|
| 3 |
+
size 3376913
|
3daclearningattributecompressionforpointclouds/full.md
ADDED
|
@@ -0,0 +1,333 @@
| 1 |
+
# 3DAC: Learning Attribute Compression for Point Clouds
|
| 2 |
+
|
| 3 |
+
Guangchi Fang $^{1,2}$ , Qingyong Hu $^{3}$ , Hanyun Wang $^{4}$ , Yiling Xu $^{5}$ , Yulan Guo $^{1,2,6*}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Sun Yat-sen University, <sup>2</sup>The Shenzhen Campus of Sun Yat-sen University, <sup>3</sup>University of Oxford <sup>4</sup>Information Engineering University, <sup>5</sup>Shanghai Jiaotong University <sup>6</sup>National University of Defense Technology
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
We study the problem of attribute compression for large-scale unstructured 3D point clouds. Through an in-depth exploration of the relationships between different encoding steps and different attribute channels, we introduce a deep compression network, termed 3DAC, to explicitly compress the attributes of 3D point clouds and reduce storage usage in this paper. Specifically, the point cloud attributes such as color and reflectance are firstly converted to transform coefficients. We then propose a deep entropy model to model the probabilities of these coefficients by considering information hidden in attribute transforms and previous encoded attributes. Finally, the estimated probabilities are used to further compress these transform coefficients to a final attributes bitstream. Extensive experiments conducted on both indoor and outdoor large-scale open point cloud datasets, including ScanNet and SemanticKITTI, demonstrated the superior compression rates and reconstruction quality of the proposed method.
|
| 10 |
+
|
| 11 |
+
# 1. Introduction
|
| 12 |
+
|
| 13 |
+
As a common 3D data representation, point clouds have been widely used in a variety of real applications such as mixed reality [28], self-driving vehicles [16, 21], and high-resolution mapping [20, 37]. Thanks to the remarkable progress achieved in 3D acquisition, point clouds become increasingly accessible. However, the storage and transmission of massive irregularly sampled points pose a new challenge to existing compression techniques. In particular, along with the raw 3D coordinates of points, the compression of their attributes (e.g., color, reflectance) is also non-trivial<sup>1</sup>. In this regard, we will study effective attribute compression for unstructured point clouds in this paper.
|
| 14 |
+
|
| 15 |
+
To achieve point cloud attribute compression, early
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Raw Point Cloud (Ground Truth) BPP:24
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
3D Auto-Encoder BPP:12.34 $\mathrm{PSNR}_y$ :25.57
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
G-PCC BPP:11.91 $\mathrm{PSNR}_y$:51.77
|
| 25 |
+
Figure 1. Qualitative point cloud attribute compression results of 3D auto-encoder, G-PCC, and our method on the ScanNet [10]. Bits Per Point (BPP) and Peak Signal-to-Noise Ratio (PSNR) of the luminance component are reported. Note that, the raw point clouds usually use uint8 RGB values (i.e., $8 \times 3 = 24$ BPP).
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Ours BPP:8.62 $\mathrm{PSNR}_y$ :52.98
|
| 29 |
+
|
| 30 |
+
works [11,29,30,41,42,45,48,57] usually apply image processing techniques to 3D point clouds. In particular, early methods usually follow a two-step framework, i.e., initial coding of attributes and entropy coding of transform coefficients. A number of approaches focus on developing sophisticated initial coding algorithms, which convert point cloud attributes to coefficients in a specific domain. However, entropy coding, which further losslessly encodes coefficients to a final bitstream, has been largely overlooked. Only a few entropy coders [18,26,39] have been proposed for attribute compression in recent works. In general, entropy coding is included in the aforementioned attribute compression framework independently of initial coding. More specifically, these traditional hand-crafted entropy coders take coefficients as a sequence of symbols and estimate the probability distribution $^{2}$ only considering
|
| 31 |
+
|
| 32 |
+
the previous input symbols as context information. Most prior entropy coders do not incorporate the geometry information of point clouds or the context information of initial coding. Moreover, these approaches encode attributes separately for each channel and do not make full use of the inter-channel correlations between different attributes.
|
| 33 |
+
|
| 34 |
+
In this paper, we propose a learning-based compression framework, termed 3DAC, for point cloud attributes. Specifically, the proposed 3DAC adopts Region Adaptive Hierarchical Transform (RAHT) [11] for initial coding. Then, we propose an attribute-oriented deep entropy model to estimate the probability distribution of transform coefficients. In particular, we model the probabilities of these coefficients by exploring context information from the initial coding stage and inter-channel correlations between different attributes. As shown in Fig. 1, our method achieves higher reconstruction quality with a lower bitrate, indicating excellent attribute compression performance. The main contributions of this paper are as follows:
|
| 35 |
+
|
| 36 |
+
- We introduce a learning-based, effective framework for attribute compression of 3D point clouds, with competitive compression performance.
|
| 37 |
+
- We propose an attribute-oriented deep entropy model to connect the initial coding and entropy coding steps in attribute compression, and explore the inter-channel correlations between different attributes.
|
| 38 |
+
- We demonstrate the state-of-the-art compression performance of the proposed method on both indoor and outdoor point cloud datasets.
|
| 39 |
+
|
| 40 |
+
# 2. Related Work
|
| 41 |
+
|
| 42 |
+
# 2.1. Point Cloud Geometry Compression
|
| 43 |
+
|
| 44 |
+
Point cloud geometry compression aims at compressing the 3D coordinates of points. In light of the unstructured nature of point clouds, existing methods [27, 39] usually leverage an advanced data structure such as an octree to organize the raw point clouds. For example, G-PCC [39] includes an octree-based method for geometry compression. Later, in several learning-based approaches [6, 22, 35], the octree structure was also used for initial coding and octree-structured entropy models were proposed for probability estimation. These octree-based approaches are mainly tailored for geometry compression. In addition, deep image compression [3] has been extended to the 3D domain [15,23,32,34,50,53] with 3D auto-encoders [7,8,31]. However, due to the high complexity of point cloud attributes, neural networks tend to ignore high-frequency components (i.e., attribute details). Therefore, it is infeasible to extend this framework to attribute compression.
|
| 45 |
+
|
| 46 |
+
# 2.2. Point Cloud Attribute Compression
|
| 47 |
+
|
| 48 |
+
The objective of attribute compression is to compress the attributes (e.g., color, reflectance) of points. The general
|
| 49 |
+
|
| 50 |
+
idea is to apply traditional image compression techniques to 3D point clouds, which usually includes initial coding of attributes and entropy coding of transform coefficients.
|
| 51 |
+
|
| 52 |
+
Initial coding aims at capturing the signal redundancy in the transform domain. For example, Zhang et al. [57] first split a point cloud into several blocks through an octree and used graph transform to convert attributes to eigenvectors. Shao et al. [41,42] organized a point cloud with a KD-Tree and adopted graph transform for attribute compression. Due to the repeated use of eigendecomposition, graph transform-based methods are usually time-consuming. To handle this problem, Queiroz et al. [11] proposed a variation of the Haar wavelet transform, namely, region adaptive hierarchical transform (RAHT), for real-time point cloud transmission. In several follow-up works, RAHT has been extended with intra-frame and inter-frame prediction [30,45], graph transform [29], and fixed-point operations [38].
|
| 53 |
+
|
| 54 |
+
Entropy coding aims to encode symbols into a bitstream with the estimated distribution through an entropy coder, such as Huffman [24], arithmetic [52], and Golomb-Rice coders [36, 51]. Zhang et al. [57] and Queiroz et al. [12] assumed that transform coefficients follow a certain Laplacian distribution and then encoded them with arithmetic coding. However, the estimated distribution usually approximates the real distribution poorly due to this naive assumption. In [11, 30, 45], an adaptive run-length Golomb-Rice (RLGR) coder [26] was adopted for entropy coding. Two variations of run-length coding were also adopted by [18, 39]. All these entropy coders take input symbols as sequential data and only adjust their encoding parameters depending on the previous input symbols.
|
| 55 |
+
|
| 56 |
+
A handful of recent works [6, 33, 43] started to explore deep learning techniques for attribute compression. Quach et al. [33] used FoldingNet [56] to reorganize a point cloud as an image and then compressed the image with a conventional 2D image codec. MuSCLE [6] was proposed for dynamic LiDAR intensity compression; it exploits the LiDAR geometry and intensity of the previous frame in its deep entropy model. The compression performance can further be improved by exploiting context information from other initial coding methods and point cloud data structures.
|
| 57 |
+
|
| 58 |
+
# 2.3. Deep Image Compression
|
| 59 |
+
|
| 60 |
+
Traditional image compression methods [5,17,44,46,49] typically contain three steps: transformation, quantization, and entropy coding. In particular, the source image is first transformed from signal domain to frequency domain to capture spatial redundancy with a few transform coefficients. These coefficients are then quantized with a handcrafted quantization table and entropy coded for further compression.
|
| 61 |
+
|
| 62 |
+
In contrast to traditional image compression approaches, deep learning based methods jointly optimize the three
|
| 63 |
+
|
| 64 |
+

|
| 65 |
+
Figure 2. The network architecture of our point cloud attribute compression method.
|
| 66 |
+
|
| 67 |
+
steps. Most recent methods [1-3, 13, 14, 19, 54] follow an auto-encoder pipeline. These methods formulate a nonlinear function with an encoder to map the source image to a more compressible latent space, and recover the image from the latent codes with a decoder. Then, they also model the entropy of the latent codes with neural networks. For example, Ballé et al. [2] used 2D convolutional neural networks to model both the non-linear transform and the entropy model for image compression.
|
| 68 |
+
|
| 69 |
+
# 3. Methodology
|
| 70 |
+
|
| 71 |
+
# 3.1. Overview
|
| 72 |
+
|
| 73 |
+
Given a 3D point cloud, its geometry is assumed to have been transmitted separately and this paper mainly focuses on the task of point cloud attribute compression. We propose a learning-based attribute compression framework, termed 3DAC, to reduce the storage while ensuring reconstruction quality. Without loss of generality, our framework takes a colored point cloud as its input, as shown in Fig. 2. Note that, other attributes, such as opacity and reflectance, can also be compressed with our framework.
|
| 74 |
+
|
| 75 |
+
As depicted in Fig. 2, we first adopt an efficient initial coding method (i.e., RAHT) to decompose point cloud attributes into low- and high-frequency coefficients. Then, we model this process with a tree structure (i.e., RAHT tree). The resulting high-frequency coefficients are quantized and formulated as a sequence of symbols via the RAHT tree. Next, we propose an attribute-oriented entropy model to exploit the initial coding context and the inter-channel correlation. Consequently, the probability distribution of each symbol can be well modelled. In the entropy coding stage, the symbol stream and the predicted probabilities are further
|
| 76 |
+
|
| 77 |
+
passed into an arithmetic coder to produce the final attributes bitstream.
|
| 78 |
+
|
| 79 |
+
# 3.2. Initial Coding
|
| 80 |
+
|
| 81 |
+
The initial coding methods are capable of capturing spatial redundancy of point cloud attributes by converting attributes to transform coefficients. Here, we adopt Region Adaptive Hierarchical Transform (RAHT) [11] for initial coding due to its simplicity and flexibility. Note that, it is possible to integrate an improved version of RAHT into our framework for better performance.
|
| 82 |
+
|
| 83 |
+
# 3.2.1 Region Adaptive Hierarchical Transform
|
| 84 |
+
|
| 85 |
+
RAHT is a variation of Haar wavelet transform, which converts point cloud attributes to frequency-domain coefficients. Specifically, the raw point cloud is first voxelized, and the attributes are then decomposed to low- and high-frequency coefficients by combining points from individual voxels to the entire 3D space. Here, we briefly revisit RAHT [11] through a 2D toy example.
|
| 86 |
+
|
| 87 |
+
Figure 3(a) shows a 2D example of RAHT. Only two dimensions (i.e., $x$ and $y$) are considered in this case. In Fig. 3(a), $l_{i}$ and $h_i$ denote low- and high-frequency coefficients, respectively. In the first block of Fig. 3(a), points are represented as occupied voxels, and the attributes of the corresponding points are denoted as low-frequency coefficients $l_{1}, l_{2}$ and $l_{3}$. In the encoding stage, we apply the transform along the $x$ axis and then the $y$ axis in turn until all voxels are merged into the root space. At the first depth level, $l_{2}$ and $l_{3}$ are transformed to $l_{4}$ and $h_1$, while $l_{1}$ is passed directly to the next depth level due to the lack of a neighbor along the $x$ axis. At the second depth level, $l_{1}$ and $l_{4}$ are transformed
|
| 88 |
+
|
| 89 |
+

|
| 90 |
+
Figure 3. An illustration of RAHT and RAHT tree. (a) A 2D example of RAHT, (b) the corresponding RAHT tree.
|
| 91 |
+
|
| 92 |
+

|
| 93 |
+
|
| 94 |
+
to $l_{5}$ and $h_{2}$ along the $y$ axis. During decoding, the DC coefficient $l_{5}$ and high-frequency coefficient $h_{2}$ are used to restore low-frequency coefficients $l_{1}$ and $l_{4}$ . Similarly, $h_{1}$ is used for $l_{2}$ and $l_{3}$ with the reconstructed $l_{4}$ . Thus, only the DC coefficient $l_{5}$ and all high-frequency coefficients $h_{1}$ and $h_{2}$ are required to be transmitted as symbols. That is, these coefficients have to be encoded for attribute compression.
|
| 95 |
+
|
| 96 |
+
For 3D point clouds, RAHT transforms attributes to coefficients along three dimensions repeatedly (e.g., along the $x$ axis first, then the $y$ axis and the $z$ axis) until all subspaces are merged to the entire 3D space. Two neighboring points are merged with the following transform:
|
| 97 |
+
|
| 98 |
+
$$
|
| 99 |
+
\left[ \begin{array}{l} l _ {d + 1, x, y, z} \\ h _ {d + 1, x, y, z} \end{array} \right] = \mathbf {T} _ {w _ {1}, w _ {2}} \left[ \begin{array}{l} l _ {d, 2 x, y, z} \\ l _ {d, 2 x + 1, y, z} \end{array} \right], \tag {1}
|
| 100 |
+
$$
|
| 101 |
+
|
| 102 |
+
where $l_{d,2x,y,z}$ and $l_{d,2x + 1,y,z}$ are low-frequency coefficients of two neighboring points along the $x$ dimension, and $l_{d + 1,x,y,z}$ and $h_{d + 1,x,y,z}$ are the decomposed low-frequency and high-frequency coefficients. Here, $\mathbf{T}_{w_1,w_2}$ is defined as
|
| 103 |
+
|
| 104 |
+
$$
|
| 105 |
+
\mathbf {T} _ {w _ {1}, w _ {2}} = \frac {1}{\sqrt {w _ {1} + w _ {2}}} \left[ \begin{array}{c c} \sqrt {w _ {1}} & \sqrt {w _ {2}} \\ - \sqrt {w _ {2}} & \sqrt {w _ {1}} \end{array} \right], \tag {2}
|
| 106 |
+
$$
|
| 107 |
+
|
| 108 |
+
where $w_{1}$ and $w_{2}$ are the weights (i.e., the number of leaf nodes) of $l_{d,2x,y,z}$ and $l_{d,2x+1,y,z}$ , respectively. Low-frequency coefficients are directly passed to the next level if the point does not have a neighbor.
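To make the transform concrete, the following minimal NumPy sketch applies the two-point merge of Eqs. (1)-(2) to a pair of neighboring nodes; the function name, the toy scalar values, and the choice to also return the merged weight are illustrative assumptions rather than part of the reference implementation.

```python
import numpy as np

def raht_merge(l1, l2, w1, w2):
    """Two-point RAHT merge (Eqs. (1)-(2)).

    l1, l2: low-frequency coefficients of two neighboring nodes
            (scalars or per-channel vectors, e.g. YUV).
    w1, w2: node weights, i.e. the number of leaf points below each node.
    Returns the parent low-frequency coefficient, the high-frequency
    coefficient to be transmitted, and the merged weight w1 + w2.
    """
    a, b = np.sqrt(w1), np.sqrt(w2)
    norm = np.sqrt(w1 + w2)
    low = (a * l1 + b * l2) / norm    # first row of T_{w1, w2}
    high = (-b * l1 + a * l2) / norm  # second row of T_{w1, w2}
    return low, high, w1 + w2

# Toy single-channel example: two occupied voxels with attributes 10 and 14.
low, high, w = raht_merge(10.0, 14.0, 1, 1)
# low ~= 16.97 (kept for the next level), high ~= 2.83 (transmitted)
```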
|
| 109 |
+
|
| 110 |
+
# 3.2.2 RAHT Tree
|
| 111 |
+
|
| 112 |
+
We model RAHT with a tree structure for context feature extraction in Sec. 3.3. In general, points are organized as a RAHT tree according to the hierarchical transform steps. Here, we show the constructed RAHT tree of the aforementioned 2D toy example in Fig. 3(b). Consistent with Sec. 3.2.1, tree nodes of $l_{2}$ and $l_{3}$ are merged into that of $l_{4}$ and $h_{1}$, while $l_{1}$ and $l_{4}$ are merged into $l_{5}$ and $h_{2}$. In this RAHT tree, leaf nodes represent original voxelized points and internal
|
| 113 |
+
|
| 114 |
+
nodes represent the corresponding subspace. In particular, the low-frequency coefficient of a RAHT tree node is passed to its parent node if this node has no neighbor along the transform direction. Otherwise, two nodes are merged to their parent. Meanwhile, low- and high-frequency coefficients are generated.
|
| 115 |
+
|
| 116 |
+
For a better illustration of probability and context modeling in Sec. 3.3, we denote nodes only containing low-frequency coefficients as low-frequency nodes (i.e., green nodes in Fig. 3(b)), and others containing both low- and high-frequency coefficients as high-frequency nodes (i.e., yellow nodes in Fig. 3(b)).
|
| 117 |
+
|
| 118 |
+
# 3.2.3 Serialization
|
| 119 |
+
|
| 120 |
+
We quantize all high-frequency coefficients and serialize these quantized coefficients into a symbol stream with a breadth-first traversal of the RAHT tree. Taking the aforementioned 2D toy case as an example, we serialize the coefficients of the root level as $\{h_2\}$ and those of the second level as $\{h_1\}$. The serialization of coefficients is lossless and the attribute distortion comes only from the quantization step. Note that, all context information adopted in Secs. 3.3.2 and 3.3.3 is accessible to our deep entropy model during entropy decoding given the point cloud geometry and the breadth-first traversal format. The DC coefficient is transmitted directly.
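As a rough sketch of this step (under the assumption of a simple node object with `high` and `children` fields and a uniform quantization step, both of which are illustrative), the breadth-first serialization could look as follows:

```python
from collections import deque

def serialize_high_freq(root, qstep=10.0):
    """Quantize high-frequency coefficients and serialize them
    breadth-first over the RAHT tree into a symbol stream.

    Each node is assumed to expose `high` (the high-frequency
    coefficient, or None for low-frequency nodes) and `children`.
    """
    symbols = []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node.high is not None:                          # high-frequency node
            symbols.append(int(round(node.high / qstep)))  # uniform quantization
        queue.extend(node.children)
    return symbols  # consumed later by the arithmetic coder
```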
|
| 121 |
+
|
| 122 |
+
# 3.3. Our Deep Entropy Model
|
| 123 |
+
|
| 124 |
+
According to information theory [40], given the actual distribution of the transmitted high-frequency coefficients $\mathcal{R}$, a lower bound on the bitrate can be achieved by entropy coding. However, the actual distribution $p(\mathcal{R})$ is usually unavailable in practice. To deal with this problem, we propose a deep entropy model to approximate the unknown distribution $p(\mathcal{R})$ with the estimated distribution $q(\mathcal{R})$. In general, we first formulate $q(\mathcal{R})$ with initial coding context and inter-channel correlation, and then utilize a deep neural network (which consists of two modules) to make $q(\mathcal{R})$ approximate $p(\mathcal{R})$ by minimizing the cross-entropy loss.
|
| 125 |
+
|
| 126 |
+
# 3.3.1 Formulation
|
| 127 |
+
|
| 128 |
+
Given a 3D point cloud and its associated RAHT tree with $m$ high-frequency nodes and $n$ attribute channels, the transmitted high-frequency coefficients are denoted as $\mathcal{R} = \{\mathcal{R}^{(1)},\dots ,\mathcal{R}^{(n)}\}$ and $\mathcal{R}^{(i)} = \left\{r_1^{(i)},\ldots ,r_m^{(i)}\right\}$ . Considering the inter-channel correlation, we first factorize $q(\mathcal{R})$ into a product of conditional probabilities:
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
q (\mathcal {R}) = \prod_ {i} q \left(\mathcal {R} ^ {(i)} \mid \mathcal {R} ^ {(i - 1)}, \dots , \mathcal {R} ^ {(1)}\right). \tag {3}
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
Here, the probability distribution of $\mathcal{R}^{(i)}$ is assumed to depend on the previously encoded sequences $\{\mathcal{R}^{(1)},\ldots ,\mathcal{R}^{(i - 1)}\}$.
|
| 135 |
+
|
| 136 |
+
It is noticed that the probability distribution $q(\mathcal{R})$ also depends on the context information from the initial coding stage, hence we further factorize $q(\mathcal{R}^{(i)})$ with initial coding context $\mathbf{I}_j$ of the coefficient $r_j^{(i)}$ :
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
q \left(\mathcal {R} ^ {(i)}\right) = \prod_ {j} q \left(r _ {j} ^ {(i)} \mid \mathbf {I} _ {j}, \mathcal {R} ^ {(i - 1)}, \dots , \mathcal {R} ^ {(1)}\right). \tag {4}
|
| 140 |
+
$$
|
| 141 |
+
|
| 142 |
+
Then, we model the estimated probability distribution $q\left(\cdot \mid \mathbf{I}_j,\mathcal{R}^{(i - 1)},\ldots ,\mathcal{R}^{(1)}\right)$ through a probability density model [3] with two proposed context modules, including the initial coding context module (which encodes the information from attribute transform) and the inter-channel correlation module (which explores the dependence on previously encoded attributes).
|
| 143 |
+
|
| 144 |
+
# 3.3.2 Initial Coding Context Module
|
| 145 |
+
|
| 146 |
+
As mentioned in Section 3.2, we adopt RAHT for initial coding and represent the process of RAHT with a tree structure. Here, we exploit the context information hidden in the initial coding stage by extracting context features from both low- and high-frequency tree nodes, and propose our initial coding context module.
|
| 147 |
+
|
| 148 |
+
Context from High-frequency Nodes. We first process the information of high-frequency nodes. As mentioned in Secs. 3.2.2 and 3.2.3, the transmitted symbol stream is composed of high-frequency coefficients, and each coefficient has a corresponding high-frequency node. In light of the strong relationships between the high-frequency coefficients and nodes, our initial coding context module follows [22] to extract a latent embedding $\mathbf{h}_j$ for each high-frequency node. In particular, we feed the context information obtained in the initial coding stage to a Multi-Layer Perceptron (MLP) to obtain $\mathbf{h}_j$. For a given high-frequency node, the context information contains the depth level, the weight (i.e., the number of child nodes), the low-frequency coefficient, and the attributes. Note that, all information is available during decoding.
|
| 149 |
+
|
| 150 |
+
Context from Low-frequency Nodes. We further extract the information of low-frequency nodes, and fuse it with that of high-frequency ones into the initial coding context feature. Although low-frequency nodes do not interact directly with the transmitted symbols, there remains a massive quantity of information hidden in these nodes. Thus, we extract the context information that exists in these nodes. Here, we use SparseConv [9, 47] to process RAHT tree nodes for efficiency. In particular, we perform 3D sparse convolutions at each depth level, and take both locations of low- and high-frequency nodes (i.e., center of corresponding subspace) as input points and their context information as input features. With the progressive sparse convolution and downsampling, the context information of low-frequency nodes is fused with that of high-frequency nodes
|
| 151 |
+
|
| 152 |
+
and diffused into multi-scale feature volumes. For each high-frequency node, we interpolate the latent features from multi-scale feature spaces at its 3D location, and then concatenate them into a final embedding feature $\mathbf{l}_j$ .
|
| 153 |
+
|
| 154 |
+
We concatenate the latent embeddings $\mathbf{h}_j$ and $\mathbf{l}_j$, and feed them into an MLP to obtain the initial coding context $\mathbf{I}_j$ for each transmitted coefficient:
|
| 155 |
+
|
| 156 |
+
$$
|
| 157 |
+
\mathbf {I} _ {j} = \operatorname {M L P} \left(\left[ \mathbf {h} _ {j}, \mathbf {l} _ {j} \right]\right). \tag {5}
|
| 158 |
+
$$
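A minimal PyTorch-style sketch of this fusion step is given below; the feature dimensions, layer sizes, and the four-dimensional per-node context vector are assumptions for illustration, and the sparse-convolution branch producing $\mathbf{l}_j$ is treated as given.

```python
import torch
import torch.nn as nn

class InitialCodingContext(nn.Module):
    """Fuse the node embedding h_j and the sparse-conv embedding l_j
    into the initial coding context I_j (Eq. (5))."""

    def __init__(self, node_feat_dim=4, embed_dim=64):
        super().__init__()
        # h_j: embedding of the high-frequency node's own context
        # (depth level, weight, low-frequency coefficient, attributes).
        self.node_mlp = nn.Sequential(
            nn.Linear(node_feat_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))
        # Fusion MLP over [h_j, l_j]; l_j is interpolated from the
        # multi-scale sparse-convolution features at the node location.
        self.fuse_mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, node_context, l_j):
        h_j = self.node_mlp(node_context)                    # (N, embed_dim)
        return self.fuse_mlp(torch.cat([h_j, l_j], dim=-1))  # I_j
```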
|
| 159 |
+
|
| 160 |
+
# 3.3.3 Inter-Channel Correlation Module
|
| 161 |
+
|
| 162 |
+
In most cases, a point cloud contains multiple attribute channels (e.g., color, normal) and there is significant information redundancy between different attributes. Thus, we propose an inter-channel correlation module to exploit the inter-channel correlation.
|
| 163 |
+
|
| 164 |
+
Inter-Channel Coefficient Correlation. For an uncompressed coefficient $r_j^{(i)}$, we first incorporate the previously encoded coefficients $\{r_j^{(1)},\ldots ,r_j^{(i - 1)}\}$ to explore the inter-channel coefficient correlation. Specifically, we use the previously encoded coefficients as prior knowledge and feed them into an MLP to extract the latent feature $\mathbf{c}_j^{(i)}$:
|
| 165 |
+
|
| 166 |
+
$$
|
| 167 |
+
\mathbf {c} _ {j} ^ {(i)} = \operatorname {M L P} ([ r _ {j} ^ {(1)}, \dots , r _ {j} ^ {(i - 1)} ]). \tag {6}
|
| 168 |
+
$$
|
| 169 |
+
|
| 170 |
+
Here, we simply concatenate all previously encoded coefficients and use them as the input of an MLP layer.
|
| 171 |
+
|
| 172 |
+
Inter-Channel Spatial Correlation. We further aggregate the encoded coefficients through point cloud geometry to benefit the probability estimation. A key observation is that, the attributes of other points are helpful in predicting those of a given point. Therefore, we utilize the spatial relationships via the RAHT tree by incorporating all high-frequency nodes through 3D sparse convolution. Note that, only high-frequency nodes are considered in this part because low-frequency nodes do not contain transmitted coefficients. More specifically, we perform 3D sparse convolutions at each depth level, and take locations of high-frequency nodes as input points and their encoded coefficients as input features. For each high-frequency node, we obtain the coefficient spatial embedding $\mathbf{s}_j^{(i)}$ by interpolating the latent feature in multi-scale feature space.
|
| 173 |
+
|
| 174 |
+
Similar to the aggregation of initial coding context features, we first concatenate $\mathbf{c}_j^{(i)}$ and $\mathbf{s}_j^{(i)}$, and then feed them into an MLP layer to obtain the prior channel embedding $\mathbf{C}_j^{(i)}$. For the first attribute channel, we set the embedding $\mathbf{C}_j^{(1)}$ to an all-zero feature for model consistency. Thus, we can finally model $q\left(r_j^{(i)} \mid \mathbf{I}_j, \mathcal{R}^{(i-1)}, \ldots, \mathcal{R}^{(1)}\right)$ as $q(r_j^{(i)} \mid \mathbf{I}_j, \mathbf{C}_j^{(i)})$.
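The sketch below illustrates this aggregation for one attribute channel. It assumes the spatial embedding $\mathbf{s}_j^{(i)}$ is given by the sparse-convolution branch and uses illustrative layer sizes; as stated above, the first channel simply uses an all-zero $\mathbf{C}_j^{(1)}$ instead of running this module.

```python
import torch
import torch.nn as nn

class InterChannelContext(nn.Module):
    """Build the prior channel embedding C_j^(i) from the previously
    encoded coefficients of the same node (Eq. (6)) and the spatial
    embedding s_j^(i)."""

    def __init__(self, num_prev_channels, embed_dim=64):
        super().__init__()
        self.coeff_mlp = nn.Sequential(               # c_j^(i), Eq. (6)
            nn.Linear(num_prev_channels, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))
        self.fuse_mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, prev_coeffs, s_j):
        # prev_coeffs: (N, i-1) coefficients of channels 1..i-1 per node.
        c_j = self.coeff_mlp(prev_coeffs)
        return self.fuse_mlp(torch.cat([c_j, s_j], dim=-1))  # C_j^(i)
```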
|
| 175 |
+
|
| 176 |
+

|
| 177 |
+
Figure 4. Quantitative results of different attribute compression approaches on the ScanNet (a) and SemanticKITTI (b) datasets.
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
|
| 181 |
+
# 3.3.4 Probability Estimation
|
| 182 |
+
|
| 183 |
+
We aggregate the two context features $\mathbf{I}_j$ and $\mathbf{C}_j^{(i)}$ through an MLP layer to obtain the final latent embedding. This embedding is further used to generate learnable parameters for a fully factorized density model [3], which is able to model the probability distribution $q\left(r_j^{(i)} \mid \mathbf{I}_j, \mathcal{R}^{(i-1)}, \ldots, \mathcal{R}^{(1)}\right)$.
|
| 184 |
+
|
| 185 |
+
# 3.4. Entropy Coding
|
| 186 |
+
|
| 187 |
+
For entropy coding, we adopt an arithmetic coder to obtain the final attributes bitstream. In the previous steps, we have already converted attributes to transform coefficients in Sec. 3.2.1, serialized these coefficients to a symbol stream in Sec. 3.2.3, and obtained the probabilities of symbols in Sec. 3.3.4. At the final entropy encoding stage, the transform coefficients and the estimated probabilities are passed to the arithmetic coder to generate a final attribute bitstream for further compression. During decoding, the arithmetic coder is capable of restoring the coefficients with the same probability produced by our deep entropy model.
|
| 188 |
+
|
| 189 |
+
# 3.5. Learning
|
| 190 |
+
|
| 191 |
+
During training, we adopt the cross-entropy loss for the deep entropy model:
|
| 192 |
+
|
| 193 |
+
$$
|
| 194 |
+
\ell = - \sum_ {i} \sum_ {j} \log q \left(r _ {j} ^ {(i)} \mid \mathbf {I} _ {j}, \mathbf {C} _ {j} ^ {(i)}\right). \tag {7}
|
| 195 |
+
$$
|
| 196 |
+
|
| 197 |
+
To reduce the bitrate of the final attributes bitstream, we make the estimated distribution $q(\mathcal{R})$ approach the actual distribution $p(\mathcal{R})$ by minimizing the cross-entropy loss.
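As a minimal sketch of this objective, the snippet below evaluates Eq. (7) in bits, using a discretized Gaussian as a stand-in for the learned fully factorized density model of [3]; the mean/scale parameterization is an assumption made only to keep the example self-contained.

```python
import torch

def rate_loss(symbols, mu, sigma, eps=1e-9):
    """Cross-entropy rate loss (Eq. (7)) in bits.

    symbols : (N,) quantized high-frequency coefficients r_j (as floats).
    mu, sigma : per-symbol parameters predicted from [I_j, C_j^(i)];
        a Gaussian is used here only as a stand-in for the learned
        fully factorized density model.
    """
    dist = torch.distributions.Normal(mu, sigma)
    # Probability mass of the integer bin [r - 0.5, r + 0.5].
    prob = dist.cdf(symbols + 0.5) - dist.cdf(symbols - 0.5)
    bits = -torch.log2(prob.clamp_min(eps))
    return bits.sum()  # minimizing this directly minimizes the bitrate
```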
|
| 198 |
+
|
| 199 |
+
# 4. Experiments
|
| 200 |
+
|
| 201 |
+
In this section, we first evaluate the attribute compression performance of our method on two point cloud datasets. Then, the effectiveness of our approach is validated on the downstream tasks. We finally conduct extensive ablative experiments to validate the contribution of each component.
|
| 202 |
+
|
| 203 |
+
# 4.1. Experimental Setup
|
| 204 |
+
|
| 205 |
+
(1) Datasets. We conduct experiments on the following two datasets:
|
| 206 |
+
|
| 207 |
+
- ScanNet [10]. This is a large-scale indoor point cloud dataset containing 1513 dense point clouds. Each scan contains over ten thousand colored points. Following the official training/testing split, we use 1,201 point clouds for training and 312 point clouds for testing.
|
| 208 |
+
- SemanticKITTI [4]. This is a large-scale outdoor LiDAR dataset with 22 sparse point cloud sequences. Point cloud reflectance captured by LiDAR sensors is used for attribute compression. Following the default setting in [4], we use 11 point cloud sequences for training and the other 11 sequences for testing.
|
| 209 |
+
|
| 210 |
+
(2) Evaluation Metrics. The peak signal-to-noise ratio (PSNR) is reported to evaluate the reconstruction quality. Following [11], we report the peak signal-to-noise ratio of the luminance component in ScanNet. Analogously, the peak signal-to-noise ratio of reflectance is reported on the SemanticKITTI dataset. Bits Per Point (BPP) is also adopted to evaluate the compression ratio.
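For reference, these two metrics can be computed as follows (assuming 8-bit attribute values, so the peak is 255; the helper names are ours):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """PSNR of a reconstructed attribute channel (e.g. luminance),
    assuming 8-bit values with peak 255."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def bits_per_point(bitstream_bytes, num_points):
    """Bits Per Point: attribute bitstream size divided by point count."""
    return 8.0 * bitstream_bytes / num_points
```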
|
| 211 |
+
(3) Baselines. We compare the proposed method with the following selected baselines:
|
| 212 |
+
|
| 213 |
+
- RAHT [11]. We use RAHT for initial coding and the run-length Golomb-Rice coder [26] for entropy coding.
|
| 214 |
+
- RAGFT [29]. This is an improved version of RAHT with graph transform. The run-length Golomb-Rice coder is also adopted for entropy coding.
|
| 215 |
+
- G-PCC [39]. This is a standard point cloud compression method (G-PCC) provided by MPEG<sup>3</sup>.
|
| 216 |
+
- Spconv AE. We adopt torchsparse [47] to construct MinkUNet [9] for attribute reconstruction and use a fully factorized density model [3] for entropy coding.
|
| 217 |
+
|
| 218 |
+

|
| 219 |
+
Figure 5. Qualitative results achieved by our method and other baselines including Spconv AE, RAHT [11], and G-PCC [39]. We visualize ScanNet scans at low and high bitrates, respectively. Our method achieves the best compression quality (PSNR) with the lowest bitrates.
|
| 220 |
+
|
| 221 |
+
- w/o Transform. This is motivated by MuSCLE [6]. The attributes are transmitted without any transform and entropy coded by a probability density model [3] with geometric information. In contrast to [6], we extract geometric context features through 3D sparse convolution from the current point cloud frame.
|
| 222 |
+
|
| 223 |
+
Note that, uniform quantization of transform coefficients is used in our method, RAHT and RAGFT, while adaptive quantization is used in G-PCC.
|
| 224 |
+
|
| 225 |
+
Implementation Details. In order to simulate the real conditions of point cloud compression, we voxelize the raw point cloud data with a 9-level and a 12-level octree for ScanNet and SemanticKITTI, respectively, and assume that the geometry of point clouds has been transmitted separately. For ScanNet, a conversion is performed from the RGB color space to the YUV color space following the default setting of G-PCC [39]. We adopt both the initial coding context module and the inter-channel correlation module. For SemanticKITTI, considering that only a single attribute channel (i.e., reflectance) is included, we only use the initial coding context module and simply disable the inter-channel correlation module.
|
| 226 |
+
|
| 227 |
+
# 4.2. Evaluation on Public Datasets
|
| 228 |
+
|
| 229 |
+
Evaluation on ScanNet. The quantitative attribute compression results of different approaches on ScanNet are shown in Fig. 4. It can be seen that transform-based methods (i.e., 3DAC, G-PCC, RAHT, and RAGFT) significantly outperform other methods (i.e., w/o Transform and Spconv
|
| 230 |
+
|
| 231 |
+
AE), demonstrating the effectiveness of the initial coding scheme. Additionally, the proposed method consistently achieves the best results compared with other baselines. Although the same initial coding is adopted for our method and RAHT, our method can achieve the same reconstruction quality with a much lower bitrate. This can be mainly attributed to the proposed entropy model and the learning framework. We also provide the qualitative comparison on ScanNet in Fig. 5. It can be seen that our approach achieves better reconstruction performance even compared with the standard point cloud compression algorithm, G-PCC.
|
| 232 |
+
|
| 233 |
+
Evaluation on SemanticKITTI. The quantitative result of the compression performance achieved by different methods on the SemanticKITTI dataset is shown in Fig. 4. It is clear that the proposed method consistently outperforms other methods by a large margin. This is primarily because the proposed module is able to implicitly learn the geometry information from sparse LiDAR point clouds through 3D convolutions. We also noticed that existing compression baselines (including RAHT, RAGFT, and G-PCC) achieve unsatisfactory performance on this dataset, since these methods are mainly developed for dense point clouds. In contrast, the proposed method is demonstrated to work well on both indoor dense point clouds and outdoor sparse LiDAR point clouds.
|
| 234 |
+
|
| 235 |
+
# 4.3. Evaluation on Downstream Tasks
|
| 236 |
+
|
| 237 |
+
We further evaluate the performance of attribute compression through two representative downstream tasks, including a 3D scene flow estimation task for machine perception
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
(a)
|
| 241 |
+
Figure 6. The quantitative results on two downstream tasks. (a): Scene flow estimation on FlyingThings3D. (b): Quality assessment on ScanNet.
|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
(b)
|
| 245 |
+
|
| 246 |
+
and a quality assessment task for human perception.
|
| 247 |
+
|
| 248 |
+
Scene Flow Estimation. We present the performance of scene flow estimation on FlyingThings3D with FlowNet3D [25]. In particular, we first train the network on the raw point clouds. Then, we evaluate the flow estimation performance on the compressed point clouds. The results of raw point clouds with and without attributes are also reported as 'w RGB' and 'w/o RGB', respectively. End-point-error (EPE) at different bits per point is adopted as the evaluation metric. As shown in Fig. 6, the proposed method achieves better performance than RAHT and G-PCC. The performance on this downstream task further demonstrates the superior compression quality of our method.
|
| 249 |
+
|
| 250 |
+
Quality Assessment. For human perception tasks, we adopt point cloud quality assessment here since the human visual system is more sensitive to attributes. In particular, we adopt the GraphSIM [55] metric to indicate the quality of the compressed point clouds, and higher GraphSIM score means lower attribute distortion. The scores at different bits per point are reported. As shown in Fig. 6, the proposed method achieves the highest GraphSIM score at the same bitrate, further demonstrating the effectiveness of our approach on downstream human vision-oriented tasks.
|
| 251 |
+
|
| 252 |
+
<table><tr><td colspan="2">Initial Coding</td><td colspan="2">Inter Channel</td><td rowspan="2">Bitrate</td></tr><tr><td>H</td><td>L</td><td>C</td><td>S</td></tr><tr><td></td><td></td><td></td><td></td><td>3.85</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>3.48</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>3.28</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>3.08</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>2.79</td></tr></table>
|
| 253 |
+
|
| 254 |
+
Table 1. Ablation study on the initial coding context and inter-channel correlation modules. H, L, C, and S denote the information from High-frequency nodes, information from Low-frequency nodes, inter-channel Coefficient dependence, and inter-channel Spatial dependence, respectively.
|
| 255 |
+
|
| 256 |
+
<table><tr><td>D</td><td>W</td><td>F</td><td>A</td><td>Bitrate</td></tr><tr><td></td><td></td><td></td><td></td><td>3.85</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>3.64</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>3.57</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>3.51</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>3.48</td></tr></table>
|
| 257 |
+
|
| 258 |
+
Table 2. Ablation study on information from high-frequency nodes. D, W, F, and A stand for the node's RAHT tree depth level, weight, low-frequency coefficients, and attributes, respectively.
|
| 259 |
+
|
| 260 |
+
# 4.4. Ablation Study
|
| 261 |
+
|
| 262 |
+
To further determine the contribution of each component in our framework, we conduct several groups of ablation experiments in this section. Note that, all experiments are conducted on ScanNet and all uniform quantization parameters are set to 10 for the same reconstruction quality. First, we evaluate the initial coding context and inter-channel correlation modules. As shown in Table 1, we start by using a non-parametric, fully factorized density model [3], and then add information from high- and low-frequency nodes (H and L, respectively), inter-channel coefficient dependence (C), and finally adopt inter-channel spatial dependence (S). It is clear that the progressive incorporation of the proposed components leads to a lower bitrate.
|
| 263 |
+
|
| 264 |
+
We then report the ablation studies over the context information of the RAHT-tree node in Table 2. By progressively incorporating the node's depth level (D), weight (W), low-frequency coefficients (F), and attributes (A) into the context information of the high-frequency nodes, we can see a steady reduction of the encoding entropy, verifying the effectiveness of the proposed components.
|
| 265 |
+
|
| 266 |
+
# 5. Conclusion
|
| 267 |
+
|
| 268 |
+
In this paper, we presented a point cloud attribute compression algorithm. Our method includes an attribute-oriented deep entropy model considering both attribute initial coding and inter-channel correlations to reduce the storage of attributes. We showed the compression performance of our method on both indoor and outdoor datasets, and the results demonstrated that our approach can substantially reduce the bitrate while ensuring reconstruction quality.
|
| 269 |
+
|
| 270 |
+
Acknowledgements. This work was partially supported by the National Natural Science Foundation of China (No. U20A20185, 61972435, 61971282), the Shenzhen Science and Technology Program (No. RCYX20200714114641140), and the Natural Science Foundation of Guangdong Province (2022B1515020103). Qingyong Hu was also supported by China Scholarship Council (CSC) scholarship and the Huawei AI UK Fellowship.
|
| 271 |
+
|
| 272 |
+
# References
|
| 273 |
+
|
| 274 |
+
[1] Yuanchao Bai, Xianming Liu, Wangmeng Zuo, Yaowei Wang, and Xiangyang Ji. Learning scalable $\ell_\infty$-constrained near-lossless image compression via joint lossy image and residual compression. In CVPR, pages 11946-11955, 2021. 3
|
| 275 |
+
[2] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. In ICLR, 2017. 3
|
| 276 |
+
[3] Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. In ICLR, 2018. 2, 3, 5, 6, 7, 8
|
| 277 |
+
[4] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyril Stachniss, and Jurgen Gall. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In ICCV, pages 9297-9307, 2019. 6
|
| 278 |
+
[5] Fabrice Bellard. BPG image format. https://bellard.org/bpg, 2015. 2
|
| 279 |
+
[6] Sourav Biswas, Jerry Liu, Kelvin Wong, Shenlong Wang, and Raquel Urtasun. MuSCLE: Multi sweep compression of LiDAR using deep entropy models. NeurIPS, 33, 2020. 2, 7
|
| 280 |
+
[7] Andrew Brock, Theodore Lim, James Millar Ritchie, and Nicholas J Weston. Generative and discriminative voxel modeling with convolutional neural networks. In Neural Information Processing Conference: 3D Deep Learning, 2016. 2
|
| 281 |
+
[8] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In CVPR, pages 3070-3079. IEEE, 2019. 2
|
| 282 |
+
[9] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4D spatio-temporal convnets: Minkowski convolutional neural networks. In CVPR, pages 3075-3084, 2019. 5, 6
|
| 283 |
+
[10] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, pages 5828-5839, 2017. 1, 6
|
| 284 |
+
[11] Ricardo L De Queiroz and Philip A Chou. Compression of 3d point clouds using a region-adaptive hierarchical transform. IEEE TIP, 25(8):3947-3956, 2016. 1, 2, 3, 6, 7
|
| 285 |
+
[12] Ricardo L de Queiroz and Philip A Chou. Transform coding for point clouds using a gaussian process model. IEEE TIP, 26(7):3507-3517, 2017. 2
|
| 286 |
+
[13] Xin Deng, Wenzhe Yang, Ren Yang, Mai Xu, Enpeng Liu, Qianhan Feng, and Radu Timofte. Deep homography for efficient stereo image compression. In CVPR, pages 1492-1501, 2021. 3
|
| 287 |
+
[14] Ge Gao, Pei You, Rong Pan, Shunyuan Han, Yuanyuan Zhang, Yuchao Dai, and Hojae Lee. Neural image compression via attentional multi-scale back projection and frequency decomposition. In ICCV, pages 14677-14686, 2021. 3
|
| 288 |
+
[15] Linyao Gao, Tingyu Fan, Jianqiang Wan, Yiling Xu, Jun Sun, and Zhan Ma. Point cloud geometry compression via neural graph sampling. In ICIP, pages 3373-3377. IEEE, 2021. 2
|
| 289 |
+
[16] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In CVPR, pages 3354-3361, 2012. 1
|
| 290 |
+
|
| 291 |
+
[17] Vivek K Goyal. Theoretical foundations of transform coding. IEEE Signal Processing Magazine, 18(5):9-21, 2001. 2
|
| 292 |
+
[18] Shuai Gu, Junhui Hou, Huanqiang Zeng, Hui Yuan, and Kai-Kuang Ma. 3d point cloud attribute compression using geometry-guided sparse representation. IEEE TIP, 29:796-808, 2019. 1, 2
|
| 293 |
+
[19] Dailan He, Yaoyan Zheng, Baocheng Sun, Yan Wang, and Hongwei Qin. Checkerboard context model for efficient learned image compression. In CVPR, pages 14771-14780, 2021. 3
|
| 294 |
+
[20] Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, and Andrew Markham. Sensaturban: Learning semantics from urban-scale photogrammetric point clouds. International Journal of Computer Vision, pages 1-28, 2022. 1
|
| 295 |
+
[21] Qingyong Hu, Bo Yang, Linhai Xie, Stefano Rosa, Yulan Guo, Zhihua Wang, Niki Trigoni, and Andrew Markham. Learning semantic segmentation of large-scale point clouds with random sampling. IEEE TPAMI, 2021. 1
|
| 296 |
+
[22] Lila Huang, Shenlong Wang, Kelvin Wong, Jerry Liu, and Raquel Urtasun. OctSqueeze: Octree-structured entropy model for LiDAR compression. In CVPR, pages 1313-1323, 2020. 2, 5
|
| 297 |
+
[23] Tianxin Huang and Yong Liu. 3d point cloud geometry compression on deep learning. In ACM MM, pages 890-898, 2019. 2
|
| 298 |
+
[24] David A Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40(9):1098-1101, 1952. 2
|
| 299 |
+
[25] Xingyu Liu, Charles R Qi, and Leonidas J Guibas. Flownet3d: Learning scene flow in 3d point clouds. In CVPR, pages 529-537, 2019. 8
|
| 300 |
+
[26] Henrique S Malvar. Adaptive run-length/golomb-rice encoding of quantized generalized gaussian sources with unknown statistics. In Data Compression Conference (DCC'06), pages 23-32. IEEE, 2006. 1, 2, 6
|
| 301 |
+
[27] Donald Meagher. Geometric modeling using octree encoding. Computer graphics and image processing, 19(2):129-147, 1982. 2
|
| 302 |
+
[28] Zhigeng Pan, Adrian David Cheok, Hongwei Yang, Jiejie Zhu, and Jiaoying Shi. Virtual reality and mixed reality for virtual learning environments. Computers & graphics, 30(1):20-28, 2006. 1
|
| 303 |
+
[29] Eduardo Pavez, Benjamin Girault, Antonio Ortega, and Philip A Chou. Region adaptive graph fourier transform for 3d point clouds. In ICIP, pages 2726-2730. IEEE, 2020. 1, 2, 6
|
| 304 |
+
[30] Eduardo Pavez, Andre L Souto, Ricardo L De Queiroz, and Antonio Ortega. Multi-resolution intra-predictive coding of 3d point cloud attributes. arXiv preprint arXiv:2106.08562, 2021. 1, 2
|
| 305 |
+
[31] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, pages 652-660, 2017. 2
|
| 306 |
+
[32] Maurice Quach, Giuseppe Valenzise, and Frederic Dufaux. Learning convolutional transforms for lossy point cloud geometry compression. In ICIP, pages 4320-4324. IEEE, 2019. 2
|
| 307 |
+
|
| 308 |
+
[33] Maurice Quach, Giuseppe Valenzise, and Frederic Dufaux. Folding-based compression of point cloud attributes. In ICIP, pages 3309-3313. IEEE, 2020. 2
|
| 309 |
+
[34] Maurice Quach, Giuseppe Valenzise, and Frederic Dufaux. Improved deep point cloud geometry compression. In 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), pages 1-6. IEEE, 2020. 2
|
| 310 |
+
[35] Zizheng Que, Guo Lu, and Dong Xu. Voxelcontext-net: An octree based framework for point cloud compression. In CVPR, pages 6042-6051, 2021. 2
|
| 311 |
+
[36] Iain E Richardson. H. 264 and MPEG-4 video compression: video coding for next-generation multimedia. John Wiley & Sons, 2004. 2
|
| 312 |
+
[37] Radu Bogdan Rusu, Zoltan Csaba Marton, Nico Blodow, Mihai Dolha, and Michael Beetz. Towards 3d point cloud based object maps for household environments. Robotics and Autonomous Systems, 2008. 1
|
| 313 |
+
[38] Gustavo P Sandri, Philip A Chou, Maja Krivokuca, and Ricardo L de Queiroz. Integer alternative for the region-adaptive hierarchical transform. IEEE Sign. Process. Letters, 26(9):1369-1372, 2019. 2
|
| 314 |
+
[39] Sebastian Schwarz, Marius Preda, Vittorio Baroncini, Madhukar Budagavi, Pablo Cesar, Philip A Chou, Robert A Cohen, Maja Krivokuca, SΓ©bastien Lasserre, Zhu Li, et al. Emerging mpeg standards for point cloud compression. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 9(1):133-148, 2018. 1, 2, 6, 7
|
| 315 |
+
[40] Claude Elwood Shannon. A mathematical theory of communication. The Bell system technical journal, 27(3):379-423, 1948. 1, 4
|
| 316 |
+
[41] Yiting Shao, Qi Zhang, Ge Li, Zhu Li, and Li Li. Hybrid point cloud attribute compression using slice-based layered structure and block-based intra prediction. In ACM MM, pages 1199-1207, 2018. 1, 2
|
| 317 |
+
[42] Yiting Shao, Zhaobin Zhang, Zhu Li, Kui Fan, and Ge Li. Attribute compression of 3d point clouds using laplacian sparsity optimized graph transform. In VCIP, pages 1-4. IEEE, 2017. 1, 2
|
| 318 |
+
[43] Xihua Sheng, Li Li, Dong Liu, Zhiwei Xiong, Zhu Li, and Feng Wu. Deep-pcac: An end-to-end deep lossy compression framework for point cloud attributes. IEEE TMM, 2021. 2
|
| 319 |
+
[44] Athanassios Skodras, Charilaos Christopoulos, and Touradj Ebrahimi. The jpeg 2000 still image compression standard. IEEE Signal processing magazine, 18(5):36-58, 2001. 2
|
| 320 |
+
[45] AndrΓ© L Souto and Ricardo L de Queiroz. On predictive raht for dynamic point cloud coding. In ICIP, pages 2701-2705. IEEE, 2020. 1, 2
|
| 321 |
+
[46] Gary J Sullivan, Jens-Rainer Ohm, Woo-Jin Han, and Thomas Wiegand. Overview of the high efficiency video coding (hevc) standard. IEEE TCSVT, 22(12):1649-1668, 2012. 2
|
| 322 |
+
[47] Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, and Song Han. Searching efficient 3d architectures with sparse point-voxel convolution. In ECCV, pages 685β702. Springer, 2020. 5, 6
|
| 323 |
+
|
| 324 |
+
[48] Dorina Thanou, Philip A Chou, and Pascal Frossard. Graph-based motion estimation and compensation for dynamic 3d point cloud compression. In ICIP, pages 3235-3239. IEEE, 2015. 1
|
| 325 |
+
[49] Gregory K Wallace. TheJPEG still picture compression standard.IEEE transactions on consumer electronics, 38(1):xviii-xxxiv,1992.2
|
| 326 |
+
[50] Jianqiang Wang, Dandan Ding, Zhu Li, and Zhan Ma. Multiscale point cloud geometry compression. In 2021 Data Compression Conference (DCC), pages 73-82. IEEE, 2021. 2
|
| 327 |
+
[51] Marcelo J Weinberger, Gadiel Seroussi, and Guillermo Sapiro. The loco-i lossless image compression algorithm: Principles and standardization into JPEG-Is. IEEE TIP, 9(8):1309-1324, 2000. 2
|
| 328 |
+
[52] Ian H Witten, Radford M Neal, and John G Cleary. Arithmetic coding for data compression. Communications of the ACM, 30(6):520-540, 1987. 2
|
| 329 |
+
[53] Wei Yan, Shan Liu, Thomas H Li, Zhu Li, Ge Li, et al. Deep autoencoder-based lossy geometry compression for point clouds. arXiv preprint arXiv:1905.03691, 2019. 2
|
| 330 |
+
[54] Fei Yang, Luis Herranz, Yongmei Cheng, and Mikhail G Mozerov. Slimmable compressive autoencoders for practical neural image compression. In CVPR, pages 4998-5007, 2021. 3
|
| 331 |
+
[55] Qi Yang, Zhan Ma, Yiling Xu, Zhu Li, and Jun Sun. Inferring point cloud quality via graph similarity. IEEE TPAMI, 2020. 8
|
| 332 |
+
[56] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In CVPR, pages 206β215, 2018. 2
|
| 333 |
+
[57] Cha Zhang, Dinei Florencio, and Charles Loop. Point cloud attribute compression with graph transform. In ICIP, pages 2066-2070. IEEE, 2014. 1, 2
|
3daclearningattributecompressionforpointclouds/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f154204a517c74e8c39c6c8f24c528b044d2c8c3a52e7ae84823b3dc369e77bc
size 496100
3daclearningattributecompressionforpointclouds/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6ce574f5d812e5a6de566ddf6ea648724d0cefbe5ae3243f5c370ff1d0e2b53
size 413961
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48646d95997c7ff5a63a2e4275ae6c8c98bd0cf5c90bf60550556fd078dbb455
size 72212
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e413a7ca6732d7c446e905eb4b6181908236bf82a98e0a1a522e0e7d47838e0
size 89487
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/53bff2b9-1255-4216-a377-5b793f0ea58e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2cca8b7c60a4efdade474244a9ff00dec824f048e5f8a0165b0ad7c351c9afdc
size 2173843
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/full.md
ADDED
@@ -0,0 +1,305 @@
# 3D-aware Image Synthesis via Learning Structural and Textural Representations
|
| 2 |
+
|
| 3 |
+
Yinghao Xu $^{1}$ Sida Peng $^{2}$ Ceyuan Yang $^{1}$ Yujun Shen $^{3}$ Bolei Zhou $^{1}$
|
| 4 |
+
$^{1}$ The Chinese University of Hong Kong $^{2}$ Zhejiang University $^{3}$ Bytedance Inc.
|
| 5 |
+
|
| 6 |
+
$\{xy119, yc019, bzhou\} @ie.cuhk.edu.hk pengsida@zju.edu.cn shenyujun0302@gmail.com$
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
Making generative models 3D-aware bridges the 2D image space and the 3D physical world yet remains challenging. Recent attempts equip a Generative Adversarial Network (GAN) with a Neural Radiance Field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior. However, the implicit function in NeRF has a very local receptive field, making the generator hard to become aware of the global structure. Meanwhile, NeRF is built on volume rendering which can be too costly to produce high-resolution results, increasing the optimization difficulty. To alleviate these two problems, we propose a novel framework, termed as VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation. We first learn a feature volume to represent the underlying structure, which is then converted to a feature field using a NeRF-like model. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a neural renderer for appearance synthesis. Such a design enables independent control of the shape and the appearance. Project page is at https://genforce.github.io/volumegan.
|
| 11 |
+
|
| 12 |
+
# 1. Introduction
|
| 13 |
+
|
| 14 |
+
Learning 3D-aware image synthesis draws wide attention recently [3,30,35]. An emerging solution is to integrate a Neural Radiance Field (NeRF) [28] into a Generative Adversarial Network (GAN) [7]. Specifically, the 2D Convolutional Neural Network (CNN) based generator is replaced with a generative implicit function, which maps the raw 3D coordinates to point-wise densities and colors conditioned on the given latent code. Such an implicit function encodes the structure and the texture of the output image in the 3D space.
|
| 15 |
+
|
| 16 |
+
However, there are two problems of directly employing NeRF [28] in the generator. On one hand, the implicit function in NeRF produces the color and density for each 3D point using a Multi-Layer Perceptron (MLP) network.
|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
Figure 1. Images of faces and cars synthesized by VolumeGAN, which enables the control of viewpoint, structure, and texture.
|
| 20 |
+
|
| 21 |
+
With a very local receptive field, it is hard for the MLP to represent the underlying structure globally when synthesizing images. Thus only using the 3D coordinates as the inputs [3, 30, 35] is not expressive enough to guide the generator with the global structure. On the other hand, volume rendering generates the pixel values of the output image separately, which requires sampling numerous points along the camera ray regarding each pixel. The computational cost hence significantly increases when the image size becomes larger. It may cause the insufficient optimization of the model training, and further lead to unsatisfying performance for high-resolution image generation.
|
| 22 |
+
|
| 23 |
+
Prior work has found that 2D GANs benefits from valid representations learned by the generator [36, 44, 45]. Such generative representations describe a synthesis with high-level features. For example, Xu et al. [44] confirm that a face synthesis model is aware of the landmark positions of the output face, and Yang et al. [45] identify the multilevel variation factors emerging from generating bedroom images. These representative features encode rich texture and structure information, thereby enhancing the synthesis quality [16] and the controllability [36] of image GANs. In contrast, as mentioned above, existing 3D-aware generative models directly render the pixel values from coordinates [3, 35], without learning explicit representations.
|
| 24 |
+
|
| 25 |
+
In this work, we propose a new generative model, termed as VolumeGAN, which achieves 3D-aware image synthesis through explicitly learning a structural and a textural representation. Instead of using the 3D coordinates as the inputs, we generate a feature volume using a 3D convolutional network, which encodes the relationship between various spatial regions and hence compensates for the insufficient receptive field caused by the MLP in NeRF. With the feature volume modeling the underlying structure, we query a coordinate descriptor from the feature volume to describe the structural information for each 3D point. We then employ a NeRF-like model to create a feature field, by taking the coordinate descriptor attached with the raw coordinate as the input. The feature field is further accumulated into a 2D feature map as the textural representation, followed by a CNN with $1 \times 1$ kernel size to finally render the output image. In this way, we separately model the structure and the texture with the 3D feature volume and the 2D feature map, enabling the disentangled control of the shape and the appearance.
|
| 28 |
+
|
| 29 |
+
We evaluate our approach on various datasets and demonstrate its superior performance over existing alternatives. In terms of the image quality, VolumeGAN achieves a substantially better Fréchet Inception Distance (FID) score [11]. Taking the FFHQ dataset [16] under $256 \times 256$ resolution as an instance, we improve the FID from 36.7 to 9.1. We also enable 3D-aware image synthesis on the challenging indoor scene dataset, i.e., LSUN bedroom [47]. Our model also suggests stable control of the object pose and shows better consistency across different viewpoints, benefiting from the learned structural representation (i.e., the feature volume). Furthermore, we conduct a detailed empirical study on the learned structural and textural representations, and analyze the trade-off between the image quality and the 3D property.
|
| 30 |
+
|
| 31 |
+
# 2. Related work
|
| 32 |
+
|
| 33 |
+
Neural Implicit Representations. Recent methods [5, 20, 27, 28, 33, 39] propose to represent 3D scenes with neural implicit functions, such as occupancy field [27], signed distance field [33], and radiance field [28]. To recover these representations from images, they develop differentiable renderers [19, 21, 31, 41] that render implicit functions into images, and optimize the network parameters by minimizing the difference between rendered images and observed images. These methods can reconstruct high-quality 3D shapes and perform photo-realistic view synthesis, but they have several strong assumptions on the input data, including dense camera views, precise camera parameters, and constant lighting effects. More recently, some methods [3, 13, 25, 26, 30, 35] have attempted to reduce the constraints on the input data. By appending an appearance embedding to each input image, [25] can recover 3D scenes from multi-view images with different lighting effects. [13, 26] reconstructs neural radiance fields from very sparse views by applying a discriminator to supervise
|
| 34 |
+
|
| 35 |
+
the synthesized images on novel views. Different from these methods requiring multi-view images, our approach can synthesize high-resolution images by training networks only on unstructured single-view image collections.
|
| 36 |
+
|
| 37 |
+
Image Synthesis with 2D GANs. Generative Adversarial Networks (GANs) [7, 14] have made significant progress in synthesizing photo-realistic images but lack the ability to control the generation. To obtain better controllability in synthesizing process, [36, 37, 45, 50] investigate the latent space of the pre-trained GANs to determine the semantic direction. Many works [4, 34] add regularizers or modify the network structure [10, 15-17] to improve the disentanglement of variation factors without explicit supervision. Besides, recent methods [1, 9, 44, 51] adopt optimization or train encoders for controlling attributes of real images by pre-trained GANs. However, these efforts control the generation only in 2D space and ignore the 3D nature of the physical world, resulting in a lack of consistency for view synthesis.
|
| 38 |
+
|
| 39 |
+
3D-Aware Image Synthesis. 2D GANs lack knowledge of 3D structure. Some prior works directly introduce 3D representation to perform 3D-aware image synthesis. VON [52] generates a 3D shape represented by voxels which is then projected into 2D image space by a differentiable renderer. HoloGAN [29] propose voxelized and implicit 3D representations and then render it to 2D space with a reshape operation. While these methods can achieve good results, the synthesized images suffer from the fine details and identity shift because of the voxel resolution restriction. Instead of voxel representation, GRAF [35] and $\pi$ -GAN [3] propose to model 3D shapes by neural implicit representation, which maps the coordinates to the RGB color. GOF [43] and ShadeGAN [32] introduce the occupancy field and albedo field instead of radiance field for image rendering. However, due to the computationally intensive rendering process, they cannot synthesize high-resolution images with good visual quality. To overcome this problem, [30] first render low-resolution feature maps with neural feature fields and then generate high-resolution images with 2D CNNs, also with the coordinates as the input. However, severe artifacts across different camera views are introduced because CNN-based decoder harms the 3D consistency. Unlike previous attempts, we leverage the feature volume to provide the feature descriptor for each coordinate and a neural renderer consisting of $1 \times 1$ convolution block to synthesize high-quality images with better multi-view consistency and 3D control. The concurrent work StyleNeRF [8] also adopts $1 \times 1$ convolution block to synthesize high-quality images. However, we adopt the feature volume to provide the structural description for the synthesized object instead of using regularizers to improve the 3D properties.
|
| 40 |
+
|
| 41 |
+

|
| 42 |
+
Figure 2. Framework of the proposed VolumeGAN. We first learn a feature volume, starting from a learnable spatial template, as the structural representation. Given the camera pose $\xi$ , we sample points along a camera ray and query the coordinate descriptor of each point from the feature volume via trilinear interpolation. The resulting coordinate descriptors, concatenated with the raw 3D coordinates, are then converted to a generative feature field and further accumulated as a 2D feature map. Such a feature map is regarded as the textural representation, which guides the rendering of the appearance of the output synthesis with the help of a neural renderer.
|
| 43 |
+
|
| 44 |
+
# 3. Method
|
| 45 |
+
|
| 46 |
+
This work targets at learning 3D-aware image generative models from a collection of 2D images. Previous attempts replace the generator of a GAN model with an implicit function [28], which maps 3D coordinates to pixel values. To improve the controllability and synthesis quality, we propose to explicitly learn the structural and the textural representations that are responsible for the underlying structure and texture of the object respectively. Concretely, instead of directly bridging coordinates with densities and RGB colors, we ask the implicit function to transform 3D feature volume (i.e., the structural representation) to a generative feature field, which are then accumulated into a 2D feature map (i.e., the textural representation). The overall framework is illustrated in Fig. 2. Before going into details, we first briefly introduce the Neural Radiance Field (NeRF), which is a core module of the proposed model.
|
| 47 |
+
|
| 48 |
+
# 3.1. Preliminary
|
| 49 |
+
|
| 50 |
+
The neural radiance field [28] is formulated as a continuous function $F(\mathbf{x}, \mathbf{d}) = (\mathbf{c}, \sigma)$ , which maps a 3D coordinate $\mathbf{x} \in \mathbb{R}^3$ and the viewing direction $\mathbf{d} \in \mathbb{S}^2$ to the RGB color $\mathbf{c} \in \mathbb{R}^3$ and a volume density $\sigma \in \mathbb{R}$ . Then, given a sampled ray, we can predict the colors and densities of all the points that the ray goes through, which are then accumulated into the pixel value with volume rendering techniques. Typically, the function $F(\cdot, \cdot)$ is parameterized with a multi-layer perceptron (MLP), $\Phi(\cdot)$ , as the backbone, and two independent heads, $\phi_c(\cdot, \cdot)$ and $\phi_d(\cdot)$ , to regress the color and density:
|
| 51 |
+
|
| 52 |
+
$$
\mathbf{c}(\mathbf{x}, \mathbf{d}) = \phi_c(\Phi(\mathbf{x}), \mathbf{d}), \tag{1}
$$

$$
\sigma(\mathbf{x}) = \phi_d(\Phi(\mathbf{x})), \tag{2}
$$
|
| 59 |
+
|
| 60 |
+
where the color is related with the viewing direction $\mathbf{d}$ due to the variation factors like lighting, while the density $\sigma$ is independent of $\mathbf{d}$ .
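To make Eqs. (1)-(2) concrete, here is a minimal PyTorch-style sketch of a backbone $\Phi$ with a view-independent density head and a view-dependent color head. It is an illustrative reconstruction under assumed layer widths, without positional encoding, and the class and argument names (`NeRFHeads`, `hidden_dim`) are not from the paper.

```python
import torch
import torch.nn as nn

class NeRFHeads(nn.Module):
    """Backbone Phi with a view-dependent color head and a density head (Eqs. 1-2)."""

    def __init__(self, in_dim=3, view_dim=3, hidden_dim=256, num_layers=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(num_layers):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU(inplace=True)]
            dim = hidden_dim
        self.backbone = nn.Sequential(*layers)           # Phi(x)
        self.density_head = nn.Linear(hidden_dim, 1)     # sigma(x) = phi_d(Phi(x))
        self.color_head = nn.Sequential(                 # c(x, d) = phi_c(Phi(x), d)
            nn.Linear(hidden_dim + view_dim, hidden_dim // 2), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, d):
        h = self.backbone(x)
        sigma = self.density_head(h)                      # density is independent of d
        rgb = self.color_head(torch.cat([h, d], dim=-1))  # color depends on the viewing direction
        return rgb, sigma

# usage: rgb, sigma = NeRFHeads()(torch.rand(1024, 3), torch.rand(1024, 3))
```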
|
| 61 |
+
|
| 62 |
+
NeRF is primarily proposed for 3D reconstruction and novel view synthesis, which is trained with the supervision from multi-view images. To enable random sampling by learning from a collection of single-view images, recent attempts [3, 35] introduce a latent code $\mathbf{z}$ to the function $F(\cdot ,\cdot)$ . In this way, the geometry and appearance of the rendered image will vary according to the input $\mathbf{z}$ , resulting in diverse generation. Such a stochastic implicit function is asked to compete with a discriminator of GANs [7] to mimic the distribution of real 2D images. In the learning process, the revised function $F(\mathbf{x},\mathbf{d},\mathbf{z}) = (\mathbf{c},\sigma)$ is supposed to encode the structure and the texture information simultaneously.
|
| 63 |
+
|
| 64 |
+
# 3.2. 3D-aware Generator in VolumeGAN
|
| 65 |
+
|
| 66 |
+
To improve the controllability and image quality of the NeRF-based 3D-aware generative model, we propose to explicitly learn a structural representation and a textural representation, which control the underlying structure and texture respectively. In this part, we will introduce the design of the structural and the textural representations, as well as their integration through a generative neural feature field.
|
| 67 |
+
|
| 68 |
+
3D Feature Volume as Structural Representation. As pointed out by NeRF [28], the low-dimensional coordinates $\mathbf{x}$ should be projected into a higher-dimensional feature to describe the complex 3D scenes. For this purpose, a typical solution is to characterize $\mathbf{x}$ into Fourier features [40]. However, such a Fourier transformation cannot introduce additional information beyond the spatial position. It may be enough for reconstructing a fixed scene, but yet far from encoding a distributed feature for the image synthesis of different object instances. Hence, we propose to learn a grid of features providing the inputs of implicit functions, which gives a more detailed description of each spatial point. We term such a 3D feature volume, $\mathbf{V}$ , as the structural representation which characterizes the underlying 3D structure. To obtain the feature volume, we employ a sequence of
|
| 69 |
+
|
| 70 |
+
3D convolutional layers with the Leaky ReLU (LReLU) functions [24]. Inspired by Karras et al. [16], we apply Adaptive Instance Normalization (AdaIN) [12] to the output of each layer to introduce diversity to the feature volume. Starting from a learnable 3D tensor, $\mathbf{V}_0$ , the structural representation is generated with
|
| 71 |
+
|
| 72 |
+
$$
\mathbf{V} = \psi_{n_s-1} \circ \psi_{n_s-2} \circ \dots \circ \psi_0(\mathbf{V}_0), \tag{3}
$$

$$
\psi_i\left(\mathbf{V}_i\right) = \operatorname{AdaIN}\left(\operatorname{LReLU}\left(\operatorname{Conv}\left(\operatorname{Up}(\mathbf{V}_i, s_i)\right)\right), \mathbf{z}\right), \tag{4}
$$
|
| 79 |
+
|
| 80 |
+
where $n_s$ denotes the number of layers for structure learning. $s_i \in \{1, 2\}$ is the upsampling scale for the $i$ -th layer.
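The PyTorch-style sketch below illustrates one such $\psi_i$ step under Eqs. (3)-(4): upsample, 3D convolution, LReLU, then AdaIN whose per-channel scale and shift are predicted from the latent code $\mathbf{z}$. The channel widths, latent dimension, and class names are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN3d(nn.Module):
    """Adaptive instance norm: scale/shift predicted from the latent z (Eq. 4)."""
    def __init__(self, channels, z_dim):
        super().__init__()
        self.norm = nn.InstanceNorm3d(channels, affine=False)
        self.affine = nn.Linear(z_dim, 2 * channels)

    def forward(self, v, z):
        gamma, beta = self.affine(z).chunk(2, dim=1)
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return self.norm(v) * (1 + gamma) + beta

class VolumeBlock(nn.Module):
    """One psi_i step: optional upsample, 3x3x3 conv, LReLU, AdaIN (Eqs. 3-4)."""
    def __init__(self, in_ch, out_ch, z_dim, up=2):
        super().__init__()
        self.up = up
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.adain = AdaIN3d(out_ch, z_dim)

    def forward(self, v, z):
        if self.up > 1:
            v = F.interpolate(v, scale_factor=self.up, mode="nearest")
        v = F.leaky_relu(self.conv(v), 0.2)
        return self.adain(v, z)

# e.g. growing a 4^3 template toward a 32^3 feature volume, one block at a time
z = torch.randn(2, 128)
v0 = torch.randn(2, 256, 4, 4, 4)            # stands in for the learnable template V_0
print(VolumeBlock(256, 128, z_dim=128)(v0, z).shape)   # torch.Size([2, 128, 8, 8, 8])
```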
|
| 81 |
+
|
| 82 |
+
2D Feature Map as Textural Representation. As discussed before, volume rendering can be extremely slow and computationally expensive, making it costly to directly render the raw pixels of a high-resolution image. To mitigate the issue, we propose to learn a feature map at a low resolution, followed by a CNN to render a high-fidelity result. Here, the 2D feature map is responsible for describing the visual appearance of the final output. The tailing CNN consists of several Modulated Convolutional Layers (ModConv) [17], also activated by LReLU. To avoid the CNN from weakening the 3D consistency [30], we use $1 \times 1$ kernel size for all layers such that the per-pixel feature can be processed independently. In particular, given a 2D feature map, M, as the textural representation, the image is generated by
|
| 83 |
+
|
| 84 |
+
$$
\mathbf{I}^f = f_{n_t-1} \circ f_{n_t-2} \circ \dots \circ f_0(\mathbf{M}), \tag{5}
$$

$$
f_i\left(\mathbf{M}_i\right) = \operatorname{LReLU}\left(\operatorname{ModConv}(\mathbf{M}_i, t_i, \mathbf{z})\right), \tag{6}
$$
|
| 91 |
+
|
| 92 |
+
where $n_t$ denotes the number of layers for texture learning. $t_i \in \{1, 2\}$ is the upsampling scale for the $i$ -th layer.
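A minimal sketch of one renderer layer under Eq. (6) follows. Here ModConv is reduced to a per-sample channel modulation followed by a $1 \times 1$ convolution, which keeps every pixel independent of its neighbours; the full weight (de)modulation of StyleGAN2 and the exact upsampling schedule are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModConv1x1(nn.Module):
    """Simplified modulated 1x1 conv (Eq. 6): styles from z scale the input channels,
    and the 1x1 kernel preserves per-pixel independence."""
    def __init__(self, in_ch, out_ch, z_dim):
        super().__init__()
        self.style = nn.Linear(z_dim, in_ch)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, m, z):
        s = self.style(z).unsqueeze(-1).unsqueeze(-1)   # (B, in_ch, 1, 1)
        return F.leaky_relu(self.conv(m * s), 0.2)

# e.g. a tiny two-layer renderer head mapping 256-dim features to RGB
renderer = nn.ModuleList([ModConv1x1(256, 256, z_dim=128), ModConv1x1(256, 3, z_dim=128)])
```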
|
| 93 |
+
|
| 94 |
+
Bridging Representations with Neural Feature Field. To connect the structural and the textural representations in the framework, we introduce a neural radiance field [28] as the bridge. Different from the implicit function in the original NeRF, which maps coordinates to pixel values, we first query the coordinate descriptor, $\mathbf{v}$ , from the feature volume, $\mathbf{V}$ , given a 3D coordinate $\mathbf{x}$ , and then concatenate it with $\mathbf{x}$ to obtain $\mathbf{v}^{\mathbf{x}}$ as the input. Then, the implicit function transform $\mathbf{v}^{\mathbf{x}}$ to the density and feature vector of the field. The above process can be formulated as
|
| 95 |
+
|
| 96 |
+
$$
\mathbf{v} = \operatorname{trilinear}(\mathbf{V}, \mathbf{x}), \tag{7}
$$

$$
\mathbf{v}^{\mathbf{x}} = \operatorname{Concat}(\mathbf{v}, \mathbf{x}), \tag{8}
$$

$$
\Phi(\mathbf{v}^{\mathbf{x}}) = \phi_{n-1} \circ \phi_{n-2} \circ \dots \circ \phi_0(\mathbf{v}^{\mathbf{x}}), \tag{9}
$$

$$
\phi_i\left(\mathbf{v}_i^{\mathbf{x}}\right) = \sin\left(\gamma_i(\mathbf{z}) \cdot \left(\mathbf{W}_i \mathbf{v}_i^{\mathbf{x}} + \mathbf{b}_i\right) + \beta_i(\mathbf{z})\right), \tag{10}
$$

$$
\mathbf{f}(\mathbf{x}, \mathbf{d}) = \phi_f\left(\Phi(\mathbf{v}^{\mathbf{x}}), \mathbf{d}\right), \tag{11}
$$

$$
\sigma(\mathbf{x}) = \phi_d\left(\Phi(\mathbf{v}^{\mathbf{x}})\right), \tag{12}
$$
|
| 119 |
+
|
| 120 |
+
where $n$ denotes the number of layers to parameterize the neural field, while $\mathbf{W}_i$ and $\mathbf{b}_i$ are the learnable layer-wise weights and biases. Eq. (8) concatenates the coordinates $\mathbf{x}$ onto the feature $\mathbf{v}$ to explicitly introduce the structural information. Eq. (10) follows Chan et al. [3], which conditions the layer-wise output of the backbone $\Phi(\cdot)$ on the frequencies, $\gamma_{i}(\cdot)$, and phase shifts, $\beta_{i}(\cdot)$, learned from the random noise $\mathbf{z}$. Eq. (11) replaces the color modeling in Eq. (1) with feature modeling.
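Eqs. (7)-(8) can be realized with a single trilinear lookup, for example via `grid_sample` on a 5D tensor, as sketched below. The assumption that the point coordinates are already normalized to $[-1, 1]$ and the function name are illustrative choices, not details stated in the paper.

```python
import torch
import torch.nn.functional as F

def query_coordinate_descriptor(feature_volume, xyz):
    """Eqs. (7)-(8): trilinearly sample a descriptor v for each 3D point and
    concatenate it with the raw coordinates.

    feature_volume: (B, C, D, H, W) structural representation V
    xyz:            (B, N, 3) points, assumed normalized to [-1, 1]
    returns:        (B, N, C + 3) inputs v^x for the feature field
    """
    grid = xyz.view(xyz.shape[0], -1, 1, 1, 3)                 # (B, N, 1, 1, 3)
    v = F.grid_sample(feature_volume, grid, mode="bilinear",   # trilinear for 5D inputs
                      align_corners=True)                      # (B, C, N, 1, 1)
    v = v.squeeze(-1).squeeze(-1).permute(0, 2, 1)             # (B, N, C)
    return torch.cat([v, xyz], dim=-1)

# e.g. query_coordinate_descriptor(torch.randn(1, 64, 32, 32, 32), torch.rand(1, 4096, 3) * 2 - 1)
```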
|
| 123 |
+
|
| 124 |
+
A per-pixel final feature $\mathbf{m}$ can be obtained via volume rendering along a ray $\mathbf{r}$ (with viewing direction $\mathbf{d}$ ). A collection of $\mathbf{m}$ regarding different rays group into a 2D feature map as the textural representation, $\mathbf{M}$ , which will be further used to render the image.
|
| 125 |
+
|
| 126 |
+
$$
\mathbf{m}(\mathbf{r}) = \sum_{k=1}^{N} T_k \left(1 - \exp\left(-\sigma(\mathbf{x}_k)\,\delta_k\right)\right) \mathbf{f}(\mathbf{x}_k, \mathbf{d}), \tag{13}
$$

$$
T_k = \exp\left(-\sum_{j=1}^{k-1} \sigma(\mathbf{x}_j)\,\delta_j\right). \tag{14}
$$
|
| 133 |
+
|
| 134 |
+
Eq. (13) approximates the integral of $N$ points $\{\mathbf{x}_k\}_{k=1}^N$ on the sampled ray $\mathbf{r}$ , where $\delta_k = ||\mathbf{x}_{k+1} - \mathbf{x}_k||_2$ stands for the distance between adjacent sampled points.
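A compact PyTorch-style sketch of the accumulation in Eqs. (13)-(14) follows; the tensor shapes and the small epsilon added for numerical stability are assumptions, not values from the paper.

```python
import torch

def accumulate_features(sigmas, feats, deltas):
    """Eqs. (13)-(14): alpha-composite per-point features along each ray.

    sigmas: (R, N)    densities for N samples on R rays
    feats:  (R, N, C) per-point feature vectors f(x_k, d)
    deltas: (R, N)    distances between adjacent samples
    returns (R, C)    per-pixel features m(r) of the 2D textural map M
    """
    alpha = 1.0 - torch.exp(-sigmas * deltas)                   # opacity of each sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)          # transmittance after each sample
    T = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)   # T_k (Eq. 14)
    weights = T * alpha                                         # T_k * (1 - exp(-sigma * delta))
    return (weights.unsqueeze(-1) * feats).sum(dim=1)

# e.g. accumulate_features(torch.rand(4096, 24), torch.rand(4096, 24, 256), torch.full((4096, 24), 0.05))
```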
|
| 135 |
+
|
| 136 |
+
# 3.3. Training Pipeline
|
| 137 |
+
|
| 138 |
+
Generative Sampling. The whole generation process is formulated as $\mathbf{I}^f = G(\mathbf{z},\xi)$ , where $\mathbf{z}$ is a latent code sampled from a Gaussian distribution $\mathcal{N}(0,1)$ and $\xi$ denotes the camera pose sampled from a prior distribution $p_{\xi}$ . $p_{\xi}$ is tuned for different datasets as either Gaussian or Uniform.
|
| 139 |
+
|
| 140 |
+
Discriminator. Like existing approaches for 3D-aware image synthesis [3,30,35], we employ a discriminator $D(\cdot)$ to compete with the generator. The discriminator is a CNN consisting of several residual blocks like [17].
|
| 141 |
+
|
| 142 |
+
Training Objectives. During training, we randomly sample $\mathbf{z}$ and $\xi$ from the prior distributions, while the real images $\mathbf{I}^r$ are sampled from the real data distribution $p_{\mathcal{D}}$ . The generator and the discriminator are jointly trained with
|
| 143 |
+
|
| 144 |
+
$$
\min \mathcal{L}_G = \mathbb{E}_{\mathbf{z} \sim p_z,\, \xi \sim p_\xi}\left[f\left(D(G(\mathbf{z}, \xi))\right)\right], \tag{15}
$$

$$
\min \mathcal{L}_D = \mathbb{E}_{\mathbf{I}^r \sim p_{\mathcal{D}}}\left[f\left(-D(\mathbf{I}^r)\right) + \lambda \left\|\nabla_{\mathbf{I}^r} D(\mathbf{I}^r)\right\|_2^2\right], \tag{16}
$$
|
| 151 |
+
|
| 152 |
+
where $f(t) = -\log (1 + \exp (-t))$, i.e., the negative softplus of $-t$ (equivalently, the log-sigmoid). The last term in Eq. (16) is the gradient penalty (R1) regularizer and $\lambda$ is its loss weight.
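For concreteness, below is a minimal PyTorch-style sketch of these objectives in the softplus form commonly used to implement them (e.g., in StyleGAN2-like code). The sign convention, the inclusion of a fake-sample term in the discriminator loss, and the R1 weight of 10 are assumptions rather than details stated in the paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits):
    """Non-saturating generator loss: softplus(-D(G(z, xi))), cf. Eq. (15)."""
    return F.softplus(-d_fake_logits).mean()

def discriminator_loss(d_real_logits, d_fake_logits, real_images, r1_weight=10.0):
    """Discriminator loss with the R1 gradient penalty on real images, cf. Eq. (16).
    The fake-sample term and r1_weight=10.0 are assumptions; the paper only gives the form."""
    loss = F.softplus(-d_real_logits).mean() + F.softplus(d_fake_logits).mean()
    # real_images must be created with requires_grad=True before D is evaluated,
    # otherwise the gradient penalty cannot be differentiated w.r.t. the input pixels.
    grad, = torch.autograd.grad(d_real_logits.sum(), real_images, create_graph=True)
    r1 = grad.flatten(1).pow(2).sum(dim=1).mean()
    return loss + r1_weight * r1
```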
|
| 153 |
+
|
| 154 |
+
# 4. Experiment
|
| 155 |
+
|
| 156 |
+
# 4.1. Settings
|
| 157 |
+
|
| 158 |
+
Datasets. We evaluate the proposed VolumeGAN on five real-world unstructured datasets including CelebA [22], Cats [48], FFHQ [16], CompCars [46], LSUN bedroom [47], and a synthetic dataset Carla [6]. CelebA contains around $20K$ face images from $10K$ identities. The
|
| 159 |
+
|
| 160 |
+

|
| 161 |
+
Figure 3. Qualitative comparison between our VolumeGAN and existing alternatives on FFHQ [16], CompCars [46], and LSUN bedroom [47] datasets. All images are in $256 \times 256$ resolution.
|
| 162 |
+
|
| 163 |
+
crop from the top of the hair to the bottom of the chin is adopted for data preprocessing on CelebA. The Cats dataset contains $6.5K$ images of cat heads at $128 \times 128$ resolution. FFHQ contains $70K$ images of real human faces in a resolution of $1024 \times 1024$ . We follow the protocol of [16] to preprocess the faces of FFHQ. Compcars includes $136K$ real cars whose pose varies greatly. The original images are in different aspect ratios. Hence we center crop the cars and resize them into $256 \times 256$ . Carla dataset contains $10K$ images which are rendered from Carla Driving simulator [6] using 16 car models with different textures. LSUN bedroom includes $300K$ samples in various camera views and aspect ratios. We also use center cropping to preprocess the bedroom images. We train VolumeGAN on resolutions of $128 \times 128$ for CelebA, Cats, and Carla and $256 \times 256$ for FFHQ, CompCars and LSUN bedroom.
|
| 164 |
+
|
| 165 |
+
Baselines. We choose four 3D-aware image synthesis approaches as the baselines, including HoloGAN [29], GRAF [35], $\pi$-GAN [3], and GIRAFFE [30]. Baseline models are officially released by the original papers or trained with the official implementation. More details can be found in Supplementary Material.
|
| 168 |
+
|
| 169 |
+
Implementation Details. The learnable 3D template $\mathbf{V}_0$ are randomly initialized in $4 \times 4 \times 4$ shape and 3D convolutions with kernel size $3 \times 3 \times 3$ are stacked to embed the template, resulting in the feature volume in $32 \times 32 \times 32$ resolution. We sample rays in a resolution of $64 \times 64$ , and four conditioned MLPs (SIREN [3,38]) with 256 dimensions are adopted to model the feature field and the volume density. We use an Upsample block [17] and two $1 \times 1$ ModConv [2,17] at each resolution for the neural renderer until reaching the output image resolution. We also apply progressive training strategy used in PG-GAN [14] to achieve better image qualities. For the network training, we use Adam [18] optimizer with $\beta_0 = 0$ and $\beta_1 = 0.999$ over 8 GPUs. The entire training requires the discriminator to see $25000K$ images. The batch size is 64, and the weight decay is 0. More details can be found in Supplementary Material.
|
| 170 |
+
|
| 171 |
+
Table 1. Quantitative comparisons on different datasets. FID [11] (lower is better) is used as the evaluation metric. Numbers in brackets indicate the improvement of our VolumeGAN over the second-best method.

| Method | CelebA 128 | Cats 128 | Carla 128 | FFHQ 256 | CompCars 256 | Bedroom 256 |
|---|---|---|---|---|---|---|
| HoloGAN [29] | 39.7 | 40.4 | 126.4 | 72.6 | 65.6 | - |
| GRAF [35] | 41.1 | 28.9 | 41.6 | 81.3 | 222.1 | 63.9 |
| $\pi$-GAN [3] | 15.9 | 17.7 | 30.1 | 53.2 | 194.5 | 33.9 |
| GIRAFFE [30] | 17.5 | 20.1 | 30.8 | 36.7 | 27.2 | 44.2 |
| VolumeGAN (Ours) | 8.9 (-7.0) | 5.1 (-12.6) | 7.9 (-22.2) | 9.1 (-27.6) | 12.9 (-14.3) | 17.3 (-16.6) |
|
| 174 |
+
|
| 175 |
+
# 4.2. Main Results
|
| 176 |
+
|
| 177 |
+
Qualitative Results. Fig. 3 compares the synthesized images of our method with the baselines on FFHQ, CompCars, and LSUN bedroom. The images are sampled from three views and synthesized in a resolution of $256 \times 256$ for visualization. Although all baseline methods can synthesize images under different camera poses on FFHQ, they suffer from low image quality and identity shift across different angles. When transferred to the challenging CompCars with larger viewpoint variations, some baselines such as GRAF [35] and $\pi$-GAN [3] struggle to generate realistic cars. HoloGAN can achieve good image quality but suffers from multi-view inconsistency. GIRAFFE can generate realistic cars, while the color of the cars changes significantly under different views. When tested on LSUN bedroom, HoloGAN, GRAF, $\pi$-GAN, and GIRAFFE cannot handle such indoor scene data with larger structure and texture variations.
|
| 178 |
+
|
| 179 |
+
VolumeGAN can synthesize high-fidelity view-consistent images. Compared with the existing approaches, it generates more fine details, such as teeth (face), headlights (car) and windows (bedroom). Even on the more challenging CompCars and LSUN bedroom datasets, VolumeGAN still achieves satisfying synthesis performance thanks to the feature volume and the neural renderer.
|
| 180 |
+
|
| 181 |
+
Quantitative Results. We quantitatively evaluate the visual quality of the synthesized images using the Fréchet Inception Distance (FID) [11]. We follow the evaluation protocol of StyleGAN [16], which adopts $50K$ real and fake samples to calculate the FID score. All baseline models are evaluated with the same setting for a fair comparison. As shown in Tab. 1, our approach leads to a significant improvement over the baselines, particularly on the challenging datasets with larger pose variation or finer details. Note that although GIRAFFE also uses a neural renderer, our method still outperforms it by a clear margin. This demonstrates that the structural information encoded in the feature volume provides representative visual concepts, resulting in better image quality.
|
| 182 |
+
|
| 183 |
+
# 4.3. Ablation Studies
|
| 184 |
+
|
| 185 |
+
We conduct ablation studies on CelebA $128 \times 128$ to examine the importance of each component in VolumeGAN.



Figure 4. Synthesized results with the front camera view by $\pi$-GAN [3] and our VolumeGAN, where the faces produced by VolumeGAN are more consistent with the given view, suggesting better 3D controllability.

Metrics. In addition to the FID score that measures the image quality, we also provide two quantitative metrics to measure the multi-view consistency and the precision of 3D control as follows. 1) Reprojection Error. We first extract the underlying geometry of an object from the generated density using marching cubes [23]. Then, we render each object in sequence and sample five viewpoints uniformly to synthesize the images.
|
| 191 |
+
|
| 192 |
+
The depth of each image is rendered from the extracted mesh and is used to calculate the reprojection error between two consecutive views by warping them onto each other. Specifically, we fix the yaw to 0 and sample the pitch from $[-0.3, 0.3]$. The marching-cubes threshold is set to 10, which gives the best mesh visualizations. The reprojection error is computed in the normalized image space $[-1, +1]$, following [1, 44, 51], to evaluate multi-view consistency. 2) Pose Error. We synthesize 20,000 images and regard the predictions of a head pose estimator [49] as the ground truth. The L1 distance between the given camera pose and the predicted pose is reported to evaluate the 3D control quantitatively.
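As a small illustration of the pose-error metric, the sketch below computes the mean L1 distance between the sampled camera poses and the poses predicted by an off-the-shelf estimator; the (yaw, pitch) parameterization and the tensor shapes are assumptions made for illustration.

```python
import torch

def pose_error(sampled_poses, predicted_poses):
    """Mean L1 distance between the camera poses used for synthesis and the poses
    predicted by a head-pose estimator (treated as ground truth).

    sampled_poses, predicted_poses: (N, 2) tensors of (yaw, pitch) angles.
    """
    return (sampled_poses - predicted_poses).abs().mean()

# e.g. pose_error(torch.zeros(20000, 2), torch.randn(20000, 2) * 0.1)
```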
|
| 193 |
+
|
| 194 |
+
Ablations on VolumeGAN Components. Our approach proposes to use Feature Volume as the structural representation and adopt the neural renderer consisting of ModConv to render textural representation into high-fidelity images. We ablate them to better understand their individual contributions. Our baseline is built upon $\pi$ -GAN [3] using conditioned MLPs to achieve 3D-aware image synthesis by mapping coordinates to RGB color. The layer number of the baseline is set to be 4, the same as our setting illustrated in Sec. 4.1 for a fair comparison. As shown in Tab. 2, introducing the feature volume that provides the structural representation could further improve the FID score of the baseline approach from 18.7 to 13.6.
|
| 195 |
+
|
| 196 |
+
More importantly, lower reprojection error and pose error are also achieved, demonstrating the structural representation from the feature volume not only facilitates better visual results but also maintains the 3D properties regarding multi-view consistency and 3D explicit controlling. On top of this, the neural renderer further enhances FID to 8.9 with a slight drop in reprojection error and pose error, leading to the new state-of-the-art result on 3D-aware image synthesis. Notably, involving the neural renderer to the baseline could also boost the FID score but apparently sacrifice the 3D properties to some extent according to the 3D metrics. It also indicates that FID is not a comprehensive metric to evaluate 3D-aware image synthesis. In addition, Fig. 4 gives several synthesized samples of $\pi$ -GAN baseline and our approach under the front view. More samples can be found in Supplementary Material. Qualitatively, the poses of our synthesized samples are closer to the given camera view which is quantitatively reflected by the pose-error score.
|
| 197 |
+
|
| 198 |
+
Resolution of the Feature Volume. The feature volume resolution depicts the spatial refinement of the structural representation, and thus it plays an essential role in synthesizing images. Tab. 3 presents the metrics of the synthesis results for various resolutions of feature volume. As the resolution increases, the multi-view consistency and 3D control become better consistently while the visual quality measured by FID fluctuates little. This demonstrates that a more detailed feature volume provides better geometry consistency across various camera poses. However, increasing the feature volume resolution inevitably results in a greater computational burden. As a result, we choose a feature volume resolution of 32 in all of our experiments to maintain the balance between efficiency and image quality.
|
| 199 |
+
|
| 200 |
+
Neural Renderer Depth. The neural renderer is adopted to convert textural representations into 2D images; thus, its capacity is critical to the quality of the generated images. We adjust its capacity by varying the depth of the neural renderer to investigate its effect. Tab. 4 shows a trade-off between image quality and 3D properties. As the depth of the network increases, better image visual quality can be achieved while the quality of multi-view consistency and 3D control downgrades. This implies that increasing the capacity of the neural renderer would damage the 3D structure to some extent, revealing FID is not a comprehensive metric for 3D-aware image synthesis again. We thus choose the shallower network as the neural renderer for better 3D consistency and control.
|
| 201 |
+
|
| 202 |
+
# 4.4. Properties of Learned Representations
|
| 203 |
+
|
| 204 |
+
A key advantage of our approach over previous attempts is that, by separately modeling the structure and texture with the 3D feature volume and the 2D feature map, our model learns disentangled representations for the object. These representations allow us to achieve independent control of the shape and appearance. The coordinate descriptor and the 3D mesh extracted from the density are visualized to interpret the learned representations.

Table 2. Ablation studies on the components of VolumeGAN, including the feature volume (FV) and the neural renderer (NR). "Rep-Er" and "Pose-Er" are the reprojection error and pose error.

| FV | NR | FID | Rep-Er | Pose-Er |
|---|---|---|---|---|
| $\pi$-GAN (baseline) | | 18.7 | 0.071 | 12.7 |
| ✓ | | 13.6 | 0.031 | 8.3 |
| | ✓ | 11.3 | 0.103 | 12.1 |
| ✓ | ✓ | 8.9 | 0.037 | 8.6 |

Table 3. Effect of the size of the feature volume. "Str Res" denotes the resolution of the feature volume (i.e., the structural representation).

| Str Res | FID | Rep-Er | Pose-Er | Speed (fps) |
|---|---|---|---|---|
| 16 | 9.0 | 0.040 | 9.1 | 5.58 |
| 32 | 8.9 | 0.037 | 8.6 | 5.15 |
| 64 | 9.2 | 0.032 | 8.4 | 3.86 |

Table 4. Effect of the depth of the neural renderer. "Tex Res" denotes the resolution of the 2D feature map (i.e., the textural representation).

| Depth | Tex Res | FID | Rep-Er | Pose-Er |
|---|---|---|---|---|
| 6 | 64 | 8.0 | 0.051 | 9.7 |
| 4 | 64 | 8.8 | 0.046 | 9.3 |
| 2 | 64 | 8.9 | 0.037 | 8.6 |
|
| 219 |
+
|
| 220 |
+
Independent Control of Structure and Texture. At test time, we can easily swap and combine the structural and textural latent codes individually. In this way, we can investigate whether the two representations are well disentangled. For example, we can combine the structural representation (i.e., the feature volume code) of one instance with the textural representation (i.e., the generative feature field and neural renderer code) of another. The corresponding results are shown in Fig. 5. The face results show that the feature volume code controls the shape of the face and the hairstyle, whereas the feature field and neural renderer code determine the skin and hair color. Notably, glasses are controlled by the volume code, in line with our perception. We can also swap the structure and texture of cars successfully. This demonstrates that our method can disentangle shape and appearance when synthesizing images. Different from GRAF [35] and GIRAFFE [30], we do not explicitly introduce a shape code and an appearance code to control image synthesis. Thanks to the structural and textural representations in our framework, the disentanglement between shape and appearance emerges naturally.
|
| 221 |
+
|
| 222 |
+
Coordinate Descriptor Visualization. To further explore how the feature volume describes the underlying structure, we visualize the corresponding coordinate descriptors queried in the feature volume. Specifically, we accumulate
|
| 223 |
+
|
| 224 |
+

|
| 225 |
+
Figure 5. Synthesized results by exchanging the structural and the textural latent codes.
|
| 226 |
+
|
| 227 |
+

|
| 228 |
+
Figure 6. Visualization of coordinate descriptor. PCA is used to reduce the feature dimension.
|
| 229 |
+
|
| 230 |
+
the coordinate descriptors along each ray, resulting in a high-dimensional feature map. PCA [42] is utilized to reduce the dimension to 3 for visualization. Fig. 6 shows that the feature volume serves as a coarse structure template. The face outline, hair, and background can be recognized easily. Impressively, the eyes show a strong symmetry even with the glasses. Compared to raw coordinates, the feature descriptor provides a structured constraint to guide the image synthesis, so that our method inherently synthesizes images with better visual quality and 3D properties.
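As an illustration of this visualization step, the sketch below projects an accumulated descriptor map to three channels with PCA, here via `torch.pca_lowrank`; the normalization to $[0, 1]$ and the function name are assumptions made for illustration.

```python
import torch

def pca_visualization(desc_map, out_channels=3):
    """Project an accumulated coordinate-descriptor map (H, W, C) to 3 channels with PCA
    so it can be viewed as an RGB image (cf. Fig. 6)."""
    flat = desc_map.reshape(-1, desc_map.shape[-1])            # (H*W, C)
    flat = flat - flat.mean(dim=0, keepdim=True)               # center features
    _, _, v = torch.pca_lowrank(flat, q=out_channels)          # top principal directions
    proj = flat @ v[:, :out_channels]                          # (H*W, 3)
    proj = (proj - proj.min()) / (proj.max() - proj.min() + 1e-8)
    return proj.reshape(*desc_map.shape[:-1], out_channels)

# e.g. pca_visualization(torch.randn(64, 64, 128)).shape -> torch.Size([64, 64, 3])
```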
|
| 231 |
+
|
| 232 |
+
Underlying Geometry. The volume density of the implicit representation can construct an underlying geometry of the object due to its view-independent properties. We extract the underlying geometry with marching cube [23] on the density, resulting in a surface mesh. Fig. 7 shows the meshes with various views and identities. The geometry is consistent across different views, supporting the good 3D properties of our method.
|
| 233 |
+
|
| 234 |
+
# 5. Conclusion and Discussion
|
| 235 |
+
|
| 236 |
+
In this paper, we propose a new 3D-aware generative model, VolumeGAN, for synthesizing high-fidelity images. By learning structural and textural representations, our model achieves substantially higher image quality and better 3D control on various challenging datasets.



Figure 7. 3D mesh extracted from the density.
|
| 242 |
+
|
| 243 |
+
Limitations. Despite the structural representation learned by VolumeGAN, the synthesized 3D mesh surface is still not smooth and lacks fine details. Meanwhile, even though we can improve the synthesis resolution by introducing a deeper CNN (i.e., the neural renderer), doing so may weaken the multi-view consistency and 3D control. Future research will focus on generating fine-grained 3D shapes, as well as equipping the tailing CNN in VolumeGAN with improved 3D properties by introducing regularizers.
|
| 244 |
+
|
| 245 |
+
Ethical Consideration. Due to the high-quality 3D-aware synthesis performance, our approach is potentially applicable for deep fake generation. We strongly oppose the abuse of our method in violating privacy and security. On the contrary, we hope it can be used to improve the existing fake detection systems.
|
| 246 |
+
|
| 247 |
+
Acknowledgement: This work is supported in part by the Early Career Scheme (ECS) through the Research Grants Council (RGC) of Hong Kong under Grant No.24206219, CUHK FoE RSFS Grant, and Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Fund.
|
| 248 |
+
|
| 249 |
+
# References
|
| 250 |
+
|
| 251 |
+
[1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In ICCV, 2019. 2, 6
|
| 252 |
+
[2] Ivan Anokhin, Kirill Demochkin, Taras Khakhulin, Gleb Sterkin, Victor Lempitsky, and Denis Korzhenkov. Image generators with conditionally-independent pixel synthesis. In CVPR, 2021. 5
|
| 253 |
+
[3] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 1, 2, 3, 4, 5, 6
|
| 254 |
+
[4] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016. 2
|
| 255 |
+
[5] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In CVPR, 2020. 2
|
| 256 |
+
[6] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on robot learning, 2017. 4, 5
|
| 257 |
+
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Adv. Neural Inform. Process. Syst., 2014. 1, 2, 3
|
| 258 |
+
[8] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985, 2021. 2
|
| 259 |
+
[9] Jinjin Gu, Yujun Shen, and Bolei Zhou. Image processing using multi-code gan prior. In IEEE Conf. Comput. Vis. Pattern Recog., 2020. 2
|
| 260 |
+
[10] Zhenliang He, Meina Kan, and Shiguang Shan. Eigengan: Layer-wise eigen-learning for gans. ICCV, 2021. 2
|
| 261 |
+
[11] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Adv. Neural Inform. Process. Syst., 2017. 2, 6
|
| 262 |
+
[12] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Int. Conf. Comput. Vis., 2017. 4
|
| 263 |
+
[13] Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In ICCV, 2021. 2
|
| 264 |
+
[14] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In Int. Conf. Learn. Represent., 2018. 2, 5
|
| 265 |
+
[15] Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In NIPS, 2021. 2
|
| 266 |
+
[16] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conf. Comput. Vis. Pattern Recog., 2019. 1, 2, 4, 5, 6
|
| 267 |
+
|
| 268 |
+
[17] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, 2020. 2, 4, 5
|
| 269 |
+
[18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Int. Conf. Learn. Represent., 2015. 5
|
| 270 |
+
[19] Yariv Lior, Kasten Yoni, Moran Dror, Galun Meirav, Atzmon Matan, Basri Ronen, and Lipman Yaron. Multiview neural surface reconstruction by disentangling geometry and appearance. In NeurIPS, 2020. 2
|
| 271 |
+
[20] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. NeurIPS, 2020. 2
|
| 272 |
+
[21] Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. Dist: Rendering deep implicit signed distance function with differentiable sphere tracing. In CVPR, 2020. 2
|
| 273 |
+
[22] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Int. Conf. Comput. Vis., 2015. 4
|
| 274 |
+
[23] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. ACM siggraph computer graphics, 1987. 6, 8
|
| 275 |
+
[24] Andrew L Maas, Awni Y Hannun, Andrew Y Ng, et al. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013. 4
|
| 276 |
+
[25] Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In CVPR, 2021. 2
|
| 277 |
+
[26] Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, and Jingyi Yu. Gnerf: Gan-based neural radiance field without posed camera. In ICCV, 2021. 2
|
| 278 |
+
[27] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In IEEE Conf. Comput. Vis. Pattern Recog., 2019. 2
|
| 279 |
+
[28] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis., 2020. 1, 2, 3, 4
|
| 280 |
+
[29] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In ICCV, 2019. 2, 5, 6
|
| 281 |
+
[30] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 1, 2, 4, 5, 6, 7
|
| 282 |
+
[31] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In CVPR, 2020. 2
|
| 283 |
+
[32] Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, and Bo Dai. A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis. In NIPS, 2021. 2
|
| 284 |
+
|
| 285 |
+
[33] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conf. Comput. Vis. Pattern Recog., 2019. 2
|
| 286 |
+
[34] William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, and Antonio Torralba. The hessian penalty: A weak prior for unsupervised disentanglement. In ECCV, 2020. 2
|
| 287 |
+
[35] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Adv. Neural Inform. Process. Syst., 2020. 1, 2, 3, 4, 5, 6, 7
|
| 288 |
+
[36] Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. Interfacegan: Interpreting the disentangled face representation learned by gans. IEEE Trans. Pattern Anal. Mach. Intell., 2020. 1, 2
|
| 289 |
+
[37] Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in gans. In CVPR, 2021. 2
|
| 290 |
+
[38] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. NIPS, 2020. 5
|
| 291 |
+
[39] Vincent Sitzmann, Michael ZollhΓΆfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In NeurIPS, 2019. 2
|
| 292 |
+
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 3
|
| 293 |
+
[41] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In NeurIPS, 2021. 2
|
| 294 |
+
[42] Svante Wold, Kim Esbensen, and Paul Geladi. Principal component analysis. Chemometrics and intelligent laboratory systems, 1987. 8
|
| 295 |
+
[43] Xudong Xu, Xingang Pan, Dahua Lin, and Bo Dai. Generative occupancy fields for 3d surface-aware image synthesis. In NIPS, 2021. 2
|
| 296 |
+
[44] Yinghao Xu, Yujun Shen, Jiapeng Zhu, Ceyuan Yang, and Bolei Zhou. Generative hierarchical features from synthesizing images. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 1, 2, 6
|
| 297 |
+
[45] Ceyuan Yang, Yujun Shen, and Bolei Zhou. Semantic hierarchy emerges in deep generative representations for scene synthesis. Int. J. Comput. Vis., 2020. 1, 2
|
| 298 |
+
[46] Linjie Yang, Ping Luo, Chen Change Loy, and Xiaoou Tang. A large-scale car dataset for fine-grained categorization and verification. In CVPR, 2015. 4, 5
|
| 299 |
+
[47] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. 2, 4, 5
|
| 300 |
+
[48] Weiwei Zhang, Jian Sun, and Xiaoou Tang. Cat head detection: how to effectively exploit shape and texture features. In ECCV, 2008. 4
|
| 301 |
+
[49] Yijun Zhou and James Gregson. Whenet: Real-time fine-grained estimation for wide range head pose. BMVC, 2020. 6
|
| 302 |
+
|
| 303 |
+
[50] Jiapeng Zhu, Ruili Feng, Yujun Shen, Deli Zhao, Zhengjun Zha, Jingren Zhou, and Qifeng Chen. Low-rank subspaces in gans. NIPS, 2021. 2
|
| 304 |
+
[51] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain gan inversion for real image editing. In Eur. Conf. Comput. Vis., 2020. 2, 6
|
| 305 |
+
[52] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum, and William T. Freeman. Visual object networks: Image generation with disentangled 3D representations. In NIPS, 2018. 2
|
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:99de8045ffbcf55ff59037fc1f81f7602b7074e5f3f6759daedb8ead14089f0b
|
| 3 |
+
size 682883
|
3dawareimagesynthesisvialearningstructuralandtexturalrepresentations/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2d66cd4327fc90e3ce173fa0e9a5f156e85f6eac7e9f9208b9a4617f68955892
|
| 3 |
+
size 374778
|
3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:81b4c949d13187e3b83125d2ce38b32d4f13491d25bea47bd155755a69fdfabf
|
| 3 |
+
size 88887
|
3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b4951b83f8517e3d9fcd76cb20c99cb60158620469e4080fb10735825276735e
|
| 3 |
+
size 115947
|
3dcommoncorruptionsanddataaugmentation/f2e777f3-b2b6-4e77-b91a-b3be86f414ef_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ee249cb4a34b1db8614c9c31808858e839647ac8fb7ee60a8999109be5b9343d
|
| 3 |
+
size 9643960
|
3dcommoncorruptionsanddataaugmentation/full.md
ADDED
|
@@ -0,0 +1,366 @@
| 1 |
+
# 3D Common Corruptions and Data Augmentation
|
| 2 |
+
|
| 3 |
+
Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, Amir Zamir. Swiss Federal Institute of Technology (EPFL)
|
| 4 |
+
|
| 5 |
+
https://3dcommoncorruptions.epfl.ch/
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks. The primary distinction of the proposed transformations is that, unlike existing approaches such as Common Corruptions [27], the geometry of the scene is incorporated in the transformations β thus leading to corruptions that are more likely to occur in the real world. We also introduce a set of semantic corruptions (e.g. natural object occlusions. See Fig. 1).
|
| 10 |
+
|
| 11 |
+
We show these transformations are 'efficient' (can be computed on-the-fly), 'extendable' (can be applied on most image datasets), expose vulnerability of existing models, and can effectively make models more robust when employed as '3D data augmentation' mechanisms. The evaluations on several tasks and datasets suggest incorporating 3D information into benchmarking and training opens up a promising direction for robustness research.
|
| 12 |
+
|
| 13 |
+
# 1. Introduction
|
| 14 |
+
|
| 15 |
+
Computer vision models deployed in the real world will encounter naturally occurring distribution shifts from their training data. These shifts range from lower-level distortions, such as motion blur and illumination changes, to semantic ones, like object occlusion. Each of them represents a possible failure mode of a model and has been frequently shown to result in profoundly unreliable predictions [15, 23, 27, 31, 66]. Thus, a systematic testing of vulnerabilities to these shifts is critical before deploying these models in the real world.
|
| 16 |
+
|
| 17 |
+
This work presents a set of distribution shifts in order to test models' robustness. In contrast to previously proposed shifts which perform uniform 2D modifications over the image, such as Common Corruptions (2DCC) [27], our shifts incorporate 3D information to generate corruptions that are consistent with the scene geometry. This leads to shifts that are more likely to occur in the real world (See Fig. 1). The resulting set includes 20 corruptions, each representing a
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1. Using 3D information to generate real-world corruptions. The top row shows sample 2D corruptions applied uniformly over the image, e.g. as in Common Corruptions [27], disregarding 3D information. This leads to corruptions that are unlikely to happen in the real world, e.g. having the same motion blur over the entire image irrespective of the distance to camera (top left). Middle row shows their 3D counterparts from 3D Common Corruptions (3DCC). The circled regions highlight the effect of incorporating 3D information. More specifically, in 3DCC, 1. motion blur has a motion parallax effect where objects further away from the camera seem to move less, 2. defocus blur has a depth of field effect, akin to a large aperture effect in real cameras, where certain regions of the image can be selected to be in focus, 3. lighting takes the scene geometry into account when illuminating the scene and casts shadows on objects, 4. fog gets denser further away from the camera, 5. occlusions of a target object, e.g. fridge (blue mask), are created by changing the camera's viewpoint and having its view naturally obscured by another object, e.g. the plant (red mask). This is in contrast to its 2D counterpart that randomly discards patches [13]. See the project page for a video version of the figure.
|
| 21 |
+
|
| 22 |
+
distribution shift from training data, which we denote as 3D Common Corruptions (3DCC). 3DCC addresses several aspects of the real world, such as camera motion, weather, occlusions, depth of field, and lighting. Figure 2 provides an overview of all corruptions. As shown in Fig. 1, the corruptions in 3DCC are more diverse and realistic compared to 2D-only approaches.
|
| 23 |
+
|
| 24 |
+
We show in Sec. 5 that the performance of methods aiming to improve robustness, including those with diverse data augmentation, degrades drastically under 3DCC. Furthermore, we observe that the robustness issues exposed by 3DCC correlate well with corruptions generated via photorealistic synthesis. Thus, 3DCC can serve as a challenging
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
Clean Input
|
| 28 |
+
Fog 3D
|
| 29 |
+
|
| 30 |
+

|
| 31 |
+
Far Focus
|
| 32 |
+
Shadow
|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
Near Focus
|
| 36 |
+
Multi-illumination
|
| 37 |
+
|
| 38 |
+

|
| 39 |
+
Occlusion
|
| 40 |
+
Flash
|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
Scale
|
| 44 |
+
XY-Motion Blur
|
| 45 |
+
|
| 46 |
+

|
| 47 |
+
Bit Error
|
| 48 |
+
CRF Compress.
|
| 49 |
+
|
| 50 |
+

|
| 51 |
+
Color quantization
|
| 52 |
+
ISO Noise
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Camera Pitch
|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
Camera Roll
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
Field of View
|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
View Jitter
|
| 65 |
+
|
| 66 |
+

|
| 67 |
+
Z-Motion Blur
|
| 68 |
+
|
| 69 |
+

|
| 70 |
+
ABR Compress.
|
| 71 |
+
|
| 72 |
+

|
| 73 |
+
Low-light Noise
|
| 74 |
+
|
| 75 |
+

|
| 76 |
+
Figure 2. The new corruptions. We propose a diverse set of new corruption operations ranging from defocusing (near/far focus) to lighting changes and 3D-semantic ones, e.g. object occlusion. These corruptions are all automatically generated, efficient to compute, and can be applied to most datasets (Sec. 3.3). We show that they expose vulnerabilities in models (Sec. 5.2.1) and are a good approximation of realistic corruptions (Sec. 5.2.3). A subset of the corruptions marked in the last column are novel and commonly faced in the real world, but are not 3D based. We include them in our benchmark. For occlusion and scale corruptions, the blue and red masks denote the amodal visible and occluded parts of an object, e.g. the fridge.
|
| 77 |
+
|
| 78 |
+

|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
|
| 84 |
+

|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
|
| 88 |
+

|
| 89 |
+
|
| 90 |
+
testbed for real-world corruptions, especially those that depend on scene geometry.
|
| 91 |
+
|
| 92 |
+
Motivated by this, our framework also introduces new 3D data augmentations. They take the scene geometry into account, as opposed to 2D augmentations, thus enabling models to build invariances against more realistic corruptions. We show in Sec. 5.3 that they significantly boost model robustness against such corruptions, including the ones that cannot be addressed by the 2D augmentations.
|
| 93 |
+
|
| 94 |
+
The proposed corruptions are generated programmatically with exposed parameters, enabling fine-grained analysis of robustness, e.g. by continuously increasing the 3D motion blur. They are efficient to compute and can be computed on-the-fly during training as data augmentation with a small increase in computational cost. They are also extendable, i.e. they can be applied to standard vision datasets, e.g. ImageNet [12], that do not come with 3D labels.
|
| 95 |
+
|
| 96 |
+
# 2. Related Work
|
| 97 |
+
|
| 98 |
+
This work presents a data-focused approach [51, 62] to robustness. We give an overview of some of the related topics within the constraints of space.
|
| 99 |
+
|
| 100 |
+
Robustness benchmarks based on corruptions: Several studies have proposed robustness benchmarks to understand the vulnerability of models to corruptions. A popular benchmark, Common Corruptions (2DCC) [27], generates synthetic corruptions on real images that expose sensitivities of image recognition models. It led to a series of works either creating new corruptions or applying similar corruptions on other datasets for different
|
| 101 |
+
|
| 102 |
+
tasks [7,32,42,44,65,78]. In contrast to these works, 3DCC modifies real images using 3D information to generate realistic corruptions. The resulting images are both perceptually different and expose different failure modes in model predictions compared to their 2D counterparts (See Fig. 1 and 8). Other works create and capture the corruptions in the real world, e.g. ObjectNet [3]. Although being realistic, it requires significant manual effort and is not extendable. A more scalable approach is to use computer graphics based 3D simulators to generate corrupted data [37] which can lead to generalization concerns. 3DCC aims to generate corruptions as close to the real world as possible while staying scalable.
|
| 103 |
+
|
| 104 |
+
Robustness analysis works use existing benchmarks to probe the robustness of different methods, e.g. data augmentation or self-supervised training, under several distribution shifts. Recent works investigated the relation between synthetic and natural distribution shifts [14,26,43,67] and the effectiveness of architectural advancements [5,47,63]. We select several popular methods to show that 3DCC can serve as a challenging benchmark (Fig. 6 and 7).
|
| 105 |
+
|
| 106 |
+
Improving robustness: Numerous methods have been proposed to improve model robustness such as data augmentation with corrupted data [22, 39, 40, 59], texture changes [24, 26], image compositions [80, 82] and transformations [29, 79]. While these methods can generalize to some unseen examples, performance gains are nonuniform [22, 60]. Other methods include self-training [74], pre-training [28, 49], architectural changes [5, 63], and diverse ensembling [33, 50, 76, 77]. Here we instead adopt a data-focused approach to robustness by i. providing a
|
| 107 |
+
|
| 108 |
+

|
| 109 |
+
Figure 3. Left: We show the inputs needed to create each of our corruptions, e.g. the 3D information such as depth, and RGB image. These corruptions have also been grouped (in solid colored lines) according to their corruption types. For example, to create the distortions in the dashed box in the right, one only needs the RGB image and its corresponding depth. For the ones in the left dashed box, 3D mesh is required. Note that one can create view changes corruptions also from panoramic images if available, without a mesh. Right: As an example, we show an overview of generating depth of field effect efficiently. The scene is first split into multiple layers by discretizing scene depth. Next, a region is chosen to be kept in focus (here it is the region closest to the camera). We then compute the corresponding blur levels for each layer according to their distance from the focus region, using a pinhole camera model. The final refocused image is obtained by compositing blurred image layers.
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
|
| 113 |
+
large set of realistic distribution shifts and ii. introducing new 3D data augmentation that improves robustness against real-world corruptions (Sec. 5.3).
|
| 114 |
+
|
| 115 |
+
Photorealistic image synthesis involves techniques to generate realistic images. Some of these techniques have been recently used to create corruption data. These techniques are generally specific to a single real-world corruption. Examples include adverse weather conditions [19, 30, 61, 68, 69], motion blur [6, 48], depth of field [4, 17, 52, 70, 71], lighting [25, 75], and noise [21, 73]. They may be used for purely artistic purposes or to create training data. Some of our 3D transformations are instantiations of these methods, with the downstream goal of testing and improving model robustness in a unified framework with a wide set of corruptions.
|
| 116 |
+
|
| 117 |
+
Image restoration aims to undo the corruption in the image using classical signal processing techniques [18, 20, 34, 41] or learning-based approaches [1, 8, 45, 46, 56, 83, 84]. We differ from these works by generating corrupted data, rather than removing it, to use them for benchmarking or data augmentation. Thus, in the latter, we train with these corrupted data to encourage the model to be invariant to corruptions, as opposed to training the model to remove the corruptions as a pre-processing step.
|
| 118 |
+
|
| 119 |
+
Adversarial corruptions add imperceptible worst-case shifts to the input to fool a model [11,35,40,66]. Most of the failure cases of models in the real world are not the result of adversarial corruptions but rather naturally occurring distribution shifts. Thus, our focus in this paper is to generate corruptions that are likely to occur in the real world.
|
| 120 |
+
|
| 121 |
+
# 3. Generating 3D Common Corruptions
|
| 122 |
+
|
| 123 |
+
# 3.1. Corruption Types
|
| 124 |
+
|
| 125 |
+
We define different corruption types, namely depth of field, camera motion, lighting, video, weather, view changes, semantics, and noise, resulting in 20 corruptions
|
| 126 |
+
|
| 127 |
+
in 3DCC. Most of the corruptions require an RGB image and scene depth, while some need a 3D mesh (see Fig. 3). We use a set of methods leveraging 3D synthesis techniques or image formation models to generate the different corruption types, as explained in more detail below. Further details are provided in the supplementary.
|
| 128 |
+
|
| 129 |
+
Depth of field corruptions create refocused images. They keep a part of the image in focus while blurring the rest. We consider a layered approach [4, 17] that splits the scene into multiple layers. For each layer, the corresponding blur level is computed using the pinhole camera model. The blurred layers are then composited with alpha blending. Figure 3 (right) shows an overview of the process. We generate near focus and far focus corruptions by randomly changing the focus region to the near or far part of the scene.
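As a concrete illustration of this layered approach, below is a minimal NumPy/SciPy sketch (not the released pipeline): it approximates the per-layer circle of confusion with a Gaussian blur and composites the layers with a simple normalized blend. `refocus`, `rgb` (HxWx3 floats in [0, 1]) and `depth` (HxW, metric) are illustrative names and assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(rgb, depth, focus_depth, n_layers=8, max_sigma=6.0):
    """Split the scene into depth layers, blur each according to its distance
    from the focus plane, and composite the blurred layers."""
    edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
    out = np.zeros_like(rgb, dtype=np.float64)
    acc = np.zeros(depth.shape, dtype=np.float64)
    depth_range = float(depth.max() - depth.min()) + 1e-6
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = ((depth >= lo) & (depth < hi)).astype(np.float64)
        if mask.sum() == 0:
            continue
        layer_depth = 0.5 * (lo + hi)
        # Blur strength grows with distance from the in-focus plane,
        # a crude stand-in for the thin-lens circle of confusion.
        sigma = max_sigma * abs(layer_depth - focus_depth) / depth_range
        blurred = np.stack(
            [gaussian_filter(rgb[..., c] * mask, sigma) for c in range(3)], axis=-1
        )
        out += blurred
        acc += gaussian_filter(mask, sigma)
    return np.clip(out / np.clip(acc[..., None], 1e-6, None), 0.0, 1.0)

# Example: "near focus" keeps the region closest to the camera sharp.
# refocused = refocus(rgb, depth, focus_depth=np.percentile(depth, 10))
```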
|
| 130 |
+
|
| 131 |
+
Camera motion creates blurry images due to camera movement during exposure. To generate this effect, we first transform the input image into a point cloud using the depth information. Then, we define a trajectory (camera motion) and render novel views along this trajectory. As the point cloud was generated from a single RGB image, it has incomplete information about the scene when the camera moves. Thus, the rendered views will have disocclusion artifacts. To alleviate this, we apply an inpainting method from [48]. The generated views are then combined to obtain parallax-consistent motion blur. We define XY-motion blur and Z-motion blur when the main camera motion is along the image XY-plane or Z-axis, respectively.
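The sketch below illustrates this idea under strong simplifications: it forward-warps the image with a pinhole model to a few virtual camera positions along a small translation and averages the renders, omitting the inpainting step of [48], so disocclusions remain as holes. `K` is an assumed 3x3 intrinsics matrix; names are illustrative.

```python
import numpy as np

def warp_with_depth(rgb, depth, K, t):
    """Forward-warp an HxWx3 image to a camera translated by t (3,), via nearest splatting."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    X = (u - cx) * depth / fx                      # unproject to camera coordinates
    Y = (v - cy) * depth / fy
    Z = depth
    Xn, Yn, Zn = X - t[0], Y - t[1], Z - t[2]      # move the camera by t
    un = np.clip((fx * Xn / np.maximum(Zn, 1e-6) + cx).astype(int), 0, W - 1)
    vn = np.clip((fy * Yn / np.maximum(Zn, 1e-6) + cy).astype(int), 0, H - 1)
    out = np.zeros_like(rgb)
    out[vn, un] = rgb[v, u]                        # disocclusions stay empty (no inpainting here)
    return out

def xy_motion_blur(rgb, depth, K, extent=0.05, n_frames=9):
    """Average renders along a small in-plane camera translation (motion parallax)."""
    shifts = np.linspace(-extent, extent, n_frames)
    frames = [warp_with_depth(rgb, depth, K, np.array([s, 0.0, 0.0])) for s in shifts]
    return np.mean(frames, axis=0)
```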
|
| 132 |
+
|
| 133 |
+
Lighting corruptions change scene illumination by adding new light sources and modifying the original illumination. We use Blender [10] to place these new light sources and compute the corresponding illumination for a given viewpoint in the 3D mesh. For the flash corruption, a light source is placed at the camera's location, while for shadow corruption, it is placed at random diverse locations outside the camera frustum. Likewise, for multi-illumination corruption, we compute the illumination from a set of random light
|
| 134 |
+
|
| 135 |
+
sources with different locations and luminosities.
|
| 136 |
+
|
| 137 |
+
Video corruptions arise during the processing and streaming of videos. Using the scene 3D, we create a video using multiple frames from a single image by defining a trajectory, similar to motion blur. Inspired by [78], we generate average bit rate (ABR) and constant rate factor (CRF) as H.265 codec compression artifacts, and bit error to capture corruptions induced by imperfect video transmission channel. After applying the corruptions over the video, we pick a single frame as the final corrupted image.
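A hedged sketch of how such compression artifacts could be produced with a stock ffmpeg binary (assuming a libx265-enabled build; file paths and parameter values are placeholders, not the benchmark's settings):

```python
import subprocess

def compress_crf(frame_pattern="frames/%03d.png", out_path="crf.mp4", crf=35):
    """Constant rate factor encoding; higher CRF gives stronger compression artifacts."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", frame_pattern, "-c:v", "libx265", "-crf", str(crf), out_path],
        check=True,
    )

def compress_abr(frame_pattern="frames/%03d.png", out_path="abr.mp4", bitrate="50k"):
    """Average bit rate encoding; lower bit rates give stronger artifacts."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", frame_pattern, "-c:v", "libx265", "-b:v", bitrate, out_path],
        check=True,
    )

# The corrupted image is then a single decoded frame, e.g.:
# subprocess.run(["ffmpeg", "-y", "-i", "crf.mp4", "-vframes", "1", "frame.png"], check=True)
```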
|
| 138 |
+
|
| 139 |
+
Weather corruptions degrade visibility by obscuring parts of the scene due to disturbances in the medium. We define a single corruption and denote it as fog $3D$ to differentiate it from the fog corruption in 2DCC. We use the standard optical model for fog [19,61,69]:
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathbf{I}(\mathbf{x}) = \mathbf{R}(\mathbf{x})\,\mathbf{t}(\mathbf{x}) + \mathbf{A}\,(1 - \mathbf{t}(\mathbf{x})), \tag{1}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where $\mathbf{I}(\mathbf{x})$ is the resulting foggy image at pixel $x$ , $\mathbf{R}(\mathbf{x})$ is the clean image, $\mathbf{A}$ is atmospheric light, and $\mathbf{t}(\mathbf{x})$ is the transmission function describing the amount of light that reaches the camera. When the medium is homogeneous, the transmission depends on the distance from the camera, $\mathbf{t}(\mathbf{x}) = \exp(-\beta \mathbf{d}(\mathbf{x}))$ where $\mathbf{d}(\mathbf{x})$ is the scene depth and $\beta$ is the attenuation coefficient controlling the fog thickness.
|
| 146 |
+
|
| 147 |
+
View changes are due to variations in the camera extrinsics and focal length. Our framework enables rendering RGB images conditioned on several changes, such as field of view, camera roll, and camera pitch, using Blender. This enables us to analyze the sensitivity of models to various view changes in a controlled manner. We also generate images with view jitter that can be used to analyze whether models' predictions flicker under slight changes in viewpoint.
|
| 148 |
+
|
| 149 |
+
Semantics: In addition to view changes, we also render images by selecting an object in the scene and changing its occlusion level and scale. In occlusion corruption, we generate views of an object occluded by other objects. This is in contrast to random 2D masking of pixels to create an unnatural occlusion effect that is irrespective of image content, e.g. as in [13, 47] (See Fig. 1). Occlusion rate can be controlled to probe model robustness against occlusion changes. Similarly, in scale corruption, we render views of an object with varying distances from the camera location. Note that the corruptions require a mesh with semantic annotations, and are generated automatically, similar to [2]. This is in contrast to [3] which requires tedious manual effort. The objects can be selected by randomly picking a point in the scene or using the semantic annotations.
|
| 150 |
+
|
| 151 |
+
Noise corruptions arise from imperfect camera sensors. We introduce new noise corruptions that do not exist in the previous 2DCC benchmark. For low-light noise, we decrease the pixel intensities and add Poisson-Gaussian distributed noise to reflect the low-light imaging setting [21]. ISO noise also follows a Poisson-Gaussian distribution, with fixed photon noise (modeled by a Poisson) and varying electronic noise (modeled by a Gaussian). We also include
|
| 152 |
+
|
| 153 |
+

|
| 154 |
+
Figure 4. Visualizations of 3DCC with increasing shift intensities. Top: Increasing the shift intensity results in larger blur, less illumination, and denser fog. Bottom: The object becomes more occluded or shrinks in size using calculated viewpoint changes. The blue mask denotes the amodal visible parts of the fridge/couch, and the red mask is the occluded part. The leftmost column shows the clean images. Visuals for all corruptions at all shift intensities are shown in the supplementary.
|
| 155 |
+
|
| 156 |
+
color quantization as another corruption that reduces the bit depth of the RGB image. Only this subset of our corruptions is not based on 3D information.
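A hedged sketch of the Poisson-Gaussian sensor model behind the ISO and low-light corruptions; the constants are illustrative, not the benchmark's calibrated parameters:

```python
import numpy as np

def poisson_gaussian_noise(rgb, photons=60.0, read_std=0.03, low_light_gain=1.0, rng=None):
    """rgb in [0, 1]; `photons` controls the signal-dependent (Poisson) part and
    `read_std` the signal-independent (Gaussian) part. A low_light_gain < 1 first
    darkens the image to mimic low-light capture."""
    rng = np.random.default_rng() if rng is None else rng
    dark = rgb * low_light_gain
    shot = rng.poisson(dark * photons) / photons     # shot noise scales with the signal
    read = rng.normal(0.0, read_std, size=rgb.shape) # electronic read noise
    return np.clip(shot + read, 0.0, 1.0)

# ISO noise:       poisson_gaussian_noise(rgb, photons=60, read_std=0.03)
# Low-light noise: poisson_gaussian_noise(rgb, photons=60, read_std=0.03, low_light_gain=0.3)
```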
|
| 157 |
+
|
| 158 |
+
# 3.2. Starter 3D Common Corruptions Dataset
|
| 159 |
+
|
| 160 |
+
We release the full open source code of our pipeline, which enables using the implemented corruptions on any dataset. As a starter dataset, we applied the corruptions to 16k Taskonomy [81] test images. For all the corruptions except the ones in view changes and semantics, which change the scene, we follow the protocol in 2DCC and define 5 shift intensities, resulting in approximately 1 million corrupted images $(16\mathrm{k}\times 14\times 5)$. Directly applying the methods to generate corruptions results in uncalibrated shift intensities with respect to 2DCC. Thus, to enable an aligned comparison with 2DCC on a more uniform intensity change, we perform a calibration step. For the corruptions with a direct counterpart in 2DCC, e.g. motion blur, we set the corruption level in 3DCC such that for each shift intensity in 2DCC, the average SSIM [72] value over all images is the same in both benchmarks. For the corruptions that do not have a counterpart in 2DCC, we adjust the distortion parameters to increase shift intensity while staying in a similar SSIM range as the others. For view changes and semantics, we render 32k images with smoothly changing parameters, e.g. roll angle, using the Replica [64] dataset. Figure 4 shows example corruptions with different shift intensities.
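A sketch of this calibration step, assuming access to a set of clean images, a parameterized corruption function, and the average SSIM of the corresponding 2DCC intensity level; it uses scikit-image's `structural_similarity` (recent versions expose the `channel_axis` argument), and the names are illustrative:

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(clean_imgs, corrupted_imgs):
    """Average SSIM over paired clean/corrupted float images in [0, 1]."""
    return float(np.mean([
        structural_similarity(c, x, channel_axis=-1, data_range=1.0)
        for c, x in zip(clean_imgs, corrupted_imgs)
    ]))

def calibrate_parameter(clean_imgs, corrupt_fn, target_ssim, grid):
    """Pick the corruption parameter whose mean SSIM best matches the 2DCC level."""
    scored = [
        (p, mean_ssim(clean_imgs, [corrupt_fn(img, p) for img in clean_imgs]))
        for p in grid
    ]
    return min(scored, key=lambda ps: abs(ps[1] - target_ssim))[0]

# e.g. sweep a fog beta grid and keep the value whose mean SSIM matches the 2DCC fog level.
```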
|
| 161 |
+
|
| 162 |
+
# 3.3. Applying 3DCC to standard vision datasets
|
| 163 |
+
|
| 164 |
+
While we employed datasets with full scene geometry information such as Taskonomy [81], 3DCC can also be applied to standard datasets without 3D information. We exemplify this on ImageNet [12] and COCO [38] validation
|
| 165 |
+
|
| 166 |
+
sets by leveraging depth predictions from the MiDaS [54] model, a state-of-the-art depth estimator. Figure 5 shows example images with near focus, far focus, and fog $3D$ corruptions. Generated images are physically plausible, demonstrating that 3DCC can be used for other datasets by the community to generate a diverse set of image corruptions. In Sec. 5.2.4, we quantitatively demonstrate the effectiveness of using predicted depth to generate 3DCC.
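A hedged sketch of this workflow, using the public MiDaS torch.hub entry point (model and transform names follow the MiDaS README); `add_fog_3d` refers to the earlier fog sketch, and the inverse-depth-to-depth conversion is only a rough proxy:

```python
import cv2
import torch

# Model and transform names follow the public MiDaS torch.hub entry point.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.dpt_transform
midas.eval()

def corrupt_with_predicted_depth(image_path, corruption_fn):
    """Predict (relative) depth with MiDaS and feed it to a 3DCC-style corruption."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = midas(transform(img))                  # inverse relative depth
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    depth = 1.0 / (prediction.cpu().numpy() + 1e-6)         # rough depth proxy
    return corruption_fn(img / 255.0, depth)

# e.g. foggy = corrupt_with_predicted_depth("example.jpg", add_fog_3d)
```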
|
| 167 |
+
|
| 168 |
+
# 4. 3D Data Augmentation
|
| 169 |
+
|
| 170 |
+
While benchmarking uses corrupted images as test data, one can also use them as augmentations of training data to build invariances towards these corruptions. This is the case for us since, unlike 2DCC, 3DCC is designed to capture corruptions that are more likely to appear in the real world, hence it has a sensible augmentation value as well. Thus, in addition to benchmarking robustness using 3DCC, our framework can also be viewed as new data augmentation strategies that take the 3D scene geometry into account. We augment with the following corruption types in our experiments: depth of field, camera motion, and lighting. The augmentations can be efficiently generated on-the-fly during training using parallel implementations. For example, the depth of field augmentations take 0.87 seconds (wall clock time) on a single V100 GPU for a batch size of 128 images with $224 \times 224$ resolution. For comparison, applying 2D defocus blur requires 0.54 seconds, on average. It is also possible to precompute certain selected parts of the augmentation process, e.g. the illuminations for lighting augmentations, to increase efficiency. We incorporated these mechanisms in our implementation. We show in Sec. 5.3 that these augmentations can significantly improve robustness against real-world distortions.
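As an illustration (not the released training code), a geometry-aware augmentation can be wrapped around a dataset that already provides depth, so a randomly chosen 3D corruption is applied on the fly; each augmentation is assumed to be a callable taking `(rgb, depth)`:

```python
import random
import torch
from torch.utils.data import Dataset

class Augment3D(Dataset):
    """Wraps a dataset returning (rgb, depth, label) and applies a random 3D augmentation."""

    def __init__(self, base_dataset, augmentations, p=0.5):
        self.base, self.augs, self.p = base_dataset, augmentations, p

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        rgb, depth, label = self.base[idx]
        if random.random() < self.p:
            aug = random.choice(self.augs)   # e.g. a refocus, fog, or motion blur callable
            rgb = aug(rgb, depth)
        return torch.as_tensor(rgb, dtype=torch.float32), label
```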
|
| 171 |
+
|
| 172 |
+
# 5. Experiments
|
| 173 |
+
|
| 174 |
+
We perform evaluations to demonstrate that 3DCC can expose vulnerabilities in models (Sec. 5.2.1) that are not captured by 2DCC (Sec. 5.2.2). The generated corruptions are similar to expensive realistic synthetic ones (Sec. 5.2.3) and are applicable to datasets without 3D information (Sec. 5.2.4). Finally, the proposed 3D data augmentation improves robustness qualitatively and quantitatively (Sec. 5.3). Please see the project page for more extensive qualitative results.
|
| 175 |
+
|
| 176 |
+
# 5.1. Preliminaries
|
| 177 |
+
|
| 178 |
+
Evaluation Tasks: 3DCC can be applied to any dataset, irrespective of the target task, e.g. dense regression or low-dimensional classification. Here we mainly experiment with surface normals and depth estimation as target tasks widely employed by the community. We note that the robustness of models solving such tasks is underexplored compared to classification tasks (See supplementary for results on classification). To evaluate robustness, we compute
|
| 179 |
+
|
| 180 |
+

|
| 181 |
+
Figure 5. 3DCC can be applied to most datasets, even those that do not come with 3D information. Several query images from the ImageNet [12] and COCO [38] dataset are shown with near focus, far focus and fog 3D corruptions applied. Notice how the objects in the circled regions go from sharp to blurry depending on the focus region and scene geometry. To get the depth information needed to create these corruptions, predictions from MiDaS [54] model is used. This gives a good enough approximation to generate realistic corruptions (as we will quantify in Sec. 5.2.4).
|
| 182 |
+
|
| 183 |
+
the $\ell_1$ error between predicted and ground truth images.
|
| 184 |
+
|
| 185 |
+
Training Details: We train UNet [58] and DPT [53] models on Taskonomy [81] using learning rate $5 \times 10^{-4}$ and weight decay $2 \times 10^{-6}$ . We optimize the likelihood loss with Laplacian prior using AMSGrad [55], following [77]. Unless specified, all the models use the same UNet backbone (e.g. Fig. 6). We also experiment with DPT models trained on Omnidata [17] that mixes a diverse set of training datasets. Following [17], we train with learning rate $1 \times 10^{-5}$ , weight decay $2 \times 10^{-6}$ with angular & $\ell_1$ losses.
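For concreteness, a sketch of the Laplacian likelihood loss and optimizer setup implied above; the exact parameterization in the released code may differ, and AMSGrad is enabled here via `torch.optim.Adam`:

```python
import torch

def laplacian_nll(mu, log_b, target):
    """Negative log-likelihood of a Laplace(mu, b) prediction, dropping the constant log 2.
    mu and log_b are per-pixel network outputs; log_b is predicted for numerical stability."""
    b = torch.exp(log_b)
    return ((target - mu).abs() / b + log_b).mean()

# optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=2e-6, amsgrad=True)
```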
|
| 186 |
+
|
| 187 |
+
Robustness mechanisms evaluated: We evaluate several popular data augmentation strategies: DeepAugment [26], style augmentation [24], and adversarial training [35]. We also include Cross-Domain Ensembles (X-DE) [77] that has been recently shown to improve robustness to corruptions by creating diverse ensemble components via input transformations. We refer to the supplementary for training details. Finally, we train a model with augmentation with corruptions from 2DCC [27] (2DCC augmentation), and another model with 3D data augmentation on top of that (2DCC + 3D augmentation).
|
| 188 |
+
|
| 189 |
+
# 5.2. 3D Common Corruptions Benchmark
|
| 190 |
+
|
| 191 |
+
# 5.2.1 3DCC can expose vulnerabilities
|
| 192 |
+
|
| 193 |
+
We benchmark existing models against 3DCC to understand their vulnerabilities. However, we note that our main contribution is not the performed analyses but the benchmark itself. State-of-the-art models may change over time, and 3DCC aims to identify robustness trends, similar to other benchmarks.
|
| 194 |
+
|
| 195 |
+
Effect of robustness mechanisms: Figure 6 shows the average performance of different robustness mechanisms on 3DCC for surface normals and depth estimation tasks. These mechanisms improved the performance over the baseline but are still far from the performance on clean data.
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
Figure 6. Existing robustness mechanisms are found to be insufficient for addressing real-world corruptions approximated by 3DCC. Performance of models with different robustness mechanisms under 3DCC for the surface normals (left) and depth (right) estimation tasks is shown. All models here are UNets and are trained with Taskonomy data. Each bar shows the $\ell_1$ error averaged over all 3DCC corruptions (lower is better). The black error bars show the error at the lowest and highest shift intensity. The red line denotes the performance of the baseline model on clean (uncorrupted) data. This shows that existing robustness mechanisms, including those with diverse augmentations, perform poorly under 3DCC.
|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
|
| 202 |
+
This suggests that 3DCC exposes robustness issues and can serve as a challenging testbed for models. The 2DCC augmentation model returns slightly lower $\ell_1$ error, indicating that diverse 2D data augmentation only partially helps against 3D corruptions.
|
| 203 |
+
|
| 204 |
+
Effect of dataset and architecture: We provide a detailed breakdown of performance against 3DCC in Fig. 7. We first observe that baseline UNet and DPT models trained on Taskonomy have similar performance, especially on the view change corruptions. By training with larger and more diverse data with Omnidata, the DPT performance improves. Similar observations were made on vision transformers for classification [5, 16]. This improvement is notable with view change corruptions, while for the other corruptions, there is a decrease in error from 0.069 to 0.061. This suggests that combining architectural advancements with diverse and large training data can play an important role in robustness against 3DCC. Furthermore, when combined with 3D augmentations, they improve robustness to real-world corruptions (Sec. 5.3).
|
| 205 |
+
|
| 206 |
+
# 5.2.2 Redundancy of corruptions in 3DCC and 2DCC
|
| 207 |
+
|
| 208 |
+
In Fig. 1, a qualitative comparison was made between 3DCC and 2DCC. The former generates more realistic corruptions, while the latter does not take the scene 3D into account and applies uniform modifications over the image. In Fig. 8, we aim to quantify the similarity between 3DCC and 2DCC. On the left of Fig. 8, we compute the correlations of $\ell_1$ errors between clean and corrupted predictions made by the baseline model for a subset of corruptions (the full set is in the supplementary). 3DCC shows lower correlations both intra-benchmark and against 2DCC (mean correlations are 0.32 for 2DCC-2DCC, 0.28 for 3DCC-3DCC, and 0.30 for 2DCC-3DCC). Similar conclusions are obtained for depth estimation (in the supplementary). On the right, we provide the same analysis in the RGB domain by computing the $\ell_1$ error between clean and corrupted images, again suggesting that 3DCC yields lower correlations.
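The redundancy analysis itself reduces to correlations over per-image errors; a minimal sketch, assuming aligned 1-D arrays of $\ell_1$ errors per corruption:

```python
import numpy as np

def error_correlation(errs_a, errs_b):
    """Pearson correlation between two per-image error vectors."""
    return np.corrcoef(errs_a, errs_b)[0, 1]

# A full affinity matrix stacks one error vector per corruption:
# affinity = np.corrcoef(np.stack([errors_by_corruption[c] for c in corruption_names]))
```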
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
Figure 7. Detailed breakdown of performance on 3DCC. The benchmark can expose trends and models' sensitivity to a wide range of corruptions. We show this by training models on either Taskonomy [81] or Omnidata [17] and with either a UNet [58] or DPT [53] architecture. The average $\ell_1$ error over all shift intensities for each corruption is shown (lower is better). Top: We observe that Taskonomy models are more susceptible to changes in field of view, camera roll, and pitch compared to Omnidata trained model, which is consistent with their methods. Bottom: The numbers in the legend are the average performance over all the corruptions. We can see that all the models are sensitive to 3D corruptions, e.g. $z$ -motion blur and shadow. Overall, training with large diverse data, e.g. Omnidata, and using DPT is observed to notably improve performance.
|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
Figure 8. Redundancy among corruptions. We quantified the pairwise similarity of a subset of corruptions from 2DCC and 3DCC by computing their correlations in the $\ell_1$ errors of the surface normals predictions (left) and RGB images (right). 3DCC shows lower correlations both intra-benchmark and against 2DCC. Thus, 3DCC has a diverse set of corruptions, and these corruptions do not have a significant overlap with 2DCC. Using depth as the target task yields similar conclusions (full affinity matrices are provided in the supplementary).
|
| 217 |
+
|
| 218 |
+
# 5.2.3 Soundness: 3DCC vs Expensive Synthesis
|
| 219 |
+
|
| 220 |
+
3DCC aims to expose a model's vulnerabilities to certain real-world corruptions. This requires the corruptions generated by 3DCC to be similar to real corrupted data. As generating such labeled data is expensive and scarcely available,
|
| 221 |
+
|
| 222 |
+

|
| 223 |
+
Figure 9. Qualitative results of learning with 3D data augmentation on random queries from OASIS [9], AE (Sec. 5.2.3), manually collected DSLR data, and in-the-wild YouTube videos for surface normals. The ground truth is gray when it is not available, e.g. for YouTube. The predictions in the last row (ours) are from the $\mathrm{O + DPT + 2DCC + 3D}$ model. They are noticeably sharper and more accurate. See the project page and supplementary for more results.
|
| 224 |
+
|
| 225 |
+
as a proxy evaluation, we instead compare the realism of 3DCC to synthesis made by Adobe After Effects (AE), a commercial product for generating high-quality photorealistic data that often relies on expensive and manual processes. To achieve this, we use the Hypersim [57] dataset that comes with high-resolution z-depth labels. We then generated 200 images that are near- and far-focused using 3DCC and AE. Figure 10 shows sample generated images from both approaches that are perceptually similar. Next, we computed the prediction errors of a baseline surface normals model when the input is from 3DCC or AE. The scatter plots of $\ell_1$ errors are given in Fig. 11 and demonstrate a strong correlation, 0.80, between the two approaches. For calibration and control, we also provide the scatter plots for some corruptions from 2DCC to show the significance of the correlations. They have significantly lower correlations with AE, indicating that the depth of field effect created via 3DCC matches AE-generated data reasonably well.
|
| 226 |
+
|
| 227 |
+
# 5.2.4 Effectiveness of applying 3DCC to other datasets
|
| 228 |
+
|
| 229 |
+
We showed qualitatively in Fig. 5 that 3DCC can be applied to standard vision datasets like ImageNet [12] and COCO [38] by leveraging predicted depth from a state-of-the-art model from MiDaS [54]. Here, we quantitatively show the impact of using predicted depth instead of ground truth. For this, we use the Replica [64] dataset that comes with ground truth depth labels. We then generated 1280 corrupted images using ground truth depth and predicted depth from MiDaS [54] without fine-tuning on Replica. Figure 12 shows the trends on three corruptions from 3DCC generated using ground truth and predicted depth. The trends are similar and the correlation of errors is strong (0.79). This
|
| 230 |
+
|
| 231 |
+

|
| 232 |
+
Figure 10. Visual comparisons of 3DCC and expensive After Effects (AE) generated depth of field effect on query images from Hypersim. 3DCC generated corruptions are visually similar to those from AE.
|
| 233 |
+
|
| 234 |
+
suggests that the predicted depth can be effectively used to apply 3DCC to other datasets, and the performance is expected to improve with better depth predictions. See the supplementary for more analysis and quantitative evaluations on ImageNet which suggests that 3DCC can be informative during model development by exposing nonlinear trends and vulnerabilities that are not captured by 2DCC.
|
| 235 |
+
|
| 236 |
+
# 5.3. 3D data augmentation to improve robustness
|
| 237 |
+
|
| 238 |
+
We demonstrate the effectiveness of the proposed augmentations qualitatively and quantitatively. We evaluate UNet and DPT models trained on Taskonomy (T+UNet, T+DPT) and DPT trained on Omnidata (O+DPT) to see the effect of training dataset and model architecture. The training procedure is as described in Sec. 5.1. For the other models, we initialize from O+DPT model and train with 2DCC augmentations (O+DPT+2DCC) and 3D augmentations on top of that (O+DPT+2DCC+3D), i.e. our proposed model. Qualitative evaluations: We consider i. OASIS [9], ii.
|
| 239 |
+
|
| 240 |
+

|
| 241 |
+
Figure 11. Corruptions of 3DCC are similar to expensive realistic synthetic ones while being cheaper to generate. Scatter plots of $\ell_1$ errors from the baseline model predictions on 3DCC against those created by Adobe After Effects (AE). The correlation between 3DCC near (far) focus and those from AE near (far) focus is the strongest (numbers are in the legend of left column). We also added the most similar corruption from 2DCC (defocus blur) for comparison, yielding weaker correlations (middle). Shot noise (right) is a control baseline, i.e. a randomly selected corruption, to calibrate the significance of the correlation measure.
|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
Figure 12. Effectiveness of applying 3DCC without ground truth depth. Three corruptions from 3DCC are generated using depth predictions from MiDaS [54] model on unseen Replica data. Scatter plots show the $\ell_1$ errors from the baseline model when corruptions are generated using the predicted depth (x-axis) or the ground truth (y-axis). The trends are similar between two corrupted data results, suggesting the predicted depth is an effective approximation to generate 3DCC. See the supplementary for more tests including control baselines.
|
| 245 |
+
|
| 246 |
+
AE corrupted data from Sec. 5.2.3, iii. manually collected DSLR data, and iv. in-the-wild YouTube videos. Figure 9 shows that predictions made by the proposed model are significantly more robust compared to baselines. We also recommend watching the clips on the project page.
|
| 247 |
+
|
| 248 |
+
Quantitative evaluations: In Table 1, we compute errors made by the models on 2DCC, 3DCC, AE, and OASIS [9] data (no fine-tuning). Again, the proposed model yields lower errors across datasets showing the effectiveness of augmentations. Note that robustness against corrupted data is improved without sacrificing performance on in-the-wild clean data, i.e. OASIS.
|
| 249 |
+
|
| 250 |
+
# 6. Conclusion and Limitations
|
| 251 |
+
|
| 252 |
+
We introduce a framework to test and improve model robustness against real-world distribution shifts, particularly those centered around 3D. Experiments demonstrate that the proposed 3D Common Corruptions is a challenging
|
| 253 |
+
|
| 254 |
+
| Benchmark / Model | T+UNet | T+DPT | O+DPT | O+DPT+2DCC | O+DPT+2DCC+3D (Ours) |
|---|---|---|---|---|---|
| 2DCC [27] ($\ell_1$ error) | 8.15 | 7.47 | 6.43 | 5.78 | 5.32 |
| 3DCC ($\ell_1$ error) | 7.08 | 6.89 | 6.13 | 5.94 | 5.42 |
| AE (Sec. 5.2.3) ($\ell_1$ error) | 12.86 | 12.39 | 7.84 | 6.50 | 4.94 |
| OASIS [9] (angular error) | 30.49 | 32.13 | 24.42 | 23.67 | 24.65 |
|
| 255 |
+
|
| 256 |
+
Table 1. Effectiveness of 3D augmentations quantified using different benchmarks. $\ell_1$ errors are multiplied by 100 for readability. Our model yields lower errors across the benchmarks. 2DCC and 3DCC are applied on the same Taskonomy test images. More results are given in supplementary. Evaluations on OASIS sometimes show a large variance due to its sparse ground truth.
|
| 257 |
+
|
| 258 |
+
benchmark that exposes model vulnerabilities under real-world plausible corruptions. Furthermore, the proposed data augmentation leads to more robust predictions compared to baselines. We believe this work opens up a promising direction in robustness research by showing the usefulness of 3D corruptions in benchmarking and training. Below we briefly discuss some of the limitations:
|
| 259 |
+
|
| 260 |
+
3D quality: 3DCC is upper-bounded by the quality of 3D data. The current 3DCC is an imperfect but useful approximation of real-world 3D corruptions, as we showed. The fidelity is expected to improve with higher resolution sensory data and better depth prediction models.
|
| 261 |
+
|
| 262 |
+
Non-exhaustive set: Our set of 3D corruptions and augmentations are not exhaustive. They instead serve as a starter set for researchers to experiment with. The framework can be employed to generate more domain-specific distribution shifts with minimal manual effort.
|
| 263 |
+
|
| 264 |
+
Large-scale evaluation: While we evaluate some recent robustness approaches in our analyses, our main goal was to show that 3DCC successfully exposes vulnerabilities. Thus, performing a comprehensive robustness analysis is beyond the scope of this work. We encourage researchers to test their models against our corruptions.
|
| 265 |
+
|
| 266 |
+
Balancing the benchmark: We did not explicitly balance the corruption types in our benchmark, e.g. having the same number of noise and blur distortions. Our work can further benefit from weighting strategies trying to calibrate average performance on corruption benchmarks, such as [36].
|
| 267 |
+
|
| 268 |
+
Use cases of augmentations: While we focus on robustness, investigating their usefulness on other applications, e.g. self-supervised learning, could be worthwhile.
|
| 269 |
+
|
| 270 |
+
Evaluation tasks: We experiment with dense regression tasks. However, 3DCC can be applied to different tasks, including classification and other semantic ones. Investigating failure cases of semantic models, e.g. against smoothly changing occlusion rates, using our framework could provide useful insights.
|
| 271 |
+
|
| 272 |
+
Acknowledgement: We thank Zeynep Kar and Abhijeet Jagdev. This work was partially supported by the ETH4D and EPFL EssentialTech Centre Humanitarian Action Challenge Grant.
|
| 273 |
+
|
| 274 |
+
# References
|
| 275 |
+
|
| 276 |
+
[1] Abdelrahman Abdelhamed, Stephen Lin, and Michael S Brown. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1692-1700, 2018. 3
|
| 277 |
+
[2] Iro Armeni, Zhi-Yang He, JunYoung Gwak, Amir R Zamir, Martin Fischer, Jitendra Malik, and Silvio Savarese. 3d scene graph: A structure for unified semantics, 3d space, and camera. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5664-5673, 2019. 4
|
| 278 |
+
[3] Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Danny Gutfreund, Joshua Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. 2019. 2, 4
|
| 279 |
+
[4] Brian A Barsky and Todd J Kosloff. Algorithms for rendering depth of field effects in computer graphics. In Proceedings of the 12th WSEAS international conference on Computers, volume 2008. World Scientific and Engineering Academy and Society (WSEAS), 2008. 3
|
| 280 |
+
[5] Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, and Andreas Veit. Understanding robustness of transformers for image classification. arXiv preprint arXiv:2103.14586, 2021. 2, 6
|
| 281 |
+
[6] Tim Brooks and Jonathan T Barron. Learning to synthesize motion blur. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6840-6848, 2019. 3
|
| 282 |
+
[7] Prithvjit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Aniruddha Kembhavi. Robustnav: Towards benchmarking robustness in embodied navigation. arXiv preprint arXiv:2106.04531, 2021. 2
|
| 283 |
+
[8] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3291-3300, 2018. 3
|
| 284 |
+
[9] Weifeng Chen, Shengyi Qian, David Fan, Noriyuki Kojima, Max Hamilton, and Jia Deng. Oasis: A large-scale dataset for single image 3d in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 679-688, 2020. 7, 8
|
| 285 |
+
[10] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. 3
|
| 286 |
+
[11] Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670, 2020. 3
|
| 287 |
+
[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009. 2, 4, 5, 7
|
| 288 |
+
[13] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. 1, 4
|
| 289 |
+
[14] Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, Alexander Kolesnikov, Joan
|
| 290 |
+
|
| 291 |
+
Puigcerver, Matthias Minderer, Alexander D'Amour, Dan Moldovan, et al. On robustness and transferability of convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16458-16468, 2021. 2
|
| 292 |
+
[15] Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), pages 1-7. IEEE, 2017. 1
|
| 293 |
+
[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 6
|
| 294 |
+
[17] Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir. Omnidata: A scalable pipeline for making multitask mid-level vision datasets from 3d scans. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10786-10796, 2021. 3, 5, 6
|
| 295 |
+
[18] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15(12):3736-3745, 2006. 3
|
| 296 |
+
[19] Raanan Fattal. Single image dehazing. ACM transactions on graphics (TOG), 27(3):1-9, 2008. 3, 4
|
| 297 |
+
[20] Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T Roweis, and William T Freeman. Removing camera shake from a single photograph. In ACM SIGGRAPH 2006 Papers, pages 787-794. 2006. 3
|
| 298 |
+
[21] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissonian-gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing, 17(10):1737-1754, 2008. 3, 4
|
| 299 |
+
[22] Nic Ford, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513, 2019. 2
|
| 300 |
+
[23] Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. arXiv preprint arXiv:2004.07780, 2020. 1
|
| 301 |
+
[24] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018. 2, 5
|
| 302 |
+
[25] Majed El Helou, Ruofan Zhou, Johan Barthas, and Sabine Susstrunk. Vidit: virtual image dataset for illumination transfer. arXiv preprint arXiv:2005.05460, 2020. 3
|
| 303 |
+
[26] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340-8349, 2021. 2, 5
|
| 304 |
+
[27] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019. 1, 2, 5, 8
|
| 305 |
+
|
| 306 |
+
[28] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning, pages 2712-2721. PMLR, 2019. 2
|
| 307 |
+
[29] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019. 2
|
| 308 |
+
[30] Xiaowei Hu, Chi-Wing Fu, Lei Zhu, and Pheng-Ann Heng. Depth-attentional features for single-image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8022-8031, 2019. 3
|
| 309 |
+
[31] Jason Jo and Yoshua Bengio. Measuring the tendency of cnns to learn surface statistical regularities. arXiv preprint arXiv:1711.11561, 2017. 1
|
| 310 |
+
[32] Christoph Kamann and Carsten Rother. Benchmarking the robustness of semantic segmentation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8828-8838, 2020. 2
|
| 311 |
+
[33] Sanjay Kariyappa and Moinuddin K Qureshi. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981, 2019. 2
|
| 312 |
+
[34] Deepa Kundur and Dimitrios Hatzinakos. Blind image deconvolution. IEEE signal processing magazine, 13(3):43-64, 1996. 3
|
| 313 |
+
[35] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016. 3, 5
|
| 314 |
+
[36] Alfred Laugros, Alice Caplier, and Matthieu Ospici. Using synthetic corruptions to measure robustness to natural distribution shifts. arXiv preprint arXiv:2107.12052, 2021. 8
|
| 315 |
+
[37] Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, et al. 3db: A framework for debugging computer vision models. arXiv preprint arXiv:2106.03805, 2021. 2
|
| 316 |
+
[38] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 4, 5, 7
|
| 317 |
+
[39] Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, and Ekin D Cubuk. Improving robustness without sacrificing accuracy with patch gaussian augmentation. arXiv preprint arXiv:1906.02611, 2019. 2
|
| 318 |
+
[40] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 2, 3
|
| 319 |
+
[41] Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse models for image restoration. In 2009 IEEE 12th International Conference on Computer Vision, pages 2272-2279. IEEE, 2009. 3
|
| 320 |
+
[42] Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S Ecker, Matthias Bethge, and Wieland Brendel. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv preprint arXiv:1907.07484, 2019. 2
|
| 321 |
+
|
| 322 |
+
[43] John P Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: On the strong correlation between out-of-distribution and in-distribution generalization. In International Conference on Machine Learning, pages 7721-7735. PMLR, 2021. 2
|
| 323 |
+
[44] Eric Mintun, Alexander Kirillov, and Saining Xie. On interaction between augmentations and corruptions in natural corruption robustness. arXiv preprint arXiv:2102.11273, 2021. 2
|
| 324 |
+
[45] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3883-3891, 2017. 3
|
| 325 |
+
[46] Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, and Kyoung Mu Lee. Ntire 2021 challenge on image deblurring. In CVPR Workshops, pages 149-165, June 2021. 3
|
| 326 |
+
[47] Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. arXiv preprint arXiv:2105.10497, 2021. 2, 4
|
| 327 |
+
[48] Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 3d ken burns effect from a single image. ACM Transactions on Graphics (TOG), 38(6):1-15, 2019. 3
|
| 328 |
+
[49] A Emin Orhan. Robustness properties of facebook's resnext wsl models. arXiv preprint arXiv:1907.07640, 2019. 2
|
| 329 |
+
[50] Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. arXiv preprint arXiv:1901.08846, 2019. 2
|
| 330 |
+
[51] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. Data and its (dis) contents: A survey of dataset development and use in machine learning research. arXiv preprint arXiv:2012.05345, 2020. 2
|
| 331 |
+
[52] Michael Potmesil and Indranil Chakravarty. A lens and aperture camera model for synthetic image generation. ACM SIGGRAPH Computer Graphics, 15(3):297-305, 1981. 3
|
| 332 |
+
[53] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12179-12188, 2021. 5, 6
|
| 333 |
+
[54] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341, 2019. 5, 7, 8
|
| 334 |
+
[55] Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. arXiv preprint arXiv:1904.09237, 2019. 5
|
| 335 |
+
[56] Jaesung Rim, Haeyun Lee, Jucheol Won, and Sunghyun Cho. Real-world blur dataset for learning and benchmarking deblurring algorithms. In European Conference on Computer Vision, pages 184-201. Springer, 2020. 3
|
| 336 |
+
[57] Mike Roberts, Jason Ramapuram, Anurag Ranjan, Atulit Kumar, Miguel Angel Bautista, Nathan Paczan, Russ Webb, and Joshua M Susskind. Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10912-10922, 2021. 7
|
| 337 |
+
|
| 338 |
+
[58] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, pages 234-241. Springer, 2015. 5, 6
|
| 339 |
+
[59] Evgenia Rusak, Lukas Schott, Roland Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, and Wieland Brendel. Increasing the robustness of dnns against image corruptions by playing the game of noise. 2020. 2
|
| 340 |
+
[60] Evgenia Rusak, Lukas Schott, Roland S Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, and Wieland Brendel. A simple way to make neural networks robust against diverse image corruptions. In European Conference on Computer Vision, pages 53-69. Springer, 2020. 2
|
| 341 |
+
[61] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9):973-992, 2018. 3, 4
|
| 342 |
+
[62] Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. "everyone wants to do the model work, not the data work": Data cascades in high-stakes ai. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15, 2021. 2
|
| 343 |
+
[63] Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of visual transformers. arXiv preprint arXiv:2103.15670, 2021. 2
|
| 344 |
+
[64] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard Newcombe. The Replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 4, 7
|
| 345 |
+
[65] Jiachen Sun, Qingzhao Zhang, Bhavya Kailkhura, Zhiding Yu, Chaowei Xiao, and Z Morley Mao. Benchmarking robustness of 3d point cloud recognition against common corruptions. arXiv preprint arXiv:2201.12296, 2022. 2
|
| 346 |
+
[66] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1, 3
|
| 347 |
+
[67] Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020. 2
|
| 348 |
+
[68] Maxime Tremblay, Shirsendu Sukanta Halder, Raoul de Charette, and Jean-François Lalonde. Rain rendering for evaluating and improving robustness to bad weather. International Journal of Computer Vision, 129(2):341-360, 2021.
|
| 349 |
+
[69] Alexander Von Bernuth, Georg Volk, and Oliver Bringmann. Simulating photo-realistic snow and fog on existing images for enhanced cnn training and evaluation. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pages 41-46. IEEE, 2019. 3, 4
|
| 350 |
+
|
| 351 |
+
[70] Neal Wadhwa, Rahul Garg, David E Jacobs, Bryan E Feldman, Nori Kanazawa, Robert Carroll, Yair Movshovitz-Attias, Jonathan T Barron, Yael Pritch, and Marc Levoy. Synthetic depth-of-field with a single-camera mobile phone. ACM Transactions on Graphics (ToG), 37(4):1-13, 2018. 3
|
| 352 |
+
[71] Lijun Wang, Xiaohui Shen, Jianming Zhang, Oliver Wang, Zhe Lin, Chih-Yao Hsieh, Sarah Kong, and Huchuan Lu. Deeplens: Shallow depth of field from a single image. arXiv preprint arXiv:1810.08100, 2018. 3
|
| 353 |
+
[72] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004. 4
|
| 354 |
+
[73] Kaixuan Wei, Ying Fu, Jiaolong Yang, and Hua Huang. A physics-based noise formation model for extreme low-light raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2758-2767, 2020. 3
|
| 355 |
+
[74] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10687-10698, 2020. 2
|
| 356 |
+
[75] Zexiang Xu, Kalyan Sunkavalli, Sunil Hadap, and Ravi Ramamoorthi. Deep image-based relighting from optimal sparse samples. ACM Transactions on Graphics (ToG), 37(4):1-13, 2018. 3
|
| 357 |
+
[76] Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. DVERGE: Diversifying vulnerabilities for enhanced robust generation of ensembles. Advances in Neural Information Processing Systems, 33, 2020. 2
|
| 358 |
+
[77] Teresa Yeo, Oğuzhan Fatih Kar, and Amir Zamir. Robustness via cross-domain ensembles. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 12189-12199, October 2021. 2, 5
|
| 359 |
+
[78] Chenyu Yi, Siyuan Yang, Haoliang Li, Yap-peng Tan, and Alex Kot. Benchmarking the robustness of spatial-temporal models against corruptions. arXiv preprint arXiv:2110.06513, 2021. 2, 4
|
| 360 |
+
[79] Dong Yin, Raphael Gontijo Lopes, Jon Shlens, Ekin Dogus Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. In Advances in Neural Information Processing Systems, pages 13276-13286, 2019. 2
|
| 361 |
+
[80] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023-6032, 2019. 2
|
| 362 |
+
[81] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712-3722, 2018. 4, 5, 6
|
| 363 |
+
[82] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 2
|
| 364 |
+
|
| 365 |
+
[83] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE transactions on image processing, 26(7):3142-3155, 2017. 3
|
| 366 |
+
[84] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3929-3938, 2017. 3
|
3dcommoncorruptionsanddataaugmentation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:228851cd513de1bf27019b79ae6e8b49b6f169070917d5926497b73ca62b7c99
|
| 3 |
+
size 734256
|
3dcommoncorruptionsanddataaugmentation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ee33902c8f56ffaae40d39dd953a6be42b43bb0d6528187102312567095ace94
|
| 3 |
+
size 429515
|
3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cd72cffd8f2b1ed5f6dce349a8e3340a5addfad35a261a65c51373a884955bab
|
| 3 |
+
size 78314
|
3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d57f64615e67aebc00d88d976109d33209ddc49a46d5b64c58f807490cdf0d86
|
| 3 |
+
size 96644
|
3deformrscertifyingspatialdeformationsonpointclouds/8e74b4ec-d4a4-469e-970c-3a30b923b6a8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a0086fb3b57692ce7b37fedb896554636ac5a5ecb66f3d1219967d75ae24f6ca
|
| 3 |
+
size 3454611
|
3deformrscertifyingspatialdeformationsonpointclouds/full.md
ADDED
|
@@ -0,0 +1,291 @@
|
| 1 |
+
# 3DeformRS: Certifying Spatial Deformations on Point Clouds
|
| 2 |
+
|
| 3 |
+
Gabriel Pérez S.\*,1,2
|
| 4 |
+
|
| 5 |
+
Juan C. Pérez*,2
|
| 6 |
+
|
| 7 |
+
Motasem Alfarra*,2
|
| 8 |
+
|
| 9 |
+
Silvio Giancola
|
| 10 |
+
|
| 11 |
+
Bernard Ghanem<sup>2</sup>
|
| 12 |
+
|
| 13 |
+
<sup>1</sup>Universidad Nacional de Colombia, <sup>2</sup>King Abdullah University of Science and Technology (KAUST)
|
| 14 |
+
|
| 15 |
+
gaperezsa@unal.edu.co
|
| 16 |
+
|
| 17 |
+
$^{2}\{juan.perezsantamaria, motasem.alfarra, silvio.giancola, bernard.ghanem\}@kaust.edu.sa$
|
| 18 |
+
|
| 19 |
+
# Abstract
|
| 20 |
+
|
| 21 |
+
3D computer vision models are commonly used in security-critical applications such as autonomous driving and surgical robotics. Emerging concerns over the robustness of these models against real-world deformations must be addressed practically and reliably. In this work, we propose 3DeformRS, a method to certify the robustness of point cloud Deep Neural Networks (DNNs) against real-world deformations. We developed 3DeformRS by building upon recent work that generalized Randomized Smoothing (RS) from pixel-intensity perturbations to vector-field deformations. In particular, we specialized RS to certify DNNs against parameterized deformations (e.g. rotation, twisting), while enjoying practical computational costs. We leverage the virtues of 3DeformRS to conduct a comprehensive empirical study on the certified robustness of four representative point cloud DNNs on two datasets and against seven different deformations. Compared to previous approaches for certifying point cloud DNNs, 3DeformRS is fast, scales well with point cloud size, and provides comparable-to-better certificates. For instance, when certifying a plain PointNet against a $3^{\circ}$ z-rotation on 1024-point clouds, 3DeformRS grants a certificate $3\times$ larger and $20\times$ faster than previous work $^{1}$ .
|
| 22 |
+
|
| 23 |
+
# 1. Introduction
|
| 24 |
+
|
| 25 |
+
Perception of 3D scenes plays a critical role in applications such as autonomous navigation [14] and robotics [32]. The success of autonomous driving cars and surgical robots largely depends on their ability to understand the surrounding scenes. Recent works [36, 37, 46, 51] demonstrated the capability of Deep Neural Networks (DNNs) to process 3D point clouds. These advances allowed DNNs to achieve exceptional performance in challenging 3D tasks,
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Figure 1. Randomized Smoothing for Point Cloud DNNs. We certify the prediction of a DNN on the input point cloud via randomized smoothing by constructing a smooth DNN $\hat{g}_{\phi}$ around the original DNN $f$ . For an input $p$ , a parametric transformation $\nu_{\phi + \epsilon}$ and a parameter $\sigma$ , the Smooth DNN predicts the expected value of the predictions from transformed versions of $p$ .
|
| 29 |
+
|
| 30 |
+
such as shape classification [45, 49], object detection [17] and semantic segmentation [3, 12]. Despite the superior performance of DNNs over traditional approaches, explaining the logic behind their decisions and understanding their limitations is extremely challenging. For instance, earlier works [22, 50] elucidated the limitations of point cloud DNNs to withstand tiny and meaningless perturbations in their input, known as adversarial attacks. The vulnerability of point cloud DNNs against adversarial attacks underscores the importance of considering their robustness for security-critical applications.
|
| 31 |
+
|
| 32 |
+
The study of adversarial robustness in the image domain demonstrated that defenses [20] are often later broken by more sophisticated attacks [4, 8, 11]. This phenomenon poses a difficulty for evaluating robustness [7]. Such difficulty found response in the domain of certified robustness,
|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
Figure 2. Visualization of Spatially-Deformed Point Clouds. In blue the original point cloud and in red its deformed version. From left to right: Rotation (plane), Translation (plane), Shearing (bottle), Tapering (bottle) and Twisting (table).
|
| 36 |
+
|
| 37 |
+
wherein a defense's robustness is theoretically guaranteed and thus independent of the attack's sophistication. Recent methods for certifying DNNs on images have spurred interest in extending such methods to point cloud DNNs. In particular, the seminal work of Liu et al. [27] certified point cloud DNNs against modification, addition, and deletion of points. Furthermore, Lorenz et al. [28] recently proposed a verification approach to certify against 3D transformations.
|
| 38 |
+
|
| 39 |
+
In this work, we propose 3DeformRS, a certification approach for point cloud DNNs against spatial and deformable transformations. Figure 1 provides an overview of 3DeformRS's inner workings. Our approach builds upon the theoretical background of Randomized Smoothing (RS) [10]. Specifically, we build 3DeformRS by leveraging DeformRS [1], an RS reformulation that generalized from pixel-intensity perturbations to vector field deformations, and specializing it to point cloud data. In contrast to previous approaches, our work considers spatial deformations on any point cloud DNN, providing efficiency and practicality. Compared to previous work, we find that 3DeformRS produces comparable-to-better certificates, while requiring significantly less computation time. We thus build on 3DeformRS' virtues and provide a comprehensive empirical study on the certified robustness of four representative point cloud DNNs (PointNet [36], PointNet++ [37], DGCNN [46], and CurveNet [51]) on two datasets (ModelNet40 [49] and ScanObjectNN [45]), and against seven different types of spatial deformations (Rotation, Translation, Affine, Twisting, Tapering, Shearing, and Gaussian Noise).
|
| 40 |
+
|
| 41 |
+
Contributions. In summary, our contributions are threefold: (i) We propose 3DeformRS by extending Randomized Smoothing (RS) from certifying image deformations to point cloud transformations. We further show that RS' classical formulation for certifying input perturbations can be seen as a special case of 3DeformRS. (ii) We conduct a comprehensive empirical study with 3DeformRS, where we assess the certified robustness of four point cloud DNNs, on two classification datasets and against seven spatial deformations. (iii) We compare 3DeformRS with an earlier point cloud certification approach in certification and runtime. We show that 3DeformRS delivers consistent improvements while inheriting RS' scalability and efficiency.
|
| 44 |
+
|
| 45 |
+
# 2. Related Work
|
| 46 |
+
|
| 47 |
+
3D Computer Vision. Images enjoy a canonical representation as fixed-sized matrices; in contrast, several representations exist for 3D data. Such representations include point clouds [36, 37, 51], meshes [15, 25, 40], voxels [9, 38, 56], multi-view [21, 41, 47] and implicit representations [33-35]. Given the prevalence and practicality of point clouds, we focus our attention on this representation and the associated DNNs that process it. PointNet [36] was the first successful attempt at using DNNs on point clouds. This architecture introduced a point-wise MLP with a global set pooling, and achieved remarkable performance in shape classification and semantic segmentation. PointNet++ [37] then introduced intermediate pooling operations in point clouds for local neighborhood aggregation. Afterwards, DGCNN [46] modeled convolutional operations in point clouds based on dynamically-generated graphs between closest point features. Recently, CurveNet [51] learned point sequences for local aggregation, achieving state-of-the-art performance on 3D computer vision tasks. In this work, we conduct a comprehensive empirical study on certified robustness by analyzing the robustness of these four point cloud DNNs.
|
| 48 |
+
|
| 49 |
+
Robustness. Szegedy et al.'s [43] seminal work exposed the vulnerability of DNNs against small input modifications, now known as adversarial examples. Later works observed the pervasiveness of this phenomenon [8, 18], spurring an arms race between defenses that enhanced DNNs' adversarial robustness [20, 31, 52] and attacks that could break such defenses [4, 8]. The conflict between ever-more complex defenses and attacks also incited interest towards "certified robustness" [48], wherein defenses enjoy theoretical guarantees about the inexistence of adversarial examples that could fool them. A set of works focused on exact verification [19, 54], while others considered probabilistic certification [24, 26]. Randomized Smoothing (RS) [10] has emerged as a certification approach from the probabilistic paradigm that scales well with models and datasets. Notably, RS has been successfully combined with adversarial training [39], regularization [53], and
|
| 50 |
+
|
| 51 |
+
smoothing-parameters' optimization [2, 13]. Recently, DeformRS [1] reformulated RS to consider general parameterized vector-field deformations. In this work, we develop 3DeformRS by specializing and extending DeformRS to certify point cloud DNNs against spatial deformations.
|
| 52 |
+
|
| 53 |
+
Certification on 3D Point Clouds. Since the seminal work of Xiang et al. [50] attacked point cloud DNNs, several works studied the robustness of such DNNs [22, 23, 29, 30, 42, 44, 50, 55]. Despite growing interest in the robustness of point cloud DNNs, only two works have addressed their certification. PointGuard [27] provided tight robustness guarantees against modification, addition, and deletion of points. 3DCertify [28] generalized DeepG [5] to 3D point clouds, yielding a verification approach to certify robustness against common 3D transformations. PointGuard has the benefit of low computational cost (compared to that of exact verification), but does not allow for spatial deformations. Conversely, 3DCertify considers such transformations, but suffers from impractical computational costs. In contrast, our 3DeformRS approach combines the best of both worlds, thus allowing for spatial deformations while enjoying low computational cost.
|
| 54 |
+
|
| 55 |
+
# 3. Our approach: 3DeformRS
|
| 56 |
+
|
| 57 |
+
We present 3DeformRS, a probabilistic certification method for point cloud DNNs against spatial deformations.
|
| 58 |
+
|
| 59 |
+
Preliminaries. Our methodology builds on Randomized Smoothing (RS) [10], arguably the most scalable DNN-certification method. Given a classifier $f: \mathbb{R}^d \to \mathcal{P}(\mathcal{V})$ that maps an input $x \in \mathbb{R}^d$ to the probability simplex $\mathcal{P}(\mathcal{V})$ , RS constructs a smooth classifier $g$ that outputs the most probable class when $f$ 's input is subjected to additive perturbations sampled from a distribution $\mathcal{D}$ . While RS certified against additive pixel perturbations in images, DeformRS [1] extended this formulation to certify against image deformations by proposing a parametric-domain smooth classifier. Specifically, given image $x$ with coordinates $p$ , a parametric deformation function $\nu_{\phi}$ with parameters $\phi$ (e.g. if $\nu$ is a rotation, then $\phi$ is the rotation angle), and an interpolation function $I_T$ , DeformRS defined a parametric-domain smooth classifier
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
g _ {\phi} (x, p) = \mathbb {E} _ {\epsilon \sim \mathcal {D}} \left[ f \left(I _ {T} (x, p + \nu_ {\phi + \epsilon})\right) \right]. \tag {1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
In a nutshell, $g$ outputs $f$ 's average prediction over transformed versions of $x$ . Note that, in contrast to RS, this formulation deforms pixel locations instead of their intensity. DeformRS showed that parametric-domain smooth classifiers are certifiably robust against perturbations to the deformation function's parameters via the following corollary.
|
| 66 |
+
|
| 67 |
+
Corollary 1 (restated from [1]). Suppose $g$ assigns class $c_{A}$ to an input $x$ , i.e. $c_{A} = \arg \max_{i} g_{\phi}^{i}(x, p)$ with
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
p_{A} = g_{\phi}^{c_{A}}(x, p) \quad \text{and} \quad p_{B} = \max_{c \neq c_{A}} g_{\phi}^{c}(x, p)
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
Then $\arg \max_c g_{\phi +\delta}^c (x,p) = c_A$ for all parametric perturbations $\delta$ satisfying:
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
\| \delta \|_{1} \leq \lambda \left(p_{A} - p_{B}\right) \quad \text{for } \mathcal{D} = \mathcal{U}[-\lambda, \lambda],
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
\| \delta \|_{2} \leq \frac{\sigma}{2} \left(\Phi^{-1}(p_{A}) - \Phi^{-1}(p_{B})\right) \quad \text{for } \mathcal{D} = \mathcal{N}(0, \sigma^{2} I).
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
In short, Corollary (1) states that, as long as the parameters of the deformation function (e.g. rotation angle) are perturbed by a quantity upper bounded by the certified radius, $g_{\phi}$ 's prediction will remain constant. This result allowed certification against various image deformations. In this work, we build upon DeformRS and specialize it to certify against spatial deformations in point clouds.
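To make the corollary concrete, the sketch below computes both certified radii from estimated class probabilities $p_A$ and $p_B$; it is our own illustrative code (hypothetical helper names), not part of the DeformRS or 3DeformRS implementations.

```python
# Minimal sketch of the two certified radii in Corollary 1.
from scipy.stats import norm

def l1_radius_uniform(p_a: float, p_b: float, lam: float) -> float:
    """Certified l1 radius on the deformation parameters, for D = U[-lam, lam]."""
    return lam * (p_a - p_b)

def l2_radius_gaussian(p_a: float, p_b: float, sigma: float) -> float:
    """Certified l2 radius on the deformation parameters, for D = N(0, sigma^2 I)."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# e.g., p_A = 0.9, p_B = 0.05 and sigma = 0.25 yield a radius of about 0.37
print(round(l2_radius_gaussian(0.9, 0.05, 0.25), 2))
```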
|
| 84 |
+
|
| 85 |
+
# 3.1. Parametric Certification for 3D DNNs
|
| 86 |
+
|
| 87 |
+
We specialize the result in Corollary (1) to point clouds. In this setup, $p \in \mathbb{R}^{N \times 3}$ is a point cloud consisting of $N$ 3-dimensional points. We highlight two key differences between certifying images and point clouds. (i) The interpolation function $I_{T}$ , while essential in images for the pixels' discrete locations, is irrelevant for 3D coordinates and so can be omitted. (ii) Most recent DNNs neglect the points' color information and exclusively rely on the points' location. We combine these observations and modify the parametric domain smooth classifier from Eq. (1) to propose
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\hat {g} _ {\phi} (p) = \mathbb {E} _ {\epsilon \sim \mathcal {D}} [ f (p + \nu_ {\phi + \epsilon}) ]. \tag {2}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
Our $\hat{g}_{\phi}$ , inheriting $g_{\phi}$ 's structure, is certifiable against parametric perturbations via Corollary (1). Note that, since point cloud DNNs exclusively rely on location, our parametric certification is thus equivalent to input certification.
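As an illustration only, Eq. (2) can be estimated by plain Monte Carlo over the deformation parameters. The sketch assumes `f` maps an (N, 3) point cloud to class probabilities and `flow` implements the parametric flow $\nu$; both names are ours, not the paper's API.

```python
import torch

def smooth_predict(f, p, phi, flow, sigma=0.1, n_samples=100):
    """Monte Carlo estimate of g_hat_phi(p) = E_eps[ f(p + nu_{phi + eps}) ]."""
    probs = 0.0
    for _ in range(n_samples):
        eps = sigma * torch.randn_like(phi)        # perturb the parameters, not the points
        probs = probs + f(p + flow(p, phi + eps))  # deform the cloud, then classify
    return probs / n_samples                       # averaged class probabilities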
|
| 94 |
+
|
| 95 |
+
To consider input deformations, let us study the general case where $\nu_{\phi} = \phi \in \mathbb{R}^{N\times 3}$. Under this setup, and by setting $p^{\prime} = p + \phi$, our smooth classifier becomes:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\tilde {g} _ {\phi} (p) = \mathbb {E} _ {\epsilon \sim \mathcal {D}} [ f (p ^ {\prime} + \epsilon) ]. \tag {3}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
The classifier in Eq. (3) is a general case of both the domain smooth classifier [1] and the input smooth classifier proposed earlier in [10]. Note that this expression elucidates how parametric certification of point cloud DNNs is equivalent to input certification. At last, we note that the smooth classifier in Eq. (3) is also certifiable via Corollary 1. We highlight here that directly certifying $\tilde{g}$ against parametric transformations (e.g. rotation) will perform poorly, as empirically observed in [27]. However, such deficient performance is not an inherent weakness of RS for certifying point cloud transformations, but rather due to a sub-optimal formulation of RS for spatial deformations. We argue that our formulation, i.e. the parametric domain smooth classifier in Eq. (2), is more suitable for modeling spatial deformations than the one presented in [27].
|
| 102 |
+
|
| 103 |
+
Next, we outline the spatial deformations we consider for assessing the robustness of point cloud DNNs.
|
| 104 |
+
|
| 105 |
+
<table><tr><td>Name</td><td>Flow (p̃)</td><td>φ</td><td>|| Name</td><td>Flow (p̃)</td><td>φ</td><td>|| Name</td><td>Flow (p̃)</td><td>φ</td></tr><tr><td rowspan="3">Translation</td><td>x̃ = tx</td><td>[tx]</td><td rowspan="3">z-Rotation</td><td>x̃ = (cγ - 1)x - sγy</td><td rowspan="3">[γ]</td><td rowspan="3">Affine</td><td>x̃ = ax + by + cz + d</td><td>a</td></tr><tr><td>ỹ = ty</td><td>[ty]</td><td>ỹ = sγx + (cγ - 1)y</td><td>ỹ = ex + fy + gz + h</td><td>:</td></tr><tr><td>z̃ = tz</td><td>[tz]</td><td>z̃ = 0</td><td>z̃ = ix + jy + kz + l</td><td>l</td></tr><tr><td rowspan="3">z-Shearing</td><td>x̃ = az</td><td>[a]</td><td rowspan="3">z-Twisting</td><td>x̃ = (cγz - 1)x - sγz y</td><td rowspan="3">[γ]</td><td rowspan="3">z-Tapering</td><td>x̃ = (1/2 a² + b)zx</td><td>a</td></tr><tr><td>ỹ = bz</td><td>[b]</td><td>ỹ = sγz x + (cγz - 1)y</td><td>ỹ = (1/2 a² + b)zy</td><td>b</td></tr><tr><td>z̃ = 0</td><td></td><td>z̃ = 0</td><td>z̃ = 0</td><td></td></tr></table>
|
| 106 |
+
|
| 107 |
+
Table 1. Per-point Deformation Flows for semantically-viable spatial deformations. Convention: $c_{\alpha} = \cos(\alpha)$ and $s_{\alpha} = \sin(\alpha)$ . Without loss of generality, we only show the rotation around the $z$ -axis, and leave the formulation of other rotations to the Appendix.
|
| 108 |
+
|
| 109 |
+
# 3.2. Modeling Spatial Deformations
|
| 110 |
+
|
| 111 |
+
We now detail the spatial deformations we consider to assess the robustness of point cloud DNNs. Our formulation from Eq. (3) requires modeling spatial deformations as additive perturbations on the points' coordinates. Thus, given a point cloud $p$ that is transformed to yield $p'$ , we define the flow that additively perturbs $p$ as $\tilde{p} = p' - p$ , where $\tilde{p} \in \mathbb{R}^{N \times 3}$ . Modeling transformations is then equivalent to modeling the induced per-point flow. Thus, we are required to model each deformation via a parametric flow, whose parameters are those of the corresponding transformation.
|
| 112 |
+
|
| 113 |
+
We consider six parameterizable flows, corresponding to four linear and two nonlinear transformations. In particular, we consider four linear, rigid and deformable transformations: (1) rotation, (2) translation, (3) shearing, and (4) the general affine transformation. Furthermore, we follow prior work [28] and consider two nonlinear transformations: (5) tapering and (6) twisting. As a summary, we report all the deformation flows and their corresponding parameters in Table 1, and visualize some of their effects in Figure 2.
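For concreteness, a small numpy sketch of two of these flows follows; it is our own illustrative code, assuming `p` is an (N, 3) array of xyz coordinates and the deformed cloud is `p + flow`.

```python
import numpy as np

def z_rotation_flow(p: np.ndarray, gamma: float) -> np.ndarray:
    """Per-point flow of a rotation by gamma around the z axis (cf. Table 1)."""
    x, y = p[:, 0], p[:, 1]
    fx = (np.cos(gamma) - 1.0) * x - np.sin(gamma) * y
    fy = np.sin(gamma) * x + (np.cos(gamma) - 1.0) * y
    return np.stack([fx, fy, np.zeros_like(x)], axis=1)

def z_twisting_flow(p: np.ndarray, gamma: float) -> np.ndarray:
    """Per-point twisting flow: the rotation angle grows linearly with z."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    fx = (np.cos(gamma * z) - 1.0) * x - np.sin(gamma * z) * y
    fy = np.sin(gamma * z) * x + (np.cos(gamma * z) - 1.0) * y
    return np.stack([fx, fy, np.zeros_like(x)], axis=1)
```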
|
| 114 |
+
|
| 115 |
+
Note that the affine transformation is the most general transformation we consider, capable of modeling any linear transformation. Presumably, a DNN enjoying certified robustness against affine transformations would also enjoy robustness against combinations of the other spatial deformations. We leave formulations of the aforementioned transformations in homogeneous coordinates to the Appendix.
|
| 116 |
+
|
| 117 |
+
Gaussian Noise. In addition to the above transformations, we also consider Gaussian noise perturbations. Notably, this noise deforms the underlying vector field and, thus, is the most general perturbation. To certify against Gaussian noise, we construct the smooth classifier in Eq. (3) with $\epsilon \in \mathbb{R}^{N\times 3}$ and $\mathcal{D} = \mathcal{N}(0,\sigma^2 I)$ . While this deformation is very general, the high dimensionality of the certified parameters limits its applicability to imperceptible perturbations. Nevertheless, as adversaries may take such form [50], we consider this perturbation in our experiments.
|
| 118 |
+
|
| 119 |
+
Design Choices. For each spatial deformation and point cloud DNN, we assess certified robustness by constructing the smooth classifier from Eq. (2). In deformations whose parameter space is bounded (e.g. rotations, where angles beyond $\pm \pi$ radians are redundant), we smooth with a uniform distribution and thus obtain an $\ell_1$ certificate. For the remaining deformations, we employ Gaussian smoothing and thus obtain an $\ell_2$ certificate.
|
| 122 |
+
|
| 123 |
+
# 4. Experiments
|
| 124 |
+
|
| 125 |
+
# 4.1. Setup
|
| 126 |
+
|
| 127 |
+
Datasets. We experiment on ModelNet40 [49] and ScanObjectNN [45]. ModelNet40 comprises 12,311 3D CAD models from 40 classes. ScanObjectNN has 2,902 real-world 3D scans of 15 classes. While ModelNet40 has synthetic objects, ScanObjectNN used 3D cameras and so contains natural and challenging self-occlusions. We use the ScanObjectNN variant that omits background data. For both datasets, the shapes are dimension-normalized, origin-centered and sampled with 1024 points.
|
| 128 |
+
|
| 129 |
+
Models. We study four point cloud DNNs: PointNet [36], PointNet++ [37], DGCNN [46] and CurveNet [51]. For PointNet, we used 3D Certify's implementation, which uses $z$ -rotation augmentation on ModelNet40. For CurveNet, we used the official implementation's weights, trained on ModelNet40 with axis-independent [.66, 1.5] scaling and $\pm 0.2$ translation. On ScanObjectNN, PointNet and CurveNet are trained without augmentation. For PointNet++ and DGCNN, we used the PyTorch Geometric library [16] with default hyper-parameters and without augmentations.
|
| 130 |
+
|
| 131 |
+
Certification. In our experiments, following [1,39,53], we construct the hard version of the smooth classifier in Eq. (2) for each deformation. Moreover, we follow common practice and adapt the public implementation from [10] for estimating the certified radius via Monte Carlo sampling. In particular, we use 1,000 samples to estimate the certified radius with a probability of failure of $10^{-3}$ . For all experiments, we provide envelope certified accuracy curves cross validated at several values of smoothing parameters, detailed in the Appendix. Since all deformations used Gaussian smoothing, the certificates we find are in the $\ell_2$ sense (except for rotation, which used uniform smoothing and so its certificate is in the $\ell_1$ sense).
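A hedged sketch of this certification step is shown below for the Gaussian/$\ell_2$ case; it mirrors the common randomized-smoothing recipe of [10] (Clopper-Pearson lower bound on $p_A$ and the one-sided bound $p_B \le 1 - p_A$), rather than reproducing the authors' exact code.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify(counts: np.ndarray, sigma: float, n: int, alpha: float = 1e-3):
    """Return (predicted class, certified l2 radius), or (None, 0.0) to abstain."""
    c_a = int(counts.argmax())
    # Clopper-Pearson lower confidence bound on p_A with failure probability alpha
    p_a_low = proportion_confint(int(counts[c_a]), n, alpha=2 * alpha, method="beta")[0]
    if p_a_low <= 0.5:
        return None, 0.0
    return c_a, sigma * norm.ppf(p_a_low)  # Corollary 1 radius with p_B <= 1 - p_A
```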
|
| 132 |
+
|
| 133 |
+

|
| 134 |
+
Figure 3. 3DeformRS Certification on ModelNet40 of four point cloud DNNs against 10 transformations.
|
| 135 |
+
|
| 136 |
+

|
| 137 |
+
Figure 4. 3DeformRS Certification on ScanObjectNN of four point cloud DNNs against 10 transformations.
|
| 138 |
+
|
| 139 |
+
# 4.2. Benchmarking 3D Networks
|
| 140 |
+
|
| 141 |
+
We present certification curves in Figures 3 and 4 for ModelNet40 and ScanObjectNN, respectively. We also report each curve's associated Average Certification Radius (ACR) in Table 2 as a summary metric. We further highlight nine main observations from these results, and leave a detailed analysis, together with ablations on RS' hyperparameters, to the Appendix.
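The ACR can be computed as below; this is a sketch under the common convention, which we assume here, that a sample whose smooth prediction is incorrect contributes a radius of zero.

```python
def average_certified_radius(radii, correct):
    """ACR: mean certified radius, counting misclassified samples as radius 0."""
    return sum(r if ok else 0.0 for r, ok in zip(radii, correct)) / len(radii)
```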
|
| 142 |
+
|
| 143 |
+
Vulnerability against Rigid Transformations. Our analysis considers, among others, rotation and translation. Remarkably, we find that DNNs are significantly vulnerable even against these rigid perturbations. For most DNNs, the certified accuracy plots in Figures 3 and 4 show that performance drops dramatically as the perturbation's magnitude increases. This phenomenon supports further research on increasing the robustness of point cloud DNNs against simple transformations that could happen in the real world.
|
| 146 |
+
|
| 147 |
+
Deformation Complexity. Each spatial deformation is parameterized by a certain number of values. The number of parameters can be associated with the deformation's complexity: "simple" deformations, i.e. rotation around the $x$ axis, require few parameters, while "complex" deformations, i.e. affine, require several parameters. Under these notions, we notice that as the deformation's complexity increases, the DNNs' certified accuracies drop more rapidly. This observation agrees with intuition: complex transformations should be harder to resist than simple ones.
|
| 148 |
+
|
| 149 |
+
Gaussian Noise. This deformation is arguably the most general and information-destroying, as it may not preserve distances, angles nor parallelism. Experimentally, we indeed observe that DNNs are rather brittle against this noise: even small magnitudes can break the DNNs' performance.
|
| 150 |
+
|
| 151 |
+
<table><tr><td>ModelNet40</td><td>z-Rot.</td><td>xz-Rot.</td><td>xyz-Rot.</td><td>Shearing</td><td>Twisting</td><td>Tapering</td><td>Translation</td><td>Affine</td><td>Affine (NT)</td><td>Gaussian Noise</td></tr><tr><td>PointNet [36]</td><td>2.64</td><td>0.31</td><td>0.21</td><td>0.42</td><td>2.48</td><td>0.50</td><td>0.07</td><td>0.06</td><td>0.13</td><td>0.09</td></tr><tr><td>PointNet++ [37]</td><td>1.45</td><td>0.34</td><td>0.25</td><td>0.55</td><td>1.66</td><td>0.81</td><td>0.64</td><td>0.16</td><td>0.17</td><td>0.07</td></tr><tr><td>DGCNN [46]</td><td>1.29</td><td>0.33</td><td>0.26</td><td>0.54</td><td>1.78</td><td>0.90</td><td>0.18</td><td>0.14</td><td>0.20</td><td>0.10</td></tr><tr><td>CurveNet [51]</td><td>0.76</td><td>0.39</td><td>0.34</td><td>0.56</td><td>1.51</td><td>1.09</td><td>1.32</td><td>0.26</td><td>0.26</td><td>0.08</td></tr><tr><td>ScanObjectNN</td><td>z-Rot.</td><td>xz-Rot.</td><td>xyz-Rot.</td><td>Shearing</td><td>Twisting</td><td>Tapering</td><td>Translation</td><td>Affine</td><td>Affine (NT)</td><td>Gaussian Noise</td></tr><tr><td>PointNet [36]</td><td>0.30</td><td>0.16</td><td>0.15</td><td>0.35</td><td>0.90</td><td>0.72</td><td>0.06</td><td>0.05</td><td>0.11</td><td>0.10</td></tr><tr><td>PointNet++ [37]</td><td>0.38</td><td>0.23</td><td>0.22</td><td>0.45</td><td>0.89</td><td>0.93</td><td>0.67</td><td>0.16</td><td>0.17</td><td>0.05</td></tr><tr><td>DGCNN [46]</td><td>0.32</td><td>0.20</td><td>0.19</td><td>0.37</td><td>0.92</td><td>0.81</td><td>0.10</td><td>0.08</td><td>0.14</td><td>0.05</td></tr><tr><td>CurveNet [51]</td><td>0.51</td><td>0.31</td><td>0.29</td><td>0.51</td><td>1.24</td><td>1.02</td><td>0.19</td><td>0.14</td><td>0.20</td><td>0.08</td></tr></table>
|
| 152 |
+
|
| 153 |
+
CurveNet Performs Remarkably. In terms of ACR, CurveNet displays larger robustness than competitors across the board. In particular, Table 2 (top) shows that, on ModelNet40, CurveNet achieves the best ACR for seven of the 10 deformations we present, while scoring last only in two deformations (z-Rot and Twisting). Analogously, for ScanObjectNN (Table 2, bottom), CurveNet is the best performer, displaying the best robustness for seven of the 10 deformations we consider, while never scoring last.
|
| 154 |
+
|
| 155 |
+
PointNet Performs Poorly. On ModelNet40, reported in Table 2 (top), PointNet achieves the lowest ACR values for seven of the 10 deformations, while achieving the best score only in two ($z$-Rot and Twisting). Similarly, for ScanObjectNN, in Table 2 (bottom), PointNet consistently lags behind, holding the last position in eight out of 10 transformations and the first position in only one (Gaussian noise).
|
| 156 |
+
|
| 157 |
+
Training-time Augmentations Boost Certified Robustness. On ModelNet40, PointNet was trained with $z$ -rotation augmentations. Indeed, we find that, when PointNet is evaluated on ModelNet40, it displays superior performance against rotation-based transformations along the $z$ axis, i.e. $z$ -Rot and Twisting. In particular, its ACR is $> 0.8$ more than the runner-up in $z$ -Rot and $\sim 0.4$ in Twisting, while its robust accuracy is mostly maintained across the entire rotation regime (from $-\pi$ to $+\pi$ radians, i.e. all possible rotations). That is, PointNet correctly classifies most objects, independent of whether they are rotated or twisted around the $z$ axis. More interestingly, we also remark how PointNet's dominance is not observed in ScanObjectNN, where it was not trained with $z$ -rotation augmentations. We thus attribute PointNet's robustness to the training-time augmentations it enjoyed, and so we further study this phenomenon in the next subsection.
|
| 158 |
+
|
| 159 |
+
Certified vs. Regular Accuracy. Overall, our analysis finds that the best performer is CurveNet, while the worst performer is PointNet. Notably, this fact agrees with each DNN's plain test performance.
|
| 160 |
+
|
| 161 |
+
Table 2. Certified Robustness Assessment on ModelNet40 and ScanObjectNN. We report the Average Certification Radius (ACR) of all DNNs against 10 deformations, on each dataset. For each deformation, we embolden the best performance and underline the worst.
|
| 162 |
+
|
| 163 |
+
<table><tr><td></td><td>ModelNet40</td><td>ScanObjectNN</td></tr><tr><td>PointNet [36]</td><td>85.94%</td><td>71.35%</td></tr><tr><td>PointNet++ [37]</td><td>90.03%</td><td>83.53%</td></tr><tr><td>DGCNN [46]</td><td>90.03%</td><td>78.04%</td></tr><tr><td>CurveNet [51]</td><td>93.84%</td><td>81.47%</td></tr></table>
|
| 164 |
+
|
| 165 |
+
Table 3. Test Set Accuracy on ModelNet40 and ScanObjectNN.
|
| 166 |
+
|
| 167 |
+
CurveNet has an advantage over PointNet of about $7\%$ and $10\%$ in ModelNet40 and ScanObjectNN, respectively. Thus, our results find a correlation between regular and certified performance.
|
| 168 |
+
|
| 169 |
+
ModelNet40 and ScanObjectNN. CurveNet is, arguably, the best performer on both datasets in terms of ACR. However, we note that all of CurveNet's certified accuracies drop significantly from ModelNet40 to ScanObjectNN. Thus, our results agree with how ModelNet40's synthetic nature (compared to ScanObjectNN's realistic nature) implies that ModelNet40 is a "simpler" dataset than ScanObjectNN.
|
| 170 |
+
|
| 171 |
+
Sizable Variations in Robustness. For a given deformation and dataset, we note sizable robustness differences across DNNs. Specifically, we observe that one DNN may obtain an ACR even $10 \times$ larger than other DNN. This phenomenon happens even when the plain accuracy of the DNNs being certified is mostly comparable, as reported in Table 3. Thus, we argue that certified robustness should be a design consideration when developing DNNs. That is, our results suggest that (i) plain accuracy may not provide the full picture into a DNN's performance, and (ii) certified accuracy may be effective and efficient for assessing DNNs.
|
| 172 |
+
|
| 173 |
+
# 4.3. Simple but Effective: Augmented Training
|
| 174 |
+
|
| 175 |
+
Previously, in Figure 3, we observed PointNet's dominant performance on $z$ -axis rotation and twisting. We attributed this superiority to augmentations PointNet enjoyed during training. Here we investigate the effect of other training augmentations on certified robustness. In particular, we train two DNNs with four different deformations and then assess their certified robustness against such deformations. We report the results of this experiment in Figure 5.
|
| 176 |
+
|
| 177 |
+

|
| 178 |
+
Figure 5. Relative effects of training augmentations on Average Certified Accuracy. We conduct augmentations during training and record relative improvements on certified accuracy. Most training augmentations improve certification across the board.
|
| 179 |
+
|
| 180 |
+
<table><tr><td></td><td>Plain</td><td>z-Rot.</td><td>xyz-Rot.</td><td>Trans.</td><td>Twist.</td></tr><tr><td>PointNet++</td><td>90.1%</td><td>+0.2%</td><td>-0.3%</td><td>-0.4%</td><td>-0.1%</td></tr><tr><td>DGCNN</td><td>89.8%</td><td>+1.3%</td><td>+0.8%</td><td>+1.3%</td><td>-0.2%</td></tr></table>
|
| 181 |
+
|
| 182 |
+
Table 4. Augmentation impact on accuracy on ModelNet40.
|
| 183 |
+
|
| 184 |
+
We draw the following three conclusions from these results. (i) Conducting augmentation with one deformation significantly increases the robustness against that same deformation. This is an expected phenomenon, as the model trained on deformed versions of inputs. (ii) Training on some deformations yields robustness against other deformations (e.g., augmenting with twisting results in robustness against rotation). This observation aligns with our earlier result where PointNet displayed superiority in $z$-axis rotation and twisting. This result further suggests that simple training augmentation strategies are effective for "robustifying" models against deformations. (iii) The larger certificates come at virtually no cost in clean accuracy: each model's clean accuracy varied less than $\pm 1.5\%$ w.r.t. the model trained without augmentations (refer to Table 4).
|
| 185 |
+
|
| 186 |
+
# 4.4. Comparison with 3Dcertify
|
| 187 |
+
|
| 188 |
+
We further compare 3DeformRS and 3Dcertify [28], which is, to the best of our knowledge, the current state-of-the-art certification approach against spatial deformations. 3Dcertify's analysis focuses exclusively on PointNet trained on ModelNet40, so we used their official implementation and pre-trained weights. We note that 3Dcertify certifies a 100-instance subset of the test set, and that the certified accuracies reported in [28] are w.r.t. the correctly-classified samples from such subset.
|
| 189 |
+
|
| 190 |
+
Technical Details. We first discuss the intricacies of providing a fair comparison between 3Dcertify (an exact verification approach) and 3DeformRS (a probabilistic certification approach). We underscore fundamental differences regarding the input each approach receives when certifying (i) a given DNN, e.g. PointNet, (ii) w.r.t. a desired transformation, e.g. Rotation, on (iii) a selected point cloud input, e.g. a plane. In particular, 3Dcertify receives as additional input the exact magnitude of the transformation. 3Dcertify then uses the transformation and its magnitude, e.g. $3^{\circ}$, to return a boolean value stating whether the given DNN's output is certifiable against a transformation of such magnitude. On the other hand, 3DeformRS's mechanism is fundamentally different: it computes a tight certification radius for the same (DNN-transformation-point cloud) tuple by leveraging the random distribution with which the input was smoothed. The two approaches can be compared if the certified radius provided by 3DeformRS is evaluated against the transformation magnitude received by 3Dcertify.
|
| 193 |
+
|
| 194 |
+
However, we note that the certification radius computed by 3DeformRS depends on the hyper-parameter $\sigma$ (or $\lambda$ for uniform smoothing) from Corollary 1. That is, 3DeformRS will compute a different radius for each $\sigma$ considered. While there exists a $\sigma^{\star}$ with which 3DeformRS provides the largest certified radius for the transformation being considered, $\sigma^{\star}$ is not known a priori. Thus, we are required to run a grid search over $\sigma$ values to compute each instance's largest certified radius. While this procedure is computationally intensive, the output of each experiment (considering one $\sigma$ over the whole dataset) provides an entire certified accuracy curve, in contrast to the single point in this curve provided by exact verification approaches. We underscore how this curve can provide insights into a DNN's robustness. In practice, when comparing with 3DCertify on specific transformation magnitudes, we run a grid search over at most 18 $\sigma$ values to obtain each instance's largest certified radius. Hence, we report certified accuracy envelope curves, i.e. certified accuracy curves that consider each instance's largest certified radius. Additionally, to circumvent the problem arising from considering different norms (3DCertify's $\ell_{\infty}$ norm vs. 3DeformRS's $\ell_1$ or $\ell_2$), we limit our comparison to single-parameter transformations.
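As a rough illustration (our own sketch, not 3DeformRS code), the envelope curve keeps, for every instance, the best certificate over the $\sigma$ grid and then measures certified accuracy at each radius.

```python
import numpy as np

def envelope_curve(radii, correct, radius_grid):
    """radii, correct: (num_sigmas, num_instances) arrays; certified accuracy per radius."""
    best = np.where(correct, radii, 0.0).max(axis=0)   # best certificate per instance
    return [(best >= r).mean() for r in radius_grid]   # envelope certified accuracy
```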
|
| 195 |
+
|
| 196 |
+
Results. We compare against 3DCertify, in their reported experimental setup, along three dimensions: (i) certificate magnitude, (ii) point cloud cardinality and (iii) speed. Overall, we find that 3DeformRS provides comparable-to-better certificates, scales well with point cloud size, and has manageable computational cost.
|
| 197 |
+
|
| 198 |
+
Certificate magnitude. We consider rotations w.r.t. each axis, and evaluate 3DCertify's official 64-points DNN against rotations of $\gamma \in \{1,2,3,4,5,6,7,8,10,15\}$ degrees. Whenever possible, we used 3DCertify's reported results, otherwise we used their public implementation to run certification.
|
| 199 |
+
|
| 200 |
+
<table><tr><td>Points</td><td>16</td><td>32</td><td>64</td><td>128</td><td>256</td><td>512</td><td>1024</td></tr><tr><td>Boopathy et al. [6]</td><td>3.7</td><td>3.6</td><td>3.3</td><td>2.2</td><td>4.4</td><td>5.6</td><td>6.7</td></tr><tr><td>DeepPoly</td><td>95.1</td><td>94.0</td><td>91.3</td><td>72.2</td><td>51.1</td><td>39.3</td><td>28.1</td></tr><tr><td>3Dcertify (Taylor 3D)</td><td>97.5</td><td>94.0</td><td>93.5</td><td>81.1</td><td>66.7</td><td>49.4</td><td>37.1</td></tr><tr><td>3DeformRS (ours)</td><td>98.8</td><td>97.6</td><td>97.8</td><td>100</td><td>100</td><td>97.8</td><td>100</td></tr></table>
|
| 201 |
+
|
| 202 |
+
We find that 3DeformRS achieves comparable-to-better certificates, while providing the additional benefit of full certified accuracy curves instead of individual points in these plots. Refer to the Appendix for such certified accuracy curves.
|
| 203 |
+
|
| 204 |
+
Point Cloud Cardinality. 3Dcertify provides exact verification on point clouds of limited size (64 points reported in [28]). However, DNNs may deal with point clouds of at least 1024 points in practice. We experiment on this setup and compare with previous approaches by varying the point clouds' size and certifying against a $3^{\circ}z$ -rotation. For this experiment, we follow 3Dcertify's setup [28] and use the same DNN weights for each certification method. Table 5 shows that our approach provides a better certificate for DNNs when trained and tested on large point clouds. In particular, we mark three main observations. First, 3DeformRS performs up to $60\%$ better on the realistic setup of 1024 points. Second, 3DeformRS provides a certified accuracy of over $97\%$ across the board; while large robustness is expected from a model augmented with $z$ -rotations, our approach shows that providing such certificates is possible. Third, 3DeformRS breaks the trend of decreasing certificates from which other approaches suffered when handling larger point clouds. Thus, we find that 3DeformRS enjoys scalability and invariance to larger input size.
|
| 205 |
+
|
| 206 |
+
Speed. Certification via 3DCertify [28] is computationally expensive, and its current official implementation cannot benefit from GPU hardware accelerators. Furthermore, 3DCertify's computational cost scales with the perturbation's magnitude, hindering certification against large perturbations. For example, certifying PointNet with 64 points against a $z$-rotation on a CPU requires $\sim 13\mathrm{k}$ seconds for $1^{\circ}$, and up to $\sim 89\mathrm{k}$ seconds for $10^{\circ}$.
|
| 207 |
+
|
| 208 |
+
Unlike 3DCertify, 3DeformRS enjoys a virtually constant computation cost, as it requires a fixed number of forward passes and $\sigma$ values. In addition, our native implementation can leverage GPUs for accelerating certification. Certifying against $z$-rotation with 3DeformRS on the same CPU requires only $\sim 40$ seconds per $\sigma$ (independent of $\sigma$'s magnitude, as reported in Table 6). Even the extreme case of certifying with 100 values for $\sigma$, arguably an unnecessary amount, still requires only $\sim 4k$ seconds. That is, certifying small perturbations with 3DeformRS attains a $\sim 3\times$ speed boost, while large perturbations can even enjoy a $\sim 20\times$ boost.
|
| 209 |
+
|
| 210 |
+
Table 5. $3^{\circ}z-$ Rotation Certificates when varying Point Cloud Cardinality. The three previous methods consider linear relaxations and use the DeepPoly verifier, while 3DeformRS is based on Randomized Smoothing. Baselines taken from [28].
|
| 211 |
+
|
| 212 |
+
<table><tr><td rowspan="2">Device</td><td colspan="3">σ</td></tr><tr><td>0.01</td><td>0.2</td><td>0.4</td></tr><tr><td>CPU</td><td>38.8</td><td>39.9</td><td>40.4</td></tr><tr><td>GPU</td><td>7.3</td><td>7.4</td><td>7.7</td></tr></table>
|
| 213 |
+
|
| 214 |
+
Table 6. Runtimes for 3DeformRS. We compare certification runtime for a single $\sigma$ value both in CPU and GPU (values in seconds). 3DeformRS's CPU version enjoys reasonable certification times, and leveraging a GPU lowers the runtime by $\sim 5\times$ .
|
| 215 |
+
|
| 216 |
+
Moreover, accelerating 3DeformRS via an Nvidia V100 GPU provides a $5\times$ boost over its CPU counterpart, further improving runtime over 3Dcertify.
|
| 217 |
+
|
| 218 |
+
# 4.5. Ablations
|
| 219 |
+
|
| 220 |
+
The stochastic nature of 3DeformRS requires a Monte Carlo method followed by a statistical test to bound the probability of returning an incorrect prediction. This statistical test is parameterized by a failure ratio $\alpha$ which, throughout our experiments, was set to the default value of $10^{-3}$ , following [10]. Here, we analyze the sensitivity of 3DeformRS to other failure probabilities $\alpha$ . We show in the Appendix the certified accuracy curves for failure probabilities $\alpha \in \{10^{-2}, 10^{-4}, 10^{-5}\}$ . We underscore that, in our assessment, 3DeformRS shows negligible variation in certified accuracy w.r.t. changes in $\alpha$ . In particular, we notice that all ACRs are $\sim 0.22$ for all the $\alpha$ values we considered.
|
| 221 |
+
|
| 222 |
+
# 5. Conclusions and Limitations
|
| 223 |
+
|
| 224 |
+
In this work, we propose 3DeformRS, a method for certifying point cloud DNNs against spatial deformations. Our method provides comparable-to-better certificates than earlier works while scaling better to large point clouds and enjoying practical computation times. These virtues of 3DeformRS allow us to conduct a comprehensive empirical study of the certified robustness of point cloud DNNs against semantically-viable deformations. Furthermore, 3DeformRS' practical runtimes may enable its usage in real-world applications. While our stochastic approach is practical with its faster and top-performing certification, its stochasticity may also raise concerns when comparing against exact verification approaches. Moreover, we note that our work solely focused on assessing the 3D robustness of point cloud DNNs against input deformations, disregarding other types of perturbations. Possible avenues for future work include incorporating better training algorithms such as MACER [53] and SmoothAdv [39] for further robustness improvements.
|
| 225 |
+
|
| 226 |
+
Acknowledgements. This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2019-4033. We would also like to thank Jesús Zarzar for the help and discussions.
|
| 227 |
+
|
| 228 |
+
# References
|
| 229 |
+
|
| 230 |
+
[1] Motasem Alfarra, Adel Bibi, Naeemullah Khan, Philip H. S. Torr, and Bernard Ghanem. Deformrs: Certifying input deformations with randomized smoothing. CoRR, abs/2107.00996, 2021.
|
| 231 |
+
[2] Motasem Alfarra, Adel Bibi, Philip H. S. Torr, and Bernard Ghanem. Data dependent randomized smoothing, 2020.
|
| 232 |
+
[3] Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2016.
|
| 233 |
+
[4] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning (ICML), 2018.
|
| 234 |
+
[5] Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, and Martin Vechev. Certifying geometric robustness of neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
|
| 235 |
+
[6] Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel. Cnn-cert: An efficient framework for certifying robustness of convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3240-3247, 2019.
|
| 236 |
+
[7] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
|
| 237 |
+
[8] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 2017.
|
| 238 |
+
[9] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3075-3084, 2019.
|
| 239 |
+
[10] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning (ICML), 2019.
|
| 240 |
+
[11] Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020.
|
| 241 |
+
[12] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
|
| 242 |
+
[13] Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, and Adel Bibi. ANCER: anisotropic certification via sample-wise volume maximization. CoRR, abs/2107.04570, 2021.
|
| 243 |
+
[14] Francesca M Favarò, Nazanin Nader, Sky O Eurich, Michelle Tripp, and Naresh Varadaraju. Examining accident reports involving autonomous vehicles in california. PLoS one, 12(9):e0184952, 2017.
|
| 244 |
+
|
| 245 |
+
[15] Yutong Feng, Yifan Feng, Haoxuan You, Xibin Zhao, and Yue Gao. Meshnet: Mesh neural network for 3d shape representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8279-8286, 2019.
|
| 246 |
+
[16] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
|
| 247 |
+
[17] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition, pages 3354-3361. IEEE, 2012.
|
| 248 |
+
[18] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
|
| 249 |
+
[19] Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.
|
| 250 |
+
[20] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018.
|
| 251 |
+
[21] Abdullah Hamdi, Silvio Giancola, and Bernard Ghanem. Mvtn: Multi-view transformation network for 3d shape recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1-11, 2021.
|
| 252 |
+
[22] Abdullah Hamdi, Sara Rojas, Ali Thabet, and Bernard Ghanem. Advpc: Transferable adversarial perturbations on 3d point clouds. In European Conference on Computer Vision (ECCV), 2020.
|
| 253 |
+
[23] Jaeyeon Kim, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Minimal adversarial examples for deep learning on 3d point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
|
| 254 |
+
[24] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), 2019.
|
| 255 |
+
[25] Huan Lei, Naveed Akhtar, and Ajmal Mian. Picasso: A CUDA-based library for deep learning over 3d meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13854-13864, 2021.
|
| 256 |
+
[26] B Li, C Chen, W Wang, and Lawrence Carin. Second-order adversarial attack and certifiable robustness. arXiv preprint arXiv:1809.03113, 2018.
|
| 257 |
+
[27] Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. Pointguard: Provably robust 3d point cloud classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6186-6195, 2021.
|
| 258 |
+
[28] Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, and Martin Vechev. Robustness certification for point cloud models. arXiv preprint arXiv:2103.16652, 2021.
|
| 259 |
+
[29] Chengcheng Ma, Weiliang Meng, Baoyuan Wu, Shibiao Xu, and Xiaopeng Zhang. Efficient joint gradient based attack against SOR defense for 3d point cloud classification. In Proceedings of the 28th ACM International Conference on Multimedia, 2020.
|
| 262 |
+
[30] Chengcheng Ma, Weiliang Meng, Baoyuan Wu, Shibiao Xu, and Xiaopeng Zhang. Towards effective adversarial attack against 3d point cloud classification. In 2021 IEEE International Conference on Multimedia and Expo (ICME), 2021.
|
| 263 |
+
[31] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
|
| 264 |
+
[32] Jacques Marescaux, Joel Leroy, Michel Gagner, Francesco Rubino, Didier Mutter, Michel Vix, Steven E Butner, and Michelle K Smith. Transatlantic robot-assisted telesurgery. Nature, 413(6854):379-380, 2001.
|
| 265 |
+
[33] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460-4470, 2019.
|
| 266 |
+
[34] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405-421. Springer, 2020.
|
| 267 |
+
[35] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 165-174, 2019.
|
| 268 |
+
[36] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017.
|
| 269 |
+
[37] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. Point-net++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017.
|
| 270 |
+
[38] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3577-3586, 2017.
|
| 271 |
+
[39] Hadi Salman, Jerry Li, Ilya P Razenshteyn, Pengchuan Zhang, Huan Zhang, Sébastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
|
| 272 |
+
[40] Jonas Schult, Francis Engelmann, Theodora Kontogianni, and Bastian Leibe. Dualconvmesh-net: Joint geodesic and euclidean convolutions on 3d meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8612-8622, 2020.
|
| 273 |
+
[41] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE international conference on computer vision, pages 945-953, 2015.
|
| 274 |
+
|
| 275 |
+
[42] Jiachen Sun, Yulong Cao, Christopher Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Mao, and Chaowei Xiao. Adversarially robust 3d point cloud recognition using self-supervisions. In Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS), 2021.
|
| 276 |
+
[43] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
|
| 277 |
+
[44] Tzungyu Tsai, Kaichen Yang, Tsung-Yi Ho, and Yier Jin. Robust adversarial objects against deep learning models. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.
|
| 278 |
+
[45] Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1588-1597, 2019.
|
| 279 |
+
[46] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1-12, 2019.
|
| 280 |
+
[47] Xin Wei, Ruixuan Yu, and Jian Sun. View-gcn: View-based graph convolutional network for 3d shape analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1850-1859, 2020.
|
| 281 |
+
[48] Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), 2018.
|
| 282 |
+
[49] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015.
|
| 283 |
+
[50] Chong Xiang, Charles R Qi, and Bo Li. Generating 3d adversarial point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
|
| 284 |
+
[51] Tiange Xiang, Chaoyi Zhang, Yang Song, Jianhui Yu, and Weidong Cai. Walk in the cloud: Learning curves for point clouds shape analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 915-924, October 2021.
|
| 285 |
+
[52] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L. Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
|
| 286 |
+
[53] Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei Wang. Macer: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations (ICLR), 2019.
|
| 287 |
+
[54] Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations (ICLR), 2020.
|
| 290 |
+
[55] Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, and Nenghai Yu. Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
|
| 291 |
+
[56] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4490-4499, 2018.
|
3deformrscertifyingspatialdeformationsonpointclouds/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:df0ebbbc2f7ede46f3fbdb760457edcd0e37f94685aba199d7b5ed3315daa708
|
| 3 |
+
size 454840
|
3deformrscertifyingspatialdeformationsonpointclouds/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6b2d4f4ec47243aa8268b8dbbff97b0abe194585a37c9e954baf598b78279256
|
| 3 |
+
size 395418
|
3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5992e2d88cd48ee1152c6b955aba4c9d13c8c3ef255fee3bee353508e96fea98
|
| 3 |
+
size 77401
|
3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c25e052a2eb0f2ca99c76077a863dbaacf5cdd2e1ca3809bca400343e3fd64e9
|
| 3 |
+
size 94198
|
3dhumantonguereconstructionfromsingleinthewildimages/0631587e-4fb7-46f3-bf3a-8e19d2908ac4_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cda1f43dc50588d2ecfef1bc1554a15961fc7cad82b600c05cb8214c810db0a1
|
| 3 |
+
size 9184906
|
3dhumantonguereconstructionfromsingleinthewildimages/full.md
ADDED
|
@@ -0,0 +1,316 @@
| 1 |
+
# 3D human tongue reconstruction from single "in-the-wild" images
|
| 2 |
+
|
| 3 |
+
Stylianos Ploumpis $^{1,2}$
|
| 4 |
+
|
| 5 |
+
Stylianos Moschoglou $^{1,2}$
|
| 6 |
+
Stefanos Zafeiriou $^{1,2}$
|
| 7 |
+
|
| 8 |
+
Vasileios Triantafyllou
|
| 9 |
+
|
| 10 |
+
<sup>1</sup>Imperial College London, UK
|
| 11 |
+
|
| 12 |
+
$^{2}$ Huawei Technologies Co. Ltd
|
| 13 |
+
|
| 14 |
+
$^{1}${s.ploumpis,s.moschoglou,s.zafeiriou}@imperial.ac.uk $^{2}${vasilios.triantafyllou}@huawei.com
|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
Figure 1. We propose a framework that accurately derives the 3D tongue shape from single images. A highly detailed 3D point cloud of the tongue surface and a full head topology along with the tongue expression can be estimated from the image domain. As we demonstrate, our framework is able to capture the tongue shape even in adverse "in-the-wild" conditions.
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
|
| 35 |
+
# Abstract
|
| 36 |
+
|
| 37 |
+
3D face reconstruction from a single image is a task that has garnered increased interest in the Computer Vision community, especially due to its broad use in a number of applications such as realistic 3D avatar creation, pose invariant face recognition and face hallucination. Since the introduction of the 3D Morphable Model in the late 90s, we have witnessed an explosion of research aiming at particularly tackling this task. Nevertheless, despite the increasing level of detail in the 3D face reconstructions from single images mainly attributed to deep learning advances, finer and highly deformable components of the face such as the tongue are still absent from all 3D face models in the literature, even though they are very important for the realness of the 3D avatar representations. In this work we present the first, to the best of our knowledge, end-to-end trainable pipeline that accurately reconstructs the 3D face together with the tongue. Moreover, we make this pipeline robust in "in-the-wild" images by introducing a novel GAN method tailored for 3D tongue surface generation. Finally, we make publicly available to the community the first diverse tongue
|
| 38 |
+
|
| 39 |
+
dataset, consisting of 1,800 raw scans of 700 individuals varying in gender, age, and ethnicity backgrounds*. As we demonstrate in an extensive series of quantitative as well as qualitative experiments, our model proves to be robust and realistically captures the 3D tongue structure, even in adverse "in-the-wild" conditions.
|
| 40 |
+
|
| 41 |
+
# 1. Introduction
|
| 42 |
+
|
| 43 |
+
Recently, 3D face reconstruction from single "in-the-wild" images has been a very active topic in Computer Vision with applications ranging from realistic 3D avatar creation to image imputation and face recognition [12, 14, 22, 37, 39, 44]. Nevertheless, despite the improvement in the quality of the 3D reconstructions, all of these methods do not accommodate any statistical variations in the oral cavity let alone a tongue template mesh. As a result, the oral region is completely disregarded from the final result.
|
| 44 |
+
|
| 45 |
+
Being able to reconstruct the tongue expression has multiple advantages in various applications. First of all, the generated avatars would be more realistic and would be able to mimic many more facial expressions.
|
| 46 |
+
|
| 47 |
+
Moreover, speech animation tasks would be improved as the inclusion of the oral cavity plays a significant role. Finally, face recognition applications could be enhanced as more extreme poses and expressions would be modeled.
|
| 48 |
+
|
| 49 |
+
However, as we already pointed out, all of the current state-of-the-art (SOTA) methods [14, 39, 44] do not contain the tongue component in their implementations. This is because of two reasons: a) there is no publicly available tongue dataset, and b) it is very challenging to carry out 3D reconstruction of the face together with the tongue in "in-the-wild" conditions, because of the highly deformable nature of the human tongue.
|
| 50 |
+
|
| 51 |
+
To tackle the absence of tongue data, we collected a large and diverse dataset of textured 3D tongue point-clouds (more info about the data in Section 3). Having captured the data, we created a pipeline which is comprised of the following parts: a) a tongue point-cloud autoencoder (AE) which is used to derive useful 3D features of our raw collected 3D data, b) a tongue image encoder optimized based on the aforementioned 3D features, c) a shape decoder which translates the encoder outputs to the parameter space of the Universal Head Model (UHM) [33]. We should note that the UHM in our case is further rigged/modified so that it can model various tongue shapes/expressions, as explained in Section 3. We begin by training the AE in step a) and then we train steps b-c) in an end-to-end fashion so that the output tongue expression of the UHM model is as close as possible to the corresponding ground-truth 3D tongue point-cloud of the 2D tongue image.
|
| 52 |
+
|
| 53 |
+
Since there is a lack of ground-truth 3D tongue data corresponding to "in-the-wild" 2D tongue images, the pipeline we described so far is only trained using our collected data which were captured under controlled conditions. This results in sub-optimal performance in "in-the-wild" conditions. To remedy this, we developed a novel conditional GAN framework that is able to generate accurate 3D tongue point-clouds based on the image encoder outputs (step b) of the pipeline). Having created new image/point-cloud pairs of "in-the-wild" tongue data, we re-train the pipeline using also these new data. As we show in Section 4, this addition substantially improves the quality of the tongue reconstructions. To summarize, the contributions of our work are the following:
|
| 54 |
+
|
| 55 |
+
- We release a dataset of 1,800 raw tongue scans of various shapes and positions, corresponding to around 700 subjects. Being the first such diverse tongue dataset, it can prove very useful to the community.
|
| 56 |
+
- We present a complete pipeline trained in an end-to-end fashion that is able to reconstruct the 3D face together with the tongue from a single image.
|
| 57 |
+
|
| 58 |
+
- To make this pipeline robust to "in-the-wild" images, we introduce a novel GAN framework which is able to accurately reconstruct 3D tongues from "in-the-wild" images with an increasing level of detail.
|
| 59 |
+
|
| 60 |
+
# 2. Related Work
|
| 61 |
+
|
| 62 |
+
Single view 3D reconstruction of human facial/head parts is undeniably an extremely valuable task in Computer Vision. However, it has posed many challenges to the research community, due to the fundamental depth ambiguities and the ill-posed nature of the problem. In order to constrain the ambiguity of the problem, many statistical parametric models have been introduced for different parts of the human face/head [2, 5, 26, 34].
|
| 63 |
+
|
| 64 |
+
Due to the increasing interest in facial analysis over the years, the research community has mainly focused on human facial reconstructions. Since the inception of facial 3D Morphable Models (3DMMs) in [2], a myriad of scientific papers have been published focusing solely on the reconstruction of facial shape and appearance [3,4,14,22]. Only recently with the emergence of 3D scanning data has the research interest shifted to other significant parts of the human head. A few head models have been introduced during the recent years but without any statistical craniofacial correlations [26, 36]. The first craniofacial 3DMM of the human head was introduced in [10] and later extended and leveraged into a 3D head reconstruction setting from unconstrained single images [34]. A few recent works tried to align a skull structure of the human head with the facial topology [27, 28] in order to obtain a distribution of plausible face shapes given a skull shape.
|
| 65 |
+
|
| 66 |
+
Finer details of the human face/head started to appear with the introduction of 3D human ear modeling [46]. Ears are key structures of the human head that have an important contribution to the biometric recognition and general appearance of a person. The two foremost examples of ear models were introduced in [9, 47] but none of them was fused to a face/head in order to create a complete appearance.
|
| 67 |
+
|
| 68 |
+
Moreover, in an attempt to overcome the "uncanny valley" problem, a few approaches have tried to model the independent variations/movements of the human eye and the facial eye region [1, 42]. These efforts are challenging due to the limited amount of data around the eye region and the extreme level of detail required for this task. Moving towards the oral cavity, teeth modeling was introduced in [40, 43], where the 3D structure of the teeth was recovered from 2D images via an elaborate optimization scheme.
|
| 69 |
+
|
| 70 |
+
Only very recently, a few approaches [25, 33] have tried to combine all of these aforementioned attributes of the human head (eyes, ears, teeth, and inner oral cavity) in order to build a complete model in terms of shape and texture, which accurately represents the human head.
|
| 71 |
+
|
| 72 |
+

|
| 73 |
+
Figure 2. Random 3D tongue expressions of our synthetic database based on the mean UHM template. The expressions are rigged and manually sculpted to induce more variance around the tongue surface and the general oral cavity.
|
| 74 |
+
|
| 75 |
+
Although these models include an oral topology, none of them deals with the dynamics of the tongue, something which is really important for speech animation and the overall realness of the avatar representation. To this end, in this work we aim at extending these approaches and paving the way towards a realistic human appearance by releasing a diverse 3D tongue dataset to the research community. We also present the first framework for accurate 3D human tongue reconstruction from single images.
|
| 76 |
+
|
| 77 |
+
# 3. 3D human tongue reconstruction
|
| 78 |
+
|
| 79 |
+
In this Section we present the complete tongue reconstruction pipeline. We begin by describing our collected 2D/3D tongue dataset and our manually rigged tongue dataset which is based on the UHM [33] template. We further provide details about the point-cloud AE, the image encoder, the shape decoder and the overall loss functions we used to optimize the pipeline for the tongue reconstruction. Moreover, we present the novel conditional GAN method which is able to accurately reconstruct 3D tongue pointclouds of "in-the-wild" tongue images. Finally, we explain how we used the generated point-clouds of the GAN to retrain the pipeline to achieve better results in "in-the-wild" conditions.
|
| 80 |
+
|
| 81 |
+
# 3.1. Tongue datasets
|
| 82 |
+
|
| 83 |
+
TongueDB: the first 3D tongue dataset. As mentioned in Section 1, we collected a large dataset comprising textured 3D tongue scans. Our point cloud database, dubbed TongueDB, contains approximately 1,800 3D tongue scans which were captured during a special exhibition in the Science Museum, London. The subjects were instructed to perform a range of tongue expressions (e.g., tongue out left and right, tongue out center, tongue out center round, tongue out center extreme open mouth, tongue inside left and right, etc.). Some example images can be seen in Fig. 6. The capturing apparatus utilized for this task was a 3dMD four-camera structured-light stereo system, which produces high quality dense meshes. We recorded a total of 700 distinct subjects with available metadata about them, including their gender (42% male, 58% female), age, and ethnicity (82% White, 9% Asian, 3% Black and 6% other).
|
| 86 |
+
|
| 87 |
+
Rigged tongue database. In order to carry out 3D tongue and face reconstruction, we would need to use a face/head model. Nevertheless, one major drawback of all of the currently used face/head models [10, 26, 36] is that they are missing the tongue component. This is because it is a challenging task to non-rigidly capture in a fixed template the 3D topology of the oral cavity. These challenges include: a) the highly deformable nature of the tongue, b) the non-convexity of the mouth region, c) the specular texture of the teeth. In order to alleviate this issue, we constructed a synthetic 3D head and tongue dataset rigged by 3D artists. The artists used a proportion of the raw tongue scans as a guide for the manual sculpting by tracing the 3D details of the original scans. The raw scans chosen for 3D tracing were carefully selected to represent most of the shape variance existing in TongueDB. For our neutral mesh template $\bar{\mathbf{T}}$ we utilize the mean template of the UHM [33] as it provides all the necessary components of the human oral cavity in accordance with the entire head statistical structure. The resulting rigged tongue expressions amount to 75 distinct meshes. In order to further augment our synthetic dataset, we performed trilinear interpolation between the closest expression meshes and generated a total of $n_{s} = 720$ tongue expressions. Some example synthetic expressions are shown in Fig. 2. A standard PCA was applied on the interpolated meshes, resulting in an orthogonal basis matrix $\mathbf{U}_t\in \mathbb{R}^{3N\times n_t}$ (where $N$ is the number of mesh vertices and $n_t = 110$ is the number of retained components). The PCA is performed on the entire set of head vertices and not solely on the oral cavity. In this way, it is more efficient afterwards to transfer the tongue expression from the mean head to a head with a different facial identity.
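As a rough sketch of how such a basis can be obtained, the snippet below builds an orthogonal basis from flattened mesh vertices with a truncated SVD. The array shapes and function name are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_tongue_basis(meshes, n_components=110):
    """meshes: (n_samples, N, 3) interpolated tongue-expression meshes sharing the
    UHM topology. Returns the mean shape (3N,) and a basis U_t of shape (3N, n_components)."""
    n_samples, N, _ = meshes.shape
    X = meshes.reshape(n_samples, 3 * N)       # flatten each mesh to a 3N vector
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal directions = right singular vectors of the centered data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U_t = Vt[:n_components].T                  # (3N, n_components), orthonormal columns
    return mean, U_t
```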
|
| 88 |
+
|
| 89 |
+
# 3.2. Method
|
| 90 |
+
|
| 91 |
+
Tongue point-cloud AE. In order to accurately reconstruct a tongue in its 3D form based on a 2D image, our image encoder needs to be guided by meaningful target labels which can capture all the desired 3D point-cloud information. These labels, denoted as $\mathbf{y} \in \mathbb{R}^{256}$ , are learned by autoencoding the raw point-clouds of our dataset (i.e., the raw 3D tongue scans). For this task, we utilize a self-organizing-map framework for hierarchical feature extraction [24].
|
| 92 |
+
|
| 93 |
+

|
| 94 |
+
Figure 3. An illustration of our tongue reconstruction framework. First, we train the point-cloud AE on its own to get meaningful 3D features ($\mathbf{y}$) and then we train the image encoder/shape decoder using a number of different losses, as explained in Section 3.
|
| 95 |
+
|
| 96 |
+
|
| 97 |
+
|
| 98 |
+
Tongue image encoder. The task of the tongue image encoder is to produce features which are close to the target 3D features $\mathbf{y}$ of the AE. To make the encoder robust to various camera angles or illuminations, we employ a rendering framework where we utilize the textured raw scans (TongueDB in Section 3.1). We render our 1.8K textured meshes with a pre-computed radiance transfer technique using spherical harmonics which efficiently represent global light scattering. Additionally, we use more than 15 different indoor scenes coupled with random light positions and mesh orientations around all 3D axes, resulting in approximately 100K images. As an encoder we used a ResNet-50 [17] model pre-trained on ImageNet [11] and fine-tuned it on our dataset. In particular, we modified the last layer of the network to output a vector $\tilde{\mathbf{y}} \in \mathbb{R}^{256}$, matching the dimension of the ground-truth vector $\mathbf{y}$.
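A minimal PyTorch sketch of this encoder modification is given below. It only illustrates swapping the classification head for a 256-dimensional regression output and is not the authors' exact training code.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 pre-trained on ImageNet, with the final fc layer replaced by a
# 256-dimensional regression head matching the AE features y.
encoder = models.resnet50(pretrained=True)
encoder.fc = nn.Linear(encoder.fc.in_features, 256)

img = torch.randn(1, 3, 224, 224)   # a rendered (or cropped) tongue image
y_tilde = encoder(img)              # (1, 256) predicted 3D feature vector
```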
|
| 99 |
+
|
| 100 |
+
Shape decoder. In order to decode the encoder $\tilde{\mathbf{y}}$ labels into meaningful tongue shapes, we use the synthetic PCA model $\mathbf{U}_t$ of the rigged tongue expression dataset. To this end, after producing the $\tilde{\mathbf{y}}$ labels, we utilize a standard multi-layer perceptron (MLP) which works as a regression scheme to the latent parameters $\mathbf{p}_t \in \mathbb{R}^{110}$ of the synthetic PCA tongue model. The statistical nature of the PCA model helps us constrain the final result during training and ensures meaningful deformations which lie inside the spectrum of our rigged/modified UHM model.
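Following the (256, 128, 110) layout reported in Section 4, a sketch of this regression MLP and the final PCA decoding might look as follows. Variable names and tensor layouts are ours for illustration.

```python
import torch
import torch.nn as nn

# MLP mapping encoder features y_tilde (256-d) to PCA parameters p_t (110-d),
# with a ReLU in the intermediate layer as reported in Section 4.
shape_decoder = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 110),
)

def decode_mesh(y_tilde, U_t, mean_shape):
    """U_t: (3N, 110) PCA basis, mean_shape: (3N,). Returns vertices (batch, N, 3)."""
    p_t = shape_decoder(y_tilde)                 # (batch, 110)
    verts = mean_shape + p_t @ U_t.T             # (batch, 3N)
    return verts.view(verts.shape[0], -1, 3)
```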
|
| 101 |
+
|
| 102 |
+
Pipeline training. During training, we first train the point-cloud AE on its own and then train the pipeline of the image encoder and shape decoder in an end-to-end fashion. To optimize the pipeline, we apply a total of 6 losses with each one contributing to the quality of the final result. The first 2 losses are calculated between the predicted tongue expression of the rigged/modified UHM model and the
|
| 103 |
+
|
| 104 |
+
ground-truth tongue point-cloud of the corresponding input image. Similarly to [41], we adopt a Chamfer loss [13] $\mathcal{L}_{CD}$ to optimize the position of the resulting template points as well as a normal loss $\mathcal{L}_n$ to correct the orientation of the mesh. In order to compute an accurate Chamfer loss, we only utilize a small area around the oral cavity which is defined based on the ground-truth point-cloud. Additionally, we calculate a Laplacian regularization $\mathcal{L}_l$ loss between our predicted mesh and the mean shape of the PCA model in order to prevent the vertices from moving too freely outside the mean positions and constrain the resulting shape to be smooth. An edge length loss $\mathcal{L}_e$ is also introduced which penalizes any flying vertices (outliers). Finally, we employ a collision loss $\mathcal{L}_c$ which prevents the points of the tongue to penetrate the surface of the oral cavity and is formulated as the sum of each collision error around the 12 mouth landmarks of the UHM template (as illustrated in the supplementary material):
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\mathcal{L}_c = \frac{1}{N} \sum_{k=0}^{11} \sum_{i=0}^{N-1} \max\left(0, d_k^i\right) \tag{1}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
d_k^i = r^2 - \left(q_1^i - x_k\right)^2 - \left(q_2^i - y_k\right)^2 - \left(q_3^i - z_k\right)^2
|
| 112 |
+
$$
|
| 113 |
+
|
| 114 |
+
where $\mathcal{L}_c$ is calculated as the sum of distances of each collided point $\mathbf{q}^i = \{q_1^i,q_2^i,q_3^i\}$ to the sphere $k$ with center at the landmark coordinates $x_{k},y_{k},z_{k}$ and radius $r = 1.5\,\mathrm{cm}$. Lastly, we impose a final L2 loss $\mathcal{L}_y$ in the intermediate step of our pipeline, where we constrain the predicted encoded features $\tilde{\mathbf{y}}$ to be as close as possible to the ground-truth features $\mathbf{y}$ of the corresponding autoencoded point-cloud. This loss is of paramount importance because: a) the $\tilde{\mathbf{y}}$ features in this way contain rich 3D information invariant to texture/illumination variations, and b) our "in-the-wild" extension, which we introduce later, is based on a generative point-cloud framework that relies on such rich 3D features.
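As a concrete reading of Eq. (1), a minimal PyTorch transcription of the collision term could look as follows; the tensor shapes and function name are our own illustrative assumptions.

```python
import torch

def collision_loss(points, landmarks, r=1.5):
    """Eq. (1): points (N, 3) are predicted tongue vertices, landmarks (12, 3) are the
    mouth landmarks of the UHM template, r is the sphere radius (here in cm).
    Penalizes points that fall inside any landmark sphere."""
    # squared distance of every point to every landmark sphere centre: (12, N)
    d2 = ((landmarks[:, None, :] - points[None, :, :]) ** 2).sum(dim=-1)
    d = r ** 2 - d2                        # positive only for points inside a sphere
    return torch.clamp(d, min=0).sum() / points.shape[0]
```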
|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
Figure 4. Symbol $\mathbf{c}$ stands for row-wise concatenation along the channel dimension. Symbol $\mathbf{o}$ stands for element-wise (i.e., Hadamard) product. The Generator inputs are a Gaussian noise sample $\mathbf{z}$ and a label $\mathbf{y}$ corresponding to a particular tongue, from which we want to sample a 3D point. The Discriminator input pairs are a label $\mathbf{y}$ which corresponds to a specific tongue and $\mathbf{x}_t$ a real 3D point belonging to the aforementioned tongue point-cloud and $G(\mathbf{z},\mathbf{y}) = \hat{\mathbf{x}}_t$ is a generated point belonging to this tongue.
|
| 118 |
+
|
| 119 |
+
|
| 120 |
+
|
| 121 |
+
The final loss function $\mathcal{L}_{total}$ is given by:
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\mathcal{L}_{\mathrm{total}} = \lambda_1 \mathcal{L}_{CD} + \lambda_2 \mathcal{L}_n + \lambda_3 \mathcal{L}_l + \lambda_4 \mathcal{L}_e + \lambda_5 \mathcal{L}_c + \lambda_6 \mathcal{L}_y \tag{2}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
where $\lambda_1, \ldots, \lambda_6$ are training hyper-parameters. During inference, the encoder network takes as an input a single tongue image and predicts a 3D embedding $\tilde{\mathbf{y}}$ , which is later transformed to the corresponding $\mathbf{p}_t$ parameters of the synthetic expression model, through the MLP between the two latent spaces. Finally, we apply the PCA model of the rigged head model on these $\mathbf{p}_t$ parameters to derive the final mesh of the head with the tongue expression. An overview of the methodology can be seen in Fig. 3.
|
| 128 |
+
|
| 129 |
+
# 3.3. TongueGAN for "in-the-wild" reconstruction
|
| 130 |
+
|
| 131 |
+
Although the pipeline presented in Section 3.2 provides a good estimation of the tongue pose in the test set of our collected data, it does not perform very well in "in-the-wild" images (Fig. 9). This behavior is expected because our collected data were captured in controlled conditions and the training of the encoder was carried out only with rendered images which do not fully mimic "in-the-wild" conditions. To make our approach robust in "in-the-wild" images too, we would need to further train the pipeline by using also such data. However, for "in-the-wild" collected images from the web, we do not have their corresponding 3D tongue point-clouds. As a result, to use "in-the-wild" data in the pipeline, we would first need to have a method that can learn the distribution of our collected 3D tongue data and generalize well.
|
| 132 |
+
|
| 133 |
+
Finding a method to generate novel 3D tongues is tricky. This is because of several unique properties of the human tongue: a) it is a highly deformable object, so we cannot register our collected data in a reference template and apply
|
| 134 |
+
|
| 135 |
+
relevant methods [6,31,35], b) it is a non-watertight surface (i.e., it contains holes) so we cannot also use any implicit function approximations methods [29, 32, 38] or volumetric approaches [19, 20, 45]. Therefore, having excluded the aforementioned categories, we decided to use GANs [15] for the 3D tongue surface generations.
|
| 136 |
+
|
| 137 |
+
In order to generate accurate point-clouds that correspond to certain tongue images, our GAN, dubbed as TongueGAN, needs to be guided by meaningful labels which can capture all the desired 3D surface information. These labels are provided by the trained point-cloud AE as described in Section 3. Since the generation is driven by labels, TongueGAN is a conditional one [30]. In particular, given a label denoted as $\mathbf{y}$ and a random Gaussian noise $\mathbf{z} \in \mathbb{R}^{128}$ , the generator $G$ produces a novel point-cloud point $G(\mathbf{z},\mathbf{y}) \in \mathbb{R}^3$ , which we denote as $\tilde{\mathbf{x}}_t$ , that belongs to the tongue surface represented by the label $\mathbf{y}$ . On the other hand, the discriminator $D$ receives as inputs the label $\mathbf{y}$ , a real point-cloud point $\mathbf{x}_t$ (which belongs to the tongue represented by the label $\mathbf{y}$ ) and the generator output $\tilde{\mathbf{x}}_t$ and tries to discriminate the fake (i.e., generated) from the real point. In the mathematical parlance, this is described as:
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
\begin{aligned} \mathcal{L}_D &= \mathbb{E}_{\mathbf{x}_t}\left[\log D\left(\mathbf{x}_t, \mathbf{y}\right)\right] - \mathbb{E}_{\tilde{\mathbf{x}}_t}\left[\log D\left(\tilde{\mathbf{x}}_t, \mathbf{y}\right)\right], \\ \mathcal{L}_G &= \mathbb{E}_{\tilde{\mathbf{x}}_t}\left[\log D\left(\tilde{\mathbf{x}}_t, \mathbf{y}\right)\right] \end{aligned} \tag{3}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
where $D$ tries to maximize $\mathcal{L}_D$ , whereas $G$ tries to minimize $\mathcal{L}_G$ . Please note that instead of generating whole point-clouds for every provided pair $(\mathbf{z},\mathbf{y})$ of noise and label, respectively, we merely generate a point corresponding to the surface which the label $\mathbf{y}$ represents. That confers several advantages in comparison to the rest of the methods in the literature, such as: a) we do not need to have in our training set point-clouds with the same number of points and as a result we can train our GAN without any data preprocessing on the raw point-clouds, which do not have a
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
a)
|
| 147 |
+
|
| 148 |
+

|
| 149 |
+
b)
|
| 150 |
+
|
| 151 |
+

|
| 152 |
+
c)
|
| 153 |
+
Figure 5. A visual description of the mechanism explained in Section 3.3.1. a) a raw tongue mesh along with a green section. b) 3D points (green color) that belong to the tongue surface and lie upon the section and with lighter green we depict the accepted areas from which points can be sampled. c) a zoomed-in area.
|
| 154 |
+
|
| 155 |
+
fixed number of points among them, b) when it comes to generating point-clouds corresponding to a particular label, we can generate on demand as many points as we want and, contrary to the rest of the literature, we are not constrained by any initially fixed resolution.
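As an illustration of point b), the snippet below shows how such a conditional generator could be queried for an arbitrary number of surface points for a single label. The call signature of G and the latent sizes follow the description above, but the function itself is a sketch, not the released code.

```python
import torch

def sample_tongue_points(G, y, num_points=4096, z_dim=128):
    """Generate a point cloud of arbitrary resolution for one tongue label y of shape (1, 256).
    Each generated point is an independent draw G(z, y) in R^3."""
    with torch.no_grad():
        z = torch.randn(num_points, z_dim)     # one noise vector per point
        y_rep = y.expand(num_points, -1)       # same label for every point
        return G(z, y_rep)                     # (num_points, 3)
```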
|
| 156 |
+
|
| 157 |
+
For the TongueGAN loss we chose the Wasserstein loss with Gradient Penalty (WGAN with GP) [16] due to its stability and good performance throughout the training process. As far as the architecture is concerned, we turned our attention to the recently proposed Π-Nets [7, 8], which are easy to implement. In particular, we use our own, custom modification of Π-Nets to accommodate the needs of our task. A graphical presentation of the network structure is provided in Fig. 4.
|
| 158 |
+
|
| 159 |
+
# 3.3.1 GAN loss for accurate surface approximation
|
| 160 |
+
|
| 161 |
+
Even though, as can be clearly seen in the experiments, our custom Π-Nets modification along with the WGAN-GP loss significantly improves upon the point-cloud GAN [23], there is still room for improvement, as the point-cloud representations are not ideally reconstructed (see Table 1).
|
| 162 |
+
|
| 163 |
+
We primarily attribute this to the strict behavior of the discriminator in GANs (i.e., deciding in our case whether a generated point matches exactly a point of the target point-cloud). This rigidity, especially in the early steps of the training process, is not very helpful, as the generator struggles to learn the real distribution of the point-clouds (i.e., all of the generated points are discarded as fake by the discriminator with high confidence). To remedy this, we slightly softened the discriminator, especially in the initial steps, by slightly modifying the real points fed to it. To achieve this, instead of directly feeding a real point $\mathbf{x}_t$ corresponding to a label $\mathbf{y}$ to the discriminator, we feed the following:
|
| 164 |
+
|
| 165 |
+
$$
|
| 166 |
+
\mathbf{x}_{t'|\mathbf{y}} \sim \mathcal{N}\left(\mathbf{x}_{t|\mathbf{y}}, \sigma_e \mathbf{I}\right) \tag{4}
|
| 167 |
+
$$
|
| 168 |
+
|
| 169 |
+
where $\mathcal{N}(\mathbf{x}_{t|\mathbf{y}},\sigma_e)$ is a multi-variate normal distribution with mean $\mathbf{x}_t$ and (isotropic) variance $\sigma_{e}$ . The variance $\sigma_{e}$ is not dependent on the label $\mathbf{y}$ . It is only dependent on the epoch $e$ . By employing (4), especially when the training process commences, the generator can better learn the actual distribution as it does not get severely punished by the discriminator when it slightly misses out the actual surface (see the accompanying Fig. 5 for a visual understanding). As can be also seen in the experiments, this addition yields better results and stabilizes the training even further. We begin the training with a relatively small value for the variance and further linearly reduce it as we go along with the training process till it basically becomes zero towards the final epochs. This is verified empirically in Section 4.
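A sketch of this softening, using one reading of the schedule reported in Section 4 (start at $5e{-}3$ and shrink by 10% of the initial value every $50\times 10^3$ steps), is given below. It is an interpretation of Eq. (4) with our own names, not the authors' training loop.

```python
import torch

def soften_real_points(x_real, step, sigma0=5e-3, decay=0.10, every=50_000):
    """Eq. (4): jitter real surface points with isotropic Gaussian noise whose
    variance depends only on the training step, so the discriminator is lenient
    early on and strict towards the end."""
    sigma_e = sigma0 * max(0.0, 1.0 - decay * (step // every))
    # sigma_e is the (isotropic) variance, so the noise std is its square root
    return x_real + (sigma_e ** 0.5) * torch.randn_like(x_real)
```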
|
| 170 |
+
|
| 171 |
+
# 3.3.2 Re-training the pipeline
|
| 172 |
+
|
| 173 |
+
After the training is complete, we use the trained generator together with the trained encoder from Section 3.2. We create "in-the-wild" pairs of 2D/3D data as follows: we feed the 2D "in-the-wild" image to the encoder and get the label $\tilde{\mathbf{y}}$ . We then use this label $\tilde{\mathbf{y}}$ and the generator to produce a 3D point-cloud of the input image. As we can see in Fig. 8, although TongueGAN is trained only on our collected data, it is able to generalize very well in "in-the-wild" images and as a result we can use it to create 2D/3D tongue pairs. We apply this process to a number of "in-the-wild" images to create multiple pairs. Finally, we re-train the pipeline in Section 3.2, using also the aforementioned pairs.
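Put together, creating a pseudo ground-truth pair for one "in-the-wild" image reduces to a few lines. The names encoder, G and the point count below refer to the earlier sketches and are illustrative assumptions rather than the authors' interface.

```python
import torch

def make_wild_pair(image, encoder, G, num_points=4096):
    """Create a 2D/3D training pair for an 'in-the-wild' tongue image: encode the
    image to a label y_tilde, then sample a point cloud from the trained generator
    conditioned on that label."""
    with torch.no_grad():
        y_tilde = encoder(image.unsqueeze(0))            # (1, 256)
        z = torch.randn(num_points, 128)
        cloud = G(z, y_tilde.expand(num_points, -1))     # (num_points, 3)
    return image, cloud
```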
|
| 174 |
+
|
| 175 |
+
# 4. Experiments
|
| 176 |
+
|
| 177 |
+
In this section we provide details regarding the training and we outline a series of quantitative as well as qualitative experiments under control and "in-the-wild" conditions.
|
| 178 |
+
|
| 179 |
+
The MLP utilized for the regression between the labels $\tilde{\mathbf{y}}$ and the PCA parameters $\mathbf{p}_t$ in Section 3.2 has a structure of (256, 128, 110) with a ReLU activation in the intermediate layers. The hyper-parameters of (2) which balance the losses are $\lambda_1 = 1.2$ , $\lambda_2 = 1.6e - 4$ , $\lambda_3 = 0.4$ , $\lambda_4 = 0.2$ , $\lambda_5 = 0.8$ and $\lambda_6 = 1.5$ . As described in Section 3.3, for TongueGAN we used a variant of WGAN with GP [16], which includes the injection mechanism [8], as well as the surface loss function presented in Section 3.3.1. More specifically, we utilized a 9-layer Generator $(G)$ and an 8-layer Discriminator $(D)$ with a total number of parameters of about $8\times 10^{6}$ and $4\times 10^{6}$ , respectively. We trained TongueGAN using the Adam optimizer [21] with $(\beta_{1} = 0,\beta_{2} = 0.9)$ . We also trained with a batch size of 2048 for a total of $10^{6}$ iterations. Following the idea introduced in [18], we use individual learning rates for D and G
|
| 180 |
+
|
| 181 |
+

|
| 182 |
+
Figure 6. Various raw 3D tongue scans of our database depicting different tongue expressions along with the corresponding 2D renders.
|
| 183 |
+
|
| 184 |
+
with values of $1e - 4$ and $1e - 5$ , respectively. Finally, we start training with the variance $\sigma_{e}$ in (4) being $5e - 3$ and we linearly decrease it by $10\%$ every $50 \times 10^{3}$ steps. The exact network structures are deferred to the supplementary material with more details.
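Under the reported settings, the two-time-scale optimizer setup of [18] could be written as below; D and G stand for the discriminator and generator modules and are placeholders.

```python
import torch

def make_optimizers(D, G):
    """Two time-scale update rule [18]: Adam with (beta1, beta2) = (0, 0.9) and
    individual learning rates of 1e-4 for D and 1e-5 for G, as reported above."""
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.9))
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-5, betas=(0.0, 0.9))
    return opt_D, opt_G
```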
|
| 185 |
+
|
| 186 |
+
# 4.1. 3D reconstruction in control conditions
|
| 187 |
+
|
| 188 |
+
In this set of experiments, we used $90\%$ of TongueDB for training and the rest for testing. Due to the intricacy of the tongue as a surface (as we explained in detail in Section 3.3), we decided to use a GAN for the tongue surface generation part. Moreover, an extra reason to utilize a GAN for the training is the fact that it is able to generalize very well in unseen labels during testing. To the best of our knowledge, the only method which has been introduced in the literature and is able to carry out point-cloud generations based on unseen labels is PointCloud GAN (PC-GAN) [23]. Consequently, in what follows we draw comparisons against PC-GAN [23] and another two variants of TongueGAN, namely: a) TongueGAN_v1, which is the same as TongueGAN with the only difference being that the novel loss function (Section 3.3.1) is not available in this version, and b) TongueGAN_v2, which is the typical GAN structure where, instead of the injections we have simple concatenations along the layers. Finally, we also report the results for the regressed tongue expression of the shape model after retraining the pipeline (3.2) as explained in Section 3.3.2. (referred to as Tongue-Reg). For this, we only take into account a small patch around the oral cavity defined by the ground truth point cloud, in order to deduce a correct error.
|
| 189 |
+
|
| 190 |
+
Quantitative results are provided in Table 1 and qualitative results are presented in Fig. 7. For the quantitative results, we utilize the test set of our TongueDB and we measure the error based on the two commonly used types of distances when it comes to unordered 3D data, namely Chamfer Distance (CD) and Earth Mover's Distance (EMD) [13]. As can be clearly seen in all of the comparisons, TongueGAN outperforms the compared methods by a large margin
|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
a)
|
| 194 |
+
|
| 195 |
+

|
| 196 |
+
b)
|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
c)
|
| 200 |
+
|
| 201 |
+

|
| 202 |
+
d)
|
| 203 |
+
Figure 7. Qualitative comparisons on various point-clouds from the test set of TongueDB: a) input images, b) ground-truth point-clouds, c) TongueGAN point-clouds and finally d) point-cloud GAN [23] point-clouds. As can be seen, TongueGAN produces a more accurate representation of the 3D tongue surface.
|
| 204 |
+
|
| 205 |
+
Table 1. Quantitative comparisons among the compared methods using CD and EMD as metrics. Lower values indicate better performance. TongueGAN achieves the best results in all settings.
|
| 206 |
+
|
| 207 |
+
<table><tr><td>Method</td><td>EMD</td><td>CD</td></tr><tr><td>TongueGAN</td><td>1.62e-2</td><td>5.25e-5</td></tr><tr><td>Tongue-Reg</td><td>1.79e-2</td><td>1.10e-4</td></tr><tr><td>PC-GAN</td><td>1.82e-2</td><td>1.13e-4</td></tr><tr><td>TongueGAN_v1</td><td>1.97e-2</td><td>1.67e-4</td></tr><tr><td>TongueGAN_v2</td><td>2.24e-2</td><td>2.09e-4</td></tr></table>
|
| 208 |
+
|
| 209 |
+
whereas Tongue-Reg outperforms the rest of the methods.
|
| 210 |
+
|
| 211 |
+
# 4.2. 3D reconstruction in "in-the-wild" conditions
|
| 212 |
+
|
| 213 |
+
In this Section, we attempt to reconstruct the 3D surface of the tongue together with the entire head structure from "in-the-wild" images. In this set of experiments, we used all of TongueDB for training. We also added to our training data another $5K$ "in-the-wild" tongue images and created their 3D point-clouds using TongueGAN. Using all these data, we re-trained the pipeline according to Section 3.3.2. The results are only visual as we do not have ground-truth point-cloud data to report quantitative comparisons. Regarding the comparisons, we should note that PointCloud GAN [23] cannot be used in these experiments, as in order to work in the conditional setting, it needs as input the actual ground-truth point-cloud it attempts to reconstruct, something which we do not have at our disposal. Given that the TongueGAN variations (i.e., TongueGAN_v1 and TongueGAN_v2) perform worse than PointCloud GAN [23], we only present results in this Section regarding TongueGAN.
|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
Figure 8. 3D head reconstructions with tongue animations from "in-the-wild" images. From left to right, we depict the "in-the-wild" image, then the point-cloud generations from two viewpoints and finally the 3D head reconstruction with a zoomed-in area around the oral cavity.
|
| 217 |
+
|
| 218 |
+
Since our tongue regression method is based on the mean template mesh of the UHM, we can easily utilize the pipeline presented in [33] in order to extend our approach to a particular facial identity. We begin by fitting a facial mesh to the image domain in order to get the 2D/3D landmarks and the identity of the subject, and then we regress to the full head topology based on the UHM model. After reconstructing the head shape, we crop the image around the projected 2D mouth landmarks. We then feed this cropped image to the re-trained pipeline and get the mean head shape with the tongue expression as mentioned in Section 3. Finally, we merge the predicted tongue shape with the associated identity by treating the predicted tongue expression as a separate blend-shape. Some 3D reconstructions can be seen in Fig. 8. As evidenced, our pipeline is able to accurately reconstruct the 3D tongue details even in "in-the-wild" conditions. Additional tongue reconstructions of our method before and after the re-training framework against state-of-the-art methods can be seen in Fig. 9. To further empirically validate that TongueGAN is able to capture the 3D structures of random tongues that are not included in the training set, we provide linear interpolations between unseen latent features in the supplementary material.
|
| 219 |
+
|
| 220 |
+
# 5. Conclusion and Limitations
|
| 221 |
+
|
| 222 |
+
In this work, we presented the first pipeline which is able to perform 3D head and tongue reconstruction from a single image. To achieve this, we collected the first diverse tongue dataset with various tongue shapes and positions which we make publicly available to the research community. To also make this pipeline robust in "in-the-wild" images and to mitigate the absence of their corresponding ground-truth 3D tongue data, we introduced the first GAN method that is tailored for accurately reconstructing the 3D surface of
|
| 223 |
+
|
| 224 |
+

|
| 225 |
+
Input Image
|
| 226 |
+
|
| 227 |
+

|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
Figure 9. Qualitative shape evaluation between our approach and the state-of-the-art methods of facial [14] (GANFit) and head [33] (UHM) reconstructions. We can easily deduce that the re-training framework plays an important role in the final reconstruction.
|
| 231 |
+
|
| 232 |
+

|
| 233 |
+
GANFit
|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
|
| 237 |
+

|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
UHM
|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
|
| 244 |
+

|
| 245 |
+
|
| 246 |
+

|
| 247 |
+
Ours
|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
|
| 253 |
+

|
| 254 |
+
Ours re-training
|
| 255 |
+
|
| 256 |
+

|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
|
| 260 |
+
a tongue from 2D images. As we show in a series of experiments, we are now able to accurately carry out 3D head reconstruction together with the tongue from a single image and thus create more realistic 3D avatars. However, people with extreme mouth and tongue poses in combination with images taken from difficult angles will cause failures in: i) the detection of the face, ii) the mouth landmark localization, iii) the regression of the tongue expression. Extreme tongue expressions such as tongue folding in between the teeth and inward/sideways twisting or rolling are not possible to reconstruct.
|
| 261 |
+
|
| 262 |
+
Acknowledgements: S. Zafeiriou acknowledges funding from the EPSRC Fellowship DEFORM (EP/S010203/1) and S. Moschoglou from the EPSRC DTA (EP/N509486/1).
|
| 263 |
+
|
| 264 |
+
# References
|
| 265 |
+
|
| 266 |
+
[1] Pascal Bérard, Derek Bradley, Markus Gross, and Thabo Beeler. Lightweight eye capture using a parametric model. ACM Transactions on Graphics (TOG), 35(4):1-12, 2016. 2
|
| 267 |
+
[2] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In Proc 26th annual conf on Computer Graphics and Interactive Techniques, pages 187-194, 1999. 2
|
| 268 |
+
[3] James Booth, Epameinondas Antonakos, Stylianos Ploumpis, George Trigeorgis, Yannis Panagakis, and Stefanos Zafeiriou. 3d face morphable models "in-the-wild". In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5464-5473. IEEE, 2017. 2
|
| 269 |
+
[4] James Booth, Anastasios Roussos, Evangelos Ververas, Epameinondas Antonakos, Stylianos Ploumpis, Yannis Panagakis, and Stefanos Zafeiriou. 3d reconstruction of "in-the-wild" faces in images and videos. IEEE transactions on pattern analysis and machine intelligence, 40(11):2638-2652, 2018. 2
|
| 270 |
+
[5] James Booth, Anastasios Roussos, Stefanos Zafeiriou, Allan Ponniah, and David Dunaway. A 3d morphable model learnt from 10,000 faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5543-5552, 2016. 2
|
| 271 |
+
[6] Giorgos Bouritsas, Sergiy Bokhnyak, Stylianos Ploumpis, Michael Bronstein, and Stefanos Zafeiriou. Neural 3d morphable models: Spiral convolutional networks for 3d shape representation learning and generation. In Proceedings of the IEEE International Conference on Computer Vision, pages 7213-7222, 2019. 5
|
| 272 |
+
[7] Grigorios Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Jiankang Deng, Yannis Panagakis, and Stefanos Zafeiriou. Deep polynomial neural networks. arXiv preprint arXiv:2006.13026, 2020. 6
|
| 273 |
+
[8] Grigorios G Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Yannis Panagakis, Jiankang Deng, and Stefanos Zafeiriou. Π-nets: Deep polynomial neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7325-7335, 2020. 6
|
| 274 |
+
[9] Hang Dai, Nick Pears, and William Smith. A data-augmented 3d morphable model of the ear. In Automatic Face & Gesture Recognition (FG 2018), 2018 13th IEEE International Conference on, pages 404-408. IEEE, 2018. 2
|
| 275 |
+
[10] Hang Dai, Nick Pears, William Smith, and Christian Duncan. A 3d morphable model of craniofacial shape and texture variation. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 3104-3112, 2017. 2, 3
|
| 276 |
+
[11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 4
|
| 277 |
+
[12] Bernhard Egger, William AP Smith, Ayush Tewari, Stefanie Wuhrer, Michael Zollhoefer, Thabo Beeler, Florian Bernard, Timo Bolkart, Adam Kortylewski, Sami Romdhani, et al. 3d morphable face models - past, present, and future. ACM Transactions on Graphics (TOG), 39(5):1-38, 2020. 1
|
| 278 |
+
|
| 279 |
+
[13] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 605-613, 2017. 4, 7
|
| 280 |
+
[14] Baris Gecer, Stylianos Ploumpis, Irene Kotsia, and Stefanos Zafeiriou. Ganfit: Generative adversarial network fitting for high fidelity 3d face reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1155-1164, 2019. 1, 2, 8
|
| 281 |
+
[15] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680, 2014. 5
|
| 282 |
+
[16] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767-5777, 2017. 6
|
| 283 |
+
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 4
|
| 284 |
+
[18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, pages 6626-6637, 2017. 6
|
| 285 |
+
[19] Aaron S Jackson, Adrian Bulat, Vasileios Argyriou, and Georgios Tzimiropoulos. Large pose 3d face reconstruction from a single image via direct volumetric cnn regression. In Proceedings of the IEEE International Conference on Computer Vision, pages 1031-1039, 2017. 5
|
| 286 |
+
[20] Aaron S Jackson, Chris Manafas, and Georgios Tzimiropoulos. 3d human body reconstruction from a single image via volumetric regression. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0-0, 2018. 5
|
| 287 |
+
[21] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
|
| 288 |
+
[22] Alexandros Lattas, Stylianos Moschoglou, Baris Gecer, Stylianos Ploumpis, Vasileios Triantafyllou, Abhijeet Ghosh, and Stefanos Zafeiriou. Avatarme: Realistically renderable 3d facial reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 760-769, 2020. 1, 2
|
| 289 |
+
[23] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, and Ruslan Salakhutdinov. Point cloud gan. arXiv preprint arXiv:1810.05795, 2018. 6, 7
|
| 290 |
+
[24] Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9397-9406, 2018. 4
|
| 291 |
+
[25] Ruilong Li, Karl Bladin, Yajie Zhao, Chinmay Chinara, Owen Ingraham, Pengda Xiang, Xinglei Ren, Pratusha Prasad, Bipin Kishore, Jun Xing, et al. Learning formation of physically-based face attributes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3410-3419, 2020. 2
|
| 292 |
+
|
| 293 |
+
[26] Tianye Li, Timo Bolkart, Michael J Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4d scans. ACM Trans. Graph., 36(6):194-1, 2017. 2, 3
|
| 294 |
+
[27] Celong Liu and Xin Li. Superimposition-guided facial reconstruction from skull. arXiv preprint arXiv:1810.00107, 2018. 2
|
| 295 |
+
[28] Dennis Madsen, Marcel LΓΌthi, Andreas Schneider, and Thomas Vetter. Probabilistic joint face-skull modelling for facial reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5295-5303, 2018. 2
|
| 296 |
+
[29] Mateusz Michalkiewicz, Jhony K Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Deep level sets: Implicit surface representations for 3d shape inference. arXiv preprint arXiv:1901.06802, 2019. 5
|
| 297 |
+
[30] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 5
|
| 298 |
+
[31] Stylianos Moschoglou, Stylianos Ploumpis, and Mihalis Nicolaou. 3dfacegan: Adversarial nets for 3d face representation, generation, and translation. 5
|
| 299 |
+
[32] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. arXiv preprint arXiv:1901.05103, 2019. 5
|
| 300 |
+
[33] Stylianos Ploumpis, Evangelos Ververas, Eimear O'Sullivan, Stylianos Moschoglou, Haoyang Wang, Nick Pears, William Smith, Baris Gecer, and Stefanos P Zafeiriou. Towards a complete 3d morphable model of the human head. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2, 3, 8
|
| 301 |
+
[34] Stylianos Ploumpis, Haoyang Wang, Nick Pears, William AP Smith, and Stefanos Zafeiriou. Combining 3d morphable models: A large scale face-and-head model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10934-10943, 2019. 2
|
| 302 |
+
[35] Rolandos Alexandros Potamias, Jiali Zheng, Stylianos Ploumpis, Giorgos Bouritsas, Evangelos Ververas, and Stefanos Zafeiriou. Learning to generate customized dynamic 3d facial expressions. arXiv preprint arXiv:2007.09805, 2020. 5
|
| 303 |
+
[36] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J Black. Generating 3d faces using convolutional mesh autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), pages 704-720, 2018. 2, 3
|
| 304 |
+
[37] Nataniel Ruiz, Barry-John Theobald, Anurag Ranjan, Ahmed Hussein Abdelaziz, and Nicholas Apostoloff. Morphgan: One-shot face synthesis gan for detecting recognition bias. arXiv e-prints, pages arXiv-2012, 2020. 1
|
| 305 |
+
[38] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. arXiv preprint arXiv:1905.05172, 2019. 5
|
| 306 |
+
[39] Ayush Tewari, Michael ZollhΓΆfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick PΓ©rez, and Christian Theobalt. Self-supervised multi-level face model learning
|
| 307 |
+
|
| 308 |
+
for monocular reconstruction at over 250 Hz. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2549-2559, 2018. 1, 2
|
| 309 |
+
[40] Zdravko Velinov, Marios Papas, Derek Bradley, Paulo Gotta, Parsa Mirdeghan, Steve Marschner, Jan Novak, and Thabo Beeler. Appearance capture and modeling of human teeth. ACM Transactions on Graphics (TOG), 37(6):1-13, 2018. 2
|
| 310 |
+
[41] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 52-67, 2018. 4
|
| 311 |
+
[42] Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, and Andreas Bulling. A 3d morphable eye region model for gaze estimation. In European Conference on Computer Vision, pages 297-313, 2016. 2
|
| 312 |
+
[43] Chenglei Wu, Derek Bradley, Pablo Garrido, Michael ZollhΓΆfer, Christian Theobalt, Markus H Gross, and Thabo Beeler. Model-based teeth reconstruction. ACM Trans. Graph., 35(6):220-1, 2016. 2
|
| 313 |
+
[44] Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1-10, 2020. 1, 2
|
| 314 |
+
[45] Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, and Yebin Liu. Deephuman: 3d human reconstruction from a single image. In Proceedings of the IEEE International Conference on Computer Vision, pages 7739-7749, 2019. 5
|
| 315 |
+
[46] Yuxiang Zhou and Stefanos Zafeiriou. Deformable models of ears in-the-wild for alignment and recognition. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages 626-633. IEEE, 2017. 2
|
| 316 |
+
[47] Reza Zolfaghari, Nicolas Epain, Craig T. Jin, Joan Glaunes, and Anthony Tew. Generating a morphable model of ears. pages 1771-1775. IEEE, Mar 2016. 2
|
3dhumantonguereconstructionfromsingleinthewildimages/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2dd21cea443f2241152f8a6197a18935c1e9db8c6af4f4d8d2d9cbc0555fc119
|
| 3 |
+
size 472825
|
3dhumantonguereconstructionfromsingleinthewildimages/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:eddd8cb24b6392960f4e21c2294c6f24adef9aa065435297381cb6c24c9174c1
|
| 3 |
+
size 403183
|
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:45993bb987580d39ddf87adfb44cbd3b4408e37be0011afb45c893295f5d088e
|
| 3 |
+
size 74774
|
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0883db34e38520229da624a2fcbfb37583626bf0197ea923215917a829df16dc
|
| 3 |
+
size 91576
|
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/8c2f9c65-7ec2-4bc0-9267-6b4f35e52097_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cf8adecbe6e94097ac1150dac5df422d4f4a758652e41050db9b0e5d9702a320
|
| 3 |
+
size 887606
|
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/full.md
ADDED
|
@@ -0,0 +1,245 @@
| 1 |
+
# 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds
|
| 2 |
+
|
| 3 |
+
Daigang Cai<sup>1</sup>, Lichen Zhao<sup>1</sup>, Jing Zhang<sup>β </sup>, Lu Sheng<sup>1</sup>, Dong Xu<sup>2</sup>
|
| 4 |
+
<sup>1</sup>College of Software, Beihang University, China, <sup>2</sup>The University of Sydney, Australia
|
| 5 |
+
{caidaigang, zlc114, zhang_jing, lsheng}@buaa.edu.cn, dong.xu@sydney.edu.au
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Observing that the 3D captioning task and the 3D grounding task contain both shared and complementary information in nature, in this work, we propose a unified framework to jointly solve these two distinct but closely related tasks in a synergistic fashion, which consists of both shared task-agnostic modules and lightweight task-specific modules. On one hand, the shared task-agnostic modules aim to learn precise locations of objects, fine-grained attribute features to characterize different objects, and complex relations between objects, which benefit both captioning and visual grounding. On the other hand, by casting each of the two tasks as the proxy task of another one, the lightweight task-specific modules solve the captioning task and the grounding task respectively. Extensive experiments and ablation study on three 3D vision and language datasets demonstrate that our joint training framework achieves significant performance gains for each individual task and finally improves the state-of-the-art performance for both captioning and grounding tasks.
|
| 10 |
+
|
| 11 |
+
# 1. Introduction
|
| 12 |
+
|
| 13 |
+
There is increasing research interest in the intersection field between 3D visual understanding and natural language processing, such as 3D dense captioning [9] and 3D visual grounding [1, 7, 21, 50]. These two tasks push the advance of the intersection field along different directions (i.e., from vision to language versus from language to vision), and encouraging progress has been achieved by separately solving each task. It still remains an open issue on whether it is possible to develop a unified framework to jointly solve the two closely related tasks in a synergistic fashion.
|
| 14 |
+
|
| 15 |
+
We observe that the two 3D vision-language tasks contain both shared and complementary information in nature, and it is possible to enhance the performance of both tasks if we treat one task as a proxy task of the other. On one
|
| 16 |
+
|
| 17 |
+
hand, each of the two tasks can be decomposed into several sub-tasks, and some of these sub-tasks share the common objectives and network structures. For example, as shown in the previous vision-language works [1,7,9,21,44,47,50] on RGB-D scans, both 3D dense captioning and 3D visual grounding require: 1) a 3D object detector to detect the salient object proposals in a 3D scene, 2) a relation modeling module to model complex 3D relations among these detected objects, and 3) a multi-modal learning module to learn fused information from both visual features and textual features to generate sentences or produce bounding boxes based on each input sentence. On the other hand, the opposite procedures are also used to separately solve the two problems, namely, the captioning task is to generate a meaningful textual description from the detected boxes (i.e., from vision to language), while the grounding task is to locate the desired box by understanding a given textual description (i.e., from language to vision).
|
| 18 |
+
|
| 19 |
+
Moreover, the 3D point clouds generated from RGB-D scans often contain rich and complex relations among different objects, while the corresponding RGB data provides more fine-grained attribute information, such as color, texture, and materials. Thus, the RGB-D scans intrinsically contain rich and abundant attribute and relation information for enhancing both 3D captioning and 3D grounding tasks. However, we empirically observe that the 3D dense captioning task is more object-oriented, which tends to learn more attribute information of the target objects (i.e., the objects of interest) in a scene and only the primary relationship between the target object and its surrounding objects. In contrast, the 3D visual grounding task is more relation-oriented, which focuses more on the relations between objects and distinguishes different objects (especially the objects from the same class) based on their relations. Thus, it is desirable to develop a joint framework to unify both 3D dense captioning and 3D visual grounding tasks and take advantage of each other for improving the performance of both tasks.
|
| 20 |
+
|
| 21 |
+
To this end, in this work, we propose a joint framework by unifying the distinct but closely related 3D vision-
|
| 22 |
+
|
| 23 |
+
language tasks of 3D dense captioning and 3D visual grounding. Specifically, the proposed framework consists of three main modules: (1) a 3D object detector, (2) an attribute and relation-aware feature enhancement module, and (3) a task-specific grounding or captioning head. Specifically, the 3D object detector and the feature enhancement module are task-agnostic, which are designed for collaboratively supporting both captioning and grounding tasks. The two modules output the object proposals as the initial localization results of the potential objects in a scene, as well as the improved features within the proposals by integrating both attribute information from each object proposal and the complex relations between multiple proposals. With the strong task-agnostic modules, the task-specific captioning head and grounding head are designed as lightweight networks for dealing with each task, which consist of a lightweight transformer-based module together with simple preprocessing modules (i.e., the Query/Key/Value generation modules) and lightweight postprocessing modules (i.e., the word prediction or bounding box selection module). In this way, the 3D captioning and 3D visual grounding tasks can be cast as the proxy task of each other. In other words, the more object-oriented captioning task can provide more attribute information to potentially improve the grounding performance, while the more relation-oriented grounding task can help improve the captioning results by enhancing the captioning task with more relation information. Moreover, our joint framework also inspires the insights of the design of each individual captioning network and grounding network.
|
| 24 |
+
|
| 25 |
+
The contribution of this work is two-fold: (1) By analyzing both 3D dense captioning and 3D visual grounding tasks, we propose a unified framework to jointly solve the two distinct but closely related tasks by using our simple and strong network structure, which consists of a task-agnostic module with a 3D object detector and an attribute and relation-aware feature enhancement module, and two lightweight task-specific modules (i.e., a captioning head and a grounding head). (2) Extensive experiments conducted on three benchmark datasets ScanRefer [7], Scan2Cap [9], and Nr3D dataset [1] demonstrate our joint framework achieves the state-of-the-art results for both 3D dense captioning and 3D visual grounding tasks.
|
| 26 |
+
|
| 27 |
+
# 2. Related Work
|
| 28 |
+
|
| 29 |
+
2D Vision and Language tasks. Deep learning technologies have been extensively studied in various 2D vision and language tasks, such as visual grounding [15, 26, 35, 45], image captioning & dense captioning [2, 11, 17, 18, 42], visual question answering [2, 4, 43] and text-to-image generation [25]. These impactful research problems advance the intersection research field between computer vision and natural language processing. With the rapid development of
|
| 30 |
+
|
| 31 |
+
deep learning, researchers introduced several collaborative methods (e.g., speaker-listener models [3, 46]) to solve various 2D vision and language tasks jointly. However, these models focus on 2D image-based tasks, while our method focuses on RGB-D-based tasks, where different types of data to be handled in our work require different network design strategies. Specifically, we propose a carefully designed task-agnostic feature enhancement module and the lightweight task-specific captioning and grounding heads, which all build upon the transformer architecture. Recently, several joint frameworks [8,23,24,27,41,48] focus on learning more generalizable image-text representations through a cumbersome model (e.g., VilBERT [23]) by using abundant and diverse 2D vision and language datasets. By contrast, based on in-depth analysis of the intrinsic properties of RGB-D scans and the characteristics of both 3D captioning and grounding tasks, our carefully designed joint learning framework with lightweight modules can effectively solve both tasks in a synergistic fashion without relying on a huge amount of paired training data.
|
| 32 |
+
|
| 33 |
+
3D Dense Captioning and Visual Grounding. Deep learning on 3D data has attracted a great deal of interest [10, 13, 20, 22, 32-34, 39, 40, 49, 51]. Recently, dense captioning and visual grounding tasks tailored to 3D data have been proposed. For example, prior work [9] proposed a 3D dense captioning method and achieved impressive results by explicitly modeling the relations between different objects. However, the dense captioning task is more object-oriented: it often focuses on precise attribute descriptions based on object appearance, so the complex 3D geometrical relations among different objects might be ignored (even though they are intrinsically contained in the 3D data). As a result, the generated captions may be monotonous.
|
| 34 |
+
|
| 35 |
+
Except for 3D dense captioning, visual grounding on 3D point clouds [1, 7, 14, 16, 44, 47, 50] has also attracted increasing research interest. Chen et al. [7] introduced the ScanRefer dataset for localizing objects by using natural language descriptions. Most recent 3D visual grounding methods [7, 16, 47] are composed of two stages. In the first stage, a 3D object detector or a panoptic segmentation model is applied to generate the target object proposals from the input scenes. In the second stage, a referring module is used to match the most relevant regions from the selected object proposals and the query sentences. These methods mainly focus on how to model the complex relations based on the object detection results, and pay less attention to the appearance features that characterize different objects, especially the objects within the same class. In other words, the current grounding methods are more relation-oriented.
|
| 36 |
+
|
| 37 |
+
Our joint framework takes advantage of the overlooked attribute information in the grounding task through the help of the more object-oriented captioning task, and employs
|
| 38 |
+
|
| 39 |
+
the relatively less explored relation information in the captioning task to increase the variety of generated sentences with the help of the more relation-oriented grounding task.
|
| 40 |
+
|
| 41 |
+
# 3. Methodology
|
| 42 |
+
|
| 43 |
+
In this section, we describe the technical details of our framework. As shown in Fig. 1(a), our framework consists of three modules: 1) the object detection module, 2) the attribute and relation-aware feature enhancement module, and 3) the task-specific captioning head and grounding head. The object detection module and feature enhancement module are task-agnostic and shared by both tasks. The captioning and grounding heads are task-specific with the lightweight transformer-based network structures for the captioning and grounding tasks, respectively. Specifically, the point clouds are encoded by the VoteNet [31] object detection module with an improved bounding box modeling method to more precisely locate the salient objects and produce the initial object proposals. Then the proposal features are enhanced through a task-agnostic attribute and relation-aware feature enhancement module to generate the enhanced object proposals. The enhanced object proposals are then fed into the captioning head and grounding head for the dense captioning task and the visual grounding task, respectively, which generate the final result for each task.
|
| 44 |
+
|
| 45 |
+
# 3.1. Detection Module
|
| 46 |
+
|
| 47 |
+
The input of the detection module is the point cloud $\pmb{P} \in \mathbb{R}^{N \times (3 + K)}$ , which represents the whole 3D scene by $N$ 3D coordinates together with $K$ -dimensional auxiliary features. Here, we adopt the same 132-dimensional auxiliary features as in [7, 9], which include the pretrained 128-dimensional multi-view appearance features [7], 3-dimensional normals, and 1-dimensional height of each point above the ground.
|
| 48 |
+
|
| 49 |
+
We use VoteNet [31] as our detection module. Since the success of both captioning and grounding tasks relies on precise localization of initial object proposals together with discriminative features, we borrow the idea from the anchor-free FCOS method [36] to generate the initial object proposals by predicting the distance between the voting point and each side of the object proposal.
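As a concrete illustration of this boundary-based box parameterization, the sketch below decodes an axis-aligned 3D box from a voting point and six predicted per-side distances in the spirit of FCOS [36]; the function name, tensor shapes, and (center, size) output format are our own assumptions rather than the authors' implementation.

```python
import torch

def decode_boxes_from_boundaries(vote_xyz: torch.Tensor,
                                 side_dist: torch.Tensor) -> torch.Tensor:
    """Decode axis-aligned 3D boxes from voting points and per-side distances.

    vote_xyz:  (M, 3) voting-point coordinates.
    side_dist: (M, 6) non-negative distances to the -x, +x, -y, +y, -z, +z faces.
    Returns:   (M, 6) boxes encoded as (center_x, center_y, center_z, size_x, size_y, size_z).
    """
    neg = side_dist[:, 0::2]          # (M, 3) distances towards the -x/-y/-z faces
    pos = side_dist[:, 1::2]          # (M, 3) distances towards the +x/+y/+z faces
    box_min = vote_xyz - neg
    box_max = vote_xyz + pos
    center = 0.5 * (box_min + box_max)
    size = box_max - box_min
    return torch.cat([center, size], dim=-1)

# toy usage with random votes and distances
boxes = decode_boxes_from_boundaries(torch.rand(8, 3), torch.rand(8, 6))
```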
|
| 50 |
+
|
| 51 |
+
# 3.2. Attribute and Relation-aware Feature Enhancement Module
|
| 52 |
+
|
| 53 |
+
The initial object proposal features produced by the detection module are discriminative with respect to different object classes, thanks to the detection-related loss. However, they are unaware of the fine-grained object attributes (e.g., object positions, colors, and materials), especially for the within-class objects, and the complex relations among different objects, which are the key to the success of both 3D captioning and 3D grounding tasks. Hence, we further
|
| 54 |
+
|
| 55 |
+
propose an attribute and relation-aware feature enhancement module to strengthen the features for each proposal and better model the relations between proposals. Motivated by the Transformer encoder structure [37], we model the proposal feature enhancement module as two multi-head self-attention layers with an additional attribute encoding module and a relation encoding module, each of which is composed of several fully connected layers.
|
| 56 |
+
|
| 57 |
+
The attribute encoding module. To aggregate the attribute features and the initial object features, we encode the auxiliary bounding box attribute related features (i.e., a 155-dimensional feature via a concatenation operation on the 27-dimensional box center and corner coordinates, and the 128-dimensional multi-view RGB features that potentially contain the attribute information such as colors and materials) into a 128-dimensional attribute embedding by using a fully connected layer. The attribute embedding has the same dimension as the initial object proposal features. It can then be added to the initial proposal features to enhance the initial object features with more attribute information.
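A minimal sketch of this attribute encoding step is given below, assuming a 27-dimensional box center/corner vector and a 128-dimensional multi-view feature per proposal; the class and argument names are illustrative, not taken from the released code.

```python
import torch
import torch.nn as nn

class AttributeEncoding(nn.Module):
    """Encode per-proposal attribute cues and add them to the proposal features.

    Input attributes: 27-dim box center/corner coordinates concatenated with a
    128-dim multi-view appearance feature (155 dims in total), projected to the
    same 128-dim space as the initial proposal features.
    """
    def __init__(self, attr_dim: int = 155, feat_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(attr_dim, feat_dim)

    def forward(self, proposal_feat, box_geom, multiview_feat):
        # proposal_feat: (B, M, 128), box_geom: (B, M, 27), multiview_feat: (B, M, 128)
        attr = torch.cat([box_geom, multiview_feat], dim=-1)   # (B, M, 155)
        return proposal_feat + self.proj(attr)                 # attribute-enhanced features
```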
|
| 58 |
+
|
| 59 |
+
The relation encoding module. Motivated by [50], we also encode the pairwise distances between any two object proposals to capture the complex object relations. Different from [50], we encode not only the (inverse) relative Euclidean distances (i.e., $\text{Dist} \in \mathbb{R}^{M \times M \times 1}$ ) but also three pairwise distances between any two centers of the initial object proposals along the $x, y, z$ directions (i.e., $D_x, D_y, D_z \in \mathbb{R}^{M \times M \times 1}$ ) to better capture object relations along different directions, where $M$ is the number of initial object proposals. All four spatial proximity matrices $(D_x, D_y, D_z, \text{and } \text{Dist})$ are then aggregated along the channel dimension and fed into fully connected layers to produce the relation embeddings, whose channel dimension $H$ matches the number of attention heads (i.e., $H = 4$ in our implementation) in the multi-head attention module. Each relation embedding (with the size of $M \times M \times 1$ ) is then added to the similarity matrix (i.e., the so-called attention map) generated from each head of the multi-head self-attention module.
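The sketch below illustrates how such pairwise geometry can be turned into per-head attention biases, assuming the four proximity maps are built from proposal centers and projected to $H = 4$ channels; the exact layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class RelationEncoding(nn.Module):
    """Turn pairwise proposal geometry into per-head attention biases (a sketch).

    For M proposal centers we build four M x M proximity maps (dx, dy, dz and an
    inverse Euclidean distance), then map them to H channels so each channel can
    be added to the attention logits of one self-attention head.
    """
    def __init__(self, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, num_heads))

    def forward(self, centers: torch.Tensor) -> torch.Tensor:
        # centers: (B, M, 3) -> relation bias: (B, H, M, M)
        diff = centers.unsqueeze(2) - centers.unsqueeze(1)   # (B, M, M, 3)
        dist = diff.norm(dim=-1, keepdim=True)               # (B, M, M, 1)
        inv_dist = 1.0 / (dist + 1e-6)
        rel = torch.cat([diff, inv_dist], dim=-1)            # (B, M, M, 4)
        bias = self.proj(rel)                                 # (B, M, M, H)
        return bias.permute(0, 3, 1, 2)                       # (B, H, M, M)

# the returned bias is added to each head's attention logits before the softmax
```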
|
| 60 |
+
|
| 61 |
+
Note the task-agnostic 3D object detector and the feature enhancement module can produce more accurate localization results and improved object features for both captioning and grounding tasks, and thus we can use more lightweight task-specific captioning head and grounding head in our framework which are simpler than the state-of-the-art methods [9, 50]. For both task-specific heads, we adopt similar lightweight 1-layer multi-head cross-attention-based network structures together with simple preprocessing modules (i.e., Query/Key/Value generation as shown in Fig. 2) and postprocessing modules (i.e., word prediction or BBox selection).
|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
(a) Overview of our framework
|
| 65 |
+
|
| 66 |
+

|
| 67 |
+
(b) Attribute & Relation Aware Feature Enhancement
|
| 68 |
+
|
| 69 |
+

|
| 70 |
+
(c) Captioning Head
|
| 71 |
+
|
| 72 |
+

|
| 73 |
+
(d) Grounding Head
|
| 74 |
+
|
| 75 |
+

|
| 76 |
+
Figure 2. The Query, Key & Value generation processes for both the captioning head and the grounding head. For the captioning head, we first choose the object of interest to produce the target object proposal. We concatenate the target proposal feature, the tokenized word feature from the previous word and the hidden feature recurrently output by the multi-head cross-attention module, and use fully connected layers to generate the Query. We select the K nearest neighbors of the target object proposal as the Key and Value. For the grounding head, the textual input is first tokenized and fed into a GRU cell to produce the Key and Value of the multi-head cross-attention module. The Query for the grounding task is the enhanced object proposal features (see Fig. 1(d)).
|
| 77 |
+
|
| 78 |
+

|
| 79 |
+
Figure 1. (a) The overview of our framework. (b) The attribute and relation aware feature enhancement module. (c) The captioning head within our framework. (d) The grounding head within our framework. "FC" denotes a fully connected layer.
|
| 80 |
+
|
| 81 |
+

|
| 82 |
+
|
| 83 |
+
# 3.3. Captioning Head
|
| 84 |
+
|
| 85 |
+
The 3D dense captioning task is to generate descriptions for each detected bounding box from the input point cloud, which is more object-oriented. Thus, the objectness (for accurately locating each object), the attribute information (for reasonably describing the attributes of objects), and the primary context (for further describing the key relations
|
| 86 |
+
|
| 87 |
+
between each object and other objects) of all the objects in a scene are of great importance. Since the object detector and the feature enhancement module can provide rich object class information, attribute features, and global context features, we simply design our captioning head with a 1-layer multi-head cross-attention network structure for effective message passing between the enhanced features from the target object proposal and all other initial object proposals, which will focus more on the primary context features.
|
| 88 |
+
|
| 89 |
+
For generating the query (Q) input of the multi-head cross-attention module, we firstly select the target object proposal and then encode the corresponding object features with a fully connected layer. During the training stage, we select the object proposal with the highest IoU score with the ground-truth bounding box as the query object. In the testing stage, we use all object proposals in the scene (after the Non-Maximum Suppression (NMS) process) in a one-by-one fashion as the query object. For the target object proposal, we follow most of the captioning methods [9] to use a recurrent network structure to progressively generate each word of the caption. Then, we recurrently aggregate the hidden feature output by the multi-head cross-attention module and the tokenized word feature of the previous word (which is the ground-truth word in the training stage, and the newly predicted word in the testing stage) with the current query object features. The fused features form the final generated query input.
|
| 90 |
+
|
| 91 |
+
In the recursive query generation process, to alleviate the exposure bias [6] in the sequence generation task
|
| 92 |
+
|
| 93 |
+
between the training stage (which uses the ground-truth word) and the testing stage (which uses the previously predicted word), we randomly apply an autoregressive strategy during training. In detail, we randomly replace $10\%$ of the ground-truth word tokens with the predicted word tokens as the input word features during the training process.
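A minimal sketch of this partial teacher forcing is shown below, assuming per-token replacement with probability 0.1; the function and tensor names are hypothetical.

```python
import torch

def mix_teacher_forcing(gt_tokens: torch.Tensor,
                        pred_tokens: torch.Tensor,
                        replace_prob: float = 0.1) -> torch.Tensor:
    """Randomly replace ground-truth word tokens with the model's own predictions.

    gt_tokens, pred_tokens: (B, T) integer token ids aligned per time step.
    With probability `replace_prob`, a position is fed the predicted token instead
    of the ground-truth token, mildly exposing the model to its own errors.
    """
    use_pred = torch.rand_like(gt_tokens, dtype=torch.float) < replace_prob
    return torch.where(use_pred, pred_tokens, gt_tokens)
```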
|
| 94 |
+
|
| 95 |
+
In the key (K) and value (V) generation module, we use the $k$ -NN strategy to select the top $k$ object proposals that are located closest to the target proposal based on their center distance in the 3D coordinate space, which filters out the less related objects in the scene. The selected object proposals are used as the key and value for the multi-head cross-attention module. In our experiment, $k$ is empirically set as 20. This strategy is specially designed for the captioning task, because it mainly cares about the most obvious (or primary) relations between the target object and its surrounding objects and the rest of the relation information might be less important to the captioning task.
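The k-NN selection of key/value proposals can be sketched as follows, assuming proposal centers and features are already available as tensors; the names and shapes are our assumptions.

```python
import torch

def knn_context_proposals(target_center: torch.Tensor,
                          all_centers: torch.Tensor,
                          all_feats: torch.Tensor,
                          k: int = 20) -> torch.Tensor:
    """Pick the k proposals closest to the target as keys/values (a sketch).

    target_center: (B, 3) center of the object being described.
    all_centers:   (B, M, 3) centers of all initial proposals.
    all_feats:     (B, M, C) proposal features.
    Returns:       (B, k, C) features of the k nearest proposals.
    """
    dist = (all_centers - target_center.unsqueeze(1)).norm(dim=-1)      # (B, M)
    idx = dist.topk(k, dim=-1, largest=False).indices                   # (B, k)
    return torch.gather(all_feats, 1,
                        idx.unsqueeze(-1).expand(-1, -1, all_feats.size(-1)))
```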
|
| 96 |
+
|
| 97 |
+
Finally, the multi-head cross-attention module is followed by a fully connected layer and a simple word prediction module to predict each word of the caption in a one-by-one fashion.
|
| 98 |
+
|
| 99 |
+
# 3.4. Grounding Head
|
| 100 |
+
|
| 101 |
+
For the 3D visual grounding task, the inputs include the 3D point clouds of a scene and the text-form language descriptions of one of the objects in the scene, and the task is to locate the object of interest based on the language description. Since the task-agnostic 3D object detector and the feature enhancement module already capture the object attributes and the complex relations among objects in a scene, the grounding head mainly focuses on matching between the given language descriptions and the detected object proposals. The grounding head in our method is more lightweight by simply using a 1-layer multi-head cross-attention module instead of multiple stacked cross-attention modules as used in [50] and [14].
|
| 102 |
+
|
| 103 |
+
The key (K) and value (V) inputs are generated based on the input language descriptions. Specifically, we use the similar language encoder as in ScanRefer [7]. The input language is firstly encoded by using a pretrained Glove [30] module, and then input to a GRU cell. The output word feature of the GRU cell forms the key (K) and value (V) inputs. Moreover, a global language feature is also generated from the GRU cell to predict the subject category of each sentence. The object proposals are used as the query (Q) input. By using the multi-head cross-attention mechanism between the language descriptions (K & V) and the object proposals (Q), the relationship between the sentence and the detected proposals is well captured.
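To make the Query/Key/Value roles concrete, here is a minimal single-layer cross-attention grounding head in PyTorch; the embedding sizes (300-d word embeddings, 128-d proposals, 4 heads) follow the text, while the module layout and names are our own simplification.

```python
import torch
import torch.nn as nn

class GroundingHead(nn.Module):
    """A minimal single-layer cross-attention grounding head (illustrative only).

    Word embeddings (e.g. GloVe) are encoded by a GRU to form the keys/values;
    the enhanced object proposal features form the queries; a linear classifier
    scores each proposal and the highest score gives the grounding result.
    """
    def __init__(self, word_dim: int = 300, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.gru = nn.GRU(word_dim, feat_dim, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, proposal_feat, word_emb):
        # proposal_feat: (B, M, 128) queries; word_emb: (B, T, 300) keys/values source
        word_feat, _ = self.gru(word_emb)                      # (B, T, 128)
        fused, _ = self.cross_attn(proposal_feat, word_feat, word_feat)
        scores = self.classifier(fused).squeeze(-1)            # (B, M) confidence per proposal
        return scores.argmax(dim=-1), scores                   # predicted proposal index, scores
```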
|
| 104 |
+
|
| 105 |
+
To fully explore the contextual relations among the given textual description, we follow [50] to use two data augmentation strategies for both modalities (e.g., randomly erase
|
| 106 |
+
|
| 107 |
+
some words or change the word order for the text input, and randomly copy object proposals from other scenes as negative samples for enhancing the object proposals); please refer to [50] for more details about the two data augmentation strategies.
|
| 108 |
+
|
| 109 |
+
Finally, a grounding classifier is used to generate the confidence score of each object proposal, and the proposal with the highest prediction score is considered as the final grounding result.
|
| 110 |
+
|
| 111 |
+
# 3.5. Training details
|
| 112 |
+
|
| 113 |
+
The loss function of our framework is a combination of the detection loss $L_{\text{detection}}$ , the grounding loss $L_{\text{grounding}}$ and the captioning loss $L_{\text{captioning}}$ .
|
| 114 |
+
|
| 115 |
+
The object detection loss is similar to that used in Qi et al. [31] for the ScanNet dataset [12], where $L_{\text{detection}} = 10L_{\text{vote-reg}} + L_{\text{obj-cls}} + L_{\text{sem-cls}} + 200L_{\text{boundary-reg}}$ , except that we replace the bounding box classification loss $L_{\text{box-cls}}$ and the regression loss $L_{\text{box-reg}}$ in [7, 31] with the boundary regression loss $L_{\text{boundary-reg}}$ [36]. For the visual grounding task, we apply the similar loss function as used in ScanRefer [7], which is a combination of the localization loss $L_{\text{loc}}$ for visual grounding and an auxiliary language-to-object classification loss $L_{\text{cls}}$ to enhance the subject classification of the input sentence, and $L_{\text{grounding}} = L_{\text{loc}} + L_{\text{cls}}$ . For the dense captioning task, we input the ground-truth words (or the predicted words with a probability of $10\%$ ) sequentially and $L_{\text{captioning}}$ is the average cross-entropy loss over all generated words. The final loss is a linear combination of these loss terms, i.e., $L = L_{\text{detection}} + 0.3L_{\text{grounding}} + 0.2L_{\text{captioning}}$ , where the trade-off parameters are empirically set for balancing different loss terms.
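Put together, the overall objective can be sketched as below, with the loss weights taken from the formulas above; the dictionary keys and helper name are assumptions.

```python
def total_loss(losses: dict):
    """Combine the individual loss terms with the weights quoted in the text.

    `losses` is assumed to hold scalar loss tensors keyed by name; the weighting
    mirrors L = L_detection + 0.3 * L_grounding + 0.2 * L_captioning, with
    L_detection itself a weighted sum of its sub-terms.
    """
    l_det = (10.0 * losses["vote_reg"] + losses["obj_cls"]
             + losses["sem_cls"] + 200.0 * losses["boundary_reg"])
    return l_det + 0.3 * losses["grounding"] + 0.2 * losses["captioning"]
```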
|
| 116 |
+
|
| 117 |
+
# 4. Experiments
|
| 118 |
+
|
| 119 |
+
# 4.1. Datasets and implementation details
|
| 120 |
+
|
| 121 |
+
Visual Grounding Dataset: We use the ScanRefer [7] dataset to evaluate our method for the visual grounding task. The ScanRefer dataset contains 51,583 textual descriptions about 11,046 objects from 800 scenes. The overall accuracy and the accuracies on both "unique" and "multiple" subsets are reported. We label each grounding sample as "unique" if the scene contains only a single object from its class; otherwise it is labeled as "multiple". For this dataset, we use Acc@0.25IoU and Acc@0.5IoU as our evaluation metrics. We also compare our method with the baseline methods on both the validation set and the online test set available at the ScanRefer benchmark website<sup>1</sup>.
|
| 122 |
+
|
| 123 |
+
Visual Captioning Datasets: Scan2Cap [9] is a dense captioning dataset for 3D scenes. The descriptions that are longer than 30 tokens in the ScanRefer dataset are truncated and two special tokens [SOS] and [EOS] are added to
|
| 124 |
+
|
| 125 |
+
Table 1. Comparison of the visual grounding results from different methods on the ScanRefer [7] dataset. We report the percentage of the correctly predicted bounding boxes whose IoU scores with the ground-truth boxes are larger than 0.25 and 0.5, respectively. The results on both "unique" and "multiple" subsets are also reported. [*]: Note the InstanceRefer [47] method filters the predicted 3D proposals based on the object class prediction results such that this method only selects the target object proposal from the proposals in the same class, which simplifies the 3D visual grounding problem. This strategy is not adopted in our work.
|
| 126 |
+
|
| 127 |
+
<table><tr><td></td><td colspan="4">Unique</td><td colspan="2">Multiple</td><td colspan="2">Overall</td></tr><tr><td></td><td>Detector</td><td>Data</td><td>Acc@0.25</td><td>Acc@0.5</td><td>Acc@0.25</td><td>Acc@0.5</td><td>Acc@0.25</td><td>Acc@0.5</td></tr><tr><td colspan="9">Validation set</td></tr><tr><td>ScanRefer [7]</td><td>VoteNet</td><td>3D Only</td><td>67.64</td><td>46.19</td><td>32.06</td><td>21.26</td><td>38.97</td><td>26.10</td></tr><tr><td>InstanceRefer [47]*</td><td>PointGroup</td><td>3D Only</td><td>77.13</td><td>66.40</td><td>28.83</td><td>22.92</td><td>38.20</td><td>31.35</td></tr><tr><td>Non-SAT [44]</td><td>VoteNet</td><td>3D Only</td><td>68.48</td><td>47.38</td><td>31.81</td><td>21.34</td><td>38.92</td><td>26.40</td></tr><tr><td>3DVG-Transformer [50]</td><td>VoteNet</td><td>3D Only</td><td>77.16</td><td>58.47</td><td>38.38</td><td>28.70</td><td>45.90</td><td>34.47</td></tr><tr><td>Ours</td><td>VoteNet</td><td>3D Only</td><td>78.75</td><td>61.30</td><td>40.13</td><td>30.08</td><td>47.62</td><td>36.14</td></tr><tr><td>ScanRefer [7]</td><td>VoteNet</td><td>2D + 3D</td><td>76.33</td><td>53.51</td><td>32.73</td><td>21.11</td><td>41.19</td><td>27.40</td></tr><tr><td>TGNN [16]</td><td>3D-UNet</td><td>2D + 3D</td><td>68.61</td><td>56.80</td><td>29.84</td><td>23.18</td><td>37.37</td><td>29.70</td></tr><tr><td>SAT [44]</td><td>VoteNet</td><td>2D + 3D</td><td>73.21</td><td>50.83</td><td>37.64</td><td>25.16</td><td>44.54</td><td>30.14</td></tr><tr><td>InstanceRefer [47]*</td><td>PointGroup</td><td>2D + 3D</td><td>75.72</td><td>64.66</td><td>29.41</td><td>22.99</td><td>38.40</td><td>31.08</td></tr><tr><td>3DVG-Transformer [50]</td><td>VoteNet</td><td>2D + 3D</td><td>81.93</td><td>60.64</td><td>39.30</td><td>28.42</td><td>47.57</td><td>34.67</td></tr><tr><td>Ours</td><td>VoteNet</td><td>2D + 3D</td><td>83.47</td><td>64.34</td><td>41.39</td><td>30.82</td><td>49.56</td><td>37.33</td></tr><tr><td colspan="9">Online Benchmark</td></tr><tr><td>ScanRefer [7]</td><td>VoteNet</td><td>2D + 3D</td><td>68.59</td><td>43.53</td><td>34.88</td><td>20.97</td><td>42.44</td><td>26.03</td></tr><tr><td>TGNN [16]</td><td>3D-UNet</td><td>2D + 3D</td><td>68.34</td><td>58.94</td><td>33.12</td><td>25.26</td><td>41.02</td><td>32.81</td></tr><tr><td>InstanceRefer [47]*</td><td>PointGroup</td><td>2D + 3D</td><td>77.82</td><td>66.69</td><td>34.57</td><td>26.88</td><td>44.27</td><td>35.80</td></tr><tr><td>3DVG-Transformer [50]</td><td>VoteNet</td><td>2D + 3D</td><td>75.76</td><td>55.15</td><td>42.24</td><td>29.33</td><td>49.76</td><td>35.12</td></tr><tr><td>Ours</td><td>VoteNet</td><td>2D + 3D</td><td>76.75</td><td>60.59</td><td>43.89</td><td>31.17</td><td>51.26</td><td>37.76</td></tr></table>
|
| 128 |
+
|
| 129 |
+
Table 2. Comparison of the 3D dense captioning results from different methods on the Scan2Cap [9] validation set. We average the scores from the conventional captioning metrics based on the predicted bounding boxes whose IoU scores with the ground-truth boxes are larger than 0.25 and 0.5, respectively.
|
| 130 |
+
|
| 131 |
+
<table><tr><td></td><td>Detector</td><td>Data</td><td>C@0.25</td><td>B-4@0.25</td><td>M@0.25</td><td>R@0.25</td><td>C@0.5</td><td>B-4@0.5</td><td>M@0.5</td><td>R@0.5</td></tr><tr><td>Scan2Cap [9]</td><td>VoteNet</td><td>3D Only</td><td>53.73</td><td>34.25</td><td>26.14</td><td>54.95</td><td>35.20</td><td>22.36</td><td>21.44</td><td>43.57</td></tr><tr><td>Ours</td><td>VoteNet</td><td>3D Only</td><td>60.86</td><td>39.67</td><td>27.45</td><td>59.02</td><td>47.68</td><td>31.53</td><td>24.28</td><td>51.08</td></tr><tr><td>VoteNetRetr [31]</td><td>VoteNet</td><td>2D + 3D</td><td>15.12</td><td>18.09</td><td>19.93</td><td>38.99</td><td>10.18</td><td>13.38</td><td>17.14</td><td>33.22</td></tr><tr><td>Scan2Cap [9]</td><td>VoteNet</td><td>2D + 3D</td><td>56.82</td><td>34.18</td><td>26.29</td><td>55.27</td><td>39.08</td><td>23.32</td><td>21.97</td><td>44.48</td></tr><tr><td>Ours</td><td>VoteNet</td><td>2D + 3D</td><td>64.70</td><td>40.17</td><td>27.66</td><td>59.23</td><td>49.48</td><td>31.03</td><td>24.22</td><td>50.80</td></tr></table>
|
| 132 |
+
|
| 133 |
+
indicate the start and end of the description, and thus the textual descriptions for ScanRefer and Scan2Cap datasets are different.
|
| 134 |
+
|
| 135 |
+
As a sub-dataset of ReferIt3D [1], Nr3D is also built based on ScanNet with additional textual descriptions, and it contains 41,503 samples collected by ReferItGame. We use the same metric as used for performance evaluation on the Scan2Cap dataset.
|
| 136 |
+
|
| 137 |
+
Specifically, the metric for performance evaluation on these two 3D captioning datasets combines the standard image captioning metrics under different IoU scores between the predicted bounding boxes and the target bounding boxes. The combined metric is defined as $m@kIoU = \frac{1}{P}\sum_{i=1}^{P}m_i u_i$, where $u_i \in \{0,1\}$ is set to 1 if the detection IoU score for the $i$-th bounding box is greater than $k$, and 0 otherwise. We use $m_i$ to represent the captioning metrics such as CIDEr [38], BLEU [28], METEOR [5] and ROUGE-L [19], which are respectively abbreviated as C,
|
| 138 |
+
|
| 139 |
+
B-4, M and R in the following tables. $P$ is the number of ground-truth or detected object bounding boxes.
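The combined metric is straightforward to compute; a minimal sketch with hypothetical inputs follows.

```python
def m_at_k_iou(metric_scores, ious, k: float = 0.5) -> float:
    """Compute m@kIoU: average a captioning metric over boxes, zeroing misses.

    metric_scores: per-box captioning scores (e.g. CIDEr) of length P.
    ious:          IoU of each predicted box with its ground-truth box.
    A box contributes its score only if its IoU exceeds the threshold k.
    """
    assert len(metric_scores) == len(ious) and len(ious) > 0
    kept = [m if iou > k else 0.0 for m, iou in zip(metric_scores, ious)]
    return sum(kept) / len(kept)

# e.g. m_at_k_iou([0.8, 0.5, 0.9], [0.6, 0.3, 0.7], k=0.5) -> (0.8 + 0 + 0.9) / 3
```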
|
| 140 |
+
|
| 141 |
+
Implementation Details. We follow [50] to use 8 sentences for each scene from both datasets when training our framework. Our experiments are carried out on a machine with a single NVIDIA 2080Ti GPU (11 GB), and it takes 200 epochs to train our framework on both ScanRefer [7] and Scan2Cap [9] datasets with a batch size of 10 in each iteration (i.e., there are 80 sentences from 10 point clouds). We apply the cosine learning rate decay strategy with the AdamW optimizer and a weight decay factor of 1e-5 to train our method. We empirically set the initial learning rate as 2e-3 for the detector, and 5e-4 for the other modules of our framework (i.e., the feature enhancement module and the two task-specific heads). In addition, the captioning task with the cross-entropy loss is prone to overfitting, so we only add the captioning loss during the last 50 epochs.
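A sketch of the corresponding optimizer and scheduler setup is shown below; the stand-in model and module names are placeholders, and only the learning rates, weight decay, and cosine schedule mirror the settings quoted above.

```python
import torch
import torch.nn as nn

# A stand-in model with a `detector` submodule, just to make the sketch runnable.
model = nn.ModuleDict({"detector": nn.Linear(8, 8), "heads": nn.Linear(8, 8)})

param_groups = [
    {"params": model["detector"].parameters(), "lr": 2e-3},   # detector: 2e-3
    {"params": model["heads"].parameters(), "lr": 5e-4},      # other modules: 5e-4
]
optimizer = torch.optim.AdamW(param_groups, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)  # decayed over 200 epochs
```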
|
| 142 |
+
|
| 143 |
+
# 4.2. Comparison with the state-of-the-art methods
|
| 144 |
+
|
| 145 |
+
Following the works ScanRefer [7] and Scan2Cap [9], we report the results under both "3D Only" and "2D + 3D" settings according to whether the auxiliary features are used. Under the "3D Only" setting, we use "xyz + RGB + normals" as the auxiliary features. Under the "2D + 3D" setting, the auxiliary features contain "xyz + multiviews + normals", where "multiviews" means multiview image features from a pretrained ENet [29], and "normals" means the normal vectors from point clouds.
|
| 146 |
+
|
| 147 |
+
In Table 1 and Table 2, we compare the dense captioning and visual grounding results of our framework with several state-of-the-art methods on both ScanRefer [7] and Scan2Cap [9] datasets. Specifically, on the ScanRefer dataset, we compare our method with the 3D instance segmentation-based methods TGNN [16] and InstanceRefer [47] and the 3D detection-based methods including ScanRefer [7] and 3DVG-Transformer [50]. On the Scan2Cap dataset, we compare our method with the state-of-the-art 3D detection-based method Scan2Cap [9] and VoteNetRetr [31].
|
| 148 |
+
|
| 149 |
+
From Table 1, we observe that our method outperforms the baseline methods for the visual grounding task. Note that we use a simpler network structure when compared with the state-of-the-art method 3DVG-Transformer [50], so the results validate that our joint learning framework can benefit the grounding task with only a lightweight grounding head. Specifically, in terms of Acc@0.25 and Acc@0.5 metrics, our method achieves around $1.9\%$ and $2.6\%$ improvements in the "overall" case when compared with 3DVG-Transformer [50] on the validation set under the "2D+3D" setting. When compared with other detection-based methods, our method achieves more improvement on the "Unique" subset, possibly because the attribute information of the objects plays a more important role in the "Unique" subset when there are no confusing objects from the same category in the scene. The results also verify that the object-oriented captioning task enhances the grounding performance by providing more attribute information. Note that the baseline methods InstanceRefer [47] and TGNN [16] use the extra instance segmentation masks for generating 3D proposals, while the InstanceRefer [47] method further filters the instances based on the semantic prediction results, namely, it only retains the instances from the same predicted class for generating the visual grounding results. Possibly due to these two aspects, the InstanceRefer [47] method achieves good results in the "Unique" subset. In contrast to [16, 47], our work only relies on the detection results, and it still outperforms both methods in both "Multiple" and "Overall" cases.
|
| 150 |
+
|
| 151 |
+
When compared with the baseline method "Scan2Cap", from the results in Table 2, we observe that our joint learning framework using a simple feature enhancement module
|
| 152 |
+
|
| 153 |
+
and a lightweight captioning head achieves significant performance improvement for the captioning task. Under the "2D+3D" setting, our method achieves remarkable performance improvement of $10.4\%$ , $7.71\%$ and $6.32\%$ in terms of C@0.5IoU, B-4@0.5IoU and R@0.5IoU, respectively. For this task, the improvement comes from both network structure design (e.g., the attribute and relation aware feature enhancement module, and the lightweight captioning head) and the joint training strategy. The contribution of each module will be discussed in the ablation study below.
|
| 154 |
+
|
| 155 |
+
# 4.3. Ablation Study
|
| 156 |
+
|
| 157 |
+
Effectiveness of the feature enhancement module and the joint training strategy. To evaluate the effectiveness of the proposed task-agnostic feature enhancement module as well as the joint training strategy, we conduct an ablation study and report the corresponding results in Table 3. Without using the joint training strategy, the alternative method "w/o Grounding Head" (resp., "w/o Captioning Head") means we train two separate networks, each consisting of the two task-agnostic modules and the captioning head (resp., grounding head), for the 3D dense captioning task (resp., the visual grounding task). "w/o Feature Enhancement" means we remove the "attribute & relation aware feature enhancement" module in our joint learning framework. For both the dense captioning and visual grounding tasks, our complete 3DJCG method based on the default training data (i.e., from both the Scan2Cap and ScanRefer datasets) outperforms those alternative methods, which indicates that both strategies contribute to the final performance improvement to a certain degree.
|
| 158 |
+
|
| 159 |
+
Does performance improvement come from more training data? Our joint training framework uses both the captioning and grounding training data, in which the only difference is the textual descriptions (i.e., the descriptions used for the grounding task are relatively longer or with more complex relations, while the dense captions are shorter textual descriptions focusing more on the object class and the corresponding attributes). Hence, we conduct the experiments to verify whether the performance improvement is due to the utilization of more training data (i.e., more textual descriptions from both tasks).
|
| 160 |
+
|
| 161 |
+
In Table 3 (a) and (b), 3DJCG ("Captioning Data Only") (resp., 3DJCG ("Grounding Data Only")) indicates that we only use the 3D captioning dataset Scan2Cap [9] (resp., the 3D visual grounding dataset ScanRefer [7]) when training our joint learning framework including both captioning and grounding heads and the two task-agnostic modules. Note that both the Scan2Cap and ScanRefer datasets can be readily used as training data for these two tasks. By default, we use both datasets when training our joint learning framework.
|
| 162 |
+
|
| 163 |
+
The results show that our 3DJCG framework using "Cap
|
| 164 |
+
|
| 165 |
+
Table 3. Comparison of the visual grounding results under the "2D+3D" setting and the dense captioning results based on the correctly predicted bounding boxes whose IoU scores with the ground-truth boxes are larger than 0.5. In the "Network Modules" column, for better presentation, we label our detector, the feature enhancement module, the captioning head and the grounding head as "DE", "FE", "CH" and "GH", respectively.
|
| 166 |
+
(a) The 3D dense captioning results on the dataset Scan2Cap [9]
|
| 167 |
+
|
| 168 |
+
<table><tr><td></td><td colspan="2">Training Dataset(s)</td><td colspan="4">Network Modules</td><td colspan="4">Dense Captioning Results</td></tr><tr><td></td><td>Scan2Cap</td><td>ScanRefer</td><td>DE</td><td>FE</td><td>CH</td><td>GH</td><td>B-4@0.5</td><td>C@0.5</td><td>R@0.5</td><td>M@0.5</td></tr><tr><td>3DJCG (w/o Grounding Head) / 3DJCG-C</td><td>✓</td><td></td><td>✓</td><td>✓</td><td>✓</td><td></td><td>26.24</td><td>45.04</td><td>46.69</td><td>23.27</td></tr><tr><td>3DJCG (w/o Feature Enhancement)</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td><td>✓</td><td>29.08</td><td>47.67</td><td>49.58</td><td>23.78</td></tr><tr><td>3DJCG (Captioning Data Only)</td><td>✓</td><td></td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>30.40</td><td>47.29</td><td>50.29</td><td>23.91</td></tr><tr><td>3DJCG (Default Training Data)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>31.03</td><td>49.48</td><td>50.80</td><td>24.22</td></tr></table>
|
| 169 |
+
|
| 170 |
+
(b) The 3D visual grounding results on the dataset ScanRefer [7]
|
| 171 |
+
|
| 172 |
+
<table><tr><td></td><td colspan="2">Training Dataset(s)</td><td colspan="4">Network Modules</td><td colspan="3">Visual Grounding Results</td></tr><tr><td></td><td>Scan2Cap</td><td>ScanRefer</td><td>DE</td><td>FE</td><td>CH</td><td>GH</td><td>Unique@0.5</td><td>Multiple@0.5</td><td>Overall@0.5</td></tr><tr><td>3DJCG (w/o Captioning Head) / 3DJCG-G</td><td></td><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td><td>62.60</td><td>30.48</td><td>36.72</td></tr><tr><td>3DJCG (w/o Feature Enhancement)</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td><td>✓</td><td>63.20</td><td>28.36</td><td>35.12</td></tr><tr><td>3DJCG (Grounding Data Only)</td><td></td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>64.50</td><td>30.29</td><td>36.93</td></tr><tr><td>3DJCG (Default Training Data)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>64.34</td><td>30.82</td><td>37.33</td></tr></table>
|
| 173 |
+
|
| 174 |
+
Table 4. The dense captioning results of different methods and different training strategies on the Nr3D dataset from ReferIt3D [1].
|
| 175 |
+
|
| 176 |
+
<table><tr><td></td><td>B-4@0.5</td><td>C@0.5</td><td>R@0.5</td><td>M@0.5</td></tr><tr><td>Scan2Cap [9]</td><td>17.24</td><td>27.47</td><td>49.06</td><td>21.80</td></tr><tr><td>3DJCG-C (From Scratch)</td><td>20.45</td><td>33.03</td><td>51.73</td><td>23.05</td></tr><tr><td>3DJCG-C* (Finetune)</td><td>22.82</td><td>38.06</td><td>52.99</td><td>23.77</td></tr></table>
|
| 177 |
+
|
| 178 |
+
tioning Data Only" (resp., "Grounding Data Only") generally improves the performance for the captioning task (resp., the grounding task) when compared to the alternative method 3DJCG ("w/o Grounding Head") (resp., 3DJCG ("w/o Captioning Head")), especially for the dense captioning task. The results validate that the performance gains come from both strategies (i.e., our network design and utilization of the additional training data). Moreover, the improved results from our joint learning framework under "Captioning Data Only" and "Grounding Data Only" settings also verify that our joint framework can also inspire the network design of each individual task.
|
| 179 |
+
|
| 180 |
+
Experiments on the Nr3D [1] dataset. We also take the dense captioning task on the Nr3D dataset as an example to evaluate our proposed framework when training from scratch or using the fine-tuning strategy. "3DJCG-C (From Scratch)" indicates that we train our 3DJCG-C network from scratch without using any pre-training strategies. "3DJCG-C* (Finetune)" indicates that we fine-tune the pretrained model on the Nr3D dataset. Note that the pretrained model is learned on both the ScanRefer and Scan2Cap datasets, and we remove the grounding head before fine-tuning. We also list the results of the baseline method Scan2Cap trained from scratch on the Nr3D dataset. As shown in Table 4, our
|
| 181 |
+
|
| 182 |
+
method "3DJCG-C (From Scratch)" outperforms the baseline method "Scan2Cap [9]", which further verifies the effectiveness of our newly designed network structure. We also observe that our "3DJCG-C* (Finetune)" method further improves "3DJCG-C (From Scratch)", which demonstrates that the results of our framework could also be boosted by using the fine-tuning strategy.
|
| 183 |
+
|
| 184 |
+
# 5. Conclusion and Future Work
|
| 185 |
+
|
| 186 |
+
Observing the shared and complementary properties of two different but closely related tasks, 3D dense captioning and 3D visual grounding, we propose a unified framework to jointly solve the two tasks in a synergistic manner. In our framework, the task-agnostic modules are responsible for precise object localization, the enhancement of the geometry and fine-grained attribute features, and the full exploration of the complex geometrical relations between objects in a 3D scene, while the task-specific lightweight captioning head and grounding head solve the two tasks, respectively. The experimental results validate the effectiveness of the proposed framework for both tasks. While the joint framework improves the performance of both tasks, the performance improvement for the visual grounding task is not as significant as that for the dense captioning task. In our future work, we will develop a more advanced joint training framework to further improve the 3D visual grounding performance.
Acknowledgement This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900) and the National Natural Science Foundation of China (No. 61906012, No. 62006012, and No. 62132001).
# References
[1] Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. ReferIt3D: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV, pages 422-440, 2020. 1, 2, 6, 8
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086, 2018. 2
[3] Jacob Andreas and Dan Klein. Reasoning about pragmatics with neural listeners and speakers. In EMNLP, 2016. 2
[4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In ICCV, pages 2425-2433, 2015. 2
[5] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In ACL Workshop, pages 65-72, 2005. 6
[6] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS, volume 28, 2015. 4
[7] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. ScanRefer: 3D object localization in RGB-D scans using natural language. In ECCV, pages 202-221, 2020. 1, 2, 3, 5, 6, 7, 8
[8] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: universal image-text representation learning. In ECCV, pages 104-120, 2020. 2
[9] Zhenyu Chen, Ali Gholami, Matthias Nießner, and Angel X Chang. Scan2Cap: Context-aware dense captioning in RGB-D scans. In CVPR, pages 3193-3203, 2021. 1, 2, 3, 4, 5, 6, 7, 8
[10] Bowen Cheng, Lu Sheng, Shaoshuai Shi, Ming Yang, and Dong Xu. Back-tracing representative points for voting-based 3d object detection in point clouds. In CVPR, pages 8963-8972, 2021. 2
[11] Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning. In CVPR, pages 10578-10587, 2020. 2
[12] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, pages 5828-5839, 2017. 5
[13] Jinyang Guo, Jiaheng Liu, and Dong Xu. JointPruning: Pruning networks along multiple dimensions for efficient point cloud processing. IEEE TCSVT, 2021. 2
[14] Dailan He, Yusheng Zhao, Junyu Luo, Tianrui Hui, Shaofei Huang, Aixi Zhang, and Si Liu. TransRefer3D: Entity-and-relation aware transformer for fine-grained 3d visual grounding. In ACM MM, pages 2344-2352, 2021. 2, 5
[15] Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. Natural language object retrieval. In CVPR, pages 4555-4564, 2016. 2
[16] Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu. Text-guided graph neural networks for referring 3d instance segmentation. In AAAI, pages 1610-1618, 2021. 2, 6, 7
[17] Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In CVPR, pages 4565β4574, 2016. 2
[18] Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, and In So Kweon. Dense relational captioning: Triple-stream networks for relationship-based captioning. In CVPR, pages 6271-6280, 2019. 2
[19] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In ACL Workshop, pages 74-81, 2004. 6
[20] Guanze Liu, Yu Rong, and Lu Sheng. VoteHMR: Occlusion-aware voting network for robust 3d human mesh recovery from partial point clouds. In ACM MM, pages 955-964, 2021. 2
[21] Haolin Liu, Anran Lin, Xiaoguang Han, Lei Yang, Yizhou Yu, and Shuguang Cui. Refer-it-in-rgbd: A bottom-up approach for 3d visual grounding in RGBD images. In CVPR, pages 6032-6041, 2021. 1
[22] Jiaheng Liu and Dong Xu. GeometryMotion-Net: A strong two-stream baseline for 3d action recognition. IEEE TCSVT, pages 4711β4721, 2021. 2
[23] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, volume 32, 2019. 2
[24] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In CVPR, pages 10437β10446, 2020. 2
[25] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, pages 11-20, 2016. 2
[26] Varun K. Nagaraja, Vlad I. Morariu, and Larry S. Davis. Modeling context between objects for referring expression understanding. In ECCV, pages 792-807, 2016. 2
[27] Duy-Kien Nguyen and Takayuki Okatani. Multi-task learning of hierarchical vision-language representation. In CVPR, pages 10492-10501, 2019. 2
[28] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311-318, 2002. 6
[29] Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016. 7
[30] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543, 2014. 5
[31] Charles R. Qi, Or Litany, Kaiming He, and Leonidas J. Guibas. Deep hough voting for 3d object detection in point clouds. In ICCV, pages 9277-9286, 2019. 3, 5, 6, 7
[32] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In CVPR, pages 652-660, 2017. 2
[33] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, volume 30, 2017. 2
[34] Zizheng Que, Guo Lu, and Dong Xu. VoxelContext-Net: An octree based framework for point cloud compression. In CVPR, pages 6042β6051, 2021. 2
[35] Rui Su, Qian Yu, and Dong Xu. STVGBert: A visual-linguistic transformer based framework for spatio-temporal video grounding. In ICCV, pages 14618-14627, 2021. 2
[36] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In ICCV, pages 9627-9636, 2019. 3, 5
[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, volume 30, 2017. 3
[38] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, pages 4566-4575, 2015. 6
[39] Feiyu Wang, Wen Li, and Dong Xu. Cross-dataset point cloud recognition using deep-shallow domain adaptation network. IEEE TIP, pages 7364-7377, 2021. 2
[40] Kaisiyuan Wang, Lu Sheng, Shuhang Gu, and Dong Xu. Sequential point cloud upsampling by exploiting multi-scale temporal dependency. IEEE TCSVT, pages 4686-4696, 2021. 2
[41] Huijuan Xu, Kun He, Bryan A Plummer, Leonid Sigal, Stan Sclaroff, and Kate Saenko. Multilevel language and vision integration for text-to-clip retrieval. In AAAI, pages 9062β9069, 2019. 2
[42] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048-2057, 2015. 2
[43] Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. TAP: text-aware pre-training for text-vqa and text-caption. In CVPR, pages 8751-8761, 2021. 2
[44] Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo. SAT: 2d semantics assisted training for 3d visual grounding. ICCV, pages 1856-1866, 2021. 1, 2, 6
[45] Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L. Berg. MAttNet: Modular attention network for referring expression comprehension. In CVPR, pages 4555-4564, 2018. 2
[46] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. A joint speaker-listener-reinforcer model for referring expressions. In CVPR, pages 7282-7290, 2017. 2
[47] Zhihao Yuan, Xu Yan, Yinghong Liao, Ruimao Zhang, Zhen Li, and Shuguang Cui. InstanceRefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. ICCV, pages 1791-1800, 2021. 1, 2, 6, 7
[48] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In CVPR, pages 5579-5588, 2021. 2
[49] Weichen Zhang, Wen Li, and Dong Xu. SRDAN: Scale-aware and range-aware domain adaptation network for cross-dataset 3d object detection. In CVPR, pages 6769-6779, 2021. 2
[50] Lichen Zhao, Daigang Cai, Lu Sheng, and Dong Xu. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. In ICCV, pages 2928β2937, 2021. 1, 2, 3, 5, 6, 7
[51] Lichen Zhao, Jinyang Guo, Dong Xu, and Lu Sheng. Transformer3D-Det: Improving 3d object detection by vote refinement. IEEE TCSVT, pages 4735-4746, 2021. 2
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3def72808e747692ecd839852fac989c2580c139e487c3fb68c760e5efdbbc86
size 447094
3djcgaunifiedframeworkforjointdensecaptioningandvisualgroundingon3dpointclouds/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b82c39c8c230dd34385b5b3a6eafb34d7cbe746a2c25f82c2a505ceb771bbac0
size 300235
3dmomentsfromnearduplicatephotos/48a663ff-cb9f-445e-941f-b1850bb9d2fa_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61af1f884b6767f5f7b8a56107856c3c6925f5029991c256f1c51c1473df9716
size 68359
3dmomentsfromnearduplicatephotos/48a663ff-cb9f-445e-941f-b1850bb9d2fa_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ef23ee0886e472eaa11a134f2ed5fb2f296393a59482c6342509845e9630641b
|
| 3 |
+
size 85949
|