SlowGuess committed on
Commit
804fc69
·
verified ·
1 Parent(s): b8de1c0

Add Batch 7f69c793-bf41-49d8-b6c8-744ef9a657f3

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_content_list.json +3 -0
  2. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_model.json +3 -0
  3. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_origin.pdf +3 -0
  4. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/full.md +0 -0
  5. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/images.zip +3 -0
  6. 2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/layout.json +3 -0
  7. 3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_content_list.json +3 -0
  8. 3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_model.json +3 -0
  9. 3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_origin.pdf +3 -0
  10. 3dawarevisualquestionansweringaboutpartsposesandocclusions/full.md +454 -0
  11. 3dawarevisualquestionansweringaboutpartsposesandocclusions/images.zip +3 -0
  12. 3dawarevisualquestionansweringaboutpartsposesandocclusions/layout.json +3 -0
  13. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_content_list.json +3 -0
  14. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_model.json +3 -0
  15. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_origin.pdf +3 -0
  16. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/full.md +263 -0
  17. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/images.zip +3 -0
  18. 3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/layout.json +3 -0
  19. 3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_content_list.json +3 -0
  20. 3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_model.json +3 -0
  21. 3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_origin.pdf +3 -0
  22. 3dindoorinstancesegmentationinanopenworld/full.md +420 -0
  23. 3dindoorinstancesegmentationinanopenworld/images.zip +3 -0
  24. 3dindoorinstancesegmentationinanopenworld/layout.json +3 -0
  25. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_content_list.json +3 -0
  26. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_model.json +3 -0
  27. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_origin.pdf +3 -0
  28. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/full.md +441 -0
  29. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/images.zip +3 -0
  30. 3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/layout.json +3 -0
  31. 3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_content_list.json +3 -0
  32. 3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_model.json +3 -0
  33. 3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_origin.pdf +3 -0
  34. 3dllminjectingthe3dworldintolargelanguagemodels/full.md +246 -0
  35. 3dllminjectingthe3dworldintolargelanguagemodels/images.zip +3 -0
  36. 3dllminjectingthe3dworldintolargelanguagemodels/layout.json +3 -0
  37. 3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_content_list.json +3 -0
  38. 3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_model.json +3 -0
  39. 3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_origin.pdf +3 -0
  40. 3dmoleculegenerationbydenoisingvoxelgrids/full.md +446 -0
  41. 3dmoleculegenerationbydenoisingvoxelgrids/images.zip +3 -0
  42. 3dmoleculegenerationbydenoisingvoxelgrids/layout.json +3 -0
  43. 4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_content_list.json +3 -0
  44. 4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_model.json +3 -0
  45. 4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_origin.pdf +3 -0
  46. 4dpanopticscenegraphgeneration/full.md +270 -0
  47. 4dpanopticscenegraphgeneration/images.zip +3 -0
  48. 4dpanopticscenegraphgeneration/layout.json +3 -0
  49. 4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_content_list.json +3 -0
  50. 4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_model.json +3 -0
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57901cea50aac961ff94301329296a5618c3942d0b5608509d0cb78c83527c2c
+ size 483889
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:822b4c46a65a9099da650797bc5891c1a1b68ae13e69328b8bac4e7d2e52caec
+ size 582544
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b489276f6f095743d36e0142c2f4e24e0242d0860e1f639c2cc17709e2d70d9
+ size 1425928
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14e60f93d4d810969d13381cf479718f8e8820afe258dc6cae69a5ffe9688315
+ size 6567519
2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d789e4a742c14d2c24f3d96e2029645db59d5700a74fc2bb42f21b55025f1449
+ size 2642224
3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ad9b52f502e93b13ec103d3cb22b20919e6ccddda189ca0ada7765c62433ee4
+ size 112934
3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adc88d95759ddeae2b5249a970703804ae9c41c93f3a356b870596ce9fea0b53
+ size 141698
3dawarevisualquestionansweringaboutpartsposesandocclusions/207f2216-51da-42cb-b184-1de04b1d6cb2_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70386fa17f4c2eb36626e882548c4a3d4016d34e2538167697b7f001867f68ec
+ size 5030621
3dawarevisualquestionansweringaboutpartsposesandocclusions/full.md ADDED
@@ -0,0 +1,454 @@
# 3D-Aware Visual Question Answering about Parts, Poses and Occlusions

Xingrui Wang$^{1}$ Wufei Ma$^{1*}$ Zhuowan Li$^{1\dagger}$ Adam Kortylewski$^{2,3}$ Alan Yuille$^{1}$
$^{1}$Johns Hopkins University $^{2}$Max Planck Institute for Informatics $^{3}$University of Freiburg
{xwang378, wma27, zli110, ayuille1}@jhu.edu akortyle@mpi-inf.mpg.de

# Abstract

Despite rapid progress in Visual question answering (VQA), existing datasets and models mainly focus on testing reasoning in 2D. However, it is important that VQA models also understand the 3D structure of visual scenes, for example to support tasks like navigation or manipulation. This includes an understanding of the 3D object pose, their parts and occlusions. In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require a compositional reasoning over the 3D structure of visual scenes. We address 3D-aware VQA from both the dataset and the model perspective. First, we introduce Super-CLEVR-3D, a compositional reasoning dataset that contains questions about object parts, their 3D poses, and occlusions. Second, we propose PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and deep neural networks with 3D generative representations of objects for robust visual recognition. Our experimental results show our model PO3D-VQA outperforms existing methods significantly, but we still observe a significant performance gap compared to 2D VQA benchmarks, indicating that 3D-aware VQA remains an important open research area. The code is available at https://github.com/XingruiWang/3D-Aware-VQA.

# 1 Introduction

Visual question answering (VQA) is a challenging task that requires an in-depth understanding of vision and language, as well as multi-modal reasoning. Various benchmarks and models have been proposed to tackle this challenging task, but they mainly focus on 2D questions about objects, attributes, or 2D spatial relationships. However, it is important that VQA models understand the 3D structure of scenes, in order to support tasks like autonomous navigation and manipulation.

An inherent property of human vision is that we can naturally answer questions that require a comprehensive understanding of the 3D structure in images. For example, humans can answer the questions shown in Fig. 1, which ask about the object parts, their 3D poses, and occlusions. However, current VQA models, which often rely on 2D bounding boxes to encode a visual scene [2, 59, 25], struggle to answer such questions reliably (as can be seen from our experiments). We hypothesize this is caused by the lack of understanding of the 3D structure of images.

In this work, we introduce the task of 3D-aware VQA, where answering the questions requires compositional reasoning over the 3D structure of the visual scenes. More specifically, we focus on challenging questions that require multi-step reasoning about the object-part hierarchy, the 3D poses of the objects, and the occlusion relationships between objects or parts.

![](images/d954c9cbe56e36658cfc7b91ecd77a2cf5d5e43dad1505727ddc78f22c845fb6.jpg)
Figure 1: Examples from Super-CLEVR-3D. We introduce the task of 3D-aware VQA, which requires 3D understanding of the image, including the parts, 3D poses, and occlusions.

Part. Q: What is the name of the brown part of the large rubber thing? A: Wheel. Q: What is the material of the trunk that belongs to the same object as the purple part? A: Metallic.

3D Pose. Q: Which direction is the double bus facing? A: Left. Q: What is the color of the small object which faces to the right? A: Truck.

Occlusion. Q: Is the bumper of the purple SUV occluded? A: No. Q: What is the size of the aeroplane whose wing is occluded? A: Small.

Program for the last question: filter[aeroplane], obj_to_part, filter_part[wing], filter_occludee, part_to_obj, query_size.

We address the challenging 3D-aware VQA task from both the dataset and the model perspective. From the dataset perspective, we introduce Super-CLEVR-3D, which extends the Super-CLEVR dataset [32] with 3D-aware questions. Given the visual scenes from Super-CLEVR that contain randomly placed vehicles of various categories, we define a set of 3D-aware reasoning operations and automatically generate 3D questions based on these operations. Fig. 1 shows examples of the images, questions and the underlying 3D operations for the questions. From the model perspective, we introduce PO3D-VQA, a VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and a deep neural network with 3D generative representations of objects for robust visual scene parsing. Our model first recovers a 3D scene representation from the image and a program from the question, and subsequently executes the program on the 3D scene representation to obtain an answer using a probabilistic reasoning process that takes into account the confidence of predictions from the neural network. We refer to our system as PO3D-VQA, which stands for Parts, Poses, and Occlusions in 3D Visual Question Answering.

On Super-CLEVR-3D, we experiment with existing representative models, their variants, and our model PO3D-VQA. The results show that our model outperforms existing methods significantly, leading to an improvement in accuracy of more than $11\%$, which shows the advantage of the generative 3D scene parser and the probabilistic neural symbolic reasoning process. Moreover, further analysis on questions with different difficulty levels reveals that the improvements of our model are even greater on harder questions with heavy occlusions and small part sizes. Our results indicate that a reliable 3D understanding, together with the modular reasoning procedure, produces a desirable 3D-aware VQA model.

In summary, our contributions are as follows. (1) We introduce the challenging task of 3D-aware VQA and propose the Super-CLEVR-3D dataset, where 3D visual understanding about parts, 3D poses, and occlusions is required. (2) We propose a 3D-aware neural modular model PO3D-VQA that conducts probabilistic reasoning in a step-wise modular procedure based on robust 3D scene parsing. (3) With experiments, we show that 3D-aware knowledge and modular reasoning are crucial for 3D-aware VQA, and suggest future VQA methods take 3D understanding into account.

# 2 Related Work

Visual Question Answering (VQA). Rapid progress has been made in VQA [4] in both the datasets and the models. To solve the challenging VQA datasets [15, 61, 17, 45] with real images, multiple models are developed including two-stream feature fusion [2, 14, 28, 55, 23, 44, 30] or transformer-based pretraining [48, 36, 31, 59, 25]. However, the real datasets are shown to suffer from spurious correlations and biases [42, 16, 41, 1, 15, 26, 27]. Alternatively, synthetic datasets like CLEVR [24] and Super-CLEVR [32] are developed to study the compositional reasoning ability of VQA systems, which are also extended to study other vision-and-language tasks [34, 29, 53, 58, 6, 47, 20]. The synthetic datasets promote the development of neural modular methods [3, 54, 40, 22], where the reasoning is done in a modular step-by-step manner. It is shown that the modular methods have nice properties including interpretability, data efficiency [54, 40], better robustness [32] and strong performance on synthetic images [54]. However, most existing methods rely on region features [2, 59] extracted using 2D object detectors [46] for image encoding, which is not 3D-aware. We follow the works on synthetic datasets and enhance the modular methods with 3D understanding.

VQA in 3D. Multiple existing works study VQA under the 3D setting, such as SimVQA [8], SQA3D [39], 3DMV-VQA [19], CLEVR-3D [51], ScanQA [52], 3DQA [52], and EmbodiedQA [13], which focus on question answering on 3D visual scenes like real 3D scans [39, 51, 5, 52], simulated 3D environments [9, 13], or multi-view images [19]. PTR [20] is a synthetic VQA dataset that requires part-based reasoning about physics, analogy and geometry. Our setting differs from these works because we focus on 3D in the questions instead of 3D in the visual scenes, since our 3D-aware questions explicitly query the 3D information that can be inferred from the 2D input images.

3D scene understanding. One popular approach for scene understanding is to use the CLIP features pretrained on large-scale text-image pairs and segment the 2D scene into semantic regions [10, 43]. However, these methods lack a 3D understanding of the scene and cannot be used to answer 3D-related questions. Another approach is to adopt category-level 6D pose estimation methods that can locate objects in the image and estimate their 3D configurations. Previous approaches include classification-based methods that extend a Faster R-CNN model for 6D pose estimation [60, 38] and compositional models that predict 6D poses with analysis-by-synthesis [38]. We also note the rapid progress of 3D vision-language foundation models, which excel in multiple 3D vision-language understanding tasks [19, 37, 21]. Still, we focus on compositional reasoning, which brings more interpretability and robustness [32].

# 3 Super-CLEVR-3D Dataset

To study 3D-aware VQA, we propose the Super-CLEVR-3D dataset, which contains questions explicitly asking about the 3D object configurations of the image. The images are rendered using scenes from the Super-CLEVR dataset [32], which is a VQA dataset containing synthetic scenes of randomly placed vehicles from 5 categories (car, plane, bicycle, motorbike, bus) with various sub-types (e.g. different types of cars) and attributes (color, material, size). The questions are generated by instantiating the question templates based on the image scenes, using a pipeline similar to Super-CLEVR. In Super-CLEVR-3D, three types of 3D-aware questions are introduced: part questions, 3D pose questions, and occlusion questions. In the following, we describe these three types of questions and show the new operations we introduced for our 3D-aware questions about object parts, 3D poses, and occlusions. Examples of the dataset are shown in Fig. 1. A toy sketch of these operations follows after the question-type descriptions below.

Part questions. While the original Super-CLEVR dataset refers to objects using their holistic names or attributes, objects are complex and have hierarchical parts, as studied in recent works [33, 11, 20]. Therefore, we introduce part-based questions, which use parts to identify objects (e.g. "which vehicle has red door") or query about object parts (e.g. "what color is the door of the car"). To enable the generation of part-based questions, we introduce two new operations into the reasoning programs: part_to_object$(\cdot)$, which finds the object containing the given part, and object_to_part$(\cdot)$, which selects all the parts of the given object. We also modify some existing operations (i.e. filter, query and unique), enabling them to operate on both object-level and part-level. With those reasoning operations, we collect 9 part-based templates and instantiate them with the image scene graph to generate questions.

3D pose questions. Super-CLEVR-3D asks questions about the 3D poses of objects (e.g. "which direction is the car facing in"), or the pair-wise pose relationships between objects (e.g. "which object has vertical direction with the red car"). The pose of an individual object (e.g. "facing left") can be processed in a similar way as attributes like colors, so we extend the existing attribute-related operations like filter and query to also include pose. For pair-wise pose relationships between objects, we add three operations, i.e. same_pose, opposite_pose and vertical_pose, to deal with the three types of pose relationships between objects. For example, opposite_pose$(\cdot)$ returns the objects that are in the opposite pose direction to the given object. 17 templates are collected to generate 3D pose questions.

Occlusion questions. Occlusion questions ask about the occlusion between entities (i.e. objects or parts). Similar to 3D poses, occlusion can also be regarded as either an attribute of an entity (e.g. "which object is occluded"), or as a relationship between entities (e.g. "which object occludes the car door"). We extend the attribute-related operations, and introduce new operations to handle the pair-wise occlusion relationships: filter_occludee which filters the entities that are being occluded, relate_occluding which finds the entities that are occluded by the given entity, and relate_occluded which finds the entities that are occluding the given entity. Using these operations, 35 templates are collected to generate the occlusion questions.

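To make the new operations concrete, the following is a minimal sketch of how they could be implemented on a toy, dictionary-based scene graph; the field names and the scene layout are illustrative and are not the dataset's actual annotation format.

```python
# Toy scene graph with two objects and two parts (illustrative only).
scene = {
    "objects": [
        {"id": 0, "name": "aeroplane", "size": "small", "occluded_by": []},
        {"id": 1, "name": "suv", "size": "large", "occluded_by": [0]},
    ],
    "parts": [
        {"id": 0, "name": "wing", "object_id": 0, "occluded_by": [1]},
        {"id": 1, "name": "bumper", "object_id": 1, "occluded_by": []},
    ],
}

def part_to_object(part):
    """Return the object that contains the given part."""
    return next(o for o in scene["objects"] if o["id"] == part["object_id"])

def object_to_part(obj):
    """Return all parts belonging to the given object."""
    return [p for p in scene["parts"] if p["object_id"] == obj["id"]]

def filter_occludee(entities):
    """Keep only the entities that are occluded by something."""
    return [e for e in entities if e["occluded_by"]]

# Program for "What is the size of the aeroplane whose wing is occluded?"
planes = [o for o in scene["objects"] if o["name"] == "aeroplane"]   # filter[aeroplane]
wings = [p for o in planes for p in object_to_part(o)                # obj_to_part
         if p["name"] == "wing"]                                     # filter_part[wing]
occluded_wings = filter_occludee(wings)                              # filter_occludee
answer = part_to_object(occluded_wings[0])["size"]                   # part_to_obj, query_size
print(answer)  # -> "small"
```
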
# 4 Method

![](images/ba0f8a3ac3b55955724e88d19064cf351bea25032aedd0b72d5c102e8fe6f558.jpg)
Figure 2: An overview of our model PO3D-VQA. The image is parsed into 3D-aware scene representations (blue box) using our proposed scene parser based on the idea of render-and-compare (green box). The question is parsed into a program composed of reasoning operations (orange box). Then the operations are executed on the 3D-aware scene representations to predict the answer.

In this section, we introduce PO3D-VQA, which is a parse-then-execute modular model for 3D-aware VQA. The overview of our system is shown in Fig. 2. We first parse the image into a scene graph representation that is aware of 3D information like object parts, 3D poses and occlusion relations, then we parse the question into a reasoning program and execute the program on the derived scene representations in a probabilistic manner. In Sec. 4.1, we define the scene representation required; in Sec. 4.2, we describe how we parse the image into the scene representation based on a multi-class 6D pose estimation model with non-trivial extensions; in Sec. 4.3, we describe how the question is executed on the derived scene representation to predict the answer.

# 4.1 3D-aware scene representation

Given an input image $I$, we parse it into a 3D-aware scene representation $R$ that contains the objects $(O)$ with attributes $(A^o)$, the parts $(P)$ with attributes $(A^p)$, the hierarchical relationships between objects and parts $(H)$, and the occlusion relationships between them $(S)$. The attributes include the 3D poses and locations of objects or parts, as well as their colors, materials, and sizes. The scene representation $R = \{O, P, A^o, A^p, H, S\}$ is comprehensive and therefore we can directly execute the symbolic reasoning module on this representation without taking into account the image any further.

In more detail, objects are represented as a matrix $O \in \mathbb{R}^{n \times N_{obj}}$ containing the probability scores of each object being a certain instance, where $n$ is the number of objects in the given image and $N_{obj}$ is the number of all possible object categories in the dataset (i.e. vocabulary size of the objects). Similarly, parts are represented as $P \in \mathbb{R}^{p \times N_{prt}}$, where $p$ is the number of parts in the image and $N_{prt}$ is the vocabulary size of the object parts. The object-part hierarchy is represented by a binary matrix $H \in \mathbb{R}^{n \times p}$, where $H_{ij} = 1$ if the object $i$ contains the part $j$ and $H_{ij} = 0$ otherwise. The attributes $A^o \in \mathbb{R}^{n \times N_{att}}$ and $A^p \in \mathbb{R}^{p \times N_{att}}$ contain probability scores of each object or part having a certain attribute, or the value of the bounding box. Here $N_{att}$ is the number of attributes including the 3D poses, location coordinates, colors, materials and sizes. Occlusion relationships are represented by $S \in \mathbb{R}^{(n + p) \times n}$, where each element $S_{ij}$ represents the score of object (or part) $i$ being occluded by object $j$.

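As a concrete picture of this representation, the following sketch instantiates $R$ with NumPy arrays of the shapes described above; the dimensions are made-up example values, not the dataset's actual vocabulary sizes.

```python
import numpy as np

# Example sizes only: n objects, p parts, and illustrative vocabulary sizes.
n, p, N_obj, N_prt, N_att = 3, 8, 21, 30, 12

R = {
    "O":   np.zeros((n, N_obj)),         # object category scores
    "P":   np.zeros((p, N_prt)),         # part category scores
    "A_o": np.zeros((n, N_att)),         # object attributes: pose, location, color, material, size
    "A_p": np.zeros((p, N_att)),         # part attributes
    "H":   np.zeros((n, p), dtype=int),  # H[i, j] = 1 iff object i contains part j
    "S":   np.zeros((n + p, n)),         # S[i, j]: score of entity i being occluded by object j
}
```
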
# 4.2 Multi-class 6D Scene Parsing

While most existing VQA methods [2, 59] encode the image using pretrained object detectors like Faster-RCNN [46], we build our 6D-aware scene parser in a different way, based on the idea of analysis-by-synthesis through inverse rendering [49], which has the following advantages: first, the model prediction is more robust [49], as the render-and-compare process can naturally integrate a robust reconstruction loss to avoid distortion through occlusion; second, while the object parts are usually very challenging for Faster-RCNN to detect due to their small size, they can be located much more easily using the 3D object shape, by first finding the object and estimating its 3D pose, and subsequently locating the parts using the 3D object shape (as shown in our experimental evaluation).

![](images/7fa9370f50279c608fb0575348cdadc7eb1d0d898b9b87e55b1825ecccea7613.jpg)
Figure 3: Visualization of intermediate steps in our scene parser. Given an image (a), per-category feature activation maps (shown in II, for bus, aeroplane, bicycle, motorbike, and car) are computed through render-and-compare. Then the category-wise competition (3D-NMS) is performed (results shown in b) and a post-filtering step is taken to remove mis-detected objects (c). Based on the pose estimation results (d), we project the 3D object mesh back onto the image to locate parts and occlusions (e).

However, we observe two open challenges for applying existing 6D pose estimators that follow a render-and-compare approach [38, 49]: (a) these pose estimators assume that the object class is known, but in Super-CLEVR-3D the scene parser must learn to estimate the object class jointly with the pose; and (b) the scenes in Super-CLEVR-3D are very dense, containing multiple close-by objects that occlude each other. In order to address these two challenges, we introduce several improvements over [38] that enable it to be integrated into a 3D-aware VQA model.

In the following, we first describe neural meshes [49, 38], which were proposed in prior work for pose estimation of single objects following an analysis-by-synthesis approach. Subsequently, we extend this method to complex scenes with densely located and possibly occluded objects to obtain a coherent scene representation, including object parts and attributes.

Preliminaries. Our work builds on and significantly extends Neural Meshes [38] that were introduced for 6D pose estimation through inverse rendering. The task is to jointly estimate the 6D pose (2D location, distance to the camera and 3D pose) of objects in an image. An object category is represented with a category-level mesh [49] $M_y = \{v_n \in \mathbb{R}^3\}_{n=1}^N$ and a neural texture $T_y \in \mathbb{R}^{N \times c}$ on the surface of the mesh $M_y$, where $c$ is the dimension of the feature and $y$ is the object category. Given the object 3D pose in camera view $\alpha$, we can render the neural mesh model $O_y = \{M_y, T_y\}$ into a feature map with soft rasterization [35]: $F_y(\alpha) = \Re(O_y, \alpha)$. Following prior work in pose estimation [49] we formulate the render-and-compare process as an optimization of the likelihood model:

$$
p(F \mid O_y, \alpha_y, B) = \prod_{i \in \mathcal{FG}} p(f_i \mid O_y, \alpha_y) \prod_{i \in \mathcal{BG}} p(f'_i \mid B) \tag{1}
$$

where $\mathcal{FG}$ and $\mathcal{BG}$ are the sets of foreground and background locations on the 2D feature map and $f_i$ is the feature vector of $F$ at location $i$. Here the foreground and background likelihoods are modeled as Gaussian distributions.

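As an illustration, a minimal sketch of evaluating the negative log of Eq. (1) under unit-variance Gaussian foreground and background models is given below; the array shapes and the helper signature are assumptions for this sketch, not the released implementation.

```python
import numpy as np

def neg_log_likelihood(F, rendered, background, fg_mask):
    """Negative log of Eq. (1) with unit-variance isotropic Gaussians.

    F:          (H, W, c) feature map extracted from the image
    rendered:   (H, W, c) feature map rendered from the neural mesh at pose alpha
    background: (c,)      background feature model B
    fg_mask:    (H, W)    boolean mask of foreground (rendered) locations
    """
    fg = 0.5 * np.sum((F[fg_mask] - rendered[fg_mask]) ** 2)   # foreground term
    bg = 0.5 * np.sum((F[~fg_mask] - background) ** 2)         # background term
    return fg + bg  # normalization constants of the Gaussians are dropped
```
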
To train the feature extractor $\Phi$, the neural texture $\{T_y\}$ and the background model $B$ jointly, we utilize the EM-type learning strategy originally introduced for keypoint detection in CoKe [7]. Specifically, the feature extractor is trained using stochastic gradient descent, while the parameters of the generative model $\{T_y\}$ and $B$ are updated with momentum after every gradient step in the feature extractor, which was found to stabilize training convergence.

At inference time, the object pose $\alpha$ can be inferred by minimizing the negative log-likelihood w.r.t. the 3D pose $\alpha$ using gradient descent [38].

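A hedged sketch of this inference step is shown below in PyTorch style; `render_features` stands in for a differentiable soft rasterizer that returns the rendered feature map and its foreground mask, and the loop is a simplification of the actual optimization schedule.

```python
import torch

def fit_pose(F, neural_mesh, background, alpha_init, render_features, steps=50, lr=0.05):
    """Minimize the negative log-likelihood of Eq. (1) w.r.t. the 3D pose alpha."""
    alpha = alpha_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([alpha], lr=lr)
    for _ in range(steps):
        # Differentiable render-and-compare step (assumed helper).
        rendered, fg_mask = render_features(neural_mesh, alpha)
        fg = 0.5 * ((F - rendered) ** 2)[fg_mask].sum()
        bg = 0.5 * ((F - background) ** 2)[~fg_mask].sum()
        loss = fg + bg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return alpha.detach(), loss.item()
```
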
Multi-object competition with 3D-NMS. We extend Neural Meshes to predict the 6D object pose and class label in complex multi-object scenes. In particular, we introduce 3D non-maximum suppression (3D-NMS) into the maximum likelihood inference process. This introduces a competition between Neural Meshes of different categories in explaining the feature map. In contrast to classical 2D-NMS, our 3D-NMS also takes into account the distance of each object to the camera and hence naturally enables reasoning about occlusions of objects in the scene.

We denote the 6D pose as $\gamma = \{x, l\}$, where $x = \{\alpha, \beta\}$ represents the 3D object pose $\alpha$ and the object distance to the camera $\beta$, and $l$ is the 2D object location in the feature map. We first detect the 6D poses of each object category independently and apply 2D-NMS such that for each 2D location $l'$ in a neighborhood defined by radius $r$, the predicted 6D pose $\{x, l\}$ yields the largest activation:

$$
\max_{x} p(F \mid x, l) \quad \text{s.t.} \quad p(F \mid x, l) > p(F \mid x, l'), \; \forall l' \in \{l' \mid 0 < |l' - l| < r\} \tag{2}
$$

We enable multi-category 6D pose estimation by extending this formulation to a 3D non-maximum suppression (3D-NMS). Using $\mathcal{Y}$ to represent the set of all object categories, we model the category label $y$ from a generative perspective:

$$
\max_{x} p(F \mid x, l, y) \quad \text{s.t.} \quad p(F \mid x, l, y) > p(F \mid x, l', y), \; \forall l' \in \{l' \mid 0 < |l' - l| < r\} \tag{3}
$$

$$
\text{and} \quad p(F \mid x, l, y) > p(F \mid x, l, y'), \; \forall y' \neq y \in \mathcal{Y} \tag{4}
$$

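The following is a toy NumPy version of the competition in Eqs. (2)-(4), assuming the per-location scores have already been maximized over the continuous pose $x$ (which includes the distance to the camera); the threshold and radius are illustrative.

```python
import numpy as np

def nms_3d(scores, r=2, thresh=15.0):
    """Toy multi-category NMS over a (num_categories, H, W) score map.

    scores[y, i, j] = max_x p(F | x, l=(i, j), y), i.e. already maximized over pose.
    Returns a list of (category, (i, j)) detections.
    """
    best_cat = scores.argmax(axis=0)   # Eq. (4): best category at each location
    best_val = scores.max(axis=0)
    H, W = best_val.shape
    detections = []
    for i in range(H):
        for j in range(W):
            if best_val[i, j] <= thresh:
                continue
            window = best_val[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            # Eq. (3): keep (i, j) only if it is the maximum within radius r
            if best_val[i, j] >= window.max():
                detections.append((int(best_cat[i, j]), (i, j)))
    return detections
```
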
Dense scene parsing with greedy proposal generation. Typically, object detection in complex scenes requires well-chosen thresholds and detection hyperparameters. Our render-and-compare approach enables us to avoid tedious hyperparameter tuning by maximizing the model likelihood (Eq. (1)) with a greedy proposal strategy. In particular, we optimize the likelihood greedily by starting from the object proposal that explains away the most parts of the image with the highest likelihood, and subsequently update the likelihood of the overlapping proposals taking into account that at every pixel in the feature map only one object can be visible [56]. Formally, given a list of object proposals $\{o_i = (O_{y,i},\alpha_{y,i})\}_{i = 1}^k$ (with predicted category label $y$ and 6D pose $\alpha$), we first order the object proposals based on their likelihood score $s = p(F|o_i,B)$ such that $s_i\leq s_j$ for $i < j$. Based on the ordering, we greedily update the 6D pose $\alpha_{j}$ and the corresponding proposal likelihood for object $o_j$ by masking out the foreground regions of previous objects $o_i$ with $1\leq i\leq j - 1$. In this way, we largely avoid missing close-by objects or duplicated detections. A simplified sketch of this procedure follows below.

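In the sketch, `rescore` is an assumed helper that re-runs render-and-compare for a proposal while ignoring pixels already claimed by earlier objects; the data layout is illustrative.

```python
import numpy as np

def greedy_parse(proposals, rescore):
    """Greedy proposal selection (simplified).

    proposals: list of dicts, each with an initial likelihood 'score' and a boolean
               'fg_mask' of the pixels the proposal renders as foreground.
    """
    order = sorted(proposals, key=lambda p: p["score"], reverse=True)
    explained = np.zeros_like(order[0]["fg_mask"], dtype=bool)
    results = []
    for prop in order:
        visible = prop["fg_mask"] & ~explained     # only one object can be visible per pixel
        prop["score"] = rescore(prop, visible)     # update the likelihood on the visible region
        results.append(prop)
        explained |= prop["fg_mask"]
    return results
```
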
Part and attribute prediction. Given the predicted location and pose of each object, we project the object mesh back onto the image to get the locations of each part. To predict the attributes of the objects and parts, we crop the region containing the object or part from the RGB image, and train an additional CNN classifier on the cropped patches to predict the attributes (color, size, material) and the fine-grained classes (i.e. different sub-types of cars) of each patch using a cross-entropy loss. The reason why this additional CNN classifier is needed, instead of re-using the features from the 6D pose estimator, is that the pose estimation features are learned to be invariant to scale and texture changes, which makes them unsuitable for attribute prediction.

Post-filtering. Finally, we post-process the located objects using the fine-grained CNN classifier. We compare the category labels predicted by the 6D pose estimator with the ones predicted by the CNN classifier, and remove the objects for which these two predictions do not agree. This post-filtering step helps with the duplicated detections that cannot be fully resolved with the 3D-NMS.

Summary. Fig. 2 provides an overview of our scene parser and Fig. 3 visualizes the intermediate results. With the idea of render-and-compare (shown in the green box of Fig. 2), the model first computes an activation map for each possible object category (Fig. 3 II). Next, to infer the category for each object, the category-wise competition 3D-NMS is performed (Fig. 3b) and a post-filtering step is taken to remove mis-detected objects (Fig. 3c). Fig. 3d shows the 6D pose estimation results. To predict parts, we project the 3D object mesh back onto the image to locate parts based on the projected objects (Fig. 3e). In this way, the input image is parsed into a 3D-aware representation, which is ready for question reasoning with program execution.

# 4.3 Program execution

After the 3D-aware scene representations are predicted for the given image, the question is parsed into a reasoning program, which is then executed on the scene representation to predict the answer. The question parsing follows previous work [54], where an LSTM sequence-to-sequence model is trained to parse the question into its corresponding program. Like P-NSVQA [32], each operation in the program is executed on the scene representation in a probabilistic way. In the following, we describe the execution of the new operations we introduced.

The part-related operators are implemented by querying the object-part hierarchy matrix $H$, so that the object containing a given part (part_to_object) and the parts belonging to a given object (object_to_part) can be determined. The pose-related operators are based on the estimated 3D pose in the object attributes $A^o$. For the filter and query operations regarding pose, the 3D poses are quantized into four directions (left, right, front, back). For the pair-wise pose relationships, the azimuth angle between two objects is used to determine the same/opposite/vertical directions. The occlusion-related operations are implemented by querying the occlusion matrix $S$. Based on the occlusion scores $S_{ij}$, representing whether entity $i$ is occluded by entity $j$, we can compute the score of one entity being occluded, $\sum_{j} S_{ij}$ (filter_occludee), find the entities that occlude a given entity (relate_occluded), or find the entities that are occluded by a given entity (relate_occluding).

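As an illustration, a minimal sketch of these operators on the matrices $H$ and $S$ from Sec. 4.1 is given below; the pose quantization boundaries are illustrative and do not necessarily match the dataset's convention.

```python
import numpy as np

def part_to_object(part_idx, H):
    """Index of the object containing part `part_idx` (object-part hierarchy H)."""
    return int(np.argmax(H[:, part_idx]))

def object_to_part(obj_idx, H):
    """Indices of the parts belonging to object `obj_idx`."""
    return np.nonzero(H[obj_idx])[0]

def occlusion_score(entity_idx, S):
    """Score of entity `entity_idx` being occluded by any object (used by filter_occludee)."""
    return float(S[entity_idx].sum())

def quantize_pose(azimuth):
    """Map a continuous azimuth (radians) to one of four facing directions."""
    directions = ["front", "left", "back", "right"]   # assignment of angles is illustrative
    idx = int(((azimuth + np.pi / 4) % (2 * np.pi)) // (np.pi / 2))
    return directions[idx]
```
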
# 5 Experiments

# 5.1 Evaluated methods

We compare our model with three representative VQA models: FiLM [44], mDETR [25], and PNSVQA [32]. Additionally, we introduce a variant of PNSVQA, PNSVQA+Projection, to analyze the benefit of our generative 6D pose estimation approach.

FiLM [44]. Feature-wise Linear Modulation is a representative two-stream feature fusion method. The FiLM model merges the question features extracted with a GRU [12] and image features extracted with a CNN, and predicts answers based on the merged features.

mDETR [25]. mDETR is a pretrained text-guided object detector based on transformers. The model is pretrained with 1.3M image-text pairs and shows strong performance when finetuned on downstream tasks like referring expression understanding or VQA.

PNSVQA [32]. PNSVQA is a SoTA neural symbolic VQA model. It parses the scene using Mask R-CNN [18] and an attribute extraction network, then executes the reasoning program on the parsed visual scenes while taking into account the uncertainty of the scene parser. To extend PNSVQA to the 3D questions in Super-CLEVR-3D, we add a regression head in the attribute extraction network to predict the 3D pose for each object; parts are detected in a similar way as objects by predicting 2D bounding boxes; the part-object associations and occlusions are computed using intersection-over-union: a part belongs to an intersecting object if the part label matches the object label, otherwise it is occluded by this object.

PNSVQA+Projection. Similar to PNSVQA, this model predicts the 6D poses, categories and attributes using Mask R-CNN and the attribute extraction network. The difference is that the parts and occlusions are predicted by projecting the 3D object models onto the image using the predicted 6D pose and category (the same way we find parts and occlusions in our model). This model helps us ablate the influence of the two components in our model, i.e. 6D pose prediction by render-and-compare, and part/occlusion detection with mesh projection.

# 5.2 Experiment setup

Dataset. Our Super-CLEVR-3D dataset shares the same visual scenes with the Super-CLEVR dataset. We re-render the images with more annotations recorded (camera parameters, part annotations, occlusion maps). The dataset splits follow the Super-CLEVR dataset: 20k images for training, 5k for validation, and 5k for testing. For question generation, we create 9 templates for part questions, 17 templates for pose questions, and 35 templates for occlusion questions (with and without parts). For each of the three types, 8 to 10 questions are generated for each image by randomly sampling the templates. We ensure that the questions are not ill-posed and cannot be answered by taking shortcuts, i.e. the questions contain no redundant reasoning steps, following the no-redundancy setting in [32]. More details, including the list of question templates, can be found in the Appendix.

Implementation details. We train the 6D pose estimator and the CNN attribute classifier separately. We train the 6D pose estimator (including the contrastive feature backbone and the neural mesh models for each of the 5 classes) for 15k iterations with batch size 15, which takes around 2 hours on an NVIDIA RTX A5000 for each class. The attribute classifier, which is a ResNet50, is shared for objects and parts. It is trained for 100 epochs with batch size 64. During inference, it takes 22s for 6D pose estimation and 10s for object mesh projection for all the objects in one image. During inference of the 6D pose estimator, we assume theta is 0. During 3D-NMS filtering, we choose the radius $r$ as 2, and we also filter the object proposals with a threshold of 15 on the score map.

# 5.3 Quantitative Results

We trained our model and baselines on Super-CLEVR-3D's training split, reporting answer accuracies on the test split in Tab. 1. Accuracies for each question type are detailed separately.

Table 1: Model accuracies on the Super-CLEVR-3D testing split, reported for each question type, i.e. questions about parts, 3D poses, occlusions between objects, and occlusions between objects and parts.

| Model | Mean | Part | Pose | Occ. | Part+Occ. |
| --- | --- | --- | --- | --- | --- |
| FiLM [44] | 50.53 | 38.24 | 67.82 | 51.41 | 44.66 |
| mDETR [25] | 55.72 | 41.52 | 71.76 | 64.99 | 50.47 |
| PNSVQA [32] | 64.39 | 50.61 | 87.78 | 65.80 | 53.35 |
| PNSVQA+Projection | 68.15 | 56.30 | 86.70 | 70.70 | 58.90 |
| PO3D-VQA (Ours) | 75.64 | 71.85 | 86.40 | 76.90 | 67.40 |

Comparison with baselines. First, among all the baseline methods, the neural symbolic method PNSVQA performs the best (64.4% accuracy), outperforming the end-to-end methods mDETR and FiLM by a large margin ($>8\%$). This shows the advantage of the step-wise modular reasoning procedure, which agrees with the findings in prior works that the modular methods excel on simulated benchmarks that require long-trace reasoning. Second, our model achieves 75.6% average accuracy, which significantly outperforms all the evaluated models. In particular, comparing our PO3D-VQA with its 2D counterpart PNSVQA, we see that the injection of 3D knowledge brings a large performance boost of 11%, suggesting the importance of 3D understanding.

Comparison with PNSVQA variants. By analyzing the results of the PNSVQA variants (PNSVQA, PNSVQA+Projection, and our PO3D-VQA), we show (a) the benefit of estimating object 3D poses using our analysis-by-synthesis method over regression and (b) the benefit of object-part structure knowledge. First, by detecting parts using 3D model projection, PNSVQA+Projection improves the PNSVQA results by $4\%$, which indicates that locating parts based on objects using the object-part structure knowledge is beneficial. Second, by estimating object 6D poses with our generative render-and-compare method, our PO3D-VQA outperforms PNSVQA+Projection by $7\%$ (from $68.2\%$ to $75.6\%$), showing the advantage of our render-and-compare model. Moreover, looking at the per-type results, we find that the improvement of our PO3D-VQA is most significant on the part-related questions ($21\%$ improvement over PNSVQA) and part-with-occlusion questions ($14\%$), while the accuracy on pose-related questions does not improve. The reason is that part and occlusion predictions require precise pose predictions for accurate mesh projection, while the pose questions only require a rough pose to determine the facing direction.

# 5.4 Analysis and discussions

To further analyze the advantage of PO3D-VQA over the other PNSVQA variants, we compare the models on questions of different difficulty levels. The benefit of our model is most significant on hard questions. In Fig. 4, we plot the relative accuracy drop${}^{3}$ of each model on questions with different occlusion ratios and questions with different part sizes.

![](images/9785c0b763ac4ef5aae9f2eee771ee4f3d33598ab74be86b9e3ec506baefe6bb.jpg)
Figure 4: Analysis on questions of different difficulty levels. The plots show the relative accuracy drop of models, on pose questions w.r.t. different occlusion ratios (a), on part questions w.r.t. different part sizes (b), and on part+occlusion questions w.r.t. different part sizes (c). Panels: (a) Pose w.r.t. Occlusion Ratio; (b) Part w.r.t. Part Size; (c) Part + Occlusion w.r.t. Part Size.

Questions with different occlusion ratios. We sort pose-related questions into different sub-groups based on their occlusion ratios and evaluate the models on each of the sub-groups. The occlusion ratio $r$ of a question is the minimum of the occlusion ratios of all the objects in its reasoning trace. We choose $r$ from $0\%$ to $30\%$, in increments of $5\%$. The results are shown in Fig. 4 (a). Our PO3D-VQA is much more robust to occlusions compared to the other two methods: while the performances of all three models decrease as the occlusion ratio increases, the relative drop of ours is much smaller than the others. The results show that our render-and-compare scene parser is more robust to heavy occlusions compared with the discriminative methods.

Questions with different part sizes. Questions about small parts are harder than ones about larger parts. We sort the questions into different part size intervals $(s, t)$, where the largest part that the question refers to has an area (number of pixels occupied) larger than $s$ and smaller than $t$. We compare the models on the part questions and the part+occlusion questions with different part sizes in Fig. 4 (b) and (c). In (b), the accuracy drop of PO3D-VQA is smaller than that of PNSVQA+Projection and PNSVQA when parts get smaller. In (c), PNSVQA+Projection is slightly better than our model, and both are better than the original PNSVQA.

In summary, by sorting questions into different difficulty levels based on occlusion ratios and part sizes, we show the advantage of our PO3D-VQA on harder questions, indicating that our model is robust to occlusions and small part sizes.

Qualitative results. Fig. 5 shows examples of predictions for our model and the PNSVQA variants. In (a), the question asks about occlusion; with a slight error in the pose prediction, PNSVQA+Projection misses the occluded bus and predicts the wrong answer, while our model is correct thanks to its accurate pose estimate. In (b), the question refers to the heavily occluded minivan that is difficult to detect, but our model makes the correct prediction thanks to its robustness to occlusions.

(a) Q: What is the material of the gray object that is occluded? A: rubber. Ours: rubber; PNSVQA+Proj.: metal (wrong); PNSVQA: metal (wrong).
![](images/c25428474835f1455d2e42274a042dd8feb0481fe1c87ea7eeb88417bd82b819.jpg)

(b) Q: Which direction is the minivan facing? A: left. Ours: left; PNSVQA+Proj.: right (wrong); PNSVQA: front (wrong).
![](images/10d0a62c36e1bd105597f1c1894ee0cb35802268d9a3a5eb9cf325fcaf662f8e.jpg)

Figure 5: Examples of models' predictions. Our model (a) predicts the object pose accurately and (b) is robust to heavy occlusions. Red boxes are for visualization only.

Limitations and failure cases. Due to the difficulty of collecting real images with compositional scenes and 3D annotations, our work is currently limited by its synthetic nature. PO3D-VQA sometimes fails to detect multiple objects if they are from the same category and heavily overlap (see Appendix D for more visualizations). 3D-NMS effectively improves the dense scene parsing results when objects are from different categories, but conceptually it is limited when objects are from the same category. However, 6D pose estimation in dense scenes is a challenging problem, and many current works on 6D pose estimation still focus on simple scenes with single objects [38, 50, 57].

# 6 Further Discussion

In this section, we discuss two meaningful extensions of our work: the incorporation of z-direction questions and the application of our model to real-world images.

Z-direction questions. While the proposed Super-CLEVR-3D dataset has been designed with 3D-aware questions, all objects within it are placed on the same surface. Introducing variability in the z direction can further enrich our dataset with more comprehensive 3D spatial relationships.

We consider a scenario where objects of the aeroplane category appear at different elevations, introducing the $z$ dimension into the spatial relationships (see Fig. 6). This allows us to formulate questions that probe the model's understanding of height relationships and depth perception. We create a subset containing 100 images and 379 questions and test our PO3D-VQA model directly on it without retraining the 6D parser. On this dataset, our PO3D model achieves $90.33\%$ accuracy on height relationship questions and $78.89\%$ on depth-related questions, suggesting that our model can successfully handle questions about height. As the baseline models only use the bounding box to determine the spatial relationship between objects, they are not able to determine the height relationships.

![](images/aea234a674cf1b45c10bf7f7bce8c5bb11ae651f51084dab8553d6e49311c8f6.jpg)
Height question: There is a blue object that is below the biplane; what shape is it? Answer: Tandem. Depth question: Is the biplane closer than the red motorbike? Answer: Yes.

![](images/ad7af7995390d2805c9a82f17fca0d9f1098cbd21474faa515176af0701c5216.jpg)
Height question: How many objects are above the shiny bicycle? Answer: 2. Depth question: Does the truck have a greater distance than the shiny bus? Answer: Yes.

Figure 6: Example images and questions of objects with different elevations.

Extension to real-world images. While our PO3D-VQA model has demonstrated impressive performance on the synthetic Super-CLEVR-3D dataset, an essential research direction is extending it to real images or other 3D VQA datasets (such as GQA and FE-3DGQA). However, it is not trivial to truly evaluate it on these real-world problems; a primary challenge is the lack of 3D annotations and the highly articulated categories (like the human body) in these datasets.

However, we show that our PO3D-VQA model can, in principle, work on realistic images. We manually generate several realistic image samples using vehicle objects (e.g. car, bus, bicycle) from ImageNet with 3D annotations (see Fig. 7) and real-image backgrounds. In this experiment, the pose estimator is trained on the PASCAL3D+ dataset, and is used to predict the poses of objects from the image before pasting, as shown in (b). The attribute (color) prediction module is trained on Super-CLEVR-3D and the object shapes are predicted by a ResNet trained on ImageNet. Our model can correctly predict answers to questions about the object pose, parts, and occlusions, e.g. "Which object is occluded by the mountain bike".

![](images/3c7ab2b2a4edf2ff65398d92ba9b2cd094a90606baa0cd9b03174dc124524d23.jpg)

[Pose] Q: Which direction does the mountain bike face to? Ours: Left. Q: Which direction does the race car face to? Ours: Right. [Part] Q: What is the color of the fin that belongs to the aeroplane? Ours: Red. Q: What is the shape of the object that has a wing? Ours: Aeroplane. [Occ.] Q: Which object is occluded by the mountain bike? Ours: Trolleybus. Q: What is the color of the occluded object? Ours: Red.

Figure 7: Examples of results on realistic images. Given a realistic image (a1, a2), our model can successfully estimate the 6D poses of objects (b1, b2) and answer the 3D-aware questions (c1, c2).

# 7 Conclusion

In this work, we study the task of 3D-aware VQA. We propose the Super-CLEVR-3D dataset containing questions explicitly querying 3D understanding including object parts, 3D poses, and occlusions. To address the task, a 3D-aware neural symbolic model PO3D-VQA is proposed, which enhances the probabilistic symbolic model with a robust 3D scene parser based on analysis-by-synthesis. With the merits of accurate 3D scene parsing and symbolic execution, our model outperforms existing methods by a large margin. Further analysis shows that the improvements are even larger on harder questions. With the dataset, the model, and the experiments, we highlight the benefit of symbolic execution and the importance of 3D understanding for 3D-aware VQA.

# Acknowledgements

We thank the anonymous reviewers for their valuable comments. We thank Qing Liu, Chenxi Liu, Elias Stengel-Eskin, and Benjamin Van Durme for the helpful discussions on early versions of the project. This work is supported by the Office of Naval Research with grants N00014-23-1-2641 and N00014-21-1-2812. A. Kortylewski acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No. 468670075.

303
+ # References
304
+
305
+ [1] Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356, 2016.
306
+ [2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086, 2018.
307
+ [3] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39-48, 2016.
308
+ [4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433, 2015.
309
+ [5] Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe. Scanqa: 3d question answering for spatial scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19129-19139, 2022.
310
+ [6] Dzmitry Bahdanau, Harm de Vries, Timothy J O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron Courville. Closure: Assessing systematic generalization of clevr models. arXiv preprint arXiv:1912.05783, 2019.
311
+ [7] Yutong Bai, Angtian Wang, Adam Kortylewski, and Alan Yuille. Coke: Contrastive learning for robust keypoint detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 65-74, 2023.
312
+ [8] Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. Rubi: Reducing unimodal biases for visual question answering. Advances in neural information processing systems, 32, 2019.
313
+ [9] Paola Cascante-Bonilla, Hui Wu, Letao Wang, Rogerio S Feris, and Vicente Ordonez. Simvqa: Exploring simulated environments for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5056-5066, 2022.
314
+ [10] Runnan Chen, Youquan Liu, Lingdong Kong, Xinge Zhu, Yuexin Ma, Yikang Li, Yuenan Hou, Yu Qiao, and Wenping Wang. Clip2scene: Towards label-efficient 3d scene understanding by clip. arXiv preprint arXiv:2301.04926, 2023.
315
+ [11] Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, and Alan Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1971-1978, 2014.
316
+ [12] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
317
+ [13] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-10, 2018.
318
+ [14] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
319
+ [15] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
320
+ [16] Vipul Gupta, Zhuowan Li, Adam Kortylewski, Chenyu Zhang, Yingwei Li, and Alan Yuille. Swapmix: Diagnosing and regularizing the over-reliance on visual context in visual question answering. arXiv preprint arXiv:2204.02285, 2022.
321
+ [17] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3608-3617, 2018.
322
+ [18] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017.
323
+
324
+ [19] Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B Tenenbaum, and Chuang Gan. 3d concept learning and reasoning from multi-view images. arXiv preprint arXiv:2303.11327, 2023.
325
+ [20] Yining Hong, Li Yi, Josh Tenenbaum, Antonio Torralba, and Chuang Gan. Ptr: A benchmark for part-based conceptual, relational, and physical reasoning. Advances in Neural Information Processing Systems, 34, 2021.
326
+ [21] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. arXiv preprint arXiv:2307.12981, 2023.
327
+ [22] Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. Explainable neural computation via stack neural module networks. In Proceedings of the European conference on computer vision (ECCV), pages 53-69, 2018.
328
+ [23] Drew A Hudson and Christopher D Manning. Compositional attention networks for machine reasoning. 2018.
329
+ [24] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901-2910, 2017.
330
+ [25] Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. Mdetr-modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780-1790, 2021.
331
+ [26] Corentin Kervadec, Grigory Antipov, Moez Baccouche, and Christian Wolf. Roses are red, violets are blue... but should vqa expect them to? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2776-2785, 2021.
332
+ [27] Corentin Kervadec, Theo Jaunet, Grigory Antipov, Moez Baccouche, Romain Vuillemot, and Christian Wolf. How transferable are reasoning patterns in vqa? 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4205-4214, 2021.
333
+ [28] Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. Advances in Neural Information Processing Systems, 31, 2018.
334
+ [29] Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. arXiv preprint arXiv:1903.03166, 2019.
335
+ [30] Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. Relation-aware graph attention network for visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10313-10322, 2019.
336
+ [31] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. ECCV 2020, 2020.
337
+ [32] Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan Yuille. Super-clevr: A virtual benchmark to diagnose domain robustness in visual reasoning. arXiv preprint arXiv:2212.00259, 2022.
338
+ [33] Qing Liu, Adam Kortylewski, Zhishuai Zhang, Zizhang Li, Mengqi Guo, Qihao Liu, Xiaoding Yuan, Jiteng Mu, Weichao Qiu, and Alan Yuille. Learning part segmentation through unsupervised domain adaptation from synthetic vehicles. In CVPR, 2022.
339
+ [34] Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4185-4194, 2019.
340
+ [35] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3d reasoning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7708-7717, 2019.
341
+ [36] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019.
342
+ [37] Tiange Luo, Chris Rockwell, Honglak Lee, and Justin Johnson. Scalable 3d captioning with pretrained models. arXiv preprint arXiv:2306.07279, 2023.
343
+
344
+ [38] Wufei Ma, Angtian Wang, Alan Yuille, and Adam Kortylewski. Robust category-level 6d pose estimation with coarse-to-fine rendering of neural features. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part IX, pages 492-508. Springer, 2022.
345
+ [39] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. In International Conference on Learning Representations, 2023.
346
+ [40] Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision. In International Conference on Learning Representations, 2019.
347
+ [41] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12700-12710, June 2021.
348
+ [42] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12695-12705, 2021.
349
+ [43] Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi, Marc Pollefeys, and Thomas Funkhouser. Openscene: 3d scene understanding with open vocabularies. In CVPR, 2023.
350
+ [44] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
351
+ [45] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. Advances in neural information processing systems, 28, 2015.
352
+ [46] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015.
353
+ [47] Leonard Salewski, A Koepke, Hendrik Lensch, and Zeynep Akata. Clevr-x: A visual reasoning dataset for natural language explanations. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pages 69-88. Springer, 2022.
354
+ [48] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019.
355
+ [49] Angtian Wang, Adam Kortylewski, and Alan Yuille. Nemo: Neural mesh models of contrastive features for robust 3d pose estimation. In International Conference on Learning Representations, 2021.
356
+ [50] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond Pascal: A benchmark for 3d object detection in the wild. In IEEE winter conference on applications of computer vision, pages 75-82. IEEE, 2014.
357
+ [51] Xu Yan, Zhihao Yuan, Yuhao Du, Yinghong Liao, Yao Guo, Zhen Li, and Shuguang Cui. Clevr3d: Compositional language and elementary visual reasoning for question answering in 3d real-world scenes. arXiv preprint arXiv:2112.11691, 2021.
358
+ [52] Shuquan Ye, Dongdong Chen, Songfang Han, and Jing Liao. 3d question answering. IEEE Transactions on Visualization and Computer Graphics, 2022.
359
+ [53] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. Clevrer: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442, 2019.
360
+ [54] Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B Tenenbaum. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. In Advances in Neural Information Processing Systems (NIPS), 2018.
361
+ [55] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6281–6290, 2019.
362
+ [56] Xiaoding Yuan, Adam Kortylewski, Yihong Sun, and Alan Yuille. Robust instance segmentation through reasoning about multi-object occlusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11141-11150, 2021.
363
+
364
+ [57] Yanjie Ze and Xiaolong Wang. Category-level 6d object pose estimation in the wild: A semi-supervised learning approach and a new dataset. Advances in Neural Information Processing Systems, 35:27469-27483, 2022.
365
+ [58] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5317-5327, 2019.
366
+ [59] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Making visual representations matter in vision-language models. CVPR 2021, 2021.
367
+ [60] Xingyi Zhou, Arjun Karpur, Linjie Luo, and Qixing Huang. Starmap for category-agnostic keypoint and viewpoint estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 318–334, 2018.
368
+ [61] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995–5004, 2016.
369
+
370
+ # A Dataset Details
371
+
372
+ # A.1 Part list
373
+
374
+ In Super-CLEVR-3D, the parts of each object are listed in Tab. 2.
375
+
376
+ # A.2 Question templates
377
+
378
+ Part Questions We collect 9 part-based templates when generating the part-based questions, as shown in Tab. 4. In the table, <attribute> denotes one attribute among shape, material, color, or size to be queried, and <object> (or <object 1>, <object 2>) denotes one object to be filtered by a combination of shape, material, color, and size. Different from the pose and occlusion questions, we do not query the size of the object.
379
+
380
+ 3D Pose Questions We design 17 3D pose-based templates for question generation (as shown in Tab. 5). The 17 templates consist of: 1 template querying the pose; 4 templates querying shape, material, color, or size, where the pose is one of the filtering conditions; and 12 templates querying shape, material, color, or size, where a pose relationship is the filtering condition.
381
+
382
+ Occlusion Questions There are 35 templates for occlusion question generation, as shown in Tab. 6, covering occlusions of objects and occlusions of parts.
383
+
384
+ The occlusion of objects covers occlusion status and occlusion relationships. For the occlusion status of an object, there are 4 templates querying the shape, color, material, and size, respectively. There are 2 occlusion relationships between objects (occluded and occluding), and each of them has 4 templates.
385
+
386
+ Similarly, we then create templates about occlusion status and occlusion relationships for the parts. The only difference between objects and parts is that parts have only 3 attributes to be queried: shape (name), material, and color. A small sketch of how such templates can be instantiated is given below.
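+
+ To make the template filling concrete, here is a minimal sketch of how one part template could be instantiated from a scene annotation; the dictionary format and helper below are illustrative assumptions, not the actual generation code.
+
+ ```python
+ import random
+
+ # Hypothetical scene annotation: one object with attributes and parts.
+ scene = {
+     "objects": [{
+         "shape": "suv", "color": "red", "material": "rubber", "size": "large",
+         "parts": [{"name": "hood", "color": "blue", "material": "metal"}],
+     }]
+ }
+
+ def fill_part_template(scene):
+     """Instantiate 'What is the <attribute> of the <part> of the <object>?'."""
+     obj = random.choice(scene["objects"])
+     part = random.choice(obj["parts"])
+     attribute = random.choice(["color", "material"])  # part questions do not query size
+     obj_ref = f"{obj['size']} {obj['color']} {obj['material']} {obj['shape']}"
+     question = f"What is the {attribute} of the {part['name']} of the {obj_ref}?"
+     return question, part[attribute]
+
+ print(fill_part_template(scene))
+ ```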
387
+
388
+ # A.3 Statistics
389
+
390
+ As a result, we generate a total of 314,988 part questions, 314,986 pose questions, 228,397 occlusion questions, and 314,988 occlusion questions with parts.
391
+
392
+ In Fig. 8, we show the distributions of all attributes of objects, including categories, colors, sizes, and materials.
393
+
394
+ ![](images/60469839bd0ebdfeb8ab9a042a7379f0b06de9f7b08c55ba530e97762c67eb3c.jpg)
395
+ Figure 8: Distributions of all the attributes of objects, including categories, colors, sizes, and materials.
396
+
397
+ # B Implementation details for the baselines
398
+
399
+ FiLM and mDETR are trained with the default settings from their official implementations. FiLM is trained for 100k iterations with batch size 256. mDETR is trained for 30 epochs with batch size 64 on 2 GPUs for both the grounding stage and the answer classification stage.
400
+
401
+ For P-NSVQA, we first train a Mask R-CNN for 30k iterations with batch size 16 to detect the objects and parts, and then train the attribute extraction model (using a ResNet-50 backbone) for 100 epochs with batch size 64. Different fully connected (FC) layers are used for different types of questions: the
402
+
403
+ part questions and occlusion questions have 4 FC layers for shape, material, color, and size classification (the parts also have size annotations in the dataset when generating scene files, but they are meaningless for question answering). The pose questions include pose prediction of an object, so we add a new FC layer with 1 output dimension to predict the rotation, followed by an MSE loss during training. For the different types of questions (part, pose, and occlusion), the Mask R-CNN and the attribute extraction model are trained separately.
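+
+ As a rough sketch of what such an attribute extraction network could look like, the module below puts one FC head per attribute on top of a shared ResNet-50; the class counts and head names are illustrative assumptions rather than the exact configuration used.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchvision.models import resnet50
+
+ class AttributeExtractor(nn.Module):
+     """Shared ResNet-50 features with one FC head per queried attribute."""
+     def __init__(self, n_shape=21, n_material=2, n_color=8, n_size=2):  # assumed counts
+         super().__init__()
+         backbone = resnet50(weights=None)
+         backbone.fc = nn.Identity()          # keep the 2048-d pooled features
+         self.backbone = backbone
+         self.shape_head = nn.Linear(2048, n_shape)
+         self.material_head = nn.Linear(2048, n_material)
+         self.color_head = nn.Linear(2048, n_color)
+         self.size_head = nn.Linear(2048, n_size)
+         self.pose_head = nn.Linear(2048, 1)  # rotation regression for pose questions
+
+     def forward(self, crops):
+         f = self.backbone(crops)
+         return {
+             "shape": self.shape_head(f), "material": self.material_head(f),
+             "color": self.color_head(f), "size": self.size_head(f),
+             "pose": self.pose_head(f),   # trained with an MSE loss
+         }
+
+ model = AttributeExtractor()
+ out = model(torch.randn(4, 3, 224, 224))
+ print({k: v.shape for k, v in out.items()})
+ ```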
404
+
405
+ In the PNSVQA+Projection baseline, we first train a Mask R-CNN to detect all of the objects in the scene and predict their 3D pose (azimuth, elevation, and theta) without category labels. This Mask R-CNN is trained for 15,000 iterations with batch size 8. We use an SGD optimizer with a learning rate of 0.02, momentum of 0.9, and weight decay of 0.0001. Then, we use the same setting as our PO3D-VQA to train a CNN to classify the attributes of objects and parts.
406
+
407
+ # C Detailed results of Analysis
408
+
409
+ As an extension of Section 5.4 in the main paper, here we include the numerical values of accuracy and relative drop for the pose, part, and occlusion + part questions with respect to occlusion ratio or part size. The results are shown in Tab. 7, Tab. 9, and Tab. 8, respectively.
410
+
411
+ # D Failure cases
412
+
413
+ Fig. 9 shows examples of failure cases of our PO3D-VQA, as described in Section 5.4 of the main paper. In (a) and (b), PO3D-VQA misses the bicycle behind when the two bicycles overlap heavily; the same happens for the two motorbikes in (c) and (d).
414
+
415
+ ![](images/2198fb95ba84f423f9b286911203b9cabb9f7879c94e63e5c2566eac36fe26d6.jpg)
416
+ (a)
417
+
418
+ ![](images/b215ba34835a47da4eab3f2a07ea58ff001fc0022f62e8d2768134148f1f2ccf.jpg)
419
+ (b)
420
+ Figure 9: Failure cases of our PO3D-VQA. (a) and (c) are the input images with the objects missed by the model; (b) and (d) are the re-projection results from the model.
421
+
422
+ ![](images/5205f4227568aec91e226f4b9a635d160690549c1fbc01f2ec83ac30b674cc6c.jpg)
423
+ (c)
424
+
425
+ ![](images/f3e65e35a0bf16f4301e7a42822fbd23051e6ccbe9fe869c0a9ffc99bba6b768.jpg)
426
+ (d)
427
+
428
+ Table 2: List of objects and parts.
429
+
430
+ <table><tr><td>shape</td><td>part list</td></tr><tr><td>airliner</td><td>left door, front wheel, fin, right engine, propeller, back left wheel, left engine, back right wheel, left tailplane, right door, right tailplane, right wing, left wing</td></tr><tr><td>biplane</td><td>front wheel, fin, propeller, left tailplane, right tailplane, right wing, left wing</td></tr><tr><td>jet</td><td>left door, front wheel, fin, right engine, propeller, back left wheel, left engine, back right wheel, left tailplane, right tailplane, right wing, left wing</td></tr><tr><td>fighter</td><td>fin, right engine, left engine, left tailplane, right tailplane, right wing, left wing</td></tr><tr><td>utility bike</td><td>left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, carrier, fork, right crank arm, front fender, drive chain, back fender, left crank arm, side stand, right pedal</td></tr><tr><td>tandem bike</td><td>rearlight, front wheel, back wheel, fork, front fender, back fender</td></tr><tr><td>road bike</td><td>left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, fork, right crank arm, drive chain, left crank arm, right pedal</td></tr><tr><td>mountain bike</td><td>left handle, brake system, front wheel, left pedal, right handle, back wheel, saddle, fork, right crank arm, drive chain, left crank arm, right pedal</td></tr><tr><td>articulated bus</td><td>left tail light, front license plate, front right door, back bumper, right head light, front left wheel, left mirror, right tail light, back right door, back left wheel, back right wheel, back license plate, front right wheel, left head light, right mirror, trunk, mid right door, roof</td></tr><tr><td>double bus</td><td>left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back left wheel, back right wheel, back license plate, mid left door, front left door, front right wheel, left head light, right mirror, trunk, mid right door, roof</td></tr><tr><td>regular bus</td><td>left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back right door, back left wheel, back right wheel, back license plate, front right wheel, left head light, right mirror, trunk, mid right door, roof</td></tr><tr><td>school bus</td><td>left tail light, front license plate, front right door, front bumper, back bumper, right head light, front left wheel, left mirror, right tail light, back left wheel, back right wheel, back license plate, mid left door, front right wheel, left head light, right mirror, roof</td></tr><tr><td>truck</td><td>front left door, left tail light, left head light, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, roof, front right door</td></tr><tr><td>suv</td><td>front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, roof, front right door, back license plate</td></tr><tr><td>minivan</td><td>front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, 
back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate</td></tr><tr><td>sedan</td><td>front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate</td></tr><tr><td>wagon</td><td>front left door, left tail light, left head light, back left door, back right wheel, right head light, front bumper, right mirror, front license plate, front right wheel, back bumper, left mirror, back left wheel, right tail light, hood, trunk, front left wheel, back right door, roof, front right door, back license plate</td></tr><tr><td>chopper</td><td>left handle, center headlight, front wheel, right handle, back wheel, center taillight, left mirror, gas tank, front fender, fork, drive chain, left footrest, right mirror, windscreen, engine, back fender, right exhaust, seat, panel, right footrest</td></tr><tr><td>scooter</td><td>left handle, center headlight, front wheel, right handle, back cover, back wheel, center taillight, left mirror, front cover, fork, drive chain, right mirror, engine, left exhaust, back fender, seat, panel</td></tr><tr><td>cruiser</td><td>left handle, center headlight, right headlight, right taillight, front wheel, right handle, back cover, back wheel, left taillight, left mirror, left headlight, gas tank, front cover, front fender, fork, drive chain, left footrest, license plate, right mirror, windscreen, left exhaust, back fender, right exhaust, seat, panel, right footrest</td></tr><tr><td>dirtbike</td><td>left handle, front wheel, right handle, back cover, back wheel, gas tank, front cover, front fender, fork, drive chain, left footrest, engine, right exhaust, seat, panel, right footrest</td></tr></table>
431
+
432
+ Table 4: Templates of parts questions
433
+
434
+ <table><tr><td>Templates</td><td>Count</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part&gt; of the &lt;object&gt;?</td><td>3</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object&gt; that has a &lt;part&gt;?</td><td>3</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part 1&gt; that belongs to the same object as the &lt;part 2&gt;?</td><td>3</td></tr></table>
435
+
436
+ Table 5: Templates of pose questions
437
+
438
+ <table><tr><td>Templates</td><td>Count</td></tr><tr><td>Which direction the &lt;object&gt; is facing?</td><td>1</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object&gt; which face to the &lt;0&gt;?</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; that faces the same direction as a &lt;object 2&gt;</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; that faces the opposite direction as a &lt;object 2&gt;</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; that faces the vertical direction as a &lt;object 2&gt;</td><td>4</td></tr></table>
439
+
440
+ Table 6: Templates of occlusion questions
441
+
442
+ <table><tr><td>Templates</td><td>Count</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object&gt; that is occluded?</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; that is occluded by the &lt;object 2&gt; ?</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; that occludes the &lt;object 2&gt; ?</td><td>4</td></tr><tr><td>Is the &lt;part&gt; of the &lt;object&gt; occluded?</td><td>1</td></tr><tr><td>Which part of the &lt;object&gt; is occluded?</td><td>1</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object&gt; whose &lt;part&gt; is occluded?</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part&gt; which belongs to an occluded &lt;object&gt; ?</td><td>3</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part 1&gt; which belongs to the &lt;object&gt; whose &lt;part 2&gt; is occluded?</td><td>3</td></tr><tr><td>Is the &lt;part&gt; of the &lt;object 1&gt; occluded by the &lt;object 2&gt;</td><td>1</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;object 1&gt; whose &lt;part&gt; is occluded by the &lt;object 2&gt; ?</td><td>4</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part&gt; which belongs to &lt;object 1&gt; which is occluded by the &lt;object 2&gt;</td><td>3</td></tr><tr><td>What is the &lt;attribute&gt; of the &lt;part 1&gt; which belongs to the same object whose &lt;part 2&gt; is occluded by the &lt;object 2&gt; ?</td><td>3</td></tr></table>
443
+
444
+ Table 7: Accuracy value and relative drop for pose questions wrt. occlusion ratio
445
+
446
+ <table><tr><td></td><td>Occlusion Ratio</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td><td>25</td><td>30</td></tr><tr><td rowspan="2">PNSVQA</td><td>Accuracy</td><td>87.43</td><td>74.09</td><td>74.09</td><td>63.16</td><td>62.01</td><td>60.33</td><td>58.52</td></tr><tr><td>Drop</td><td>0.00%</td><td>15.26%</td><td>15.26%</td><td>27.76%</td><td>29.08%</td><td>31.00%</td><td>33.07%</td></tr><tr><td rowspan="2">PNSVQA + Projection</td><td>Accuracy</td><td>86.30</td><td>74.61</td><td>67.20</td><td>66.78</td><td>60.26</td><td>56.52</td><td>55.56</td></tr><tr><td>Drop</td><td>0.00%</td><td>13.54%</td><td>22.13%</td><td>22.62%</td><td>30.17%</td><td>34.51%</td><td>35.63%</td></tr><tr><td rowspan="2">Ours</td><td>Accuracy</td><td>86.43</td><td>86.05</td><td>84.32</td><td>75.00</td><td>79.44</td><td>73.22</td><td>67.98</td></tr><tr><td>Drop</td><td>0.00%</td><td>0.44%</td><td>2.44%</td><td>13.22%</td><td>8.09%</td><td>15.28%</td><td>21.35%</td></tr></table>
447
+
448
+ Table 8: Accuracy value and relative drop for occlusion + part wrt. part size
449
+
450
+ <table><tr><td></td><td>Part Size</td><td>max</td><td>300</td><td>150</td><td>100</td><td>50</td><td>20</td></tr><tr><td rowspan="2">PNSVQA</td><td>Accuracy</td><td>58.18</td><td>54.98</td><td>54.05</td><td>52.09</td><td>45.20</td><td>21.28</td></tr><tr><td>Drop</td><td>0.00%</td><td>5.49%</td><td>7.10%</td><td>10.47%</td><td>22.31%</td><td>63.43%</td></tr><tr><td rowspan="2">PNSVQA + Projection</td><td>Accuracy</td><td>61.85</td><td>50.64</td><td>56.77</td><td>53.97</td><td>55.29</td><td>45.83</td></tr><tr><td>Drop</td><td>0.00%</td><td>18.11%</td><td>8.20%</td><td>12.74%</td><td>10.60%</td><td>25.89%</td></tr><tr><td rowspan="2">Ours</td><td>Accuracy</td><td>81.68</td><td>75.32</td><td>77.20</td><td>71.54</td><td>67.00</td><td>53.19</td></tr><tr><td>Drop</td><td>0.00%</td><td>7.78%</td><td>5.49%</td><td>12.41%</td><td>17.97%</td><td>34.88%</td></tr></table>
451
+
452
+ Table 9: Accuracy value and relative drop for part wrt. part size
453
+
454
+ <table><tr><td></td><td>Part Size</td><td>max</td><td>300</td><td>150</td><td>100</td><td>50</td><td>20</td></tr><tr><td rowspan="2">PNSVQA</td><td>Accuracy</td><td>57.31</td><td>51.00</td><td>37.50</td><td>44.18</td><td>40.85</td><td>29.73</td></tr><tr><td>Drop</td><td>0.00%</td><td>11.02%</td><td>34.57%</td><td>22.92%</td><td>28.73%</td><td>48.12%</td></tr><tr><td rowspan="2">PNSVQA + Projection</td><td>Accuracy</td><td>58.89</td><td>57.54</td><td>42.64</td><td>43.20</td><td>46.73</td><td>38.67</td></tr><tr><td>Drop</td><td>0.00%</td><td>2.30%</td><td>27.60%</td><td>26.65%</td><td>20.65%</td><td>34.34%</td></tr><tr><td rowspan="2">Ours</td><td>Accuracy</td><td>64.04</td><td>64.80</td><td>60.16</td><td>57.03</td><td>49.05</td><td>55.41</td></tr><tr><td>Drop</td><td>0.00%</td><td>-1.19%</td><td>6.06%</td><td>10.94%</td><td>23.41%</td><td>13.48%</td></tr></table>
3dawarevisualquestionansweringaboutpartsposesandocclusions/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8bf928a83023ac4032d19c09774d379c95626cb7faf49b3d187f44c881ecc0f
3
+ size 1108112
3dawarevisualquestionansweringaboutpartsposesandocclusions/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ac4ef7b75e69bc41b6584e8295c98ecda079508b256ff33d08d07bec3a36b41
3
+ size 561605
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0fd1fd8cd29ac5dc9d1519189d8c9d8c4e5ae2586d8b3a97878c6a88dcff5a17
3
+ size 80600
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f42d0881fb5e97e2bcf1c19024c7e8cd4fb1daac5e93863cf519082c2d1696f5
3
+ size 96090
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/c88da8aa-4777-4c85-87be-c4b38b9e7ee2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f57c6c1cb7fda30438758c22859c2ad76ea5f8a6f65e0a0fdee87043b927156
3
+ size 4368923
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/full.md ADDED
@@ -0,0 +1,263 @@
 
 
 
 
1
+ # 3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection
2
+
3
+ Yunhao Ge $^{\diamond \dagger}$ , Hong-Xing Yu $^{\diamond}$ , Cheng Zhao $^{\S}$ , Yuliang Guo $^{\S}$ , Xinyu Huang $^{\S}$ , Liu Ren $^{\S}$ , Laurent Itti $^{\dagger}$ , Jiajun Wu $^{\diamond}$
4
+
5
+ $^{\diamond}$ Stanford University †University of Southern California
6
+
7
+ $^{\S}$ Bosch Research North America, Bosch Center for Artificial Intelligence (BCAI)
8
+
9
+ {yunhaoge, koven, jiajunwu}@cs.stanford.edu {yunhaoge, itti}@usc.edu
10
+
11
+ {Cheng.Zhao, Yuliang.Guo2, Xinyu.Huang, Liu.Ren}@us.bosch.com
12
+
13
+ # Abstract
14
+
15
+ A major challenge in monocular 3D object detection is the limited diversity and quantity of objects in real datasets. While augmenting real scenes with virtual objects holds promise to improve both the diversity and quantity of the objects, it remains elusive due to the lack of an effective 3D object insertion method in complex real captured scenes. In this work, we study augmenting complex real indoor scenes with virtual objects for monocular 3D object detection. The main challenge is to automatically identify plausible physical properties for virtual assets (e.g., locations, appearances, sizes, etc.) in cluttered real scenes. To address this challenge, we propose a physically plausible indoor 3D object insertion approach to automatically copy virtual objects and paste them into real scenes. The resulting objects in scenes have 3D bounding boxes with plausible physical locations and appearances. In particular, our method first identifies physically feasible locations and poses for the inserted objects to prevent collisions with the existing room layout. Subsequently, it estimates spatially-varying illumination for the insertion location, enabling the immersive blending of the virtual objects into the original scene with plausible appearances and cast shadows. We show that our augmentation method significantly improves existing monocular 3D object models and achieves state-of-the-art performance. For the first time, we demonstrate that a physically plausible 3D object insertion, serving as a generative data augmentation technique, can lead to significant improvements for discriminative downstream tasks such as monocular 3D object detection. Project website: https://gyhandy.github.io/3D-Copy-Paste/.
16
+
17
+ # 1 Introduction
18
+
19
+ Monocular indoor 3D object detection methods have shown promising results in various applications such as robotics and augmented reality [Yang and Scherer, 2019, Chen et al., 2017]. However, the deployment of these methods is potentially constrained by the limited diversity and quantity of objects in existing real datasets. For example, in the SUN RGB-D dataset [Song et al., 2015], the bathtub category has fewer than 500 annotations, compared to chair, which has over 19,000 annotations. This may be due to the difficulty in acquiring and labeling substantial indoor scene datasets with diverse 3D object annotations [Silberman et al., 2012, Song et al., 2015, Dai et al., 2017].
20
+
21
+ Data augmentation techniques have been widely utilized in 2D detection and segmentation tasks to improve the diversity and quantity of the available training data [Dwibedi et al., 2017, Ge et al., 2022a, Ghiasi et al., 2021, Ge et al., 2022b, 2023]. However, it is non-trivial to scale 2D augmentation methods to 3D scenes due to physical constraints in real 3D scenes. In particular, technical challenges
22
+
23
+ ![](images/574c73e0078ea4b7eb8370a6463eb7d0e10b010668202dac3c6337684cf3b4b5.jpg)
24
+ Figure 1: Overall pipeline of physically plausible object insertion for monocular 3D object detection: Our approach copies external 3D objects (e.g., from Objaverse [Deitke et al., 2022]) and pastes them into indoor scene datasets (e.g., SUN RGB-D [Song et al., 2015]) in a physically plausible manner. The augmented indoor scene dataset, enriched with inserted 3D objects, is then used to train monocular 3D object detection models, resulting in significant performance improvements.
25
+
26
+ emerge especially in how to maintain physical plausibility for: (1) Collision and Occlusion Handling: In 3D data augmentation, handling collisions between objects is more challenging than in 2D data. Properly managing collisions is essential to prevent artifacts and ensure that objects appear as natural and coherent parts of the scene. (2) Illumination and Shading: For 3D data, augmenting objects requires careful consideration of the lighting conditions in the scene to create realistic shading and reflections. This involves estimating the spatially-varying illumination and adapting the appearance of the inserted objects to maintain visual coherence. (3) Geometric Consistency: In 3D data augmentation, maintaining geometric consistency is crucial to ensure that the augmented objects fit naturally within the scene. Unlike 2D augmentation, which deals with flat images, 3D augmentation must consider spatial relationships, object orientations, and their interaction with the surrounding environment.
27
+
28
+ In this paper, we explore a novel approach, 3D Copy-Paste, to achieve 3D data augmentation in indoor scenes. We employ physically plausible indoor 3D object insertion to automatically generate large-scale annotated 3D objects with both plausible physical location and illumination. Unlike outdoor scenarios, indoor environments present unique challenges: (1) complex spatial layouts, notably cluttered backgrounds and limited space for object placement, which require a meticulously crafted method for automated object positioning (ensuring realistic position, size, and pose), and (2) intricate lighting effects, such as soft shadows, inter-reflections, and long-range light source dependency, which necessitate sophisticated lighting considerations for harmonious object insertion.
29
+
30
+ Fig. 1 shows our overall pipeline. In our approach, we take advantage of existing large-scale 3D object datasets, from which we copy simulated 3D objects and paste them into real scenes. To address the challenges associated with creating physically plausible insertions, we employ a three-step process. First, we analyze the scene by identifying all suitable planes for 3D object insertion. Next, we estimate the object's pose and size, taking into account the insertion site to prevent collisions. Lastly, we estimate the spatially-varying illumination to render realistic shading and shadows for the inserted object, ensuring that it is seamlessly blended into the scene.
31
+
32
+ Our proposed method augment existing indoor scene datasets, such as SUN RGB-D [Song et al., 2015], by incorporating large-scale 3D object datasets like Objaverse [Deitke et al., 2022] using our 3D Copy-Paste approach. Our method is an offline augmentation method that creates a new augmented dataset. The monocular 3D object detection model, ImvoxelNet Rukhovich et al. [2022], trained on this augmented dataset, achieves new state-of-the-art performance on the challenging SUN RGB-D dataset. We systematically evaluate the influence of the inserted objects' physical position and illumination on the downstream performance of the final monocular 3D object detection model. Our results suggest that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performances in discriminative downstream tasks such as monocular 3D object detection.
33
+
34
+ We make three main contributions: (1) We introduce 3D Copy-Paste, a novel physically plausible indoor object insertion technique for automatically generating large-scale annotated 3D objects. This
35
+
36
+ approach ensures the plausibility of the objects' physical location, size, pose, and illumination within the scene. (2) We demonstrate that training a monocular 3D object detection model on a dataset augmented using our 3D Copy-Paste technique results in state-of-the-art performance. Our results show that a physically plausible 3D object insertion method can serve as an effective generative data augmentation technique, leading to significant improvements in discriminative downstream monocular 3D object detection tasks. (3) We conduct a systematic evaluation on the effect of location and illumination of the inserted objects on the performance of the downstream monocular 3D object detection model. This analysis provides valuable insights into the role of these factors in the overall effectiveness of our proposed approach.
37
+
38
+ # 2 Related Works
39
+
40
+ # 2.1 Monocular 3D Object Detection
41
+
42
+ Monocular 3D object detection estimates the 3D location, orientation, and dimensions (3D bounding box) of objects from a single 2D image. It has garnered significant attention in recent years due to its potential applications in autonomous driving, robotics, and augmented reality. There are many works on monocular 3D detection in driving scenarios, such as 3DOP [Chen et al., 2015], MLFusion [Xu and Chen, 2018], M3D-RPN [Brazil and Liu, 2019], MonoDIS [Simonelli et al., 2019], Pseudo-LiDAR [Wang et al., 2019], FCOS3D [Wang et al., 2021], SMOKE [Liu et al., 2020], RTM3D [Li et al., 2020a], PGD [Wang et al., 2022a], and CaDDN [Reading et al., 2021]. Among geometry-based approaches, MV3D [Chen et al., 2017] utilized both LiDAR-based point clouds and geometric cues from images for 3D object detection, and Mousavian et al. [2017] introduced a method that regresses object properties such as dimensions, orientation, and location from 2D bounding boxes using geometric constraints. In the context of indoor scenes, multi-task learning has gained traction. Recent studies, including PointFusion by Xu et al. [2018], have amalgamated 3D object detection with tasks like depth estimation or semantic segmentation to improve performance. Total3D [Nie et al., 2020] and Implicit3D [Zhang et al., 2021] use end-to-end solutions to jointly reconstruct room layout, object bounding boxes, and meshes from a single image. ImVoxelNet [Rukhovich et al., 2022] achieves state-of-the-art performance by using image-to-voxels projection for monocular 3D object detection.
43
+
44
+ # 2.2 3D Data Augmentation
45
+
46
+ Data augmentation in 3D has become increasingly vital for enhancing performance across various 3D perception tasks. Most work focuses on outdoor scenes [Zhang et al., 2020, Lian et al., 2022, Abu Alhaija et al., 2018, Chen et al., 2021, Tong et al., 2023]. Geometric Transformations: Wu et al. [2015] applied rotations, translations, and scaling to augment the ModelNet dataset, improving classification and retrieval tasks. Point Cloud Augmentation: Engelcke et al. [2017] proposed techniques such as random point removal, Gaussian noise, and point cloud interpolation for augmenting LiDAR datasets, enhancing object detection and segmentation performance. Generative Model-based Augmentation: Smith and Meger [2017] used a conditional GAN to generate diverse and realistic 3D objects. Similarly, Achlioptas et al. [2018] employed a VAE for learning a generative model of 3D shapes for shape completion and exploration tasks. However, while 3D generative models can achieve object-level augmentation, they are not scalable to scene-level augmentation. 2D generative models can produce highly realistic images, but they do not provide physically plausible 3D labels. 3D Common Corruptions [Kar et al., 2022] uses 3D information to generate real-world corruptions for 2D datasets, which can evaluate model robustness and be used as data augmentation for model training, but it does not support 3D detection because it does not introduce new 3D object content.
47
+
48
+ # 2.3 Illumination Estimation
49
+
50
+ Illumination estimation is a critical focus within computer vision research, given its crucial role in various applications. Li et al. [2020b] addressed the inverse rendering problem for complex indoor scenes, estimating spatially-varying lighting, SVBRDF, and shape from a single image. Meanwhile, a differentiable ray tracing method combined with deep learning was proposed for the learning-based inverse rendering of indoor scenes [Zhu et al., 2022]. Additionally, research has been conducted on using deep learning for indoor lighting estimation, with methods like Deep Parametric Indoor Lighting Estimation offering enhanced accuracy and efficiency Gardner et al. [2019]. Furthermore,
51
+
52
+ ![](images/9f3af0f0c97f2d171400bc750f3dc99ebdf7d343aa14dd101ba4b74febdfce88.jpg)
53
+ Figure 2: 3D Copy-Paste method overview: Our method (a) processes the input RGB image and depth data to reconstruct floor planes that can accommodate inserted objects. (b) Using the reconstructed planes and information about objects in the original scene, we estimate a physically plausible position, pose, and size for the inserted objects, ensuring they do not collide with existing objects. (c) We predict the spatially-varying lighting of the scene. (d) By registering the insertion position determined in (b) to the spatially-varying lighting, our light estimation module refines an HDR environment map to represent the lighting information for the inserted objects. (e) The insertion rendering module takes the position, pose, size, and lighting as input and inserts a 3D object into the real scene, adjusting the object's lighting and shadows accordingly to ensure it seamlessly integrates as a natural and coherent part of the scene.
54
+
55
+ Wang et al. [2022b] introduced Neural Light Field Estimation, a method that effectively models complex lighting conditions for virtual object insertion in street scenes. These studies underscore the potential of machine learning in improving illumination estimation capabilities in rendering and computer vision tasks.
56
+
57
+ # 3 Methods
58
+
59
+ This section presents our proposed physically plausible indoor 3D object insertion approach. Fig. 2 shows our 3D Copy-Paste method overview. Section 3.1 addresses the question of "where and how to place the object", detailing the process of estimating suitable insertion positions, poses, and sizes for the objects while avoiding collisions with existing objects. Section 3.2 explains "what illumination should we add to the object": estimate the scene's spatially-varying illumination and render the inserted objects with realistic lighting and shadows. Section 3.3 describes how we create an augmented dataset using the inserted objects and train monocular 3D object detection models.
60
+
61
+ # 3.1 Where and how: Physically Plausible Position, Pose, and Size Estimation
62
+
63
+ This section describes handling the first challenge of avoiding collisions during insertion by estimating physically plausible position, pose, and size parameters.
64
+
65
+ # 3.1.1 Ground Plane Selection
66
+
67
+ Given a scene and a 3D object to insert, the initial question is where to place the object. To accommodate a new object, we must identify and understand the available regions where the object can be situated. We perform plane reconstruction to comprehend the scene's layout and subsequently, we estimate physically plausible key parameters such as position, size, and pose. Fig. 2(a) presents an overview of our plane reconstruction and selection module, which takes an RGB image and depth data as input and predicts all potential planes, then narrows down to the ground plane.
68
+
69
+ To get a rough plane reconstruction, we follow the plane extraction method using Agglomerative Hierarchical Clustering (AHC) described in Feng et al. [2014]. There are three main steps: (1)
70
+
71
+ we construct a graph with nodes and edges representing groups of points, obtained by dividing the point cloud (merging RGB with depth) into non-overlapping groups. (2) We then perform AHC on the organized graph to identify potential planes by merging nodes that belong to the same plane, continuing until the mean squared error of plane fitting surpasses a threshold. (3) We use a pixelwise region-growing method to refine the detected planes. To further refine the extracted planes while preserving clear face textures and sharp features without losing geometric details, we utilize a back-end indoor plane optimization and reconstruction method described in Wang and Guo [2018]. Specifically, we first partition the entire dense mesh into different planar clusters based on the planes extracted with AHC, treating them as plane primitives. We then create a texture patch for each plane and sample points on it, followed by executing a global optimization process to maximize the photometric consistency of sampled points across frames by optimizing camera poses, plane parameters, and texture colors. Further, we optimize the mesh geometry by maximizing consistency between geometry and plane primitives, further preserving the original scene's sharp features, such as edges and corners of plane intersections. Finally, we get the reconstructed plane with the geometry parameters (e.g., surface normal).
72
+
73
+ To select a proper plane for insertion, we first identify all horizontal planes based on surface direction and the standard deviation along the Z-axis. Specifically, there are two constraints for considering a plane as horizontal: (1) The plane must have a surface normal aligned with the positive direction of the Z-axis (opposite of the gravity vector), and (2) the standard deviation along the Z-axis should be smaller than a predefined threshold. In our scenario, we aim to insert furniture into the scene, such as the ten interest classes in the SUN RGB-D dataset [Song et al., 2015]: sofa, bed, chair, desk, table, nightstand, dresser, bookshelf, toilet, and bathtub. Consequently, we must identify the floor plane by selecting the horizontal plane with the lowest average Z value among all detected horizontal planes.
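+
+ A minimal sketch of this horizontal-plane filtering and floor selection, assuming each reconstructed plane comes with its 3D points and a fitted unit normal (the thresholds are illustrative):
+
+ ```python
+ import numpy as np
+
+ def select_floor_plane(planes, cos_thresh=0.95, z_std_thresh=0.05):
+     """planes: list of dicts with 'points' (N, 3) and unit 'normal' (3,).
+     Returns the horizontal plane with the lowest mean Z, i.e., the floor."""
+     up = np.array([0.0, 0.0, 1.0])  # opposite of the gravity vector
+     horizontal = []
+     for plane in planes:
+         aligned = np.dot(plane["normal"], up) > cos_thresh      # normal points up
+         flat = plane["points"][:, 2].std() < z_std_thresh        # small spread along Z
+         if aligned and flat:
+             horizontal.append(plane)
+     if not horizontal:
+         return None
+     return min(horizontal, key=lambda p: p["points"][:, 2].mean())
+ ```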
74
+
75
+ # 3.1.2 Constrained Insertion Parameter Search
76
+
77
+ To address the question of where and how to place the object, we estimate specific insertion parameters: position $(p)$ , size $(s)$ , and pose $(o)$ . We propose an efficient constrained insertion parameter searching algorithm to calculate plausible insertion parameters while avoiding collisions with existing objects in the scene (Algorithm 1). Given the reconstructed floor plane, we first determine the search space for each parameter. For position, we want the inserted object to touch the floor, so we find the 3D bounding box of the object and calculate the center of the bottom surface $(p)$ as the optimization parameter of position. To prevent potential collisions between the inserted object and existing assets in the original scene, we search for a suitable position around the center of the reconstructed floor. As shown in Fig. 2(b), we first calculate the floor's center $c \gets (c_x, c_y, c_z)$ , and set a search square, which uses twice the floor's standard deviation along the X axis, $\sigma_x$ , and the Y axis, $\sigma_y$ , as square width and length. The insertion position is sampled from a uniform distribution inside the search square $p_x \sim \mathcal{U}[c_x - \sigma_x, c_x + \sigma_x]$ and $p_y \sim \mathcal{U}[c_y - \sigma_y, c_y + \sigma_y]$ , $p \gets (p_x, p_y, c_z)$ . For size $(s)$ , we use the height of the 3D bounding box of the object as the optimization parameter. For each object category, we first calculate the mean $m_h$ and standard deviation $\sigma_h$ of the heights of the objects belonging to the same category in the original scene dataset. We then assume the height follows a Normal distribution and sample a height from this Normal distribution: $s \sim \mathcal{N}(m_h, \sigma_h)$ . For the pose $(o)$ , we only allow the object to rotate along the Z-axis to maintain its stability. The optimization parameter is the rotation angle along the Z-axis, which follows a uniform distribution: $o \sim \mathcal{U}[-\pi, \pi]$ .
78
+
79
+ Algorithm 1 details the Constrained Insertion Parameter Search algorithm. We first set a search budget: $k$ search iterations. For each iteration, we randomly sample each parameter (position, size, and pose) from their corresponding search spaces and calculate the inserted object's bounding box based on the sampled parameters. We then check for collisions with existing objects and quantitatively evaluate the degree of collisions. A direct approach for collision checking is to convert the inserted object into a point cloud and then calculate the overlap with existing objects' point clouds. However, this method is time-consuming due to the large number of points involved. We simplify the problem by converting the original 3D collision into a 2D collision to speed up the collision check. Since the inserted objects are on the floor, if two objects collide, their 3D bounding box projections on the top view would also often collide (but not always, e.g., when an object may be placed under a table; we here ignore these candidate placements). In other words, we disregard the absolute value of the 3D volume and use the 2D collision projection as a relative collision score. Utilizing an efficient collision check allows us to set a relatively large search iteration number, such as $k = 1000$ , while still maintaining a limited search time (less than 0.5 seconds). We also consider a resize factor $r$
80
+
81
+ Algorithm 1: Constrained Insertion Parameter Search
82
+ Input: An RGBD image of the scene, a reconstructed floor, a 3D object belonging to the class of interest, $j$
83
+ Output: Position ($\hat{p}$ : 3D bounding box bottom center), size ($s$ : 3D bounding box (bbox) height), and pose ($o$ : orientation along the Z-axis)
84
+ 1 Compute position search constraints: floor center $c\gets (c_x,c_y,c_z)$ , standard deviations $\sigma_{x}$ and $\sigma_{y}$
85
+ 2 Initialize search parameters: $k\gets 1000$ , degree of collision $\hat{l}\gets \infty$
+ 3 for $i\in \{1,2,\dots ,k\}$ do
+ 4 Sample position: $p_x\sim \mathcal{U}[c_x - \sigma_x,c_x + \sigma_x]$ and $p_y\sim \mathcal{U}[c_y - \sigma_y,c_y + \sigma_y]$ , $p\gets (p_x,p_y,c_z)$
+ 5 Sample size: $s\sim \mathcal{N}(m_h,\sigma_h)$ , resize factor $r'\sim \mathcal{U}[1,r]$ , $s\gets s / r'$ , where $m_h$ and $\sigma_h$ are the mean and standard deviation of object height for class $j$ in the raw dataset
+ 6 Sample pose: $o\sim \mathcal{U}[-\pi ,\pi ]$
+ 7 Calculate the 3D bbox $x_{3\mathrm{D}}$ based on the sampled insertion parameters $(p, s, o)$
+ 8 Project the 3D bbox to a 2D bbox $x_{2\mathrm{D}}$ in the top view
+ 9 Calculate the collision score $l = F(x_{2\mathrm{D}})$ with existing objects in the scene
+ 10 if $l = 0$ then return $p, s, o$
+ 11 if $l < \hat{l}$ then $\hat{p}\gets p$ , $\hat{s}\gets s$ , $\hat{o}\gets o$ , $\hat{l}\gets l$
86
+
87
+ to shrink the size of the inserted object to handle inserting a large object in a small empty floor scenario. During the search, we terminate the process if we find an insertion with a collision score of 0; otherwise, we continue to track the best insertion with the lowest collision score and return it after completing $k$ search iterations.
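+
+ Below is a simplified sketch of this search in code; it uses axis-aligned top-view footprints as the collision proxy and ignores the sampled rotation in the overlap test, so it illustrates the idea rather than the exact implementation (helper names, the box format, and the isotropic rescaling from height are assumptions):
+
+ ```python
+ import numpy as np
+
+ def overlap_area(a, b):
+     """Axis-aligned top-view boxes (xmin, ymin, xmax, ymax); returns overlap area."""
+     w = min(a[2], b[2]) - max(a[0], b[0])
+     h = min(a[3], b[3]) - max(a[1], b[1])
+     return max(w, 0.0) * max(h, 0.0)
+
+ def constrained_insertion_search(floor_c, floor_std, obj_wlh, m_h, s_h, existing_boxes,
+                                  k=1000, r_max=2.0, rng=np.random.default_rng(0)):
+     cx, cy, cz = floor_c
+     best, best_score = None, np.inf
+     for _ in range(k):
+         px = rng.uniform(cx - floor_std[0], cx + floor_std[0])
+         py = rng.uniform(cy - floor_std[1], cy + floor_std[1])
+         size = rng.normal(m_h, s_h) / rng.uniform(1.0, r_max)   # height, shrunk by resize factor
+         pose = rng.uniform(-np.pi, np.pi)                       # sampled but unused in this proxy
+         scale = size / obj_wlh[2]                               # isotropic scale from height
+         w, l = obj_wlh[0] * scale, obj_wlh[1] * scale
+         box = (px - w / 2, py - l / 2, px + w / 2, py + l / 2)  # top-view footprint
+         score = sum(overlap_area(box, b) for b in existing_boxes)
+         if score == 0:
+             return (px, py, cz), size, pose
+         if score < best_score:
+             best, best_score = ((px, py, cz), size, pose), score
+     return best
+ ```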
88
+
89
+ # 3.2 What Illumination is on the object
90
+
91
+ # 3.2.1 Spatial-varying Illumination Estimation and Retrieval
92
+
93
+ To answer the question of what kind of illumination should be cast on the object, we first need to estimate the spatially-varying illumination of the scene. This process involves encapsulating intricate global interactions at each spatial location. To achieve this, we utilize the deep inverse rendering framework proposed by Li et al. [2020b]. Initially, we estimate intermediate geometric features such as albedo, normal, depth, and roughness. Subsequently, a LightNet structure, consisting of an encoder-decoder setup, ingests the raw image and the predicted intermediate features. This, in turn, enables the estimation of spatially-varying lighting across the scene.
94
+
95
+ As depicted in Fig. 2(c), the estimated spatially-varying illumination is represented as environment maps. Specifically, each $4 \times 4$ pixel region in the raw image is associated with an environment map, which captures the appearance of the surrounding environment and is used for reflection, refraction, or global illumination. These maps are spherical (equirectangular), representing the environment on a single 2D texture. The X-axis corresponds to longitude, and the Y-axis corresponds to latitude. Each point on the texture corresponds to a specific latitude and longitude on a sphere.
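+
+ For reference, the equirectangular convention described here can be written as the following standard direction-to-texel mapping (a generic formulation, not code from the paper):
+
+ ```python
+ import numpy as np
+
+ def direction_to_equirect(d, width, height):
+     """Map a unit 3D direction to (x, y) pixel coordinates on an equirectangular map."""
+     d = np.asarray(d, dtype=float)
+     d = d / np.linalg.norm(d)
+     lon = np.arctan2(d[1], d[0])          # longitude in [-pi, pi]
+     lat = np.arcsin(d[2])                 # latitude in [-pi/2, pi/2]
+     x = (lon + np.pi) / (2 * np.pi) * (width - 1)
+     y = (np.pi / 2 - lat) / np.pi * (height - 1)
+     return x, y
+ ```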
96
+
97
+ To obtain the environment map associated with the position of the inserted object, we register and retrieve the corresponding environment map based on the estimated position after performing the constrained insertion parameter search.
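+
+ The retrieval then amounts to indexing the per-region lighting grid with the pixel onto which the insertion point projects; a minimal sketch, assuming one environment map is stored per $4 \times 4$ image patch and a pinhole camera model (the array shapes are assumptions):
+
+ ```python
+ import numpy as np
+
+ def retrieve_env_map(env_grid, insert_xyz, K):
+     """env_grid: (H//4, W//4, h_env, w_env, 3) spatially-varying lighting.
+     insert_xyz: insertion point in camera coordinates. K: 3x3 intrinsics."""
+     uvw = K @ np.asarray(insert_xyz, dtype=float)
+     u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]          # project to pixel coordinates
+     gy = int(np.clip(v // 4, 0, env_grid.shape[0] - 1))
+     gx = int(np.clip(u // 4, 0, env_grid.shape[1] - 1))
+     return env_grid[gy, gx]                          # environment map at that location
+ ```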
98
+
99
+ # 3.2.2 Environment Map Refinement
100
+
101
+ Coordinate transformation. The environment map, estimated for the inserted object, is based on the local coordinates of the insertion position. In particular, it establishes a coordinate system where the surface normal is designated as the Z-axis. In order to apply this map for relighting the inserted object using a rendering method (such as Blender), it becomes necessary to transform the environment map to align with Blender's coordinate system.
102
+
103
+ Latitude completion. The estimated environment map only contains latitudes in the range $(0, \pi/2)$ because the inverse rendering method cannot estimate the illumination beneath the surface. As shown in Fig. 2(d), we complete the entire environment map by filling in artificial values in the second half.
104
+
105
+ Table 1: Statistics of external 3D objects from Objaverse [Deitke et al., 2022].
106
+
107
+ <table><tr><td>Category</td><td>Bed</td><td>Table</td><td>Sofa</td><td>Chair</td><td>Desk</td><td>Dresser</td><td>Nightstand</td><td>Bookshelf</td><td>Toilet</td><td>Bathtub</td></tr><tr><td>Number</td><td>190</td><td>854</td><td>361</td><td>934</td><td>317</td><td>52</td><td>13</td><td>99</td><td>142</td><td>24</td></tr></table>
108
+
109
+ Intensity refinement. The estimated environment map is in Low Dynamic Range (LDR) format, lacking High Dynamic Range (HDR) details and high contrast. If we use the predicted value directly, the rendered shadow appears relatively fuzzy. We refine the value by adjusting the scale in log space to estimate the HDR value: $I_{\mathrm{HDR}} = I_{\mathrm{LDR}}^{\gamma}$ , where $\gamma$ is a hyperparameter.
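+
+ A small sketch of the latter two refinement steps, latitude completion and intensity refinement; how the lower hemisphere is filled (here, the horizon row is simply repeated) and the value of $\gamma$ are assumptions:
+
+ ```python
+ import numpy as np
+
+ def refine_env_map(env_ldr_top, gamma=2.4):
+     """env_ldr_top: (H, W, 3) LDR map covering latitudes (0, pi/2), values in [0, 1].
+     Returns a full-sphere HDR map of shape (2H, W, 3)."""
+     # Latitude completion: fill the unobserved lower hemisphere with artificial values.
+     bottom = np.repeat(env_ldr_top[-1:, :, :], env_ldr_top.shape[0], axis=0)
+     env_full = np.concatenate([env_ldr_top, bottom], axis=0)
+     # Intensity refinement: I_HDR = I_LDR ** gamma (a scale adjustment in log space).
+     return np.power(env_full, gamma)
+ ```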
110
+
111
+ Finally, we input the HDR environment map after transformation and refinement, along with the position, size, and pose, into an insertion renderer (e.g., Blender). This allows us to obtain the inserted image with 3D bounding boxes serving as ground truth.
112
+
113
+ # 3.3 Dataset Augmentation with Insertion and Downstream Model Training
114
+
115
+ Given an indoor scene dataset and a set of interest classes $\mathcal{C}$ for potential insertion, we can identify external 3D objects set $\mathcal{E}$ that fall within these classes of interest. Before any insertion, we calculate the statistical parameters for each class of interest that we aim to augment. For every class $j\in \mathcal{C}$ , we assume the size parameter (for instance, the height) fits a Gaussian distribution. We then calculate the mean and standard deviation of this size parameter to guide the insertion of external objects. Here are the detailed steps for insertion: For each scene within the indoor scene dataset, we randomly select a category $j$ from the class of interest set $\mathcal{C}$ . Next, we randomly choose an instance from the external 3D objects set $\mathcal{E}$ that belongs to the selected class $j$ . We then utilize our physically plausible insertion method (Algorithm 1) to integrate this external 3D object into the scene. We could train any downstream monocular 3D object detection model with the augmented dataset because we automatically obtain the 3D annotations of the inserted objects.
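+ A sketch of this augmentation loop is given below; `insert_fn` stands in for the physically plausible insertion (Algorithm 1 plus rendering), and all helper names and data-structure fields are hypothetical.
+
+ ```python
+ import random
+ import numpy as np
+
+ def augment_dataset(scenes, external_objects, interest_classes, size_stats, insert_fn):
+     """For each scene, insert one external object of a randomly chosen interest class.
+     size_stats[j] = (mean, std) of the size parameter (e.g. height) of class j,
+     estimated from the original annotations before any insertion."""
+     augmented = []
+     for scene in scenes:
+         j = random.choice(sorted(interest_classes))          # random class of interest
+         obj = random.choice(external_objects[j])              # random external 3D asset of class j
+         mu, sigma = size_stats[j]
+         size = float(np.random.normal(mu, sigma))             # plausible size from the Gaussian fit
+         image, box = insert_fn(scene, obj, size)               # plausible position/pose/light + rendering
+         augmented.append({"image": image,
+                           "boxes": scene["boxes"] + [box]})    # inserted box is free 3D ground truth
+     return augmented
+ ```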
116
+
117
+ # 4 Experiments
118
+
119
+ This section presents experiments to assess the effectiveness of our proposed physically-plausible 3D object insertion method and evaluate how different insertion parameters affect the final performance of monocular 3D object detection.
120
+
121
+ # 4.1 Dataset and Model Setting
122
+
123
+ Indoor scene dataset. We utilize the SUN RGB-D dataset [Song et al., 2015] as our primary resource for indoor scenes; it is one of the most challenging benchmarks in indoor scene understanding. SUN RGB-D comprises 10,335 RGB-D images captured with four distinct sensors and is divided into 5,285 training scenes and 5,050 test scenes. Furthermore, it includes 146,617 2D polygons and 58,657 3D bounding boxes, providing a comprehensive dataset for our research.
124
+
125
+ We also use the ScanNet dataset [Dai et al., 2017]. ScanNet v2 is a large-scale RGB-D video dataset containing 1,201 videos/scenes in the training set and 312 scenes in the validation set. To adapt it for monocular 3D object detection, we utilize one RGB-D image per video, amounting to 1,201 RGB-D images for training and 312 for validation. We compute the ground-truth 3D bounding box label for each of our used views from the provided scene-level labels, as some objects in the scene may not be visible from our monocular viewpoint.
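+ One way to derive such per-view labels (our own sketch; the paper does not spell out this step) is to project each scene-level box into the chosen view and keep boxes that are sufficiently visible; the visibility threshold below is illustrative.
+
+ ```python
+ import numpy as np
+
+ def visible_boxes(corners_world, world_to_cam, K, img_hw, min_visible=4):
+     """Keep scene-level boxes whose 3D corners project into the image.
+     corners_world: list of (8, 3) arrays, one per annotated box.
+     world_to_cam: 4x4 extrinsics, K: 3x3 intrinsics, img_hw: (height, width)."""
+     H, W = img_hw
+     keep = []
+     for i, corners in enumerate(corners_world):
+         homo = np.concatenate([corners, np.ones((8, 1))], axis=1)      # (8, 4) homogeneous corners
+         cam = (world_to_cam @ homo.T).T[:, :3]                          # corners in the camera frame
+         in_front = cam[:, 2] > 0.1                                      # ignore corners behind the camera
+         uv = (K @ cam.T).T
+         uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
+         in_image = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
+         if np.sum(in_front & in_image) >= min_visible:                  # heuristic visibility test
+             keep.append(i)
+     return keep
+ ```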
126
+
127
+ External 3D object assets. The quality of 3D objects is crucial for effective insertion. Hence, we use Objaverse [Deitke et al., 2022], a robust dataset with over 800,000 annotated 3D objects. Using word parsing, we extract objects that align with the classes of interest for monocular 3D object detection within SUN RGB-D. Table 1 shows the selected Objaverse data for each SUN RGB-D class.
128
+
129
+ Monocular 3D object detection model. We focus on the challenging task of monocular 3D object detection that relies solely on a single RGB image as input. We employ ImVoxelNet, which achieves state-of-the-art performance on the raw SUN RGB-D dataset using only a single RGB image as input. Other existing methods either resort to using additional modalities and multiple datasets for extra supervision or exhibit underwhelming performance. For the purpose of monocular 3D object
130
+
131
+ Table 2: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods. When inserting randomly, the accuracy of the downstream object detector drops, i.e., the detector suffers from random insertions (which may introduce collisions, occlusions, incorrect lighting, etc.). In contrast, applying only physically plausible position, size, and pose significantly improves performance $(41.80\%)$. Further, when plausible lighting and shadows are added, our 3D Copy-Paste improves the downstream detector to a new state-of-the-art accuracy $(43.79\%)$. We use mAP $(\%)$ with a 0.25 IoU threshold.
132
+
133
+ <table><tr><td>Setting</td><td>Insertion Position, Pose, Size</td><td>Insertion Illumination</td><td>mAP@0.25</td></tr><tr><td>ImVoxelNet</td><td>N/A</td><td>N/A</td><td>40.96</td></tr><tr><td>ImVoxelNet + random insert</td><td>Random</td><td>Camera point light</td><td>37.02</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste (w/o light)</td><td>Plausible position, size, pose</td><td>Camera point light</td><td>41.80</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>Plausible position, size, pose</td><td>Plausible dynamic light</td><td>43.79</td></tr></table>
134
+
135
+ Table 3: Per class average precision (AP) of ImVoxelNet 3D monocular object detection performance on SUN RGB-D dataset.
136
+
137
+ <table><tr><td>Setting</td><td>mAP@0.25</td><td>bed</td><td>chair</td><td>sofa</td><td>table</td><td>bkshf</td><td>desk</td><td>bathtub</td><td>toilet</td><td>dresser</td><td>nightstand</td></tr><tr><td>ImVoxelNet</td><td>40.96</td><td>72.0</td><td>55.6</td><td>53.0</td><td>41.1</td><td>7.6</td><td>21.5</td><td>29.6</td><td>76.7</td><td>19.0</td><td>33.4</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>43.79</td><td>72.6</td><td>57.1</td><td>55.1</td><td>41.8</td><td>7.1</td><td>24.1</td><td>40.2</td><td>80.7</td><td>22.3</td><td>36.9</td></tr></table>
138
+
139
+ detection, we train the same ImVoxelNet model on the original SUN RGB-D dataset and its various versions, each augmented via different insertion methods. All mAP results are mAP@0.25.
140
+
141
+ # 4.2 Physically-plausible position, pose, size, and illumination leads to better monocular detection performance
142
+
143
+ Our 3D Copy-Paste focuses on solving two challenges: (1) Where and how to put the object: we estimate the object's position, orientation, and size for insertion while ensuring no collisions. (2) What illumination is on the object: we estimate the spatially-varying illumination and apply realistic lighting and shadows to the object rendering. The following experiments evaluate the model performance.
144
+
145
+ Table 2 presents the results of monocular 3D object detection on the SUN RGB-D dataset, utilizing various object insertion augmentation techniques. The first row is the performance of ImVoxelNet trained on the raw SUN RGB-D dataset without any insertion. The "ImVoxelNet + random insert" row displays results achieved through a naive 3D object insertion without physically plausible constraints (random location and camera point light). This approach led to a drop in accuracy from $40.96\%$ to $37.02\%$, likely because the lack of physical plausibility causes severe collisions and occlusions in the final image. The "ImVoxelNet + 3D Copy-Paste (w/o light)" row showcases the performance after applying our method for estimating only the physically plausible insertion position, pose, and size. Despite using a rudimentary camera point light, this approach outperforms "ImVoxelNet" without any insertion and also outperforms the naive "ImVoxelNet + random insert" by $+4.78\%$. This result shows that plausible geometry is essential for downstream tasks and makes 3D data augmentation useful over naive, random augmentation. After further applying physically plausible dynamic light, our proposed "ImVoxelNet + 3D Copy-Paste" improves the performance further and achieves a new state of the art, surpassing ImVoxelNet without insertion by $+2.83\%$ on the monocular 3D object detection task. This improvement suggests that our 3D Copy-Paste insertion can serve as an efficient data augmentation method that benefits downstream 3D object detection tasks. Table 3 shows detailed SUN RGB-D monocular 3D object detection results with ImVoxelNet on each individual object category.
146
+
147
+ Table 4 presents the results of monocular 3D object detection on the ScanNet dataset. We utilized one RGB-D image per video: 1,201 for training and 312 for validation. We compute the ground-truth 3D bounding box label for each of our used views from the provided scene-level labels. For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, 8 of the 18 ScanNet classes (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) overlap with our collected Objaverse data. We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet model. All training parameters are the same as for training on the SUN RGB-D dataset. We report results as the average accuracy over
148
+
149
+ Table 4: ImVoxelNet 3D monocular object detection performance on the ScanNet dataset with different object insertion methods.
150
+
151
+ <table><tr><td>Setting</td><td>mAP@0.25</td><td>bed</td><td>chair</td><td>sofa</td><td>table</td><td>bkshf</td><td>desk</td><td>bathtub</td><td>toilet</td></tr><tr><td>ImVoxelNet</td><td>14.1</td><td>25.7</td><td>7.9</td><td>13.2</td><td>7.8</td><td>4.2</td><td>20.5</td><td>22.1</td><td>11.5</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>16.9</td><td>27.7</td><td>12.7</td><td>10.0</td><td>10.8</td><td>9.2</td><td>26.2</td><td>29.2</td><td>9.0</td></tr></table>
152
+
153
+ Table 5: ImVoxelNet 3D monocular object detection performance on SUN RGB-D dataset with different illumination during insertion rendering. All experiments use the same ImVoxelNet model, insertion also uses our proposed physically plausible position, size, and pose.
154
+
155
+ <table><tr><td>Setting</td><td>Light source type</td><td>Intensity</td><td>Direction</td><td>With shadow?</td><td>mAP@0.25</td></tr><tr><td>Point Light 1</td><td>Point</td><td>100W</td><td>Camera position</td><td>Yes</td><td>41.80</td></tr><tr><td>Point Light 2</td><td>Point</td><td>100W</td><td>Side (left)</td><td>Yes</td><td>42.38</td></tr><tr><td>Area Light 1</td><td>Area</td><td>100W</td><td>Camera position</td><td>Yes</td><td>42.67</td></tr><tr><td>Area Light 2</td><td>Area</td><td>100W</td><td>Side (left)</td><td>Yes</td><td>42.02</td></tr><tr><td>Spot Light 1</td><td>Spot</td><td>100W</td><td>Camera position</td><td>Yes</td><td>40.92</td></tr><tr><td>Spot Light 2</td><td>Spot</td><td>100W</td><td>Side (left)</td><td>Yes</td><td>42.10</td></tr><tr><td>Sun Light 1</td><td>Sun</td><td>5</td><td>Camera position</td><td>Yes</td><td>42.11</td></tr><tr><td>Sun Light 2</td><td>Sun</td><td>5</td><td>Side (left)</td><td>Yes</td><td>41.21</td></tr><tr><td>Ours (Dynamic Light)</td><td>Estimated Plausible light</td><td>Dynamic</td><td>Dynamic</td><td>No</td><td>41.83</td></tr><tr><td>Ours (Dynamic Light)</td><td>Estimated Plausible light</td><td>Dynamic</td><td>Dynamic</td><td>Yes</td><td>43.79</td></tr></table>
156
+
157
+ the 8 overlapping classes (mAP@0.25) in Table 4. Our 3D Copy-Paste improves ImVoxelNet by $2.8\%$ mAP.
158
+
159
+ # 4.3 Ablation study on the influence of insertion illumination and position on monocular 3D object detection
160
+
161
+ We first explore the influence of illumination of inserted objects on downstream monocular 3D object detection tasks. Table 5 shows the ImVoxelNet performance on SUN RGB-D with different illumination settings during 3D Copy-Paste. To eliminate the influence of other insertion parameters, we fix the estimated position, pose, and size for each scene among all experiments in Table 5.
162
+
163
+ Fig. 3 provides a visualization of the effects of various light sources and light parameters during the insertion rendering process. The corresponding monocular 3D object detection results are presented in Table 5. These illustrate how lighting not only impacts the visual perception of the inserted object from a human observer's standpoint but also considerably affects the performance of downstream detection tasks. Thus, an accurate and physically plausible lighting estimation is crucial for both understanding the scene and for the practical application of downstream detection tasks.
164
+
165
+ Table 2 shows the importance of physical position, pose, and size (local context) for monocular 3D object detection. We also explored the importance of global context for detection performance. Global context here means the semantic relationship of the inserted object to the whole scene; for instance, inserting a toilet into a living room may not satisfy the global context. We propose a plausible global-context insertion method in which the inserted object class considers the global scene information. We could also select the inserted class based on the floor size: insert larger objects (e.g., bed, bookshelf) only on large floors. Table 6 shows results under different settings. We find that considering the global context during insertion performs on par with random category selection, suggesting that the downstream detection model may not be sensitive to it.
166
+
167
+ # 4.4 Qualitative Analysis
168
+
169
+ Fig. 4 shows the qualitative results of monocular 3D object detection on SUN RGB-D dataset. Our method demonstrates enhanced capabilities in detecting objects with significant occlusion, provides improved pose estimation, and effectively suppresses false positives.
170
+
171
+ ![](images/29bbf7bd82805e0e832dfa3199d79ce5e2640804b05761967c201e4a425f3df1.jpg)
172
+ Figure 3: Visualization of different illumination on inserted objects.
173
+
174
+ Table 6: Ablation study of global context influence on ImVoxelNet monocular 3D object detection performance on SUN RGB-D.
175
+
176
+ <table><tr><td>Method</td><td>Follow global context?</td><td>Select class based on empty size?</td><td>mAP@0.25</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>Yes</td><td>No</td><td>43.75</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>Yes</td><td>Yes</td><td>43.74</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>No</td><td>Yes</td><td>42.50</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>No</td><td>No</td><td>43.79</td></tr></table>
177
+
178
+ ![](images/283939afcdcc51354a98b9c15690997c9c287d89efac285e01b8ad39405d0412.jpg)
179
+ Figure 4: Qualitative results on the SUN RGB-D dataset.
180
+
181
+ # 5 Conclusion and Discussion
182
+
183
+ Our work addresses the challenge of scarce large-scale annotated datasets for monocular 3D object detection by proposing a physically plausible indoor 3D object insertion approach. This technique allows us to effectively augment existing indoor scene datasets, such as SUN RGB-D, with large-scale annotated 3D objects that have both plausible physical location and illumination. The resulting augmented dataset enables training a monocular 3D object detection model that achieves new state-of-the-art performance. Our approach carefully considers physically feasible locations, sizes, and poses for inserted objects, avoiding collisions with the existing room layout, and estimates spatially-varying illumination to seamlessly integrate the objects into the original scene. We also systematically evaluate the impact of the physical position and illumination of the inserted objects on the performance of the final monocular 3D object detection model. This paper is the first to demonstrate that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. Our findings highlight the potential of 3D data augmentation in improving the performance of 3D perception tasks, opening up new avenues for research and practical applications.
184
+
185
+ Acknowledgments. This work is in part supported by Bosch, Ford, ONR MURI N00014-22-1-2740, NSF CCRI #2120095, Amazon ML Ph.D. Fellowship, National Science Foundation (award 2318101), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA) and the Army Research Office (W911NF2020053). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.
186
+
187
+ # References
188
+
189
+ H. Abu Alhaija, S. K. Mustikovela, L. Mescheder, A. Geiger, and C. Rother. Augmented reality meets computer vision: Efficient data generation for urban driving scenes. International Journal of Computer Vision, 126:961-972, 2018.
190
+ P. Achlioptas, O. Diamanti, I. Mitlagkas, and L. Guibas. Learning representations and generative models for 3d point clouds. In International conference on machine learning, pages 40-49. PMLR, 2018.
191
+ G. Brazil and X. Liu. M3d-rpn: Monocular 3d region proposal network for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9287-9296, 2019.
192
+ X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3d object proposals for accurate object class detection. Advances in neural information processing systems, 28, 2015.
193
+ X. Chen, H. Ma, J. Wan, B. Li, and T. Xia. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1907-1915, 2017.
194
+ Y. Chen, F. Rong, S. Duggal, S. Wang, X. Yan, S. Manivasagam, S. Xue, E. Yumer, and R. Urtasun. Geosim: Realistic video simulation via geometry-aware composition for self-driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7230-7240, 2021.
195
+ A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017.
196
+ M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects. arXiv preprint arXiv:2212.08051, 2022.
197
+ D. Dwibedi, I. Misra, and M. Hebert. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE international conference on computer vision, pages 1301-1310, 2017.
198
+ M. Engelcke, D. Rao, D. Z. Wang, C. H. Tong, and I. Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1355-1361. IEEE, 2017.
199
+ C. Feng, Y. Taguchi, and V. R. Kamat. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 6218-6225. IEEE, 2014.
200
+ M.-A. Gardner, Y. Hold-Geoffroy, K. Sunkavalli, C. Gagné, and J.-F. Lalonde. Deep parametric indoor lighting estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7175–7183, 2019.
201
+ Y. Ge, H. Behl, J. Xu, S. Gunasekar, N. Joshi, Y. Song, X. Wang, L. Itti, and V. Vineet. Neural-sim: Learning to generate training data with nerf. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIII, pages 477-493. Springer, 2022a.
202
+
203
+ Y. Ge, J. Xu, B. N. Zhao, L. Itti, and V. Vineet. Dall-e for detection: Language-driven context image synthesis for object detection. arXiv preprint arXiv:2206.09592, 2022b.
204
+ Y. Ge, J. Xu, B. N. Zhao, N. Joshi, L. Itti, and V. Vineet. Beyond generation: Harnessing text to image models for object detection and segmentation. arXiv preprint arXiv:2309.05956, 2023.
205
+ G. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T.-Y. Lin, E. D. Cubuk, Q. V. Le, and B. Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2918-2928, 2021.
206
+ O. F. Kar, T. Yeo, A. Atanov, and A. Zamir. 3d common corruptions and data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18963-18974, 2022.
207
+ P. Li, H. Zhao, P. Liu, and F. Cao. Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving. In European Conference on Computer Vision, pages 644-660. Springer, 2020a.
208
+ Z. Li, M. Shafiei, R. Ramamoorthi, K. Sunkavalli, and M. Chandraker. Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and svbrdf from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2475-2484, 2020b.
209
+ Q. Lian, B. Ye, R. Xu, W. Yao, and T. Zhang. Exploring geometric consistency for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1685-1694, 2022.
210
+ Z. Liu, Z. Wu, and R. Tóth. Smoke: Single-stage monocular 3d object detection via keypoint estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 996-997, 2020.
211
+ A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka. 3d bounding box estimation using deep learning and geometry. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7074-7082, 2017.
212
+ Y. Nie, X. Han, S. Guo, Y. Zheng, J. Chang, and J. J. Zhang. Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 55-64, 2020.
213
+ C. Reading, A. Harakeh, J. Chae, and S. L. Waslander. Categorical depth distribution network for monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8555-8564, 2021.
214
+ D. Rukhovich, A. Vorontsova, and A. Konushin. Imvoxelnet: Image to voxels projection for monocular and multi-view general-purpose 3d object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2397-2406, 2022.
215
+ N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd images. ECCV(5), 7576:746-760, 2012.
216
+ A. Simonelli, S. R. Bulo, L. Porzi, M. López-Antequera, and P. Kontschieder. Disentangling monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1991-1999, 2019.
217
+ E. J. Smith and D. Meger. Improved adversarial systems for 3d object generation and reconstruction. In Conference on Robot Learning, pages 87-96. PMLR, 2017.
218
+ S. Song, S. P. Lichtenberg, and J. Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 567-576, 2015.
219
+ X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum, and W. T. Freeman. Pix3d: Dataset and methods for single-image 3d shape modeling. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2974-2983, 2018.
220
+
221
+ W. Tong, J. Xie, T. Li, H. Deng, X. Geng, R. Zhou, D. Yang, B. Dai, L. Lu, and H. Li. 3d data augmentation for driving scenes on camera. arXiv preprint arXiv:2303.10340, 2023.
222
+ C. Wang and X. Guo. Plane-based optimization of geometry and texture for rgb-d reconstruction of indoor scenes. In 2018 International Conference on 3D Vision (3DV), pages 533-541. IEEE, 2018.
223
+ T. Wang, X. Zhu, J. Pang, and D. Lin. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 913-922, 2021.
224
+ T. Wang, Z. Xinge, J. Pang, and D. Lin. Probabilistic and geometric depth: Detecting objects in perspective. In Conference on Robot Learning, pages 1475-1485. PMLR, 2022a.
225
+ Y. Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Q. Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8445-8453, 2019.
226
+ Z. Wang, W. Chen, D. Acuna, J. Kautz, and S. Fidler. Neural light field estimation for street scenes with differentiable virtual object insertion. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part II, pages 380-397. Springer, 2022b.
227
+ Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015.
228
+ B. Xu and Z. Chen. Multi-level fusion based 3d object detection from monocular images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2345-2353, 2018.
229
+ D. Xu, D. Anguelov, and A. Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 244-253, 2018.
230
+ S. Yang and S. Scherer. Cubeslam: Monocular 3-d object slam. IEEE Transactions on Robotics, 35 (4):925-938, 2019.
231
+ C. Zhang, Z. Cui, Y. Zhang, B. Zeng, M. Pollefeys, and S. Liu. Holistic 3d scene understanding from a single image with implicit representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8833-8842, 2021.
232
+ W. Zhang, Z. Wang, and C. C. Loy. Exploring data augmentation for multi-modality 3d object detection. arXiv preprint arXiv:2012.12741, 2020.
233
+ J. Zhu, F. Luan, Y. Huo, Z. Lin, Z. Zhong, D. Xi, R. Wang, H. Bao, J. Zheng, and R. Tang. Learning-based inverse rendering of complex indoor scenes with differentiable monte carlo raytracing. In SIGGRAPH Asia 2022 Conference Papers, pages 1-8, 2022.
234
+
235
+ # A Experiments on more Monocular 3D Object Detection methods
236
+
237
+ In our main paper, we utilize ImVoxelNet [Rukhovich et al., 2022] for monocular 3D object detection. To show the robustness of our 3D Copy-Paste across different downstream detection methods, we conducted additional experiments with another monocular 3D object detection model: Implicit3DUnderstanding (Im3D [Zhang et al., 2021]). The Im3D model predicts object 3D shapes, bounding boxes, and scene layout within a unified pipeline. Training this model necessitates not only the SUN RGB-D dataset but also the Pix3D dataset [Sun et al., 2018], which supplies 3D mesh supervision. The Im3D training process consists of two stages. In stage one, the individual modules - the Layout Estimation Network, Object Detection Network, Local Implicit Embedding Network, and Scene Graph Convolutional Network - are pretrained separately. In stage two, all these modules undergo joint training. We incorporate our 3D Copy-Paste method only during this second stage of joint training, and it is applied exclusively to the 10 SUN RGB-D categories used in the main paper. We implemented our experiment following the official Im3D guidelines<sup>1</sup>.
238
+
239
+ Table 7 displays the Im3D results for monocular 3D object detection on the SUN RGB-D dataset, adhering to the same ten categories outlined in the main paper. Im3D without insertion attained a mean average precision (mAP) detection performance of $42.13\%$. After applying our 3D Copy-Paste method, which encompasses physically plausible insertion position, pose, size, and light, the monocular 3D object detection mAP performance increased to $43.34\%$. These results further substantiate the robustness and effectiveness of our proposed method.
240
+
241
+ Table 7: Im3D [Zhang et al., 2021] 3D monocular object detection performance on the SUN RGB-D dataset (same 10 categories as the main paper).
242
+
243
+ <table><tr><td>Setting</td><td>Insertion Position, Pose, Size</td><td>Insertion Illumination</td><td>mAP</td></tr><tr><td>Im3D</td><td>N/A</td><td>N/A</td><td>42.13</td></tr><tr><td>Im3D + 3D Copy-Paste</td><td>Plausible position, size, pose</td><td>Plausible dynamic light</td><td>43.34</td></tr></table>
244
+
245
+ # B More experiment details
246
+
247
+ We run the same experiments multiple times with different random seeds. Table 8 shows the main paper Table 2 results with error range.
248
+
249
+ Table 8: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods (with error range).
250
+
251
+ <table><tr><td>Setting</td><td>Insertion Position, Pose, Size</td><td>Insertion Illumination</td><td>mAP@0.25</td></tr><tr><td>ImVoxelNet</td><td>N/A</td><td>N/A</td><td>40.96 ± 0.4</td></tr><tr><td>ImVoxelNet + random insert</td><td>Random</td><td>Camera point light</td><td>37.02± 0.4</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste (w/o light)</td><td>Plausible position, size, pose</td><td>Camera point light</td><td>41.80± 0.3</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>Plausible position, size, pose</td><td>Plausible dynamic light</td><td>43.79 ± 0.4</td></tr></table>
252
+
253
+ We also report results with mAP@0.15 on the SUN RGB-D dataset (Table 9); our method shows consistent improvements.
254
+
255
+ Table 9: ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with mAP@0.15.
256
+
257
+ <table><tr><td>Setting</td><td>Insertion Position, Pose, Size</td><td>Insertion Illumination</td><td>mAP@0.15</td></tr><tr><td>ImVoxelNet</td><td>N/A</td><td>N/A</td><td>48.45</td></tr><tr><td>ImVoxelNet + 3D Copy-Paste</td><td>Plausible position, size, pose</td><td>Plausible dynamic light</td><td>51.16</td></tr></table>
258
+
259
+ # C Discussion on Limitations and Broader Impact
260
+
261
+ Limitations. Our method, while effective, does have certain limitations. A key constraint is its reliance on the availability of external 3D objects, particularly for uncommon categories where sufficient 3D assets may not be readily available. This limitation could potentially impact the performance of downstream tasks. Moreover, the quality of inserted objects can also affect the results. Possible strategies to address this limitation could include leveraging techniques like Neural Radiance Fields (NeRF) to construct higher-quality 3D assets for different categories.
262
+
263
+ Broader Impact. Our proposed 3D Copy-Paste method demonstrates that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. The implications of this work are profound for both the computer graphics and computer vision communities. From a graphics perspective, our method demonstrates that more accurate 3D property estimation, reconstruction, and inverse rendering techniques can generate more plausible 3D assets and better scene understanding. These assets not only look visually compelling but can also effectively contribute to downstream computer vision tasks. From a computer vision perspective, it encourages us to utilize synthetic data more effectively to tackle challenges in downstream fields, including computer vision and robotics.
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18de3b574422919861be65021ce19352426cb2cf5f86ce87e9604645055f5556
3
+ size 510786
3dcopypastephysicallyplausibleobjectinsertionformonocular3ddetection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19597124d9242e4b03f31350160a693fed854488c5def9ffdf6c0b68eebec35c
3
+ size 343544
3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c27eb301eddd4eb4a8aaad05332a70efe4171dde9ae1fe53187e50171e7c598
3
+ size 117682
3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fdf9d12dbca6c50a00a1c9d4f1b06d55887f524f56804498973e2f9870e66eee
3
+ size 132810
3dindoorinstancesegmentationinanopenworld/6c2082bc-1bf4-4053-9c69-d8c68d4d8e68_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4a71b58dd8220ca2adbdae70f4f85ec02e67a793571c468d4b3f15dc887133d8
3
+ size 17739921
3dindoorinstancesegmentationinanopenworld/full.md ADDED
@@ -0,0 +1,420 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 3D Indoor Instance Segmentation in an Open-World
2
+
3
+ Mohamed El Amine Boudjoghra<sup>1</sup>, Salwa K. Al Khatib<sup>1</sup>, Jean Lahoud<sup>1</sup>, Hisham Cholakkal<sup>1</sup>, Rao Muhammad Anwer<sup>1,2</sup>, Salman Khan<sup>1,3</sup>, Fahad Shahbaz Khan<sup>1,4</sup>
4
+
5
+ <sup>1</sup>Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI),
6
+
7
+ $^{2}$ Aalto University, $^{3}$ Australian National University, $^{4}$ Linköping University
8
+
9
+ {mohamed.boudjoghra, salwa.khatib, jean.lahoud,
10
+
11
+ hisham.cholakkal, rao.anwer, salman.khan, fahad.khan}@mbzuai.ac.ae
12
+
13
+ # Abstract
14
+
15
+ Existing 3D instance segmentation methods typically assume that all semantic classes to be segmented are available during training and that only seen categories are segmented at inference. We argue that such a closed-world assumption is restrictive and explore for the first time 3D indoor instance segmentation in an open-world setting, where the model is allowed to distinguish a set of known classes as well as identify an unknown object as unknown, and then incrementally learn the semantic category of the unknown once the corresponding category labels are available. To this end, we introduce an open-world 3D indoor instance segmentation method, where an auto-labeling scheme is employed to produce pseudo-labels during training and induce separation between known and unknown category labels. We further improve the pseudo-label quality at inference by adjusting the unknown class probability based on the objectness score distribution. We also introduce carefully curated open-world splits leveraging realistic scenarios based on inherent object distribution, region-based indoor scene exploration, and the randomness aspect of open-world classes. Extensive experiments reveal the efficacy of the proposed contributions, leading to promising open-world 3D instance segmentation performance. Code and splits are available at: https://github.com/aminebdj/3D-OWIS.
16
+
17
+ # 1 Introduction
18
+
19
+ 3D semantic instance segmentation aims at identifying objects in a given 3D scene, represented by a point cloud or mesh, by providing object instance-level categorization and semantic labels. The ability to segment objects in the 3D domain has numerous vision applications, including robotics, augmented reality, and autonomous driving. Following developments in sensors that acquire depth information, a variety of datasets that provide instance-level annotations have been presented in the literature. In view of the availability of large-scale 3D datasets and the advances in deep learning methods, various 3D instance segmentation methods have been proposed in recent years.
20
+
21
+ The dependence of 3D instance segmentation methods on available datasets has a major drawback: a fixed set of object labels (vocabulary) is learned. However, object classes in the real world are plentiful, and many unseen/unknown classes can be present at inference. Current methods that learn on a fixed set not only discard the unknown classes but also supervise them to be labeled as background. This prevents intelligent recognition systems from identifying unknown or novel objects that are not part of the background. Given the importance of identifying unknown objects, recent works have explored open-world learning setting for 2D object detection [18, 11, 28, 33]. In the open-world setting, a model is expected to identify unknown objects, and once new classes are labeled, the new set is desired to be incrementally learned without retraining [18]. While previous methods have been mostly suggested for open-world 2D object detection, it is yet to be explored
22
+
23
+ ![](images/7ca002e1dfce5522ba5750eba00f9e60a7945cce812a1ce81e4b7ebba6cbb487.jpg)
24
+ Figure 1: 3D instance segmentation in an open-world. During each iterative learning phase, the model detects unknown objects, and a human operator gradually assigns labels to some of them and incorporates them into the pre-existing knowledge base for further training.
25
+
26
+ in the 3D domain. The main challenge lies in understanding how objects appear in 3D in order to separate them from the background and other object categories.
27
+
28
+ 3D instance segmentation in the open world, illustrated in Fig. 1, offers more flexibility, allowing the model to identify unknown objects and request annotations for these novel classes from an oracle for further training. However, this approach presents several challenges: (i) the lack of annotations for unknown classes, necessitating quality pseudo-labeling techniques; (ii) the similarities between predicted features of known and unknown classes, requiring separation techniques for improved prediction; and (iii) the need for a more reliable objectness scoring method to differentiate between good and bad predicted masks for 3D point clouds.
29
+
30
+ In this work, we investigate a novel problem setting, namely open-World indoor 3D Instance Segmentation, which aims at segmenting objects of unknown classes while incrementally adding new classes. We define real-world protocols and splits to test the ability of 3D instance segmentation methods to identify unknown objects. In the proposed setup, unknown object labels are also added incrementally to the set of known classes, akin to real-world incremental learning scenarios. We propose an unknown object identifier with a probability correction scheme that enables improved recognition of objects. To the best of our knowledge, we are the first to explore 3D instance segmentation in an open-world setting. The key contributions of our work are:
31
+
32
+ - We propose the first open-world 3D indoor instance segmentation method with a dedicated mechanism for accurate identification of 3D unknown objects. We employ an auto-labeling scheme to generate pseudo-labels during training and induce separation in the query embedding space to delineate known and unknown class labels. At inference, we further improve the quality of pseudo-labels by adjusting the probability of unknown classes based on the distribution of the objectness scores.
33
+ - We introduce carefully curated open-world splits, having known vs. unknown and then incremental learning over the span of 200 classes, for a rigorous evaluation of open-world 3D indoor segmentation. Our proposed splits leverage different realistic scenarios such as inherent distribution (frequency-based) of object classes, various class types encountered during the exploration of indoor areas (region-based), and the randomness aspect of object classes in the open-world. Extensive experiments reveal the merits of the proposed contributions towards bridging the performance gap between our method and oracle.
34
+
35
+ # 2 Related Work
36
+
37
+ 3D semantic instance segmentation: The segmentation of instances in 3D scenes has been approached from various angles. Grouping-based or clustering-based techniques use a bottom-up pipeline by learning an embedding in the latent space to help cluster the object points. [4, 13, 14, 17, 20, 21, 34, 38]. Proposal-based methods work in a top-down fashion, first detecting 3D bounding boxes, then segmenting the object region within the box [10, 15, 22, 36, 37]. Recently, spurred by related 2D work [5, 6], the transformer design [31] has also been applied for the purpose of segmenting 3D instances [29, 30]. Other methods present weakly-supervised alternatives to methods that use dense annotations in order to lower the cost of annotating 3D data [7, 16, 35]. While all these methods aim to improve the quality of 3D instance segmentation, they are trained on a known set of semantic labels. On the other hand, our proposed method aims at segmenting objects with both known and unknown class labels.
38
+
39
+ ![](images/17f1a80c18524ba7df89c37aa9f404e38ea9331f1dc9d67c299da1449a630479.jpg)
40
+ Figure 2: Proposed open-world 3D instance segmentation pipeline. From left to right: 3D instance segmentation model, where the point cloud goes through a 3D convolutional backbone. The extracted feature maps are used in the transformer decoder to refine some initial queries, which then pass through two MLPs to generate label and mask predictions. The Contrastive Clustering block takes the refined queries, the prediction masks, and labels to further process the queries by assigning a target or an unknown pseudo label in the Query Processing module, and then storing them in a Query Store to finally update the class prototypes, which are finally used for contrastive clustering. During inference, the queries are used to correct the probability of the predicted labels based on their reachability to the known class prototypes.
41
+
42
+ Open-world object recognition: Open-world object recognition was introduced in [2], where the Nearest Mean Classifier was extended to an open-world setting. In the direction of open-world object detection, many studies [41, 18, 11, 25] have been conducted. In [18], pseudo-labels for the unknowns are generated to perform contrastive clustering during training for better unknown-known class separation, and an energy-based unknown class identifier is proposed to detect the unknown classes based on the energy of the logits from the known classes. For incremental learning, they adopt exemplar replay to alleviate catastrophic forgetting of old classes. Addressing the same task as [18], [11] used a transformer-based model and proposed another way of generating unknown pseudo-labels through a new method of objectness estimation, and introduced a foreground objectness branch that separates the background from the foreground. For the task of outdoor 3D point cloud semantic segmentation, [3] proposed a model that predicts old, novel, and unknown classes from three separate classification heads. The latter is trained on the labels of the known classes and on pseudo-labels for old classes generated by the same model to alleviate catastrophic forgetting, while the unknown class is assigned the second-highest score for better unknown class segmentation. Other methods proposed in [40, 12, 39] primarily focus on enhancing the generalizability of 3D models for novel classes by leveraging supervision from 2D Vision Language Models for object recognition and 3D semantic segmentation tasks. However, these approaches exhibit several limitations: (i) the 3D model's performance becomes dependent on the 2D Vision Language model; (ii) the 3D geometric properties of unseen objects in the training data are neglected during the training process; (iii) there is no avenue for enhancing the model's performance on novel classes when new labels are introduced; and (iv) the training process necessitates pairs of images and corresponding 3D scenes.
43
+
44
+ # 3 Closed-world 3D Instance Segmentation
45
+
46
+ We adopted the state-of-the-art 3D instance segmentation model Mask3D [29] as our baseline. The latter is a hybrid model that combines Convolutional Neural Networks (CNNs) with transformers to learn class-agnostic masks and labels for instance separation. The backbone of Mask3D is CNN-based and used to extract feature maps from multiple levels. Meanwhile, the decoder is transformer-based and used to refine $n_{Q} \in \mathbb{N}$ instance queries $Q = \{q_{j} \in \mathbb{R}^{D} \mid j \in (1, \dots, n_{Q})\}$ , using the extracted
47
+
48
+ feature maps. The learning scheme consists of a Cross-entropy loss for learning semantic class labels and binary cross-entropy loss for learning instance masks during training.
49
+
50
+ # 4 Open-World 3D Instance Segmentation
51
+
52
+ # 4.1 Problem formulation
53
+
54
+ We start by formulating the problem setting of open-world 3D instance segmentation. At a Task $\mathcal{T}^t$ , there exists a set of known object categories $\mathcal{K}^t = \{1,2,\dots,C\}$ and a set of unknown object categories $\mathcal{U}^t = \{C + 1,\ldots \}$ that may exist on inference time. The training dataset $\mathcal{D}^t = \{\mathbf{X}^t,\mathbf{Y}^t\}$ includes samples from the classes $\mathcal{K}^t$ . The input set $\mathbf{X}^t = \{\mathbf{P}_1,\dots,\mathbf{P}_M\}$ is made of $M$ point clouds, where $\mathbf{P}_i\in \mathbb{R}^{N\times 3}$ is a quantized point cloud of $N$ voxels each carrying average RGB color of the points within. The corresponding labels are $\mathbf{Y}^t = \{\mathbf{Y}_1,\dots,\mathbf{Y}_M\}$ , where $\mathbf{Y}_i = \{\mathbf{y}_1,\dots,\mathbf{y}_k\}$ encodes $k$ object instances. Each object instance $\mathbf{y}_i = [\mathbf{B}_i,l_i]$ represents a binary mask $\mathbf{B}_i\in \{0,1\} ^N$ and a corresponding class label $l_{i}\in \mathcal{K}^{t}$ .
55
+
56
+ In our problem setting, $\mathcal{M}_C$ is a 3D instance segmentation model that is trained on $C$ object categories, and, on test time, can recognize instances from these classes, in addition to instances from new classes not seen during training by classifying them as unknown. The detected unknown instances can be used by a human user to identify a set of $n$ new classes not previously trained on, which can be incrementally added to the learner that updates itself to produce $\mathcal{M}_{C + n}$ without explicitly retraining on previously seen classes. At this point in Task $\mathcal{T}^{t + 1}$ , the known class object categories are $\mathcal{K}^{t + 1} = \mathcal{K}^t\cup \{C + 1,\dots,C + n\}$ . This process repeats throughout the lifespan of the instance segmentation model, which continuously improves itself by incorporating new information from new classes until it reaches the maximum number of classes it can learn. In the rest of the paper, we assign the unknown class the label $\mathbf{0}$ .
57
+
58
+ # 4.2 Open-world scenarios
59
+
60
+ In order to simulate different realistic scenarios that might be encountered in an open-world, we propose three different ways of grouping classes under three tasks. These scenarios split scenes based on the inherent distribution (frequency-based) of object classes, the various classes encountered during the exploration of various indoor areas (region-based), and the randomness aspect of object classes in the open world.
61
+
62
+ Table 1: Statistics of each split across the three tasks. The number of known classes per task is reported along with the count of instances (3D objects) in the training and validation sets. We also show the number of non-empty scenes used during training and validation.
63
+
64
+ <table><tr><td rowspan="2"></td><td colspan="3">Split A</td><td colspan="3">Split B</td><td colspan="3">Split C</td></tr><tr><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 1</td><td>Task 2</td><td>Task 3</td></tr><tr><td>Classes count</td><td>64</td><td>68</td><td>66</td><td>73</td><td>55</td><td>70</td><td>66</td><td>66</td><td>66</td></tr><tr><td>Train instances</td><td>24224</td><td>3791</td><td>1612</td><td>15327</td><td>8177</td><td>6123</td><td>13483</td><td>8239</td><td>7905</td></tr><tr><td>Validation instances</td><td>6539</td><td>1000</td><td>428</td><td>4177</td><td>2261</td><td>1529</td><td>3776</td><td>2102</td><td>2089</td></tr><tr><td>Train scenes</td><td>1201</td><td>924</td><td>627</td><td>1201</td><td>1002</td><td>895</td><td>1169</td><td>1089</td><td>1159</td></tr><tr><td>Validation scenes</td><td>312</td><td>242</td><td>165</td><td>312</td><td>264</td><td>236</td><td>307</td><td>273</td><td>300</td></tr></table>
65
+
66
+ ![](images/128f8d9981c9b7280e70727004892c8e764c3410d0cf763245cd7230b5a90367.jpg)
67
+ Figure 3: Point-wise count for each class across the three tasks under the three open-world scenarios
68
+
69
+ ![](images/0467f2f711e1d4248f87252a19da273e75998036e9b0bc2e48c7b985e393b1bf.jpg)
70
+
71
+ ![](images/fdee196c05d99da65b384ce3603783f9b983ca6df5897080e3584c62c8bcff73.jpg)
72
+
73
+ Split A (Instance frequency-based): We introduce a split that leverages the inherent distribution of objects, with known classes being more prevalent than unknown categories. Task $\mathcal{T}^1$ encompasses all the head classes as defined in the ScanNet200 benchmark [8, 27], while tasks $\mathcal{T}^2$ and $\mathcal{T}^3$ group the common and tail classes, respectively. This division allows us to effectively capture the varying frequency and significance of object categories within the dataset.
74
+
75
+ Split B (Region-based): In this split, our objective is to replicate the diverse class types encountered during indoor exploration. We argue that a perfect model for a robot moving indoors should segment both classes it knows and classes it hasn't seen before. Additionally, it should keep learning and getting better at segmenting new classes over time. This partition draws inspiration from the sequence of classes that a robot might encounter when navigating indoors. To achieve this, we group classes that are likely to be encountered initially when accessing an indoor space and share similarities in scenes. Initially, we assign each class to a specific scene where it predominantly occurs. Subsequently, we divide the classes into three distinct groups, corresponding to the three tasks.
76
+
77
+ Split C (Random sampling of classes): This third split introduces a different challenge inspired by the randomness aspect of the open-world, where tasks can exhibit random levels of class imbalance. To create this split, we randomly shuffled the classes and sampled without replacement, selecting 66 classes three times for each task.
78
+
79
+ # 4.3 Generating pseudo-labels for the unknown classes
80
+
81
+ Because of the wide range of classes in an open-world setting, an auto-labeler is used as an alternative to manual labeling. It makes use of the existing target labels of the available ground-truth classes (known classes) to generate pseudo-labels for the unknown class during training. In [18], the model is assumed to be class-agnostic, so unknown objects are predicted as known with high confidence. As a result, the authors propose to use the predictions with top-k confidence scores that do not intersect with the ground truth as pseudo-labels for the unknown class. In our study, we show that top-k pseudo-label selection can severely harm the performance of the model on the known and unknown classes. Hence, we propose a Confidence Thresholding (CT) based selection of pseudo-labels and show that the performance on the known and unknown classes increases by a large margin in terms of mean Average Precision (mAP).
82
+
83
+ The auto-labeler unit, depicted in Fig. 2, is used for unknown pseudo-labels generation. It takes a set of predicted binary masks $\mathbf{B} = \{\mathbf{B}_i \mid i \in (1, \dots, n_Q)\}$ , where $n_Q$ is the number of queries, $\mathbf{B}_i = \mathbb{1}(M_i > 0.5)$ is a mask from a single query, and $M_i = \{m_{i,j} \in [0,1] \mid j \in (1, \dots, N)\}$ is a heat map measuring the similarity between a query $q_j \in \mathbb{R}^D$ and the features of $N$ voxels extracted from the high-resolution level in the backbone.
84
+
85
+ Moreover, each query $q_{j}$ encodes semantic information and can generate a class prediction $\mathbb{P}_{cls}(q_j) = \{\mathbb{P}_{cls}(c;q_j)\mid c\in (0,1,\dots,|\mathcal{K}^t |)\}$ using a classification head (refer to Fig. 2). Subsequently, the objectness confidence score is assigned to predictions following Eq 1.
86
+
87
+ $$
88
+ s_j = s_{cls,j} \cdot \frac{M_j \cdot \mathbb{1}(M_j > 0.5)^T}{|\mathbb{1}(M_j > 0.5)|_1} \tag{1}
89
+ $$
90
+
91
+ where $s_{cls,j} \in \mathbb{R}$ is the max output probability from the classification head $\mathbb{P}_{cls}(q_j)$ , and $\mathbb{1}$ is the indicator function. After scoring the predictions, the auto-labeler returns $m$ pseudo-labels $\tilde{\mathbf{Y}} = \{\tilde{\mathbf{y}}_i = [\tilde{\mathbf{B}}_i, \mathbf{0}] \mid i \in (1, \dots, m)\}$ with confidence above a threshold and a low IoU with the known classes' target masks.
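+ A compact sketch of this confidence-thresholding (CT) selection is shown below; the threshold values and data layout are our own illustrative assumptions rather than the exact training code.
+
+ ```python
+ import numpy as np
+
+ def select_unknown_pseudo_labels(heatmaps, cls_scores, gt_masks,
+                                  conf_thresh=0.5, iou_thresh=0.3):
+     """Confidence-thresholding (CT) pseudo-label selection (illustrative thresholds).
+     heatmaps:   (n_Q, N) per-query voxel heatmaps M_j in [0, 1]
+     cls_scores: (n_Q,)   max classification probability s_cls,j per query
+     gt_masks:   (K, N)   binary masks of the known-class targets in the scene."""
+     masks = (heatmaps > 0.5).astype(np.float32)                 # B_j = 1(M_j > 0.5)
+     mask_sizes = np.clip(masks.sum(axis=1), 1.0, None)
+     # Objectness score of Eq. (1): classification confidence times mean heat inside the mask.
+     scores = cls_scores * (heatmaps * masks).sum(axis=1) / mask_sizes
+     pseudo = []
+     for j in np.argsort(-scores):
+         if scores[j] < conf_thresh:
+             break                                               # remaining queries score even lower
+         if gt_masks.shape[0] > 0:
+             inter = (masks[j] * gt_masks).sum(axis=1)
+             union = np.clip(masks[j].sum() + gt_masks.sum(axis=1) - inter, 1.0, None)
+             max_iou = float((inter / union).max())
+         else:
+             max_iou = 0.0
+         if max_iou < iou_thresh:                                # does not overlap any known-class target
+             pseudo.append((masks[j], 0))                        # label 0 = unknown
+     return pseudo
+ ```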
92
+
93
+ # 4.4 Query target assignment and contrastive clustering
94
+
95
+ Similar to [18], we utilize contrastive clustering to enhance the separation of classes within the query embedding space. To achieve this, we employ a set of query prototypes denoted as $\mathcal{Q}_p = \{\mathbf{q}_i \in \mathbb{R}^D \mid i \in (0, 1,.., |\mathcal{K}^t|)\}$ , where $\mathbf{q}_0$ denotes the prototype of the class unknown. We apply a contrastive loss that encourages queries with similar classes to be attracted to their respective prototypes while pushing them away from those representing negative classes, as illustrated in Fig. 2. Since the queries are used to determine the class of the objects (see Fig. 2 inference block), the class prototypes are expected to hold general semantic knowledge of their corresponding classes.
96
+
97
+ Hungarian matching is performed in the Assign target to query module, depicted in Fig. 2, where the prediction-target indices are used to assign labels to the queries that generated the matched predictions. The labeled queries are then stored in a query store $\mathcal{Q}_{\text{store}}$ , which is a queue with a maximum capacity. This queue is used to update the query prototypes $\mathcal{Q}_p$ via an exponential moving average.
98
+
99
+ A hinge embedding loss is utilized according to Eq 2. This loss ensures that queries belonging to class $c$ , denoted $q_{c}$ , are pulled towards their corresponding class prototype $\mathbf{q}_c$ , while being pushed away from the prototypes of the other classes.
100
+
101
+ $$
102
+ \mathcal{L}_{\text{cont}}(q_c) = \sum_{i=0}^{|\mathcal{K}^t|} \ell(q_c, \mathbf{q}_i) \tag{2}
103
+ $$
104
+
105
+ $$
106
+ \ell(q_c, \mathbf{q}_i) = \left\{ \begin{array}{ll} \|q_c - \mathbf{q}_i\|_2 & i = c \\ \max(0, \Delta - \|q_c - \mathbf{q}_i\|_2) & i \neq c \end{array} \right.
107
+ $$
108
+
109
+ where $\Delta$ is the margin of the contrastive clustering.
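+ For concreteness, a minimal sketch of the prototype update and of the loss in Eq. 2 is given below; the momentum value and the dictionary-based data layout are our own illustrative choices.
+
+ ```python
+ import numpy as np
+
+ def update_prototypes(prototypes, query_store, momentum=0.99):
+     """Exponential-moving-average update of the class prototypes Q_p from the
+     labelled queries in the query store (a dict: class id -> list of embeddings)."""
+     for c, queries in query_store.items():
+         if queries:
+             prototypes[c] = momentum * prototypes[c] + (1.0 - momentum) * np.mean(queries, axis=0)
+     return prototypes
+
+ def contrastive_loss(q, c, prototypes, margin):
+     """Hinge embedding loss of Eq. (2): pull query q towards prototype c and push it
+     at least `margin` away from the prototypes of all other classes (including unknown)."""
+     loss = 0.0
+     for i, proto in prototypes.items():
+         dist = np.linalg.norm(q - proto)
+         loss += dist if i == c else max(0.0, margin - dist)
+     return float(loss)
+ ```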
110
+
111
+ # 4.5 Reachability-based probability correction (PC)
112
+
113
+ In [23], an architecture that can deal with long-tail distributions and unknown class prediction for open-world object recognition was proposed, where unknown classes are assumed to be very different in color and texture from the known classes, without any prior on the unknowns. However, we show in Fig. 6 that many unknown instances hold similar features to the known ones.
114
+
115
+ In our method, we relax the strict assumption of high dissimilarity of unknown and known classes and correct the predicted output probability following two characteristics of a feature
116
+
117
+ ![](images/d21028e2347e3673153363ef5b2192503c2e9ad8020765ff754da0010b6a093c.jpg)
118
+ Figure 4: Illustration of the region in the query embedding space where the class probability is corrected.
119
+
120
+ from an unknown object: (1) it has to be far from the nearest known class, as features of the unknown class are expected to be pushed away from the prototypes of the known classes after applying contrastive clustering, and (2) the feature should correspond to an object that is not a known class. We show that applying this approach during inference boosts the performance of the model on the unknown class considerably by compensating for the weak pseudo-labels provided by the auto-labeler.
121
+
122
+ Our probability correction scheme is the following:
123
+
124
+ $$
125
+ \mathbb{P}(\mathbf{0}; q_j) = \mathbb{P}_{\text{cls}}(\mathbf{0}; q_j) \cup \mathbb{P}_{\text{corr}}(\mathbf{0}; q_j) \tag{3}
126
+ $$
127
+
128
+ where $\mathbb{P}_{cls}$ is the probability from the classification head, and $\mathbb{P}_{corr}$ is the correction probability. We base our intuition on the fact that unknown classes have high objectness scores, which places them not too far from the prototypes of the known classes. To model this behavior, we choose
129
+
130
+ $$
131
+ \mathbb{P}_{corr}(\mathbf{0}; q_j) = \mathbb{P}_{corr}(\mathbf{0}; o, q_j) \cdot \mathbb{P}_{corr}(o; q_j)
132
+ $$
133
+
134
+ where $\mathbb{P}_{corr}(o; q_j)$ is the likelihood that the query corresponds to an object that is not known (either background or a true unknown). Since the query prototypes encode class-specific information, we propose the following measure of the objectness of a query given all prototypes of the known classes: it assigns a high objectness probability to a query that is close to only a few known classes. This probability distribution defines the objectness of unknown objects around a certain boundary from the prototypes as follows.
135
+
136
+ $$
137
+ \mathbb{P}_{corr}(o; q_j) = 1 - \sum_{k=1}^{|\mathcal{K}^t|} \mathbb{P}_{cls}(k; q_j)
138
+ $$
139
+
140
+ ![](images/1dc61c33416a7ba8687b5defcc54adb07b52436b5e1662558e916d522e455336.jpg)
141
+
142
+ ![](images/2665a34cb34f9efb53aab6598273aae7931b783ff82052e2f00d29e5a353e441.jpg)
143
+
144
+ ![](images/0b401f42a4b29d96d36e03104a9165869a95dff90091f4ec84eb95c4b3a9fb11.jpg)
145
+
146
+ ![](images/545b5d751319679c9b202d14d3c90df6d3c921488b734de71702833a8c10d757.jpg)
147
+ Ground Truth
148
+
149
+ ![](images/3628792dcc43d969f434a4f87d7593211e5424500693592df1d7abcf7aea7970.jpg)
150
+ 3D-OWIS-PC-CT
151
+
152
+ ![](images/169b7cb558600ad55df4c6054a6a318c4cfb31f0693df5dc4418415bd6c1327d.jpg)
153
+ 3D-OWIS
154
+ Figure 5: Qualitative results for 3D instance segmentation results on some ScanNet200 validation scenes. Points highlighted in blue belong to unknown classes and those highlighted in green belong to known classes. We show the performance of our model in retrieving the unknown class objects compared to 3D-OWIS-PC-CT for the three scenes.
155
+
156
+ while $\mathbb{P}_{corr}(\mathbf{0};o,q_j)$ is the probability of the query being an unknown object, which takes a higher value the further the query is from the nearest prototype of the known classes.
157
+
158
+ $$
159
+ \mathbb{P}_{\text{corr}}(\mathbf{0}; o, q_j) = \sigma\left(\frac{\gamma(q_j) - a}{b}\right); \quad \gamma(q_j) = \min_{\mathbf{q}_i} \|q_j - \mathbf{q}_i\|_2
160
+ $$
161
+
162
+ Here $\sigma$ is the sigmoid function, $\gamma(q_j)$ is the reachability of the query $q_j$, $\mathbf{q}_i$ is the prototype of the $i^{th}$ class, and $a, b$ are the shift and scale of the sigmoid function that ensure $\mathbb{P}_{corr}(\mathbf{0}; o, q_j, \gamma(q_j) = 0) = 0.05$ and $\mathbb{P}_{corr}(\mathbf{0}; o, q_j, \gamma(q_j) = \frac{\Delta}{2}) = 0.95$, for a contrastive clustering margin $\Delta$.
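For completeness, these two boundary conditions fix $a$ and $b$ in closed form (a direct consequence of the stated constraints, using $\sigma^{-1}(0.05) = -\ln 19$ and $\sigma^{-1}(0.95) = \ln 19$):

$$
\frac{0 - a}{b} = -\ln 19, \qquad \frac{\Delta/2 - a}{b} = \ln 19 \quad \Longrightarrow \quad a = \frac{\Delta}{4}, \quad b = \frac{\Delta}{4\ln 19} \approx \frac{\Delta}{11.8}.
$$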
163
+
164
+ We finally normalize the probabilities from the classification head of the known classes as follows:
165
+
166
+ $$
167
+ \mathbb{P}(c; q_j) = \frac{\mathbb{P}_{cls}(c; q_j)}{\sum_{l \in \mathcal{K}^t} \mathbb{P}_{cls}(l; q_j)} \left(1 - \mathbb{P}(\mathbf{0}; q_j)\right)
168
+ $$
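Putting the pieces of this subsection together, a minimal inference-time sketch of the probability correction could look as follows. This is our own illustration: the union in Eq. 3 is interpreted here as the probabilistic OR $p + q - pq$, which is only one possible reading of the notation, and the margin $\Delta$ is a placeholder value.

```python
import math

import torch

def correct_probabilities(p_cls, queries, prototypes, delta=1.0):
    """p_cls:      (Q, K+1) classification-head probabilities, column 0 = unknown.
    queries:    (Q, D) query embeddings.
    prototypes: (K, D) prototypes of the known classes (unknown prototype excluded).
    Returns corrected probabilities with the same shape as p_cls."""
    # Shift/scale of the sigmoid from the boundary conditions at 0 and delta/2.
    a = delta / 4.0
    b = delta / (4.0 * math.log(19.0))

    # Reachability: distance to the nearest known-class prototype.
    gamma = torch.cdist(queries, prototypes).min(dim=1).values          # (Q,)
    p_unk_given_obj = torch.sigmoid((gamma - a) / b)                    # P_corr(0; o, q)

    # Objectness of a non-known object: 1 - sum of known-class probabilities.
    p_obj = (1.0 - p_cls[:, 1:].sum(dim=1)).clamp(0.0, 1.0)             # P_corr(o; q)
    p_corr = p_unk_given_obj * p_obj                                    # P_corr(0; q)

    # Eq. 3, read here as a probabilistic OR of the two unknown probabilities.
    p_unknown = p_cls[:, 0] + p_corr - p_cls[:, 0] * p_corr

    # Renormalize the known-class probabilities to share the remaining mass.
    known = p_cls[:, 1:] / p_cls[:, 1:].sum(dim=1, keepdim=True).clamp(min=1e-8)
    return torch.cat([p_unknown.unsqueeze(1),
                      known * (1.0 - p_unknown).unsqueeze(1)], dim=1)
```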
169
+
170
+ # 4.6 Alleviating catastrophic forgetting for incremental learning
171
+
172
+ Following the success of exemplar replay in avoiding catastrophic forgetting of the old classes during incremental learning for object detection [18, 11, 41], we adopt it for the task of incremental learning in 3D instance segmentation where we use exemplars from the classes of the previous task to fine-tune the model trained on the novel classes. In our setting, we use the same dataset for the three tasks and mask the classes of the previous task when training on the novel classes from the current task. As a result, the novel classes of the current task might be encountered again when replaying the exemplars from the previous task, as the same scenes are being used in fine-tuning.
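A small sketch of the exemplar selection used for replay is given below. The 40-exemplars-per-class budget follows the implementation details in Sec. 5.2; everything else, including the random sampling of scenes, is an assumption made for illustration.

```python
import random
from collections import defaultdict

def build_exemplar_set(scenes, previous_classes, per_class=40, seed=0):
    """scenes: list of (scene_id, set_of_class_labels_present).
    Returns scene ids to replay so that each previous-task class is covered
    by roughly `per_class` exemplar scenes."""
    random.seed(seed)
    by_class = defaultdict(list)
    for scene_id, labels in scenes:
        for c in labels & previous_classes:
            by_class[c].append(scene_id)

    exemplars = set()
    for candidates in by_class.values():
        k = min(per_class, len(candidates))
        exemplars.update(random.sample(candidates, k))
    return exemplars
```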
173
+
174
+ # 5 Experiments
175
+
176
+ # 5.1 Open-world evaluation protocol
177
+
178
+ We use our proposed splits of classes which mimic the challenges that are mostly faced in the open-world to ensure a strict performance evaluation for 3D instance segmentation models.
179
+
180
+ Evaluation metrics. We adopt three common evaluation metrics: wilderness impact (WI) [9], absolute open set error (A-OSE) [26], and the recall of the unknown classes (U-Recall) [1, 24, 11].
181
+
182
+ Table 2: State-of-the-art comparison for the 3D-OWIS model. We show a comparison of performance under the three open-world scenarios, where 3D-OWIS-PC - CT is our model 3D-OWIS without Probability Correction (PC) and Confidence Thresholding (CT). We rely on the metrics used in the open-world literature: A-OSE, which quantifies the number of unknown objects misclassified as one of the known classes; WI, which measures the impact of the unknown class on the precision of the model on the known classes; and U-Recall, which evaluates the model's ability to recover the unknown objects. We show that 3D-OWIS performs remarkably better than the other models under all scenarios when dealing with the known classes, shows superior performance in splits A and B, and slightly lower performance in split C when handling the unknown objects. We also provide a closed-setting comparison between Mask3D and Oracle (Ours with access to unknown labels).
183
+
184
+ <table><tr><td>Task IDs (→)</td><td colspan="5">Task 1</td><td colspan="6">Task 2</td><td colspan="3">Task 3</td></tr><tr><td rowspan="2"></td><td rowspan="2">WI(↓)</td><td rowspan="2">A-OSE(↓)</td><td rowspan="2">U-Recall(↑)</td><td colspan="2">mAP (↑)</td><td rowspan="2">WI(↓)</td><td rowspan="2">A-OSE(↓)</td><td rowspan="2">U-Recall(↑)</td><td colspan="3">mAP (↑)</td><td colspan="3">mAP (↑)</td></tr><tr><td>Current known</td><td>All</td><td>Previously known</td><td>Current known</td><td>All</td><td>Previously known</td><td>Current known</td><td>All</td></tr><tr><td colspan="15">Split A</td></tr><tr><td>Oracle</td><td>0.129</td><td>227</td><td>55.94</td><td>38.75</td><td>38.60</td><td>0.03</td><td>112</td><td>45.40</td><td>38.25</td><td>20.91</td><td>29.40</td><td>29.58</td><td>17.78</td><td>26.10</td></tr><tr><td>Mask3D [29]</td><td>-</td><td>-</td><td>-</td><td>39.12</td><td>39.12</td><td>-</td><td>-</td><td>-</td><td>38.30</td><td>20.57</td><td>29.15</td><td>28.61</td><td>18.33</td><td>25.58</td></tr><tr><td>3D-OW-DETR [11]</td><td>0.547</td><td>721</td><td>22.14</td><td>35.56</td><td>35.05</td><td>0.282</td><td>253</td><td>26.24</td><td>18.18</td><td>13.62</td><td>15.76</td><td>21.56</td><td>08.38</td><td>17.67</td></tr><tr><td>3D-OWIS-PC - CT</td><td>1.589</td><td>707</td><td>30.72</td><td>37.50</td><td>37.00</td><td>0.000</td><td>4</td><td>04.75</td><td>11.00</td><td>17.30</td><td>14.10</td><td>21.40</td><td>08.00</td><td>17.50</td></tr><tr><td>Ours: 3D-OWIS</td><td>0.397</td><td>607</td><td>34.75</td><td>40.2</td><td>39.7</td><td>0.007</td><td>126</td><td>27.03</td><td>29.40</td><td>16.40</td><td>22.70</td><td>20.20</td><td>15.20</td><td>18.70</td></tr><tr><td colspan="15">Split B</td></tr><tr><td>Oracle</td><td>1.126</td><td>939</td><td>70.31</td><td>24.57</td><td>24.80</td><td>0.180</td><td>441</td><td>73.16</td><td>25.50</td><td>20.30</td><td>23.40</td><td>23.40</td><td>30.40</td><td>26.00</td></tr><tr><td>Mask3D [29]</td><td>-</td><td>-</td><td>-</td><td>23.48</td><td>23.48</td><td>-</td><td>-</td><td>-</td><td>21.81</td><td>18.91</td><td>20.37</td><td>24.20</td><td>29.22</td><td>26.06</td></tr><tr><td>3D-OW-DETR [11]</td><td>3.229</td><td>1935</td><td>17.18</td><td>20.00</td><td>19.73</td><td>2.053</td><td>1389</td><td>33.31</td><td>12.36</td><td>13.86</td><td>12.93</td><td>07.27</td><td>18.96</td><td>11.62</td></tr><tr><td>3D-OWIS-PC - CT</td><td>3.133</td><td>1895</td><td>21.67</td><td>18.94</td><td>18.70</td><td>3.169</td><td>1081</td><td>26.63</td><td>18.00</td><td>16.40</td><td>17.20</td><td>17.30</td><td>20.10</td><td>18.30</td></tr><tr><td>Ours: 3D-OWIS</td><td>3.684</td><td>1780</td><td>24.79</td><td>23.60</td><td>23.30</td><td>0.755</td><td>581</td><td>24.21</td><td>18.70</td><td>17.30</td><td>17.90</td><td>18.70</td><td>24.60</td><td>20.90</td></tr><tr><td colspan="15">Split C</td></tr><tr><td>Oracle</td><td>1.039</td><td>651</td><td>71.61</td><td>23.30</td><td>23.6</td><td>0.249</td><td>591</td><td>62.83</td><td>20.50</td><td>18.40</td><td>19.60</td><td>25.30</td><td>28.20</td><td>26.30</td></tr><tr><td>Mask3D [29]</td><td>-</td><td>-</td><td>-</td><td>20.82</td><td>21.15</td><td>-</td><td>-</td><td>-</td><td>22.67</td><td>26.67</td><td>24.13</td><td>25.41</td><td>25.21</td><td>25.35</td></tr><tr><td>3D-OW-DETR [11]</td><td>1.463</td><td>1517</td><td>13.00</td><td>14.81</td><td>14.59</td><td>1.330</td><td>847</td><td>16.04</td><td>08.00</td><td>17.41</td><td>12.40</td><td>08.81</td><td>15.63</td><td>11.01</td></tr><tr><td>3D-OWIS-PC - 
CT</td><td>2.901</td><td>1752</td><td>15.66</td><td>15.00</td><td>14.80</td><td>1.799</td><td>666</td><td>15.99</td><td>13.50</td><td>19.70</td><td>16.40</td><td>17.50</td><td>17.70</td><td>17.50</td></tr><tr><td>Ours: 3D-OWIS</td><td>0.419</td><td>1294</td><td>14.34</td><td>18.00</td><td>17.60</td><td>0.152</td><td>303</td><td>15.80</td><td>13.90</td><td>22.20</td><td>17.80</td><td>17.80</td><td>17.70</td><td>17.80</td></tr></table>
185
+
186
+ These metrics evaluate the performance of our model on the unknown classes and provide a fair comparison with and without our contributions. For the known classes, we use mean Average Precision (mAP). WI measures the impact of the unknown classes on the precision of the model at a specific confidence level; ideally, WI is nil, i.e., no unknown objects are predicted as known. For our evaluation, we report WI at 0.5 confidence. It can be computed as follows: $\mathrm{WI} = \frac{P_{\mathcal{K}}}{P_{\mathcal{K}\cup u}} - 1$
187
+
188
+ We also report A-OSE, which represents the count of unknown instances misclassified as one of the known classes, and the U-Recall at 0.5 IoU, which reflects the ability of the model to recover unknown objects.
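For reference, these three metrics can be computed from matched predictions roughly as sketched below. This is a hedged illustration: the exact matching protocol, IoU threshold, and de-duplication rules follow the respective papers and may differ in detail.

```python
def open_world_metrics(preds, num_gt_unknown, conf=0.5):
    """preds: list of dicts with keys
         'score'      : confidence of the prediction,
         'pred_known' : True if predicted as one of the known classes,
         'matched_gt' : 'known', 'unknown', or None (best match at 0.5 IoU).
    num_gt_unknown: number of ground-truth unknown instances.
    Assumes at most one prediction is matched to each ground-truth instance."""
    preds = [p for p in preds if p['score'] >= conf]

    tp_known = sum(p['pred_known'] and p['matched_gt'] == 'known' for p in preds)
    fp_known = sum(p['pred_known'] and p['matched_gt'] != 'known' for p in preds)

    # A-OSE: unknown instances absorbed by known-class predictions.
    a_ose = sum(p['pred_known'] and p['matched_gt'] == 'unknown' for p in preds)

    # WI = P_K / P_{K u U} - 1, where P_K ignores false positives on unknowns.
    p_k = tp_known / max(tp_known + fp_known - a_ose, 1)
    p_ku = tp_known / max(tp_known + fp_known, 1)
    wi = p_k / max(p_ku, 1e-8) - 1

    # U-Recall: fraction of unknown instances recovered as unknown.
    recalled = sum((not p['pred_known']) and p['matched_gt'] == 'unknown'
                   for p in preds)
    u_recall = recalled / max(num_gt_unknown, 1)
    return wi, a_ose, u_recall
```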
189
+
190
+ # 5.2 Implementation details
191
+
192
+ We adapt Mask3D [29] for the task of open-world instance segmentation by adding an extra prediction output for the unknown class. During training, we assign an ignore label to the classes of the future and previous tasks; during evaluation, we keep the labels of the previous task and assign the unknown class label to the classes of the future task. For contrastive clustering, we use the indices obtained after matching the predictions with the targets using Hungarian matching to assign a label to the queries and store them in the query store $\mathcal{Q}_{\text{store}}$. The store is then averaged per class and used to update the prototypes every 10 iterations for the hinge loss computation. Finally, we use 40 exemplars per class on average for incremental learning. The classes from the current task are kept during class exemplar replay since we are using the same dataset for the three tasks.
193
+
194
+ # 5.3 Open-world results
195
+
196
+ Table 2 provides a comprehensive performance comparison between the Oracle, our implementation of [11] as 3D-OW-DETR, 3D-OWIS, and 3D-OWIS-PC - CT when excluding the Probability Correction (PC) and Confidence Thresholding (CT) components. Across all scenarios and tasks,
197
+
198
+ Table 3: Extensive ablation of the added components. We perform the ablation by adding the Probability Correction (PC) and Confidence Thresholding (CT) components to 3D-OWIS-PC-CT, and compare performance in terms of mAP, U-Recall, WI, and A-OSE. Even though 3D-OWIS without PC and CT retrieves the unknown classes well, as reflected by the high U-Recall, it still performs poorly on the known classes, as indicated by the high WI and A-OSE. This negative impact on the known classes accumulates over the tasks and results in a further reduction in mAP. When adding CT, the performance on the known classes improves considerably and remains consistent throughout the incremental learning process. Probability Correction (PC) significantly improves the U-Recall in all cases. Even though the latter shows lower performance in terms of WI and A-OSE, the overall mAP slightly improves or remains higher by a large margin compared to 3D-OWIS-PC-CT. This shows that adding PC and CT gives the best compromise in performance on both known and unknown classes.
199
+
200
+ <table><tr><td colspan="2">Task IDs (→)</td><td colspan="6">Task 1</td><td colspan="6">Task 2</td><td colspan="3">Task 3</td></tr><tr><td rowspan="2">w/ Finetuning</td><td rowspan="2">CT PC</td><td rowspan="2">WI(↓)</td><td rowspan="2">A-OSE(↓)</td><td rowspan="2">U-Recall(↑)</td><td colspan="3">mAP(↑)</td><td rowspan="2">WI(↓)</td><td rowspan="2">A-OSE(↓)</td><td rowspan="2">U-Recall(↑)</td><td colspan="3">mAP(↑)</td><td colspan="3">mAP(↑)</td></tr><tr><td>Current known</td><td>All</td><td></td><td>Previously known</td><td>Current known</td><td>All</td><td>Previously known</td><td>Current known</td><td>All</td></tr><tr><td colspan="17">Split A</td></tr><tr><td>×</td><td>× ×</td><td>1.589</td><td>707</td><td>30.72</td><td>37.50</td><td>37.00</td><td>0.870</td><td>321</td><td>19.42</td><td>00.00</td><td>16.74</td><td>08.40</td><td>00.00</td><td>09.30</td><td>02.80</td><td></td></tr><tr><td>×</td><td>✓ ×</td><td>0.237</td><td>443</td><td>30.00</td><td>40.30</td><td>39.70</td><td>0.306</td><td>129</td><td>14.96</td><td>00.00</td><td>21.00</td><td>10.50</td><td>00.00</td><td>17.45</td><td>05.20</td><td></td></tr><tr><td>✓</td><td>× ×</td><td>1.589</td><td>707</td><td>30.72</td><td>37.50</td><td>37.00</td><td>0.000</td><td>4</td><td>04.75</td><td>11.00</td><td>17.30</td><td>14.10</td><td>21.40</td><td>08.00</td><td>17.50</td><td></td></tr><tr><td>✓</td><td>✓ ×</td><td>0.237</td><td>443</td><td>30.00</td><td>40.30</td><td>39.70</td><td>0.004</td><td>102</td><td>23.62</td><td>29.22</td><td>15.80</td><td>22.30</td><td>19.70</td><td>15.70</td><td>18.50</td><td></td></tr><tr><td>✓</td><td>✓</td><td>0.398</td><td>607</td><td>34.75</td><td>40.2</td><td>39.70</td><td>0.007</td><td>126</td><td>27.03</td><td>29.40</td><td>16.40</td><td>22.70</td><td colspan="3">No unknown labels for evaluation</td><td></td></tr><tr><td colspan="17">Split B</td></tr><tr><td>×</td><td>× ×</td><td>3.133</td><td>1895</td><td>21.67</td><td>18.94</td><td>18.70</td><td>1.82</td><td>829</td><td>17.20</td><td>00.00</td><td>15.40</td><td>06.60</td><td>00.00</td><td>20.20</td><td>07.50</td><td></td></tr><tr><td>×</td><td>✓ ×</td><td>2.147</td><td>21.70</td><td>21.70</td><td>23.80</td><td>23.50</td><td>1.563</td><td>375</td><td>13.08</td><td>00.00</td><td>18.30</td><td>07.90</td><td>00.00</td><td>25.40</td><td>09.40</td><td></td></tr><tr><td>✓</td><td>× ×</td><td>3.219</td><td>1905</td><td>21.70</td><td>18.94</td><td>18.70</td><td>3.169</td><td>1081</td><td>26.63</td><td>18.00</td><td>16.40</td><td>17.20</td><td>17.30</td><td>20.10</td><td>18.30</td><td></td></tr><tr><td>✓</td><td>✓ ×</td><td>2.147</td><td>1397</td><td>21.70</td><td>23.80</td><td>23.50</td><td>0.466</td><td>413</td><td>20.90</td><td>18.60</td><td>16.90</td><td>17.70</td><td>18.50</td><td>24.20</td><td>20.60</td><td></td></tr><tr><td>✓</td><td>✓</td><td>3.684</td><td>1780</td><td>24.79</td><td>23.6</td><td>23.30</td><td>0.755</td><td>581</td><td>24.21</td><td>18.70</td><td>17.30</td><td>17.90</td><td colspan="3">No unknown labels for evaluation</td><td></td></tr><tr><td colspan="17">Split C</td></tr><tr><td>×</td><td>× ×</td><td>2.901</td><td>1752</td><td>15.66</td><td>15.00</td><td>14.80</td><td>6.294</td><td>857</td><td>11.05</td><td>0.00</td><td>15.70</td><td>07.50</td><td>00.00</td><td>14.60</td><td>04.70</td><td></td></tr><tr><td>×</td><td>✓ 
×</td><td>0.227</td><td>828</td><td>11.44</td><td>18.70</td><td>18.40</td><td>1.361</td><td>365</td><td>10.16</td><td>00.00</td><td>19.50</td><td>09.40</td><td>00.00</td><td>19.10</td><td>6.20</td><td></td></tr><tr><td>✓</td><td>× ×</td><td>2.901</td><td>1752</td><td>15.66</td><td>15.00</td><td>14.80</td><td>1.799</td><td>666</td><td>15.99</td><td>13.50</td><td>19.70</td><td>16.40</td><td>17.50</td><td>17.70</td><td>17.50</td><td></td></tr><tr><td>✓</td><td>✓ ×</td><td>0.227</td><td>828</td><td>11.44</td><td>18.70</td><td>18.40</td><td>0.088</td><td>208</td><td>12.63</td><td>14.50</td><td>22.10</td><td>18.00</td><td>17.80</td><td>17.70</td><td>17.80</td><td></td></tr><tr><td>✓</td><td>✓</td><td>0.419</td><td>1294</td><td>14.34</td><td>18</td><td>17.60</td><td>0.152</td><td>303</td><td>15.80</td><td>13.90</td><td>22.20</td><td>17.80</td><td colspan="3">No unknown labels for evaluation</td><td></td></tr></table>
201
+
202
+ 3D-OWIS-PC - CT consistently exhibits inferior performance in terms of mAP. Additionally, it demonstrates considerably lower U-Recall in splits A and B, and slightly higher U-Recall in split C.
203
+
204
+ Of particular note, our 3D-OWIS demonstrates remarkable proficiency in preserving knowledge of the previous classes after fine-tuning. This proficiency is attributed to better pseudo-label selection for the unknown classes. 3D-OWIS outperforms 3D-OWIS-PC - CT in most cases while minimizing the impact of the unknown classes on the known classes, as evidenced by lower WI and A-OSE scores and higher mAP.
205
+
206
+ Table 4 presents a comparison between our model, 3D-OWIS, and our implementations of two methods, GGN [32] and OLN [19]. For OLN, we adapt Mask3D and train it with the mask loss only. In the case of GGN, we train a Minkowski backbone to predict affinity maps and use Connected Components to generate class-agnostic proposals. These results underscore the effectiveness and potential of our approach in addressing the three proposed open-world challenges.
207
+
208
+ # 5.4 Incremental learning results
209
+
210
+ Our model's performance in incremental learning is evaluated based on its ability to preserve knowledge from previous classes. With the utilization of exemplar replay, the 3D-OWIS model demonstrates a significant improvement in mAP on the previous classes. Table 2 presents the results, indicating that our model consistently outperforms the others in terms of mean Average Precision (mAP) for the previous classes in all cases.
211
+
212
+ # 5.5 Discussion and analysis
213
+
214
+ Ablation study. We show in Table 3 that the 3D-OWIS-PC - CT model performs poorly on the known classes because of the high number of low-quality pseudo-labels generated by the auto-labeler, which is also reflected in the high Wilderness Impact and Absolute Open-Set Error. The U-Recall drops considerably when fine-tuning 3D-OWIS-PC - CT, while the WI and A-OSE either decrease or increase with the mAP on the unknown class. Our model, on the other hand, limits the training to the best pseudo-labels only, which maintains good performance on the known classes in all cases, before and after fine-tuning, and also achieves results on the unknown class comparable to 3D-OWIS-PC - CT in most cases.
215
+
216
+ Table 4: Open-world instance segmentation comparison. We provide the results of our implementations of two 2D open-world instance segmentation methods. We show that our model performs better than the others across all metrics.
217
+
218
+ <table><tr><td colspan="6">Split A</td></tr><tr><td>Task ID</td><td colspan="5">Task 1</td></tr><tr><td rowspan="2"></td><td rowspan="2">WI(↓)</td><td rowspan="2">A-OSE(↓)</td><td rowspan="2">U-Recall(↑)</td><td colspan="2">mAP(↑)</td></tr><tr><td>Current known</td><td>All</td></tr><tr><td>3D-GGN [32]</td><td>15.68</td><td>1452</td><td>21.33</td><td>20.51</td><td>20.12</td></tr><tr><td>3D-OLN [19]</td><td>-</td><td>-</td><td>02.45</td><td>-</td><td>-</td></tr><tr><td>Ours: 3D-OWIS</td><td>0.397</td><td>607</td><td>34.75</td><td>40.2</td><td>39.7</td></tr></table>
219
+
220
+ Adding the probability correction module helps in improving the U-Recall while keeping the mAP of the known classes well above that of 3D-OWIS-PC - CT. However, it results in an increase in WI and A-OSE because of the increase in false positives among the known classes.
221
+
222
+ tSNE analysis. The tSNE plot shown in Fig. 6 illustrates the below-par performance of 3D-OWIS-PC - CT in clustering the unknown classes, where most queries still maintain features representative of the known classes. This behavior is a result of the weak supervision of the unknown class, which shows the need for correcting the predictions and explains the improvement in U-Recall when applying the probability correction, with negligible deterioration in the known-class mAP in most cases.
223
+
224
+ Qualitative analysis. Fig. 5 shows that 3D-OWIS is able to correctly identify background and unknown objects as unknown. Also note the second scene, where predictions are corrected from known to unknown without affecting the predictions of the known classes.
225
+
226
+ ![](images/24eb60ca403c95b6e8b5a7f1f4d270696dcd384b5a61793b2eb4ddeffd03bcd2.jpg)
227
+ Figure 6: tSNE visualization of the queries for known & unknown classes
228
+
229
+ # 6 Limitations
230
+
231
+ Confidence Thresholding (CT) enhances the performance of the model on known classes; nonetheless, it diminishes the model's capacity to segment unknown classes, mainly due to its reliance on a smaller number of pseudo-labels during training. Additionally, the effectiveness of Probability Correction (PC) is contingent upon the inherent characteristics of the clusters within the known classes. In scenarios characterized by data imbalance, the performance of probability correction may deteriorate when applied to the undersampled classes.
232
+
233
+ # 7 Conclusion
234
+
235
+ In this paper, we address the challenge of 3D instance segmentation in open-world scenarios, which is a novel problem formulation. We propose an innovative approach that incorporates an unknown object identifier to detect objects not present in the training set. To facilitate evaluation and experimentation, we present three dataset splits of ScanNet200 based on different criteria for selecting unknown objects. Our experimental results demonstrate that our proposed unknown object identifier significantly improves the detection of unknown objects across various tasks and dataset splits. This work contributes to advancing the localization and segmentation of 3D objects in real-world environments and paves the way for more robust and adaptable vision systems.
236
+
237
+ Acknowledgement The computational resources were provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement No. 2022-06725, and by the Berzelius resource, provided by Knut and Alice Wallenberg Foundation at the National Supercomputer Center.
238
+
239
+ # References
240
+
241
+ [1] A. Bansal, K. Sikka, G. Sharma, R. Chellappa, and A. Divakaran. Zero-shot object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 384-400, 2018. 7
242
+ [2] A. Bendale and T. Boult. Towards open world recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1893-1902, 2015. 3
243
+ [3] J. Cen, P. Yun, S. Zhang, J. Cai, D. Luan, M. Tang, M. Liu, and M. Yu Wang. Open-world semantic segmentation for lidar point clouds. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXVIII, pages 318-334. Springer, 2022. 3
244
+ [4] S. Chen, J. Fang, Q. Zhang, W. Liu, and X. Wang. Hierarchical aggregation for 3d instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15467-15476, 2021. 2
245
+ [5] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1290–1299, 2022. 2
246
+ [6] B. Cheng, A. Schwing, and A. Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864-17875, 2021. 2
247
+ [7] J. Chibane, F. Engelmann, T. Anh Tran, and G. Pons-Moll. Box2mask: Weakly supervised 3d semantic instance segmentation using bounding boxes. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXI, pages 681-699. Springer, 2022. 2
248
+ [8] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828-5839, 2017. 5
249
+ [9] A. Dhamija, M. Gunther, J. Ventura, and T. Boult. The overlooked elephant of object detection: Open set. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1021-1030, 2020. 7
250
+ [10] F. Engelmann, M. Bokeloh, A. Fathi, B. Leibe, and M. Nießner. 3d-mpa: Multi-proposal aggregation for 3d semantic instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9031–9040, 2020. 2
251
+ [11] A. Gupta, S. Narayan, K. Joseph, S. Khan, F. S. Khan, and M. Shah. Ow-detr: Open-world detection transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9235–9244, 2022. 1, 3, 7, 8
252
+ [12] H. Ha and S. Song. Semantic abstraction: Open-world 3d scene understanding from 2d vision-language models. In 6th Annual Conference on Robot Learning, 2022. 3
253
+ [13] L. Han, T. Zheng, L. Xu, and L. Fang. Occuseg: Occupancy-aware 3d instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2940-2949, 2020. 2
254
+ [14] T. He, C. Shen, and A. Van Den Hengel. Dyco3d: Robust instance segmentation of 3d point clouds through dynamic convolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 354-363, 2021. 2
255
+ [15] J. Hou, A. Dai, and M. Nießner. 3d-sis: 3d semantic instance segmentation of rgb-d scans. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4421-4430, 2019. 2
256
+
257
+ [16] J. Hou, B. Graham, M. Nießner, and S. Xie. Exploring data-efficient 3d scene understanding with contrastive scene contexts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15587-15597, 2021. 2
258
+ [17] L. Jiang, H. Zhao, S. Shi, S. Liu, C.-W. Fu, and J. Jia. Pointgroup: Dual-set point grouping for 3d instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and Pattern recognition, pages 4867-4876, 2020. 2
259
+ [18] K. Joseph, S. Khan, F. S. Khan, and V. N. Balasubramanian. Towards open world object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5830-5840, 2021. 1, 3, 5, 7
260
+ [19] D. Kim, T.-Y. Lin, A. Angelova, I. S. Kweon, and W. Kuo. Learning open-world object proposals without learning to classify. IEEE Robotics and Automation Letters, 7(2):5453-5460, 2022. 9, 10
261
+ [20] J. Lahoud, B. Ghanem, M. Pollefeys, and M. R. Oswald. 3d instance segmentation via multi-task metric learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9256-9266, 2019. 2
262
+ [21] Z. Liang, Z. Li, S. Xu, M. Tan, and K. Jia. Instance segmentation in 3d scenes using semantic superpoint tree networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2783-2792, 2021. 2
263
+ [22] S.-H. Liu, S.-Y. Yu, S.-C. Wu, H.-T. Chen, and T.-L. Liu. Learning gaussian instance segmentation in point clouds. arXiv preprint arXiv:2007.09860, 2020. 2
264
+ [23] Z. Liu, Z. Miao, X. Zhan, J. Wang, B. Gong, and S. X. Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2537-2546, 2019. 6
265
+ [24] C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationship detection with language priors. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, pages 852–869. Springer, 2016. 7
266
+ [25] S. Ma, Y. Wang, J. Fan, Y. Wei, T. H. Li, H. Liu, and F. Lv. Cat: Localization and identification cascade detection transformer for open-world object detection. arXiv preprint arXiv:2301.01970, 2023. 3
267
+ [26] D. Miller, L. Nicholson, F. Dayoub, and N. Sunderhauf. Dropout sampling for robust object detection in open-set conditions. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3243-3249. IEEE, 2018. 7
268
+ [27] D. Rozenberszki, O. Litany, and A. Dai. Language-grounded indoor 3d semantic segmentation in the wild. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII, pages 125-141. Springer, 2022. 5
269
+ [28] K. Saito, P. Hu, T. Darrell, and K. Saenko. Learning to detect every thing in an open world. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIV, pages 268-284. Springer, 2022. 1
270
+ [29] J. Schult, F. Engelmann, A. Hermans, O. Litany, S. Tang, and B. Leibe. Mask3D: Mask Transformer for 3D Semantic Instance Segmentation. In International Conference on Robotics and Automation (ICRA), 2023. 2, 3, 8
271
+ [30] J. Sun, C. Qing, J. Tan, and X. Xu. Superpoint transformer for 3d scene instance segmentation. arXiv preprint arXiv:2211.15766, 2022. 2
272
+ [31] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 2
273
+ [32] W. Wang, M. Feiszli, H. Wang, J. Malik, and D. Tran. Open-world instance segmentation: Exploiting pseudo ground truth from learned pairwise affinity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4422-4432, 2022. 9, 10
274
+
275
+ [33] W. Wang, M. Feiszli, H. Wang, and D. Tran. Unidentified video objects: A benchmark for dense, open-world segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10776-10785, 2021. 1
276
+ [34] W. Wang, R. Yu, Q. Huang, and U. Neumann. Sgpn: Similarity group proposal network for 3d point cloud instance segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2569-2578, 2018. 2
277
+ [35] S. Xie, J. Gu, D. Guo, C. R. Qi, L. Guibas, and O. Litany. Pointcontrast: Unsupervised pretraining for 3d point cloud understanding. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part III 16, pages 574-591. Springer, 2020. 2
278
+ [36] B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, and N. Trigoni. Learning object bounding boxes for 3d instance segmentation on point clouds. Advances in neural information processing systems, 32, 2019. 2
279
+ [37] L. Yi, W. Zhao, H. Wang, M. Sung, and L. J. Guibas. Gspn: Generative shape proposal network for 3d instance segmentation in point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3947-3956, 2019. 2
280
+ [38] B. Zhang and P. Wonka. Point cloud instance segmentation using probabilistic embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8883-8892, 2021. 2
281
+ [39] J. Zhang, R. Dong, and K. Ma. Clip-for3d: Learning free open-world 3d scene representations from 2d dense clip. arXiv preprint arXiv:2303.04748, 2023. 3
282
+ [40] X. Zhu, R. Zhang, B. He, Z. Zeng, S. Zhang, and P. Gao. Pointclip v2: Adapting clip for powerful 3d open-world learning. arXiv preprint arXiv:2211.11682, 2022. 3
283
+ [41] O. Zohar, K.-C. Wang, and S. Yeung. Prob: Probabilistic objectness for open world object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11444-11453, 2023. 3, 7
284
+
285
+ # Appendix
286
+
287
+ # A Scalability of 3D-OWIS
288
+
289
+ We show in Table 5 that 3D-OWIS can accommodate a large number of classes without a major increase in model size.
290
+
291
+ Table 5: Demonstrating the scalability of 3D-OWIS with respect to the maximum number of classes it can learn.
292
+
293
+ <table><tr><td># of classes</td><td>200</td><td>1000</td><td>5000</td><td>10000</td><td>50000</td><td>100000</td></tr><tr><td>Size of 3D-OWIS</td><td>39.7M</td><td>39.8M</td><td>40.7M</td><td>41.9M</td><td>50.9M</td><td>62.2M</td></tr></table>
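The growth in Table 5 is consistent with a classification head whose parameter count scales linearly with the number of classes. The back-of-the-envelope check below is our own reading of the table, not a statement from the paper:

```python
# Parameter counts from Table 5 (in millions of parameters).
sizes = {200: 39.7, 1000: 39.8, 5000: 40.7, 10000: 41.9,
         50000: 50.9, 100000: 62.2}

# Implied per-class parameter cost between the two extremes.
per_class = (sizes[100000] - sizes[200]) * 1e6 / (100000 - 200)
print(f"~{per_class:.0f} parameters per additional class")
# -> roughly 225 parameters per class, i.e. on the order of one
#    classifier weight vector per class, while the backbone stays fixed.
```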
294
+
295
+ # B Additional details on Split B
296
+
297
+ We utilize the 20 scene types present in the ScanNet200 dataset to distribute the 200 classes over the three tasks. Initially, we establish a notion of similarity between two scene types by assessing the extent of their shared classes. This similarity is quantified through the intersection over the union $(IoU)$ metric, which measures the ratio of common classes to the total count of unique classes across both scenes. By employing this metric, we identify scene types that exhibit a substantial $IoU$ , indicating a higher degree of similarity. The similarity matrix, depicted in Fig. 7, showcases the relationships between the 20 scene types within the ScanNet200 dataset.
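The scene-type similarity described above can be computed in a few lines. The sketch below assumes a mapping from scene type to the set of class labels it contains (the variable names are ours):

```python
import numpy as np

def scene_type_similarity(scene_classes):
    """scene_classes: dict mapping scene type -> set of class labels.
    Returns (names, S) where S[i, j] is the IoU of shared classes."""
    names = sorted(scene_classes)
    S = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            inter = len(scene_classes[a] & scene_classes[b])
            union = len(scene_classes[a] | scene_classes[b])
            S[i, j] = inter / union if union else 0.0
    return names, S
```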
298
+
299
+ Subsequently, we employed three criteria to group the classes: $(i)$ the likelihood of encountering them first when accessing an indoor area, $(ii)$ their affiliation with similar scene types, and $(iii)$ the proximity in the number of known classes across tasks. By taking these factors into consideration, we arrived at the split of scenes presented in Table 6.
300
+
301
+ ![](images/fb3e69b22fec04cac8af818823617525804d7845e239e81f60caa0a2d6e78899.jpg)
302
+ Figure 7: Similarity matrix between the 20 scene types in ScanNet200 dataset. We show the ratio of common classes to the total count of unique classes between two scene types.
303
+
304
+ Table 6: Frequently occurring scene types when training during the three tasks in Split B. Scene types are grouped into tasks based on three criteria: (i) the likelihood of encountering the classes within the scene types when entering an indoor area, (ii) similarity of the scene types containing the classes, and (iii) consistency in the overall number of classes within the scene types across all tasks. This grouping ensures a cohesive organization of scene types for effective evaluation of 3D instance segmentation models integrated with tasks such as robot navigation within indoor environments.
305
+
306
+ <table><tr><td colspan="6">Split B</td></tr><tr><td>Task 1</td><td>Task 2</td><td colspan="4">Task 3</td></tr><tr><td>Bedroom / Hotel</td><td>Kitchen</td><td>ComputerCluster</td><td>Mail Room</td><td>Game room</td><td>Office</td></tr><tr><td>Dining Room</td><td>Bathroom</td><td>Misc.</td><td>Hallway</td><td>Apartment</td><td></td></tr><tr><td>Lounge</td><td>Closet</td><td>Gym</td><td>Classroom</td><td>Lobby</td><td></td></tr><tr><td></td><td>Garage</td><td>Library</td><td>Conference Room</td><td>Stairs</td><td></td></tr></table>
307
+
308
+ # C Additional details on the experimentation
309
+
310
+ Training: We train the model on the entire ScanNet200 dataset for all tasks. In Task 1, objects belonging to the classes of Task 2 and Task 3 are masked, excluding them from the learning process. Moving to Task 2, we start from the last saved checkpoint of the model from Task 1 and mask the objects whose labels correspond to the classes of Task 1 and Task 3. This allows the model to focus solely on learning and distinguishing the objects associated with the current task. Finally, Task 3 builds upon the progress made in Task 2.
311
+
312
+ We load the latest checkpoint of the model from Task 2 and incorporate exemplar replay. Similar to Task 2, the objects with labels belonging to the known classes of Task 1 and Task 2 are masked during training. This step further refines the model's understanding and discrimination abilities for the objects relevant to the current task.
313
+
314
+ Evaluation: To conduct the evaluation during a task, we assign the "unknown" label to the known classes from all the future tasks.
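The label handling described above can be summarized by a small remapping function. This is a sketch with assumed label conventions; the `IGNORE` and `UNKNOWN` ids are placeholders, not values from the released code.

```python
IGNORE, UNKNOWN = -1, 0  # placeholder label ids

def remap_label(label, previous_classes, future_classes, split='train'):
    """Map a ground-truth class label according to the current task.

    Training:   classes of previous and future tasks receive the ignore label.
    Evaluation: previous-task labels are kept; future-task classes become unknown.
    """
    if split == 'train':
        if label in previous_classes or label in future_classes:
            return IGNORE
        return label
    # evaluation
    if label in future_classes:
        return UNKNOWN
    return label
```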
315
+
316
+ # D Additional qualitative results
317
+
318
+ # D.1 Unknown objects identification
319
+
320
+ The qualitative results depicted in Fig. 10, 12, 13, and 11 highlight the superior performance of our contribution in retrieving unknown objects. Across the majority of scenes, our model consistently corrects the mispredicted unknown classes while preserving the accuracy of known objects, thus demonstrating its robustness and effectiveness.
321
+
322
+ # D.2 Learning novel classes
323
+
324
+ Fig. 8 and Fig. 9 illustrate the sequential process of learning novel classes after identifying unknown objects from the previous task. In Fig. 8, we demonstrate the effectiveness of our method in successfully retrieving unknown classes in all tasks. Additionally, in Fig. 9, we highlight the potential of exemplar replay in retaining knowledge of the old classes after learning the novel classes in Task 2 and Task 3.
325
+
326
+ ![](images/474df57eccd8d68a42ff10fbcd10d26a31eb03949ef79726792e6366d27bbb2f.jpg)
327
+ Figure 8: Illustration of the process of unknown identification and learning novel classes. We use orange circles to highlight the differences between 3D-OWIS and 3D-OWIS-PC-CT. The objects depicted in green represent the known classes, while those in blue represent the unknown objects. The gray objects correspond to the background. The qualitative results demonstrate that 3D-OWIS outperforms 3D-OWIS-PC-CT in retrieving unknown objects. Notably, 3D-OWIS correctly identifies the background objects as unknown, whereas 3D-OWIS-PC-CT misclassifies them as known objects.
328
+
329
+ Table 7: Proposed distribution of ScanNet200 classes across tasks for each split. We show the classes that are known when training the model during a specific task for the three splits.
330
+
331
+ <table><tr><td colspan="4">Split A</td><td colspan="4">Split B</td><td colspan="3">Split C</td></tr><tr><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 2</td><td>Task 3</td></tr><tr><td>tv stand</td><td>cushion</td><td>paper</td><td></td><td>alarm clock</td><td>guitar</td><td>bar</td><td>basket</td><td>ironing board</td><td>mattress</td><td></td></tr><tr><td>curtain</td><td>end table</td><td>plate</td><td></td><td>backpack</td><td>paper towel roll</td><td>basket</td><td>trash can</td><td>diviner</td><td>toaster</td><td></td></tr><tr><td>blinds</td><td>dining table</td><td>soap dispenser</td><td></td><td>bag</td><td>book</td><td>bathroom cabinet</td><td>stair rail</td><td>oven</td><td>stool</td><td></td></tr><tr><td>shower curtain</td><td>keyboard</td><td>bucket</td><td></td><td>bed</td><td>bookshelf</td><td>bathroom counter</td><td>toaster oven</td><td>dish rack</td><td>plant</td><td></td></tr><tr><td>bookshelf</td><td>bag</td><td>clock</td><td></td><td>blanket</td><td>cart</td><td>bathroom stall</td><td>laundry hamper</td><td>shower door</td><td>folded chair</td><td></td></tr><tr><td>tv</td><td>toilet paper</td><td>guitar</td><td></td><td>case of water bottles</td><td>furniture</td><td>bathroom stall door</td><td>bullet</td><td>mini fridge</td><td>microwave</td><td></td></tr><tr><td>kitchen cabinet</td><td>printer</td><td>toilet paper holder</td><td></td><td>ceiling</td><td>blackboard</td><td>bathroom vanity</td><td>dining table</td><td>bicycle</td><td>cushion</td><td></td></tr><tr><td>pillow</td><td>blanket</td><td>speaker</td><td></td><td>closet</td><td>projector</td><td>bathtub</td><td>stuffed animal</td><td>laptop</td><td>bench</td><td></td></tr><tr><td>lamp</td><td>microwave</td><td>cup</td><td></td><td>closet door</td><td>seat</td><td>bottle</td><td>bathroom vanity</td><td>armchair</td><td>soap dispenser</td><td></td></tr><tr><td>dresser</td><td>shoe</td><td>paper towel roll</td><td></td><td>closet wall</td><td>folded chair</td><td>broom</td><td>cock</td><td>couch</td><td>storage organizer</td><td></td></tr><tr><td>monitor</td><td>computer tower</td><td>bar</td><td></td><td>clothes</td><td>office chair</td><td>clothes dryer</td><td>ceiling</td><td>coffee kettle</td><td>shower curtain</td><td></td></tr><tr><td>object</td><td>bottle</td><td>toaster</td><td></td><td>coat rack</td><td>projector screen</td><td>cushion</td><td>potted plant</td><td>counter</td><td>cart</td><td></td></tr><tr><td>ceiling</td><td>bin</td><td>ironing board</td><td></td><td>container</td><td>whiteboard</td><td>doorframe</td><td>luggage</td><td>structure</td><td>kitchen counter</td><td></td></tr><tr><td>board</td><td>otoman</td><td>soap dish</td><td></td><td>curtain</td><td>bin</td><td>fire alarm</td><td>clutter wall</td><td>pipe</td><td>towel</td><td></td></tr><tr><td>stove</td><td>bench</td><td>toilet paper dispenser</td><td></td><td>door</td><td>bucket</td><td>hair dryer</td><td>desk</td><td>bow</td><td>blackboard</td><td></td></tr><tr><td>closet wall</td><td>basket</td><td>fire extinguisher</td><td></td><td>dresser</td><td>button</td><td>handicap bar</td><td>dugger</td><td>shower curtain rod</td><td>TV</td><td></td></tr><tr><td>coach</td><td>fan</td><td>ball</td><td></td><td>dumbbell</td><td>coper</td><td>ledge</td><td>objects</td><td>sofa chair</td><td>printer</td><td></td></tr><tr><td>office chair</td><td>laptop</td><td>hat</td><td></td><td>fan</td><td>machine</td><td>light switch</td><td>rail</td><td>clothes 
dryer</td><td>stand</td><td></td></tr><tr><td>kitchen counter</td><td>person</td><td>hat</td><td></td><td>guitar case</td><td>mailbox</td><td>mat</td><td>tissue box</td><td>coffee table</td><td>rack</td><td></td></tr><tr><td>shower</td><td>paper towel dispenser</td><td>shower curtain rod</td><td></td><td>hat</td><td>paper cutter</td><td>mirror</td><td>plate</td><td>stairs</td><td>bathroom counter</td><td></td></tr><tr><td>closet</td><td>paper towel dispenser</td><td>paper cutter</td><td></td><td>ironing board</td><td>primer</td><td>paper towel dispenser</td><td>keyboard</td><td>toilet seat cover dispenser</td><td>closet rod</td><td></td></tr><tr><td>dungeon</td><td>oven</td><td>tray</td><td></td><td>lamp</td><td>column</td><td>plunger</td><td>hat</td><td>machine</td><td>bottle</td><td></td></tr><tr><td>doorframe</td><td>rack</td><td>toaster oven</td><td></td><td>laptop</td><td>storage container</td><td>scale</td><td>copier</td><td>paper bag</td><td>range hood</td><td></td></tr><tr><td>sofa chair</td><td>piano</td><td>mouse</td><td></td><td>laundry basket</td><td>blinds</td><td>shower</td><td>sheet</td><td>book</td><td>purse</td><td></td></tr><tr><td>mailbox</td><td>suitcase</td><td>toilet seat cover dispenser</td><td></td><td>laundry hamper</td><td>structure</td><td>shower curtain</td><td>bed</td><td>blinds</td><td>candle</td><td></td></tr><tr><td>nightstand</td><td>rail</td><td>storage container</td><td></td><td>luggage</td><td>water bottle</td><td>shower curtain rod</td><td>paper towel dispenser</td><td>monitor</td><td>person</td><td></td></tr><tr><td>washing machine</td><td>container</td><td>scale</td><td></td><td>mattress</td><td>ball</td><td>shower door</td><td>fire extinguisher</td><td>shower wall</td><td>coffee maker</td><td></td></tr><tr><td>picture</td><td>telephone</td><td>tissue box</td><td></td><td>mini fridge</td><td>board</td><td>shower floor</td><td>paper towel roll</td><td>curtain</td><td>light switch</td><td></td></tr><tr><td>book</td><td>stand</td><td>light switch</td><td></td><td>nightstand</td><td>box</td><td>shower head</td><td>backpack</td><td>closet</td><td>storage container</td><td></td></tr><tr><td>sink</td><td>light</td><td>crate</td><td></td><td>object</td><td>cabinet</td><td>shower wall</td><td>water bottle</td><td>telephone</td><td>bathroom stall door</td><td></td></tr><tr><td>recycling bin</td><td>laundry basket</td><td>power outlet</td><td></td><td>pillow</td><td>cd case</td><td>sink</td><td>stove</td><td>bean bag</td><td>kitchen floor</td><td></td></tr><tr><td>table</td><td>pipe</td><td>sign</td><td></td><td>poster</td><td>ceiling light</td><td>soap dish</td><td>laundry basket</td><td>bucket</td><td>refrigerator</td><td></td></tr><tr><td>backpack</td><td>seat</td><td>projector</td><td></td><td>power outlet</td><td>clock</td><td>soap dispenser</td><td>alarm clock</td><td>sign</td><td>refrigerator</td><td></td></tr><tr><td>toilet</td><td>bicycle</td><td>plunger</td><td></td><td>purse</td><td>computer tower</td><td>toilet</td><td>headphones</td><td>mirror</td><td>tube</td><td></td></tr><tr><td>copier</td><td>ladder</td><td>stuffed animal</td><td></td><td>rack</td><td>cup</td><td>toilet paper</td><td>piano</td><td>clock</td><td>toilet paper holder</td><td></td></tr><tr><td>counter</td><td>jacket</td><td>headphones</td><td></td><td>recycling bin</td><td>desk</td><td>toilet paper dispenser</td><td>guitar</td><td>nightstand</td><td>ceiling light</td><td></td></tr><tr><td>stool</td><td>storage bin</td><td>broom</td><td></td><td>shelf</td><td>divider</td><td>toilet 
paper holder</td><td>bag</td><td>tv stand</td><td>picture</td><td></td></tr><tr><td>refrigerator</td><td>coffee maker</td><td>guitar case</td><td></td><td>shoe</td><td>file cabinet</td><td>toilet seat cover dispenser</td><td>door</td><td>handicap bar</td><td>end table</td><td></td></tr><tr><td>window</td><td>dishwasher</td><td>dustpan</td><td></td><td>sign</td><td>headphones</td><td>towel</td><td>speaker</td><td>poster</td><td>closet door</td><td></td></tr><tr><td>file cabinet</td><td>machine</td><td>hair dryer</td><td></td><td>storage bin</td><td>keyboard</td><td>trash bin</td><td>water cooler</td><td>blanket</td><td>file cabinet</td><td></td></tr><tr><td>chair</td><td>mat</td><td>water bottle</td><td></td><td>power outlet</td><td>monitor</td><td>washing machine</td><td>cup</td><td>cup</td><td>crate</td><td></td></tr><tr><td>plant</td><td>windowsill</td><td>handicap bar</td><td></td><td>tissue box</td><td>mouse</td><td>dustpan</td><td>water pitcher</td><td>recycling bin</td><td>toilet paper dispenser</td><td></td></tr><tr><td>coffee table</td><td>bullet board</td><td>purse</td><td></td><td>tissue box</td><td>paper</td><td>laundry detergent</td><td>dumbbell</td><td>lamp</td><td>pillow</td><td></td></tr><tr><td>stairs</td><td>fireplace</td><td>vent</td><td></td><td>wardrobe</td><td>person</td><td>stuffed animal</td><td>furniture</td><td>scale</td><td>mat</td><td></td></tr><tr><td>armchair</td><td>mini fridge</td><td>shower floor</td><td></td><td>decoration</td><td>power strip</td><td>stuffed animal</td><td>door</td><td>handicap bar</td><td>end table</td><td></td></tr><tr><td>cabinet</td><td>water cooler</td><td>water heater</td><td></td><td>armchair</td><td>radiator</td><td>bowl</td><td>door</td><td>handicap bar</td><td>end table</td><td></td></tr><tr><td>bathroom vanity</td><td>shower door</td><td>bowl</td><td></td><td>storage bin</td><td>keyboard</td><td>cell phone</td><td>toilet</td><td>handicap bar</td><td>end table</td><td></td></tr><tr><td>chair</td><td>patel</td><td>paper bag</td><td></td><td>candle</td><td>telephone</td><td>coffee kettle</td><td>plunger</td><td>otoman</td><td>container</td><td></td></tr><tr><td>mirror</td><td>lidge</td><td>alarm clock</td><td></td><td>chair</td><td>tray</td><td>coffee maker</td><td>shower</td><td>paper</td><td>sleet</td><td></td></tr><tr><td>blackboard</td><td>furniture</td><td>music stand</td><td></td><td>chair</td><td>tube</td><td>counter</td><td>bar</td><td>powder strip</td><td>jacket</td><td></td></tr><tr><td>trash can</td><td>cart</td><td>laundry detergent</td><td></td><td>coffee table</td><td>window sill</td><td>dishwasher</td><td>fire extinguisher</td><td>fireplace</td><td>dresser</td><td></td></tr><tr><td>stair rail</td><td>correction</td><td>dumbbell</td><td></td><td>couch</td><td>pipe</td><td>dish rack</td><td>suitcase</td><td>doufframe</td><td>dustpan</td><td></td></tr><tr><td>box</td><td>closet door</td><td>tube</td><td></td><td>dining table</td><td>pipe</td><td>fire extinguisher</td><td>cabinet</td><td>toilet</td><td>table</td><td></td></tr><tr><td>toilet</td><td>vacuum cleaner</td><td>cd case</td><td></td><td>end table</td><td>stair rail</td><td>kitchen cabinet</td><td>board</td><td>toilet</td><td>projector</td><td></td></tr><tr><td>dinner</td><td>dishware</td><td>toilet screen</td><td></td><td>fireplace</td><td>stairs</td><td>kitchen counter</td><td>toilet</td><td>handy detergent</td><td>toilet</td><td></td></tr><tr><td>clothes</td><td>range hood</td><td>coffee 
kettle</td><td></td><td>cup</td><td>oven</td><td>oven</td><td>toilet</td><td>cleaning machine</td><td>toilet</td><td></td></tr><tr><td>whiteboard</td><td>projector screen</td><td>shower head</td><td></td><td>keyboard piano</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>bed</td><td>division</td><td>keyboard piano</td><td></td><td>light</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>bathroom counter</td><td>toilet counter</td><td>case of water bottles</td><td></td><td>music stand</td><td>plate</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>clothes</td><td>laundry hamper</td><td>coat rack</td><td></td><td>ottoman</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>wardrobe</td><td>toilet stall door</td><td>folded chair</td><td></td><td>piano</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>clothes dryer</td><td>ceiling light</td><td>fire alarm</td><td></td><td>picture</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>radiator</td><td>trash bin</td><td>power strip</td><td></td><td>pillar</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>shelf</td><td>trash bin</td><td>color card</td><td></td><td>potted plant</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>radiator</td><td>structure</td><td>poster</td><td></td><td>table</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>shelf</td><td>storage organizer</td><td>potted plant</td><td></td><td>vacuum cleaner</td><td></td><td></td><td></td><td></td><td></td><td></td></tr></table>
332
+
333
+ ![](images/56852bc2e499c5a531d15975391b6174aeeaac341de141d15e73ed7f0dd680a6.jpg)
334
+ Figure 9: Alleviating catastrophic forgetting during incremental learning. The capability of 3D-OWIS in retaining knowledge of the previously known classes after learning the new one is demonstrated across Task 2 and Task 3 for both scenes, where all objects of old known classes are still being predicted as known.
335
+
336
+ ![](images/38793c857c3d279ce2d1cbd4f86f694d593fb2a3dd1c1fae025465f54ea15066.jpg)
337
+ Figure 10: Qualitative results. The objects depicted in green represent the known classes, while the ones in blue represent the "unknown" class, and the gray objects represent the background. To emphasize the differences between 3D-OWIS and 3D-OWIS-PC-CT, we highlight them with orange circles.
338
+
339
+ ![](images/bdf04d60cb832b572d7aa34999a8120641b688de8d97b3653b77955237a6d3e3.jpg)
340
+ Ground Truth
341
+
342
+ ![](images/2759b3305ecd6c27cb55df3d3a2ad781a76be32235e7a991b1230b1f0e1db6ee.jpg)
343
+ 3D-OWIS-PC-CT
344
+
345
+ ![](images/3d2bf2ff94f763d09db4bc21a9ab3bf7c169f997a6d73d189c9d10a6463fb441.jpg)
346
+ 3D-OWIS
347
+
348
+ ![](images/eebb70cb9db0b9519b509499b7438c00e2f17105e0874e005d9175cc04ade200.jpg)
349
+
350
+ ![](images/a14f09233f9605d29d34fff06b3bdffd09fa1a8fb7f8429b8a8772bcfb905e25.jpg)
351
+
352
+ ![](images/d22e73ea59cb01ea2eef10bef3b0b6772213e17747987b231b1da001b47fac35.jpg)
353
+
354
+ ![](images/f2137822f1ea72a197ce8570411130de2079d02e6be892f826b69c46415c5afd.jpg)
355
+
356
+ ![](images/c17ce6ccba7863c563f8cf98f0a439c7f14e148162ea1151862d6de5c409acd2.jpg)
357
+
358
+ ![](images/56e6d46415c4f6ef6a00bd0f343acda8c869244cba4db7a797dddd26e7f5ab38.jpg)
359
+
360
+ ![](images/01c5a871e7789244adb062737bf34e1368c6b176c3ab52bf64854254185553ec.jpg)
361
+
362
+ ![](images/7aed76e909f4cae28e13e1f9e4260008de5c37ea4eff7e4c9f569f583488a6db.jpg)
363
+
364
+ ![](images/163a7a769078238a6d1dbc79f58257e5646316d1954bd4190c1f96a687e38fc4.jpg)
365
+
366
+ ![](images/0cb6c69ede3f8adec9b24314fba8300a2ebcc1d40b63b9e98460ae596db33c1f.jpg)
367
+
368
+ ![](images/0b374660047e50a6b1afecbf7c601f23526a60ee0c5683724836d830b870d35a.jpg)
369
+
370
+ ![](images/ba7923c0f82f9d3e4ca98b5db85b8c77625fd6ed68275fed038acaff183a86c9.jpg)
371
+
372
+ ![](images/4e59196bf321ded453f148d35ecbe39ea287ede5d7904b73e916353d22f13a56.jpg)
373
+ Figure 11: Additional qualitative results We demonstrate the better performance of our model in accurately identifying background objects (depicted in gray) as unknown (represented by the blue color), and also correcting the predictions from known class to unknown class. This capability greatly reduces the misclassification of background objects as known objects, leading to improved overall classification accuracy.
374
+
375
+ ![](images/52f4be168223491adc2e1ce4ef3727bdba76d8a302058d3f9853683047dfa3f9.jpg)
376
+
377
+ ![](images/dd4ccd69c74b093046a3910d05f8059e60657e32f52f1f9e46dbbd3c7772ca7e.jpg)
378
+
379
+ ![](images/2fe9487a323f067256a6a291867c5c2ff7149e6617c2a9d27fcb7937a2e8bed6.jpg)
380
+ Ground Truth
381
+
382
+ ![](images/3c7a7a7478c3b12875f78e0e1681312d6a428dcafcea91ec6a400c9524cc4526.jpg)
383
+ 3D-OWIS-PC-CT
384
+
385
+ ![](images/cbd2a0094e4ec4efb7421edba53981c3b541c3a5449a1e613404724437bea23f.jpg)
386
+ 3D-OWIS
387
+
388
+ ![](images/67aa3d0968db2b87be482b1889b6e26e55e1a80536b685a2b24e1541aa007a50.jpg)
389
+
390
+ ![](images/4161f8915770c57426233e946901e74ae8aa19bacf9889c460e5c44a95e89037.jpg)
391
+
392
+ ![](images/37a9ec0b611eab8a1d69e97841947d471b77a2c9a4bbd0dd53e296d70396964b.jpg)
393
+
394
+ ![](images/1dc43acdf13f200357cccea28a52d4880ab0a693b03a7268af8352e8818bce3c.jpg)
395
+
396
+ ![](images/726b47d281c1a12a5fa76bb56d0f5d29c71c42606483eda3f08593ac8f11100c.jpg)
397
+
398
+ ![](images/5deaa85f8ab6980746708ed8564903029b7ee6def4aa54d8d3b283d1d1d4e948.jpg)
399
+
400
+ ![](images/9c3b631162c0d4daea7bd0d4cf16af9c3a3705ec8fc678952d8dfe408ff56dae.jpg)
401
+
402
+ ![](images/6f43c98692388bc5c8c91f08cab038cfc69ce055da57fc8691289fe4fce0512a.jpg)
403
+
404
+ ![](images/a5c9c763d99ad2664c5264371fe014bbdfc93589bfb94d33ee9d768cbde00d3c.jpg)
405
+
406
+ ![](images/be12ef636e694725ec2b16005d1a4346497d64409f4052fca601e4af1f6b08fa.jpg)
407
+
408
+ ![](images/62da61aaa0f508521790d602a56a598ab3f7506a73250afa63d0c06b9dfc54f7.jpg)
409
+
410
+ ![](images/555a99c0071de297d956f18f6e8e70c723f6e5c36647a829604be2fbcd751887.jpg)
411
+
412
+ ![](images/27c81a3498e0a0e3e3ef1c279df88743c8eef31727b82e7e714023eb9b8bad98.jpg)
413
+ Figure 12: Additional qualitative results
414
+
415
+ ![](images/f751ae665b730952836b361e6092e9a3c2cf522f57e8f9362aefb0daa4563684.jpg)
416
+
417
+ ![](images/6e097c04ed91052c9269278d30712346355395ae88f9f1cb6e1cc0fb76e83fc6.jpg)
418
+
419
+ ![](images/040e7aed043228b97b541e6b76ec64ad11163cb10efb42337dcebfaa4a959261.jpg)
420
+ Figure 13: Additional qualitative results
3dindoorinstancesegmentationinanopenworld/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f1b9fafa64c527bf5f3514ddf215f0b20e19ec9da852787b06ecc45626eac07
3
+ size 1665115
3dindoorinstancesegmentationinanopenworld/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ecdd31d2ff5546d463b1a8b6cd9609c6880f54067d3a81891482c9f58d5fb6a5
3
+ size 487562
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f64a5e3b45bd34296e0b49b0a9ab82da32fc781a827060762a6c5a35460dfa20
3
+ size 103078
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a1e758f397ca42fe482e35dcd51e0504749676150030bf8d6c64a5d14d50a7b
3
+ size 128294
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/eea59a9a-0fe1-4630-b690-b39979edd293_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9bb7064ad0e30db731e540d1ab6db8102905daefebaec03234248b8d8e66c8df
3
+ size 8361109
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/full.md ADDED
@@ -0,0 +1,441 @@
 
 
 
 
1
+ # 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes
2
+
3
+ Haotian Xue
4
+
5
+ Antonio Torralba
6
+
7
+ Joshua Tenenbaum
8
+
9
+ Daniel Yamins<sup>3</sup>
10
+
11
+ Yunzhu Li 3,4*
12
+
13
+ Hsiao-Yu Tung*
14
+
15
+ <sup>1</sup> Georgia Tech
16
+
17
+ $^{2}$ MIT
18
+
19
+ 3 Stanford University
20
+
21
+ 4 UIUC
22
+
23
+ # Abstract
24
+
25
+ Given a visual scene, humans have strong intuitions about how the scene will evolve over time under given actions. This intuition, often termed visual intuitive physics, is a critical ability that allows us to make effective plans to manipulate the scene to achieve desired outcomes without relying on extensive trial and error. In this paper, we present a framework capable of learning 3D-grounded visual intuitive physics models from videos of complex scenes. Our method is composed of a conditional Neural Radiance Field (NeRF)-style visual frontend and a 3D point-based dynamics prediction backend, which lets us impose strong relational and structural inductive biases to capture the structure of the underlying environment. Unlike existing intuitive point-based dynamics works that rely on supervision from dense point trajectories produced by simulators, we relax the requirements and only assume access to multi-view RGB images and (imperfect) instance masks acquired using color priors. This enables the proposed model to handle scenarios where accurate point estimation and tracking are hard or impossible. We generate datasets covering three challenging scenarios involving fluids, granular materials, and rigid objects in simulation. The datasets do not include any dense particle information, so most previous 3D-based intuitive physics pipelines can barely deal with them. We show that our model can make long-horizon future predictions by learning from raw images and significantly outperforms models that do not employ an explicit 3D representation space. We also show that, once trained, our model achieves strong generalization in complex scenarios under extrapolation settings. The code is released at https://github.com/xavihart/3D-IntPhys.
26
+
27
+ # 1 Introduction
28
+
29
+ Humans can achieve a strong intuitive understanding of the 3D physical world around us simply from visual perception [5, 8, 52, 51, 46, 11]. Because we constantly interact physically with the environment, this intuitive physical understanding extends to objects of a wide variety of materials [6, 56]. For example, after watching videos of water pouring and doing the task ourselves, we can develop a mental model of the interaction process and predict how the water will move when we apply actions like tilting or shaking the cup (Figure 1). The ability to predict the future evolution of the physical environment is extremely useful for planning our behavior and performing everyday manipulation tasks. It is thus desirable to develop computational tools that learn 3D-grounded models of the world purely from visual observations and that generalize to objects with complicated physical properties like fluids and granular materials.
30
+
31
+ ![](images/895d3dc39c2d25900d9e4fba8793dfa5212feac4b242ab0866f7890a3bdf59e3.jpg)
32
+ Figure 1: Visual Intuitive Physics Grounded in 3D Space. Humans have a strong intuitive understanding of the physical environment. We can predict how the environment would evolve when applying specific actions. This ability roots in our understanding of 3D and applies to objects of diverse materials, which is essential when planning our behavior to achieve specific goals. In this work, we are the first to leverage a combination of implicit neural representation and explicit 3D particle representation to build 3D-grounded visual intuitive physics models of the challenging scenes that applies to objects with complicated physical properties, such as fluids, rigid objects, and granular materials.
33
+
34
+ ![](images/0660be69ab178dae853cd48ee7386e413f19c1a47e5d744a661875b9403b8cc9.jpg)
35
+
36
+ ![](images/55a2b90bf56b1e04504a8615cbd353e1764a8ed8a5bf8c2fa71d262389692b2d.jpg)
37
+
38
+ There has been a series of works on learning intuitive physics models of the environment from data. However, most existing work either focuses on 2D environments [60, 1, 21, 64, 20, 4, 44, 28, 67, 24, 23, 49, 33, 19, 31, 58, 22, 12, 65] or has to make strong assumptions about the accessible information of the underlying environment [36, 35, 47, 43, 70, 54, 48, 7, 2, 26] (e.g., full-state information of the fluids represented as points). These limitations prevent their use in tasks requiring an explicit 3D understanding of the environment and make them hard to extend to more complicated real-world environments where only visual observations are available. There are works that aim to address this issue by learning a 3D-grounded representation of the environment and modeling the dynamics in a latent vector space [34, 32]. However, these models typically encode the entire scene into a single vector. Such a design does not capture the structure of the underlying system, limiting generalization to compositional systems or systems of different sizes (e.g., unseen container shapes or different numbers of floating ice cubes).
39
+
40
+ In this work, we propose 3D Visual Intuitive Physics (3D-IntPhys), a framework that learns intuitive physics models of the environment, with explicit 3D and compositional structures, from visual inputs.
41
+
42
+ Specifically, the model consists of (1) a perception module based on conditional Neural Radiance Fields (NeRF) [41, 68] that transforms the input images and instance masks into 3D point representations and (2) a dynamics module instantiated as graph neural networks that models the interactions between the points and predicts their evolution over time. Despite advances in graph-based dynamics networks [47, 36], existing methods require strong supervision in the form of ground-truth 3D point trajectories, which are hard to obtain in most real setups. To tackle the problem, we train the dynamics model using (1) a distribution-based loss function measuring the difference between the predicted point sets and the actual point distributions at future timesteps and (2) a spacing loss to avoid degenerate point set predictions. Our perception module learns spatially equivariant representations of the environment grounded in the 3D space, which we then convert into a point set as a flexible description of the system's state. Our dynamics module treats the point set as a graph and exploits the compositional structure of the point system.
43
+
44
+ The structures allow the model to capture the compositionality of the underlying environment, handle systems involving objects with complicated physical properties, and perform extrapolated generalization, which we show via experiments greatly outperform various baselines without a structured 3D representation space.
45
+
46
+ # 2 Related Work
47
+
48
+ Visual dynamics learning. Existing works learn to predict object motions from pixels using frame-centric features [1, 20, 4, 24, 23, 53, 30, 59, 69, 10, 25, 62] or object-centric features [21, 61, 28, 44, 27, 58, 14, 22, 45, 66], yet, most works only demonstrate the learning in 2D scenes with objects
49
+
50
+ ![](images/6499542e8f80019e6084805a372086fbfa95d424c0a62837964d697d796268cc.jpg)
51
+ Figure 2: Overview of 3D Visual Intuitive Physics (3D-IntPhys). Our model consists of two major components: Left: The perception module maps the visual observations into implicit neural representations of the environment. We then subsample from the reconstructed implicit volume to obtain a particle representation of the environment. Right: The dynamics module, instantiated as graph neural networks, models the interaction within and between the objects and predicts the evolution of the particle set.
52
+
53
+ moving only on a 2D plane. We argue that one reason these existing methods are hard to apply to general 3D visual scenes is that they often operate on view-dependent features, which can change dramatically with the camera viewpoint even though the viewpoint has no effect on the actual motion of the objects. Recent work by [9] has shown that only methods that use 3D view-invariant representations can pave the way toward human-level physics dynamics prediction in diverse scenarios.
54
+
55
+ Researchers have attempted to learn object motion in 3D [55, 63, 40, 34]. [55] and [63] use object-centric volumetric representations inferred from RGB-D to predict object motion; yet these volumetric approaches have much higher computation costs than 2D methods due to the 4D representation bottleneck, which hinders them from scaling up to more complex scenes. [40] use self-supervised 3D keypoints and [15, 16] use implicit representations to model multi-object dynamics, but these cannot handle objects with high degrees of freedom like fluids and granular materials. [34] use a neural implicit representation to reduce the potential computational cost, yet the work has not shown how the approach can generalize to unseen scenarios. Our work aims to learn generalizable object dynamics in 3D by combining the generalization strength of input-feature-conditioned implicit representations with point-based dynamics models.
56
+
57
+ Point-based dynamics models. Existing works in point- and mesh-based dynamics models [36, 42, 57, 47, 43] have shown impressive results in predicting the dynamics of rigid objects, fluids [36, 42, 57, 47, 3, 13], deformable objects [36, 42, 47], and cloth [43, 38]. Most works require access to the full 3D states of the points during training and testing, yet such information is usually not accessible in a real-world setup. [35] learn a visual frontend to infer 3D point states from images, but still require 3D point states and trajectories at training time. [50] propose to learn point dynamics directly from vision, but they only consider elasto-plastic objects consisting of homogeneous materials. How to learn 3D point states and their motion from raw pixels remains an open question. Our paper builds this link from pixels to points using recent advances in unsupervised 3D inference from images with NeRF [41, 68].
58
+
59
+ # 3 Methods
60
+
61
+ We present 3D Visual Intuitive Physics (3D-IntPhys), a model that learns to simulate physical events from unlabeled images (Figure 2). 3D-IntPhys contains a perception module that transforms visual observations into a 3D point cloud that captures the object geometries (Section 3.1) and a point-based simulator that learns to simulate the rollout trajectories of the points (Section 3.2). The design choice of learning physics simulation in a 3D point representation space enables stronger simulation performance and generalization ability. The performance gain mainly comes from the fact that describing and learning objects' motion and interactions in 3D is easier than doing so in 2D, since objects live and move persistently in 3D space. 3D-IntPhys also supports better
62
+
63
+ generalization ability since its neural architecture explicitly models how local geometries of two objects/parts interact, and these local geometries and interactions can be shared across different and novel object combinations.
64
+
65
+ Although 3D-IntPhys learns to simulate in a 3D representation space, we show it can learn without any 3D supervision such as dense point trajectories as in previous work [47, 36]. Dense point trajectories are hard and sometimes impossible to obtain in the real world, e.g., capturing the trajectories of each water point. 3D-IntPhys does not require such 3D supervision and can simply learn by observing videos of the scene evolution.
66
+
67
+ # 3.1 2D-to-3D Perception Module
68
+
69
+ Given a static scene, the perception module learns to transform one or a few posed RGB images, $\mathbf{I} = \{(I_i,\pi_i)|i\in \{1,2,\dots ,N_v\} \}$ , taken from $N_{v}$ different views, into a 3D point cloud representation of the scene, $\mathbf{X}$ . We train the model in an unsupervised manner through view reconstruction, using a dataset consisting of $N_{t}$ videos, where each video has $N_{f}$ frames, and each frame contains images taken from $N_{v}$ viewpoints.
70
+
71
+ Neural Radiance Field (NeRF). NeRF [41] learns to reconstruct a volumetric radiance field of a scene from unlabeled multi-view images. After training, the model learns to predict the RGB color $\mathbf{c}$ and the corresponding density $\sigma$ of a query 3D point $\mathbf{x} \in \mathbb{R}^3$ from the viewing direction $\mathbf{d} \in \mathbb{R}^3$ with a function $(\mathbf{c}, \sigma) = f(\mathbf{x}, \mathbf{d})$ . We can formulate a camera ray as $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ , where $\mathbf{o} \in \mathbb{R}^3$ is the origin of the ray. The volumetric radiance field can then be rendered into a 2D image via $\hat{\mathbf{C}}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\sigma(t)\mathbf{c}(t)dt$ , where $T(t) = \exp(-\int_{t_n}^{t}\sigma(s)ds)$ handles occlusion. The rendering range is controlled by the depths of the near and far plane (i.e., $t_n$ and $t_f$ ). We can train NeRF through view prediction by:
72
+
73
+ $$
74
+ \mathcal{L} = \sum_{\mathbf{r} \in \mathcal{R}(\mathbf{p})} \left\| \hat{\mathbf{C}}(\mathbf{r}) - \mathbf{C}(\mathbf{r}) \right\|, \tag{1}
75
+ $$
76
+
77
+ where $\mathcal{R}(\mathbf{p})$ is the set of camera rays sampled from target camera pose $\mathbf{p}$ .
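+
+ To make the rendering step concrete, the sketch below shows the standard quadrature approximation of the integral above together with the per-ray view-reconstruction loss of Equation 1. This is a minimal PyTorch illustration; the function names (`render_ray`, `rgb_sigma_fn`) are placeholders rather than the released implementation.
+
+ ```python
+ import torch
+
+ def render_ray(rgb_sigma_fn, rays_o, rays_d, t_near, t_far, n_samples=64):
+     """Approximate C(r) = integral of T(t) sigma(t) c(t) dt with uniform samples along each ray.
+
+     rgb_sigma_fn: maps (points [N,S,3], dirs [N,S,3]) -> (rgb [N,S,3], sigma [N,S]).
+     rays_o, rays_d: [N,3] ray origins and directions.
+     """
+     t = torch.linspace(t_near, t_far, n_samples, device=rays_o.device)          # [S]
+     pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]            # [N,S,3]
+     rgb, sigma = rgb_sigma_fn(pts, rays_d[:, None, :].expand_as(pts))
+     delta = (t_far - t_near) / n_samples                                        # sample spacing
+     alpha = 1.0 - torch.exp(-sigma * delta)                                     # [N,S]
+     trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
+                                      1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
+     weights = trans * alpha                                                     # transmittance-weighted contribution
+     return (weights[..., None] * rgb).sum(dim=1)                                # rendered color, [N,3]
+
+ def view_reconstruction_loss(pred_rgb, gt_rgb):
+     # Equation 1, averaged over the sampled rays instead of summed.
+     return (pred_rgb - gt_rgb).norm(dim=-1).mean()
+ ```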
78
+
79
+ Image-conditioned NeRF. To infer the NeRF function from an image, previous work proposed to encode the input image into a single vector with a CNN encoder and use it as a conditioning input to the target NeRF function [34]. We found this type of architecture is in general hard to train and does not generalize well. Instead, we adopt pixelNeRF [68], which conditions NeRF rendering on local features rather than global ones. Given an image $I$ of a scene, pixelNeRF first extracts a feature volume using a CNN encoder, $\mathbf{W} = E(I)$. For a point $\mathbf{x}$ in world coordinates, we retrieve its feature vector by projecting it onto the image plane, giving $\mathbf{W}(\pi(\mathbf{x}))$. PixelNeRF then combines this feature vector with the 3D position of the point and predicts the RGB color and density:
80
+
81
+ $$
82
+ \mathbf {V} (\mathbf {x}) = (\mathbf {c}, \sigma) = f (\mathbf {x}, \mathbf {d}, \mathbf {W} (\pi (\mathbf {x}))). \tag {2}
83
+ $$
84
+
85
+ In the experiment section, we show that it is surprisingly effective to train a single conditional Neural Radiance Field that applies to all videos of one type of scene (e.g., FluidPour) across its five different settings (including the extrapolate setting); this provides a better 3D representation of the scene and greatly facilitates the learning of 3D intuitive physics.
86
+
87
+ Explicit 3D representation from pixelNeRF. From a few posed RGB images $\mathbf{I}$ of a scene $s$, we infer a set of points for each of the $O_{s}$ target objects (such as fluid or cubes) in the scene. We achieve this by first sampling a set of points according to the predicted occupancy measure and then clustering the points into objects using object segmentations. We found that sampling at low resolution hurts the quality of the reconstructed point cloud and yields objects with inaccurate shapes, while sampling at high resolution increases the computation for training the dynamics model since the input size grows. To speed up training while maintaining the quality of the reconstructed point cloud, we first infer the points at a higher resolution and then sparsify each point cloud using Farthest Point Sampling (FPS) [17]. Next, we cluster the inferred points into objects according to object segmentation masks. Since solving object segmentation in general is not the main focus of this paper, we resort to using color information to obtain the masks.
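+
+ As an illustration of this procedure, the following NumPy sketch queries the learned field on a regular grid, keeps points whose predicted occupancy exceeds a threshold, subsamples them with FPS, and assigns object labels with a color rule. Here `density_fn`, `color_fn`, and the specific color rule are stand-ins for illustration, not the exact released pipeline.
+
+ ```python
+ import numpy as np
+
+ def extract_object_points(density_fn, color_fn, lo, hi, res=64, occ_thresh=0.99, n_keep=2000):
+     """Turn an implicit field into an object-labeled point set."""
+     axes = [np.linspace(lo[i], hi[i], res) for i in range(3)]
+     grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
+     occ = density_fn(grid)                       # predicted occupancy/density per grid point
+     pts = grid[occ > occ_thresh]
+     if len(pts) == 0:
+         return pts, np.empty(0, dtype=int)
+
+     # Farthest point sampling: greedily keep the point farthest from the chosen set.
+     chosen = [0]
+     d = np.linalg.norm(pts - pts[0], axis=1)
+     for _ in range(1, min(n_keep, len(pts))):
+         idx = int(d.argmax())
+         chosen.append(idx)
+         d = np.minimum(d, np.linalg.norm(pts - pts[idx], axis=1))
+     pts = pts[chosen]
+
+     # Cluster points into objects using color; a "blueish = fluid" rule is purely illustrative.
+     colors = color_fn(pts)                       # RGB per point
+     labels = (colors[:, 2] > colors[:, 0]).astype(int)
+     return pts, labels
+ ```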
88
+
89
+ # 3.2 Point-Based Dynamics Learner
90
+
91
+ Given the point representation at the current time step, $\mathbf{X}_t$ , the dynamics simulator predicts the points' evolution $T$ steps in the future, $\{\mathbf{X}_{t + 1},\mathbf{X}_{t + 2},\dots \mathbf{X}_{t + T}\}$ , using graph-based networks [47, 36].
92
+
93
+ We first form a graph $(V,E)$ based on the distance between points. If the distance between two points is smaller than a threshold $\delta$ , we include an edge between these two points. Each vertex $v_{i} = (\dot{x}_{i},a_{i}^{v})\in V$ contains the velocity of the point, $\dot{x}_i$ , and point attributes, $a_i^v$ , to indicate the point's type. For each relation, $(i,j)\in E$ , we have its associated relation attribute $a_{ij}^{e}$ , indicating the types of relation and the relative distance between the connected points.
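+
+ A minimal NumPy sketch of this graph construction is shown below; the dense pairwise-distance computation and the simple attribute encoding are illustrative choices (a k-d tree would be the natural replacement for larger point sets).
+
+ ```python
+ import numpy as np
+
+ def build_graph(points, attrs, delta):
+     """Connect every pair of points closer than `delta`.
+
+     points: [N,3] positions; attrs: [N] integer material/type ids.
+     Returns an edge list [(i, j)] and per-edge features (relative offset + endpoint types).
+     """
+     diff = points[:, None, :] - points[None, :, :]              # [N,N,3]
+     dist = np.linalg.norm(diff, axis=-1)
+     mask = (dist < delta) & ~np.eye(len(points), dtype=bool)    # drop self-edges
+     src, dst = np.nonzero(mask)
+     edges = list(zip(src.tolist(), dst.tolist()))
+     edge_feats = [np.concatenate([diff[i, j], [attrs[i], attrs[j]]]) for i, j in edges]
+     return edges, edge_feats
+ ```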
94
+
95
+ Spatial message passing and propagation. At time step $t$ , we can do message passing to update the points, $v_{i} \in V$ , and relation representations, $(i,j) \in E$ , in the graph:
96
+
97
+ $$
98
+ g _ {i j, t} = Q _ {e} \left(v _ {i, t}, v _ {j, t}, a _ {i j} ^ {e}\right), (i, j) \in E \tag {3}
99
+ $$
100
+
101
+ $$
102
+ h _ {i, t} = Q _ {v} \left(v _ {i, t}, \sum_ {k \in \{j | (i, j) \in E \}} g _ {i k, t}\right), v _ {i} \in V \tag {4}
103
+ $$
104
+
105
+ where $Q_v$ and $Q_e$ are encoders for vertices and relations, respectively. Please refer to [7] for more details. Though this kind of message passing helps update the representations, each step only shares one-hop information, limiting its ability to model the instantaneous propagation of forces. To improve long-range instantaneous effect propagation, we use multi-step message propagation as in [37, 36]. The propagation step is shown in Algorithm 1:
106
+
107
+ Algorithm 1: Point-based Dynamics Predictor
108
+ ```txt
109
+ Data: Current timestep $t$ , point cloud $V_{t}$ , vertex encoder $Q_{v}$ , edge encoder $Q_{e}$ , vertex propagator $P_{v}$ , edge propagator $P_{e}$ , state predictor $f_{s}$
110
+ ```
111
+
112
+ Result: $V_{t + 1}$
113
+
114
+ Form graph $G_{t} = (V_{t},E_{t})$
115
+
116
+ //message passing
117
+
118
+ $$
119
+ g _ {i j, t} = Q _ {e} \left(v _ {i, t}, v _ {j, t}, a _ {i j} ^ {e}\right), (i, j) \in E _ {t}
120
+ $$
121
+
122
+ $$
123
+ h _ {i, t} = Q _ {v} \left(v _ {i, t}, \sum_ {k \in \{j | (i, j) \in E \}} g _ {i k, t}\right), v _ {i} \in V _ {t}
124
+ $$
125
+
126
+ //message propagation
127
+
128
+ $$
129
+ h _ {i, t} ^ {0} = h _ {i, t}, g _ {i, t} ^ {0} = g _ {i, t},
130
+ $$
131
+
132
+ for $l\in \{1,2,3,\dots,L\}$ do
133
+
134
+ $$
135
+ \left| \begin{array}{l l} & g _ {i j, t} ^ {l} = P _ {e} (g _ {i j, t} ^ {l - 1}, h _ {i, t} ^ {l - 1}, h _ {j, t} ^ {l - 1}) \quad , (i, j) \in E _ {t} \\ & h _ {i, t} ^ {l} = P _ {v} (h _ {i, t} ^ {l - 1}, \sum_ {k \in \{j | (i, j) \in E \}} g _ {i k, t} ^ {l}) \quad , v _ {i} \in V _ {t} \end{array} \right.
136
+ $$
137
+
138
+ end
139
+
140
+ //state prediction
141
+
142
+ $$
143
+ v _ {i, t + 1} = f _ {s} \left(h _ {i, t} ^ {L}\right)
144
+ $$
145
+
146
+ $$
147
+ V _ {t + 1} = \{v _ {i, t + 1} \}
148
+ $$
149
+
150
+ where $P_{e}, P_{v}$ are propagation functions of nodes and edges, respectively, and $g_{ij,t}^{l}$ is the effect of relation $(i,j)$ in propagation step $l$ . $h_{i,t}^{l}$ is the hidden states for each point in the propagation process. Finally, we have the predicted states of points at time step $t + 1$ after $L$ steps of propagation:
151
+
152
+ $$
153
+ \hat {v} _ {i, t + 1} = f _ {s} \left(h _ {i, t} ^ {L}\right). \tag {5}
154
+ $$
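+
+ Putting Algorithm 1 together, the sketch below implements the encode-propagate-predict loop on a point graph in PyTorch. The hidden sizes, number of propagation steps, and the velocity output head are illustrative assumptions, not the exact released configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PropagationNetwork(nn.Module):
+     """Q_v/Q_e encode vertices and edges, P_v/P_e propagate effects for L steps,
+     and f_s maps the final hidden state to the next per-point state."""
+
+     def __init__(self, node_dim, edge_dim, hidden=150, L=3):
+         super().__init__()
+         self.Q_e = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, hidden), nn.ReLU())
+         self.Q_v = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, hidden), nn.ReLU())
+         self.P_e = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU())
+         self.P_v = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
+         self.f_s = nn.Linear(hidden, 3)      # e.g., predicted per-point velocity / position update
+         self.L = L
+
+     def forward(self, v, src, dst, e_attr):
+         # v: [N, node_dim] vertex features; src/dst: [E] edge endpoints; e_attr: [E, edge_dim].
+         g = self.Q_e(torch.cat([v[src], v[dst], e_attr], dim=-1))
+         agg = torch.zeros(v.size(0), g.size(-1), device=v.device).index_add_(0, src, g)
+         h = self.Q_v(torch.cat([v, agg], dim=-1))
+         for _ in range(self.L):              # multi-step propagation of instantaneous effects
+             g = self.P_e(torch.cat([g, h[src], h[dst]], dim=-1))
+             agg = torch.zeros_like(agg).index_add_(0, src, g)
+             h = self.P_v(torch.cat([h, agg], dim=-1))
+         return self.f_s(h)                   # one prediction per point
+ ```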
155
+
156
+ **Environments.** We assume that the surrounding environment (e.g., the table) is known and that the robot/tool/container is of known shape and fully actuated, so the model has access to its complete 3D state information. We convert the full 3D states into points by sampling on the 3D meshes and include these points in the prediction of the graph-based dynamics.
157
+
158
+ Fluids, rigid bodies, and granular materials. We distinguish different materials by using different point attributes $a_{i}^{v}$ . We also set different relation attributes $a_{ij}^{e}$ in Equation 3 to distinguish different interaction types (e.g., Rigid-Fluids, Fluids-Fluids, Granular-Pusher). For rigid objects, to ensure the object shapes remain consistent throughout the rollout predictions, we add a differentiable rigidity constraint in the prediction head following [36].
159
+
160
+ Training the dynamics model without point-level correspondence. Since our perception model parses each RGB image into object-centric point clouds independently, there is no explicit one-to-one correspondence between points across frames. To handle this, we measure the Chamfer distance between the prediction $\hat{\mathbf{X}}_t = (\hat{V}_t,\hat{E}_t)$ from the dynamics network and the inferred point state $\mathbf{X}_t = (V_t,E_t)$ from the perception module and treat it as the objective function. The Chamfer
161
+
162
+ ![](images/2df05e618b447a2fe908dc9469ab3844122b79c8b4a583aabc590312f92842e5.jpg)
163
+ Figure 3: Data Collection and Evaluation Setups. Left: We collect multi-view videos of the environment from six cameras. Right: We consider a diverse set of evaluation environments involving fluids, rigid objects, granular materials, and their interactions with the fully-actuated container and the environment. We evaluate the learned visual intuitive physics model on both the interpolated settings (i.e., seen environment but with different action sequences) and extrapolated settings (i.e., unseen environment with different amounts of fluids, cubes, granular pieces, and containers of different sizes).
164
+
165
+ ![](images/e732f4e24a5f669543d7ea1de66d44c51a63f47122840cbf45ffbb46dbe29151.jpg)
166
+
167
+ ![](images/9b3096a43004fcbac80f87949f9105c57c44a8834fcb3f0cac0dcfc9558fd97e.jpg)
168
+
169
+ ![](images/233f7aa3055aae24dcffdf9ffacf9cd4a868d2dbab148e2259bab43ca91497e2.jpg)
170
+
171
+ distance between two point clouds $\hat{V}$ and $V$ is defined as:
172
+
173
+ $$
174
+ L _ {c} (\hat {V}, V) = \frac {1}{\| \hat {V} \|} \sum_ {x \in \hat {V}} \min _ {y \in V} \| x - y \| _ {2} ^ {2} + \frac {1}{\| V \|} \sum_ {x \in V} \min _ {y \in \hat {V}} \| x - y \| _ {2} ^ {2}. \tag {6}
175
+ $$
176
+
177
+ We found that training the model with the Chamfer distance in dense scenes with granular materials often leads to predictions with unevenly distributed points, where some points stick too close to each other. To alleviate this issue, we further introduce a spacing loss $L_{s}$ , which penalizes the gated distance (gated by $d_{\mathrm{min}}$ ) to the nearest neighbor of each point to ensure enough space between points:
178
+
179
+ $$
180
+ L _ {s} (\hat {V}) = \sum_ {v \in \hat {V}} \left(\operatorname {ReLU} \left(d _ {\min } - \min _ {v ^ {\prime} \in \{\hat {V} \backslash v \}} \| v ^ {\prime} - v \| _ {2} ^ {2}\right)\right) ^ {2}. \tag {7}
181
+ $$
182
+
183
+ The one-step prediction loss $L_{dy}$ for training the dynamics model is $L_{c}(\hat{V}, V) + \sigma L_{s}(\hat{V})$ , where $\sigma$ reweights the second term. To improve long-term rollout accuracy, we train the model with two-step predictions, feeding the first predicted state back into the model to generate the second predicted state. With this two-step loss, the model becomes more robust to errors generated from its own predictions. Finally, the $L_{dy}$ losses for all rollout steps are summed to obtain the final loss for the trajectory. More implementation details are included in the supplementary material.
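+
+ Both training losses translate into a few lines of PyTorch; the sketch below is a minimal rendering of Equations 6 and 7, with $d_{\min} = 0.08$ and $\sigma = 10$ taken from the ablation settings reported in the appendix.
+
+ ```python
+ import torch
+
+ def chamfer_loss(pred, target):
+     """Symmetric Chamfer distance between two point sets (Eq. 6). pred: [N,3], target: [M,3]."""
+     d2 = torch.cdist(pred, target).pow(2)            # [N, M] pairwise squared distances
+     return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
+
+ def spacing_loss(pred, d_min=0.08):
+     """Penalize predicted points that collapse closer than d_min to their nearest neighbor (Eq. 7)."""
+     d2 = torch.cdist(pred, pred).pow(2)
+     d2.fill_diagonal_(float("inf"))                  # exclude self-distances
+     nearest = d2.min(dim=1).values
+     return torch.relu(d_min - nearest).pow(2).sum()
+
+ def dynamics_loss(pred, target, sigma=10.0):
+     # L_dy = Chamfer term + sigma * spacing term, applied to each predicted rollout step.
+     return chamfer_loss(pred, target) + sigma * spacing_loss(pred)
+ ```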
184
+
185
+ # 4 Experiments
186
+
187
+ The experiment section aims to answer the following three questions. (1) How well can the visual inference module capture the content of the environment (i.e., can we use the learned representations to reconstruct the scene)? (2) How well does the proposed framework perform in scenes with objects of complicated physical properties (e.g., fluids, rigid and granular objects) compared to baselines without explicit 3D representations? (3) How well do the models generalize in extrapolate scenarios?
188
+
189
+ Datasets. We generated three simulated datasets using the physics simulator Nvidia FleX [39]. Each of the datasets represents one specific kind of manipulation scenario, where a robot arm interacts with rigid, fluid, and granular objects (Figure 3). For each of the three scenarios, we apply randomized input actions and change some properties of objects in the scene, e.g., the shape of the container, the amount of water, and the color/number of cubes, to make it diverse. To test the generalization capability of the trained model, we design extrapolated datasets where the data is generated from an extrapolated set of parameters outside the training distribution.
190
+
191
+ a) FluidPour. This scenario contains a fully-actuated cup pouring fluid into a container. We design the extrapolate dataset to have a larger container, a larger quantity of fluid, and different pouring actions.
192
+ b) FluidCubeShake. This scenario contains a fully-actuated container that moves on top of a table. Inside the container are fluids and cubes with diverse colors. We design the extrapolate dataset to have different container shapes, number of cubes, cube colors, and different shaking actions.
193
+
194
+ ![](images/f6b5f967b7f5ff6af7f556eb80e230ffcab68bae33c07e3c5a1c856de6084ead.jpg)
195
+ Figure 4: Qualitative Results of the Dynamics Module on Future Prediction. Here we visualize our model's predicted future evolution of the particle set as compared with the NeRF-dy [34] baseline in both interpolate and extrapolate settings. Our method correctly identifies the shape/distribution of the fluids, rigid objects, and granular pieces with much better accuracy than NeRF-dy. The future evolution predicted by our method also matches the ground truth much better and produces reasonable results even in extrapolate settings.
196
+
197
+ ![](images/39f53279749a577ce688614ce31b237f78c79512e19fe1fddf433e326e271b29.jpg)
198
+
199
+ c) GranularPush. This environment contains a fully-actuated board pushing a pile of granular pieces. We design the extrapolate dataset to have a larger quantity of granular objects in the scene than the model has ever seen during training.
200
+
201
+ Baselines. We compare our method with two baselines, NeRF-dy [34] and autoencoder (AE) (similar to GQN [18] augmented with a latent-space dynamics model). NeRF-dy is a 3D-aware framework that also learns intuitive physics from multi-view videos. Yet, instead of learning the object dynamics with explicit and compositional 3D representations, the model learns dynamics models with implicit 3D representations in the form of a single latent vector. We also compare our method with an autoencoder-based reconstruction model (AE) [18] that can perform novel-view synthesis but is worse at handling 3D transformations than neural implicit representations. AE first learns scene representations through per-frame image reconstruction, and then it learns a dynamics model on top of the learned latent representations. All methods take RGB images and camera parameters as inputs. To incorporate object-level information, we perform color-based segmentation to obtain object masks as additional inputs to the baselines. The implementation details and parameter settings of our method can be found in the supplementary materials.
202
+
203
+ <table><tr><td></td><td></td><td colspan="2">FluidPour</td><td colspan="2">FluidCubeShake</td><td colspan="2">GranularPush</td></tr><tr><td>Metrics</td><td>Model</td><td>InD</td><td>OoD</td><td>InD</td><td>OoD</td><td>InD</td><td>OoD</td></tr><tr><td rowspan="3">MSE(↓)</td><td>AE</td><td>451.03</td><td>542.86</td><td>869.3</td><td>1727.55</td><td>562.06</td><td>1537.2</td></tr><tr><td>NeRF-dy</td><td>202.95</td><td>317.27</td><td>527.46</td><td>1585.97</td><td>481.95</td><td>1020.0</td></tr><tr><td>Ours</td><td>111.66</td><td>124.33</td><td>66.52</td><td>81.38</td><td>147.97</td><td>646.85</td></tr><tr><td rowspan="3">SSIM(↑)</td><td>AE</td><td>0.86</td><td>0.84</td><td>0.71</td><td>0.86</td><td>0.81</td><td>0.62</td></tr><tr><td>NeRF-dy</td><td>0.89</td><td>0.86</td><td>0.73</td><td>0.65</td><td>0.81</td><td>0.61</td></tr><tr><td>Ours</td><td>0.90</td><td>0.89</td><td>0.94</td><td>0.93</td><td>0.89</td><td>0.69</td></tr></table>
204
+
205
+ Table 1: Quantitative Results of the Perception Module. We compare our method with autoencoder (AE) and NeRF-dy [34] with additional instance masks based on color. We measure the quality of rendered images by computing the Mean Squared Error (MSE) and Structural Similarity Index Measure (SSIM) compared to the ground truth. InD stands for in-distribution tests, and OoD stands for out-of-distribution tests.
206
+
207
+ ![](images/9d7996bca1d22662deac2521577329bbd38348a5f49c081d72d1e71aca0ff857.jpg)
208
+ Figure 5: Qualitative Reconstruction Results of the Perception Module. The images generated by our method contain more visual details and are much better aligned with the ground truth. Our model is much better at handling large scene variations than NeRF-dy, especially in extrapolate settings.
209
+
210
+ # 4.1 Image Reconstruction From Learned Scene Representations
211
+
212
+ We test how well the perception modules capture scene information by evaluating the visual front-end of all models on their ability to reconstruct the observed scene from the inferred representations. We measure the difference between the reconstructed and ground-truth images at the pixel level with Mean Squared Error (MSE) and Structural Similarity (SSIM) (Table 1). Our perception module outperforms all baselines in all three environments. The performance gap is even larger in extrapolate settings, especially in scenarios that involve complex interactions between rigid and deformable materials (see Figure 5 for qualitative comparisons).
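+
+ For reference, such pixel-level metrics can be computed with scikit-image as in the short sketch below (the exact evaluation script is not described here; `channel_axis` requires scikit-image 0.19 or newer).
+
+ ```python
+ import numpy as np
+ from skimage.metrics import mean_squared_error, structural_similarity
+
+ def image_metrics(pred, gt):
+     """pred, gt: uint8 RGB images of identical shape [H, W, 3]."""
+     mse = mean_squared_error(gt.astype(np.float64), pred.astype(np.float64))
+     ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
+     return mse, ssim
+ ```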
213
+
214
+ # 4.2 Learned Visual Dynamics On In-Distribution Held-Out Scenes
215
+
216
+ Next, we compare long-term rollouts in the 3D space. We evaluate the models using the Chamfer distance between the predicted point cloud and the ground truth. For NeRF-dy, we decode the predicted rollout latent vectors into point clouds with the learned NeRF decoder. We exclude the comparison with AE since it is unclear how to decode its learned representations into point clouds. We show quantitative comparisons in Figure 6 and qualitative results in Figure 4. 3D-IntPhys can
217
+
218
+ ![](images/44574cd6a2ab718edc01a41766864852f4c99331c25166894b25a0607b87be65.jpg)
219
+ Figure 6: Quantitative Results of the Dynamics Module. This figure compares our method and NeRF-dy [34] on their long-horizon open-loop future prediction loss. The loss is measured as the Chamfer distance between the predicted particle set evolution and the actual future. Our method outperforms the baseline in both interpolate and extrapolate settings, showing the benefits of explicit 3D modeling.
220
+
221
+ ![](images/b7ce4620d54d09177c6b7ed2a53d4827397dc6a7f61b9fae0f034d39f60bb3f8.jpg)
222
+
223
+ ![](images/aa7938459d51854d90f1c21843bda2d87a07c60fe92a472512ece5c6e91e155b.jpg)
224
+
225
+ ![](images/6e54fd0687a3c8afdff8bfe2525dde5d60d975274cc88f08d54f6733d5da4faa.jpg)
226
+ Figure 7: Strong Generalization Ability of the Dynamics Module to Wider Pushers. We evaluate our dynamics model on unseen pusher widths in the GranularPush environment. The left part shows the 3D space, where red indicates granular materials, green shows the table and pusher, and the arrow shows how the pusher is about to move. The right part shows the rendering results from the top view.
227
+
228
+ ![](images/8bb0e18560dec0f8fff56d9e256a61213500472976162673c7c698f28c0d93b6.jpg)
229
+
230
+ learn reasonable scene dynamics in all scenarios and significantly outperforms NeRF-dy. While NeRF-dy can learn relatively reasonable movements of fluids, it fails to learn complex dynamics such as the floating cube and the morphing of the granular materials. The results suggest that the proposed explicit 3D point-based representations are critical to learning complex multi-material dynamics.
231
+
232
+ # 4.3 Generalization on Out-of-Distribution Scenes
233
+
234
+ To test the generalization ability of the models, we introduce extrapolate settings for all three scenarios. See the "Extrapolate" results in Table 1 and Figures 4, 5, and 6. The proposed 3D-IntPhys generalizes well to extrapolate settings both at the visual perception stage and at the dynamics prediction stage, whereas NeRF-dy and the autoencoder both fail to generalize under extrapolate settings. For example, in FluidCubeShake, neither baseline can capture the number and the color of the rigid cubes (Figure 5). And in GranularPush, both baselines fail to capture the distribution of the granular materials. NeRF-dy performs much worse on extrapolation scenes compared to in-distribution scenes, suggesting that incorporating 3D information explicitly, as opposed to implicitly, is much better at capturing the structure of the underlying environment, thus leading to better generalization. We further test our model on completely unseen changes to the environment: in the GranularPush environment, we extend the width of the pusher by a factor of 2 and 5. Though the stretched pusher never appears in the training data, our model still makes reasonable pushing predictions (see Figure 7).
235
+
236
+ # 5 Conclusions
237
+
238
+ In this work, we propose a 3D-aware and compositional framework, 3D-IntPhys, to learn intuitive physics from unlabeled visual inputs. Our framework can work on complex scenes involving fluid,
239
+
240
+ rigid objects, and granular materials, and generalize to unseen scenes with containers of different sizes, more objects, or larger quantities of fluids and granular pieces. We show that the proposed model outperforms baselines by a large margin, highlighting the importance of learning dynamics models in an explicit 3D representation space. The major limitation of our work is the assumption of access to object masks. However, with the progress on segmentation in the wild [71, 29], we believe it will be possible to obtain such masks in real-world 3D environments. Our work is an early step toward visual intuitive physics learning in complex scenes, and learning more complex intuitive physics from real-world data with the help of these large models is an exciting future direction.
241
+
242
+ # References
243
+
244
+ [1] P. Agrawal, A. V. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. Advances in neural information processing systems, 29, 2016.
245
+ [2] A. Ajay, M. Bauza, J. Wu, N. Fazeli, J. B. Tenenbaum, A. Rodriguez, and L. P. Kaelbling. Combining physical simulators and object-based networks for control. CoRR, abs/1904.06580, 2019.
246
+ [3] K. R. Allen, T. Lopez-Guevara, K. L. Stachenfeld, A. Sanchez-Gonzalez, P. W. Battaglia, J. B. Hamrick, and T. Pfaff. Physical design using differentiable learned simulators. CoRR, abs/2202.00728, 2022.
247
+ [4] M. Babaeizadeh, M. T. Saffar, S. Nair, S. Levine, C. Finn, and D. Erhan. Fitvid: Overfitting in pixel-level video prediction. CoRR, abs/2106.13195, 2021.
248
+ [5] R. Baillargeon, E. S. Spelke, and S. Wasserman. Object permanence in five-month-old infants. Cognition, 20:191-208, 1985.
249
+ [6] C. Bates, I. Yildirim, J. B. Tenenbaum, and P. W. Battaglia. Modeling human intuitions about liquid flow with particle-based simulation. CoRR, abs/1809.01524, 2018.
250
+ [7] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. F. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulçehre, H. F. Song, A. J. Ballard, J. Gilmer, G. E. Dahl, A. Vaswani, K. R. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018.
251
+ [8] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110:18327 - 18332, 2013.
252
+ [9] D. M. Bear, E. Wang, D. Mrowca, F. J. Binder, H.-Y. F. Tung, R. Pramod, C. Holdaway, S. Tao, K. Smith, L. Fei-Fei, et al. Physion: Evaluating physical prediction from vision in humans and machines. arXiv preprint arXiv:2106.08261, 2021.
253
+ [10] Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. In ICLR, 2019.
254
+ [11] S. Carey and F. Xu. Infants' knowledge of objects: beyond object files and object tracking. Cognition, 80(1):179-213, 2001. Objects and Attention.
255
+ [12] M. B. Chang, T. D. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. CoRR, abs/1612.00341, 2016.
256
+ [13] F. de Avila Belbute-Peres, T. D. Economon, and J. Z. Kolter. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. CoRR, abs/2007.04439, 2020.
257
+ [14] D. Ding, F. Hill, A. Santoro, and M. M. Botvinick. Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures. CoRR, abs/2012.08508, 2020.
258
+ [15] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning models as functionals of signed-distance fields for manipulation planning. In Conference on Robot Learning, pages 245-255. PMLR, 2022.
259
+ [16] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint. Learning multi-object dynamics with compositional neural radiance fields. arXiv preprint arXiv:2202.11855, 2022.
260
+ [17] Y. Eldar, M. Lindenbaum, M. Porat, and Y. Zeevi. The farthest point strategy for progressive image sampling. IEEE Transactions on Image Processing, 6(9):1305-1315, 1997.
261
+ [18] S. A. Eslami, D. Jimenez Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018.
262
+
263
+ [19] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. CoRR, abs/1605.07157, 2016.
264
+ [20] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786-2793. IEEE, 2017.
265
+ [21] K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. In Y. Bengio and Y. LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
266
+ [22] R. Girdhar, L. Gustafson, A. Adcock, and L. van der Maaten. Forward prediction for physical reasoning. CoRR, abs/2006.10734, 2020.
267
+ [23] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
268
+ [24] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In International conference on machine learning, pages 2555-2565. PMLR, 2019.
269
+ [25] D. Hafner, T. P. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. CoRR, abs/1912.01603, 2019.
270
+ [26] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. CoRR, abs/1906.08253, 2019.
271
+ [27] M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu. Reasoning about physical interactions with object-oriented prediction and planning. In International Conference on Learning Representations, 2019.
272
+ [28] T. Kipf, E. van der Pol, and M. Welling. Contrastive learning of structured world models. arXiv preprint arXiv:1911.12247, 2019.
273
+ [29] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
274
+ [30] A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine. Stochastic adversarial video prediction. CoRR, abs/1804.01523, 2018.
275
+ [31] A. Lerer, S. Gross, and R. Fergus. Learning physical intuition of block towers by example. CoRR, abs/1603.01312, 2016.
276
+ [32] T. Li, M. Slavcheva, M. Zollhoefer, S. Green, C. Lassner, C. Kim, T. Schmidt, S. Lovegrove, M. Goesele, and Z. Lv. Neural 3d video synthesis. arXiv preprint arXiv:2103.02597, 2021.
277
+ [33] W. Li, S. Azimi, A. Leonardis, and M. Fritz. To fall or not to fall: A visual approach to physical stability prediction. CoRR, abs/1604.00066, 2016.
278
+ [34] Y. Li, S. Li, V. Sitzmann, P. Agrawal, and A. Torralba. 3d neural scene representations for visuomotor control. arXiv preprint arXiv:2107.04004, 2021.
279
+ [35] Y. Li, T. Lin, K. Yi, D. Bear, D. L. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba. Visual grounding of learned physical models. In International Conference on Machine Learning, 2020.
280
+ [36] Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In ICLR, 2019.
281
+ [37] Y. Li, J. Wu, J.-Y. Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake. Propagation networks for model-based control under partial observation. In 2019 International Conference on Robotics and Automation (ICRA), pages 1205-1211. IEEE, 2019.
282
+ [38] X. Lin, Y. Wang, Z. Huang, and D. Held. Learning visible connectivity dynamics for cloth smoothing. In Conference on Robot Learning, 2021.
283
+
284
+ [39] M. Macklin, M. Müller, N. Chentanez, and T.-Y. Kim. Unified particle physics for real-time applications. ACM Transactions on Graphics (TOG), 33(4):1-12, 2014.
285
+ [40] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning. arXiv preprint arXiv:2009.05085, 2020.
286
+ [41] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405-421. Springer, 2020.
287
+ [42] D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. Fei-Fei, J. B. Tenenbaum, and D. L. K. Yamins. Flexible neural representation for physics prediction. CoRR, abs/1806.08047, 2018.
288
+ [43] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. W. Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2021.
289
+ [44] H. Qi, X. Wang, D. Pathak, Y. Ma, and J. Malik. Learning long-term visual dynamics with region proposal interaction networks. In ICLR, 2021.
290
+ [45] R. Riochet, J. Sivic, I. Laptev, and E. Dupoux. Occlusion resistant learning of intuitive physics from videos. CoRR, abs/2005.00069, 2020.
291
+ [46] A. N. Sanborn, V. K. Mansinghka, and T. L. Griffiths. Reconciling intuitive physics and newtonian mechanics for colliding objects. Psychological review, 120 2:411-37, 2013.
292
+ [47] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459-8468. PMLR, 2020.
293
+ [48] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. A. Riedmiller, R. Hadsell, and P. W. Battaglia. Graph networks as learnable physics engines for inference and control. CoRR, abs/1806.01242, 2018.
294
+ [49] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.
295
+ [50] H. Shi, H. Xu, Z. Huang, Y. Li, and J. Wu. Robocraft: Learning to see, simulate, and shape elasto-plastic objects with graph networks. arXiv preprint arXiv:2205.02909, 2022.
296
+ [51] K. Smith, L. Mei, S. Yao, J. Wu, E. Spelke, J. Tenenbaum, and T. Ullman. Modeling expectation violation in intuitive physics with coarse probabilistic object representations. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
297
+ [52] E. S. Spelke. Principles of object perception. Cognitive Science, 14(1):29-56, 1990.
298
+ [53] H. Suh and R. Tedrake. The surprising effectiveness of linear models for visual foresight in object pile manipulation. In International Workshop on the Algorithmic Foundations of Robotics, pages 347-363. Springer, 2020.
299
+ [54] A. Tacchetti, H. F. Song, P. A. M. Mediano, V. F. Zambaldi, N. C. Rabinowitz, T. Graepel, M. M. Botvinick, and P. W. Battaglia. Relational forward models for multi-agent learning. CoRR, abs/1809.11044, 2018.
300
+ [55] H.-Y. F. Tung, Z. Xian, M. Prabhudesai, S. Lal, and K. Fragkiadaki. 3d-oes: Viewpoint-invariant object-factorized environment simulators. arXiv preprint arXiv:2011.06464, 2020.
301
+ [56] T. Ullman, E. Kosoy, I. Yildirim, A. A. Soltani, M. H. Siegel, J. Tenenbaum, and E. S. Spelke. Draping an elephant: Uncovering children's reasoning about cloth-covered objects. In Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages 3008-3014, 2019.
302
+
303
+ [57] B. Ummenhofer, L. Prantl, N. Thuerey, and V. Koltun. Lagrangian fluid simulation with continuous convolutions. In International Conference on Learning Representations, 2019.
304
+ [58] R. Veerapaneni, J. D. Co-Reyes, M. Chang, M. Janner, C. Finn, J. Wu, J. B. Tenenbaum, and S. Levine. Entity abstraction in visual model-based reinforcement learning. CoRR, abs/1910.12827, 2019.
305
+ [59] C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. CoRR, abs/1504.08023, 2015.
306
+ [60] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. Advances in neural information processing systems, 28, 2015.
307
+ [61] N. Watters, D. Zoran, T. Weber, P. Battaglia, R. Pascanu, and A. Tacchetti. Visual interaction networks: Learning a physics simulator from video. Advances in neural information processing systems, 30, 2017.
308
+ [62] B. Wu, S. Nair, R. Martin-Martin, L. Fei-Fei, and C. Finn. Greedy hierarchical variational autoencoders for large-scale video prediction. CoRR, abs/2103.04174, 2021.
309
+ [63] Z. Xu, Z. He, J. Wu, and S. Song. Learning 3d dynamic scene representations for robot manipulation. In Conference on Robotic Learning (CoRL), 2020.
310
+ [64] Z. Xu, J. Wu, A. Zeng, J. B. Tenenbaum, and S. Song. Densephysnet: Learning dense physical object representations via multi-step dynamic interactions. In Robotics: Science and Systems (RSS), 2019.
311
+ [65] T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances In Neural Information Processing Systems, 2016.
312
+ [66] Y. Ye, D. Gandhi, A. Gupta, and S. Tulsiani. Object-centric forward modeling for model predictive control. In CoRL, 2019.
313
+ [67] Y. Ye, M. Singh, A. Gupta, and S. Tulsiani. Compositional video prediction. In International Conference on Computer Vision (ICCV), 2019.
314
+ [68] A. Yu, V. Ye, M. Tancik, and A. Kanazawa. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4578-4587, 2021.
315
+ [69] M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. J. Johnson, and S. Levine. SOLAR: deep structured latent representations for model-based reinforcement learning. CoRR, abs/1808.09105, 2018.
316
+ [70] R. Zhang, J. Wu, C. Zhang, W. T. Freeman, and J. B. Tenenbaum. A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding. CoRR, abs/1605.01138, 2016.
317
+ [71] X. Zou, J. Yang, H. Zhang, F. Li, L. Li, J. Gao, and Y. J. Lee. Segment everything everywhere all at once. arXiv preprint arXiv:2304.06718, 2023.
318
+
319
+ # A Additional Results
320
+
321
+ To better understand the performance of our framework visually, we provide test-time rollouts of our framework, as well as those of various baselines, in the supplementary video. The video is published anonymously and can be accessed at https://sites.google.com/view/3d-intphys
322
+
323
+ # A.1 Ablation Study
324
+
325
+ We find that training the model with Chamfer distance in dense scenes with granular materials will often lead to predictions with unevenly distributed points where some points stick too close to each other. To alleviate the issue, we introduce the spacing loss to penalize the distance between these points. We set the threshold of penalty $d_{min}$ to be 0.08 and the loss weight $\sigma$ to be 10. We find that spacing loss can help improve the performance of the dynamics learner especially under extrapolate settings, as shown in Figure 8. We provide qualitative results in the supplementary video.
326
+
327
+ ![](images/bbca977cda5ec360e7e0137adcba7d69bea41b60fa5b0957e2e3183bd4689fbe.jpg)
328
+ Figure 8: Ablation Study on the Spacing Loss. Training dynamics models in the GranularPush scenario with spacing loss results in better rolling prediction. The performance gap is even more substantial in the extrapolate setting.
329
+
330
+ ![](images/358b2f922cf6cad6b16ee8477a77cdcaffd2b9341a44183fa9167a195cc2b70f.jpg)
331
+
332
+ # B Implementation Details
333
+
334
+ # B.1 Dataset Generation
335
+
336
+ Our datasets are generated by the NVIDIA Flex simulator. Each of the three scenarios (Pour, Shake and Push) has 500 videos of trajectories taken from 6 views, with each trajectory consisting of 300 frames. We manually select the 6 views with reasonable coverage of the tabletop space to minimize the occlusion. The 500 trials are generated from five different sets of environmental parameters, detailed in Table 3. We take one set of parameters that are outside the training distribution as the extrapolate dataset for evaluating model generalization. For the rest of the four settings, we randomly split them into train and test sets with a ratio of 0.8.
337
+
338
+ Next, we provide more details for each scenario:
339
+
340
+ - In the FluidPour environment, we randomly initialize the position of the upper container and then generate random back-and-forth actions by tilting the container. The action space is then the position and tilting angle of the upper container.
341
+ - In FluidCubeShake, we also randomly initialize the position of the container and the cubes inside the container. We then generate random but smooth action sequences moving the container in the 2D plane. The action space is then the x-y location of the container.
342
+ - In GranularPush, we randomly initialize the position of the granular pile. Then, for each push, we randomly generate the starting and ending positions of the pusher and move the pusher along the straight line with an angle perpendicular to the pushing direction. The action space is a four-number tuple stating the starting and ending position on the 2D plane.
343
+
344
+ The following table shows the moving range of the robot arms in the FluidPour and FluidCubeShake environments after normalizing the robot to the same size as in the real world (unit: centimeters).
345
+
346
+ <table><tr><td></td><td>X-Range</td><td>Y-Range</td><td>Z-Range</td></tr><tr><td>FluidPour</td><td>[-29.11, -12.66]</td><td>[42.00, 60.00]</td><td>[-7.78, 7.78]</td></tr><tr><td>FluidCubeShake</td><td>[-3.25, 42.25]</td><td>[19.25, 19.25]</td><td>[-24.50, 24.00]</td></tr></table>
347
+
348
+ ![](images/87cbf2bccb71f3c0e93e360c2c4660bacab10c23997c7230dc017a7c3835adfb.jpg)
349
+ Figure 9: Illustration of the Environment Settings. In the FluidPour scenario, a robot arm holds a container and tries to pour some fluid into another container. In the FluidShake scenario, a robot moves a container with some fluid and cubes. We show the parameters for the container shape referred in Table 3.
350
+
351
+ Table 2: Robot Action Space (centimeters): we show the range the robot arms can move in the FluidPour and FluidCubeShake environments.
352
+
353
+ <table><tr><td>SceneName</td><td>Params</td><td>Env1</td><td>Env2</td><td>Env3</td><td>Env4</td><td>Extrapolate</td></tr><tr><td rowspan="7">FluidPour</td><td>X2</td><td>0.53</td><td>0.53</td><td>0.81</td><td>0.81</td><td>0.81</td></tr><tr><td>Y2</td><td>0.53</td><td>0.81</td><td>0.53</td><td>0.81</td><td>0.81</td></tr><tr><td>Z2</td><td>1.24</td><td>1.24</td><td>1.24</td><td>1.24</td><td>1.24</td></tr><tr><td>X1</td><td>1.35</td><td>1.35</td><td>1.35</td><td>1.35</td><td>1.35</td></tr><tr><td>Y1</td><td>1.35</td><td>1.35</td><td>1.35</td><td>1.35</td><td>1.35</td></tr><tr><td>Z1</td><td>0.74</td><td>0.74</td><td>0.74</td><td>0.74</td><td>0.74</td></tr><tr><td>AmountofWater</td><td>5125</td><td>5125</td><td>6125</td><td>5375</td><td>7625</td></tr><tr><td rowspan="4">FluidCubeShake</td><td>X1</td><td>0.88</td><td>0.88</td><td>1.32</td><td>1.32</td><td>1.32</td></tr><tr><td>Y1</td><td>0.88</td><td>1.32</td><td>0.88</td><td>1.32</td><td>1.32</td></tr><tr><td>CubeNumber</td><td>1</td><td>1</td><td>2</td><td>2</td><td>3</td></tr><tr><td>Water</td><td>2173</td><td>3322</td><td>3322</td><td>4858</td><td>4983</td></tr><tr><td>GranularPush</td><td>GranularNumber</td><td>2197</td><td>4032</td><td>5832</td><td>9261</td><td>12167</td></tr></table>
354
+
355
+ Table 3: Scene Parameters for Generating the Interpolate and Extrapolate Datasets. We generate the datasets by varying the shape of container, amount of water, number of cubes, and quantity of the granular material. $Z_{i}, X_{i}, Y_{i}$ are the height, width, and depth for a container $i$ . Please refer to Figure 9 for more details.
356
+
357
+ For GranularPush, the pusher moves over the entire table; we do not report a specific range for this environment since there is no robot arm to serve as a reference.
358
+
359
+ Additional dataset samples. We show samples from the FluidPour, FluidCubeShake and GranularPush dataset in Figure 10, 11 and 12, respectively. Note that all trajectories for the extrapolate settings are used only for testing and will not show up during the training process. We include more samples from the dataset in the video format in the supplementary video.
360
+
361
+ ![](images/f5719c44f4fc8f80110db95462525a68204f7bbab2e022a954867c1288dcb6bf.jpg)
362
+ Figure 10: Samples from FluidPour Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are for interpolate data, and the bottom images illustrate the extrapolate data.
363
+
364
+ ![](images/34b771a35bb59d5c833fd8f132fc168720b4022e6b13147cccc5e1e632c4e1de.jpg)
365
+ Figure 11: Samples from FluidCubeShake Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are for interpolate data, and the bottom images illustrate the extrapolate data.
366
+
367
+ # B.2 Model Architecture
368
+
369
+ Image-conditional NeRF. We follow the architectural design of [68]. For the feature encoder, we employ a ResNet-34 backbone to extract features. We take the outputs prior to the first four pooling layers, upsample them to the same size using bilinear interpolation, and concatenate these four feature maps. We initialize the weights of the feature extractor with ImageNet pre-trained weights. For the NeRF function $f$ , we use a fully-connected ResNet architecture with 5 ResNet blocks and a width of 512.
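+
+ A rough PyTorch sketch of this local-feature encoder, and of how a projected point gathers its feature vector $\mathbf{W}(\pi(\mathbf{x}))$, is given below. Exactly which intermediate layers are tapped and the sampling details are assumptions layered on top of the description above.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ import torchvision
+
+ class FeatureVolume(torch.nn.Module):
+     def __init__(self):
+         super().__init__()
+         # torchvision >= 0.13 API; older releases use pretrained=True instead.
+         net = torchvision.models.resnet34(weights="IMAGENET1K_V1")
+         self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu)
+         self.layer1, self.layer2, self.layer3 = net.layer1, net.layer2, net.layer3
+
+     def forward(self, img):                          # img: [B, 3, H, W]
+         x = self.stem(img)                           # first feature map, H/2 resolution
+         feats, size = [x], x.shape[-2:]
+         for block in (self.layer1, self.layer2, self.layer3):
+             x = block(x)
+             feats.append(F.interpolate(x, size=size, mode="bilinear", align_corners=False))
+         return torch.cat(feats, dim=1)               # feature volume W, [B, 512, H/2, W/2]
+
+ def sample_features(W, uv):
+     # uv: [B, N, 2] projected points in normalized [-1, 1] image coordinates.
+     return F.grid_sample(W, uv.unsqueeze(2), align_corners=False).squeeze(-1).transpose(1, 2)
+ ```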
370
+
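+ To make the encoder concrete, below is a minimal PyTorch sketch of the multi-scale feature extraction described above; the module name and the exact cut points are our own illustration and assumptions, not the released code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from torchvision import models
+
+ class MultiScaleResNet34(torch.nn.Module):
+     """Sketch: take ResNet-34 activations before the first four downsampling
+     stages, bilinearly upsample them to a common size, and concatenate."""
+     def __init__(self):
+         super().__init__()
+         net = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
+         self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu)
+         self.pool = net.maxpool
+         self.layer1, self.layer2, self.layer3 = net.layer1, net.layer2, net.layer3
+
+     def forward(self, img):                                  # img: (B, 3, H, W)
+         f0 = self.stem(img)                                  # (B,  64, H/2,  W/2)
+         f1 = self.layer1(self.pool(f0))                      # (B,  64, H/4,  W/4)
+         f2 = self.layer2(f1)                                 # (B, 128, H/8,  W/8)
+         f3 = self.layer3(f2)                                 # (B, 256, H/16, W/16)
+         size = f0.shape[-2:]
+         feats = [F.interpolate(f, size, mode="bilinear", align_corners=True)
+                  for f in (f0, f1, f2, f3)]
+         return torch.cat(feats, dim=1)                       # (B, 512, H/2, W/2)
+ ```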
371
+ ![](images/fe854b1c4543d387706d496a3c79406f37d8ad055f9275b18f7b1134af6e7a90.jpg)
372
+ Figure 12: Samples from GranularPush Dataset. We show sequences of frames over time with an interval of 20 frames. The sequences above the dashed line are for interpolate data, and the bottom images illustrate the extrapolate data.
373
+
374
+ Dynamics predictor. For the edge and vertex encoders, $Q_{e}$ and $Q_{v}$ , we use 3-layer fully-connected networks activated by the ReLU function with 150 hidden units. For the propagators, $P_{e}$ and $P_{v}$ , we use a 1-layer fully-connected network followed by ReLU activation. The output dimension of the linear layer is 150.
375
+
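+ As a reference, here is a hedged PyTorch sketch of these MLPs; the input dimensions are placeholders, since they depend on the exact node and edge feature construction.
+
+ ```python
+ import torch.nn as nn
+
+ def encoder_mlp(in_dim, hidden=150):
+     """Sketch of the vertex/edge encoders Q_v and Q_e: three fully-connected
+     layers with 150 hidden units and ReLU activations."""
+     return nn.Sequential(
+         nn.Linear(in_dim, hidden), nn.ReLU(),
+         nn.Linear(hidden, hidden), nn.ReLU(),
+         nn.Linear(hidden, hidden), nn.ReLU(),
+     )
+
+ def propagator(in_dim, out_dim=150):
+     """Sketch of the propagators P_v and P_e: one linear layer followed by ReLU."""
+     return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
+
+ # Example dimensions (placeholders, not taken from the paper):
+ Q_v = encoder_mlp(in_dim=3 * 3 + 3)    # a few history velocities plus a type one-hot
+ P_v = propagator(in_dim=150 * 2)       # node encoding concatenated with aggregated messages
+ ```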
376
+ Sampling 3D points from the trained visual perception module. We sample points on a $40 \times 40 \times 40$ grid from an area of $55\mathrm{cm} \times 55\mathrm{cm} \times 55\mathrm{cm}$ and $63\mathrm{cm} \times 63\mathrm{cm} \times 63\mathrm{cm}$ at the center of the table for FluidPour and FluidCubeShake respectively, and on a $70 \times 70 \times 70$ grid from an area of $6\mathrm{cm} \times 6\mathrm{cm} \times 6\mathrm{cm}$ for GranularPush. We evaluate and include points with a density (measured by the occupancy in the predicted neural radiance fields) larger than 0.99. To reduce the total number of points, we subsample the inferred points with FPS with a ratio of $5\%$ for FluidPour and $10\%$ for FluidCubeShake and GranularPush.
377
+
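+ A hedged sketch of this sampling step is given below; `density_fn` stands in for querying the trained radiance field and is an assumption, and the FPS routine is a naive reference implementation rather than the one used in the paper.
+
+ ```python
+ import torch
+
+ def farthest_point_sampling(pts, k):
+     """Naive O(N*k) FPS: greedily pick the point farthest from the chosen set."""
+     chosen = [0]
+     dist = torch.linalg.norm(pts - pts[0], dim=1)
+     for _ in range(k - 1):
+         idx = int(torch.argmax(dist))
+         chosen.append(idx)
+         dist = torch.minimum(dist, torch.linalg.norm(pts - pts[idx], dim=1))
+     return pts[chosen]
+
+ def extract_points(density_fn, center, extent, res=40, thresh=0.99, keep_ratio=0.05):
+     """Sketch: query occupancy on a regular res^3 grid centered on the table,
+     keep points denser than `thresh`, then subsample a `keep_ratio` fraction with FPS."""
+     axes = [torch.linspace(c - e / 2, c + e / 2, res) for c, e in zip(center, extent)]
+     grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
+     occ = density_fn(grid)                       # (res**3,) occupancy in [0, 1]
+     pts = grid[occ > thresh]
+     return farthest_point_sampling(pts, max(1, int(keep_ratio * len(pts))))
+ ```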
378
+ Graph building. We set the neighbour distance threshold $\delta$ to 0.2, 0.15, and 0.15 for FluidPour, FluidCubeShake, and GranularPush, respectively. We select the threshold so that each point has on average 20-30 neighbors. Since, in FluidPour, we sample points at a lower density (around 2000 points/$m^2$), we use a larger threshold for this scenario. For FluidCubeShake and GranularPush, where the density is around 3000 points/$m^2$, we reduce the threshold by $25\%$.
379
+
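+ The corresponding edge construction can be sketched as a simple radius graph (quadratic in the number of points, which is acceptable after the FPS subsampling above):
+
+ ```python
+ import torch
+
+ def build_radius_graph(points, delta=0.2):
+     """Sketch: connect every ordered pair of points closer than the threshold delta
+     (0.2 for FluidPour, 0.15 for FluidCubeShake and GranularPush)."""
+     dist = torch.cdist(points, points)                      # (N, N) pairwise distances
+     src, dst = torch.nonzero(dist < delta, as_tuple=True)
+     mask = src != dst                                       # drop self-loops
+     return torch.stack([src[mask], dst[mask]], dim=0)       # (2, E) edge index
+ ```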
380
+ We found that if the threshold is too small, the performance will degrade significantly since each particle will only receive messages from a few neighbors (and miss out on the larger context). On the other hand, setting the threshold too large will cause the training time to increase since the graph will have more edges. We found that setting the threshold around the right scale generally leads to more effective training of a reasonable dynamics network.
381
+
382
+ # B.3 Training Details
383
+
384
+ The models are implemented in PyTorch. We train the perception module using the Adam optimizer with a learning rate of $1e-4$, and we reduce the learning rate by $80\%$ when the performance on the validation set has stopped improving for 3 epochs. To compute the rendering loss when training the perception module, we sample 64 points along each ray in the scene and set the ray-batch size of the NeRF query function $f$ to $1024\times 32$. Training the perception module on a single scenario takes around 5 hours on one RTX-3090.
385
+
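+ The optimizer and learning-rate schedule above map directly onto standard PyTorch components; a minimal sketch (the wrapped module and training loop are placeholders) is:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def make_optimization(module: nn.Module):
+     """Adam at 1e-4; cut the learning rate by 80% (factor=0.2) once the validation
+     metric has not improved for 3 epochs."""
+     optimizer = torch.optim.Adam(module.parameters(), lr=1e-4)
+     scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
+         optimizer, mode="min", factor=0.2, patience=3)
+     return optimizer, scheduler
+
+ # usage: call scheduler.step(val_loss) once per epoch with the validation loss
+ optimizer, scheduler = make_optimization(nn.Linear(4, 4))   # placeholder module
+ ```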
386
+ We train the dynamics simulator using the Adam optimizer with a learning rate of $1e-4$, and we reduce the learning rate by $80\%$ when the performance on the validation set has stopped improving for 3
387
+
388
+ epochs. The batch size is set to 4. We train the model for 20, 30, and 40 epochs for FluidPour, FluidCubeShake, and GranularPush, respectively. It takes around $10 \sim 15$ hours to train the dynamics model in one environment on one single RTX-3090.
389
+
390
+ # B.4 Graph-Based Dynamics Model without Particle-level Correspondence
391
+
392
+ The velocity of an object provides critical information about how the object will move in the future, yet we do not have access to such information when the object cannot be tracked. As described in Section 3.2, the attributes $a_i^v$ of a vertex $v_i$ in the built graph consist of (1) the velocity of this point in past frames and (2) the material type of the point (rigid, fluid, granular). To get the velocity of a vertex $v$, we need its past positions. However, since the point clouds are inferred independently from each frame, we have no point correspondence between frames and therefore do not know how each point moves over time.
393
+
394
+ To address the problem, we leverage the fact that some objects in the scene are easier to track, and we use the motion of these trackable objects to infer motion for the untrackable units. We assume that we know the densely-labeled states of certain known fully-actuated shapes, such as desks and cups connected to the robot arms. Consider one specific scenario in which a cup of water is poured into another cup. In this case, we have two different types of points, points for the fluid and points for the cups, and we denote their states at time step $t$ as $V_{P}^{t} = \{v_{P,i}^{t}\}$ and $V_{S}^{t} = \{v_{S,i}^{t}\}$, respectively. For the particle encoder $Q_{v}$, if a particle belongs to a cup, then its input contains the $n_{s}$ history states before $t_0$: $\{V_S^{(t_0 - n_s):t_0}\}$. If the particle belongs to the water, then we have no history states, so the input to $Q_{v}$ is all-zero.
395
+
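+ A hedged sketch of the resulting vertex attributes is shown below; the tensor layout and the two-way tracked/untracked flag are simplifications we introduce for illustration, not the exact feature design of the paper.
+
+ ```python
+ import torch
+
+ def vertex_attributes(positions, is_tracked, history):
+     """positions: (N, 3) current points; is_tracked: (N,) bool, True for fully-actuated
+     shapes (e.g. cups) with known past states; history: (N, T, 3) past positions,
+     only meaningful where is_tracked. Untracked (fluid/granular) points get zeros."""
+     vel = history[:, 1:] - history[:, :-1]                             # finite-difference velocities
+     vel = torch.cat([vel, positions.unsqueeze(1) - history[:, -1:]], dim=1)
+     vel = vel * is_tracked[:, None, None]                              # zero history for untracked points
+     type_flag = torch.stack([is_tracked, ~is_tracked], dim=1).float()  # tracked vs. untracked
+     return torch.cat([vel.reshape(len(positions), -1), type_flag], dim=1)
+ ```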
396
+ By adding the relative position between receiver and sender points, the momentum of the cup points $V_{S}$ can be passed to the fluid points $V_{P}$. This matches human intuition: we can form a rough prediction of the water's movement simply from the past movement of the cup, without knowing the past movement of the water.
397
+
398
+ Following [47], we use the velocity of points and their relative position as inputs to the dynamics module instead of using the absolute positions of the points. This ensures the model is translation-invariant so the learned dynamics model can be shared across different spatial locations.
399
+
400
+ # B.5 Inference Speed of Our Model
401
+
402
+ The prediction speed of the dynamics module depends on the number of input particles, and it takes around 0.1s for graphs with around 300 nodes in FluidShake and FluidPour, and around 0.2s for scenes with $700+$ nodes in GranularPush.
403
+
404
+ For our visual module, the main time cost comes from NeRF sampling: it takes about 0.2s to sample from the grid described in the experiment section of our paper. The sampling is run in blocks with a block size of 1000, which uses about 4 GB of a V100 GPU, and it can be made even faster with larger blocks. The sub-sampling steps (FPS, segmentation) are fast since they are implemented in parallel and take less than 5 ms.
405
+
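+ The block-wise NeRF querying can be sketched as follows (the density function is an assumption standing in for the trained radiance field):
+
+ ```python
+ import torch
+
+ def query_in_blocks(density_fn, points, block_size=1000):
+     """Evaluate the radiance-field density over a large set of grid points in chunks
+     so GPU memory stays bounded; larger blocks trade memory for speed."""
+     out = []
+     with torch.no_grad():
+         for start in range(0, len(points), block_size):
+             out.append(density_fn(points[start:start + block_size]))
+     return torch.cat(out, dim=0)
+ ```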
406
+ # C Potential Societal Impact
407
+
408
+ Our work shows the possibility of learning dynamics models from raw sensory inputs, opening up opportunities to automate the design of differentiable physics engines through data-driven learning algorithms. The resulting system can potentially benefit many downstream tasks, including general scene understanding, robotics manipulation, the construction of 3D generative models, and inverse tasks like planning/control and inverse design. Furthermore, predictions from our model are highly interpretable, which makes it straightforward to explain model behaviors and re-purpose the outputs for other downstream applications.
409
+
410
+ Though data-driven approaches are potentially more scalable with enough data, concerns still exist that it might be hard to ensure the robustness of the model under sensor noise and adversarial attacks. It also becomes less clear how to fully mitigate data biases. Therefore, bringing in advanced techniques from ML robustness will be one critical future avenue to pursue.
411
+
412
+ ![](images/dcb7915087941b7f73e618b7cce4267711ed614ce3b585378c59a6a9e60374cb.jpg)
413
+ Figure 13: SAM Working on FluidCubeShake: Recent large segmentation models can readily generate masks for the different objects in the scene.
414
+
415
+ # D Important Clarifications and Discussions
416
+
417
+ Q: What is the input of 3D-IntPhys?
418
+
419
+ Video inputs and object instance masks based on a color assumption.
420
+
421
+ Q: How does our work differ from [34, 35, 16]?
422
+
423
+ 1. Compared with NeRF-dy [34]: our method uses an explicit 3D representation instead of an implicit one, and we show in our paper that our method generalizes better than [34].
424
+ 2. Compared with VPGL [35]: (1) [35] requires a ground-truth 3D particle set as supervision to train the visual frontend, while our 3D-IntPhys does not need particle-level supervision. (2) In [35], the 3D representation is a particle set, which is an ordered list; as a result, it can only generate a fixed number of points. In contrast, our method produces dense representations that are not limited to ordered sets, making it more flexible and adaptable to systems of varying sizes. (3) [35] assumes a dynamics model learned from simulation as a dynamics prior, while we learn the dynamics model from data.
425
+ 3. Compared with Comp nerfdyn [16]: [16] only works on rigid objects and a rope that exhibits only slight deformation; most of the objects undergo no topological changes, and they are all constrained to move in a 2D plane. So while the object-centric dynamics model used in [16] can solve the tasks in their paper, the object-centric representation is not suitable for learning the complex dynamics of fluids or granular materials, as in our paper. Our settings contain much more diverse 3D dynamics of challenging materials, which cannot be handled by [16].
426
+
427
+ Q: Is the color segmentation of the fluid objects a reasonable assumption?
428
+
429
+ It should be noted that the color-based segmentation does not make the challenging problem of learning 3D intuitive physics any easier, since the task focuses primarily on learning complex visual dynamics from images.
430
+
431
+ We want to emphasize that the work focuses more on learning complex visual dynamics from images, as opposed to solving object segmentation in general. Learning fluids dynamics from videos is a challenging task, and there are only a few existing works. NeRF-dy is the closest to us, yet the model's generalization ability is limited. We have shown in the proposed work that we can significantly improve the generalization ability by operating with a hybrid of implicit and explicit, as opposed to pure implicit, 3D representations. We agree object segmentation is a critical visual understanding
432
+
433
+ problem, and solving it is an important next step to getting a more general visual dynamics learning framework.
434
+
435
+ With recent advancements such as SAM [29] and SEER [71], which focus on segmentation in real-world scenarios, the possibility of video segmentation without the need for annotations has emerged (as is shown in Figure 13). This development paves the way for leveraging existing large-scale models to enhance the segmentation pipeline, offering great promise for future applications.
436
+
437
+ Q: Since the fluid has zero velocities, how can the model predict the intuitive dynamics?
438
+
439
+ The intuition is that we can infer the water movement from the container's movement. We also assume that the initial velocity of water is nearly zero, which is also used in [50], so the momentum can be gradually passed from the container to the water.
440
+
441
+ We adopt this assumption so that the intuitive physics model can be learned from (1) particles sampled from the neural radiance field, which are not stable, and (2) point clouds without one-to-one correspondence. The results show that we can learn reasonable dynamics (water poured out of a cup, water falling in a container, cubes moving in water, and granular materials pushed away by a pusher). This also shows the potential of distribution-based losses for learning visual dynamics.
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc2cb47e0509aa77d598e3bbf608ff1534f77c533766ec891a9e40a802fe0ded
3
+ size 1048489
3dintphystowardsmoregeneralized3dgroundedvisualintuitivephysicsunderchallengingscenes/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9446478059c03bfe8a345964e7801503c38f7efac9d0071b83715670917d46a0
3
+ size 520402
3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9db1ed9c0748e094102ae6e5ec3462c22dfbd68d1e1f2b36503d3e187932ecf
3
+ size 69483
3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c7cd884a1c8536fe90ac6be6afbababddc89b8ff0d54a116c65f75df679e998
3
+ size 87348
3dllminjectingthe3dworldintolargelanguagemodels/e38cecd1-3c25-4b8b-a733-528a1e9ea802_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1317b63cc8e7995c6b5de12f1e84c75f083dfc41900ec25b2ec7412e46eaac4
3
+ size 4144084
3dllminjectingthe3dworldintolargelanguagemodels/full.md ADDED
@@ -0,0 +1,246 @@
 
 
 
 
1
+ # 3D-LLM: Injecting the 3D World into Large Language Models
2
+
3
+ Yining Hong
4
+
5
+ University of California, Los Angeles
6
+
7
+ Haoyu Zhen
8
+
9
+ Shanghai Jiao Tong University
10
+
11
+ Peihao Chen
12
+
13
+ South China University of Technology
14
+
15
+ Shuhong Zheng
16
+
17
+ University of Illinois Urbana-Champaign
18
+
19
+ Yilun Du
20
+
21
+ Massachusetts Institute of Technology
22
+
23
+ Zhenfang Chen
24
+
25
+ MIT-IBM Watson AI Lab
26
+
27
+ Chuang Gan
28
+
29
+ UMass Amherst and MIT-IBM Watson AI Lab
30
+
31
+ # Abstract
32
+
33
+ Large language models (LLMs) and Vision-Language Models (VLMs) have been proven to excel at multiple tasks, such as commonsense reasoning. Powerful as these models can be, they are not grounded in the 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, layout, and so on. In this work, we propose to inject the 3D world into large language models and introduce a whole new family of 3D-LLMs. Specifically, 3D-LLMs can take 3D point clouds and their features as input and perform a diverse set of 3D-related tasks, including captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on. Using three types of prompting mechanisms that we design, we are able to collect over 1M 3D-language data covering these tasks. To efficiently train 3D-LLMs, we first utilize a 3D feature extractor that obtains 3D features from rendered multi-view images. Then, we use 2D VLMs as our backbones to train our 3D-LLMs. By introducing a 3D localization mechanism, 3D-LLMs can better capture 3D spatial information. Experiments on the held-out evaluation datasets ScanQA, SQA3D, and 3DMV-VQA show that our model outperforms state-of-the-art baselines. In particular, experiments on ScanQA show that our model outperforms state-of-the-art baselines by a large margin (e.g., the BLEU-1 score surpasses the state-of-the-art score by $9\%$). Furthermore, experiments on our held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative examples also show that our model could perform more tasks beyond the scope of existing LLMs and VLMs. Project Page: https://vis-www.cs.umass.edu/3d11m/.
34
+
35
+ # 1 Introduction
36
+
37
+ In the past several years, we have witnessed a surge of large language models (LLMs) (e.g., GPT4 [33]) that excel at multiple tasks, such as communication and commonsense reasoning. Recent works have explored aligning images and videos with LLM for a new generation of multi-modal LLMs (e.g., Flamingo [15], BLIP-2 [29]) that equip LLMs with the ability to understand and reason about 2D images. However, as powerful as the models can be in communication and reasoning,
38
+
39
+ ![](images/14873319919fb6039fa4911382e3ae3f9adc70016587371a27bf881bfd2d4e4f.jpg)
40
+ Figure 1: Examples from our generated 3D-language data, which covers multiple 3D-related tasks.
41
+
42
+ they are not grounded in the real 3D physical world, which involves richer concepts such as spatial relationships, affordances, physics, interaction, and so on. Therefore, such LLMs pale in comparison with the robots depicted in sci-fi movies - the assistants that can understand 3D environments and perform reasoning and planning based on those understandings.
43
+
44
+ To this end, we propose to inject the 3D world into large language models, and introduce a whole new family of 3D-LLMs that could take 3D representations (i.e., 3D point clouds with their features) as input, and perform a series of 3D-related tasks. By taking the 3D representations of scenes as input, LLMs are blessed with twofold advantages: (1) long-term memories about the entire scene can be stored in the holistic 3D representations, instead of episodic partial-view observations. (2) 3D properties such as affordances and spatial relationships can be reasoned from 3D representations, far beyond the scope of language-based or 2D image-based LLMs.
45
+
46
+ One major challenge of training the proposed 3D-LLMs lies in data acquisition. Unlike the vast amount of paired 2D-images-and-text data on the Internet, the scarcity of 3D data hinders the development of 3D-based foundation models. 3D data paired with language descriptions are even harder to obtain. To address this, we propose a set of unique data generation pipelines that could generate large-scale 3D data paired with language. Specifically, we make use of ChatGPT [33] and devise three efficient prompting procedures for communication between 3D data and language. In this way, we are able to obtain approximately one million 3D-language data covering a diverse set of
47
+
48
+ tasks, including but not limited to 3D captioning, dense captioning, 3D question answering, 3D task decomposition, 3D grounding, 3D-assisted dialog, navigation and so on, as shown in Figure 1.
49
+
50
+ The next challenge resides in how to obtain meaningful 3D features that could align with language features for 3D-LLMs. One way is to train 3D encoders from scratch using a similar contrastive-learning paradigm for the alignment between 2D images and language (e.g., CLIP [36]). However, this paradigm consumes tremendous data, time, and GPU resources. From another perspective, there are numerous recent works that build 3D features from 2D multi-view images (e.g., concept fusion [24], 3D-CLR [20]). Inspired by this, we also utilize a 3D feature extractor that constructs 3D features from the 2D pretrained features of rendered multi-view images. Recently, there are also quite a few visual-language models (e.g., BLIP-2 [29], Flamingo [15]) utilizing the 2D pretrained CLIP features for training their VLMs. Since our extracted 3D features are mapped to the same feature space as 2D pretrained features, we can seamlessly use 2D VLMs as our backbones and input the 3D features for the efficient training of 3D-LLMs.
51
+
52
+ One crucial aspect of 3D-LLMs, different from vanilla LLMs and 2D VLMs, is that 3D-LLMs are expected to have an underlying 3D spatial sense of information. Thus, we develop a 3D localization mechanism that bridges the gap between language and spatial locations. Specifically, we append 3D position embeddings to the extracted 3D features to better encode spatial information. In addition, we append a series of location tokens to the 3D-LLMs, and localization can be trained via outputting location tokens given the language descriptions of specific objects in the scenes. In this way, 3D-LLMs could better capture 3D spatial information.
53
+
54
+ To sum up, our paper has the following contributions:
55
+
56
+ - We introduce a new family of 3D-based Large Language models (3D-LLMs) that can take 3D points with features and language prompts as input, and perform a variety of 3D-related tasks. We focus on tasks beyond the scope of vanilla LLMs or 2D-LLMs, such as tasks about holistic scene understanding, 3D spatial relationships, affordances and 3D planning.
57
+ - We devise novel data collection pipelines that could generate large-scale 3D-language data. Based on the pipelines, we collect a dataset that has over 1M 3D-language data that cover a diverse set of 3D-related tasks, including but not limited to 3D captioning, dense captioning, 3D question answering, task decomposition, 3D grounding, 3D-assisted dialog, navigation, and so on.
58
+ - We use a 3D feature extractor that extracts meaningful 3D features from rendered multi-view images. We utilize 2D pretrained VLMs as our backbones for efficient training. We introduce a 3D localization mechanism for training the 3D-LLMs to better capture 3D spatial information.
59
+ - Experiments on the held-out evaluation datasets ScanQA, SQA3D, and 3DMV-VQA show that 3D-LLMs outperform state-of-the-art baselines. In particular, 3D-LLMs outperform baselines by a large margin on ScanQA (e.g., $9\%$ for BLEU-1 and $10\%$ for CIDEr). Experiments on held-in datasets for 3D captioning, task composition, and 3D-assisted dialogue show that our model outperforms 2D VLMs. Qualitative studies further demonstrate that our model is able to handle a diverse set of tasks.
60
+ - We release our 3D-LLMs, the 3D-language dataset, and language-aligned 3D features of the dataset for future research development ${}^{1}$ .
61
+
62
+ # 2 Related Works
63
+
64
+ Large Language Models. Our work is closely related to large language models [4, 14, 37, 10, 34] (LLMs) like GPT-3 [4] and PaLM [10], which are able to handle different language tasks with a single model and show strong generalization abilities. These models are typically trained on massive textual data with self-supervised training targets like predicting the next tokens [4, 37] or reconstructing the masked tokens [14, 38]. To better align these LLMs' predictions to human instructions, improve the models' generalization abilities on unseen tasks, a series of instruction tuning methods [35, 42] and datasets [11, 13] have been proposed. In this work, we aim to inject the 3D world into large language models, understanding rich 3D concepts such as spatial relations, affordances, and physics.
65
+
66
+ Vision-Language Pre-trained Models. Our work is also related to vision-language pre-trained models that connect images and natural language [30, 31, 18, 36, 25]. Some research [36, 25] learn to train models from scratch with massive image-language pairs and apply them to downstream tasks like visual question answering [19, 47], captioning [7], and referring expression comprehension [46] with finetuning. Other researchers have connected pre-trained vision models and pre-trained LLMs
67
+
68
+ ![](images/945f996c9a5c9375481ce577ac0c1f935e212e80b3111af4588e5b2d9ce451d8.jpg)
69
+ Figure 2: 3D-language data generation pipelines.
70
+
71
+ with additional learnable neural modules like perceiver [2] and QFormers [30], leveraging perception abilities in pre-trained vision models, and reasoning and generalization capacities in LLMs. Inspired by these previous works, we plan to build an AI assistant that could understand the 3D world and perform corresponding 3D reasoning and planning. This is not trivial and we need to overcome obstacles like how to handle the problem of data sparsity, how to align the 3D world with 2D images, and how to capture 3D spatial information.
72
+
73
+ 3D & Language. Another line of research that is similar to ours is 3D and language [5, 45, 8, 20, 1, 16, 22, 45, 3]. ScanQA [45] requires a model to answer questions related to the 3D world; ScanRefer [5] asks a model to localize a region that the text expression refers to; 3D captioning [8] tests models' abilities to generate captions describing the 3D scenes. However, these 3D tasks and their corresponding models are usually task-specific and could only handle cases within the same distribution of the training sets without generalization. Different from them, we aim to build a 3D model that could handle different tasks at the same time and enable new abilities like 3D-assistant dialog and task decomposition.
74
+
75
+ # 3 3D-Language Data Generation
76
+
77
+ The community has witnessed the proliferation of multi-modal data thanks to easy access to a tremendous amount of 2D image and text pairs on the internet. However, when it comes to 3D-related data, obtaining multimodal resource is not easy, due to not only the scarcity of 3D assets, but also the difficulty of providing language data for 3D assets. There are some existing datasets that contain 3D-language data (e.g., ScanQA [45], ScanRefer [5]). However, they are limited with regard to both quantity and diversity, restricted to only one task per dataset. How to generate a 3D-language dataset that can be utilized for all kinds of 3D-related tasks is well worth delving into.
78
+
79
+ Inspired by the recent success of large language models like GPT [33], we propose to leverage such models for 3D-language data collection. Specifically, as shown in Figure 2, we have three ways to prompt a text-only GPT for generating data. 1) boxes-demonstration-instruction based prompting. We input the axis-aligned bounding boxes (AABB) of both the rooms and the objects in the 3D
80
+
81
+ ![](images/e2c45a87bfa2e029cfb620bd7b4a8c8cfee73c780487ec6c0ac56607459dcd62.jpg)
82
+ Figure 3: Overview of our 3D-LLM framework. The first two columns show our 3D feature extractor. We first render a few multi-view images from the 3D scene, extract 2D dense features, and then construct 3D features from these multi-view images using three kinds of methods. And then, the 3D features and input language prompts are input to the 3D-LLMs to generate responses.
83
+
84
+ scenes, providing information about the semantics and spatial locations of the scene. We then provide specific instructions to the GPT model to generate diverse data. We give 0-3 few-shot demonstration examples of the GPT model showing what kind of data it is instructed to generate. 2) ChatCaptioner based prompting. We utilize techniques similar to [48], in which ChatGPT is prompted to ask a series of informative questions about an image and BLIP-2 [29] answers the questions. In order to collect 3D-related data, we first sample several images from different views of a 3D scene. These images are fed into ChatGPT and BLIP-2 to get the caption of each image. We then leverage ChatGPT to summarize all these captions, which contain information about different regions, to form a global 3D description of the entire scene. 3) Revision based prompting. It can be used to transfer one type of 3D data to another.
85
+
86
+ Given the prompting pipelines, GPT is able to generate various types of 3D-language data as summarized in Figure 1. More data generation details and prompt designs are shown in the Appendix.
87
+
88
+ We mainly establish our 3D-language dataset upon several 3D assets:
89
+
90
+ - Objaverse is a universe of 800K 3D objects. However, since the language descriptions were extracted from online sources and not examined by humans, most objects have very noisy descriptions (e.g., with URLs) or no descriptions. We utilize ChatCaptioner-based prompting to generate high-quality 3D-related descriptions for the scenes and revision-based prompting to generate questions.
91
+ - Scannet [12] is a richly-annotated dataset of approximately 1k 3D indoor scenes. It provides semantics and bounding boxes of the objects in the scenes.
92
+ - Habitat-Matterport (HM3D) [39] is a dataset of 3D environments of embodied AI. HM3DSem [44] further adds semantic annotations and bounding boxes for more than 200 scenes of HM3D. We use the pre-segmented rooms of HM3D in 3D-CLR [20].
93
+
94
+ # 4 3D-LLM
95
+
96
+ # 4.1 Overview
97
+
98
+ In this section, we introduce how we train our 3D-LLMs. We argue that it's hard to train 3D-LLMs from scratch, since our collected 3D-language dataset is still not the size of billion-scale image-language dataset used to train 2D VLMs. Furthermore, for 3D scenes, there are no available pretrained encoders like those for 2D images (e.g., CLIP ViT encoders). Thus, retraining 3D-language models from scratch is data-inefficient and resource-heavy. Recently, researchers have proposed to extract 3D features from 2D multi-view images [24, 20]. Using these alignment methods, we could use pretrained image encoders to extract image features, and then map the features to the 3D data. Since the pretrained image features serve as inputs to 2D VLMs, the mapped 3d features of the same feature space can also be seamlessly fed into the pretrained 2D VLMs, which we use as our backbones to train 3D-LLMs. We also propose a 3D localization mechanism to boost the model's ability to capture 3D spatial information. Figure 3 shows our framework.
99
+
100
+ # 4.2 3D Feature Extractor
101
+
102
+ The first step of training 3D-LLMs is to build meaningful 3D features that could be aligned with language features. For 2D images, there exist feature extractors like CLIP, which learn visual models from language supervision. The models are pretrained using billion-scale internet data of image-language pairs. It's hard to pre-train such feature learners from scratch, since there are no 3D-language assets comparable to internet-scale image-language pairs in terms of quantity and diversity.
103
+
104
+ On the contrary, numerous methods have been proposed to extract 3D features from 2D multi-view images [24, 20, 17, 21]. Inspired by these works, we extract features for 3D points by rendering the 3D scenes in several different views, and construct 3D features from rendered image features.
105
+
106
+ We first extract pixel-aligned dense features for rendered images following [24]. Then, we utilize three methods to construct 3D features from rendered image features. These methods are designed for different types of 3D data.
107
+
108
+ - Direct Reconstruction. We directly reconstruct the point cloud from rgbd images rendered from the 3D data using ground-truth camera matrices. The features are directly mapped to the reconstructed 3D points. This method is suitable for rendered rgbd data with perfect camera poses and intrinsics.
109
+ - Feature Fusion. Similar to [24], we fuse 2D features into 3D maps using gradslam [27]. Different from dense mapping methods, the features are fused in addition to depths and colors. This method is suitable for 3D data with noisy depth map renderings, or noisy camera poses and intrinsics.
110
+ - Neural Field. We utilize [20], which constructs 3D compact representation using neural voxel field [40]. Specifically, each voxel in the field has a feature in addition to density and color. Then we align 3D features in the rays and 2D features in the pixels using MSE loss. This method is for 3D data with RGB renderings but no depth data, and noisy camera poses and intrinsics.
111
+
112
+ In this way, we are able to obtain the $N \times \mathcal{D}_v$-dim 3D features of each 3D scene, where $N$ is the number of points in the point cloud, and $\mathcal{D}_v$ is the feature dimension.
113
+
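+ For intuition, a minimal sketch of the Direct Reconstruction option (our own illustration, not the released code) is shown below: every pixel with valid depth is unprojected with the ground-truth intrinsics and pose, carrying its pixel-aligned 2D feature along.
+
+ ```python
+ import torch
+
+ def unproject_rgbd(depth, feat, K, cam2world):
+     """depth: (H, W); feat: (H, W, D_v) pixel-aligned 2D features; K: (3, 3) intrinsics;
+     cam2world: (4, 4) camera pose. Returns an (N, 3) point cloud with (N, D_v) features."""
+     H, W = depth.shape
+     v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
+     z = depth.reshape(-1)
+     x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
+     y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
+     pts_cam = torch.stack([x, y, z, torch.ones_like(z)], dim=0)   # homogeneous coordinates
+     pts_world = (cam2world @ pts_cam)[:3].T                       # (H*W, 3)
+     valid = z > 0                                                 # keep pixels with valid depth
+     return pts_world[valid], feat.reshape(-1, feat.shape[-1])[valid]
+ ```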
114
+ # 4.3 Training 3D-LLMs
115
+
116
+ # 4.3.1 2D VLMs as backbones
117
+
118
+ In addition to the feature extractor, training 3D-LLMs from scratch is also non-trivial. In fact, according to [29, 15], the training of 2D VLMs only begins to show "signs of life" after consuming half a billion images. They usually use frozen and pre-trained image encoders such as CLIP to extract features for 2D images. Considering that with 3D feature extractor, the 3D features can be mapped into the same feature space as 2D images, it's reasonable to use these 2D VLMs as our backbones.
119
+
120
+ The perceiver architecture proposed by [23] leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to handle very large inputs of arbitrary input sizes, thus can tackle different modalities. This architecture is utilized in VLMs like Flamingo [15]. BLIP-2 [29] also utilizes a similar structure called QFormer. The 2D image features, output from frozen image encoders, are flattened and sent to the perceiver to generate a fixed-sized input. Given that our 3D features are in the same feature space as the 2D features by the 3D feature extractor, and that perceiver is able to handle inputs of arbitrary input sizes of the same feature dimension, point cloud features with arbitrary sizes could also be fed into the perceiver. Therefore, we use the 3D feature extractor to extract the 3D features in the same feature space as the features of the frozen image encoders. Then, we use pretrained 2D VLMs as our backbones, input the aligned 3D features to train 3D-LLMs with our collected 3D-language dataset.
121
+
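+ The key property used here is that a fixed set of learned queries can cross-attend to a variable number of input tokens. The sketch below illustrates this idea generically; it is not BLIP-2's QFormer or Flamingo's perceiver themselves, and the sizes are placeholders.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LatentResampler(nn.Module):
+     """A fixed set of learned queries cross-attends to a variable number of 3D feature
+     tokens, mapping point clouds of arbitrary size to a fixed-size LLM input."""
+     def __init__(self, dim=1408, num_queries=32, num_heads=8):
+         super().__init__()
+         self.queries = nn.Parameter(0.02 * torch.randn(num_queries, dim))
+         self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
+
+     def forward(self, feats_3d):                    # feats_3d: (B, N, dim), N varies
+         q = self.queries.unsqueeze(0).expand(feats_3d.shape[0], -1, -1)
+         out, _ = self.attn(q, feats_3d, feats_3d)   # queries attend to the point features
+         return out                                  # (B, num_queries, dim)
+ ```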
122
+ # 4.3.2 3D Localization Mechanism
123
+
124
+ Notice that since 3D features are reconstructed via a 2D pretrained feature extractor that has been aligned with language (e.g., CLIP [36] and EVA-CLIP [41]), localization can be performed by directly calculating the similarity between 3D features and language features. However, apart from building 3D features that can be aligned with language semantics, it's also essential that the model itself capture 3D spatial information. To this end, we propose a 3D localization mechanism that boosts 3D LLMs' abilities to absorb spatial information. It consists of two parts:
125
+
126
+ Augmenting 3D features with position embeddings Besides the 3D features aggregated from 2D multi-view features, we also add position embeddings to the features. Supposing the feature dim is $\mathcal{D}_v$, we generate sin/cos position embeddings for each of the three dimensions, each with an embedding size of
127
+
128
+ $\mathcal{D}_v / 3$ . We concatenate the embeddings of all three dimensions, and add them to the 3D features with a weight.
129
+
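+ A hedged sketch of such an embedding is given below; the frequency schedule, zero-padding, and mixing weight are our own assumptions for illustration.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def xyz_position_embedding(coords, dim_v=1408, weight=0.1):
+     """coords: (N, 3) point positions. Build sin/cos features of roughly dim_v/3 per
+     axis, concatenate them, zero-pad to dim_v, and scale by a weight before adding
+     them to the 3D features."""
+     half = (dim_v // 3) // 2
+     freqs = 1.0 / (10000 ** (torch.arange(half).float() / half))
+     embs = []
+     for axis in range(3):
+         angles = coords[:, axis:axis + 1] * freqs          # (N, half)
+         embs.append(torch.cat([angles.sin(), angles.cos()], dim=1))
+     pe = torch.cat(embs, dim=1)                            # (N, 6 * half) <= dim_v
+     return weight * F.pad(pe, (0, dim_v - pe.shape[1]))
+
+ # usage: feats_3d = feats_3d + xyz_position_embedding(points, dim_v=feats_3d.shape[-1])
+ ```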
130
+ Augmenting LLM vocabularies with location tokens In order to align 3D spatial locations with LLMs, we propose to embed 3D locations in the vocabularies, following [6] and [43]. To be specific, the region to be grounded can be denoted as a sequence of discrete tokens representing the bounding box in the form of AABB. The continuous corner coordinates of the bounding boxes are uniformly discretized to voxel integers as location tokens $\langle x_{min}, y_{min}, z_{min}, x_{max}, y_{max}, z_{max} \rangle$ . After adding these additional location tokens, we unfreeze the weights for these tokens in the input and output embeddings of language models.
131
+
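+ The discretization itself is straightforward; a sketch is shown below, where the number of bins and the token spelling are assumptions rather than the paper's exact choices.
+
+ ```python
+ def box_to_location_tokens(box, scene_min, scene_max, num_bins=256):
+     """box: (x_min, y_min, z_min, x_max, y_max, z_max) in world coordinates.
+     Uniformly discretize each corner coordinate into an integer bin and emit one
+     location token per coordinate, to be appended to the LLM vocabulary."""
+     tokens = []
+     for corner in (box[:3], box[3:]):
+         for value, lo, hi in zip(corner, scene_min, scene_max):
+             b = int((value - lo) / (hi - lo) * (num_bins - 1) + 0.5)   # round to nearest bin
+             tokens.append(f"<loc{min(max(b, 0), num_bins - 1)}>")
+     return tokens
+
+ # e.g. box_to_location_tokens((0.1, 0.2, 0.0, 0.9, 1.1, 0.7), (0, 0, 0), (2, 2, 2))
+ ```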
132
+ # 5 Experiments
133
+
134
+ We first introduce the architecture, and training and evaluation protocols. In Sec 5.1, we analyze the held-out experiments on ScanQA [3], SQA3D [32], and 3DMV-VQA [20] Dataset. Sec 5.2 covers more analysis on held-in evaluation and qualitative examples.
135
+
136
+ Architecture We experiment on three backbone 2D VLMs for 3D-LLMs: Flamingo 9B, BLIP-2 Vit-g Opt2.7B, BLIP-2 Vit-g FlanT5-XL. For BLIP-2, during pre-training the 3D-LLMs, we initialize the model from BLIP-2 checkpoints released in LAVIS library [28], and finetune the parameters for the QFormer. 3D features are 1408-dim features, same as EVA_CLAMP [41] hidden feature dim used by BLIP-2. We keep most parts of the LLMs (i.e., Opt and FlanT5) frozen, except the weights for the newly-added location tokens in the input and the output embeddings. For Flamingo, we initialize the model from the Flamingo9B checkpoint released in OpenFlamingo repository [2]. We finetune the parameters for perceiver, gated cross attention layers, and the weights for additional location tokens in the input and output embeddings. 3D features are 1024-dim features, same as CLIP hidden feature dim used by Flamingo. For generating class-agnostic (generic) object masks for the 2D pixel-aligned dense feature extraction, we follow [24] and use the Mask2Former (M2F) [9] or the segment anything (SAM) [26].
137
+
138
+ Training & Evaluation Datasets & Protocols We split our datasets into two genres, held-in datasets and held-out datasets. Specifically, our 3D-language data generation pipeline generates the held-in datasets of multiple tasks. We utilize training sets of held-in datasets for pre-training foundation 3D-LLMs, and their validation sets can be applied for held-in evaluation. During pre-training, we mix the held-in datasets of all tasks. The models are trained with the standard language modeling loss to output responses. Held-out datasets, on the other hand, are not used in training the foundation 3D-LLMs. We use three held-out 3D question answering datasets for held-out evaluation: ScanQA, SQA3D and 3DMV-VQA.
139
+
140
+ # 5.1 Held-Out Evaluation
141
+
142
+ # 5.1.1 Experiments on ScanQA
143
+
144
+ We finetune our pretrained 3D-LLMs on the ScanQA dataset and compare with baseline models.
145
+
146
+ Baselines & Evaluation Metrics We include representative baseline models on the benchmark. ScanQA is the state-of-the-art method on the benchmark that uses VoteNet to obtain object proposals, and then fuse them with language embeddings. ScanRefer+MCAN is a baseline that identifies the referred object and the MCAN model is applied to the image surrounding the localized object. VoteNet+MCAN detects objects in a 3D space, extracts their features, and uses them in a standard VQA model. Notably, these baseline models all extract explicit object representations from a pretrained localization module. In addition to these baselines, we also design several LLM-based baselines. LLaVA is a visual instruction tuning that connects a vision encoder and LLM for general-purpose visual and language understanding. We use its pretrained model and do zero-shot evaluation on our dataset. We use a single random image as input. We use LLaVA 13B model. ULIP encoders + LLMs use existing pre-trained 3D encoders with LLMs, for comparison between 3D pre-trained encoders, and 2D encoders for feature encoding. Single Image + Pretrained VLMs use our 2D VLM backbones (i.e., flamingo and BLIP-2), replace the 3D inputs of 3D-LLMs with single image features to train the models, and then finetune on ScanQA dataset. Multi-View Image + Pretrained VLMs use our 2D VLM backbones, replace the 3D inputs of 3D-LLMs with concatenated features of multi-view images to train the models, and then finetune on ScanQA dataset. We report BLEU, ROUGE-L, METEOR, CIDEr for robust answer matching. We also use exact match (EM) metric.
147
+
148
+ <table><tr><td></td><td>B-1</td><td>B-2</td><td>B-3</td><td>B-4</td><td>METEOR</td><td>ROUHE-L</td><td>CIDER</td><td>EM</td></tr><tr><td>VoteNet+MCAN*</td><td>28.0</td><td>16.7</td><td>10.8</td><td>6.2</td><td>11.4</td><td>29.8</td><td>54.7</td><td>17.3</td></tr><tr><td>ScanRefer+MCAN*</td><td>26.9</td><td>16.6</td><td>11.6</td><td>7.9</td><td>11.5</td><td>30</td><td>55.4</td><td>18.6</td></tr><tr><td>ScanQA*</td><td>30.2</td><td>20.4</td><td>15.1</td><td>10.1</td><td>13.1</td><td>33.3</td><td>64.9</td><td>21.0</td></tr><tr><td>LLaVA(zero-shot)</td><td>7.1</td><td>2.6</td><td>0.9</td><td>0.3</td><td>10.5</td><td>12.3</td><td>5.7</td><td>0.0</td></tr><tr><td>ULIPPointMLP+flant5</td><td>18.4</td><td>7.2</td><td>2.7</td><td>1.4</td><td>7.4</td><td>18.1</td><td>26.9</td><td>7.5</td></tr><tr><td>ULIPPointMLP+opt</td><td>19.1</td><td>7.3</td><td>2.7</td><td>1.9</td><td>7.4</td><td>18.2</td><td>28.0</td><td>8.4</td></tr><tr><td>ULIPPointBERT+flant5</td><td>29.2</td><td>17.9</td><td>10.3</td><td>6.1</td><td>11.6</td><td>28.1</td><td>50.9</td><td>14.5</td></tr><tr><td>ULIPPointBERT+opt</td><td>28.8</td><td>16.9</td><td>9.7</td><td>5.9</td><td>11.3</td><td>27.9</td><td>50.5</td><td>13.8</td></tr><tr><td>flamingo-SingleImage</td><td>23.8</td><td>14.5</td><td>9.2</td><td>8.5</td><td>10.7</td><td>29.6</td><td>52</td><td>16.9</td></tr><tr><td>flamingo-MultiView</td><td>25.6</td><td>15.2</td><td>9.2</td><td>8.4</td><td>11.3</td><td>31.1</td><td>55</td><td>18.8</td></tr><tr><td>BLIP2-flant5-SingleImage</td><td>28.6</td><td>15.1</td><td>9.0</td><td>5.1</td><td>10.6</td><td>25.8</td><td>42.6</td><td>13.3</td></tr><tr><td>BLIP2-flant5-MultiView</td><td>29.7</td><td>16.2</td><td>9.8</td><td>5.9</td><td>11.3</td><td>26.6</td><td>45.7</td><td>13.6</td></tr><tr><td>3D-LLM (M2F, flamingo)</td><td>30.3</td><td>17.8</td><td>12.0</td><td>7.2</td><td>12.2</td><td>32.3</td><td>59.2</td><td>20.4</td></tr><tr><td>3D-LLM (M2F, BLIP2-opt)</td><td>35.9</td><td>22.5</td><td>16.0</td><td>9.4</td><td>13.8</td><td>34.0</td><td>63.8</td><td>19.3</td></tr><tr><td>3D-LLM (SAM, BLIP2-opt)</td><td>35.0</td><td>21.7</td><td>15.5</td><td>9.5</td><td>14.0</td><td>34.5</td><td>67.1</td><td>19.8</td></tr><tr><td>3D-LLM (M2F, BLIP2-flant5)</td><td>39.3</td><td>25.2</td><td>18.4</td><td>12.0</td><td>14.5</td><td>35.7</td><td>69.4</td><td>20.5</td></tr><tr><td>3D-LLM (SAM, BLIP2-flant5)</td><td>37.5</td><td>24.1</td><td>17.6</td><td>12.9</td><td>15.1</td><td>37.5</td><td>74.5</td><td>21.2</td></tr></table>
149
+
150
+ Table 1: Experimental results on ScanQA validation set. * Means the models use explicit object representations. B-1, B-2, B-3, B-4 denote BLEU-1, BLEU-2, BLEU-3, BLEU-4 respectively. M2F denotes mask2former, SAM denotes Segment Anything.
151
+
152
+ <table><tr><td rowspan="2"></td><td rowspan="2">Format</td><td colspan="7">test set</td><td>Avg.</td></tr><tr><td>What</td><td>Is</td><td>How</td><td>Can</td><td>Which</td><td>Others</td><td></td><td></td></tr><tr><td>Blind test</td><td>SQ → A</td><td>26.75</td><td>63.34</td><td>43.44</td><td>69.53</td><td>37.89</td><td>43.41</td><td>43.65</td><td></td></tr><tr><td>ScanQA(w/o s\( ^{\text{txt}} \))</td><td>VQ → A</td><td>28.58</td><td>65.03</td><td>47.31</td><td>66.27</td><td>43.87</td><td>42.88</td><td>45.27</td><td></td></tr><tr><td>ScanQA</td><td>VSQ → A</td><td>31.64</td><td>63.80</td><td>46.02</td><td>69.53</td><td>43.87</td><td>45.34</td><td>46.58</td><td></td></tr><tr><td>ScanQA+aux task</td><td>VSQ → AL</td><td>33.48</td><td>66.10</td><td>42.37</td><td>69.53</td><td>43.02</td><td>46.40</td><td>47.20</td><td></td></tr><tr><td>MCAN</td><td>VSQ → A</td><td>28.86</td><td>59.66</td><td>44.09</td><td>68.34</td><td>40.74</td><td>40.46</td><td>43.42</td><td></td></tr><tr><td>ClipBERT</td><td>VSQ → A</td><td>30.24</td><td>60.12</td><td>38.71</td><td>63.31</td><td>42.45</td><td>42.71</td><td>43.31</td><td></td></tr><tr><td>Unified QA</td><td>VSQ → A</td><td>33.01</td><td>50.43</td><td>31.91</td><td>56.51</td><td>45.17</td><td>41.11</td><td>41.00</td><td></td></tr><tr><td>Unified QA</td><td>VSQ → A</td><td>27.58</td><td>47.99</td><td>34.05</td><td>59.47</td><td>40.91</td><td>39.77</td><td>38.71</td><td></td></tr><tr><td>GPT-3</td><td>VSQ → A</td><td>39.67</td><td>45.99</td><td>40.47</td><td>45.56</td><td>36.08</td><td>38.42</td><td>41.00</td><td></td></tr><tr><td>GPT-3</td><td>VSQ → A</td><td>28.90</td><td>46.42</td><td>28.05</td><td>40.24</td><td>30.11</td><td>36.07</td><td>34.57</td><td></td></tr><tr><td>3D-LLM</td><td>VSQ → A</td><td>37.05</td><td>65.18</td><td>45.81</td><td>67.46</td><td>51.00</td><td>49.82</td><td>49.79</td><td></td></tr></table>
153
+
154
+ Table 2: Experimental Results on SQA3D test set. In the Format column, "V" means the 3D visual inputs, "S" means the situation inputs, "Q" and "A" denote questions and answers respectively. Here we use 3D-LLM (SAM, BLIP2-flant5).
155
+
156
+ Result Analysis We report our results on ScanQA validation set in Table 1. We observe a significant increase in the evaluation metrics. For example, for BLEU-1, our model outperforms the state-of-the-art ScanQA model by $\sim 9\%$ for validation set. For CIDER, we report a $\sim 10\%$ gain compared to ScanQA, and much higher than other 3D-based baselines. These results show that by injecting 3D into LLMs, the models can generate answers that are much more similar to the ground-truth answers. Furthermore, 3D-based baselines use object detectors like VoteNet to segment the objects, and then send per-object features into their models, while our inputs are holistic 3D features without explicit object representations. This shows that our model could perform visual reasoning about objects and their relationships even without explicit object representations. We then examine whether 2D VLMs have the same ability. We find that by taking single-view images or multi-view images as inputs, the performances drop much compared to 3D-LLMs. Specifically, multi-view images also contain information about the whole scene. However, they have much lower performances compared to 3D-LLMs, probably because features of multi-view images are disorganized, thus losing 3D-related information.
157
+
158
+ # 5.1.2 Experiments on SQA3D
159
+
160
+ SQA3D [32] requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. We finetune our pretrained 3D-LLMs on the SQA3D dataset and compare with baseline models. We include all baseline models introduced by the original paper. Specifically, ScanQA+aux task achieves the SOTA performance by adding two auxiliary tasks: predicting the
161
+
162
+ position and rotation of the agent's situation. Table 2 shows the results. We can see that our 3D-LLM outperforms all baseline models by a large margin, even without training with auxiliary tasks and losses.
163
+
164
+ <table><tr><td>Methods</td><td>Concept</td><td>Counting</td><td>Relation</td><td>Comparison</td><td>Overall</td></tr><tr><td>NS-VQA*</td><td>59.8</td><td>21.5</td><td>33.4</td><td>61.6</td><td>38.0</td></tr><tr><td>3D-Feature+LSTM</td><td>61.2</td><td>22.4</td><td>49.9</td><td>61.3</td><td>48.2</td></tr><tr><td>3D-CLR*</td><td>66.1</td><td>41.3</td><td>57.6</td><td>72.3</td><td>57.7</td></tr><tr><td>flamingo-SingleImage</td><td>58.7</td><td>18.5</td><td>38.4</td><td>60.1</td><td>40.3</td></tr><tr><td>flamingo-MultiView</td><td>60.0</td><td>18.3</td><td>40.2</td><td>61.4</td><td>41.6</td></tr><tr><td>BLIP-SingleImage</td><td>58.0</td><td>20.4</td><td>42.3</td><td>62.3</td><td>43.1</td></tr><tr><td>BLIP-MultiView</td><td>61.9</td><td>21.1</td><td>48.0</td><td>62.3</td><td>47.1</td></tr><tr><td>3D-LLM (M2F, flamingo)</td><td>68.9</td><td>32.4</td><td>61.6</td><td>68.3</td><td>58.6</td></tr><tr><td>3D-LLM (M2F, BLIP2-opt)</td><td>63.4</td><td>30.7</td><td>57.6</td><td>65.2</td><td>54.9</td></tr><tr><td>3D-LLM (SAM, BLIP2-opt)</td><td>73.4</td><td>24.5</td><td>63.2</td><td>77.6</td><td>61.5</td></tr><tr><td>3D-LLM (M2F, BLIP2-flanT5)</td><td>68.1</td><td>31.4</td><td>55.1</td><td>69.7</td><td>54.6</td></tr><tr><td>3D-LLM (SAM, BLIP2-flanT5)</td><td>76.3</td><td>30.2</td><td>64.3</td><td>80.2</td><td>64.0</td></tr></table>
165
+
166
+ Table 3: Experimental results on 3DMV-VQA dataset. * denotes using explicit object representations and neuro-symbolic reasoning.
167
+
168
+ # 5.1.3 Experiments on 3DMV-VQA
169
+
170
+ We finetune our pretrained 3D-LLMs on the 3DMV-VQA dataset and compare with baseline models. We include all baseline models introduced by the original paper. Specifically, 3D-CLR [20] achieves the SOTA performance via neuro-symbolic reasoning based on 3D features.
171
+
172
+ Result Analysis Table 3 shows the performances on 3DMV-VQA. We can see that 3D-LLMs outperform state-of-the-art baseline model in the question types of concept and relation, and also in the overall performance. Our model also outperforms 3D-Feature+LSTM, demonstrating the power of LLMs over vanilla language models with similar 3D features as inputs. Overall, 3D-based methods outshine 2D-based versions of the methods. Our 3D-LLMs outperform their corresponding 2D VLMs with image input, further demonstrating the importance of 3D representations for 3D-LLMs.
173
+
174
+ <table><tr><td>Tasks</td><td>Models</td><td>BLEU-1</td><td>BLEU-2</td><td>BLEU-3</td><td>BLEU-4</td><td>METEOR</td><td>ROUGH-L</td></tr><tr><td rowspan="8">3D Captioning</td><td>flamingo-SingleImage</td><td>29.0</td><td>17.9</td><td>12.5</td><td>12.1</td><td>12.4</td><td>28.2</td></tr><tr><td>flamingo-MultiView</td><td>29.5</td><td>18.6</td><td>13.7</td><td>12.4</td><td>14.0</td><td>29.0</td></tr><tr><td>BLIP2-flant5-SingleImage</td><td>30.3</td><td>18.3</td><td>14.5</td><td>12.0</td><td>13.1</td><td>30.9</td></tr><tr><td>BLIP2-flant5-MultiView</td><td>34.4</td><td>23.9</td><td>18.0</td><td>14.1</td><td>17.5</td><td>35.7</td></tr><tr><td>3D-LLM (flamingo)</td><td>36.1</td><td>24.5</td><td>18.7</td><td>15.6</td><td>17.6</td><td>35.8</td></tr><tr><td>3D-LLM (BLIP2-opt)</td><td>35.7</td><td>26.7</td><td>20.3</td><td>15.9</td><td>18.7</td><td>40.1</td></tr><tr><td>3D-LLM (BLIP2-t5)</td><td>39.8</td><td>31.0</td><td>24.7</td><td>20.1</td><td>17.7</td><td>42.6</td></tr><tr><td>3D-LLM (SAM, BLIP2-t5)</td><td>44.5</td><td>38.6</td><td>29.5</td><td>24.2</td><td>22.1</td><td>45.4</td></tr><tr><td rowspan="9">3D-assisted Dialog</td><td>flant5</td><td>27.4</td><td>16.5</td><td>11.1</td><td>8.7</td><td>9.5</td><td>27.5</td></tr><tr><td>flamingo-SingleImage</td><td>29.4</td><td>18.7</td><td>11.3</td><td>9.4</td><td>10.0</td><td>26.8</td></tr><tr><td>flamingo-MultiView</td><td>30.6</td><td>21.3</td><td>11.9</td><td>9.1</td><td>10.4</td><td>27.9</td></tr><tr><td>BLIP2-flant5-SingleImage</td><td>28.4</td><td>17.3</td><td>10.6</td><td>9.1</td><td>10.2</td><td>27.4</td></tr><tr><td>BLIP2-flant5-MultiView</td><td>32.4</td><td>20.9</td><td>12.1</td><td>9.5</td><td>11.0</td><td>29.5</td></tr><tr><td>3D-LLM (flamingo)</td><td>35.0</td><td>22.8</td><td>15.4</td><td>10.6</td><td>16.0</td><td>34.2</td></tr><tr><td>3D-LLM (BLIP2-opt)</td><td>39.6</td><td>27.5</td><td>20.5</td><td>16.2</td><td>18.4</td><td>38.6</td></tr><tr><td>3D-LLM (BLIP2-flant5)</td><td>39.0</td><td>27.8</td><td>21.2</td><td>16.6</td><td>18.9</td><td>39.3</td></tr><tr><td>3D-LLM (SAM, BLIP2-t5)</td><td>40.5</td><td>29.4</td><td>23.9</td><td>21.4</td><td>19.6</td><td>40.8</td></tr><tr><td rowspan="9">Task Decomposition</td><td>flant5</td><td>25.5</td><td>21.1</td><td>16.7</td><td>6.0</td><td>13.9</td><td>28.4</td></tr><tr><td>flamingo-SingleImage</td><td>31.4</td><td>23.0</td><td>18.8</td><td>7.1</td><td>15.6</td><td>30.6</td></tr><tr><td>flamingo-MultiView</td><td>33.1</td><td>24.7</td><td>21.4</td><td>7.3</td><td>16.1</td><td>33.2</td></tr><tr><td>BLIP2-flant5-SingleImage</td><td>32.2</td><td>25.3</td><td>18.2</td><td>6.9</td><td>15.0</td><td>31.0</td></tr><tr><td>BLIP2-flant5-MultiView</td><td>33.1</td><td>27.0</td><td>20.6</td><td>6.9</td><td>15.5</td><td>34.0</td></tr><tr><td>3D-LLM (flamingo)</td><td>32.9</td><td>25.6</td><td>20.2</td><td>6.4</td><td>16.0</td><td>33.5</td></tr><tr><td>3D-LLM (BLIP2-opt)</td><td>34.1</td><td>27.7</td><td>20.8</td><td>7.6</td><td>16.5</td><td>35.4</td></tr><tr><td>3D-LLM (BLIP2-flant5)</td><td>33.9</td><td>28.1</td><td>20.7</td><td>7.4</td><td>15.9</td><td>37.8</td></tr><tr><td>3D-LLM (SAM, BLIP2-t5)</td><td>31.6</td><td>22.3</td><td>17.2</td><td>8.8</td><td>14.0</td><td>38.3</td></tr></table>
175
+
176
+ Table 4: Experimental Results on Held-In Datasets. 3D-LLMs outperform 2D VLMs.
177
+
178
+ # 5.2 More Extensive Evaluation
179
+
180
+ Held-In Evaluation We carry out experiments on held-in datasets of three tasks: 3D captioning, 3D-assisted dialog and task decomposition. The baselines include 2D VLMs as for the held-out evaluation. We add one language-only baseline: FlanT5, which examines LLMs' ability to complete these tasks without any visual input. To evaluate the quality of responses, we include BLEU, ROUGE-L, METEOR, CIDEr as our metrics. We report the held-in evaluation performances in Table 4. From the table, we could see that 3D-LLMs could generate high-quality responses, outperforming both 2D VLMs and language-only LLMs.
181
+
182
+ Qualitative Examples In Figure 4, we show qualitative examples of 3D-LLM's predictions. We can see that our 3D-LLM is able to perform a variety of tasks.
183
+
184
+ ![](images/aaa8c18d6150002b2e605fe3826c0580ad4a4de65be7875fc5e531fcef710cfd.jpg)
185
+ Figure 4: Qualitative examples of 3D-LLM's prediction.
186
+
187
+ # 6 Conclusion
188
+
189
+ In this paper, we propose a new family of 3D-LLMs that can take 3D representations as inputs and generate responses. We introduce a series of 3D-language data generation pipelines to generate a dataset of 1M 3D-language pairs to train our 3D-LLMs. Our 3D-LLMs leverage 2D pretrained VLMs as backbones and a novel 3D localization mechanism. Experiments show that our 3D-LLMs outperform state-of-the-art baseline models on ScanQA datasets, and could perform a diverse set of 3D-related tasks. A limitation is that the 3D feature extractor relies on 2D multi-view images, and thus all 3D scenes need to be rendered so that they can be trained in 3D-LLMs, which introduces an additional rendering process.
190
+
191
+ # 7 Acknowledgements
192
+
193
+ This work was supported by the MIT-IBM Watson AI Lab, DARPA MCS, DSO grant DSOCO21072, and gift funding from MERL, Cisco, Sony, and Amazon. We would also like to thank the computation support from AiMOS, a server cluster for the IBM Research AI Hardware Center.
194
+
195
3dllminjectingthe3dworldintolargelanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9fb7c926ba1151bd480dda8a528ff8899a536408d27b2b48b80fec45df7a535c
3
+ size 960145
3dllminjectingthe3dworldintolargelanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd9f818d14c7cb70a0c819d02a9bbda55945d880818ba2b82e37eb498848b4b9
3
+ size 275754
3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c909af40d7a6dde678b101a8de1cecf619d119541091f34a938323a45ad137cb
3
+ size 113840
3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7091d7ff19179188d8972fd6668bc742ee5aaf7a9aa9bfcbd46c4bbb21c3deb6
3
+ size 144508
3dmoleculegenerationbydenoisingvoxelgrids/5e66fe4c-3e54-4199-9601-ce7b6a0cca97_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34407024fc369349c4bdc21df0cf27ff090a289a208e736b7371a28ca82760ad
3
+ size 11073044
3dmoleculegenerationbydenoisingvoxelgrids/full.md ADDED
@@ -0,0 +1,446 @@
 
 
 
 
1
+ # 3D molecule generation by denoising voxel grids
2
+
3
+ Pedro O. Pinheiro, Joshua Rackers, Joseph Kleinhenz, Michael Maser, Omar Mahmood, Andrew Martin Watkins, Stephen Ra, Vishnu Sresht, Saeed Saremi
4
+
5
+ Prescient Design, Genentech
6
+
7
+ # Abstract
8
+
9
+ We propose a new score-based approach to generate 3D molecules represented as atomic densities on regular grids. First, we train a denoising neural network that learns to map from a smooth distribution of noisy molecules to the distribution of real molecules. Then, we follow the neural empirical Bayes framework [1] and generate molecules in two steps: (i) sample noisy density grids from a smooth distribution via underdamped Langevin Markov chain Monte Carlo, and (ii) recover the "clean" molecule by denoising the noisy grid with a single step. Our method, VoxMol, generates molecules in a fundamentally different way than the current state of the art (i.e., diffusion models applied to atom point clouds). It differs in terms of the data representation, the noise model, the network architecture and the generative modeling algorithm. Our experiments show that VoxMol captures the distribution of drug-like molecules better than state of the art, while being faster to generate samples.
10
+
11
+ # 1 Introduction
12
+
13
+ Finding novel molecules with desired properties is an important problem in chemistry with applications to many scientific domains. In drug discovery in particular, standard computational approaches perform some sort of local search—by scoring and ranking molecules—around a region of the molecular space (chosen based on some prior domain knowledge). The space of possible drug-like molecules is prohibitively large (it scales exponentially with the molecular size [2, 3], and is estimated to be around $10^{60}$ [4]), so search in this space is very hard. Search-based approaches achieve some success in practice, but have severe limitations: we can only explore very small portions of the molecular space (on the order of billions to trillions of molecules), and these approaches cannot propose new molecules conditioned on some desiderata.
14
+
15
+ Generative models for molecules have been proposed to overcome these limitations and explore the molecular space more efficiently [5]. These approaches often consider one of the following types of molecule representations: (i) one-dimensional sequences such as SMILES [6] or SELFIES [7] (e.g., [8, 9, 10]), (ii) two-dimensional molecular graphs, where nodes represent atoms or molecular substructures and edges represent bonds between them (e.g., [11, 12, 13, 14]), or (iii) atoms as three-dimensional points in space. Molecules are entities lying in three-dimensional space; therefore, 3D representations are arguably the most complete ones—they contain information about atom types, their bonds and the molecular conformation.
16
+
17
+ Recent generative models consider molecules as a set of points in 3D Euclidean space and apply diffusion models on them [15, 16, 17, 18, 19, 20]. Point-cloud representations allow us to use equivariant graph neural networks [21, 22, 23, 24, 25]—known to be very effective in molecular discriminative tasks—as the diffusion model's denoising network. However, point-based diffusion approaches have some limitations when it comes to generative modeling. First, the number of atoms in the molecule (i.e., nodes on the 3D graph) to be diffused need to be known beforehand. Second, atom types and their coordinates have very different distributions (categorical and continuous variables, respectively) and are treated separately. Because a score function is undefined on discrete distributions,
18
+
19
+ ![](images/7c158c828322bfa3fc24c422fdfc3f3f40391e788f24aed57e2c75d20aa65a23.jpg)
20
+ Figure 1: Voxelized molecules generated by our model and their corresponding molecular graphs. Left: samples from a model trained on the QM9 dataset ($32^{3}$ voxels). Right: samples from a model trained on GEOM-drugs ($64^{3}$ voxels). In both cases, each voxel is a cube with side length .25 Å. Each color represents a different atom type (and a different channel of the voxel grid). Best seen in the digital version. See appendix for more generated samples.
21
+
22
+ some workaround is necessary. Finally, graph networks operate only on nodes and edges (single and pairwise interactions, respectively). Therefore, capturing long-range dependencies over multiple atoms (nodes) can become difficult as the number of atoms increases. This is related to the limitations of the message-passing formalism in graph neural networks [26]. Higher-order message passing can alleviate this problem to a degree [27, 28], but it comes at a significant computational cost and has been limited to third-order models [29] (see the next section for more discussion on the trade-offs between model expressivity and built-in equivariance).<sup>1</sup>
23
+
24
+ In this work we introduce VoxMol, a new score-based method to generate 3D molecules. Similar to [33], and unlike most recent approaches, we represent atoms as continuous (Gaussian-like) densities and molecules as a discretization of 3D space on voxel (i.e., a discrete unit of volume) grids. Voxelized representations allow us to use the same type of denoising architectures used in computer vision. These neural networks—the workhorse behind the success of score-based generative models on images, e.g. [34, 35, 36]—are very effective and scale very well with data.
25
+
26
+ We start by training a neural network to denoise noisy voxelized molecules. Noisy samples are created simply by adding Gaussian noise (with a fixed identity covariance matrix scaled by a large noise level) to each voxel in the molecular grid. This denoising network also parametrizes the score function of the smooth/noisy distribution. Note that in contrast to diffusion models, the noise process we use here does not displace atoms. Then, we leverage the (learned) denoising network and generate molecules in two steps [1]: (i) (walk) sample noisy density grids from the smooth distribution via Langevin Markov chain Monte Carlo (MCMC), and (ii) (jump) recover "clean" molecules by denoising the noisy grid. This sampling scheme, referred to as walk-jump sampling in [1], has been successfully applied before to 2D natural images [37, 38] and 1D amino acid sequences [39].
27
+
28
+ Compared to point-cloud diffusion models, VoxMol is simpler to train, it does not require knowing the number of atoms beforehand, and it does not treat features as different distributions (continuous, categorical and ordinal for coordinates, atom types and formal charge)—we only use the "raw" voxelized molecule. Moreover, due to its expressive network architecture, our method scales better to large, drug-sized molecules. Figure 1 (and Figures 8, 9 on appendix) illustrates voxelized molecules and their corresponding molecular graphs generated by our model, trained on two different datasets. These samples show visually that our model learns valences of atoms and symmetries of molecules.
29
+
30
+ The main contributions of this work can be summarized as follows. We present VoxMol, a new score-based method for 3D molecule generation. The proposed method differs from current approaches—usually diffusion models on point clouds—in terms of the data representation, the noise model, the network architecture, and the generative modeling algorithm. We show in experiments that VoxMol performs slightly worse than the state of the art on a small dataset (QM9 [40]), while outperforming it (by a large margin) on a challenging, more realistic drug-like dataset (GEOM-drugs [41]).
31
+
32
+ # 2 Related Work
33
+
34
+ Voxel-based unconditional 3D molecule generation. Skalic et al. [42] and Ragoza et al. [33] map atomic densities on 3D regular grids and train VAEs [43] using 3D convolutional networks to generate voxelized molecules. To recover atomic coordinates from the generated voxel grids², [33] introduces a simple optimization-based solution, while [42] trains another model that "translates" voxel structures into SMILES strings. Voxel representations are flexible and can trivially be applied to related problems with different data modalities. For instance, [44] proposes a GAN [45] on voxelized electron densities, while [46] leverages voxelized 3D pharmacophore features to train a pocket-conditional model. Similar to these works, our model also relies on discretization of 3D space. Like [33], we use a simple peak detection algorithm to extract atomic coordinates from the generated voxel grids. However, our method differs on the underlying generative modeling, architecture, datasets, input representations and evaluations.
35
+
36
+ Point cloud-based unconditional generation. Most recent models treat molecules as sets of points, where each node is associated with a particular atom type, its coordinates and potentially extra information like formal charge. Different modeling approaches have been proposed, e.g., [47, 48, 49] utilize autoregressive models to iteratively sample atoms, and [50, 51] use normalizing flows [52]. Hoogeboom et al. [15] proposes E(3) Equivariant Diffusion Models (EDM), a diffusion [53]-based approach that performs considerably better than previous models on this task. EDMs learn to denoise a diffusion process (operating on both continuous and categorical data) and generate molecules by iteratively applying the denoising network on an initial noise. Several works have been proposed on the top of EDM [54, 55, 20, 56]. For instance, Xu et al. [56] improves EDM by applying diffusion on a latent space instead of the atomic coordinates, while MiDi [20] shows that EDM results can be improved by jointly generating the 3D conformation and the connectivity graph of molecules (in this setting, the model has access to both the 3D structure and the 2D connectivity graph).
37
+
38
+ Conditional 3D molecule generation. A related body of work is concerned with conditional generation. In many cases, conditional generation is built on top of unconditional generation methods. Some authors propose to predict the 3D structure of a molecule given its molecular graph (this is called the conformer generation task): VAEs [57, 58], normalizing flows [59], reinforcement learning [60], optimal transport [61], autoregressive models [62] and diffusion models [63, 16, 64] have been proposed for this task. Some works [65, 66] condition 3D generation on shape, while other works condition molecule generation on other structures. For instance, [17, 18, 19, 67] adapt (unconditional) diffusion models to condition on protein pockets, while [68] adapts their previous work [33] to condition voxelized structures on protein targets. Finally, [46] proposes a hybrid conditional generation model by modeling fragments/scaffolds with a point cloud representation and the 3D target structures and pharmacophore features [69] with voxel grids.
39
+
40
+ Comparison between voxel and point-cloud representations. Voxels have some advantages and disadvantages compared to point cloud representations. First, voxels are straightforward generalizations of 2D pixels to 3D space, so we can leverage machinery similar to that used in score-based generative modeling for images. These models are known to perform well and scale nicely with data. Second, message passing on graphs operates on single and pairwise interactions, while convolution filters (and potentially transformer layers applied to regular grids) can capture multiple local interactions by construction (see [70] for a discussion on the many-body representation hypothesis). Third, voxel representations have a higher memory footprint but fewer random memory accesses than point cloud representations [71]. We note, however, that developing models on drug-sized molecules (that is, molecules with sizes close to those in GEOM-drugs [41]) with a reasonable resolution (.1–.2 Å) is possible on current GPU hardware. Fourth, recovering point coordinates from a discrete grid has no analytical solution, so voxel-based models require an extra step to retrieve atomic coordinates. We show empirically that this is not a problem in practice, as we can achieve competitive results even with a very simple peak detection algorithm.
41
+
42
+ Finally, graph networks are less expressive due to the message-passing formalism [26, 27], but are a better fit for built-in SE(3)-equivariant architectures (e.g., [21, 22, 23, 24, 25]). Rotation-equivariant 3D convolutional networks have been proposed [72, 73, 74]<sup>3</sup>, but current models do not scale as well as standard convnets, and applying them to drug-sized molecules would be challenging. Built-in rotation equivariance is a desirable property; however, equivariance can also be learned with strong data augmentation and larger datasets [75, 76, 32]. In fact, concurrently to this work, [77] also shows that a built-in SE(3)-equivariant architecture is not necessary to generate molecules. Our experiments show that an expressive denoiser scales better, allowing VoxMol to outperform the current state of the art on GEOM-drugs. However, we hope our results motivate exploration of more efficient SE(3)-equivariant convnet architectures.
45
+
46
+ # 3 Method
47
+
48
+ We follow previous work (e.g., [78, 33, 70, 79]) and represent atoms as continuous Gaussian-like atomic densities in 3D space, centered around their atomic coordinates. Molecules are generated by discretizing the 3D space around the atoms into voxel grids, where each atom type (element) is represented by a different grid channel. See appendix for more information on how we discretize molecules. This discretization process gives us a dataset with $n$ voxelized molecules $\{x_{i}\}_{i = 1}^{n}, x_{i} \in \mathbb{R}^{d}, d = c \times l^{3}$ , where $l$ is the length of each grid edge and $c$ is the number of atom channels in the dataset. Each voxel in the grid can take values between 0 (far from all atoms) and 1 (at the center of atoms). Throughout our experiments, we consider a fixed resolution of .25 Å (we found it to be a good trade-off between accuracy and computation). Therefore, voxel grids occupy a volume of $(l / 4)^{3}$ cubic Ångströms.
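+
+ A minimal sketch of this voxelization step (assuming, for illustration, a single isotropic Gaussian per atom with the fixed .5 Å radius used in the ablations of Section 4.3; `coords` holds atomic coordinates in Ångströms and `types` the per-atom channel indices):
+
+ ```python
+ import numpy as np
+
+ def voxelize(coords, types, n_channels, grid_dim=32, resolution=0.25, radius=0.5):
+     """Map a molecule onto a (n_channels, grid_dim, grid_dim, grid_dim) density grid."""
+     grid = np.zeros((n_channels, grid_dim, grid_dim, grid_dim), dtype=np.float32)
+     # coordinates (in Angstroms) of the voxel centers, with the box centered at the origin
+     axis = (np.arange(grid_dim) - grid_dim / 2 + 0.5) * resolution
+     zz, yy, xx = np.meshgrid(axis, axis, axis, indexing="ij")
+     centered = coords - coords.mean(axis=0)  # center the molecule inside the box
+     for (x, y, z), t in zip(centered, types):
+         d2 = (xx - x) ** 2 + (yy - y) ** 2 + (zz - z) ** 2
+         # value 1 at the atom center, decaying smoothly with distance from it
+         grid[t] = np.maximum(grid[t], np.exp(-d2 / (2 * radius ** 2)))
+     return grid
+ ```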
49
+
50
+ # 3.1 Background: neural empirical Bayes
51
+
52
+ Let $p(x)$ be an unknown distribution of voxelized molecules and $p(y)$ a smoother version of it obtained by convolving $p(x)$ with an isotropic Gaussian kernel with a known covariance $\sigma^2 I_d$ . Equivalently, $Y = X + N$ , where $X \sim p(x)$ , $N \sim \mathcal{N}(0, \sigma^2 I_d)$ . Therefore $Y$ is sampled from:
53
+
54
+ $$
55
+ p (y) = \int_ {\mathbb {R} ^ {d}} \frac {1}{(2 \pi \sigma^ {2}) ^ {d / 2}} \exp \left(- \frac {\| y - x \| ^ {2}}{2 \sigma^ {2}}\right) p (x) d x.
56
+ $$
57
+
58
+ This transformation will smooth the density of $X$ while still preserving some of the structural information of the original voxel signals. Robbins [80] showed that if we observe $Y = y$, then the least-squares estimator of $X$ is the Bayes estimator, i.e., $\hat{x}(y) = \mathbb{E}[X|Y = y]$. Building on this result, Miyasawa [81] showed that, if the noising process is Gaussian (as in our case), then the least-squares estimator $\hat{x}(y)$ can be obtained purely from the (unnormalized) smoothed density $p(y)$:
59
+
60
+ $$
61
+ \hat {x} (y) = y + \sigma^ {2} g (y), \tag {1}
62
+ $$
63
+
64
+ where $g(y) = \nabla_y\log p(y)$ is the score function [82] of $p(y)$ . This interesting equation tells us that, if we know $p(y)$ up to a normalizing constant (and therefore the score function associated with it), we can estimate the original signal $x$ only by observing its noisy version $y$ . Equivalently, if we have access to the estimator $\hat{x} (y)$ , we can compute the score function of $p(y)$ via (1).
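+
+ As a quick sanity check of (1), consider the special case where $p(x)$ is itself Gaussian, $X \sim \mathcal{N}(\mu, \tau^{2} I_{d})$. Then $Y \sim \mathcal{N}(\mu, (\tau^{2} + \sigma^{2}) I_{d})$, the score is $g(y) = (\mu - y)/(\tau^{2} + \sigma^{2})$, and
+
+ $$
+ \hat{x}(y) = y + \sigma^{2}\, \frac{\mu - y}{\tau^{2} + \sigma^{2}} = \frac{\tau^{2} y + \sigma^{2} \mu}{\tau^{2} + \sigma^{2}},
+ $$
+
+ which is exactly the posterior mean $\mathbb{E}[X \mid Y = y]$ of this Gaussian model.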
65
+
66
+ Our generative model is based on the neural empirical Bayes (NEB) formalism [1]: we are interested in learning the score function of the smoothed density $p(y)$ and the least-square estimator $\hat{x}(y)$ from a dataset of voxelized molecules $\{x_i\}_{i=1}^n$ , sampled from unknown $p(x)$ . We leverage the (learned) estimator and score function to generate voxelized molecules in two steps: (i) sample $y_k \sim p(y)$ with Langevin MCMC [83], and (ii) generate clean samples with the least-square estimator. The intuition is that it is much easier to sample from the smooth density than the original distribution. See Saremi and Hyvarinen [1] for more details.
67
+
68
+ # 3.2 Denoising voxelized molecules
69
+
70
+ We parametrize the Bayes estimator of $X$ using a neural network with parameters $\theta$ denoted by $\hat{x}_{\theta}:\mathbb{R}^{d}\to \mathbb{R}^{d}$ . Since the Bayes estimator is the least-squares estimator, the learning becomes a least-squares denoising objective as follows:
71
+
72
+ $$
73
+ \mathcal{L}(\theta) = \mathbb{E}_{x \sim p(x),\; y \sim \mathcal{N}(x, \sigma^{2} I_{d})} \left\| x - \hat{x}_{\theta}(y) \right\|^{2}. \tag{2}
74
+ $$
75
+
76
+ ![](images/a2a5536f5fe6182708a2ede8054c2d6a23720dbc8384ae71c7bb430879061223.jpg)
77
+ Figure 2: (a) A representation of our denoising training procedure. Each training sample (i.e., a voxelized molecule) is corrupted with isotropic Gaussian noise with a fixed noise level $\sigma$ . The model is trained to recover clean voxel grids from the noisy version. To facilitate visualization, we threshold the grid values, $\hat{x} = \mathbb{1}_{\geq 1}(\hat{x})$ . (b) Graphical model representation of the walk-jump sampling scheme. The dashed arrows represent the walk, a MCMC chain to draw noisy samples from $p(y)$ . The solid arrow represents the jump. Both walks and jumps leverage the trained denoising network.
78
+
79
+ Using (1), we have the following expression for the smoothed score function in terms of the denoising network<sup>5</sup>:
80
+
81
+ $$
82
+ g _ {\theta} (y) = \frac {1}{\sigma^ {2}} \left(\hat {x} _ {\theta} (y) - y\right). \tag {3}
83
+ $$
84
+
85
+ By minimizing the learning objective (2) we learn the optimal $\hat{x}_{\theta}$ and by using (3) we can compute the score function $g_{\theta}(y)\approx \nabla_y\log p(y)$ .
86
+
87
+ We model the denoising network $\hat{x}_{\theta}$ with an encoder-decoder 3D convolutional network that maps every noised voxel on the grid to a clean version of it. Figure 2(a) shows a general overview of the denoising model. The noise level, $\sigma$ , is kept constant during training and is a key hyperparameter of the model. Note that in the empirical Bayes formalism, $\sigma$ can be any (large) value.
88
+
89
+ Compared to diffusion models, this training scheme is simpler, as the noise level is fixed during training. VoxMol requires neither noise scheduling nor temporal embeddings in the network layers. We observe empirically that single-step denoising is sufficient to reconstruct voxelized molecules (within the noise levels considered in this paper). Our hypothesis is that this is due to the nature of the voxel signals, which contain much more "structure" than "texture" information in comparison to natural images.
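+
+ A minimal PyTorch-style sketch of this fixed-noise training step, together with the score computed from the denoiser via (3) (`model` is a stand-in for the 3D U-Net of Section 4.1 and `optimizer` is assumed given):
+
+ ```python
+ import torch
+
+ sigma = 0.9  # fixed noise level used throughout training and sampling
+
+ def denoising_step(model, x, optimizer):
+     """One step of the least-squares denoising objective (2) on a batch of voxel grids x."""
+     y = x + sigma * torch.randn_like(x)   # corrupt every voxel with fixed-level Gaussian noise
+     loss = ((x - model(y)) ** 2).mean()   # least-squares denoising loss
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+
+ def score(model, y):
+     """Smoothed score g_theta(y) recovered from the denoiser, eq. (3)."""
+     return (model(y) - y) / sigma ** 2
+ ```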
90
+
91
+ # 3.3 Sampling voxelized molecules
92
+
93
+ We use the learned score function $g_{\theta}$ and the estimator $\hat{x}_{\theta}$ to sample. We follow the walk-jump sampling scheme [1, 37, 38, 39] to generate voxelized molecules $x_{k}$ :
94
+
95
+ (i) (walk step) For sampling noisy voxels from $p(y)$ , we consider Langevin MCMC algorithms that are based on discretizing the underdamped Langevin diffusion [84]:
96
+
97
+ $$
98
+ d v_{t} = -\gamma v_{t}\, d t + u\, g_{\theta}(y_{t})\, d t + \sqrt{2 \gamma u}\, d B_{t} \tag{4}
99
+ $$
100
+
101
+ $$
102
+ d y _ {t} = v _ {t} d t,
103
+ $$
104
+
105
+ where $B_{t}$ is the standard Brownian motion in $\mathbb{R}^d$ , $\gamma$ and $u$ are hyperparameters to tune (friction and inverse mass, respectively). We use the discretization algorithm proposed by Sachs et al. [85] to generate samples $y_{k}$ , which requires a discretization step $\delta$ . See appendix for a description of the algorithm.
106
+
107
+ (ii) (jump step) At an arbitrary time step $k$ , clean samples can be generated by estimating $X$ from $y_{k}$ with the denoising network, i.e., computing $x_{k} = \hat{x}_{\theta}(y_{k})$ .
108
+
109
+ This approach allows us to approximately sample molecules from $p(x)$ without the need to compute (or approximate) $\nabla_{x}\log p(x)$. In fact, we do MCMC on the smooth density $p(y)$, which is known to be easier to sample and to mix faster than the original density $p(x)$ [1, 38, 86]. Figure 2(b) shows a schematic representation of the generation process. Following [37], we initialize the chains by adding uniform noise to Gaussian noise (with the same $\sigma$ used during training), i.e., $y_0 = N + U$, $N\sim \mathcal{N}(0,\sigma^2 I_d)$, $U\sim \mathcal{U}_d(0,1)$ (this was observed to mix faster in practice).
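+
+ A simplified sketch of the walk-jump loop (for readability we use a plain Euler-Maruyama discretization of (4) rather than the Sachs et al. [85] scheme used in our experiments):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def walk_jump_sample(model, shape, sigma=0.9, n_steps=500, delta=0.5, gamma=1.0, u=1.0):
+     """Walk: Langevin MCMC on the smoothed density p(y). Jump: one denoising step."""
+     y = sigma * torch.randn(shape) + torch.rand(shape)   # chain initialization y_0 = N + U
+     v = torch.zeros_like(y)                              # initial velocity
+     for _ in range(n_steps):
+         g = (model(y) - y) / sigma ** 2                  # smoothed score from the denoiser, eq. (3)
+         noise = (2 * gamma * u * delta) ** 0.5 * torch.randn_like(y)
+         v = v + delta * (-gamma * v + u * g) + noise     # velocity update (walk)
+         y = y + delta * v                                # position update (walk)
+     return model(y)                                      # jump: estimate the clean voxel grid
+ ```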
110
+
111
+ ![](images/89ad952c0c7fc554a35d4abf9f92cba02ab386b649fcaeb51ca5183bc54d2d30.jpg)
112
+ Figure 3: Illustration of walk-jump sampling chain. We do Langevin MCMC on the noisy distribution (walk) and estimate clean samples with the denoising network at arbitrary time (jump).
113
+
114
+ The noise level plays a key role in this sampling framework. If the noise is low, denoising (jump step) becomes easier, with lower variance, while sampling a "less smooth" $p(y)$ (walk step) becomes harder. If the noise is high, the opposite is true.
115
+
116
+ Figure 3 illustrates an example of a walk-jump sampling chain, where generated molecules change gradually as we walk through the chain (the clean samples are shown every ten steps, $\Delta k = 10$). This figure is a demonstration of the fast-mixing properties of our sampling scheme in generating 3D molecules. For instance, some atoms (or other structures like rings) might appear, disappear or change as we move through the chain. Interestingly, we observed this behavior in most of the chains we inspected.
117
+
118
+ # 3.4 Recovering atomic coordinates from voxelized molecules
119
+
120
+ It is often useful to extract atomic coordinates from generated voxelized molecules (e.g., to validate atomic valences and bond types, or to compare with other models). We use a very simple algorithm (a simplified version of the approach used in [33]) to recover the set of atomic coordinates from generated voxel grids: first we set to 0 all voxels with value less than .1, i.e., $x_{k} = \mathbb{1}_{\geq .1}(x_{k})$. Then we run a simple peak detection to locate the voxel at the center of each Gaussian blob (corresponding to the center of each atom). Finally, we run a simple gradient-descent coordinate optimization to find the set of points that best reproduces the generated voxelized molecule. Once we have obtained the optimized atomic coordinates, we follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds. Figure 4 shows our pipeline to recover atomic coordinates and molecular graphs from generated voxelized molecules. See appendix for more details.
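+
+ A minimal sketch of the peak-detection part of this pipeline (the gradient-based coordinate refinement and the bond assignment are omitted; the 3x3x3 local-maximum window is an illustrative choice):
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import maximum_filter
+
+ def find_atoms(grid, resolution=0.25, threshold=0.1):
+     """Recover approximate (channel, xyz) atom candidates from a (c, l, l, l) voxel grid."""
+     atoms = []
+     for channel, density in enumerate(grid):
+         density = np.where(density < threshold, 0.0, density)          # drop low-occupancy voxels
+         peaks = (density == maximum_filter(density, size=3)) & (density > 0)
+         for idx in np.argwhere(peaks):
+             atoms.append((channel, idx * resolution))                  # voxel index -> Angstroms
+     return atoms
+ ```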
121
+
122
+ # 4 Experiments
123
+
124
+ In this section, we evaluate the performance of our model on the task of unconditional 3D molecule generation. Our approach is the first of its kind, and therefore the objective of our experiments is to show that (i) VoxMol is a feasible approach for unconditional generation (this is non-trivial) and (ii) it scales well with data, beating an established model on a large, drug-like dataset. In principle, VoxMol can be used for guided (or conditional) generation, an arguably more useful application for the molecular sciences (see appendix for a discussion on how guidance can be used during generation).
125
+
126
+ We start with a description of our experimental setup, followed by results on two popular datasets for this problem. We then show ablation studies performed on different components of the model.
127
+
128
+ # 4.1 Experimental setup
129
+
130
+ Architecture. The denoising network $\hat{x}_{\theta}$ is used in both the walk and jump steps described above. Therefore, its parametrization is very important to the performance of this approach. We use a 3D U-Net [87] architecture for our denoising network. We follow the same architecture recipe as DDPM [34], with two differences: we use 3D convnets instead of 2D and we use fewer channels on all layers. The model has 4 levels of resolution and we use self-attention on the two lowest resolutions. We augment our dataset during training by applying random rotation and translation to every training sample. Our models are trained with noise level $\sigma = .9$ , unless stated otherwise. We train our models with batch size of 128 and 64 (for QM9 and GEOM-drugs, respectively) and we use AdamW [88]
131
+
132
+ ![](images/1eb4021d1b51cdd88968dbf25a2f70f13ce89246131c85dc6033ae1ec645e740.jpg)
133
+ Figure 4: Pipeline for recovering atomic coordinates from voxel grids: (i) VoxMol generates voxelized molecules, (ii) atomic coordinates are extracted from the voxel grid with a simple peak detection algorithm, (iii) we use cheminformatics software to add atomic bonds and extract SMILES strings, molecular graphs, etc.
134
+
135
+ (learning rate $2 \times 10^{-5}$ , weight decay $10^{-2}$ ) to optimize the weights. The weights are updated with exponential moving average with a decay of .999. We use $\gamma = 1.0$ , $u = 1.0$ and $\delta = .5$ for all our MCMC samplings. See appendix for more details on the architecture, training and sampling.
136
+
137
+ Datasets. We consider two popular datasets for this task: QM9 [40] and GEOM-drugs [41]. QM9 contains small molecules with up to 9 heavy atoms (29 if we consider hydrogen atoms). GEOM-drugs contains multiple conformations for 430k drug-sized molecules; its molecules have 44 atoms on average (up to 181 atoms, and over $99\%$ have fewer than 80 atoms). We use grids of dimension $32^3$ and $64^3$ for QM9 and GEOM-drugs, respectively. These volumes cover over $99.8\%$ of all points in both datasets. All our models treat hydrogens explicitly. For QM9, we consider all 5 chemical elements (C, H, O, N and F) present in the dataset. For GEOM-drugs, we consider 8 elements (C, H, O, N, F, S, Cl and Br). We ignore the P, I and B elements, as they appear in less than $1\%$ of the molecules in the dataset. Finally, the input voxel grids are of dimension $\mathbb{R}^{5\times 32\times 32\times 32}$ and $\mathbb{R}^{8\times 64\times 64\times 64}$ for QM9 and GEOM-drugs, respectively. We perform the same pre-processing and dataset split as [20] and end up with 100K/20K/13K molecules for QM9 and 1.1M/146K/146K for GEOM-drugs (train, validation, test splits respectively).
138
+
139
+ Baselines. We compare our method with two state-of-the-art approaches: G-SchNet [47], a point-cloud autoregressive model, and EDM [15], a point-cloud diffusion-based model. We note that both methods rely on equivariant networks, while ours does not. Our results could potentially be improved by successfully exploiting equivariant 3D convolutional networks. We also show results of $\mathrm{VoxMol}_{\mathrm{oracle}}$ in our main results, where we assume we have access to real samples from the noisy distribution. Instead of performing MCMC to sample $y_{k}$, we sample molecules from the validation set and add noise to them. This baseline assumes perfect sampling of noisy samples (walk step) and lets us assess the ability of our model to recover clean samples. It serves as an upper bound for our model and allows us to disentangle the quality of the walk (sampling noisy samples) and jump (estimating clean molecules) steps.
140
+
141
+ All methods generate molecules as a set of atom types and their coordinates (in the case of voxelized molecules, we use the post-processing described above to get the atomic coordinates). We follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds given the atomic coordinates $^6$ . Using the same post-processing for all methods allows a more apples-to-apples comparison of the models.
142
+
143
+ Metrics. Most metrics we use to benchmark our model come from [20] $^{7}$ . We draw 10,000 samples from each method and measure performance with the following metrics: stable mol and stable atom, the percentage of stable molecules and atoms, respectively, as defined in [15]; validity, the percentage of generated molecules that passes RDKit [90]'s sanitization filter; uniqueness, the proportion of valid molecules that have different canonical SMILES; valency $W_{1}$ , the Wasserstein distance between the distribution of valencies in the generated and test set; atoms $TV$ and bonds $TV$ , the total variation
144
+
145
+ <table><tr><td></td><td>stable mol %↑</td><td>stable atom %↑</td><td>valid %↑</td><td>unique %↑</td><td>valency W1↓</td><td>atom TV↓</td><td>bond TV↓</td><td>bond len W1↓</td><td>bond ang W1↓</td></tr><tr><td>data</td><td>98.7</td><td>99.8</td><td>98.9</td><td>99.9</td><td>.001</td><td>.003</td><td>.000</td><td>.000</td><td>.120</td></tr><tr><td>GSchNet</td><td>92.0</td><td>98.7</td><td>98.1</td><td>94.5</td><td>.049</td><td>.042</td><td>.041</td><td>.005</td><td>1.68</td></tr><tr><td>EDM</td><td>97.9</td><td>99.8</td><td>99.0</td><td>98.5</td><td>.011</td><td>.021</td><td>.002</td><td>.001</td><td>0.44</td></tr><tr><td>VoxMolno rot</td><td>84.2 (±1.6)</td><td>98.2 (±.3)</td><td>98.1 (±.4)</td><td>77.2 (±1.7)</td><td>.043 (±.0)</td><td>.171 (±.200)</td><td>.050 (±.010)</td><td>.007 (±.0)</td><td>3.80 (±.7)</td></tr><tr><td>VoxMol</td><td>89.3 (±.6)</td><td>99.2 (±.1)</td><td>98.7 (±.1)</td><td>92.1 (±.3)</td><td>.023 (±.002)</td><td>.029 (±.009)</td><td>.009 (±.002)</td><td>.003 (±.002)</td><td>1.96 (±.04)</td></tr><tr><td>VoxMoloracle</td><td>90.1</td><td>99.3</td><td>98.9</td><td>99.9</td><td>.024</td><td>.009</td><td>.002</td><td>.001</td><td>0.37</td></tr></table>
146
+
147
+ Table 1: Results on QM9. We use 10,000 samples from each method. Our results are shown with mean/standard deviation across 3 runs.
148
+
149
+ between the distributions of atom types and bond types, respectively; bond length $W_{1}$ and bond angle $W_{1}$, the Wasserstein distances between the distributions of bond lengths and bond angles, respectively. Finally, we also report the strain energy metric proposed in [91]. This metric is defined as the difference between the internal energy of the generated molecule's pose and a relaxed pose of the molecule. The relaxation and the energy are computed using the Universal Force Field (UFF) [92] within RDKit. See appendix for more details about the metrics.
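+
+ As a rough illustration of how such metrics can be computed with RDKit (the exact implementations follow [20] and [91]; `mols` is assumed to be a list of RDKit molecules built from the generated atoms and inferred bonds):
+
+ ```python
+ from rdkit import Chem
+ from rdkit.Chem import AllChem
+
+ def validity_uniqueness(mols):
+     """Fraction of molecules passing RDKit sanitization, and fraction of unique
+     canonical SMILES among the valid ones."""
+     smiles = []
+     for mol in mols:
+         try:
+             Chem.SanitizeMol(mol)
+             smiles.append(Chem.MolToSmiles(mol))
+         except Exception:
+             continue
+     validity = len(smiles) / len(mols)
+     uniqueness = len(set(smiles)) / max(len(smiles), 1)
+     return validity, uniqueness
+
+ def strain_energy(mol):
+     """UFF energy of the generated pose minus the energy of a UFF-relaxed copy (kcal/mol)."""
+     energy_pose = AllChem.UFFGetMoleculeForceField(mol).CalcEnergy()
+     relaxed = Chem.Mol(mol)                    # copy, so the generated pose is kept intact
+     AllChem.UFFOptimizeMolecule(relaxed)
+     energy_relaxed = AllChem.UFFGetMoleculeForceField(relaxed).CalcEnergy()
+     return energy_pose - energy_relaxed
+ ```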
150
+
151
+ # 4.2 Experimental results
152
+
153
+ Table 1 and Table 2 show results on QM9 and GEOM-drugs, respectively. We report results for models trained with and without data augmentation (VoxMol and VoxMol$_{\text{no rot}}$, respectively) and generate 10,000 samples with multiple MCMC chains. Each chain is initialized with 1,000 warm-up steps, as we observed empirically that this slightly improves the quality of generated samples. Then, samples are generated every 500 walk steps (each chain having a maximum of 1,000 steps after the warm-up steps). Results for our models are shown with mean/standard deviation across three runs. The row labeled data in both tables corresponds to molecules randomly sampled from the training set.
154
+
155
+ On QM9, VoxMol performs similarly to EDM in some metrics while performing worse in others (especially molecule stability, uniqueness, and bond angles). On GEOM-drugs, a more challenging and realistic drug-like dataset, the results are very different: VoxMol outperforms EDM in eight out of nine metrics, often by a considerable margin.
156
+
157
+ Figure 5(a,b) shows the cumulative distribution function (CDF) of strain energies for the generated molecules of different models on QM9 and GEOM-drugs, respectively. The closer the CDF of generated molecules from a model is to that of data (samples from training set), the lower is the strain energy of generated molecules. The ground truth data has median strain energy of 43.87 and 54.95 kcal/mol for QM9 and GEOM-drugs, respectively. On QM9, all models have median strain energy around the same ballpark: 52.58, 66.32 and 56.54 kcal/mol for EDM, $\mathrm{VoxMol}_{\mathrm{no rot}}$ and $\mathrm{VoxMol}$ , respectively. On GEOM-drugs, the molecules generated by VoxMol have considerably lower median strain energy than EDM: 951.23 kcal/mol for EDM versus 286.06 and 171.57 for $\mathrm{VoxMol}_{\mathrm{no rot}}$ and $\mathrm{VoxMol}$ .
158
+
159
+ We observe, as expected, that augmenting the training data with random rotations and translations improves the performance of the model. The improvement is bigger on QM9 (the smaller dataset) than on GEOM-drugs. In particular, the augmentations help to capture the distribution of bonds and angles between atoms and to generate more unique molecules. We note that, unlike EDM, our model does not require knowledge of the number of atoms beforehand (neither for training nor for sampling). In fact, Figure 6 shows that our model learns the approximate distribution of the number of atoms per molecule on both datasets. Implicitly learning this distribution can be particularly useful in applications related to in-painting (e.g., pocket conditioning, linking, scaffold conditioning). Finally, our method generates drug-like molecules in fewer iterations and is faster than EDM on average (see Table 3). EDM sampling time scales quadratically with the number of atoms, while ours is constant in the number of atoms (but scales cubically with the grid dimension).
160
+
161
+ These results clearly show one of the main advantages of our approach: a more expressive model scales better with data. Architectural inductive biases (such as built-in SE(3) equivariance) are helpful in the setting of small datasets and small molecules. However, in the large-scale regime, a more expressive model is more advantageous in capturing the modes of the distribution we want to model. Compared
162
+
163
+ ![](images/65d4cf3b40d0f0a769c348f1ca0d97acb98c66e2aeb15c054033371fd13de84a.jpg)
164
+ Figure 5: The cumulative distribution function of strain energy of generated molecules on (a) QM9 and (b) GEOM-drugs. For each method, we use 10,000 molecules.
165
+
166
+ ![](images/0945746fc003f7553bea40f21301d7893d70cbf2edbd16d9c7bcc5ff4895b2ab.jpg)
167
+
168
+ <table><tr><td></td><td>stable mol %↑</td><td>stable atom %↑</td><td>valid %↑</td><td>unique %↑</td><td>valency W1↓</td><td>atom TV↓</td><td>bond TV↓</td><td>bond len W1↓</td><td>bond ang W1↓</td></tr><tr><td>data</td><td>99.9</td><td>99.9</td><td>99.8</td><td>100.</td><td>.001</td><td>.001</td><td>.025</td><td>.000</td><td>.050</td></tr><tr><td>EDM</td><td>40.3</td><td>97.8</td><td>87.8</td><td>99.9</td><td>.285</td><td>.212</td><td>.048</td><td>.002</td><td>6.42</td></tr><tr><td>VoxMolno rot</td><td>44.4 (±.1)</td><td>96.6 (±.1)</td><td>89.7 (±.2)</td><td>99.9 (±.0)</td><td>.238 (±.001)</td><td>.025 (±.001)</td><td>.024 (±.001)</td><td>.004 (±.000)</td><td>2.14 (±.02)</td></tr><tr><td>VoxMol</td><td>75.0 (±.1.)</td><td>98.1 (±.3)</td><td>93.4 (±.5)</td><td>99.1 (±.2)</td><td>.254 (±.003)</td><td>.033 (±.041)</td><td>.036 (±.006)</td><td>.002 (±.001)</td><td>0.64 (±.13)</td></tr><tr><td>VoxMoloracle</td><td>81.9</td><td>99.0</td><td>94.7</td><td>97.4</td><td>.253</td><td>.002</td><td>.024</td><td>.001</td><td>0.31</td></tr></table>
169
+
170
+ Table 2: Results on GEOM-drugs. We use 10,000 samples from each method. Our results are shown with mean/standard deviation across 3 runs.
171
+
172
+ ![](images/8d112d6da16be7b5ab3b72c77cfa7bd46f31d32342999ebce6e791f012f1192c.jpg)
173
+ Figure 6: Empirical distribution of number of atoms per molecule on QM9 (left) and GEOM-drugs (right). We sample 10,000 molecules from train set and generate the same number of VoxMol samples.
174
+
175
+ ![](images/5e2dbd5dc7ea351eb5e4c78761b6ce7f773a664f21015dcbff8dca806a823bd7.jpg)
176
+
177
+ to the VoxMol<sub>oracle</sub> results, we see that VoxMol can still be vastly improved. We can potentially close this gap by improving the quality of the denoising network (e.g., better architectures, training on more data, or efficient built-in SE(3)-equivariant CNNs).
178
+
179
+ # 4.3 Ablation studies
180
+
181
+ Noise level $\sigma$ . Unlike diffusion models, the noise level is considered fixed during training and sampling. It is an important hyperparameter as it poses a trade-off between the quality of the walk step (Langevin MCMC) and the jump step (empirical Bayes). The ideal noise level is the highest possible value such that the network can still learn how to denoise. We train models on QM9 with $\sigma$ in $\{.6, .7, ..., 1.2\}$ , while keeping all other hyperparameters the same. Figure 7(a,b,c) shows how noise level $\sigma$ influences the performance on the validation set. While most metrics get better as the noise level increases, others (like stable molecules and valency W1) get worse after a value. We observe empirically that $\sigma = .9$ is the sweet spot level that achieves better overall performance on the validation set of QM9.
182
+
183
+ Number of steps $\Delta k$ . Table 3 shows how VoxMol's performance on GEOM-drugs change with the number of walk steps $\Delta k$ on the Langevin MCMC sampling. In this experiment, we use the same trained model and only change the number of steps during sampling. Results of EDM are also shown for comparison (it always requires 1,000 diffusion steps for generation). We see that some metrics barely change, while others improve as $\Delta k$ increases. The average time (in seconds) to generate a
184
+
185
+ ![](images/f1bc463bc94fad60062eb43ea09481cbebf55414387912b6a78a8d66a04e1f45.jpg)
186
+ Figure 7: Effect of noise level $\sigma$ on generation quality. Models are trained on QM9, each with a different noise level. Each plot shows two metrics: (a) molecule stability and uniqueness, (b) atom and bond TV, (c) valency and bond angle W1.
187
+
188
+ ![](images/6832e3fcd2fdf2be5b1db0a21a04bcdd22ffd59ef26ef99b1eacd9f7c7670f00.jpg)
189
+
190
+ ![](images/0220035cc944cc2c930740814cdd440a7640943bcb919041dd74a2ccc351ecd5.jpg)
191
+
192
+ <table><tr><td>Δk(n steps)</td><td>stable mol %↑</td><td>stable atom %↑</td><td>valid %↑</td><td>unique %↑</td><td>valency W1↓</td><td>atom TV↓</td><td>bond TV↓</td><td>bond len W1↓</td><td>bond ang W1↓</td><td>avg. t s/mol.↓</td></tr><tr><td>50</td><td>78.9</td><td>98.7</td><td>96.3</td><td>87.8</td><td>.250</td><td>.073</td><td>.102</td><td>.002</td><td>1.18</td><td>0.90</td></tr><tr><td>100</td><td>78.6</td><td>98.6</td><td>95.5</td><td>94.3</td><td>.256</td><td>.050</td><td>.101</td><td>.002</td><td>1.62</td><td>1.64</td></tr><tr><td>200</td><td>77.9</td><td>98.4</td><td>94.4</td><td>98.6</td><td>.253</td><td>.037</td><td>.104</td><td>.002</td><td>1.02</td><td>3.17</td></tr><tr><td>500</td><td>76.7</td><td>98.2</td><td>93.8</td><td>99.2</td><td>.252</td><td>.043</td><td>.042</td><td>.002</td><td>0.56</td><td>7.55</td></tr><tr><td>1,000</td><td>75.5</td><td>98.4</td><td>93.4</td><td>99.8</td><td>.257</td><td>.029</td><td>.050</td><td>.002</td><td>0.79</td><td>14.9</td></tr><tr><td>EDM</td><td>40.3</td><td>97.8</td><td>87.8</td><td>99.9</td><td>.285</td><td>.212</td><td>.048</td><td>.002</td><td>6.42</td><td>9.35</td></tr></table>
193
+
194
+ Table 3: Effect of number of walk steps $\Delta k$ on generation quality on GEOM-drugs (2,000 samples). EDM results are shown for comparison.
195
+
196
+ molecule increases linearly with the number of steps, as expected. We observe that even using 500 steps, our model is still faster than EDM on average, while achieving better performance in these metrics. Remarkably, with only 50 steps, VoxMol already outperforms EDM in most metrics, while being an order of magnitude faster on average.
197
+
198
+ Atomic density radii. We also assess how the performance of the model changes with respect to the size of the atomic radii chosen during the voxelization step (while always keeping the resolution of the grid fixed at .25 Å). See appendix for how this is done. We tried four different values for the radii (the same for all elements): .25, .5, .75 and 1.0. We observe—throughout different versions of the model, with different hyperparameters—that using a fixed radius of .5 consistently outperforms other values. Training does not converge with radius .25, and the quality of generated samples degrades as we increase the radius. We also tried using Van der Waals radii (where each atom type has its own radius), but results were not improved.
199
+
200
+ # 5 Conclusion
201
+
202
+ We introduce VoxMol, a novel score-based method for 3D molecule generation. This method generates molecules in a fundamentally different way than the current state of the art (i.e., diffusion models applied to atoms). The noise model used is also novel in the class of score-based generative models for molecules. We represent molecules on regular voxel grids, and VoxMol is trained to predict "clean" molecules from their noised counterparts. The denoising model (which approximates the score function of the smoothed density) is used to sample voxelized molecules with the walk-jump sampling strategy. Finally, atomic coordinates are retrieved by extracting the peaks from the generated voxel grids. Our experiments show that VoxMol scales better with data and outperforms (by a large margin) a representative state-of-the-art point cloud-based diffusion model on GEOM-drugs, while being faster to generate samples.
203
+
204
+ Broader impact. Generating molecules conditioned on some desiderata can have huge impacts in many different domains, such as, drug discovery, biology, materials, agriculture, climate, etc. This work deals with unconditional 3D molecule generation (in a pure algorithmic way): a problem that can be seen as an initial stepping stone (out of many) to this long-term objective. We, as a society, need to find solutions to use these technologies in ways that are safe, ethical, accountable and exclusively beneficial to society. These are important concerns and they need to be thought of at the same time we design machine learning algorithms.
205
+
206
+ Acknowledgements. The authors would like to thank the whole Prescient Design team for helpful discussions and Genentech's HPC team for providing a reliable environment to train/analyse models.
207
+
208
+ # References
209
+
210
+ [1] Saeed Saremi and Aapo Hyvarinen. Neural empirical Bayes. JMLR, 2019.
211
+ [2] Tobias Fink, Heinz Bruggesser, and Jean-Louis Reymond. Virtual exploration of the small-molecule chemical universe below 160 daltons. Angewandte Chemie International Edition, 2005.
212
+ [3] Jiankun Lyu, Sheng Wang, Trent E Balius, Isha Singh, Anat Levit, Yurii S Moroz, Matthew J O'Meara, Tao Che, Enkhjargal Algaa, Kateryna Tolmachova, et al. Ultra-large library docking for discovering new chemotypes. Nature, 2019.
213
+ [4] Regine S Bohacek, Colin McMartin, and Wayne C Guida. The art and practice of structure-based drug design: a molecular modeling perspective. Medicinal research reviews, 1996.
214
+ [5] Camille Bilodeau, Wengong Jin, Tommi Jaakkola, Regina Barzilay, and Klavs F Jensen. Generative models for molecular discovery: Recent advances and challenges. Computational Molecular Science, 2022.
215
+ [6] David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 1988.
216
+ [7] Mario Krenn, Florian Häse, Akshit Kumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (selfies): A $100\%$ robust molecular string representation. Machine Learning: Science and Technology, 2020.
217
+ [8] Marwin HS Segler, Thierry Kogej, Christian Tyrchan, and Mark P Waller. Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS central science, 2018.
218
+ [9] Thomas Blaschke, Marcus Olivecrona, Ola Engkvist, Jürgen Bajorath, and Hongming Chen. Application of generative autoencoder in de novo molecular design. Molecular informatics, 2018.
219
+ [10] Gabriel Lima Guimaraes, Benjamin Sanchez-Lengeling, Carlos Outeiral, Pedro Luis Cunha Farias, and Alán Aspuru-Guzik. Objective-reinforced generative adversarial networks (organ) for sequence generation models. arXiv preprint arXiv:1705.10843, 2017.
220
+ [11] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In ICML, 2018.
221
+ [12] Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
222
+ [13] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML, 2018.
223
+ [14] Omar Mahmood, Elman Mansimov, Richard Bonneau, and Kyunghyun Cho. Masked graph modeling for molecule generation. Nature Communications, 2021.
224
+ [15] Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In ICML, 2022.
225
+ [16] Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geometric diffusion model for molecular conformation generation. In ICLR, 2022.
226
+ [17] Ilia Igashov, Hannes Stärk, Clement Vignac, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael M Bronstein, and Bruno Correia. Equivariant 3d-conditional diffusion models for molecular linker design. In NeurIPS, AI for Science Workshop, 2022.
227
+ [18] Arne Schneuing, Yuanqi Du, Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Lió, Carla Gomes, Max Welling, et al. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695, 2022.
228
+ [19] Gabriele Corso, Hannes Stärk, Bowen Jing, Regina Barzilay, and Tommi S. Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. In ICLR, 2023.
229
+ [20] Clement Vignac, Nagham Osman, Laura Toni, and Pascal Frossard. Midi: Mixed graph and 3d denoising diffusion for molecule generation. arXiv preprint arXiv:2302.09048, 2023.
230
+ [21] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.
231
+
232
+ [22] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael JL Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. *ICLR*, 2021.
233
+ [23] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In ICML, 2021.
234
+ [24] Mario Geiger and Tess Smidt. e3nn: Euclidean neural networks. arXiv preprint arXiv:2207.09453, 2022.
235
+ [25] Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. ICLR, 2023.
236
+ [26] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? ICLR, 2019.
237
+ [27] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, 2019.
238
+ [28] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Neurips, 32, 2019.
239
+ [29] Ilyes Batatia, David P Kovacs, Gregor Simm, Christoph Ortner, and Gábor Csányi. MACE: Higher order equivariant message passing neural networks for fast and accurate force fields. Neurips, 35:11423-11436, 2022.
240
+ [30] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
241
+ [31] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
242
+ [32] Nate Gruver, Marc Finzi, Micah Goldblum, and Andrew Gordon Wilson. The Lie derivative for measuring learned equivariance. arXiv preprint arXiv:2210.02984, 2022.
243
+ [33] Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Learning a continuous representation of 3d molecular structures with deep generative models. In Neurips, Structural Biology workshop, 2020.
244
+ [34] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020.
245
+ [35] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NeurIPS, 2021.
246
+ [36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
247
+ [37] Saeed Saremi and Rupesh Kumar Srivastava. Multimeasurement generative models. ICLR, 2022.
248
+ [38] Saeed Saremi, Rupesh Kumar Srivastava, and Francis Bach. Universal smoothed score functions for generative modeling. arXiv preprint arXiv:2303.11669, 2023.
249
+ [39] Nathan C Frey, Dan Berenberg, Joseph Kleinhenz, Isidro Hotzel, Julien Lafrance-Vanasse, Ryan Lewis Kelly, Yan Wu, Arvind Rajpal, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, and Saeed Saremi. Learning protein family manifolds with smoothed energy-based models. In ICLR, Workshop on Physics for Machine Learning, 2023.
250
+ [40] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 2018.
251
+ [41] Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 2022.
252
+ [42] Miha Skalic, José Jiménez, Davide Sabbadin, and Gianni De Fabritiis. Shape-based generative modeling for de novo drug design. Journal of chemical information and modeling, 2019.
253
+ [43] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
254
+ [44] Lvwei Wang, Rong Bai, Xiaoxuan Shi, Wei Zhang, Yinuo Cui, Xiaoman Wang, Cheng Wang, Haoyu Chang, Yingsheng Zhang, Jielong Zhou, et al. A pocket-based 3d molecule generative model fueled by experimental electron density. Scientific reports, 2022.
255
+
256
+ [45] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
257
+ [46] Fergus Imrie, Thomas E Hadfield, Anthony R Bradley, and Charlotte M Deane. Deep generative design with 3d pharmacophoric constraints. Chemical science, 2021.
258
+ [47] Niklas Gebauer, Michael Gastegger, and Kristof Schütt. Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules. In NeurIPS, 2019.
259
+ [48] Niklas Gebauer, Michael Gastegger, and Kristof T Schütt. Generating equilibrium molecules with deep neural networks. arXiv preprint arXiv:1810.11347, 2018.
260
+ [49] Youzhi Luo and Shuiwang Ji. An autoregressive flow model for 3d molecular geometry generation from scratch. In ICLR, 2022.
261
+ [50] Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: exact likelihood generative learning for symmetric densities. In ICML, 2020.
262
+ [51] Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling. E (n) equivariant normalizing flows. In NeurIPS, 2021.
263
+ [52] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In ICML, 2015.
264
+ [53] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
265
+ [54] Lei Huang, Hengtong Zhang, Tingyang Xu, and Ka-Chun Wong. Mdm: Molecular diffusion model for 3d molecule generation. arXiv preprint arXiv:2209.05710, 2022.
266
+ [55] Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, et al. Diffusion-based molecule generation with informative prior bridges. In NeurIPS, 2022.
267
+ [56] Minkai Xu, Alexander Powers, Ron Dror, Stefano Ermon, and Jure Leskovec. Geometric latent diffusion models for 3d molecule generation. In ICML, 2023.
268
+ [57] Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. Molecular geometry prediction using a deep generative graph neural network. Scientific Reports, 2019.
269
+ [58] Gregor NC Simm and José Miguel Hernández-Lobato. A generative model for molecular distance geometry. ICML, 2020.
270
+ [59] Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 2019.
271
+ [60] Gregor NC Simm, Robert Pinsler, Gábor Csányi, and José Miguel Hernández-Lobato. Symmetry-aware actor-critic for 3d molecular design. arXiv preprint arXiv:2011.12747, 2020.
272
+ [61] Octavian Ganea, Lagnajit Pattanaik, Connor Coley, Regina Barzilay, Klavs Jensen, William Green, and Tommi Jaakkola. Geomol: Torsional geometric generation of molecular 3d conformer ensembles. In NeurIPS, 2021.
273
+ [62] Xingang Peng, Shitong Luo, Jiaqi Guan, Qi Xie, Jian Peng, and Jianzhu Ma. Pocket2mol: Efficient molecular sampling based on 3d protein pockets. In ICML, 2022.
274
+ [63] Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In ICML, 2021.
275
+ [64] Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi Jaakkola. Torsional diffusion for molecular conformer generation. arXiv preprint arXiv:2206.01729, 2022.
276
+ [65] Siyu Long, Yi Zhou, Xinyu Dai, and Hao Zhou. Zero-shot 3d drug design by sketching and generating. In NeurIPS, 2022.
277
+ [66] Keir Adams and Connor W Coley. Equivariant shape-conditioned generation of 3d molecules for ligand-based drug design. arXiv preprint arXiv:2210.04893, 2022.
278
+ [67] Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su, Jian Peng, and Jianzhu Ma. 3d equivariant diffusion for target-aware molecule generation and affinity prediction. In ICLR, 2023.
279
+
280
+ [68] Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Generating 3d molecules conditional on receptor binding sites with deep generative models. Chemical science, 2022.
281
+ [69] David Schaller, Dora Šribar, Theresa Noonan, Lihua Deng, Trung Ngoc Nguyen, Szymon Pach, David Machalz, Marcel Bermudez, and Gerhard Wolber. Next generation 3d pharmacophore modeling. Wiley Interdisciplinary Reviews: Computational Molecular Science, 2020.
282
+ [70] Raphael JL Townshend, Martin Vögele, Patricia Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon Anderson, Stephan Eismann, et al. Atom3d: Tasks on molecules in three dimensions. NeurIPS, 2020.
283
+ [71] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-voxel cnn for efficient 3d deep learning. In NeurIPS, 2019.
284
+ [72] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. NeurIPS, 2018.
285
+ [73] Ivan Diaz, Mario Geiger, and Richard Iain McKinley. An end-to-end se (3)-equivariant segmentation network. arXiv preprint arXiv:2303.00351, 2023.
286
+ [74] Jiehong Lin, Hongyang Li, Ke Chen, Jiangbo Lu, and Kui Jia. Sparse steerable convolutions: An efficient learning of se (3)-equivariant features for estimation and tracking of object poses in 3d space. NeurIPS, 2021.
287
+ [75] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In CVPR, 2015.
288
+ [76] Diane Bouchacourt, Mark Ibrahim, and Ari Morcos. Grounding inductive biases in natural images: invariance stems from variations in data. In NeurIPS, 2021.
289
+ [77] Daniel Flam-Shepherd and Alán Aspuru-Guzik. Language models can generate molecules, materials, and protein binding sites directly in three dimensions as xyz, cif, and pdb files. arXiv preprint arXiv:2305.05708, 2023.
290
+ [78] Matthew Ragoza, Joshua Hochuli, Elisa Idrobo, Jocelyn Sunseri, and David Ryan Koes. Protein-ligand scoring with convolutional neural networks. Journal of chemical information and modeling, 2017.
291
+ [79] Michael Maser and SE Reisman. 3d computer vision models predict dft-level homo-lumo gap energies from force-field-optimized geometries. ChemRxiv, 2021.
292
+ [80] Herbert Ellis Robbins. An empirical Bayes approach to statistics. In Proc. 3rd Berkeley Symp. Math. Statist. Probab., 1956.
293
+ [81] Koichi Miyasawa. An empirical Bayes estimator of the mean of a normal population. Bull. Inst. Internat. Statist, 1961.
294
+ [82] Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. JMLR, 2005.
295
+ [83] Giorgio Parisi. Correlation functions and computer simulations. Nuclear Physics B, 1981.
296
+ [84] Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. In COLT, 2018.
297
+ [85] Matthias Sachs, Benedict Leimkuhler, and Vincent Danos. Langevin dynamics with variable coefficients and nonconservative forces: from stationary states to numerical methods. Entropy, 2017.
298
+ [86] Saeed Saremi, Ji Won Park, and Francis Bach. Chain of log-concave Markov chains. arXiv preprint arXiv:2305.19473, 2023.
299
+ [87] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
300
+ [88] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019.
301
+ [89] Noel M O'Boyle, Michael Banck, Craig A James, Chris Morley, Tim Vandermeersch, and Geoffrey R Hutchison. Open babel: An open chemical toolbox. Journal of cheminformatics, 2011.
302
+ [90] Greg Landrum. Rdkit: Open-source cheminformatics software, 2016.
303
+
304
+ [91] Charles Harris, Kieran Didi, Arian R Jamasb, Chaitanya K Joshi, Simon V Mathis, Pietro Lio, and Tom Blundell. Benchmarking generated poses: How rational is structure-based drug design with generative models? arXiv preprint arXiv:2308.07413, 2023.
305
+ [92] Anthony K Rappé, Carla J Casewit, KS Colwell, William A Goddard III, and W Mason Skiff. UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. Journal of the American Chemical Society, 1992.
306
+ [93] Lin Li, Chuan Li, and Emil Alexov. On the modeling of polar component of solvation energy using smooth gaussian-based dielectric function. Journal of Theoretical and Computational Chemistry, 2014.
307
+ [94] Gabriele Orlando, Daniele Raimondi, Ramon Duran-Romana, Yves Moreau, Joost Schymkowitz, and Frederic Rousseau. Pyuul provides an interface between biological structures and deep learning algorithms. Nature communications, 2022.
308
+ [95] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
309
+ [96] Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 2018.
310
+ [97] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
311
+
312
+ # A Extra implementation details
313
+
314
+ # A.1 Voxel representation
315
+
316
+ Molecules in our datasets are converted into voxelized atomic densities. For each molecule, we consider a box around its center and divide it into discrete volume elements. Following [93, 94], we first convert each atom of each molecule into a 3D Gaussian-like density:
317
+
318
+ $$
319
+ V_a(d, r_a) = \exp\left(-\frac{d^2}{(0.93 \cdot r_a)^2}\right), \tag{5}
320
+ $$
321
+
322
+ where $V_{a}$ is the fraction of volume occupied by atom $a$ of radius $r_a$ at distance $d$ from its center. Although we could use a different radius for each element, in this work all atoms share the same radius $r_a = 0.5\,\mathrm{\AA}$. The occupancy of each voxel in the grid is computed by combining the occupancies generated by every atom in the molecule:
323
+
324
+ $$
325
+ \mathrm{Occ}_{i,j,k} = 1 - \prod_{n=1}^{N_a} \left(1 - V_{a_n}\left(\left\lVert C_{i,j,k} - x_n \right\rVert, r_{a_n}\right)\right), \tag{6}
326
+ $$
327
+
328
+ where $N_{a}$ is the number of atoms in the molecule, $a_{n}$ is the $n^{\text{th}}$ atom, $C_{i,j,k}$ are the coordinates of grid cell $(i,j,k)$, and $x_{n}$ are the coordinates of the center of atom $n$ [93]. The occupancy takes its maximum value of 1 at the center of an atom and decays to 0 away from it. Each channel is treated independently of the others: channels do not interact or share volumetric contributions. We use the Python package PyUUL [94] to generate the voxel grids from the raw molecules (.xyz or .sdf format).
329
+
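+ As a concrete illustration of Eqs. (5)-(6), the sketch below builds a single-channel occupancy grid with NumPy. It is a minimal re-implementation for illustration only (the actual grids are produced with PyUUL); the voxel edge length used here (0.25 Å) and the function name `voxelize_channel` are assumptions, not values taken from this appendix.
+
+ ```python
+ import numpy as np
+
+ def voxelize_channel(coords, radius=0.5, grid_size=32, voxel_size=0.25):
+     """Occupancy grid (Eqs. 5-6) for the atoms of one element channel.
+     coords: (N, 3) atom centers in Angstroms, already centered on the grid."""
+     axis = (np.arange(grid_size) - grid_size / 2 + 0.5) * voxel_size
+     gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
+     grid_xyz = np.stack([gx, gy, gz], axis=-1)           # (G, G, G, 3) voxel-center coordinates
+     occ = np.ones((grid_size,) * 3)
+     for x_n in coords:
+         d = np.linalg.norm(grid_xyz - x_n, axis=-1)      # distance of each voxel center to the atom
+         v = np.exp(-(d ** 2) / (0.93 * radius) ** 2)     # per-atom density, Eq. (5)
+         occ *= 1.0 - v                                   # accumulate the product of Eq. (6)
+     return 1.0 - occ                                     # voxel occupancies in [0, 1]
+ ```
+
+ One such grid is computed per element channel, and the channels are stacked into the multi-channel tensors described in the next paragraph.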
330
+ We use grids with $32^{3}$ voxels on QM9 and $64^{3}$ on GEOM-drugs and place the molecules at the center of the grid. These volumes cover over $99\%$ of all points in the datasets. For QM9, we use all 5 chemical elements present in the dataset (C, H, O, N and F), while for GEOM-drugs we use 8 (C, H, O, N, F, S, Cl and Br). We model hydrogen explicitly in all our experiments. Finally, the input voxel grids are of dimension $\mathbb{R}^{5\times 32\times 32\times 32}$ and $\mathbb{R}^{8\times 64\times 64\times 64}$ for QM9 and GEOM-drugs, respectively. We augment the dataset during training by applying a random rotation and translation to each molecule. For rotation, we sample three Euler angles uniformly in $[0,2\pi)$ and rotate the training sample. For translation, we randomly shift the center of the molecule along each of the three dimensions by sampling a uniform shift in $[0, 0.25]\,\mathrm{\AA}$.
331
+
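+ The augmentation described above can be sketched as follows; rotating about the molecule's center and the use of SciPy's `Rotation` helper are assumptions of this sketch.
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation
+
+ def augment(coords, rng=None):
+     """Random rotation (three uniform Euler angles) and per-axis shift in [0, 0.25] Angstrom."""
+     rng = np.random.default_rng() if rng is None else rng
+     angles = rng.uniform(0.0, 2.0 * np.pi, size=3)          # three Euler angles in [0, 2*pi)
+     R = Rotation.from_euler("xyz", angles).as_matrix()      # (3, 3) rotation matrix
+     shift = rng.uniform(0.0, 0.25, size=3)                  # per-axis translation in Angstroms
+     center = coords.mean(axis=0)
+     return (coords - center) @ R.T + center + shift         # rotate about the center, then shift
+ ```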
332
+ # A.2 Architecture
333
+
334
+ Our neural network follows a standard encoder-decoder convnet architecture, with a recipe very similar to DDPM [34]. The model uses four levels of resolution: $32^3$ to $4^3$ for the QM9 dataset and $64^3$ to $8^3$ for the GEOM-drugs dataset. The input voxel grid is embedded into a 32-dimensional space with a grid projection layer (a 3D convnet with kernel size $3 \times 3 \times 3$). Each resolution (on both encoder and decoder) has two convolutional residual blocks. Each block contains a group normalization [95] layer, followed by a SiLU [96] non-linearity and a 3D convnet (with kernel size $3 \times 3 \times 3$). All convolutions have stride 1 and we pad the feature maps with 1 on each side. We use self-attention layers between the convolutional layers at the two lowest resolutions. We reduce (respectively, increase) the resolution of the encoder (decoder) with $2 \times 2 \times 2$ (stride 1) max-poolings (bilinear upsampling). The model has skip connections at each resolution to concatenate the encoder feature map with the decoder feature map. We double the number of feature maps at each resolution, except at the last resolution, where we quadruple it. VoxMol has approximately 111M parameters. We also implemented a smaller version (with a reduced number of channels per layer) with around 30M parameters. These models achieve performance close to the base model and are faster to train and sample from.
335
+
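+ A minimal PyTorch sketch of one such residual block is shown below. The number of normalization groups and the placement of the additive skip connection are assumptions of this sketch; they are not specified in this appendix.
+
+ ```python
+ import torch.nn as nn
+
+ class ResBlock3D(nn.Module):
+     """GroupNorm -> SiLU -> 3x3x3 Conv3d (stride 1, padding 1), with an additive skip connection."""
+     def __init__(self, channels, groups=8):
+         super().__init__()
+         self.norm = nn.GroupNorm(groups, channels)
+         self.act = nn.SiLU()
+         self.conv = nn.Conv3d(channels, channels, kernel_size=3, stride=1, padding=1)
+
+     def forward(self, x):
+         return x + self.conv(self.act(self.norm(x)))
+ ```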
336
+ # A.3 Built-in SE(3) equivariance experiments
337
+
338
+ In early experiments, we attempted to use an SE(3)-equivariant 3D U-Net based on steerable convnets [72] for denoising, but these attempts were not successful. The hypothesis is that a built-in SE(3)-equivariant version of our model, $\mathrm{VoxMol_{equi}}$, would be advantageous over the non-equivariant version for the task of molecule generation. We start with the official implementation of [73] and tune several network hyperparameters (related to architecture, optimization and training) so that the network is able to achieve good denoising metrics on QM9. We then use the same procedure to
339
+
340
+ generate samples as described in the main paper (only switching the network from the non-equivariant to the equivariant version). We tried different sampling hyperparameters, but we were never able to achieve the same performance as non-equivariant VoxMol. Table 4 compares the results of the model with and without built-in SE(3) equivariance.
341
+
342
+ <table><tr><td>QM9</td><td>stable mol %↑</td><td>stable atom %↑</td><td>valid %↑</td><td>unique %↑</td><td>valency W1↓</td><td>atom TV↓</td><td>bond TV↓</td><td>bond len W1↓</td><td>bond ang W1↓</td></tr><tr><td>VoxMol</td><td>89.3</td><td>99.2</td><td>98.7</td><td>92.1</td><td>.023</td><td>.029</td><td>.009</td><td>.003</td><td>1.96</td></tr><tr><td>VoxMolequi</td><td>25.1</td><td>81.8</td><td>95.9</td><td>92.9</td><td>13.2</td><td>.104</td><td>.015</td><td>.015</td><td>5.31</td></tr></table>
343
+
344
+ Table 4: Results on QM9 of our model without (VoxMol) and with (VoxMolequi) built-in SE(3) equivariance.
345
+
346
+ There might be many reasons why this is the case: (i) the best reconstruction loss we found with the equivariant model is higher than that of the non-equivariant one (approx. $9.4 \times 10^{-5}$ vs. $5.4 \times 10^{-5}$ MSE on the validation set), (ii) the equivariant model needs more capacity to be competitive with the non-equivariant one (currently it has over $90 \times$ fewer parameters), (iii) something in the sampling procedure needs to be different for the equivariant version (unlikely).
347
+
348
+ We hypothesize that if an equivariant version of VoxMol achieves a similar (or lower) reconstruction loss to the vanilla version, it will probably achieve competitive or better results in the task of molecule generation. Finally, our equivariant implementation is less efficient (around $50-60\%$ slower) and consumes more memory than the original version. This poses, therefore, an extra challenge to scale up the size of the dataset and the size of the molecules (e.g., GEOM-drugs requires a $64^{3}$ voxel grid).
349
+
350
+ # A.4 Training and sampling
351
+
352
+ The weights are optimized with batch size 128 and 64 (for QM9 and GEOM-drugs, respectively), the AdamW optimizer ($\beta_{1} = 0.9$, $\beta_{2} = 0.999$), a learning rate of $10^{-5}$ and weight decay of $10^{-2}$. The models are trained for 500 epochs on QM9 and around 24 epochs on GEOM-drugs. We discretize the underdamped Langevin MCMC (Equation 4) with the algorithm proposed by Sachs et al. [85] (which has been applied to images before [37]). Algorithm 1 describes this process.
353
+
354
+ Algorithm 1: Walk-jump sampling [1] using the discretization of Langevin diffusion by [85]. Lines 6-13 correspond to the walk step and line 14 to the jump step.
355
+ 1: Input $\delta$ (step size), $u$ (inverse mass), $\gamma$ (friction), $K$ (steps taken)
356
+ 2: Input Learned score function $g_{\theta}(y) \approx \nabla_y \log p(y)$ and noise level $\sigma$
357
+ 3: Output $\widehat{x}_K$
358
+ 4: $y_0 \sim \mathcal{N}(0, \sigma^2 I_d) + \mathcal{U}_d(0, 1)$
359
+ 5: $v_0 \gets 0$
360
+ 6: for $k = 0, \dots, K-1$ do
361
+ 7: $y_{k+1} \gets y_k + \frac{\delta}{2} v_k$
362
+ 8: $g \gets g_{\theta}(y_{k+1})$
363
+ 9: $v_{k+1} \gets v_k + \frac{u\delta}{2} g$
364
+ 10: $\varepsilon \sim \mathcal{N}(0, I_d)$
365
+ 11: $v_{k+1} \gets \exp(-\gamma\delta) v_{k+1} + \frac{u\delta}{2} g + \sqrt{u(1 - \exp(-2\gamma\delta))}\varepsilon$
366
+ 12: $y_{k+1} \gets y_{k+1} + \frac{\delta}{2} v_{k+1}$
367
+ 13: end for
368
+ 14: $\hat{x}_K \gets y_K + \sigma^2 g_{\theta}(y_K)$
369
+
370
+ We use $\gamma = 1.0$, $u = 1.0$, $\delta = 0.5$ for all sampling runs, and we generate multiple chains in parallel (200 chains for QM9 and 100 for GEOM-drugs). We follow [37] and initialize the chains by adding uniform noise to the initial Gaussian noise (with the same $\sigma$ used during training), i.e., $y_0 \sim \mathcal{N}(0, \sigma^2 I_d) + \mathcal{U}_d(0, 1)$ (this was observed to mix faster in practice).
371
+
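+ A minimal PyTorch sketch of Algorithm 1 with the hyperparameters above is given below. The learned score function `score_fn` (i.e., $g_\theta$), the grid `shape`, and the number of walk steps `K` are assumptions of this sketch; they come from the trained model and the sampling budget.
+
+ ```python
+ import math
+ import torch
+
+ @torch.no_grad()
+ def walk_jump_sample(score_fn, shape, sigma, K=1000, delta=0.5, gamma=1.0, u=1.0, device="cpu"):
+     """Walk-jump sampling following Algorithm 1 (discretization of Sachs et al. [85])."""
+     y = sigma * torch.randn(shape, device=device) + torch.rand(shape, device=device)  # line 4
+     v = torch.zeros_like(y)                                                           # line 5
+     for _ in range(K):                                                                # walk steps
+         y = y + 0.5 * delta * v                                                       # line 7
+         g = score_fn(y)                                                               # line 8
+         v = v + 0.5 * u * delta * g                                                   # line 9
+         eps = torch.randn_like(y)                                                     # line 10
+         v = (math.exp(-gamma * delta) * v + 0.5 * u * delta * g
+              + math.sqrt(u * (1.0 - math.exp(-2.0 * gamma * delta))) * eps)           # line 11
+         y = y + 0.5 * delta * v                                                       # line 12
+     return y + sigma ** 2 * score_fn(y)                                               # jump step, line 14
+ ```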
372
+ All experiments and analyses in this paper were run on A100 GPUs with PyTorch [97]. The models on QM9 were trained with 2 GPUs and the models on GEOM-drugs with 4 GPUs.
373
+
374
+ <table><tr><td>dset</td><td>coordinates ref.</td><td>stable mol %↑</td><td>stable atom %↑</td><td>valid %↑</td><td>unique %↑</td><td>valency W1↓</td><td>atom TV↓</td><td>bond TV↓</td><td>bond len W1↓</td><td>bond ang W1↓</td></tr><tr><td rowspan="2">QM9</td><td>-</td><td>80.5</td><td>98.5</td><td>98.1</td><td>93.3</td><td>.051</td><td>.028</td><td>.005</td><td>.008</td><td>2.94</td></tr><tr><td>✓</td><td>89.3</td><td>99.2</td><td>98.7</td><td>92.1</td><td>.023</td><td>.029</td><td>.009</td><td>.003</td><td>1.96</td></tr><tr><td rowspan="2">GEOM</td><td>-</td><td>73.9</td><td>99.0</td><td>94.7</td><td>98.6</td><td>.236</td><td>.030</td><td>.038</td><td>.008</td><td>2.92</td></tr><tr><td>✓</td><td>74.9</td><td>98.1</td><td>93.4</td><td>99.2</td><td>.254</td><td>.033</td><td>.036</td><td>.002</td><td>.63</td></tr></table>
375
+
376
+ Table 5: Effect of coordinate refinement on QM9 and GEOM-drugs. We use 10,000 samples from each method.
377
+
378
+ # A.5 Recovering atomic coordinates from voxel grid
379
+
380
+ Figure 4 shows our pipeline to recover atomic coordinates and molecular graphs from generated voxelized molecules. In the first step, we use the model to "jump" to the data manifold, generating a sample in the voxelized representation, $x_{k}$. We set to 0 all voxels with value less than 0.1, i.e., $x_{k} \leftarrow x_{k} \cdot \mathbb{1}_{\geq 0.1}(x_{k})$. We then apply a simple peak finding algorithm to find the voxel coordinates corresponding to the peaks in the generated sample. Our peak finding algorithm uses a maximum filter with a $3\times 3\times 3$ stencil to find local maxima. Note that this algorithm always returns points on the voxel grid and is therefore limited by the resolution of the discretization.
381
+
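+ The thresholding and $3\times 3\times 3$ maximum-filter peak detection described above can be sketched with SciPy as follows; treating each element channel independently is an assumption consistent with the voxel representation of Section A.1.
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import maximum_filter
+
+ def find_peaks(grid, threshold=0.1):
+     """grid: (C, G, G, G) generated voxel occupancies, one channel per element type.
+     Returns, per channel, the integer voxel coordinates of the local maxima."""
+     grid = np.where(grid < threshold, 0.0, grid)            # zero out voxels below the threshold
+     peaks = []
+     for channel in grid:
+         is_peak = (channel == maximum_filter(channel, size=3)) & (channel > 0)
+         peaks.append(np.argwhere(is_peak))                   # (num_peaks, 3) voxel indices
+     return peaks
+ ```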
382
+ To further refine the atomic coordinates, we take advantage of the fact that our voxelization procedure is differentiable and perform gradient-based optimization of the coordinates. Specifically, we use L-BFGS to optimize the atomic coordinates based on the L2 norm of the reconstruction error in the voxel representation. Note that, unlike some previous work [33], we perform peak detection and refinement in a single pass and do not search over multiple possible numbers of atoms or atom identities.
383
+
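+ A minimal sketch of the L-BFGS refinement is shown below, assuming a differentiable PyTorch re-implementation `voxelize(coords)` of Eqs. (5)-(6); the optimizer settings are illustrative.
+
+ ```python
+ import torch
+
+ def refine(coords_init, target_grid, voxelize, max_iter=100):
+     """Refine peak coordinates by minimizing the squared reconstruction error in voxel space."""
+     coords = coords_init.clone().detach().requires_grad_(True)   # (N_atoms, 3), in Angstroms
+     optimizer = torch.optim.LBFGS([coords], max_iter=max_iter)
+
+     def closure():
+         optimizer.zero_grad()
+         loss = ((voxelize(coords) - target_grid) ** 2).sum()     # L2 reconstruction error
+         loss.backward()
+         return loss
+
+     optimizer.step(closure)
+     return coords.detach()
+ ```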
384
+ Table 5 shows the effect of coordinate refinement on molecule generation. We generate molecules in the same setting as in the experimental section.
385
+
386
+ Once we have obtained the optimized atomic coordinates, we follow previous work [33, 18, 17, 20] and use standard cheminformatics software to determine the molecule's atomic bonds.
387
+
388
+ # A.6 Metrics
389
+
390
+ Most of the metrics used to benchmark models come from [20]<sup>8</sup>. Below we describe the metrics:
391
+
392
+ - Atom stability: the percentage of generated atoms with the right valency. This metric is computed on the raw 3D sample (before any postprocessing) and is therefore more stringent than validity.
393
+ - Molecule stability: the percentage of generated molecules where all its atoms are stable.
394
+ - Validity: The percentage of generated molecules that pass RDKit's sanitization filter.
395
+ - Uniqueness: The proportion of valid molecules (defined above) that have a unique canonical SMILES representation (generated with RDKit).
396
+ - Atoms TV: The total variation between the distribution of atom types in the generated and test sets (a minimal code sketch of these histogram-based metrics is given after this list). We consider 5 atom types on QM9 and 8 atom types on GEOM-drugs. The histograms $\hat{h}_{\mathrm{atm}}$ and $h_{\mathrm{atm}}$ are generated by counting the number of atoms of each type over all molecules in both the generated and real sample sets. The total variation is computed as:
397
+
398
+ $$
399
+ \text{AtomsTV}(\hat{h}_{\mathrm{atm}}, h_{\mathrm{atm}}) = \sum_{x \in \text{atom types}} \left| \hat{h}_{\mathrm{atm}}(x) - h_{\mathrm{atm}}(x) \right|
400
+ $$
401
+
402
+ - Bonds TV: Similar to above, the histograms for real and generated samples are created by counting all bond types on all molecules. The total variation is computed as:
403
+
404
+ $$
405
+ \text{BondsTV}(\hat{h}_{\text{bond}}, h_{\text{bond}}) = \sum_{x \in \text{bond types}} \left| \hat{h}_{\text{bond}}(x) - h_{\text{bond}}(x) \right|
406
+ $$
407
+
408
+ - Valency $W_{1}$ : This is the weighted sum of the Wasserstein distance between the distribution of valencies for each atom type:
409
+
410
+ $$
411
+ \text{Valency}\,\mathrm{W}_{1}(\text{generated}, \text{target}) = \sum_{x \in \text{atom types}} p(x)\, W_{1}\left(\hat{h}_{\text{val}}(x), h_{\text{val}}(x)\right),
412
+ $$
413
+
414
+ where $\hat{h}_{\mathrm{val}}(x)$ and $h_{\mathrm{val}}(x)$ are the histogram of valencies for atom type $x$ for generated and holdout set samples, respectively.
415
+
416
+ - Bond length $W_{1}$ : The weighted sum of the Wasserstein distance between the distribution of bond lengths for each bond type:
417
+
418
+ $$
419
+ \text{BondLen}\,\mathrm{W}_{1}(\text{generated}, \text{target}) = \sum_{b \in \text{bond types}} p(b)\, W_{1}\left(\hat{h}_{\text{dist}}(b), h_{\text{dist}}(b)\right),
420
+ $$
421
+
422
+ where $\hat{h}_{\mathrm{dist}}(b)$ and $h_{\mathrm{dist}}(b)$ are the histogram of bond lengths for bond type $b$ , for generated and holdout set samples, respectively.
423
+
424
+ - Bond angles $W_{1}$ : The weighted sum of the Wasserstein distance between the distribution of bond angles (in degrees) for each atom type in the dataset:
425
+
426
+ $$
427
+ \text{BondAng}\,\mathrm{W}_{1}(\text{generated}, \text{target}) = \sum_{x \in \text{atom types}} p(x)\, W_{1}\left(\hat{h}_{\text{ang}}(x), h_{\text{ang}}(x)\right),
428
+ $$
429
+
430
+ where $\hat{h}_{\mathrm{ang}}(x)$ and $h_{\mathrm{ang}}(x)$ are the histogram of angles for atom type $x$ for generated and holdout set samples, respectively. See [20] for how angles are measured.
431
+
432
+ - Strain energy: The strain energy of a generated molecule is computed as the difference between the energy of the generated pose and the energy of a relaxed pose. The relaxation and the energies are computed with the UFF force field provided by RDKit. We use [91]'s implementation<sup>9</sup>.
433
+
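+ The histogram-based metrics above can be sketched as follows; the exact binning and normalization conventions of [20] are not reproduced here, so treat this only as an illustration of the formulas.
+
+ ```python
+ import numpy as np
+ from scipy.stats import wasserstein_distance
+
+ def histogram_tv(gen_counts, ref_counts, types):
+     """Total variation between two normalized type histograms (e.g. atom or bond types)."""
+     gen = np.array([gen_counts.get(t, 0) for t in types], dtype=float)
+     ref = np.array([ref_counts.get(t, 0) for t in types], dtype=float)
+     gen, ref = gen / gen.sum(), ref / ref.sum()
+     return float(np.abs(gen - ref).sum())
+
+ def weighted_w1(gen_values, ref_values, weights):
+     """Weighted sum of W1 distances (e.g. Valency/BondLen/BondAng W1). gen_values[x] and
+     ref_values[x] are lists of observed values for type x, weights[x] is p(x)."""
+     return float(sum(weights[x] * wasserstein_distance(gen_values[x], ref_values[x])
+                      for x in weights))
+ ```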
434
+ # A.7 Guiding the generation process
435
+
436
+ Like diffusion models, our method also leverages (learned) score functions and relies on Langevin MCMC for sampling. Therefore, in theory, we can condition VoxMol similarly to how it is done in diffusion models: by constraining the score function as we walk through the MCMC chain. In the case of diffusion models, the score function at every step is constrained to guide the transition steps from noise to a (conditioned) sample. In VoxMol, the constrained score function would affect the "walk steps" (the Langevin MCMC steps): it would restrict the region where the chain samples noisy molecules $y$ to $p(y|c)$ (instead of $p(y)$), where $c$ is the condition (e.g., the gradient of a classifier). The "jump step" (a forward pass of the denoising network over the noised molecules) is independent of the condition and remains unchanged.
437
+
438
+ Many of the innovations in conditioning diffusion models come from computer vision, where U-Nets are usually used. Since VoxMol has the same architecture (albeit 3D instead of 2D), many of the conditioning techniques and tricks used on images may be more easily transferable. For example, we could in principle use the gradient of a classifier (trained jointly) to guide the sampling (using the same trick as in Dhariwal and Nichol [35]) or adapt gradient-free guidance [34]. Pocket conditioning could also be possible, as in, e.g., [18, 67], but utilizing voxel representations instead of point clouds and neural empirical Bayes instead of diffusion models. In-painting has also proven to work very well with 2D U-Nets, so it could potentially work with 3D U-Nets as well. These in-painting techniques could also be leveraged in the context of molecule generation on voxel grids, e.g., for linker generation or scaffold/fragment conditioning.
439
+
440
+ # B Generated samples
441
+
442
+ ![](images/53e6f4723cb233a46bef5bccba06f437f3cd1ef6d08277401a515227355ef5e5.jpg)
443
+ Figure 8: Randomly generated samples from VoxMol trained on QM9 (passing RDKit's sanitization). Molecular graphs are generated with RDKit.
444
+
445
+ ![](images/b2615d2eeafb2711b9624e7502ff9460cf69c07466840efb4b617b74ec444682.jpg)
446
+ Figure 9: Randomly generated samples from VoxMol trained on GEOM-drugs (passing RDKit's sanitization). Molecular graphs are generated with RDKit.
3dmoleculegenerationbydenoisingvoxelgrids/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4859b66ef363115ec285164e2ddbb61c9ef1081b6345a59cda1835afb556d743
3
+ size 828147
3dmoleculegenerationbydenoisingvoxelgrids/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dcf2b7085c815abf80fb6f178013f56fd686221f525076698d75d46fa4443c32
3
+ size 628472
4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5917ac3ff88ec36c19f0205f1b24a00009dd5280c90f6e00a60ad595099fc1d7
3
+ size 78199
4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63ca9ca1177cfc6acfb7a49f762a12e0ffbfd2ae5585e5764a07694fe4bfce5a
3
+ size 101588
4dpanopticscenegraphgeneration/fca91bbd-cfde-4ac5-88f3-7795e453aa2d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:479a0ced364432246b62a72b83f6c31210114666b329d7110c4cc73593a0d3ec
3
+ size 13878005
4dpanopticscenegraphgeneration/full.md ADDED
@@ -0,0 +1,270 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 4D Panoptic Scene Graph Generation
2
+
3
+ Jingkang Yang $^{1}$ , Jun Cen $^{2}$ , Wenxuan Peng $^{1}$ , Shuai Liu $^{3}$ , Fangzhou Hong $^{1}$ , Xiangtai Li $^{1}$ , Kaiyang Zhou $^{4}$ , Qifeng Chen $^{2}$ , Ziwei Liu $^{1}$
4
+
5
+ $^{1}$ S-Lab, Nanyang Technological University
6
+
7
+ 2The Hong Kong University of Science and Technology
8
+
9
+ $^{3}$ Beijing University of Posts and Telecommunications $^{4}$ Hong Kong Baptist University
10
+
11
+ https://github.com/Jingkang50/PSG4D
12
+
13
+ ![](images/75b3f03e48249328a44860e9db03098019c8e01d21180b72ac08ca0bb9900481.jpg)
14
+ (a) Visual Input from the 4D Dynamic World
15
+ (b) PSG-4D: 4D Panoptic Scene Graph
16
+
17
+ ![](images/a345ccd293fe97b6f5b9ee14f8490bcb7d0d836e161aed5751ca06bba0bd808c.jpg)
18
+ Figure 1: Conceptual illustration of PSG-4D. PSG-4D is essentially a spatiotemporal representation capturing not only fine-grained semantics in image pixels (i.e., panoptic segmentation masks) but also the temporal relational information (i.e., scene graphs). In (a) and (b), the model abstracts information streaming in RGB-D videos into (i) nodes that represent entities with accurate location and status information and (ii) edges that encapsulate the temporal relations. Such a rich 4D representation serves as a bridge between the PSG-4D system and a large language model, which greatly facilitates the decision-making process, as illustrated in (c).
19
+
20
+ ![](images/6ae545ca6553d8a70f0c60525f0c22ae60120edcace9f5b3975a574c14b3390a.jpg)
21
+ (c) Reasoning & Planning
22
+
23
+ ![](images/feb383c0baa9e12c71c71afc83e10627a5bf9b6962a6767a26eec527f606fae7.jpg)
24
+
25
+ # Abstract
26
+
27
+ We are living in a three-dimensional space while moving forward through a fourth dimension: time. To allow artificial intelligence to develop a comprehensive understanding of such a 4D environment, we introduce 4D Panoptic Scene Graph (PSG-4D), a new representation that bridges the raw visual data perceived in a dynamic 4D world and high-level visual understanding. Specifically, PSG-4D abstracts rich 4D sensory data into nodes, which represent entities with precise location and status information, and edges, which capture the temporal relations. To facilitate research in this new area, we build a richly annotated PSG-4D dataset consisting of 3K RGB-D videos with a total of 1M frames, each of which is labeled with 4D panoptic segmentation masks as well as fine-grained, dynamic scene graphs. To solve PSG-4D, we propose PSG4DFormer, a Transformer-based model that can predict panoptic segmentation masks, track masks along the time axis, and generate the corresponding scene graphs via a relation component. Extensive experiments on the new dataset show that our method can serve as a strong baseline for future research on PSG-4D. In the end, we provide a real-world application example to demonstrate how we can achieve dynamic scene understanding by integrating a large language model into our PSG-4D system.
28
+
29
+ # 1 Introduction
30
+
31
+ The emergence of intelligent agents, autonomous systems, and robots demands a profound understanding of real-world environments [1, 2, 3, 4, 5, 6]. This understanding involves more than just recognizing individual objects; it requires an intricate understanding of the relationships between these objects. In this context, research on Scene Graph Generation (SGG) [7] has sought to provide a more detailed, relational perspective on scene understanding. In this approach, scene graphs represent objects as nodes and their relationships as edges, offering a more comprehensive and structured understanding of the scene [8, 7, 9, 10, 11]. Panoptic Scene Graph Generation (PSG) [12] expands the scope of SGG to encompass pixel-level precise object localization and comprehensive scene understanding, including background elements. PSG has since been further extended to the video domain [13], drawing inspiration from Video Scene Graph Generation (VidSGG) [14, 15].
32
+
33
+ The utility of scene graphs also extends into the realm of 3D perception, introducing the concept of 3D Scene Graphs (3DSG) [16, 17]. 3DSGs offer a precise representation of object locations and inter-object relationships within three-dimensional scenes [18, 19]. Despite these developments, the existing approaches have not fully integrated dynamic, spatio-temporal relationships, particularly those involving human-object and human-human interactions. Consider Figure 1 as an illustrative example. Traditional 3D scene graph methods may recognize the static elements of this scene, such as identifying a booth situated on the ground. However, a more ideal, advanced, and dynamic perception is required for real-world scenarios. For instance, a system should be capable of identifying a dynamic event like a person who has fallen off their bike, so that it could then comprehend the necessity to offer assistance, like helping the person stand up and stabilize their bike.
34
+
35
+ Therefore, our work takes a significant step towards a more comprehensive approach to sensing and understanding the world. We introduce a new task, the 4D Panoptic Scene Graph (PSG-4D), aiming to bridge the gap between raw visual inputs in a dynamic 4D world and high-level visual understanding. PSG-4D comprises two main elements: nodes, representing entities with accurate location and status information, and edges, denoting temporal relations. This task encapsulates both spatial and temporal dimensions, bringing us closer to a true understanding of the dynamic world.
36
+
37
+ To facilitate research on this new task, we contribute an extensively annotated PSG-4D dataset that is composed of two subsets, PSG4D-GTA and PSG4D-HOI. The PSG4D-GTA subset consists of 67 RGB-D videos with a total of 28K frames, selected from the SAIL-VOS 3D dataset [20], which was collected from the video game Grand Theft Auto V (GTA-V) [21]. The PSG4D-HOI subset is a collection of 3K egocentric real-world videos sampled from the HOI4D dataset [22]. All frames in both subsets are labeled with 4D panoptic segmentation masks as well as fine-grained, dynamic scene graphs. We believe this dataset will serve as a valuable resource for researchers in the field.
38
+
39
+ To tackle this novel task, we propose a unified framework called PSG4DFormer. This unified structure encapsulates two primary components: a 4D Panoptic Segmentation model and a Relation model. The 4D Panoptic Segmentation model is designed to accommodate both RGB-D and point cloud data inputs, yielding a 4D panoptic segmentation. This output comprises 3D object masks, which are continuously tracked across temporal dimensions. Then, the Relation model accepts these 3D mask tubes and utilizes a spatial-temporal transformer architecture to delineate long-term dependencies and intricate inter-entity relationships, subsequently yielding a relational scene graph. Through extensive experiments, we demonstrate the effectiveness of the proposed PSG-4D task and the PSG4DFormer model. Our work constitutes a pivotal step towards a comprehensive understanding of dynamic environments, setting the stage for future research in this exciting and crucial area of study.
40
+
41
+ In summary, we make the following contributions to the community:
42
+
43
+ - A New Task: We propose a novel scene graph generation task focusing on the prediction of 4D panoptic scene graphs from RGB-D or point cloud video sequences.
44
+ - A New Dataset: We provide a PSG-4D dataset, which covers diverse viewpoints: (i) a third-view synthetic subset (PSG4D-GTA) and (ii) an egocentric real-world subset (PSG4D-HOI).
45
+ - A Unified Framework: We propose a unified two-stage model composed of a feature extractor and a relation learner. In addition, we offer demo support for both synthetic and real-world scenarios to facilitate future research and real-world applications.
46
+ - Open-Source Codebase: We open-source our codebase to facilitate future PSG-4D research.
47
+
48
+ # 2 Related Work
49
+
50
+ Scene Graph Generation (SGG) SGG transforms an image into a graph, where nodes represent objects and edges represent relationships [7]. Several datasets [23] and methods, including two-stage [8, 7, 9, 10, 11] and one-stage models [24, 12, 25], have been developed for SGG. Video scene graph generation (VidSGG) extends SGG to videos with notable datasets [14, 15, 26]. Despite progress, limitations remain in SGG and VidSGG due to noisy grounding annotations caused by coarse bounding box annotations and trivial relation definitions. Recent work on panoptic scene graph generation (PSG) [12, 27, 28, 29, 30, 31] has attempted to overcome these issues, and PVSG [13, 32] further extends it into the video domain. This paper presents an extension of PSG into a 4D dynamic world, meeting the needs of active agents for precise location and comprehensive scene understanding.
51
+
52
+ 3D Scene Graph Generation 3D Scene Graphs (3DSGs) offer a precise 3D representation of object locations and inter-object relationships, making them a vital tool for intelligent agents operating in real-world environments [16, 17]. 3DSGs can be categorized into flat and hierarchical structures [33]. The former represents objects and relationships as a simple graph [18, 19], while the latter layers the structures of 3D scenes [34, 35]. Recent 3DSG techniques [19] employ PointNet [36] with 3D object detectors on point clouds or RGBD scans, generating 3D graphs via graph neural networks [18]. Some settings, such as Kimera [37], emphasize pairwise spatiotemporal status to facilitate task planning, while incremental 3DSG necessitates agents to progressively explore environments [38]. However, these graphs largely represent positional relations, lacking dynamic spatiotemporal relations like human-object interactions and human-human relations.
53
+
54
+ 4D Perception Research on 4D perception can be divided by the specific data format used. The first is RGB-D video, which can be easily obtained using cheap sensors, e.g., Kinect and iPhone. With the additional depth data, more geometric and spatial information can be used for reliable and robust detection [39, 40, 41] and segmentation [42, 43, 44, 45]. For RGB-D video, the depth input is usually treated like an image. For point cloud videos, however, 3D or higher-dimensional convolutions [46, 47, 48, 49] are more commonly used, especially on LiDAR point cloud videos for autonomous driving perception systems. In this work, beyond 4D panoptic segmentation, we focus on more everyday scenes and pursue a more high-level and structured understanding of 4D scenes by building 4D scene graphs.
55
+
56
+ # 3 The PSG-4D Problem
57
+
58
+ The PSG-4D task is aimed at generating a dynamic scene graph, which describes a given 4D environment. In this context, each node corresponds to an object, while each edge represents a spatial-temporal relation. The PSG-4D model ingests either an RGB-D video sequence or a point cloud video sequence, subsequently outputting a PSG-4D scene graph $\mathbf{G}$ . This graph is composed of 4D object binary mask tubes $\mathbf{M}$ , object labels $\mathbf{O}$ , and relations $\mathbf{R}$ .
59
+
60
+ The object binary mask tubes, $\mathbf{m}_i\in \{0,1\}^{T\times H\times W\times 4}$ , express the 3D location and extent of the tracked object $i$ over time $(T)$ in the case of an RGB-D sequence input, while $\mathbf{m}_i\in \{0,1\}^{T\times M\times 6}$ is used for point cloud video inputs. Here, 4 denotes RGB-D values, and 6 represents XYZ plus RGB values. M stands for the number of point clouds of interest. The object label, $o_i\in \mathbb{C}^O$ , designates the category of the object. The relation $r_i\in \mathbb{C}^R$ represents a subject and an object linked by a predicate class and a time period. $\mathbb{C}^O$ and $\mathbb{C}^R$ refer to the object and predicate classes, respectively. The PSG-4D task can be mathematically formulated as:
61
+
62
+ $$
63
+ \Pr (\mathbf {G} \mid \mathbf {I}) = \Pr (\mathbf {M}, \mathbf {O}, \mathbf {R} \mid \mathbf {I}), \tag {1}
64
+ $$
65
+
66
+ where $\mathbf{I}$ represents the input RGB-D video sequence or point cloud representation.
67
+
68
+ Evaluation Metrics For evaluating the performance of the PSG-4D model, we employ the R@K and mR@K metrics, traditionally used in the scene graph generation tasks. R@K calculates the triplet recall, while mR@K computes the mean recall, both considering the top K triplets from the PSG-4D model. A successful recall of a ground-truth triplet must meet the following criteria: 1) correct category labels for the subject, object, and predicate; 2) a volume Intersection over Union (vIOU) greater than 0.5 between the predicted mask tubes and the ground-truth tubes. When these criteria are satisfied, a soft recall score is recorded, representing the time vIOU between the predicted and the ground-truth time periods.
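+ A minimal sketch of the volume IoU criterion used in this recall computation is given below, assuming binary mask tubes stored as boolean arrays of identical shape; the full R@K protocol (triplet matching, ranking, and the soft temporal score) is omitted.
+
+ ```python
+ import numpy as np
+
+ def volume_iou(pred_tube, gt_tube):
+     """vIOU between two binary mask tubes of shape (T, H, W) (or any matching shape)."""
+     pred, gt = pred_tube.astype(bool), gt_tube.astype(bool)
+     inter = np.logical_and(pred, gt).sum()
+     union = np.logical_or(pred, gt).sum()
+     return inter / union if union > 0 else 0.0
+
+ # A predicted triplet counts as recalled only if the labels match and vIOU > 0.5
+ # for both the subject and the object mask tubes.
+ ```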
69
+
70
+ Table 1: Illustration of the PSG-4D dataset and related datasets. Unlike the static 3D indoor scenes usually found in 3DSG datasets, the PSG-4D dataset introduces dynamic 3D videos, each annotated with panoptic segmentation. Various 3D video datasets were evaluated as potential sources for PSG-4D, resulting in the creation of two subsets: PSG4D-GTA and PSG4D-HOI. Regarding annotations, PS represents Panoptic Segmentation, BB represents Bounding Box, SS represents Semantic Segmentation, KP represents key points, and PC represents point clouds. TPV represents third-person-view.
71
+
72
+ <table><tr><td>Dataset</td><td>Type</td><td>Scale</td><td>View</td><td>#ObjCls</td><td>#RelCls</td><td>Annotation</td><td>Year</td></tr><tr><td>3DSSG [18]</td><td>3DSG</td><td>363K RGB-D images, 1482 scans, 478 scenes,</td><td>TPV</td><td>534</td><td>40</td><td>3D model, 3D graph</td><td>2020</td></tr><tr><td>Rel3D [50]</td><td>3DSG</td><td>27K RGB-D images, 9990 3D Scenes</td><td>TPV</td><td>67</td><td>30</td><td>3D model</td><td>2020</td></tr><tr><td>ScanNet [51]</td><td>3D Images</td><td>2.5M RGB-D images, 1513 indoor scenes</td><td>TPV</td><td>20</td><td>-</td><td>SS, 3D model</td><td>2017</td></tr><tr><td>Matterport 3D [52]</td><td>3D Images</td><td>194,400 RGB-D images, 90 building-scale scenes</td><td>TPV</td><td>40</td><td>-</td><td>SS, 3D model</td><td>2017</td></tr><tr><td>Nuscenes [53]</td><td>2D Video+PC</td><td>1K videos (avg. 20s), 1.3M pointclouds</td><td>Vehicle</td><td>23</td><td>-</td><td>3D BB</td><td>2020</td></tr><tr><td>WAYMO [54]</td><td>2D Video+PC</td><td>1.2K videos (avg. 20s), 177K pointclouds</td><td>Vehicle</td><td>20</td><td>-</td><td>2D BB, 3D BB</td><td>2020</td></tr><tr><td>Sail-VOS 3D [55]</td><td>3D Video</td><td>484 videos, 238K RGB-D image, 6807 clips</td><td>egocentric</td><td>178</td><td>-</td><td>SS, 3D model</td><td>2021</td></tr><tr><td>HOI4D [56]</td><td>3D Video</td><td>4K videos, 2.4M RGB-D image, 610 indoor scenes</td><td>egocentric</td><td>16</td><td>11</td><td>PS, KP</td><td>2022</td></tr><tr><td>EgoBody [57]</td><td>3D Video</td><td>125 videos, 199K RGB-D images, 15 indoor scenes</td><td>egocentric, TPV</td><td>36</td><td>13</td><td>3D model, KP</td><td>2022</td></tr><tr><td>PSG4D-GTA</td><td>PSG4D</td><td>67 videos (avg. 84s), 28K RGB-D images, 28.3B pointclouds</td><td>TPV</td><td>35</td><td>43</td><td>PS, 4DSG</td><td>2023</td></tr><tr><td>PSG4D-HOI</td><td>PSG4D</td><td>2973 videos (avg. 20s), 891K RGB-D images, 282 indoor scenes</td><td>egocentric</td><td>46</td><td>15</td><td>PS, 4DSG</td><td>2023</td></tr></table>
73
+
74
+ # 4 The PSG-4D Dataset
75
+
76
+ This section outlines the development of the PSG-4D dataset. We begin by exploring existing datasets that inspired the creation of PSG-4D, followed by a presentation of its statistics, and finally a brief overview of the steps involved in its construction.
77
+
78
+ # 4.1 Leveraging Existing Datasets for PSG-4D
79
+
80
+ Rather than constructing the PSG-4D dataset from the ground up, we sought to evaluate whether currently available datasets could either directly support or be adapted for the PSG-4D task. As shown in Table 1, our initial exploration focused on 3D datasets, including 3D scene graph datasets like 3DSSG [18] and Rel3D [50], along with more conventional 3D datasets such as ScanNet [51] and Matterport 3D [52]. However, while these datasets can be used to reconstruct entire scenes and can generate 3D videos accordingly, the resulting scenes remain static and lack dynamic elements.
81
+
82
+ We then shifted our focus to video datasets containing 3D information. Autonomous driving datasets such as Nuscenes [53] and WAYMO [54] incorporate point cloud videos, particularly bird's-eye view footage. Nevertheless, the vehicles within these scenes are only captured in 2D video. While this technically constitutes a dynamic 4D scene, it does not align well with the objectives of this study. The dynamic relations in traffic scenarios are relatively limited, and our goal is to develop a visual understanding model for embodied AI [58, 59, 60, 61] that captures 3D scenes from the agent's perspective, not a bird's-eye view.
83
+
84
+ Another category of 3D videos uses RGB-D sequences as input, which can be easily converted into point clouds. This data format aligns perfectly with the operation of intelligent agents, mimicking human perception, which captures continuous RGB images with depth. Thankfully, recent datasets like SAIL-VOS 3D [55], HOI4D [56], and EgoBody [57] have adopted this approach. While SAIL-VOS 3D uses synthetic data from the GTA game [21], the HOI4D dataset captures egocentric RGB-D videos of simple tasks, such as tool picking. On the other hand, the EgoBody dataset [57] records office activities like conversations, but lacks segmentation annotation and is primarily intended for human pose reconstruction. Despite its wealth of videos, the object interaction in EgoBody is limited. In the medical domain, 4D-OR [60] excels in providing detailed depictions of surgical scenes, showcasing its specialized utility. To cater to a broader spectrum of research applications, we formulated the PSG-4D dataset, integrating the versatile strengths of the SAIL-VOS 3D [55] and HOI4D [56] datasets.
85
+
86
+ # 4.2 Dataset Statistics
87
+
88
+ Figure 2 presents a selection of four video frames, drawn from both the PSG4D-GTA and PSG4D-HOI datasets. Each frame is an RGB-D image with corresponding panoptic segmentation annotations. Underneath each scene, we depict the associated scene graph and statistical word clouds. Annotators constructed these scene graphs as triplets, complete with frame duration. The PSG4D-GTA dataset is
89
+
90
+ ![](images/4118cd43cb02474c9640b91f9269d824264cf4c23dafa2e7bd0b94614fac3273.jpg)
91
+ (a) PSG4D-GTA (Synthetic, Third-Person View)
92
+
93
+ ![](images/d9c23cfff3603bc197874f832c0f6d792c22b220af2d6fad93f924900368350d.jpg)
94
+ (b) PSG4D-HOI (Real-World, Egocentric)
95
+ Figure 2: The Examples and Word Clouds of PSG-4D dataset. The PSG-4D dataset contains 2 subsets, including (a) PSG4D-GTA selected from the SAIL-VOS 3D [20] dataset, and (b) PSG4D-HOI from HOI4D [22] dataset. We selected 4 frames of an example video from each subset. Each frame has aligned RGB and depth with panoptic segmentation annotation. The scene graph is annotated in the form of triplets. The word cloud for object and relation categories in each dataset is also represented.
96
+
97
+ particularly noteworthy for its composition: it contains 67 videos with an average length of 84 seconds, amounting to 27,700 RGB-D images and 28.3 billion point clouds, and comprises 35 object categories and 43 relationship categories. This synthetic dataset was captured from a third-person perspective. In contrast, the PSG4D-HOI dataset is compiled from an egocentric perspective, providing a different context for analysis. It includes 2,973 videos with an average duration of 20 seconds, equating to 891,000 RGB-D images across 282 indoor scenes. This dataset includes 46 object categories and 15 object-object relationship categories, offering a diverse range of real-world data for the study. The combination of these two datasets offers a comprehensive understanding of 4D environments due to their complementary nature. A statistical overview of both datasets can be found in the final two rows of Table 1.
98
+
99
+ # 4.3 Dataset Construction Pipeline
100
+
101
+ As outlined in Section 4.1, the PSG4D-GTA is built upon the SAIL-VOS 3D dataset, while the PSG4D-HOI is derived from the HOI4D dataset. To adapt the SAIL-VOS 3D dataset for our purpose, we commenced with a comprehensive review of all 178 GTA videos within the dataset. This stage involved a meticulous elimination process to exclude videos containing NSFW content, resulting in a refined pool of 67 videos. The SAIL-VOS 3D dataset, which is equipped with 3D instance segmentation, required additional annotation for background elements to integrate panoptic segmentation. Leveraging the PVSG annotation pipeline, we employed an event detection method [62] to isolate the key frames. The background elements within these key frames were subsequently annotated using the pre-annotation provided by the SAM model [63]. Upon completion of key frame annotations, the AOT method [64] was utilized to propagate the segmentation across the entire video sequence. The final step involved overlaying the instance segmentation on the stuff segmentation, thereby completing the process. The HOI-4D dataset, devoid of NSFW content, already provides a 4D panoptic segmentation. Consequently, we included all videos from the HOI-4D dataset in the PSG4D-HOI dataset without further modifications.
102
+
103
+ Upon completion of 4D panoptic segmentation annotation, we proceed to annotate the dynamic scene graph according to the masks. Although HOI4D includes action annotations concerning the person, it does not account for interactions between objects. Nevertheless, certain actions such as "pick up" are appropriately treated as predicates, and we automatically locate the key object in the video to form a subject-verb-object triplet. Once the automatically annotated dataset is prepared, we ask annotators to review and revise the pre-annotations to ensure accuracy. As SAIL-VOS 3D lacks any relational annotation, we perform its scene graph annotation from scratch. The entire annotation process is diligently executed by the authors around the clock.
104
+
105
+ # 5 Methodology
106
+
107
+ This section details a unified pipeline, PSG4DFormer, for addressing the PSG-4D problem. As shown in Figure 3, our approach comprises two stages. The initial 4D panoptic segmentation stage aims to segment all 4D entities, including objects and background elements in Figure 3 (a), with the accurate
108
+
109
+ ![](images/d019444ed0209112cdb67b2496d9f8175f6ac2b05ebf3ac019ed4fe58d4b0c89.jpg)
110
+ (a) Frame-Level Panoptic Segmentation
111
+ (b) Tracking
112
+ (c) Inference $(\uparrow)$ and Training $(\downarrow)$ of Relation Model
113
+ Figure 3: Illustration of the PSG4DFormer pipeline. This unified pipeline supports both RGB-D and point cloud video inputs and is composed of two main components: 4D panoptic segmentation modeling and relation modeling. The first stage seeks to obtain the 4D panoptic segmentation mask for each object, along with its corresponding feature tube spanning the video length. This is accomplished with the aid of (a) frame-level panoptic segmentation and (b) a tracking model. The subsequent stage (c) employs a spatial-temporal transformer to predict pairwise relations based on all feature tubes derived from the first stage.
114
+
115
+ temporal association in Figure 3 (b). We extract features for each object and obtain feature tubes according to tracking results for subsequent relation modeling in Figure 3 (c).
116
+
117
+ # 5.1 4D Panoptic Segmentation Modeling
118
+
119
+ As specified in Section 3, given a 3D video clip input, such as an RGB-D sequence $\mathbf{I} \in \mathbb{R}^{T \times H \times W \times 4}$ or a point cloud sequence $\mathbf{I} \in \mathbb{R}^{T \times M \times 6}$, the goal of the initial stage is to segment and track every pixel, with no overlap between entities. For each clip, the model predicts a set of outputs $(\mathbf{m}_i, \mathbf{q}_i, p_i)_{i=1}^N$, where $\mathbf{m}_i$ denotes the tracked object mask tube, $\mathbf{q}_i$ denotes the tracked feature tube, and $p_i$ represents the probability of the object belonging to each category. $N$ is the number of entities, encompassing both thing and stuff classes.
120
+
121
+ Frame-Level Panoptic Segmentation with RGB-D Sequence Given the dual input of RGB and depth images, we adopt a separation-and-aggregation gate (SA-Gate) [65] to efficiently blend information from both modalities. This combined feature set, enriched with data from both inputs, is then fed into a robust Mask2Former [4] for frame-level panoptic segmentation. In the inference stage, at the frame $t$ , given an RGB-D image $\mathbf{I}$ , the Mask2Former with SA-Gate directly outputs a set of object query features $q_{i}^{t} \in \mathbb{R}^{d}, i = 1,\dots ,N$ , each $q_{i}^{t}$ representing one entity at the frame $t$ .
122
+
123
+ Frame-Level Panoptic Segmentation with Point Cloud Sequence Apart from perceiving point cloud sequences directly, 3D point cloud coordinates can also be computed from RGB-D data. This conversion involves computing the Normalized Device Coordinates (NDC) using the depth map and projecting the NDC to world coordinates using the provided transformation matrix. We retain only points with a depth below a defined threshold $\lambda$, discarding distant, less relevant elements like far-off mountains. To leverage texture information from the image, point cloud coordinates can be augmented with the corresponding RGB values, creating a colored point cloud representation $\mathbf{P} \in \mathbb{R}^{M \times 6}$, where $M$ is the total number of points in a frame.
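+ A minimal sketch of this RGB-D-to-point-cloud conversion is given below, assuming a simple pinhole camera model with known intrinsics (fx, fy, cx, cy); the dataset-specific NDC/world-coordinate transformation is not reproduced here.
+
+ ```python
+ import numpy as np
+
+ def rgbd_to_pointcloud(rgb, depth, fx, fy, cx, cy, max_depth=20.0):
+     """rgb: (H, W, 3), depth: (H, W) in meters. Returns (M, 6) XYZ+RGB points with depth < max_depth."""
+     h, w = depth.shape
+     u, v = np.meshgrid(np.arange(w), np.arange(h))
+     keep = (depth > 0) & (depth < max_depth)                 # drop invalid and far-away points
+     z = depth[keep]
+     x = (u[keep] - cx) * z / fx                              # back-project with pinhole intrinsics
+     y = (v[keep] - cy) * z / fy
+     return np.concatenate([np.stack([x, y, z], axis=1), rgb[keep]], axis=1)
+ ```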
124
+
125
+ We employ DKNet [66], a state-of-the-art indoor segmentation method, as our point cloud segmentation network. It processes input point clouds with a 3D UNet-like [67] backbone and uses sparse convolutions [68] for feature extraction. DKNet localizes instance centroids based on a candidate mining branch and encodes each instance's information into an instance kernel $k_{i} \in \mathbb{R}^{d}$ . These instance kernels $\{k_{i}\}_{i=1}^{N}$ are used as the weights of a few convolution layers to obtain the final instance masks.
126
+
127
+ Tracking After frame-level panoptic segmentation, we link the per-frame predictions using UniTrack [69] to obtain the final tracked video tubes for each clip, for either input modality. Specifically, instead of incorporating an additional appearance model for tracking embedding extraction, we directly utilize the instance kernels $\{k_i\}_{i=1}^N$ from the DKNet segmentation step, or the object query features $\{q_i\}_{i=1}^N$ from Mask2Former, as the tracking embeddings for association. We find that the instance kernels are sufficiently distinctive for tracking, even when different objects belong to the same semantic class. This is primarily because each instance kernel is designed to maximize the response for a specific instance while suppressing the responses of all other instances, including those of the same semantic class. For a video sequence of length $T$, the obtained 4D feature tubes are denoted as $Q_i = \{q_i^t\}_{t=1}^T$.
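+
+ A simplified view of the association step is sketched below: the instance kernels (or query features) of adjacent frames are compared by cosine similarity and matched with the Hungarian algorithm. UniTrack's full association logic is more involved, so treat this as a sketch under those simplifying assumptions.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def associate_frames(prev_embs, curr_embs, sim_threshold=0.5):
+     """Match instances across two adjacent frames by embedding similarity.
+
+     prev_embs / curr_embs: (N_prev, d) and (N_curr, d) instance kernels or query features,
+     used directly as tracking embeddings (no extra appearance model).
+     Returns a list of (prev_idx, curr_idx) matches.
+     """
+     a = prev_embs / (np.linalg.norm(prev_embs, axis=1, keepdims=True) + 1e-8)
+     b = curr_embs / (np.linalg.norm(curr_embs, axis=1, keepdims=True) + 1e-8)
+     sim = a @ b.T                               # (N_prev, N_curr) cosine similarities
+     row, col = linear_sum_assignment(-sim)      # Hungarian matching, maximizing similarity
+     return [(r, c) for r, c in zip(row, col) if sim[r, c] >= sim_threshold]
+ ```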
128
+
129
+ # 5.2 Relation Modeling: 4D Scene Graph Generation
130
+
131
+ The object query tubes $Q_{i}$ and mask tubes $\mathbf{m}_i$ form a bridge between the first and second stages. These feature tubes first pass through a spatial-temporal transformer encoder, which augments them with global spatial context from the whole scene and temporal context across the clip.
132
+
133
+ Spatial-Temporal Transformer Encoder To infuse the feature tubes with temporal information and characteristics of other objects in the scene, we draw inspiration from the Spatial-Temporal Transformer [70]. A spatial encoder is applied first: for all objects co-occurring at time $t$, a two-layer transformer encoder attends over the object features of that frame, updating the feature tubes to $\{\tilde{q}_i^t\}_{i=1}^N$. Subsequently, a temporal transformer encoder updates each object feature tube along the temporal dimension $T$. By leveraging both the spatial and the temporal encoder, we obtain the final feature tubes $\{\hat{q}_i^t\}_{i=1}^N$, ready for relation training.
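+
+ A minimal PyTorch sketch of this two-stage encoding is given below; positional and temporal embeddings are omitted for brevity, and the layer sizes are placeholders rather than the exact configuration.
+
+ ```python
+ import torch.nn as nn
+
+ class SpatialTemporalEncoder(nn.Module):
+     """Attend over co-occurring objects per frame, then over time per object."""
+     def __init__(self, d_model=256, nhead=8, num_layers=2):
+         super().__init__()
+         spatial_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
+         temporal_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
+         self.spatial = nn.TransformerEncoder(spatial_layer, num_layers)    # mixes objects within a frame
+         self.temporal = nn.TransformerEncoder(temporal_layer, num_layers)  # mixes frames within a tube
+
+     def forward(self, tubes):
+         # tubes: (N, T, d) feature tubes for N tracked entities over T frames
+         x = self.spatial(tubes.transpose(0, 1))   # (T, N, d): per-frame attention over objects
+         x = x.transpose(0, 1)                     # back to (N, T, d)
+         return self.temporal(x)                   # per-object attention along the temporal axis
+ ```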
134
+
135
+ Relation Classification Training To train the relation model on the updated query tubes, a relation training set must be constructed. Note that the relation annotation is given in the form of "object-1 relation object-2", with the mask tubes of both objects annotated. We first associate the updated query tubes with ground-truth objects: for each ground-truth tube, we find the best-matching query tube by computing the video Intersection over Union (vIoU) between the predicted and ground-truth mask tubes, and assign that query feature tube to the respective object. A frame-level predicate classification is then performed with a lightweight fully-connected layer. At inference, the relation classification component simply computes the relation probability between pairs of $\hat{q}_i^t$ and $\hat{q}_j^t$.
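+
+ The matching and classification steps can be sketched as below: vIoU accumulates mask overlap over the whole clip, each ground-truth tube is assigned the predicted query tube with the highest vIoU, and a small relation head then scores predicates frame by frame. The helper names are illustrative, not the codebase's actual functions.
+
+ ```python
+ import numpy as np
+
+ def video_iou(tube_a, tube_b):
+     """vIoU between two binary mask tubes of shape (T, H, W), accumulated over the clip."""
+     inter = np.logical_and(tube_a, tube_b).sum()
+     union = np.logical_or(tube_a, tube_b).sum()
+     return inter / union if union > 0 else 0.0
+
+ def assign_queries_to_gt(pred_mask_tubes, gt_mask_tubes):
+     """For each ground-truth tube, pick the predicted query tube with the highest vIoU."""
+     return [int(np.argmax([video_iou(gt, pred) for pred in pred_mask_tubes]))
+             for gt in gt_mask_tubes]
+
+ # A lightweight relation head (here assumed to be an MLP) then scores predicates per frame:
+ #   rel_logits_t = relation_mlp(torch.cat([q_hat_i_t, q_hat_j_t], dim=-1))
+ ```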
136
+
137
+ # 6 Experiments
138
+
139
+ Table 2 presents the results of experiments conducted on the PSG-4D dataset. For RGB-D sequences, an ImageNet-pretrained ResNet-101 serves as both the RGB and depth encoder. We set the training duration to 12 epochs. The DKNet, trained from scratch, requires a longer training period of 200 epochs. In the second stage, both spatial and temporal transformer encoders span two layers, and training continues for an additional 100 epochs. Besides the standard PSG4DFormer, we also examine variants with the temporal encoder removed (denoted as “/t”) and the depth branch removed (denoted as “/d”). As a baseline, we use the 3DSGG model [18], which employs a GNN model to encode frame-level object and relation information, without considering temporal data.
140
+
141
+ RGB-D vs. Point Cloud Input Table 2 is divided into two sections. The upper part (#1-#3) reports results for point cloud input, while the lower part (#4-#7) reports results for the RGB-D sequence. The RGB-D sequence generally yields better results than the point cloud sequence, particularly on the PSG4D-GTA dataset. This could be attributed to the ResNet-101 backbone used for the RGB-D data, which, being pretrained on ImageNet, performs robustly on complex datasets like PSG4D-GTA. Meanwhile, the PSG4D-HOI dataset offers a more consistent scenario with abundant training data, narrowing the performance gap between the point cloud and RGB-D methods.
142
+
143
+ Significance of Depth The results in Table 2 also allow us to evaluate the importance of depth in the RGB-D method. Specifically, we designed a variant of PSG4DFormer (marked as “/d”) that
144
+
145
+ Table 2: Main Results on PSG4D. Experimental results are reported on both the PSG4D-GTA and PSG4D-HOI datasets. In addition to comparing with traditional 3DSGG methods, we conduct experiments to compare the PSG4DFormer and its variants. This includes a version with the temporal encoder removed (denoted as “/t”) and one with the depth branch removed (denoted as “/d”).
146
+
147
+ <table><tr><td rowspan="2">Input Type</td><td rowspan="2">Method</td><td colspan="3">PSG4D-GTA</td><td colspan="3">PSG4D-HOI</td></tr><tr><td>R/mR@20</td><td>R/mR@50</td><td>R/mR@100</td><td>R/mR@20</td><td>R/mR@50</td><td>R/mR@100</td></tr><tr><td rowspan="3">Point Cloud Sequence</td><td>#1 3DSGG [18]</td><td>1.48 / 0.73</td><td>2.16 / 0.79</td><td>2.92 / 0.85</td><td>3.46 / 2.19</td><td>3.15 / 2.47</td><td>4.96 / 2.84</td></tr><tr><td>#2 PSG4DFormer/t</td><td>2.25 / 1.03</td><td>2.67 / 1.72</td><td>3.14 / 2.05</td><td>3.26 / 2.04</td><td>3.16 / 2.35</td><td>4.18 / 2.64</td></tr><tr><td>#3 PSG4DFormer</td><td>4.33 / 2.10</td><td>4.83 / 2.93</td><td>5.22 / 3.13</td><td>5.36 / 3.10</td><td>5.61 / 3.95</td><td>6.76 / 4.17</td></tr><tr><td rowspan="4">RGB-D Sequence</td><td>#4 3DSGG [18]</td><td>2.29 / 0.92</td><td>2.46 / 1.01</td><td>3.81 / 1.45</td><td>4.23 / 2.19</td><td>4.47 / 2.31</td><td>4.86 / 2.41</td></tr><tr><td>#5 PSG4DFormer/t</td><td>4.43 / 1.34</td><td>4.89 / 2.42</td><td>5.26 / 2.83</td><td>4.44 / 2.37</td><td>4.83 / 2.43</td><td>5.21 / 2.84</td></tr><tr><td>#6 PSG4DFormer/d</td><td>4.40 / 1.42</td><td>4.91 / 1.93</td><td>5.49 / 2.27</td><td>5.49 / 3.42</td><td>5.97 / 3.92</td><td>6.43 / 4.21</td></tr><tr><td>#7 PSG4DFormer</td><td>6.68 / 3.31</td><td>7.17 / 3.85</td><td>7.22 / 4.02</td><td>5.62 / 3.65</td><td>6.16 / 4.16</td><td>6.28 / 4.97</td></tr></table>
148
+
149
+ does not utilize the depth branch. In other words, both the depth encoder and the SA-Gate are removed, reducing the pipeline to a standard video scene graph generation pipeline. This variant performs worse than the full model, which highlights the significance of depth information for the scene graph generation task.
150
+
151
+ Necessity of Temporal Attention Table 2 includes two methods that do not utilize temporal attention. Specifically, the 3DSGG baseline learns interactions between static object features using a graph convolutional network, while PSG4DFormer/t removes the temporal transformer encoder. The results demonstrate that ignoring the temporal component leads to sub-optimal outcomes, emphasizing the importance of temporal attention in 4D scene graph generation.
152
+
153
+ # 7 Real-World Application
154
+
155
+ This section illustrates the deployment of the PSG-4D model in a real-world application: a service robot. Moving beyond benchmark evaluation, we describe the practical integration and execution of the pipeline on real hardware. As shown in Figure 4, the focus is to demonstrate how the robot leverages the PSG-4D model (pretrained on PSG4D-HOI, RGB-D input) to interpret and respond to its surroundings effectively.
156
+
157
+ Interaction with Large Language Models The recent advancements in large language models (LLMs) have demonstrated exceptional capabilities in reasoning and planning [71]. LLMs have been used as planners in numerous recent studies to bridge different modalities, paving the way for more intuitive and efficient human-machine interaction [72]. In this work, we employ GPT-4 [71] as the primary planner. Since GPT-4 is designed to follow human instructions, the robot communicates with it by translating the raw scene graph representation into comprehensible human language. The interaction begins with the prompt, "I am a service robot. For every 30 seconds, I will give you what I have seen in the last 30 seconds. Please suggest me what I could serve." Subsequently, every 30 seconds, the robot engages with GPT-4, providing an update: "In the past 30s, what I captured is: <from start_time to end_time, object-1 relation object-2>, <...>, <...>." This enables GPT-4 to analyze the situation and provide appropriate feedback.
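+
+ One way the periodic textual update could be assembled from PSG-4D triplets is sketched below; the function and argument names are illustrative, and the actual call to the GPT-4 API is omitted.
+
+ ```python
+ def build_llm_messages(triplets):
+     """Format the last 30 s of PSG-4D output as the update sent to the LLM planner.
+
+     triplets: list of (start_time, end_time, subject, predicate, object) tuples
+     """
+     system = ("I am a service robot. For every 30 seconds, I will give you what I have seen "
+               "in the last 30 seconds. Please suggest me what I could serve.")
+     body = ", ".join(f"<from {s} to {e}, {subj} {pred} {obj}>"
+                      for s, e, subj, pred, obj in triplets)
+     user = f"In the past 30s, what I captured is: {body}."
+     return [{"role": "system", "content": system}, {"role": "user", "content": user}]
+ ```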
158
+
159
+ Post-Processing for Execution The effective deployment of the PSG-4D model necessitates a robust set of predefined actions that the robot can execute. Currently, the action list includes tasks such as picking up litter and engaging in conversation with individuals. After GPT-4 provides its suggestions, it is further prompted to select a suitable action from this predefined list for the robot to execute. However, the flexibility of this system allows for the expansion of this action list, paving the way for more complex and varied tasks in the future. To encourage community involvement and the development of fascinating applications, we also release the robot deployment module alongside the PSG4D codebase. The demo robot is priced at approximately $1.2K and comes equipped with an RGB-D sensor, microphone, speakers, and a robotic arm.
160
+
161
+ ![](images/896fe11c1fa6c301a950b1c8df80b04f956db607c2962ec51eaa5d077755587f.jpg)
162
+ (a) The RGB-D sequence that is captured by the robot.
163
+
164
+ ![](images/2e9172670129e3cc6802c90015b923d2bad3af57ddebcea9db4c80c6aaf93e99.jpg)
165
+ (b) PSG-4D Parsing
166
+
167
+ ![](images/565cbce4886b4895b84ac75fc29b7d6c99582549a443e75dd2fef3464f433d1c.jpg)
168
+ (c) Reasoning & Planning
169
+
170
+ ![](images/3918f45ce857b31c060377a8f97ba7e2eb20d7634fa28d911c269fd9aefe18fb.jpg)
171
+ Figure 4: Demonstration of a Robot Deployed with the PSG-4D Model. The service robot interprets the RGB-D sequence shown in (a), where a man is seen drinking coffee and subsequently dropping the empty bottle on the ground. The robot processes this sequence, translating it into a 4D scene graph depicted in (b). This graph comprises a set of temporally stamped triplets, with each object associated with a panoptic mask, accurately grounding it in 3D space. The robot regularly updates its PSG4D to GPT-4, awaiting feedback and instructions. In this scenario, GPT-4 advises the robot to clean up the discarded bottle and remind the man about his action. This directive is translated into robot action, as visualized in (d).
172
+
173
+ ![](images/e61a9cc3afbfc89114aae4c8a9d1fa4255bf408c560e6ec6b64d2aa8edbe0b93.jpg)
174
+ (d) Robot Reaction
175
+
176
+ # 8 Conclusion, Challenges, and Outlook
177
+
178
+ This paper presents a novel and demanding extension of traditional scene graph generation: 4D Panoptic Scene Graph Generation, which incorporates the spatio-temporal domain into the framework. We introduce a comprehensive framework, the PSG4DFormer, capable of processing both RGB-D and point cloud sequences. The successful deployment of this pipeline in a practical service robot scenario underscores its potential in real-world applications. However, these achievements also highlight the nascent state of this field, emphasizing the necessity for continued advancements to fully exploit the potential of 4D Panoptic Scene Graph Generation.
179
+
180
+ Challenges Despite encouraging results, we have also revealed several persistent challenges in the realm of 4D Panoptic Scene Graph Generation. Through our demonstration, we found that current models, whether derived from PSG4D-GTA or PSG4D-HOI, can handle only simple scenes and falter when faced with more complex real-world environments. Notably, there exist robust models trained in the 2D world. Finding effective and efficient strategies to adapt these models to the 4D domain presents a compelling direction for future exploration.
181
+
182
+ Outlook Future work in this field presents several intriguing trajectories. There is a pressing need for more efficient algorithms for 4D Panoptic Scene Graph Generation that can handle larger and more diverse environments. Equally important is the creation of comprehensive and diverse datasets that would allow more rigorous evaluation and foster advancements in model development. Particularly noteworthy is a recent Digital Twin dataset [73], which promises a high level of accuracy and photorealism, aligning seamlessly with the objectives of PSG4D. This dataset will be incorporated as the third subset of the PSG4D dataset, readily accessible from our codebase. In addition to robotics, as demonstrated by the practical application of PSG4DFormer, we are also exploring its potential as an autonomous player in the GTA game. Our recent work Octopus [58] strives to complete GTA missions by employing a vision-language programmer to generate executable action code. Whereas that setting is largely passive task completion, the application in this paper actively perceives and understands its environment, showcasing a shift towards autonomy in robotics. Furthermore, Octopus [58] utilizes a 4D scene graph structure to capture environmental information during training of the vision-language programmer, exemplifying a practical application of the PSG4D modality.
183
+
184
+ We eagerly anticipate the future progress in the field of 4D Panoptic Scene Graph Generation and its potential to revolutionize our understanding of real-world dynamics.
185
+
186
+ Potential Negative Societal Impacts This work releases a dataset containing human behaviors, which may carry gender and social biases inherent in the data. Potential users are encouraged to consider the risk of overlooking ethical issues in imbalanced data, especially for underrepresented minority classes. All NSFW content has been removed from the dataset.
187
+
188
+ # Acknowledgement
189
+
190
+ This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, the National Key R&D Program of China under grant number 2022ZD0161501, as well as cash and in-kind contribution from the industry partner(s).
191
+
192
+ # References
193
+
194
+ [1] Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang, Song-Chun Zhu, and Siyuan Huang. Sqa3d: Situated question answering in 3d scenes. arXiv preprint arXiv:2210.07474, 2022. 2
195
+ [2] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. 2
196
+ [3] Sonia Raychaudhuri, Tommaso Campari, Unnat Jain, Manolis Savva, and Angel X Chang. Reduce, reuse, recycle: Modular multi-object navigation. arXiv preprint arXiv:2304.03696, 2023. 2
197
+ [4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022. 2, 6
198
+ [5] Xiangtai Li, Haobo Yuan, Wenwei Zhang, Guangliang Cheng, Jiangmiao Pang, and Chen Change Loy. Tube-link: A flexible cross tube baseline for universal video segmentation. In ICCV, 2023. 2
199
+ [6] Xiangtai Li, Henghui Ding, Wenwei Zhang, Haobo Yuan, Guangliang Cheng, Pang Jiangmiao, Kai Chen, Ziwei Liu, and Chen Change Loy. Transformer-based visual segmentation: A survey. arXiv pre-print, 2023. 2
200
+ [7] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In CVPR, 2017. 2, 3
201
+ [8] Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, and Wei Liu. Learning to compose dynamic tree structures for visual contexts. In CVPR, 2019. 2, 3
202
+ [9] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. In CVPR, 2018. 2, 3
203
+ [10] Mohammed Suhail, Abhay Mittal, Behjat Siddiquie, Chris Broaddus, Jayan Eledath, Gerard Medioni, and Leonid Sigal. Energy-based learning for scene graph generation. In CVPR, 2021. 2, 3
204
+ [11] Yiwu Zhong, Jing Shi, Jianwei Yang, Chenliang Xu, and Yin Li. Learning to generate scene graph from natural language supervision. In ICCV, 2021. 2, 3
205
+ [12] Jingkang Yang, Yi Zhe Ang, Zujin Guo, Kaiyang Zhou, Wayne Zhang, and Ziwei Liu. Panoptic scene graph generation. In European Conference on Computer Vision, pages 178-196. Springer, 2022. 2, 3
206
+ [13] Jingkang Yang, Wenxuan Peng, Xiangtai Li, Zujin Guo, Liangyu Chen, Bo Li, Zheng Ma, Kaiyang Zhou, Wayne Zhang, Chen Change Loy, and Ziwei Liu. Panoptic video scene graph generation. In CVPR, 2023. 2, 3
207
+ [14] Xindi Shang, Tongwei Ren, Jingfan Guo, Hanwang Zhang, and Tat-Seng Chua. Video visual relation detection. In ACM MM, 2017. 2, 3
208
+ [15] Xindi Shang, Donglin Di, Junbin Xiao, Yu Cao, Xun Yang, and Tat-Seng Chua. Annotating objects and relations in user-generated videos. In ICMR, 2019. 2, 3
209
+ [16] Matthew Fisher, Manolis Savva, and Pat Hanrahan. Characterizing structural relationships in scenes using graph kernels. In ACM SIGGRAPH 2011 papers, pages 1-12. 2011. 2, 3
210
+
211
+ [17] Robert F Tobler. Separating semantics from rendering: a scene graph based architecture for graphics applications. The Visual Computer, 27(6-8):687-695, 2011. 2, 3
212
+ [18] Johanna Wald, Helisa Dhamo, Nassir Navab, and Federico Tombari. Learning 3d semantic scene graphs from 3d indoor reconstructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3961-3970, 2020. 2, 3, 4, 7, 8
213
+ [19] Shoulong Zhang, Aimin Hao, Hong Qin, et al. Knowledge-inspired 3d scene graph prediction in point cloud. Advances in Neural Information Processing Systems, 34:18620-18632, 2021. 2, 3
214
+ [20] Y.-T. Hu, J. Wang, R. A. Yeh, and A. G. Schwing. SAIL-VOS 3D: A Synthetic Dataset and Baselines for Object Detection and 3D Mesh Reconstruction from Video Data. In Proc. CVPR, 2021. 2, 5
215
+ [21] Grand theft auto v, 2014. 2, 4
216
+ [22] Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21013-21022, June 2022. 2, 5
217
+ [23] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017. 3
218
+ [24] Rongjie Li, Songyang Zhang, and Xuming He. Sgtr: End-to-end scene graph generation with transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19486-19496, 2022. 3
219
+ [25] Yuren Cong, Michael Ying Yang, and Bodo Rosenhahn. Reltr: Relation transformer for scene graph generation. arXiv preprint arXiv:2201.11460, 2022. 3
220
+ [26] Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Carlos Niebles. Action genome: Actions as compositions of spatio-temporal scene graphs. In CVPR, 2020. 3
221
+ [27] Jinghao Wang, Zhengyu Wen, Xiangtai Li, Zujin Guo, Jingkang Yang, and Ziwei Liu. Pair then relation: Pair-net for panoptic scene graph generation. arXiv preprint arXiv:2307.08699, 2023. 3
222
+ [28] Chengyang Zhao, Yikang Shen, Zhenfang Chen, Mingyu Ding, and Chuang Gan. Textpsg: Panoptic scene graph generation from textual descriptions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2839-2850, 2023. 3
223
+ [29] Zijian Zhou, Miaojing Shi, and Holger Caesar. Hilo: Exploiting high low frequency relations for unbiased panoptic scene graph generation. arXiv preprint arXiv:2303.15994, 2023. 3
224
+ [30] Julian Lorenz, Florian Barthel, Daniel Kienzle, and Rainer Lienhart. Haystack: A panoptic scene graph dataset to evaluate rare predicate classes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 62–70, 2023. 3
225
+ [31] Jingkang Yang, Zheng Ma, Qixun Wang, Xiaofeng Guo, Haofan Wang, Ziwei Liu, Wayne Zhang, Xing Xu, and Hai Zhang. The psg challenge: towards comprehensive scene understanding. National Science Review, 10(6):nwad126, 2023. 3
226
+ [32] Xiangtai Li, Wenwei Zhang, Jiangmiao Pang, Kai Chen, Guangliang Cheng, Yunhai Tong, and Chen Change Loy. Video k-net: A simple, strong, and unified baseline for video segmentation. In CVPR, 2022. 3
227
+ [33] Jaewon Bae, Dongmin Shin, Kangbeen Ko, Juchan Lee, and Ue-Hwan Kim. A survey on 3d scene graphs: Definition, generation and application. In Robot Intelligence Technology and Applications 7: Results from the 10th International Conference on Robot Intelligence Technology and Applications, pages 136-147. Springer, 2023. 3
228
+ [34] Ue-Hwan Kim, Jin-Man Park, Taek-Jin Song, and Jong-Hwan Kim. 3-d scene graph: A sparse and semantic representation of physical environments for intelligent agents. IEEE transactions on cybernetics, 50(12):4921-4933, 2019. 3
229
+ [35] Iro Armeni, Zhi-Yang He, JunYoung Gwak, Amir R Zamir, Martin Fischer, Jitendra Malik, and Silvio Savarese. 3d scene graph: A structure for unified semantics, 3d space, and camera. In ICCV, 2019. 3
230
+
231
+ [36] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 3
232
+ [37] Antoni Rosinol, Andrew Violette, Marcus Abate, Nathan Hughes, Yun Chang, Jingnan Shi, Arjun Gupta, and Luca Carlone. Kimera: From slam to spatial perception with 3d dynamic scene graphs. The International Journal of Robotics Research, 40(12-14):1510-1546, 2021. 3
233
+ [38] Shun-Cheng Wu, Johanna Wald, Keisuke Tateno, Nassir Navab, and Federico Tombari. Scenegraphfusion: Incremental 3d scene graph prediction from rgb-d sequences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7515-7525, 2021. 3
234
+ [39] Hema Koppula and Ashutosh Saxena. Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation. In International conference on machine learning, pages 792-800. PMLR, 2013. 3
235
+ [40] Qian Xie, Oussama Remil, Yanwen Guo, Meng Wang, Mingqiang Wei, and Jun Wang. Object detection and tracking under occlusion for object-level rgb-d video segmentation. IEEE Transactions on Multimedia, 20(3):580-592, 2017. 3
236
+ [41] Guyue Zhang, Jun Liu, Hengduo Li, Yan Qiu Chen, and Larry S Davis. Joint human detection and head pose estimation via multistream networks for rgb-d videos. IEEE Signal Processing Letters, 24(11):1666-1670, 2017. 3
237
+ [42] David Weikersdorfer, Alexander Schick, and Daniel Cremers. Depth-adaptive supervoxels for rgb-d video segmentation. In 2013 IEEE International Conference on Image Processing, pages 2708-2712. IEEE, 2013. 3
238
+ [43] Huazhu Fu, Dong Xu, and Stephen Lin. Object-based multiple foreground segmentation in rgbd video. IEEE Transactions on Image Processing, 26(3):1418-1427, 2017. 3
239
+ [44] Steven Hickson, Stan Birchfield, Irfan Essa, and Henrik Christensen. Efficient hierarchical graph-based segmentation of rgbd videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 344-351, 2014. 3
240
+ [45] Numair Khan, Qian Zhang, Lucas Kasser, Henry Stone, Min H Kim, and James Tompkin. View-consistent 4d light field superpixel segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7811-7819, 2019. 3
241
+ [46] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3075-3084, 2019. 3
242
+ [47] Wenwei Zhang, Hui Zhou, Shuyang Sun, Zhe Wang, Jianping Shi, and Chen Change Loy. Robust multimodality multi-object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2365-2374, 2019. 3
243
+ [48] Xinshuo Weng, Yongxin Wang, Yunze Man, and Kris M Kitani. Gnn3dmot: Graph neural network for 3d multi-object tracking with 2d-3d multi-feature learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6499–6508, 2020. 3
244
+ [49] Xinshuo Weng, Jianren Wang, David Held, and Kris Kitani. 3d multi-object tracking: A baseline and new evaluation metrics. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10359-10366. IEEE, 2020. 3
245
+ [50] Ankit Goyal, Kaiyu Yang, Dawei Yang, and Jia Deng. Rel3d: A minimally contrastive benchmark for grounding spatial relations in 3d. Advances in Neural Information Processing Systems, 33:10514-10525, 2020. 4
246
+ [51] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828–5839, 2017. 4
247
+ [52] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. 4
248
+ [53] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 4
249
+
250
+ [54] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2446-2454, 2020. 4
251
+ [55] Yuan-Ting Hu, Jiahong Wang, Raymond A Yeh, and Alexander G Schwing. Sail-vos 3d: A synthetic dataset and baselines for object detection and 3d mesh reconstruction from video data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1418-1428, 2021. 4
252
+ [56] Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21013-21022, 2022. 4
253
+ [57] Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo, and Siyu Tang. Egobody: Human body shape and motion of interacting people from head-mounted devices. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part VI, pages 180-200. Springer, 2022. 4
254
+ [58] Jingkang Yang, Yuhao Dong, Shuai Liu, Bo Li, Ziyue Wang, Chencheng Jiang, Haoran Tan, Jiamu Kang, Yuanhan Zhang, Kaiyang Zhou, et al. Octopus: Embodied vision-language programmer from environmental feedback. arXiv preprint arXiv:2310.08588, 2023. 4, 9
255
+ [59] Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, and Niko Suenderhauf. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135, 2023. 4
256
+ [60] Ege Özsoy, Evin Pinar Örnek, Ulrich Eck, Tobias Czempiel, Federico Tombari, and Nassir Navab. 4d-or: Semantic scene graphs for or domain modeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 475-485. Springer, 2022. 4
257
+ [61] Saeid Amiri, Kishan Chandan, and Shiqi Zhang. Reasoning with scene graphs for robot planning under partial observability. IEEE Robotics and Automation Letters, 7(2):5560-5567, 2022. 4
258
+ [62] Kiyotaka Otsuji and Yoshinobu Tonomura. Projection detecting filter for video cut detection. In Proceedings of the first ACM international conference on Multimedia, pages 251-257, 1993. 5
259
+ [63] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. 5
260
+ [64] Zongxin Yang, Yunchao Wei, and Yi Yang. Associating objects with transformers for video object segmentation. In NeurIPS, 2021. 5
261
+ [65] Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng. Bi-directional cross-modality feature propagation with separation-and-aggregation gate for rgb-d semantic segmentation. In European Conference on Computer Vision (ECCV), 2020. 6
262
+ [66] Yizheng Wu, Min Shi, Shuaiyuan Du, Hao Lu, Zhiguo Cao, and Weicai Zhong. 3d instances as 1d kernels. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIX, pages 235-252. Springer, 2022. 6
263
+ [67] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234-241. Springer, 2015. 6
264
+ [68] Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9224-9232, 2018. 6
265
+ [69] Zhongdao Wang, Hengshuang Zhao, Ya-Li Li, Shengjin Wang, Philip HS Torr, and Luca Bertinetto. Do different tracking tasks require different appearance models? NeurIPS, 2021. 7
266
+ [70] Yuren Cong, Wentong Liao, Hanno Ackermann, Bodo Rosenhahn, and Michael Ying Yang. Spatial-temporal transformer for dynamic scene graph generation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 16372-16382, 2021. 7
267
+ [71] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. 8
268
+
269
+ [72] Wenlong Huang, P. Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. ArXiv, abs/2201.07207, 2022. 8
270
+ [73] Xiaqing Pan, Nicholas Charron, Yongqian Yang, Scott Peters, Thomas Whelan, Chen Kong, Omkar Parkhi, Richard Newcombe, and Yuheng Carl Ren. Aria digital twin: A new benchmark dataset for egocentric 3d machine perception. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20133-20143, 2023. 9
4dpanopticscenegraphgeneration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a4deccd865490152f2e604aeeb9241368f17748320c0ef278311018ede6d929
3
+ size 492238
4dpanopticscenegraphgeneration/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9cfe7ad2225663f05156da77745bf25fc40a33419a677c452f4eafbc95e1fcc3
3
+ size 364142
4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e46757ffada37b6d3381d065eed8c1a85d3dfa622ca015a1edb62eee7e308a6
3
+ size 251997
4mmassivelymultimodalmaskedmodeling/68ab2f1a-8284-4952-a6a6-7a2436cf1706_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eea6f181248927c9527bf7efa2ecbf77b325ce9cfa50b7d00d8d243ab3c4a2fe
3
+ size 308409